BPELPower—A BPEL execution engine for geospatial web services
NASA Astrophysics Data System (ADS)
Yu, Genong (Eugene); Zhao, Peisheng; Di, Liping; Chen, Aijun; Deng, Meixia; Bai, Yuqi
2012-10-01
The Business Process Execution Language (BPEL) has become a popular choice for orchestrating and executing workflows in the Web environment. As one special kind of scientific workflow, geospatial Web processing workflows are data-intensive, deal with complex structures in data and geographic features, and execute automatically with limited human intervention. To enable the proper execution and coordination of geospatial workflows, a specially enhanced BPEL execution engine is required. BPELPower was designed, developed, and implemented as a generic BPEL execution engine with enhancements for executing geospatial workflows. The enhancements lie especially in its capabilities for handling Geography Markup Language (GML) and standard geospatial Web services, such as the Web Processing Service (WPS) and the Web Feature Service (WFS). BPELPower has been used in several demonstrations over the past decade. Two scenarios are discussed in detail to demonstrate the capabilities of BPELPower. The study showed a standards-compliant, Web-based approach for properly supporting geospatial processing, with enhancements required only at the implementation level. Pattern-based evaluation and performance improvement of the engine are discussed: BPELPower directly supports 22 workflow control patterns and 17 workflow data patterns. In the future, the engine will be enhanced with high-performance parallel processing and broader Web paradigms.
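To illustrate the kind of service call such a geospatial workflow orchestrates, the following is a minimal sketch, not taken from the paper, of issuing an OGC WPS Execute request from Python; the endpoint, process identifier, and input names are hypothetical placeholders.

```python
# Minimal sketch of one step a geospatial BPEL workflow would orchestrate:
# an OGC WPS 1.0.0 Execute request issued over HTTP KVP. The endpoint,
# process identifier, and input names below are hypothetical placeholders.
from urllib.parse import urlencode
from urllib.request import urlopen

WPS_ENDPOINT = "http://example.org/wps"  # hypothetical WPS endpoint

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "BufferFeatures",  # hypothetical process name
    "datainputs": "features=http://example.org/wfs?request=GetFeature;distance=100",
}

def execute_wps(endpoint: str, query: dict) -> bytes:
    """Issue a KVP Execute request and return the raw XML response."""
    url = endpoint + "?" + urlencode(query)
    with urlopen(url) as response:
        return response.read()

if __name__ == "__main__":
    xml_response = execute_wps(WPS_ENDPOINT, params)
    print(xml_response[:200])  # the GML/XML result would be parsed by the engine
```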
Doyle, John
2007-01-01
This paper discusses the topic of judicial execution from the perspective of the intersection of the technological issues and the professional ethics issues. Although physicians are generally ethically forbidden from any involvement in the judicial execution process, this does not appear to be the case for engineering professionals. This creates an interesting but controversial opportunity for the engineering community (especially biomedical engineers) to improve the humaneness and reliability of the judicial execution process.
ERIC Educational Resources Information Center
Dixon, Raymond A.; Johnson, Scott D.
2012-01-01
A cognitive construct that is important when solving engineering design problems is executive control process, or metacognition. It is a central feature of human consciousness that enables one "to be aware of, monitor, and control mental processes." The framework for this study was conceptualized by integrating the model for creative design, which…
Reducing acquisition risk through integrated systems of systems engineering
NASA Astrophysics Data System (ADS)
Gross, Andrew; Hobson, Brian; Bouwens, Christina
2016-05-01
In the fall of 2015, the Joint Staff J7 (JS J7) sponsored the Bold Quest (BQ) 15.2 event and conducted planning and coordination to combine this event into a joint event with the Army Warfighting Assessment (AWA) 16.1 sponsored by the U.S. Army. This multipurpose event combined a Joint/Coalition exercise (JS J7) with components of testing, training, and experimentation required by the Army. In support of Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)) System of Systems Engineering and Integration (SoSE&I), Always On-On Demand (AO-OD) used a system of systems (SoS) engineering approach to develop a live, virtual, constructive distributed environment (LVC-DE) to support risk mitigation utilizing this complex and challenging exercise environment for a system preparing to enter limited user test (LUT). AO-OD executed a requirements-based SoS engineering process starting with user needs and objectives from Army Integrated Air and Missile Defense (AIAMD), Patriot units, Coalition Intelligence, Surveillance and Reconnaissance (CISR), Focused End State 4 (FES4) Mission Command (MC) Interoperability with Unified Action Partners (UAP), and Mission Partner Environment (MPE) Integration and Training, Tactics and Procedures (TTP) assessment. The SoS engineering process decomposed the common operational, analytical, and technical requirements, while utilizing the Institute of Electrical and Electronics Engineers (IEEE) Distributed Simulation Engineering and Execution Process (DSEEP) to provide structured accountability for the integration and execution of the AO-OD LVC-DE. As a result of this process implementation, AO-OD successfully planned for, prepared, and executed a distributed simulation support environment that responsively satisfied user needs and objectives, demonstrating the viability of an LVC-DE environment to support multiple user objectives and support risk mitigation activities for systems in the acquisition process.
System Re-engineering Project Executive Summary
1991-11-01
This project involved the reverse engineering of a Standard Army Management Information System (STAMIS) application, evaluation of structured design and object-oriented design, and re-implementation of the system in Ada. This executive summary presents the approach taken to re-engineer the system, the lessons learned while going through the process, and issues to be considered in future tasks of this nature. Keywords: Computer-Aided Software Engineering (CASE), Distributed Software, Ada, COBOL, Systems Analysis, Systems Design, Life Cycle Development, Functional Decomposition, Object-Oriented.
Clinical image processing engine
NASA Astrophysics Data System (ADS)
Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald
2009-02-01
Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we design a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: reduce the amount of time and manpower required to process medical images, and reduce the turnaround time for responding. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved the efficiency dramatically.
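A rough sketch of the dispatch pattern described above, in which a router sends incoming studies to an application and a job manager multithreads their execution; the module names, routing table, and job structure are illustrative and not the actual CIPE code.

```python
# Illustrative sketch of the engine's dispatch pattern: incoming DICOM studies
# are routed to an application and executed by a thread pool. Names and the
# job structure are hypothetical; this is not the actual CIPE implementation.
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

ROUTING_TABLE = {"CT_COLON": "polyp_detection", "CT_SPINE": "spine_segmentation"}

def route(study: dict) -> str:
    """Router: pick the application for a study based on its series description."""
    return ROUTING_TABLE.get(study["series"], "default_pipeline")

def run_application(study: dict, application: str) -> str:
    """Job manager worker: execute one application on one study."""
    # In the real engine this would launch the image processing program,
    # and the data manager would record the results in a database.
    return f"{application} finished for {study['id']}"

def process_incoming(studies: Queue, workers: int = 4) -> list:
    """Drain the listener queue and run the routed applications in parallel."""
    futures = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while not studies.empty():
            study = studies.get()
            futures.append(pool.submit(run_application, study, route(study)))
        return [f.result() for f in futures]

if __name__ == "__main__":
    q = Queue()
    q.put({"id": "study-001", "series": "CT_COLON"})
    q.put({"id": "study-002", "series": "CT_SPINE"})
    print(process_incoming(q))
```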
Design Knowledge Management System (DKMS) Beta Test Report
1992-11-01
...design process. These problems, which include knowledge representation, constraint propagation, model design, and information integration, are... effective delivery of life-cycle engineering knowledge assistance and information to the design/engineering activities. It does not matter whether these... platform. 4. Reuse - existing data, information, and knowledge can be reused. 5. Remote Execution -- automatically handles remote execution without...
NASA Astrophysics Data System (ADS)
Gold, Zachary Samuel
Engineering play is a new perspective on preschool education that views constructive play as an engineering design process that parallels the way engineers think and work when they develop engineered solutions to human problems (Bairaktarova, Evangelou, Bagiati, & Brophy, 2011). Early research from this perspective supports its use in framing play as a key learning context. However, no research to date has examined associations between engineering play and other factors linked with early school success, such as executive function, mathematical ability, and spatial ability. Additionally, more research is needed to further validate a new engineering play observational measure. This study had two main goals: (1) to gather early validity data on the engineering play measure as a potentially useful instrument for documenting the occurrence of children's engineering play behaviors in educational contexts, such as block play. This was done by testing the factor structure of the engineering play behaviors in this sample and their association with preschoolers' planning, a key aspect of the engineering design process; (2) to explore associations between preschoolers' engineering play and executive function, mathematical ability, and spatial ability. Participants included 110 preschoolers (62 girls; 48 boys; M = 58.47 months) from 10 classrooms in the Midwest United States coded for their frequency of engagement in each of the nine engineering play behaviors. A confirmatory factor analysis resulted in one engineering play factor including six of the engineering play behaviors. A series of marginal regression models revealed that the engineering play factor was significantly and positively associated with the spatial horizontal rotation transformation. However, engineering play was not significantly related to planning ability, executive function, informal mathematical abilities, or other spatial transformation skills. Follow-up analyses revealed significant positive associations between engineering play and planning, executive function, and geometry for only a subgroup of children (n = 27) who had individualized education program (IEP) status. This was the first of a series of studies planned to evaluate the potential of the engineering play perspective as a tool for understanding young children's development and learning across multiple developmental domains. Although most hypotheses regarding engineering play and cognitive skills were not supported, the study provided partial evidence for the reliability and validity of the engineering play observation measure. Future research should include larger sample sizes with more statistical power, continued refinement of the engineering play observation measure, examination of potential associations with specific early learning domains, including spatial ability and language, and more comparisons of engineering play between typically developing children and children with disabilities.
ERIC Educational Resources Information Center
Gold, Zachary Samuel
2017-01-01
Engineering play is a new perspective on preschool education that views constructive play as an engineering design process that parallels the way engineers think and work when they develop engineered solutions to human problems (Bairaktarova, Evangelou, Bagiati, & Brophy, 2011). Early research from this perspective supports its use in framing…
CrossTalk: The Journal of Defense Software Engineering. Volume 21, Number 10, October 2008
2008-10-01
...proprietary modeling offerings, there is considerable convergence around Business Process Modeling Notation (BPMN). The research also found strong... support across vendors for the Business Process Execution Language standard, though there is also emerging support for direct execution of BPMN through... the use of the XML Process Definition Language, an XML serialization of BPMN. Many vendors also provide the needed monitoring of those processes at...
Jafarpour, Borna; Abidi, Samina Raza; Abidi, Syed Sibte Raza
2016-01-01
Computerizing paper-based clinical practice guidelines (CPGs) and then executing them can provide evidence-informed decision support to physicians at the point of care. Semantic web technologies, especially Web Ontology Language (OWL) ontologies, have been profusely used to represent computerized CPGs. Using semantic web reasoning capabilities to execute OWL-based computerized CPGs decouples them from any specific custom-built CPG execution engine and increases their shareability, as any OWL reasoner and triple store can be utilized for CPG execution. However, existing semantic web reasoning-based CPG execution engines suffer from an inability to execute CPGs with high levels of expressivity and from the high cognitive load of computerizing paper-based CPGs and updating their computerized versions. In order to address these limitations, we have developed three CPG execution engines based on OWL 1 DL, OWL 2 DL, and OWL 2 DL + Semantic Web Rule Language (SWRL). OWL 1 DL serves as the base execution engine capable of executing a wide range of CPG constructs; for executing highly complex CPGs, the OWL 2 DL and OWL 2 DL + SWRL engines offer additional execution capabilities. We evaluated the technical performance and medical correctness of our execution engines using a range of CPGs. Technical evaluations show the efficiency of our CPG execution engines in terms of CPU time and the validity of the generated recommendations in comparison to existing CPG execution engines. Medical evaluations by domain experts show the validity of the CPG-mediated therapy plans in terms of relevance, safety, and ordering for a wide range of patient scenarios.
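A heavily simplified sketch of the general idea behind semantic-web-based guideline execution: guideline steps are stored as triples and the next applicable task is obtained with a query rather than a custom engine. The tiny ontology, property names, and tasks are invented for illustration; this is not the authors' OWL DL/SWRL engines.

```python
# Sketch of semantic-web-based CPG execution: guideline steps live in a triple
# store and the engine asks for the next task with a query. The ontology and
# property names are invented; this is not the authors' OWL DL/SWRL engines.
from rdflib import Graph  # third-party RDF library used here for illustration

CPG_TTL = """
@prefix cpg: <http://example.org/cpg#> .
cpg:AssessBP        a cpg:Task ; cpg:nextTask cpg:PrescribeStatin .
cpg:PrescribeStatin a cpg:Task ; cpg:requiresCondition cpg:HighLDL .
"""

g = Graph()
g.parse(data=CPG_TTL, format="turtle")

QUERY = """
PREFIX cpg: <http://example.org/cpg#>
SELECT ?next ?condition WHERE {
    cpg:AssessBP cpg:nextTask ?next .
    OPTIONAL { ?next cpg:requiresCondition ?condition . }
}
"""

for row in g.query(QUERY):
    print(f"next task: {row.next}, gated on condition: {row.condition}")
```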
Independent technical review, handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Purpose: Provide an independent engineering review of the major projects being funded by the Department of Energy, Office of Environmental Restoration and Waste Management. The independent engineering review will address questions of whether the engineering practice is sufficiently developed to a point where a major project can be executed without significant technical problems. The independent review will focus on questions related to: (1) Adequacy of development of the technical base of understanding; (2) Status of development and availability of technology among the various alternatives; (3) Status and availability of the industrial infrastructure to support project design, equipment fabrication, facility construction, and process and program/project operation; (4) Adequacy of the design effort to provide a sound foundation to support execution of the project; (5) Ability of the organization to fully integrate the system, and direct, manage, and control the execution of a complex major project.
Multidisciplinary optimization for engineering systems - Achievements and potential
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1989-01-01
The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.
Leveraging the BPEL Event Model to Support QoS-aware Process Execution
NASA Astrophysics Data System (ADS)
Zaid, Farid; Berbner, Rainer; Steinmetz, Ralf
Business processes executed using compositions of distributed Web Services are susceptible to different fault types. The Web Services Business Process Execution Language (BPEL) is widely used to execute such processes. While BPEL provides fault handling mechanisms to handle functional faults, such as invalid message types, it still lacks a flexible native mechanism to handle non-functional exceptions associated with violations of QoS levels that are typically specified in a governing Service Level Agreement (SLA). In this paper, we present an approach to complement BPEL's fault handling, in which expected QoS levels and the necessary recovery actions are specified declaratively in the form of Event-Condition-Action (ECA) rules. Our main contribution is leveraging BPEL's standard event model, which we use as an event space for the created ECA rules. We validate our approach with an extension to an open-source BPEL engine.
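The following is a minimal sketch of the Event-Condition-Action idea applied to process events: a rule watches activity start/end events and fires a recovery action when an SLA response time is exceeded. The event kinds, threshold, and recovery action are hypothetical, not the authors' engine extension.

```python
# Sketch of ECA rules over process events: a rule correlates start/end events
# per activity and fires a recovery action when an SLA limit is violated.
# Event fields, the threshold, and the action are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProcessEvent:
    activity: str
    kind: str          # e.g. "ActivityExecStarted" or "ActivityExecEnded"
    timestamp: float   # seconds

@dataclass
class EcaRule:
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

def monitor(events, rules):
    """Correlate start/end events per activity and evaluate ECA rules."""
    started = {}
    for ev in events:
        if ev.kind == "ActivityExecStarted":
            started[ev.activity] = ev.timestamp
        elif ev.kind == "ActivityExecEnded" and ev.activity in started:
            ctx = {"activity": ev.activity,
                   "duration": ev.timestamp - started.pop(ev.activity)}
            for rule in rules:
                if rule.condition(ctx):
                    rule.action(ctx)

sla_rule = EcaRule(
    condition=lambda ctx: ctx["duration"] > 2.0,  # illustrative SLA: 2 s maximum
    action=lambda ctx: print(f"recover: re-invoke {ctx['activity']} on a backup service"),
)

monitor([ProcessEvent("getQuote", "ActivityExecStarted", 0.0),
         ProcessEvent("getQuote", "ActivityExecEnded", 3.5)], [sla_rule])
```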
Dual compile strategy for parallel heterogeneous execution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Tyler Barratt; Perry, James Thomas
2012-06-01
The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second, custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel, instead of both applications (and engines) performing the same work at the same time.
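A toy sketch of the monitor-in-parallel idea: a compute function executes the application's instruction stream while a separately derived checker validates each step concurrently. The instruction set and the checking rule are invented for illustration, not the actual strategy.

```python
# Sketch of the monitor-in-parallel idea: the Compute Engine executes the
# application's instructions while a heterogeneous Monitor Engine checks a
# separately generated expectation stream at runtime. The toy instruction set
# and the checking rule are invented for illustration.
import threading
import queue

def compute_engine(program, trace):
    acc = 0
    for op, val in program:            # toy instruction set: ("ADD", n) / ("MUL", n)
        acc = acc + val if op == "ADD" else acc * val
        trace.put((op, val, acc))      # expose each executed step to the monitor
    trace.put(None)                    # signal end of execution
    return acc

def monitor_engine(expected_ops, trace):
    """Check, in parallel, that executed ops match the second compile's list."""
    step = 0
    while (item := trace.get()) is not None:
        op, val, _ = item
        assert (op, val) == expected_ops[step], f"divergence at step {step}"
        step += 1
    print("monitor: execution matched the independently compiled check stream")

program = [("ADD", 2), ("MUL", 5), ("ADD", 1)]
trace = queue.Queue()
checker = threading.Thread(target=monitor_engine, args=(list(program), trace))
checker.start()
compute_engine(program, trace)
checker.join()
```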
Composable Framework Support for Software-FMEA Through Model Execution
NASA Astrophysics Data System (ADS)
Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco
2016-08-01
Performing Failure Modes and Effect Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized early design phase model execution, classic SW-FMEA approaches carry significant risks and are human effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.
Automatic Earth observation data service based on reusable geo-processing workflow
NASA Astrophysics Data System (ADS)
Chen, Nengcheng; Di, Liping; Gong, Jianya; Yu, Genong; Min, Min
2008-12-01
A common Sensor Web data service framework for Geo-Processing Workflow (GPW) is presented as part of the NASA Sensor Web project. This framework consists of a data service node, a data processing node, a data presentation node, a Catalogue Service node, and a BPEL engine. An abstract model designer is used to design the top-level GPW model, a model instantiation service is used to generate the concrete BPEL, and the BPEL execution engine is adopted to execute it. The framework is used to generate several kinds of data: raw data from live sensors, coverage or feature data, geospatial products, or sensor maps. A scenario for an EO-1 Sensor Web data service for fire classification is used to test the feasibility of the proposed framework. The execution time and influences of the service framework are evaluated. The experiments show that this framework can improve the quality of services for sensor data retrieval and processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2008-04-01
The CARB Executive Order Exemption Process for a Hydrogen-fueled Internal Combustion Engine Vehicle was undertaken to define the requirements to achieve a California Air Resources Board Executive Order for a hydrogen-fueled vehicle retrofit kit. A 2005 to 2006 General Motors Company Sierra/Chevrolet Silverado 1500HD pickup was assumed to be the build-from vehicle for the retrofit kit. The emissions demonstration was determined not to pose a significant hurdle due to the non-hydrocarbon-based fuel and lean-burn operation. However, significant work was determined to be necessary for Onboard Diagnostics Level II compliance. Therefore, it is recommended that an Experimental Permit be obtained from the California Air Resources Board to license and operate the vehicles for the duration of the demonstration in support of preparing a fully compliant and certifiable package that can be submitted.
Debugging expert systems using a dynamically created hypertext network
NASA Technical Reports Server (NTRS)
Boyle, Craig D. B.; Schuette, John F.
1991-01-01
The labor-intensive nature of expert system writing and debugging motivated this study. The hypothesis is that a hypertext-based debugging tool is easier and faster to use than one traditional tool, the graphical execution trace. HESDE (Hypertext Expert System Debugging Environment) uses hypertext nodes and links to represent the objects and their relationships created during the execution of a rule-based expert system. HESDE operates transparently on top of the CLIPS (C Language Integrated Production System) rule-based system environment and is used during the knowledge base debugging process. During the execution process, HESDE builds an execution trace. Use of facts, rules, and their values is automatically stored in a hypertext network for each execution cycle. After the execution process, the knowledge engineer may access the hypertext network and browse the network created. The network may be viewed in terms of rules, facts, and values. An experiment was conducted to compare HESDE with a graphical debugging environment. Subjects were given representative tasks. For speed and accuracy, in eight of the eleven tasks given to subjects, HESDE was significantly better.
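A small sketch of the trace-as-hypertext idea: each execution cycle adds nodes for the fired rule, the facts it used, and their values, linked so the knowledge engineer can later browse from a rule to its facts. The rule and fact names are invented; this is not the CLIPS/HESDE implementation.

```python
# Sketch of recording an execution trace as a browsable node/link network of
# rules, facts, and values. Names are invented; this is not HESDE itself.
class TraceNetwork:
    def __init__(self):
        self.nodes = {}   # node id -> {"type": ..., "label": ...}
        self.links = []   # (from_id, to_id, relation)

    def add_node(self, node_id, node_type, label):
        self.nodes[node_id] = {"type": node_type, "label": label}

    def link(self, src, dst, relation):
        self.links.append((src, dst, relation))

    def neighbours(self, node_id):
        """Browse outgoing links from a node, as a debugging session would."""
        return [(dst, rel) for src, dst, rel in self.links if src == node_id]

net = TraceNetwork()
# cycle 1: rule "check-pressure" fired using fact "pressure" with value 87
net.add_node("rule:check-pressure", "rule", "check-pressure (cycle 1)")
net.add_node("fact:pressure", "fact", "pressure")
net.add_node("value:87", "value", "87")
net.link("rule:check-pressure", "fact:pressure", "used-fact")
net.link("fact:pressure", "value:87", "had-value")

print(net.neighbours("rule:check-pressure"))  # browse from the rule to its facts
```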
A Conceptual Level Design for a Static Scheduler for Hard Real-Time Systems
1988-03-01
The design of hard real-time systems is gaining a great deal of attention in the software engineering field as more and more real-world processes are... for these hard real-time systems. PSDL, as an executable design language, is supported by an execution support system consisting of a static scheduler, dynamic scheduler, and translator.
2008-12-01
A Systems Engineering Process Supporting the Development of Operational Requirements Driven Federations, Andreas Tolk & Thomas G. Litwin. ... Executive Office (PEO)... capabilities and their relative changes... based on the system to be evaluated as well, in particular when it comes to...
Application driven interface generation for EASIE. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kao, Ya-Chen
1992-01-01
The Environment for Application Software Integration and Execution (EASIE) provides a user interface and a set of utility programs which support the rapid integration and execution of analysis programs about a central relational database. EASIE provides users with two basic modes of execution. One of them is a menu-driven execution mode, called Application-Driven Execution (ADE), which provides sufficient guidance to review data, select a menu action item, and execute an application program. The other mode of execution, called Complete Control Execution (CCE), provides an extended executive interface which allows in-depth control of the design process. Currently, the EASIE system is based on alphanumeric techniques only. It is the purpose of this project to extend the flexibility of the EASIE system in the ADE mode by implementing it in a window system. Secondly, a set of utilities will be developed to assist the experienced engineer in the generation of an ADE application.
Leadership processes for re-engineering changes to the health care industry.
Guo, Kristina L
2004-01-01
As health care organizations seek innovative ways to change financing and delivery mechanisms due to escalated health care costs and increased competition, drastic changes are being sought in the form of re-engineering. This study discusses the leader's role in re-engineering health care. It specifically addresses the reasons for failures in re-engineering and argues that success depends on senior-level leaders playing a critical role. Existing studies lack comprehensiveness in establishing models of re-engineering and management guidelines. This research focuses on integrating re-engineering and leadership processes in health care by creating a step-by-step model. Particularly, it illustrates the four Es: Examination, Establishment, Execution and Evaluation, as a comprehensive re-engineering process that combines managerial roles and activities to result in successfully changed and re-engineered health care organizations.
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.
Foundations for Streaming Model Transformations by Complex Event Processing.
Dávid, István; Ráth, István; Varró, Dániel
2018-01-01
Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.
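A minimal sketch of the complex-event-processing idea in this setting: watch a stream of model change events for a pattern (here, a "create" followed by a "setName" on the same element) and fire a reactive transformation. The event kinds and the pattern notation are invented, not the Viatra/CEP domain-specific language.

```python
# Minimal sketch of matching a complex event pattern over a stream of model
# change events and firing a reactive transformation rule on each match.
# Event kinds and the pattern notation are invented for illustration.
def match_sequence(stream, pattern):
    """Yield groups of events whose kinds follow `pattern` for one element."""
    pending = {}  # element id -> list of matched events so far
    for event in stream:
        elem = event["element"]
        seen = pending.setdefault(elem, [])
        if event["kind"] == pattern[len(seen)]:
            seen.append(event)
            if len(seen) == len(pattern):
                yield pending.pop(elem)
        else:
            pending.pop(elem, None)  # the partial match is invalidated

def on_match(events):
    elem = events[0]["element"]
    print(f"transformation rule fired for {elem}: propagate change to target model")

stream = [
    {"kind": "create", "element": "Block1"},
    {"kind": "create", "element": "Block2"},
    {"kind": "setName", "element": "Block1", "value": "Engine"},
]
for group in match_sequence(stream, ["create", "setName"]):
    on_match(group)
```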
Boston-Fleischhauer, Carol
2008-02-01
The demand to redesign healthcare processes that achieve efficient, effective, and safe results is never-ending. Part 1 of this 2-part series introduced human factors engineering and reliability science as important knowledge to enhance existing operational and clinical process design methods in healthcare organizations. In part 2, the author applies this knowledge to one of the most common operational processes in healthcare: clinical documentation. Specific implementation strategies and anticipated results are discussed, along with organizational challenges and recommended executive responses.
Selecting the Parameters of the Orientation Engine for a Technological Spacecraft
NASA Astrophysics Data System (ADS)
Belousov, A. I.; Sedelnikov, A. V.
2018-01-01
This work provides a solution to the issues of providing favorable conditions for carrying out gravitationally sensitive technological processes on board a spacecraft. It is noted that an important role is played by the optimal choice of the spacecraft's orientation system and of the main parameters of the propulsion system as the most important executive element of the spacecraft's orientation and orbital motion control system. Advantages and disadvantages of two different orientation systems are considered. One of them assumes the periodic impulsive firing of low-thrust liquid rocket engines; the other is based on the continuous operation of the executing elements. A conclusion is drawn on the need to take into account the composition of gravitationally sensitive processes when choosing the orientation system of the spacecraft.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987
RT-25: Requirements Management for Net-Centric Enterprises. Phase 1
2011-04-28
...software systems. These include Business Process Modeling Notation (BPMN) (White and Miers 2008) and Business Process Execution Language (BPEL) (Sarang...)... Engineering with SysML/UML: Modeling, Analysis, Design. Morgan Kaufmann/The OMG Press. White, S. A. and D. Miers (2008). BPMN Modeling and Reference...
Engineering Elegant Systems: Postulates, Principles, and Hypotheses of Systems Engineering
NASA Technical Reports Server (NTRS)
Watson, Michael D.
2018-01-01
Definition: System Engineering is the engineering discipline which integrates the system functions, system environment, and the engineering disciplines necessary to produce and/or operate an elegant system; Elegant System - A system that is robust in application, fully meeting specified and adumbrated intent, is well structured, and is graceful in operation. Primary Focus: System Design and Integration: Identify system couplings and interactions; Identify system uncertainties and sensitivities; Identify emergent properties; Manage the effectiveness of the system. Engineering Discipline Integration: Manage flow of information for system development and/or operations; Maintain system activities within budget and schedule. Supporting Activities: Process application and execution.
Improvement of Selected Logistics Processes Using Quality Engineering Tools
NASA Astrophysics Data System (ADS)
Zasadzień, Michał; Žarnovský, Jozef
2018-03-01
The increase in the number of orders, the increasing quality requirements, and the required speed of order preparation call for the implementation of new solutions and the improvement of logistics processes. Any disruption that occurs during the execution of an order often leads to customer dissatisfaction, as well as loss of his or her confidence. The article presents a case study of the use of quality engineering methods and tools to improve the e-commerce logistics process. This made it possible to identify and prioritize key issues, identify their causes, and formulate improvement and prevention measures.
Advanced Turbine Technology Applications Project (ATTAP)
NASA Technical Reports Server (NTRS)
1989-01-01
ATTAP activities during the past year were highlighted by an extensive materials assessment, execution of a reference powertrain design, test-bed engine design and development, ceramic component design, materials and component characterization, ceramic component process development and fabrication, component rig design and fabrication, test-bed engine fabrication, and hot gasifier rig and engine testing. Materials assessment activities entailed engine environment evaluation of domestically supplied radial gasifier turbine rotors that were available at the conclusion of the Advanced Gas Turbine (AGT) Technology Development Project as well as an extensive survey of both domestic and foreign ceramic suppliers and Government laboratories performing ceramic materials research applicable to advanced heat engines. A reference powertrain design was executed to reflect the selection of the AGT-5 as the ceramic component test-bed engine for the ATTAP. Test-bed engine development activity focused on upgrading the AGT-5 from a 1038 C (1900 F) metal engine to a durable 1371 C (2500 F) structural ceramic component test-bed engine. Ceramic component design activities included the combustor, gasifier turbine static structure, and gasifier turbine rotor. The materials and component characterization efforts have included the testing and evaluation of several candidate ceramic materials and components being developed for use in the ATTAP. Ceramic component process development and fabrication activities were initiated for the gasifier turbine rotor, gasifier turbine vanes, gasifier turbine scroll, extruded regenerator disks, and thermal insulation. Component rig development activities included combustor, hot gasifier, and regenerator rigs. Test-bed engine fabrication activities consisted of the fabrication of an all-new AGT-5 durability test-bed engine and support of all engine test activities through instrumentation/build/repair. Hot gasifier rig and test-bed engine testing activities were performed.
Liquid rocket booster integration study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1988-01-01
The impacts of introducing liquid rocket booster engines (LRB) into the Space Transportation System (STS)/Kennedy Space Center (KSC) launch environment are identified and evaluated. Proposed ground systems configurations are presented along with a launch site requirements summary. Prelaunch processing scenarios are described and the required facility modifications and new facility requirements are analyzed. Flight vehicle design recommendations to enhance launch processing are discussed. Processing approaches to integrate LRB with existing STS launch operations are evaluated. The key features and significance of launch site transition to a new STS configuration in parallel with ongoing launch activities are enumerated. This volume is the executive summary of the five volume series.
A distributed version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.; Curlett, Brian P.
1993-01-01
Distributed NEPP, a version of the NASA Engine Performance Program, uses the original NEPP code but executes it in a distributed computing environment. Multiple workstations connected by a network increase the program's speed and, more importantly, the complexity of the cases it can handle in a reasonable time. Distributed NEPP uses the public-domain software package Parallel Virtual Machine (PVM), allowing it to execute on clusters of machines containing many different architectures. It includes the capability to link with other computers, allowing them to process NEPP jobs in parallel. This paper discusses the design issues and granularity considerations that entered into programming Distributed NEPP and presents the results of timing runs.
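The original system uses PVM across heterogeneous workstations; as a rough modern analogue only, the sketch below farms independent engine analysis cases out to a pool of local workers and collects the results. The case data and the analysis function are placeholders, not NEPP itself.

```python
# Rough analogue of farming independent engine analysis cases out to parallel
# workers and collecting results. (The original used PVM across networked
# workstations; the case data and the analysis function are placeholders.)
from multiprocessing import Pool

def analyze_engine_case(case: dict) -> dict:
    """Stand-in for running one engine cycle analysis for a single design point."""
    thrust = case["mass_flow"] * case["exit_velocity"]  # toy calculation only
    return {"case_id": case["case_id"], "thrust": thrust}

cases = [
    {"case_id": i, "mass_flow": 50.0 + i, "exit_velocity": 600.0}
    for i in range(8)
]

if __name__ == "__main__":
    with Pool(processes=4) as pool:  # workers play the role of networked hosts
        for result in pool.map(analyze_engine_case, cases):
            print(result)
```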
76 FR 54143 - Airworthiness Directives; Turbomeca Arriel 1B Turboshaft Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-31
... & Propeller Directorate, 12 New England Executive Park, Burlington, MA 01803; phone: 781-238-7772; fax: 781... rotor; and (2) The free rotation of the gas generator rotor; and (3) No grinding noise during the... Engineer, Engine Certification Office, FAA, Engine & Propeller Directorate, 12 New England Executive Park...
Interfacing modules for integrating discipline specific structural mechanics codes
NASA Technical Reports Server (NTRS)
Endres, Ned M.
1989-01-01
An outline of the organization and capabilities of the Engine Structures Computational Simulator (Simulator) at NASA Lewis Research Center is given. One of the goals of the research at Lewis is to integrate various discipline specific structural mechanics codes into a software system which can be brought to bear effectively on a wide range of engineering problems. This system must possess the qualities of being effective and efficient while still remaining user friendly. The simulator was initially designed for the finite element simulation of gas jet engine components. Currently, the simulator has been restricted to only the analysis of high pressure turbine blades and the accompanying rotor assembly, although the current installation can be expanded for other applications. The simulator presently assists the user throughout its procedures by performing information management tasks, executing external support tasks, organizing analysis modules and executing these modules in the user defined order while maintaining processing continuity.
Engineering the Business of Defense Acquisition: An Analysis of Program Office Processes
2015-05-01
...Information Technology and Business Process Redesign. MIT Sloan Management Review. Retrieved from http://sloanreview.mit.edu... links systems management to process execution... Three Phases/Multi-Year Effort (This Phase): literature review, model development (formal and...)...
Integration of rocket turbine design and analysis through computer graphics
NASA Technical Reports Server (NTRS)
Hsu, Wayne; Boynton, Jim
1988-01-01
An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.
Applications for General Purpose Command Buffers: The Emergency Conjunction Avoidance Maneuver
Scheid, Robert J; England, Martin
2016-01-01
A case study is presented for the use of Relative Operation Sequence (ROS) command buffers to quickly execute a propulsive maneuver to avoid a collision with space debris. In this process, a ROS is custom-built with a burn time and magnitude, uplinked to the spacecraft, and executed in 15 percent of the time of the previous method. This new process provides three primary benefits. First, the planning cycle can be delayed until it is certain a burn must be performed, reducing team workload. Second, changes can be made to the burn parameters almost up to the point of execution while still allowing the normal uplink product review process, reducing the risk of leaving the operational orbit because of outdated burn parameters, and minimizing the chance of accidents from human error, such as missed commands, in a high-stress situation. Third, the science impacts can be customized and minimized around the burn, and in the event of an abort can be eliminated entirely in some circumstances. The result is a compact burn process that can be executed in as few as four hours and can be aborted seconds before execution. Operational, engineering, planning, and flight dynamics perspectives are presented, as well as a functional overview of the code and workflow required to implement the process. Future expansions and capabilities are also discussed.
Artemis: Integrating Scientific Data on the Grid (Preprint)
2004-07-01
...Theseus execution engine [Barish and Knoblock 03] to efficiently execute the generated Datalog program. The Theseus execution engine has a wide... variety of operations to query databases, web sources, and web services. Theseus also contains a wide variety of relational operations, such as... selection, union, or projection. Furthermore, Theseus optimizes the execution of an integration plan by querying several data sources in parallel and...
NASA Technical Reports Server (NTRS)
Rowell, Lawrence F.; Davis, John S.
1989-01-01
The Environment for Application Software Integration and Execution (EASIE) provides a methodology and a set of software utility programs to ease the task of coordinating engineering design and analysis codes. EASIE was designed to meet the needs of conceptual design engineers that face the task of integrating many stand-alone engineering analysis programs. Using EASIE, programs are integrated through a relational database management system. Volume 1, Executive Overview, gives an overview of the functions provided by EASIE and describes their use. Three operational design systems based upon the EASIE software are briefly described.
Distributed process manager for an engineering network computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gait, J.
1987-08-01
MP is a manager for systems of cooperating processes in a local area network of engineering workstations. MP supports transparent continuation by maintaining multiple copies of each process on different workstations. Computational bandwidth is optimized by executing processes in parallel on different workstations. Responsiveness is high because workstations compete among themselves to respond to requests. The technique is to select a master from among a set of replicates of a process by a competitive election between the copies. Migration of the master when a fault occurs or when response slows down is effected by inducing the election of a new master. Competitive response stabilizes system behavior under load, so MP exhibits real-time behavior.
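A simplified sketch of the competitive-election idea among process replicas: each replica reports a responsiveness score, the best one becomes master, and a fault on the master induces a new election. The scoring and the election rule are placeholders, not the MP implementation.

```python
# Simplified sketch of competitive master election among process replicas:
# the most responsive copy wins, and a fault induces a new election. The
# responsiveness metric is a placeholder, not the MP implementation.
import random

class Replica:
    def __init__(self, host):
        self.host = host
        self.alive = True

    def responsiveness(self):
        """Lower is better; stand-in for a measured response latency."""
        return random.uniform(0.01, 0.5) if self.alive else float("inf")

def elect_master(replicas):
    """Competitive election: the replica that responds best becomes master."""
    return min(replicas, key=lambda r: r.responsiveness())

replicas = [Replica(f"ws{i}") for i in range(4)]
master = elect_master(replicas)
print("master:", master.host)

# a fault on the master induces the election of a new master (transparent continuation)
master.alive = False
master = elect_master(replicas)
print("new master after fault:", master.host)
```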
Earth Science Mining Web Services
NASA Astrophysics Data System (ADS)
Pham, L. B.; Lynnes, C. S.; Hegde, M.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.
2008-12-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache and allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to this infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
Earth Science Mining Web Services
NASA Technical Reports Server (NTRS)
Pham, Long; Lynnes, Christopher; Hegde, Mahabaleshwa; Graves, Sara; Ramachandran, Rahul; Maskey, Manil; Keiser, Ken
2008-01-01
To allow scientists further capabilities in the area of data mining and web services, the Goddard Earth Sciences Data and Information Services Center (GES DISC) and researchers at the University of Alabama in Huntsville (UAH) have developed a system to mine data at the source without the need of network transfers. The system has been constructed by linking together several pre-existing technologies: the Simple Scalable Script-based Science Processor for Measurements (S4PM), a processing engine at the GES DISC; the Algorithm Development and Mining (ADaM) system, a data mining toolkit from UAH that can be configured in a variety of ways to create customized mining processes; ActiveBPEL, a workflow execution engine based on BPEL (Business Process Execution Language); XBaya, a graphical workflow composer; and the EOS Clearinghouse (ECHO). XBaya is used to construct an analysis workflow at UAH using ADaM components, which are also installed remotely at the GES DISC, wrapped as Web Services. The S4PM processing engine searches ECHO for data using space-time criteria, staging them to cache and allowing the ActiveBPEL engine to remotely orchestrate the processing workflow within S4PM. As mining is completed, the output is placed in an FTP holding area for the end user. The goals are to give users control over the data they want to process, while mining data at the data source using the server's resources rather than transferring the full volume over the internet. These diverse technologies have been infused into a functioning, distributed system with only minor changes to the underlying technologies. The key to the infusion is the loosely coupled, Web-Services-based architecture: all of the participating components are accessible (one way or another) through SOAP (Simple Object Access Protocol)-based Web Services.
Intelligent Signal Processing for Active Control
1992-06-17
Intelligent Signal Processing for Active Control. P. A. Ramamoorthy, University of Cincinnati, College of Engineering; Office of Naval Research Contract No. N0001489-J-1633. ... Executive Summary: The thrust of this...
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
DALiuGE: A graph execution framework for harnessing the astronomical data deluge
NASA Astrophysics Data System (ADS)
Wu, C.; Tobar, R.; Vinsen, K.; Wicenec, A.; Pallot, D.; Lao, B.; Wang, R.; An, T.; Boulton, M.; Cooper, I.; Dodson, R.; Dolensky, M.; Mei, Y.; Wang, F.
2017-07-01
The Data Activated Liu Graph Engine (DALiuGE) is an execution framework for processing large astronomical datasets at a scale required by the Square Kilometre Array Phase 1 (SKA1). It includes an interface for expressing complex data reduction pipelines consisting of both datasets and algorithmic components and an implementation run-time to execute such pipelines on distributed resources. By mapping the logical view of a pipeline to its physical realisation, DALiuGE separates the concerns of multiple stakeholders, allowing them to collectively optimise large-scale data processing solutions in a coherent manner. The execution in DALiuGE is data-activated, where each individual data item autonomously triggers the processing on itself. Such decentralisation also makes the execution framework very scalable and flexible, supporting pipeline sizes ranging from less than ten tasks running on a laptop to tens of millions of concurrent tasks on the second fastest supercomputer in the world. DALiuGE has been used in production for reducing interferometry datasets from the Karl G. Jansky Very Large Array and the Mingantu Ultrawide Spectral Radioheliograph, and is being developed as the execution framework prototype for the Science Data Processor (SDP) consortium of the Square Kilometre Array (SKA) telescope. This paper presents a technical overview of DALiuGE and discusses case studies from the CHILES and MUSER projects that use DALiuGE to execute production pipelines. In a companion paper, we provide in-depth analysis of DALiuGE's scalability to very large numbers of tasks on two supercomputing facilities.
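A small sketch of the data-activated idea: each data item knows its consumers and triggers them as soon as it is written, so execution is driven by the data rather than by a central scheduler. The class and method names are illustrative, not the actual DALiuGE API.

```python
# Sketch of data-activated execution: writing a data "drop" autonomously
# triggers its consumers, which in turn write their output drops. Class and
# method names are illustrative placeholders, not the DALiuGE API.
class DataDrop:
    def __init__(self, name):
        self.name = name
        self.consumers = []
        self.payload = None

    def add_consumer(self, consumer):
        self.consumers.append(consumer)

    def write(self, payload):
        self.payload = payload
        for consumer in self.consumers:  # completion autonomously triggers work
            consumer.trigger(self)

class AppDrop:
    def __init__(self, name, func, output):
        self.name, self.func, self.output = name, func, output

    def trigger(self, data_drop):
        print(f"{self.name} activated by {data_drop.name}")
        self.output.write(self.func(data_drop.payload))

calibrated = DataDrop("calibrated_vis")
raw = DataDrop("raw_visibilities")
raw.add_consumer(AppDrop("calibrate", lambda v: [x * 0.5 for x in v], calibrated))
calibrated.add_consumer(AppDrop("image", lambda v: sum(v), DataDrop("dirty_image")))

raw.write([1.0, 2.0, 3.0])  # writing the raw data activates the whole chain
```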
Creating system engineering products with executable models in a model-based engineering environment
NASA Astrophysics Data System (ADS)
Karban, Robert; Dekens, Frank G.; Herzig, Sebastian; Elaasar, Maged; Jankevičius, Nerijus
2016-08-01
Applying systems engineering across the life-cycle results in a number of products built from interdependent sources of information using different kinds of system level analysis. This paper focuses on leveraging the Executable System Engineering Method (ESEM) [1] [2], which automates requirements verification (e.g. power and mass budget margins and duration analysis of operational modes) using executable SysML [3] models. The particular value proposition is to integrate requirements, and executable behavior and performance models for certain types of system level analysis. The models are created with modeling patterns that involve structural, behavioral and parametric diagrams, and are managed by an open source Model Based Engineering Environment (named OpenMBEE [4]). This paper demonstrates how the ESEM is applied in conjunction with OpenMBEE to create key engineering products (e.g. operational concept document) for the Alignment and Phasing System (APS) within the Thirty Meter Telescope (TMT) project [5], which is under development by the TMT International Observatory (TIO) [5].
Elements of Engineering Excellence
NASA Technical Reports Server (NTRS)
Blair, J. C.; Ryan, R. S.; Schutzenhofer
2012-01-01
The inspiration for this Contract Report (CR) originated in discussions with the director of Marshall Space Flight Center (MSFC) Engineering who asked that we investigate the question: "How do you achieve excellence in aerospace engineering?" Engineering a space system is a complex activity. Avoiding its inherent potential pitfalls and achieving a successful product is a challenge. This CR presents one approach to answering the question of how to achieve Engineering Excellence. We first investigated the root causes of NASA major failures as a basis for developing a proposed answer to the question of Excellence. The following discussions integrate a triad of Technical Understanding and Execution, Partnership with the Project, and Individual and Organizational Culture. The thesis is that you must focus on the whole process and its underlying culture, not just on the technical aspects. In addition to the engineering process, emphasis is given to the need and characteristics of a Learning Organization as a mechanism for changing the culture.
NASA Astrophysics Data System (ADS)
Leu, Jun-Der; Lee, Larry Jung-Hsing
2017-09-01
Enterprise resource planning (ERP) is a software solution that integrates the operational processes of the business functions of an enterprise. However, implementing ERP systems is a complex process. In addition to the technical issues, companies must address problems associated with business process re-engineering, time and budget control, and organisational change. Numerous industrial studies have shown that the failure rate of ERP implementation is high, even for well-designed systems. Thus, ERP projects typically require a clear methodology to support the project execution and effectiveness. In this study, we propose a theoretical model for ERP implementation. The value engineering (VE) method forms the basis of the proposed framework, which integrates Six Sigma tools. The proposed framework encompasses five phases: knowledge generation, analysis, creation, development and execution. In the VE method, potential ERP problems related to software, hardware, consultation and organisation are analysed in a group-decision manner and in relation to value, and Six Sigma tools are applied to avoid any project defects. We validate the feasibility of the proposed model by applying it to an international manufacturing enterprise in Taiwan. The results show improvements in customer response time and operational efficiency in terms of work-in-process and turnover of materials. Based on the evidence from the case study, the theoretical framework is discussed together with the study's limitations and suggestions for future research.
Automatic programming for critical applications
NASA Technical Reports Server (NTRS)
Loganantharaj, Raj L.
1988-01-01
The important phases of a software life cycle include verification and maintenance. Usually, execution performance is an expected requirement in a software development process. Unfortunately, the verification and the maintenance of programs are the time-consuming and frustrating aspects of software engineering. Verification cannot be waived for programs used for critical applications such as military, space, and nuclear plant systems. As a consequence, synthesis of programs from specifications, an alternative way of developing correct programs, is becoming popular. The definition of automatic programming, or what is understood by it, has changed along with our expectations. At present, the goal of automatic programming is the automation of the programming process. Specifically, it means the application of artificial intelligence to software engineering in order to define techniques and create environments that help in the creation of high level programs. The automatic programming process may be divided into two phases: the problem acquisition phase and the program synthesis phase. In the problem acquisition phase, an informal specification of the problem is transformed into an unambiguous specification, while in the program synthesis phase such a specification is further transformed into a concrete, executable program.
Stateless and stateful implementations of faithful execution
Pierson, Lyndon G; Witzke, Edward L; Tarman, Thomas D; Robertson, Perry J; Eldridge, John M; Campbell, Philip L
2014-12-16
A faithful execution system includes system memory, a target processor, and a protection engine. The system memory stores a ciphertext including value fields and integrity fields. The value fields each include an encrypted executable instruction and the integrity fields each include an encrypted integrity value for determining whether a corresponding one of the value fields has been modified. The target processor executes plaintext instructions decoded from the ciphertext while the protection engine is coupled between the system memory and the target processor. The protection engine includes logic to retrieve the ciphertext from the system memory, decrypt the value fields into the plaintext instructions, perform an integrity check based on the integrity fields to determine whether any of the corresponding value fields have been modified, and provide the plaintext instructions to the target processor for execution.
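As a rough illustration of the decode-and-check loop such a protection engine performs, the sketch below pairs each encrypted instruction (value field) with a keyed MAC (integrity field) and refuses to return plaintext when the check fails. It is only a conceptual sketch, not the patented design: the keystream construction, the use of HMAC, and the field layout are assumptions made for the example.

```python
import hmac, hashlib, os

KEY = os.urandom(32)  # shared secret between the loader and the protection engine

def keystream(key: bytes, index: int, length: int) -> bytes:
    # Deterministic per-instruction keystream (illustrative only, not a vetted cipher mode).
    return hashlib.sha256(key + index.to_bytes(8, "big")).digest()[:length]

def protect(instructions: list[bytes]) -> list[tuple[bytes, bytes]]:
    """Build the ciphertext as (value field, integrity field) pairs."""
    fields = []
    for i, ins in enumerate(instructions):
        value = bytes(a ^ b for a, b in zip(ins, keystream(KEY, i, len(ins))))
        integrity = hmac.new(KEY, i.to_bytes(8, "big") + value, hashlib.sha256).digest()
        fields.append((value, integrity))
    return fields

def fetch_decode(fields, index):
    """Protection-engine step: decrypt one value field and check its integrity field."""
    value, integrity = fields[index]
    expected = hmac.new(KEY, index.to_bytes(8, "big") + value, hashlib.sha256).digest()
    if not hmac.compare_digest(integrity, expected):
        raise RuntimeError(f"instruction {index} was modified; refusing to execute")
    return bytes(a ^ b for a, b in zip(value, keystream(KEY, index, len(value))))

program = [b"\x90\x90", b"\x48\x31\xc0"]          # placeholder opcode bytes
memory = protect(program)
assert [fetch_decode(memory, i) for i in range(len(memory))] == program
```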
Near-memory data reorganization engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gokhale, Maya; Lloyd, G. Scott
A memory subsystem package is provided that has processing logic for data reorganization within the memory subsystem package. The processing logic is adapted to reorganize data stored within the memory subsystem package. In some embodiments, the memory subsystem package includes memory units, a memory interconnect, and a data reorganization engine ("DRE"). The data reorganization engine includes a stream interconnect and DRE units including a control processor and a load-store unit. The control processor is adapted to execute instructions to control a data reorganization. The load-store unit is adapted to process data move commands received from the control processor via the stream interconnect for loading data from a load memory address of a memory unit and storing data to a store memory address of a memory unit.
Li, Dingcheng; Endle, Cory M; Murthy, Sahana; Stancl, Craig; Suesse, Dale; Sottara, Davide; Huff, Stanley M; Chute, Christopher G; Pathak, Jyotishman
2012-01-01
With increasing adoption of electronic health records (EHRs), the need for formal representations for EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model (QDM) from the National Quality Forum (NQF) provides an information model and a grammar that is intended to represent data collected during routine clinical care in EHRs as well as the basic logic required to represent the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs. Rather, human interpretation and subsequent implementation is a required step for this process. To address this need, the current study investigates the open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using the Apache Foundation's Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM-defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases for Coronary Artery Disease and Diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system.
NASA Astrophysics Data System (ADS)
Othman, Rozmie R.; Ahmad, Mohd Zamri Zahir; Ali, Mohd Shaiful Aziz Rashid; Zakaria, Hasneeza Liza; Rahman, Md. Mostafijur
2015-05-01
Consuming 40 to 50 percent of software development cost, software testing is one of the most resource-consuming activities in the software development lifecycle. To ensure an acceptable level of quality and reliability of a typical software product, it is desirable to test every possible combination of input data under various configurations. Due to the combinatorial explosion problem, exhaustive testing is practically impossible. Resource constraints, costing factors, and strict time-to-market deadlines are amongst the main factors that inhibit such consideration. Earlier work suggests that a sampling strategy (i.e. one based on t-way parameter interaction, called t-way testing) can be effective in reducing the number of test cases without affecting the fault detection capability. However, for a very large system, even a t-way strategy will produce a large test suite that needs to be executed. In the end, only part of the planned test suite can be executed in order to meet the aforementioned constraints. Here, test engineers need a way to measure the effectiveness of a partially executed test suite in order to assess the risk they have to take. Motivated by this problem, this paper presents an effectiveness comparison of partially executed t-way test suites generated by existing strategies, using the tuple coverage method. With it, test engineers can predict the effectiveness of the testing process if only part of the original test cases is executed.
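The tuple coverage measurement referred to above can be computed directly from the parameter model and the executed subset of test cases: enumerate every t-way parameter-value combination and count how many are exercised. A minimal sketch follows; the parameter model, the executed tests, and t = 2 are illustrative assumptions, not the strategies compared in the paper.

```python
from itertools import combinations, product

def t_tuples(parameters: dict[str, list], t: int) -> set:
    """All t-way parameter-value combinations for the given parameter model."""
    tuples = set()
    for names in combinations(sorted(parameters), t):
        for values in product(*(parameters[n] for n in names)):
            tuples.add(tuple(zip(names, values)))
    return tuples

def covered(tests: list[dict], t: int) -> set:
    """t-way tuples exercised by the (possibly partial) executed test cases."""
    seen = set()
    for test in tests:
        for names in combinations(sorted(test), t):
            seen.add(tuple((n, test[n]) for n in names))
    return seen

# Illustrative 3-parameter model and a partially executed pairwise (t = 2) suite.
model = {"os": ["linux", "win"], "db": ["pg", "mysql"], "ui": ["web", "cli"]}
executed = [{"os": "linux", "db": "pg", "ui": "web"},
            {"os": "win", "db": "mysql", "ui": "web"}]
all_pairs = t_tuples(model, 2)
print(f"pairwise coverage: {len(covered(executed, 2) & all_pairs)}/{len(all_pairs)}")
```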
SITE DEMONSTRATION OF THE BASIC EXTRACTIVE SLUDGE TREATMENT PROCESS
The Superfund Innovative Technology Evaluation (SITE) Program, in cooperation with EPA Region 5, the Great Lakes National Program Office (GLNPO), and the U.S. Army Corps of Engineers (COE), planned and executed a pilot-scale evaluation of the Basic Extractive Sludge Treatment (B...
The Aircraft Electric Taxi System: A Qualitative Multi Case Study
NASA Astrophysics Data System (ADS)
Johnson, Thomas Frank
The problem this research addresses is the airline industry's apparent unwillingness to adopt ways of taxiing aircraft without using thrust from the main engines. The purpose of the study was to gain a better understanding of the decision-making process of airline executives with respect to investing in cost-saving technology. A qualitative research method is used, drawing on personal interviews with 24 airline executives from two major U.S. airlines, related industry journal articles, and aircraft performance data. The following three research questions are addressed. RQ1. Does the cost of jet fuel influence airline executives' decision to adopt the aircraft electric taxi system (ETS) technology? RQ2. Does the measurable payback period for a return on investment influence airline executives' decision to adopt ETS technology? RQ3. Does the amount of government assistance influence airline executives' decision to adopt ETS technology? A multi-case research study design is used with a triangulation technique. The participant perceptions indicate a need to reduce operating costs, concerns about investment risk, and support for future government-sponsored performance improvement projects. Based on the framework, findings and implications of this study, future research could focus on the positive environmental effects of the ETS application. A study could be conducted on current airport-area air quality and the effects that taxiing with aircraft main engine thrust has on the surrounding air quality.
Computer-Aided Software Engineering - An approach to real-time software development
NASA Technical Reports Server (NTRS)
Walker, Carrie K.; Turkovich, John J.
1989-01-01
A new software engineering discipline is Computer-Aided Software Engineering (CASE), a technology aimed at automating the software development process. This paper explores the development of CASE technology, particularly in the area of real-time/scientific/engineering software, and a history of CASE is given. The proposed software development environment for the Advanced Launch System (ALS CASE) is described as an example of an advanced software development system for real-time/scientific/engineering (RT/SE) software. The Automated Programming Subsystem of ALS CASE automatically generates executable code and corresponding documentation from a suitably formatted specification of the software requirements. Software requirements are interactively specified in the form of engineering block diagrams. Several demonstrations of the Automated Programming Subsystem are discussed.
NASA Astrophysics Data System (ADS)
Harris, A. T.; Ramachandran, R.; Maskey, M.
2013-12-01
The Exelis-developed IDL and ENVI software are ubiquitous tools in Earth science research environments. The IDL Workbench is used by the Earth science community for programming custom data analysis and visualization modules. ENVI is a software solution for processing and analyzing geospatial imagery that combines support for multiple Earth observation scientific data types (optical, thermal, multi-spectral, hyperspectral, SAR, LiDAR) with advanced image processing and analysis algorithms. The ENVI & IDL Services Engine (ESE) is an Earth science data processing engine that allows researchers to use open standards to rapidly create, publish and deploy advanced Earth science data analytics within any existing enterprise infrastructure. Although powerful in many ways, the tools lack collaborative features out-of-box. Thus, as part of the NASA funded project, Collaborative Workbench to Accelerate Science Algorithm Development, researchers at the University of Alabama in Huntsville and Exelis have developed plugins that allow seamless research collaboration from within IDL workbench. Such additional features within IDL workbench are possible because IDL workbench is built using the Eclipse Rich Client Platform (RCP). RCP applications allow custom plugins to be dropped in for extended functionalities. Specific functionalities of the plugins include creating complex workflows based on IDL application source code, submitting workflows to be executed by ESE in the cloud, and sharing and cloning of workflows among collaborators. All these functionalities are available to scientists without leaving their IDL workbench. Because ESE can interoperate with any middleware, scientific programmers can readily string together IDL processing tasks (or tasks written in other languages like C++, Java or Python) to create complex workflows for deployment within their current enterprise architecture (e.g. ArcGIS Server, GeoServer, Apache ODE or SciFlo from JPL). Using the collaborative IDL Workbench, coupled with ESE for execution in the cloud, asynchronous workflows could be executed in batch mode on large data in the cloud. We envision that a scientist will initially develop a scientific workflow locally on a small set of data. Once tested, the scientist will deploy the workflow to the cloud for execution. Depending on the results, the scientist may share the workflow and results, allowing them to be stored in a community catalog and instantly loaded into the IDL Workbench of other scientists. Thereupon, scientists can clone and modify or execute the workflow with different input parameters. The Collaborative Workbench will provide a platform for collaboration in the cloud, helping Earth scientists solve big-data problems in the Earth and planetary sciences.
Design of a high-speed digital processing element for parallel simulation
NASA Technical Reports Server (NTRS)
Milner, E. J.; Cwynar, D. S.
1983-01-01
A prototype of a custom designed computer to be used as a processing element in a multiprocessor based jet engine simulator is described. The purpose of the custom design was to give the computer the speed and versatility required to simulate a jet engine in real time. Real time simulations are needed for closed loop testing of digital electronic engine controls. The prototype computer has a microcycle time of 133 nanoseconds. This speed was achieved by: prefetching the next instruction while the current one is executing, transporting data using high speed data busses, and using state of the art components such as a very large scale integration (VLSI) multiplier. Included are discussions of processing element requirements, design philosophy, the architecture of the custom designed processing element, the comprehensive instruction set, the diagnostic support software, and the development status of the custom design.
National meeting to review IPAD status and goals. [Integrated Programs for Aerospace-vehicle Design]
NASA Technical Reports Server (NTRS)
Fulton, R. E.
1980-01-01
A joint NASA/industry project called Integrated Programs for Aerospace-vehicle Design (IPAD) is described, which has the goal of raising aerospace-industry productivity through the application of computers to integrate company-wide management of engineering data. Basically a general-purpose interactive computing system developed to support engineering design processes, the IPAD design is composed of three major software components: the executive, data management, and geometry and graphics software. Results of IPAD activities include a comprehensive description of a future representative aerospace vehicle design process and its interface to manufacturing, and requirements and preliminary design of a future IPAD software system to integrate engineering activities of an aerospace company having several products under simultaneous development.
Reverse engineering by design: using history to teach.
Fagette, Paul
2013-01-01
Engineering students rarely have an opportunity to delve into the historic antecedents of design in their craft, and this is especially true for biomedical devices. The teaching emphasis is always on the new, the innovative, and the future. Even so, over the last decade, I have coupled a research agenda with engineering special projects into a successful format that allows young biomedical engineering students to understand aspects of their history and learn the complexities of design. There is value in having knowledge of historic engineering achievements, not just for an appreciation of these accomplishments but also for understanding exactly how engineers and clinicians of the day executed their feats, in other words, how the design process works. Ultimately, this particular educational odyssey confirms that history and engineering education are not only compatible but mutually supportive.
Image matrix processor for fast multi-dimensional computations
Roberson, G.P.; Skeate, M.F.
1996-10-15
An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
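The source-cache/coefficient-table/target-cache pattern described for the image manipulation processor reduces, in software terms, to a sparse weighted accumulation. The following toy sketch (data, kernel, and index layout are invented for illustration) shows that flow: the local host loads the source cache and the coefficient table, the transform accumulates into the target cache, and the result is then transferred out.

```python
# One flattened 2-D data set loaded into the source cache (illustrative values).
source_cache = [1.0, 2.0, 3.0, 4.0]
target_cache = [0.0] * 4

# Coefficient table: (target index, source index, weight) triples, e.g. a small
# interpolation kernel precomputed by the local host processor.
coefficients = [
    (0, 0, 0.5), (0, 1, 0.5),
    (1, 1, 1.0),
    (2, 2, 0.75), (2, 3, 0.25),
    (3, 3, 1.0),
]

def transform(src, coeffs, tgt):
    """Apply the coefficient table: tgt[i] += w * src[j] for each (i, j, w)."""
    for tgt_idx, src_idx, weight in coeffs:
        tgt[tgt_idx] += weight * src[src_idx]
    return tgt

print(transform(source_cache, coefficients, target_cache))
```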
Spitzer Space Telescope Sequencing Operations Software, Strategies, and Lessons Learned
NASA Technical Reports Server (NTRS)
Bliss, David A.
2006-01-01
The Space Infrared Telescope Facility (SIRTF) was launched in August 2003 and renamed the Spitzer Space Telescope in 2004. Two years of observing the universe in the wavelength range from 3 to 180 microns has yielded enormous scientific discoveries. Since this magnificent observatory has a limited lifetime, maximizing science viewing efficiency (i.e., maximizing time spent executing activities directly related to science observations) was the key operational objective. The strategy employed for maximizing science viewing efficiency was to optimize spacecraft flexibility, adaptability, and use of observation time. The selected approach involved implementation of a multi-engine sequencing architecture coupled with nondeterministic spacecraft and science execution times. This approach, though effective, added much complexity to uplink operations and sequence development. The Jet Propulsion Laboratory (JPL) manages Spitzer's operations. As part of the uplink process, Spitzer's Mission Sequence Team (MST) was tasked with processing observatory inputs from the Spitzer Science Center (SSC) into efficiently integrated, constraint-checked, and modeled review and command products which accommodated the complexity of non-deterministic spacecraft and science event executions without increasing operations costs. The MST developed processes and scripts, and participated in the adaptation of multi-mission core software to enable rapid processing of complex sequences. The MST was also tasked with developing a Downlink Keyword File (DKF) which could instruct Deep Space Network (DSN) stations on how and when to configure themselves to receive Spitzer science data. As MST and uplink operations developed, important lessons were learned that should be applied to future missions, especially those missions which employ command-intensive operations via a multi-engine sequence architecture.
FWP executive summaries: basic energy sciences materials sciences and engineering program (SNL/NM).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samara, George A.; Simmons, Jerry A.
2006-07-01
This report presents an Executive Summary of the various elements of the Materials Sciences and Engineering Program which is funded by the Division of Materials Sciences and Engineering, Office of Basic Energy Sciences, U.S. Department of Energy at Sandia National Laboratories, New Mexico. A general programmatic overview is also presented.
33 CFR 385.16 - Design agreements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Design agreements. 385.16 Section... Processes § 385.16 Design agreements. (a) The Corps of Engineers shall execute a design agreement with each non-Federal sponsor for the projects of the Plan prior to initiation of design activities with that...
33 CFR 385.16 - Design agreements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 3 2011-07-01 2011-07-01 false Design agreements. 385.16 Section... Processes § 385.16 Design agreements. (a) The Corps of Engineers shall execute a design agreement with each... and the non-Federal sponsor pursuant to a design agreement shall be consistent with this part. ...
33 CFR 385.16 - Design agreements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 33 Navigation and Navigable Waters 3 2014-07-01 2014-07-01 false Design agreements. 385.16 Section... Processes § 385.16 Design agreements. (a) The Corps of Engineers shall execute a design agreement with each... and the non-Federal sponsor pursuant to a design agreement shall be consistent with this part. ...
WebLab of a DC Motor Speed Control Didactical Experiment
ERIC Educational Resources Information Center
Bauer, Karine; Mendes, Luciano
2012-01-01
Purpose: Weblabs are an additional resource in the execution of experiments in control engineering education, making the learning process more flexible both in time, by allowing extra-class laboratory activities, and in space, bringing the learning experience to remote locations where experimentation facilities would not be available. The purpose of this…
NASA Technical Reports Server (NTRS)
Orr, James K.
2010-01-01
This presentation has shown the accomplishments of the PASS project over three decades and highlighted the lessons learned. Over the entire time, our goal has been to continuously improve our process, implement automation for both quality and increased productivity, and identify and remove all defects due to prior execution of a flawed process, in addition to improving our processes following identification of significant process escapes. Morale and workforce instability have been issues, most significantly from 1993 to 1998 (a period of consolidation in the aerospace industry). The PASS project has also consulted with others, including the Software Engineering Institute, so as to be an early evaluator, adopter, and adapter of state-of-the-art software engineering innovations.
75 FR 5925 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-05
... comments, identified by Docket No. FEMA-B-1091, to Kevin C. Long, Acting Chief, Engineering Management..., Acting Chief, Engineering Management Branch, Mitigation Directorate, Federal Emergency Management Agency... federalism implications under Executive Order 13132. Executive Order 12988, Civil Justice Reform. This...
75 FR 78647 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-16
..., identified by Docket No. FEMA-B-1163, to Luis Rodriguez, Chief, Engineering Management Branch, Federal... Rodriguez, Chief, Engineering Management Branch, Federal Insurance and Mitigation Administration, Federal.... Executive Order 12988, Civil Justice Reform. This proposed rule meets the applicable standards of Executive...
75 FR 5909 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-05
..., identified by Docket No. FEMA-B-1085, to Kevin C. Long, Acting Chief, Engineering Management Branch... Chief, Engineering Management Branch, Mitigation Directorate, Federal Emergency Management Agency, 500 C... federalism implications under Executive Order 13132. Executive Order 12988, Civil Justice Reform. This...
75 FR 31373 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-03
..., identified by Docket No. FEMA-B-1105, to Kevin C. Long, Acting Chief, Engineering Management Branch... Chief, Engineering Management Branch, Mitigation Directorate, Federal Emergency Management Agency, 500 C... federalism implications under Executive Order 13132. Executive Order 12988, Civil Justice Reform. This...
75 FR 34415 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-17
..., identified by Docket No. FEMA-B-1114, to Kevin C. Long, Acting Chief, Engineering Management Branch... Chief, Engineering Management Branch, Mitigation Directorate, Federal Emergency Management Agency, 500 C... federalism implications under Executive Order 13132. Executive Order 12988, Civil Justice Reform. This...
2009-03-01
Master of Science in Systems Engineering thesis, Naval Postgraduate School, March 2009. Author: Kiah Bernard Rahming. Thesis Advisor: Professor Gary O. Langford; Second Reader: Dr. Paul V. Shebalin; Chairman, Department of Systems Engineering: Dr. David H. Olwell. Subject terms: systems engineering process, risk management, operational availability, modernization capability. 137 pages.
Foundations to the unified psycho-cognitive engine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard, Michael Lewis; Bier, Asmeret Brooke; Backus, George A.
This document outlines the key features of the SNL psychological engine. The engine is designed to be a generic presentation of cognitive entities interacting among themselves and with the external world. The engine combines the most accepted theories of behavioral psychology with those of behavioral economics to produce a unified simulation of human response from stimuli through executed behavior. The engine explicitly recognizes emotive and reasoned contributions to behavior and simulates the dynamics associated with cue processing, learning, and choice selection. Most importantly, the model parameterization can come from available media or survey information, as well as subject-matter-expert information. The framework design allows the use of uncertainty quantification and sensitivity analysis to manage confidence in using the analysis results for intervention decisions.
Systems Engineering Simulator (SES) Simulator Planning Guide
NASA Technical Reports Server (NTRS)
McFarlane, Michael
2011-01-01
The simulation process, milestones and inputs are unknowns to first-time users of the SES. The Simulator Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their engineering personnel in simulation planning and execution. Material covered includes a roadmap of the simulation process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, facility interfaces, and inputs necessary to define scope, cost, and schedule are included as an appendix to the guide.
A planning and scheduling lexicon
NASA Technical Reports Server (NTRS)
Cruz, Jennifer W.; Eggemeyer, William C.
1989-01-01
A lexicon related to mission planning and scheduling for spacecraft is presented. Planning and scheduling work is known as sequencing. Sequencing is a multistage process of merging requests from both the science and engineering arenas to accomplish the objectives defined in the requests. The multistage process begins with the creation of science and engineering goals, continues through their integration into the sequence, and eventually concludes with command execution onboard the spacecraft. The objective of this publication is to introduce some formalism into the field of spacecraft sequencing-system technology. This formalism will make it possible for researchers and potential customers to communicate about system requirements and capabilities in a common language.
Requirements Analysis for Large Ada Programs: Lessons Learned on CCPDS-R
1989-12-01
... when the design had matured and the SRS role was to be the tester's contract ... This approach was not optimal from the formal testing ... on the software development process is the necessity to include sufficient testing ... CPU processing load. These constraints primarily affect algorithm ... allocations and timing requirements are by-products of the software design process when multiple CSCIs are executed within ...
NASA Astrophysics Data System (ADS)
Arevalo, S.; Atwood, C.; Bell, P.; Blacker, T. D.; Dey, S.; Fisher, D.; Fisher, D. A.; Genalis, P.; Gorski, J.; Harris, A.; Hill, K.; Hurwitz, M.; Kendall, R. P.; Meakin, R. L.; Morton, S.; Moyer, E. T.; Post, D. E.; Strawn, R.; Veldhuizen, D. v.; Votta, L. G.; Wynn, S.; Zelinski, G.
2008-07-01
In FY2008, the U.S. Department of Defense (DoD) initiated the Computational Research and Engineering Acquisition Tools and Environments (CREATE) program, a $360M program with a two-year planning phase and a ten-year execution phase. CREATE will develop and deploy three computational engineering tool sets for DoD acquisition programs to use to design aircraft, ships and radio-frequency antennas. The planning and execution of CREATE are based on the 'lessons learned' from case studies of large-scale computational science and engineering projects. The case studies stress the importance of a stable, close-knit development team; a focus on customer needs and requirements; verification and validation; flexible and agile planning, management, and development processes; risk management; realistic schedules and resource levels; balanced short- and long-term goals and deliverables; and stable, long-term support by the program sponsor. Since it began in FY2008, the CREATE program has built a team and project structure, developed requirements and begun validating them, identified candidate products, established initial connections with the acquisition programs, begun detailed project planning and development, and generated the initial collaboration infrastructure necessary for success by its multi-institutional, multidisciplinary teams.
The research and practice of spacecraft software engineering
NASA Astrophysics Data System (ADS)
Chen, Chengxin; Wang, Jinghua; Xu, Xiaoguang
2017-06-01
Ensuring the safety and reliability of spacecraft software products requires disciplined engineering management. The paper first reviews problems in domestic and foreign spacecraft software engineering management: unsystematic planning, unclear classification-based management, and the lack of a continuous improvement mechanism. It then proposes a system-integrated approach to software engineering management from the perspective of the overall spacecraft system. Finally, an application to a spacecraft program is given as an example. The research provides a reference for carrying out spacecraft software engineering management and improving software product quality.
Job Shadowing Introduces the Realities of Manufacturing
ERIC Educational Resources Information Center
Frawley, Thomas A.
2009-01-01
Engineers and skilled tradesmen stood side by side with executives and politicians as Liverpool High School technology teacher Dan Drogo welcomed parents to a one-of-a-kind graduation ceremony at New Process Gear in Syracuse, New York. The manufacturing shadow program had immersed 25 high school students in an intensive five-week experience inside…
An Overview of the Runtime Verification Tool Java PathExplorer
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)
2002-01-01
We present an overview of the Java PathExplorer runtime verification tool, in short referred to as JPAX. JPAX can monitor the execution of a Java program and check that it conforms with a set of user provided properties formulated in temporal logic. JPAX can in addition analyze the program for concurrency errors such as deadlocks and data races. The concurrency analysis requires no user provided specification. The tool facilitates automated instrumentation of a program's bytecode, which when executed will emit an event stream, the execution trace, to an observer. The observer dispatches the incoming event stream to a set of observer processes, each performing a specialized analysis, such as the temporal logic verification, the deadlock analysis and the data race analysis. Temporal logic specifications can be formulated by the user in the Maude rewriting logic, where Maude is a high-speed rewriting system for equational logic, but here extended with executable temporal logic. The Maude rewriting engine is then activated as an event driven monitoring process. Alternatively, temporal specifications can be translated into efficient automata, which check the event stream. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety critical systems.
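A concurrency observer of the kind described, one of the analyses JPAX dispatches the event stream to, can be pictured as a small state machine consuming (thread, action, lock) events. The sketch below is not JPAX itself; the event encoding and the lock-discipline property it checks are assumptions chosen to mirror the flavor of such an analysis.

```python
def monitor(events):
    """Check 'a thread never releases a lock it does not hold' over an event trace.

    Each event is (thread, action, lock) with action in {'acquire', 'release'}.
    Violations are collected rather than raised, so the full trace is surveyed.
    """
    held = {}          # lock -> thread currently holding it
    violations = []
    for step, (thread, action, lock) in enumerate(events):
        if action == "acquire":
            if lock in held:
                violations.append((step, f"{thread} acquires {lock} held by {held[lock]}"))
            else:
                held[lock] = thread
        elif action == "release":
            if held.get(lock) != thread:
                violations.append((step, f"{thread} releases {lock} it does not hold"))
            else:
                del held[lock]
    return violations

trace = [("T1", "acquire", "L"), ("T2", "release", "L"), ("T1", "release", "L")]
for step, message in monitor(trace):
    print(f"violation at event {step}: {message}")
```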
Executable Architecture Research at Old Dominion University
NASA Technical Reports Server (NTRS)
Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.
2011-01-01
Executable architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. Closing this gap by identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents research and results of both doctoral research efforts and puts them into the common context of state-of-the-art systems engineering methods supporting greater agility.
Model-Unified Planning and Execution for Distributed Autonomous System Control
NASA Technical Reports Server (NTRS)
Aschwanden, Pascal; Baskaran, Vijay; Bernardini, Sara; Fry, Chuck; Moreno, Maria; Muscettola, Nicola; Plaunt, Chris; Rijsman, David; Tompkins, Paul
2006-01-01
The Intelligent Distributed Execution Architecture (IDEA) is a real-time architecture that exploits artificial intelligence planning as the core reasoning engine for interacting autonomous agents. Rather than enforcing separate deliberation and execution layers, IDEA unifies them under a single planning technology. Deliberative and reactive planners reason about and act according to a single representation of the past, present and future domain state. The domain state evolves according to the rules dictated by a declarative model of the subsystem to be controlled, the internal processes of the IDEA controller, and interactions with other agents. We present IDEA concepts - modeling, the IDEA core architecture, the unification of deliberation and reaction under planning - and illustrate its use in a simple example. Finally, we present several real-world applications of IDEA, and compare IDEA to other high-level control approaches.
Architectural design of heterogeneous metallic nanocrystals--principles and processes.
Yu, Yue; Zhang, Qingbo; Yao, Qiaofeng; Xie, Jianping; Lee, Jim Yang
2014-12-16
CONSPECTUS: Heterogeneous metal nanocrystals (HMNCs) are a natural extension of simple metal nanocrystals (NCs), but as a research topic, they have been much less explored until recently. HMNCs are formed by integrating metal NCs of different compositions into a common entity, similar to the way atoms are bonded to form molecules. HMNCs can be built to exhibit an unprecedented architectural diversity and complexity by programming the arrangement of the NC building blocks ("unit NCs"). The architectural engineering of HMNCs involves the design and fabrication of the architecture-determining elements (ADEs), i.e., unit NCs with precise control of shape and size, and their relative positions in the design. Similar to molecular engineering, where structural diversity is used to create more property variations for application explorations, the architectural engineering of HMNCs can similarly increase the utility of metal NCs by offering a suite of properties to support multifunctionality in applications. The architectural engineering of HMNCs calls for processes and operations that can execute the design. Some enabling technologies already exist in the form of classical micro- and macroscale fabrication techniques, such as masking and etching. These processes, when used singly or in combination, are fully capable of fabricating nanoscopic objects. What is needed is a detailed understanding of the engineering control of ADEs and the translation of these principles into actual processes. For simplicity of execution, these processes should be integrated into a common reaction system and yet retain independence of control. The key to architectural diversity is therefore the independent controllability of each ADE in the design blueprint. The right chemical tools must be applied under the right circumstances in order to achieve the desired outcome. In this Account, after a short illustration of the infinite possibility of combining different ADEs to create HMNC design variations, we introduce the fabrication processes for each ADE, which enable shape, size, and location control of the unit NCs in a particular HMNC design. The principles of these processes are discussed and illustrated with examples. We then discuss how these processes may be integrated into a common reaction system while retaining the independence of individual processes. The principles for the independent control of each ADE are discussed in detail to lay the foundation for the selection of the chemical reaction system and its operating space.
Defining and reconstructing clinical processes based on IHE and BPMN 2.0.
Strasser, Melanie; Pfeifer, Franz; Helm, Emmanuel; Schuler, Andreas; Altmann, Josef
2011-01-01
This paper describes the current status and the results of our process management system for defining and reconstructing clinical care processes, which makes it possible to compare, analyze and evaluate clinical processes and, further, to identify high-cost tasks or stays. The system is founded on IHE, which guarantees standardized interfaces and interoperability between clinical information systems. At the heart of the system is BPMN, a modeling notation and specification language which allows the definition and execution of clinical processes. The system provides functionality to define healthcare-information-system-independent clinical core processes and to execute the processes in a workflow engine. Furthermore, the reconstruction of clinical processes is done by evaluating an IHE audit log database, which records patient movements within a health care facility. The main goal of the system is to assist hospital operators and clinical process managers in detecting discrepancies between defined and actual clinical processes and in identifying the main causes of high medical costs. Beyond that, the system can potentially contribute to reconstructing and improving clinical processes and enhancing cost control and patient care quality.
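The comparison of defined versus actual processes can be sketched in two small steps: reconstruct each patient's actual path from timestamped audit records, then diff it against the modeled core process. The defined task list, audit-record fields, and discrepancy rules below are illustrative assumptions, not the system's BPMN models or IHE audit schema.

```python
from datetime import datetime

DEFINED = ["admission", "triage", "imaging", "treatment", "discharge"]   # modeled core process

AUDIT_LOG = [  # illustrative audit records: which task was recorded, when, for which patient
    {"patient": "P1", "task": "admission", "time": "2011-03-01T08:00"},
    {"patient": "P1", "task": "imaging",   "time": "2011-03-01T08:40"},
    {"patient": "P1", "task": "triage",    "time": "2011-03-01T09:10"},
    {"patient": "P1", "task": "discharge", "time": "2011-03-01T12:00"},
]

def reconstruct(log, patient):
    """Actual path: the patient's recorded tasks ordered by timestamp."""
    events = sorted((e for e in log if e["patient"] == patient),
                    key=lambda e: datetime.fromisoformat(e["time"]))
    return [e["task"] for e in events]

def discrepancies(defined, actual):
    """Report skipped tasks and tasks executed out of the defined order."""
    skipped = [t for t in defined if t not in actual]
    ranks = {t: i for i, t in enumerate(defined)}
    known = [t for t in actual if t in ranks]
    out_of_order = [t for prev, t in zip(known, known[1:]) if ranks[t] < ranks[prev]]
    return {"skipped": skipped, "out_of_order": out_of_order}

actual = reconstruct(AUDIT_LOG, "P1")
print(actual)
print(discrepancies(DEFINED, actual))
```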
Code of Federal Regulations, 2010 CFR
2010-07-01
... CIVIL AIRCRAFT § 766.8 Procedure for review, approval, execution and distribution of aviation facility... license and Certificate of Insurance to the Commander, Naval Facilities Engineering Command or his... Facilities Engineering Command or his designated representative. (1) Upon receipt, the Commander, Naval...
77 FR 4678 - Nonconformance Penalties for On-Highway Heavy Heavy-Duty Diesel Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-31
.... Executive Order 13211 (Energy Effects) I. National Technology Transfer Advancement Act J. Executive Order... Amendments of 1977 as a response to a concern with requiring technology-forcing emissions standards for heavy-duty engines. The concern was if strict technology-forcing standards were promulgated, then some...
Index Relativity and Patron Search Strategy.
ERIC Educational Resources Information Center
Allison, DeeAnn; Childers, Scott
2002-01-01
Describes a study at the University of Nebraska-Lincoln that compared searches in two different keyword indexes with similar content where search results were dependent on search strategy quality, search engine execution, and content. Results showed search engine execution had an impact on the number of matches and that users ignored search help…
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
The Work Coordination Engine (WCE) is a Java application integrated into the Service Management Database (SMDB), which coordinates the dispatching and monitoring of a work order system. WCE de-queues work orders from SMDB and orchestrates the dispatching of work to a registered set of software worker applications distributed over a set of local, or remote, heterogeneous computing systems. WCE monitors the execution of work orders once dispatched, and accepts the results of the work order by storing to the SMDB persistent store. The software leverages the use of a relational database, Java Messaging System (JMS), and Web Services using Simple Object Access Protocol (SOAP) technologies to implement an efficient work-order dispatching mechanism capable of coordinating the work of multiple computer servers on various platforms working concurrently on different, or similar, types of data or algorithmic processing. Existing (legacy) applications can be wrapped with a proxy object so that no changes to the application are needed to make them available for integration into the work order system as "workers." WCE automatically reschedules work orders that fail to be executed by one server to a different server if available. From initiation to completion, the system manages the execution state of work orders and workers via a well-defined set of events, states, and actions. It allows for configurable work-order execution timeouts by work-order type. This innovation eliminates a current processing bottleneck by providing a highly scalable, distributed work-order system used to quickly generate products needed by the Deep Space Network (DSN) to support space flight operations. WCE is driven by asynchronous messages delivered via JMS indicating the availability of new work or workers. It runs completely unattended in support of the lights-out operations concept in the DSN.
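Stripped of SMDB, JMS, and SOAP, the dispatch-and-reschedule behavior described above amounts to a loop that hands each work order to an available worker, enforces a per-type time limit, and re-queues the order for a different worker on failure. The sketch below is a single-process illustration of that control flow only; the work-order types, timeout values, and in-memory queue are assumptions for the example.

```python
import queue, time
from dataclasses import dataclass, field

TIMEOUTS = {"image_product": 30.0, "telemetry_summary": 5.0}   # seconds, per work-order type

@dataclass
class WorkOrder:
    order_id: int
    order_type: str
    payload: dict
    attempted: set = field(default_factory=set)   # workers that already failed this order

def dispatch(orders: "queue.Queue[WorkOrder]", workers: dict):
    """Drain the queue, preferring a worker that has not yet failed the order."""
    while not orders.empty():
        order = orders.get()
        candidates = [w for w in workers if w not in order.attempted]
        if not candidates:
            print(f"order {order.order_id} failed on every worker")
            continue
        worker = candidates[0]
        started = time.monotonic()
        try:
            result = workers[worker](order.payload)          # synchronous proxy call to the worker
            elapsed = time.monotonic() - started
            if elapsed > TIMEOUTS.get(order.order_type, 60.0):
                raise TimeoutError(f"{elapsed:.1f}s exceeds limit")
            print(f"order {order.order_id} completed by {worker}: {result}")
        except Exception as exc:                             # failure or elapsed-time violation: reschedule
            order.attempted.add(worker)
            orders.put(order)
            print(f"order {order.order_id} rescheduled after {worker} failed: {exc}")

q = queue.Queue()
q.put(WorkOrder(1, "telemetry_summary", {"pass": "DSS-14"}))
dispatch(q, {"worker_a": lambda p: f"summary of {p['pass']}"})
```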
40 CFR Appendix C-1 to Subpart E... - Required Provisions-Consulting Engineering Agreements
Code of Federal Regulations, 2010 CFR
2010-07-01
.... Changes 5. Termination 6. Remedies 7. Payment 8. Project Design 9. Audit; Access to Records 10. Price... production techniques, methods, and processes, consistent with 40 CFR 35.936-3 and 35.936-13 in effect on the date of execution of this agreement, except to the extent to which innovative technology may be used...
Program For Simulation Of Trajectories And Events
NASA Technical Reports Server (NTRS)
Gottlieb, Robert G.
1992-01-01
The Universal Simulation Executive (USE) program accelerates and eases generation of application programs for numerical simulation of continuous trajectories interrupted by or containing discrete events. It was developed for simulation of multiple spacecraft trajectories, with such events as one spacecraft crossing the equator, two spacecraft meeting or parting, or a rocket engine firing. USE also simulates operation of a chemical batch-processing factory. Written in Ada.
Crystal Growth and Other Materials Physical Researches in Space Environment
NASA Astrophysics Data System (ADS)
Pan, Mingxiang
Materials science research in the space environment is based on reducing the effects of buoyancy-driven transport, the effects of atomic oxygen, radiation, extremes of heat and cold, and the ultrahigh vacuum, so as to unveil underlying fundamental phenomena, potentially lead to new materials or new industrial processes, and develop space techniques. Currently, a research program on materials sciences in Chinese Manned Space Engineering (CMSE) is under way. More than ten projects related to crystal growth and materials processing have been selected as candidates to be executed on the Shenzhou spacecraft, the Tiangong space laboratory and the Chinese Space Station. In this talk, we will present some examples of the projects, which are being prepared for execution in upcoming flight missions. They span both basic and applied research, from discovery to technology.
System verification and validation: a fundamental systems engineering task
NASA Astrophysics Data System (ADS)
Ansorge, Wolfgang R.
2004-09-01
Systems Engineering (SE) is the discipline in a project management team which transfers the user's operational needs and justifications for an Extremely Large Telescope (ELT) --or any other telescope-- into a set of validated required system performance characteristics. SE subsequently transfers these validated required system performance characteristics into a validated system configuration, and eventually into the assembled, integrated telescope system with verified performance characteristics, providing "objective evidence that the particular requirements for the specified intended use are fulfilled". The latter is the ISO Standard 8402 definition of "Validation". This presentation describes the verification and validation processes of an ELT project and outlines the key role Systems Engineering plays in these processes throughout all project phases. If these processes are implemented correctly in the project execution, are started at the proper time, namely at the very beginning of the project, and if all capabilities of experienced systems engineers are used, the project costs and the life-cycle costs of the telescope system can be reduced by 25 to 50%. The intention of this article is to motivate and encourage project managers of astronomical telescopes and scientific instruments to involve the entire spectrum of Systems Engineering capabilities, performed by trained and experienced SYSTEM engineers, for the benefit of the project, by explaining the importance of Systems Engineering in the AIV and validation processes.
Cassini Orbit Trim Maneuvers at Saturn - Overview of Attitude Control Flight Operations
NASA Technical Reports Server (NTRS)
Burk, Thomas A.
2011-01-01
The Cassini spacecraft has been in orbit around Saturn since July 1, 2004. To remain on the planned trajectory, which maximizes science data return, Cassini must perform orbit trim maneuvers using either its main engine or its reaction control system thrusters. Over 200 maneuvers have been executed on the spacecraft since arrival at Saturn. To improve performance and maintain spacecraft health, changes have been made in maneuver design command placement, in accelerometer scale factor, and in the pre-aim vector used to align the engine gimbal actuator prior to main engine burn ignition. These and other changes have significantly reduced maneuver execution errors since 2004. A strategy has been developed to decide whether a main engine maneuver should be performed, or whether the maneuver can be executed using the reaction control system.
Master of Puppets: Cooperative Multitasking for In Situ Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Lukic, Zarija
2016-01-01
Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. Here, we present a novel design for running multiple codes in situ: using coroutines and position-independent executables we enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. We present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. This design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The techniques we present can also be integrated into other in situ frameworks.
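The cooperative-multitasking idea at the heart of this design can be illustrated with generators standing in for coroutines: the simulation yields its in-memory state at the end of each time step, the analysis runs on that state, and control returns to the simulation without any data leaving memory. This is a conceptual sketch only; Henson itself couples compiled, position-independent executables rather than Python functions.

```python
def simulation(steps: int):
    """Toy time-stepping code; yields its live state to whoever drives it."""
    state = {"step": 0, "field": [0.0] * 8}
    for step in range(steps):
        state["step"] = step
        state["field"] = [x + step for x in state["field"]]   # stand-in for the physics update
        yield state                                            # hand control to the analysis

def analysis(state: dict):
    """In situ post-processing on the still-in-memory simulation state."""
    mean = sum(state["field"]) / len(state["field"])
    print(f"step {state['step']}: mean field value = {mean:.2f}")

def driver():
    # Cooperative multitasking: simulation and analysis take turns on the same data.
    for state in simulation(steps=3):
        analysis(state)

driver()
```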
Shabo, Amnon; Peleg, Mor; Parimbelli, Enea; Quaglini, Silvana; Napolitano, Carlo
2016-12-07
Implementing a decision-support system within a healthcare organization requires integration of clinical domain knowledge with resource constraints. Computer-interpretable guidelines (CIGs) are excellent instruments for addressing clinical aspects, while business process management (BPM) languages and workflow (Wf) engines manage the logistic organizational constraints. Our objective is the orchestration of all the relevant factors needed for successful execution of a patient's care pathways, especially when spanning the continuum of care, from acute to community or home care. We considered three strategies for integrating CIGs with organizational workflows: extending the CIG or BPM languages and their engines, or creating an interplay between them. We used the interplay approach to implement a set of use cases arising from a CIG implementation in the domain of Atrial Fibrillation. To provide a more scalable and standards-based solution, we explored the use of the Cross-Enterprise Document Workflow Integration Profile. We describe our proof-of-concept implementation of five use cases. We utilized the Personal Health Record (PHR) of the MobiGuide project to implement a loosely coupled approach between the Activiti BPM engine and the Picard CIG engine. Changes in the PHR were detected by polling. IHE profiles were used to develop workflow documents that orchestrate cross-enterprise execution of cardioversion. The interplay between CIG and BPM engines can support orchestration of care flows within organizational settings.
The standard-based open workflow system in GeoBrain (Invited)
NASA Astrophysics Data System (ADS)
Di, L.; Yu, G.; Zhao, P.; Deng, M.
2013-12-01
GeoBrain is an Earth science Web-service system developed and operated by the Center for Spatial Information Science and Systems, George Mason University. In GeoBrain, a standard-based open workflow system has been implemented to accommodate the automated processing of geospatial data through a set of complex geo-processing functions for advanced product generation. GeoBrain models the complex geoprocessing at two levels, the conceptual and the concrete. At the conceptual level, the workflows exist in the form of data and service types defined by ontologies. The workflows at the conceptual level are called geo-processing models and are cataloged in GeoBrain as virtual product types. A conceptual workflow is instantiated into a concrete, executable workflow when a user requests a product that matches a virtual product type. Both conceptual and concrete workflows are encoded in the Business Process Execution Language (BPEL). A BPEL workflow engine, called BPELPower, has been implemented to execute the workflows for product generation. A provenance capturing service has been implemented to generate ISO 19115-compliant complete product provenance metadata before and after the workflow execution. The generation of provenance metadata before the workflow execution allows users to examine the usability of the final product before the lengthy and expensive execution takes place. The three modes of workflow execution defined in ISO 19119 - transparent, translucent, and opaque - are available in GeoBrain. A geoprocessing modeling portal has been developed to allow domain experts to develop geoprocessing models at the type level with the support of both data and service/processing ontologies. The geoprocessing models capture the knowledge of the domain experts and become the operational offerings for the products after a proper peer review of the models is conducted. Automated workflow composition based on ontologies and artificial intelligence technology has been demonstrated successfully. The GeoBrain workflow system has been used in multiple Earth science applications, including the monitoring of global agricultural drought, the assessment of flood damage, the derivation of national crop condition and progress information, and the detection of nuclear proliferation facilities and events.
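The two-level workflow idea, a conceptual workflow over service types that is instantiated into a concrete, executable workflow when a virtual product is requested, can be sketched as a simple binding step. The product type, service types, and endpoints below are invented for illustration; the operational system encodes both levels in BPEL and executes them on BPELPower.

```python
# Conceptual workflow: ordered service *types* that produce a virtual product type.
CONCEPTUAL = {
    "drought_index": ["WCS_subset", "NDVI_compute", "anomaly_classify"],
}

# Service registry: concrete endpoints offering each service type (illustrative URLs).
REGISTRY = {
    "WCS_subset":       ["https://example.org/wcs"],
    "NDVI_compute":     ["https://example.org/wps/ndvi"],
    "anomaly_classify": ["https://example.org/wps/anomaly"],
}

def instantiate(product_type: str) -> list[dict]:
    """Bind each abstract step of the conceptual workflow to a concrete service."""
    steps = []
    for service_type in CONCEPTUAL[product_type]:
        endpoints = REGISTRY.get(service_type)
        if not endpoints:
            raise LookupError(f"no concrete service registered for {service_type}")
        steps.append({"type": service_type, "endpoint": endpoints[0]})
    return steps

# The bound plan can be inspected (e.g. for provenance) before any expensive
# processing is launched.
for step in instantiate("drought_index"):
    print(f"{step['type']:16s} -> {step['endpoint']}")
```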
MATTS - A Step Towards Model Based Testing
NASA Astrophysics Data System (ADS)
Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.
2016-08-01
In this paper we describe a model-based approach to testing of on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve the problems addressed above, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can be partially automated. The basic idea behind the presented study was to start from a formal model (e.g. state machines), generate abstract test cases, and then convert them to concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
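The generation path sketched above, from a formal state-machine model to abstract test cases and then to concrete input/expected-output pairs, can be illustrated with a transition-coverage walk. The mode-management model below is an invented example, not the satellite on-board software under study.

```python
from collections import deque

# Illustrative mode-management state machine: {state: {input: next_state}}.
MODEL = {
    "STANDBY": {"arm": "ARMED"},
    "ARMED":   {"fire": "ACTIVE", "disarm": "STANDBY"},
    "ACTIVE":  {"shutdown": "STANDBY"},
}

def abstract_tests(model: dict, initial: str) -> list[list[str]]:
    """Breadth-first paths from the initial state until every transition is covered."""
    uncovered = {(s, i) for s, trans in model.items() for i in trans}
    tests, frontier = [], deque([(initial, [])])
    while uncovered and frontier:
        state, path = frontier.popleft()
        for inp, nxt in model[state].items():
            new_path = path + [inp]
            if (state, inp) in uncovered:
                uncovered.discard((state, inp))
                tests.append(new_path)
            if len(new_path) < 6:                     # bound the search depth
                frontier.append((nxt, new_path))
    return tests

def concretize(model: dict, initial: str, inputs: list[str]) -> list[tuple[str, str]]:
    """Turn an abstract input sequence into (input, expected resulting state) pairs."""
    state, pairs = initial, []
    for inp in inputs:
        state = model[state][inp]
        pairs.append((inp, state))
    return pairs

for test in abstract_tests(MODEL, "STANDBY"):
    print(concretize(MODEL, "STANDBY", test))
```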
Collaborative Early Systems Engineering: Strategic Information Management Review
2010-09-02
Table of contents: Executive Summary; Center for Systems Engineering (CSE); Collaborative Early Systems Engineering; Development Planning.
A distributed query execution engine of big attributed graphs.
Batarfi, Omar; Elshawi, Radwa; Fayoumi, Ayman; Barnawi, Ahmed; Sakr, Sherif
2016-01-01
A graph is a popular data model that has become pervasively used for modeling structural relationships between objects. In practice, in many real-world graphs, the graph vertices and edges need to be associated with descriptive attributes. Such graphs are referred to as attributed graphs. G-SPARQL has been proposed as an expressive language, with a centralized execution engine, for querying attributed graphs. G-SPARQL supports various types of graph querying operations including reachability, pattern matching and shortest path, where any G-SPARQL query may include value-based predicates on the descriptive information (attributes) of the graph edges/vertices in addition to the structural predicates. In general, a main limitation of centralized systems is that their vertical scalability is always restricted by the physical limits of computer systems. This article describes the design, implementation and performance evaluation of DG-SPARQL, a distributed, hybrid and adaptive parallel execution engine for G-SPARQL queries. In this engine, the topology of the graph is distributed over the main memory of the underlying nodes while the graph data are maintained in a relational store which is replicated on the disk of each of the underlying nodes. DG-SPARQL evaluates parts of the query plan via SQL queries which are pushed to the underlying relational stores, while other parts of the query plan, as necessary, are evaluated via indexless memory-based graph traversal algorithms. Our experimental evaluation shows the efficiency and the scalability of DG-SPARQL on querying massive attributed graph datasets, in addition to its ability to outperform Apache Giraph, a popular distributed graph processing system, by orders of magnitude.
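The hybrid evaluation strategy, value-based predicates pushed to the relational store while structural predicates run as in-memory traversals over the topology, can be shown compactly: filter candidate vertices with SQL, then check reachability over an adjacency list. The schema, data, and query below are assumptions for illustration, not the DG-SPARQL implementation.

```python
import sqlite3
from collections import deque

# Relational side: vertex attributes live in a relational store (in-memory DB for the example).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE person(id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
db.executemany("INSERT INTO person VALUES (?, ?, ?)",
               [(1, "alice", 34), (2, "bob", 52), (3, "carol", 45)])

# Graph side: topology kept in main memory as an adjacency list.
EDGES = {1: [2], 2: [3], 3: []}

def reachable(source: int, target: int) -> bool:
    """Breadth-first reachability over the in-memory topology."""
    seen, frontier = {source}, deque([source])
    while frontier:
        v = frontier.popleft()
        if v == target:
            return True
        for w in EDGES.get(v, []):
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return False

# Query: people older than 40 that vertex 1 can reach.
# Value-based predicate -> pushed to SQL; structural predicate -> in-memory traversal.
candidates = db.execute("SELECT id, name FROM person WHERE age > ?", (40,)).fetchall()
print([name for vid, name in candidates if reachable(1, vid)])
```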
Systems engineering and integration and management for manned space flight programs
NASA Technical Reports Server (NTRS)
Morris, Owen
1993-01-01
This paper discusses the history of SE&I management of the overall program architecture, organizational structure and the relationship of SE&I to other program organizational elements. A brief discussion of the method of executing the SE&I process, a summary of some of the major lessons learned, and identification of things that have proven successful are included.
Separating essentials from incidentals: an execution architecture for real-time control systems
NASA Technical Reports Server (NTRS)
Dvorak, Daniel; Reinholtz, Kirk
2004-01-01
This paper describes an execution architecture that makes real-time control systems far more analyzable and verifiable through aggressive separation of concerns. The architecture separates two key software concerns: transformations of global state, as defined in pure functions; and sequencing/timing of transformations, as performed by an engine that enforces four prime invariants. The important advantage of this architecture, besides facilitating verification, is that it encourages formal specification of systems in a vocabulary that brings systems engineering closer to software engineering.
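A compact sketch of the separation, not the paper's actual engine: pure functions transform an immutable global state, and a small engine owns sequencing and invariant checking. The invariant shown is a placeholder, not one of the four prime invariants.

```python
# Pure state transformations are sequenced by an engine that checks invariants.
from types import MappingProxyType

def raise_setpoint(state):        # pure transformation: no I/O, no hidden state
    return {**state, "setpoint": state["setpoint"] + 1}

def apply_control(state):         # pure transformation
    error = state["setpoint"] - state["measured"]
    return {**state, "command": 0.5 * error}

class Engine:
    def __init__(self, state, invariants):
        self.state = MappingProxyType(dict(state))   # read-only view for transformations
        self.invariants = invariants

    def step(self, transform):
        new_state = transform(self.state)
        for inv in self.invariants:                  # invariants enforced by the engine only
            assert inv(new_state), f"invariant violated by {transform.__name__}"
        self.state = MappingProxyType(new_state)
        return self.state

if __name__ == "__main__":
    engine = Engine({"setpoint": 10, "measured": 8, "command": 0.0},
                    invariants=[lambda s: abs(s["command"]) <= 5.0])
    for t in (raise_setpoint, apply_control):        # sequencing lives here, not in the transforms
        print(dict(engine.step(t)))
```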
Engineering scalable biological systems
2010-01-01
Synthetic biology is focused on engineering biological organisms to study natural systems and to provide new solutions for pressing medical, industrial and environmental problems. At the core of engineered organisms are synthetic biological circuits that execute the tasks of sensing inputs, processing logic and performing output functions. In the last decade, significant progress has been made in developing basic designs for a wide range of biological circuits in bacteria, yeast and mammalian systems. However, significant challenges in the construction, probing, modulation and debugging of synthetic biological systems must be addressed in order to achieve scalable higher-complexity biological circuits. Furthermore, concomitant efforts to evaluate the safety and biocontainment of engineered organisms and address public and regulatory concerns will be necessary to ensure that technological advances are translated into real-world solutions. PMID:21468204
The Preparation for and Execution of Engineering Operations for the Mars Curiosity Rover Mission
NASA Technical Reports Server (NTRS)
Samuels, Jessica A.
2013-01-01
The Mars Science Laboratory Curiosity rover mission is the most complex and scientifically packed rover mission that has ever been operated on the surface of Mars. The preparation leading up to the surface mission involved various tests, contingency planning and integration of plans between various teams and scientists for determining how operation of the spacecraft (s/c) would be facilitated. In addition, a focused initial set of health checks needed to be defined and created in order to ensure successful operation of rover subsystems before embarking on a two-year science journey. This paper will define the role and responsibilities of the Engineering Operations team, the process involved in preparing the team for rover surface operations, the predefined engineering activities performed during the early portion of the mission, and the evaluation process used for initial and day-to-day spacecraft operational assessment.
78 FR 9005 - Airworthiness Directives; Dowty Propellers Propellers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-07
... Service information is available at the FAA, Engine & Propeller Directorate, 12 New England Executive Park, Burlington, MA. Contact: Aerospace Engineer, Boston Aircraft Certification Office, FAA, Engine and Propeller Directorate, 12 New England Executive Park ...
Opportunities for Launch Site Integrated System Health Engineering and Management
NASA Technical Reports Server (NTRS)
Waterman, Robert D.; Langwost, Patricia E.; Waterman, Susan J.
2005-01-01
The launch site processing flow involves operations such as functional verification, preflight servicing and launch. These operations often include hazards that must be controlled to protect human life and critical space hardware assets. Existing command and control capabilities are limited to simple limit checking during automated monitoring. Contingency actions are highly dependent on human recognition, decision making, and execution. Many opportunities for Integrated System Health Engineering and Management (ISHEM) exist throughout the processing flow. This paper will present the current human-centered approach to health management as performed today for the shuttle and space station programs. In addition, it will address some of the more critical ISHEM needs, and provide recommendations for future implementation of ISHEM at the launch site.
1989-01-01
Report fragment (dredged material treatment study): to convert pounds (force) per square inch to kilopascals, multiply by 6.894757. Flue-gas desulfurization. UCS measurements for the solidified sediment process. Section titles include Dredging Control Technologies, Evaluation of Conceptual Dredging and Disposal Alternatives, and Executive Summary. ... solubility of metals by controlling the pH and alkalinity. Additional metal immobilization can be obtained by modifying the process to include ...
Towards a Brokering Framework for Business Process Execution
NASA Astrophysics Data System (ADS)
Santoro, Mattia; Bigagli, Lorenzo; Roncella, Roberto; Mazzetti, Paolo; Nativi, Stefano
2013-04-01
Advancing our knowledge of environmental phenomena and their interconnections requires an intensive use of environmental models. Due to the complexity of the Earth system, the representation of complex environmental processes often requires the use of more than one model (often from different disciplines). The Group on Earth Observation (GEO) launched the Model Web initiative to increase present accessibility and interoperability of environmental models, allowing their flexible composition into complex Business Processes (BPs). A few basic principles are at the base of the Model Web concept (Nativi et al.): (i) Open access, (ii) Minimal entry barriers, (iii) Service-driven approach, and (iv) Scalability. This work proposes an architectural solution, based on the Brokering approach for multidisciplinary interoperability, aiming to contribute to the Model Web vision. The Brokering approach is currently adopted in the new GEOSS Common Infrastructure (GCI), as was presented at the last GEO Plenary meeting in Istanbul, November 2011. We designed and prototyped a component called BP Broker. The high-level functionalities provided by the BP Broker are: • Discover the needed model implementations in an open, distributed and heterogeneous environment; • Check I/O consistency of BPs and provide suggestions for mismatch resolution; • Publish the eBP as a standard model resource for re-use; • Submit the compiled BP (eBP) to a WF-engine for execution. A BP Broker has the following features: • Support for multiple abstract BP specifications; • Support for encoding in multiple WF-engine languages. According to the Brokering principles, the designed system is flexible enough to support the use of multiple BP design (visual) tools, heterogeneous Web interfaces for model execution (e.g. OGC WPS, WSDL, etc.), and different Workflow engines. The present implementation makes use of the BPMN 2.0 notation for BP design and the jBPM workflow engine for eBP execution; however, the strong decoupling which characterizes the design of the BP Broker easily allows supporting other technologies. The main benefits of the proposed approach are: (i) no need for a composition infrastructure, (ii) alleviation from technicalities of workflow definitions, (iii) support of incomplete BPs, and (iv) the reuse of existing BPs as atomic processes. The BP Broker was designed and prototyped in the EC-funded projects EuroGEOSS (http://www.eurogeoss.eu) and UncertWeb (http://www.uncertweb.org); the latter project also provided the use scenarios that were used to test the framework: the eHabitat scenario (calculation of habitat similarity likelihood) and the FERA scenario (impact of climate change on land-use and crop yield). Three more scenarios are presently under development. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreements n. 248488 and n. 226487. References: Nativi, S., Mazzetti, P., & Geller, G. (2012). "Environmental model access and interoperability: The GEO Model Web initiative." Environmental Modelling & Software, 1-15.
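The brokering workflow can be sketched without BPMN or jBPM: an abstract BP names steps, a toy broker resolves each step against a registry of model implementations, checks I/O consistency, and executes the resulting chain. The registry, step names and data below are hypothetical.

```python
# Illustrative sketch only: resolve an abstract BP, check I/O consistency, execute.
REGISTRY = {   # hypothetical model implementations discovered by the broker
    "habitat_similarity": {"in": "region", "out": "similarity",
                           "run": lambda region: {"similarity": 0.8}},
    "climate_impact":     {"in": "similarity", "out": "impact",
                           "run": lambda similarity: {"impact": "moderate"}},
}

ABSTRACT_BP = ["habitat_similarity", "climate_impact"]   # abstract BP: names only

def compile_bp(abstract_bp):
    """Resolve steps and check I/O consistency before execution."""
    steps = [REGISTRY[name] for name in abstract_bp]
    for a, b in zip(steps, steps[1:]):
        if a["out"] != b["in"]:
            raise ValueError(f"mismatch: {a['out']} does not feed {b['in']}")
    return steps

def execute(executable_bp, **inputs):
    """Run the compiled chain, threading each step's output into the next step's input."""
    data = dict(inputs)
    for step in executable_bp:
        data.update(step["run"](data[step["in"]]))
    return data

print(execute(compile_bp(ABSTRACT_BP), region="alpine"))
```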
A Brokering Solution for Business Process Execution
NASA Astrophysics Data System (ADS)
Santoro, M.; Bigagli, L.; Roncella, R.; Mazzetti, P.; Nativi, S.
2012-12-01
Predicting the climate change impact on biodiversity and ecosystems, advancing our knowledge of the interconnections of environmental phenomena, assessing the validity of simulations, and other key challenges of Earth Sciences require intensive use of environmental modeling. The complexity of the Earth system requires the use of more than one model (often from different disciplines) to represent complex processes. The identification of appropriate mechanisms for reuse, chaining and composition of environmental models is considered a key enabler for an effective uptake of a global Earth Observation infrastructure, currently pursued by the international geospatial research community. The Group on Earth Observation (GEO) Model Web initiative aims to increase present accessibility and interoperability of environmental models, allowing their flexible composition into complex Business Processes (BPs). A few basic principles are at the base of the Model Web concept (Nativi et al.): 1. Open access; 2. Minimal entry barriers; 3. Service-driven approach; 4. Scalability. In this work we propose an architectural solution aiming to contribute to the Model Web vision. This solution applies the Brokering approach for facilitating complex multidisciplinary interoperability. The Brokering approach is currently adopted in the new GEOSS Common Infrastructure (GCI), as was presented at the last GEO Plenary meeting in Istanbul, November 2011. According to the Brokering principles, the designed system is flexible enough to support the use of multiple BP design (visual) tools, heterogeneous Web interfaces for model execution (e.g. OGC WPS, WSDL, etc.), and different Workflow engines. We designed and prototyped a component called BP Broker that is able to: (i) read an abstract BP, (ii) "compile" the abstract BP into an executable one (eBP) - in this phase the BP Broker might also provide recommendations for incomplete BPs and parameter mismatch resolution - and (iii) finally execute the eBP using a Workflow engine. The present implementation makes use of the BPMN 2.0 notation for BP design and the jBPM workflow engine for eBP execution; however, the strong decoupling which characterizes the design of the BP Broker easily allows supporting other technologies. The main benefits of the proposed approach are: (i) no need for a composition infrastructure, (ii) alleviation from technicalities of workflow definitions, (iii) support of incomplete BPs, and (iv) the reuse of existing BPs as atomic processes. The BP Broker was designed and prototyped in the EC-funded projects EuroGEOSS (http://www.eurogeoss.eu) and UncertWeb (http://www.uncertweb.org); the latter project also provided the use scenarios that were used to test the framework: the eHabitat scenario (calculation of habitat similarity likelihood) and the FERA scenario (impact of climate change on land-use and crop yield). Three more scenarios are presently under development. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreements n. 248488 and n. 226487. References: Nativi, S., Mazzetti, P., & Geller, G. (2012). "Environmental model access and interoperability: The GEO Model Web initiative." Environmental Modelling & Software, 1-15.
77 FR 1009 - Airworthiness Directives; Turbomeca Turboshaft Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-09
... Airworthiness Directives; Turbomeca Turboshaft Engines. AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Final ... the Federal Register. That AD applies to Turbomeca Arriel 1 series turboshaft engines. Contact: Frederick Zink, Aerospace Engineer, Engine Certification Office, FAA, 12 New England Executive Park ...
78 FR 41283 - Airworthiness Directives; Dowty Propellers Propellers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-10
... Service information is available at the FAA, Engine & Propeller Directorate, 12 New England Executive Park, Burlington. Contact: Aerospace Engineer, Boston Aircraft Certification Office, FAA, Engine and Propeller Directorate, 12 New England Executive Park ...
Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic
NASA Technical Reports Server (NTRS)
Leucht, Kurt W.; Semmel, Glenn S.
2008-01-01
The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
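A toy illustration of the code-generation idea, not the KSC tool or its specification format: a small declarative spec of boolean interlocks is translated into textual ladder-logic rungs. The spec layout, signal names and rung rendering are all invented for this sketch.

```python
# Translate a declarative interlock spec into textual ladder-logic rungs.
SPEC = [
    # output coil      required contacts (all must be TRUE)
    ("OPEN_VALVE",    ["TANK_PRESSURE_OK", "HATCH_CLOSED"]),
    ("START_PUMP",    ["OPEN_VALVE", "NOT E_STOP"]),
]

def to_rung(coil, contacts):
    """Render one rung: series contacts energizing a coil."""
    parts = []
    for c in contacts:
        if c.startswith("NOT "):
            parts.append(f"-|/ {c[4:]} |-")   # normally-closed contact
        else:
            parts.append(f"-| {c} |-")        # normally-open contact
    return "".join(parts) + f"-( {coil} )-"

for coil, contacts in SPEC:
    print(to_rung(coil, contacts))
```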
Rattanatamrong, Prapaporn; Matsunaga, Andrea; Raiturkar, Pooja; Mesa, Diego; Zhao, Ming; Mahmoudi, Babak; Digiovanna, Jack; Principe, Jose; Figueiredo, Renato; Sanchez, Justin; Fortes, Jose
2010-01-01
The CyberWorkstation (CW) is an advanced cyber-infrastructure for Brain-Machine Interface (BMI) research. It allows the development, configuration and execution of BMI computational models using high-performance computing resources. The CW's concept is implemented using a software structure in which an "experiment engine" is used to coordinate all software modules needed to capture, communicate and process brain signals and motor-control commands. A generic BMI-model template, which specifies a common interface to the CW's experiment engine, and a common communication protocol enable easy addition, removal or replacement of models without disrupting system operation. This paper reviews the essential components of the CW and shows how templates can facilitate the processes of BMI model development, testing and incorporation into the CW. It also discusses the ongoing work towards making this process infrastructure independent.
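A hedged sketch of the "common model template" idea: every model exposes the same minimal interface, so the experiment engine can add, remove or swap models without touching its own code. The method names and the linear decoder are invented for illustration and are not the CW's actual template.

```python
# Common model interface plus a trivial decoder, driven by a toy experiment engine.
from abc import ABC, abstractmethod

class BMIModelTemplate(ABC):
    @abstractmethod
    def configure(self, params: dict) -> None: ...
    @abstractmethod
    def process(self, neural_frame: list[float]) -> list[float]:
        """Map one frame of neural features to motor-control commands."""

class LinearDecoder(BMIModelTemplate):
    def configure(self, params):
        self.gain = params.get("gain", 1.0)
    def process(self, neural_frame):
        return [self.gain * sum(neural_frame)]

def experiment_engine(model: BMIModelTemplate, frames):
    model.configure({"gain": 0.1})
    return [model.process(f) for f in frames]   # engine never sees model internals

print(experiment_engine(LinearDecoder(), [[1.0, 2.0], [0.5, 0.5]]))
```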
Requirements analysis for a hardware, discrete-event, simulation engine accelerator
NASA Astrophysics Data System (ADS)
Taylor, Paul J., Jr.
1991-12-01
An analysis of a general Discrete Event Simulation (DES), executing on the distributed architecture of an eight-node Intel iPSC/2 hypercube, was performed. The most time consuming portions of the general DES algorithm were determined to be the functions associated with message passing of required simulation data between processing nodes of the hypercube architecture. A behavioral description, using the IEEE standard VHSIC Hardware Description and Design Language (VHDL), for a general DES hardware accelerator is presented. The behavioral description specifies the operational requirements for a DES coprocessor to augment the hypercube's execution of DES simulations. The DES coprocessor design implements the functions necessary to perform distributed discrete event simulations using a conservative time synchronization protocol.
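The conservative protocol mentioned above can be sketched in software (not the VHDL coprocessor): each logical process only executes events whose timestamps do not exceed the "safe time" promised by its neighbour, i.e. the neighbour's clock plus its lookahead. The event content and lookahead values below are illustrative.

```python
# Two logical processes advancing under conservative time synchronization.
import heapq

class LP:
    def __init__(self, name, lookahead):
        self.name, self.lookahead = name, lookahead
        self.clock, self.queue = 0.0, []          # local clock, pending events

    def promise(self):
        """Lower bound on the timestamp of any future event this LP can send."""
        return self.clock + self.lookahead

    def post(self, t, msg):
        heapq.heappush(self.queue, (t, msg))

    def advance(self, safe_time, neighbour):
        """Process all queued events that are provably safe to execute."""
        while self.queue and self.queue[0][0] <= safe_time:
            t, msg = heapq.heappop(self.queue)
            self.clock = t
            print(f"{self.name} executes {msg!r} at t={t}")
            neighbour.post(t + self.lookahead, f"reply to {msg}")

a, b = LP("A", lookahead=1.0), LP("B", lookahead=2.0)
a.post(0.5, "start")
for _ in range(3):                                # alternate; event order is never violated
    a.advance(b.promise(), b)
    b.advance(a.promise(), a)
```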
A self-referential HOWTO on release engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Galassi, Mark C.
Release engineering is a fundamental part of the software development cycle: it is the point at which quality control is exercised and bug fixes are integrated. The way in which software is released also gives the end user her first experience of a software package, while in scientific computing release engineering can guarantee reproducibility. For these reasons and others, the release process is a good indicator of the maturity and organization of a development team. Software teams often do not put in place a release process at the beginning. This is unfortunate because the team does not have early and continuous execution of test suites, and it does not exercise the software in the same conditions as the end users. I describe an approach to release engineering based on the software tools developed and used by the GNU project, together with several specific proposals related to packaging and distribution. I do this in a step-by-step manner, demonstrating how this very paper is written and built using proper release engineering methods. Because many aspects of release engineering are not exercised in the building of the paper, the accompanying software repository also contains examples of software libraries.
Process Materialization Using Templates and Rules to Design Flexible Process Models
NASA Astrophysics Data System (ADS)
Kumar, Akhil; Yao, Wen
The main idea in this paper is to show how flexible processes can be designed by combining generic process templates and business rules. We instantiate a process by applying rules to specific case data, and running a materialization algorithm. The customized process instance is then executed in an existing workflow engine. We present an architecture and also give an algorithm for process materialization. The rules are written in a logic-based language like Prolog. Our focus is on capturing deeper process knowledge and achieving a holistic approach to robust process design that encompasses control flow, resources and data, as well as makes it easier to accommodate changes to business policy.
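A minimal sketch of materialization under stated assumptions: a generic template lists tasks, rules applied to case data decide which optional tasks survive, and the result is a customized process instance ready to hand to a workflow engine. The template, rules and case fields are invented; the paper uses Prolog-style rules rather than the Python predicates shown here.

```python
# Materialize a process instance from a generic template plus business rules.
TEMPLATE = ["receive_order", "check_credit?", "ship", "invoice"]

RULES = [
    # (condition on case data,               decision for an optional task)
    (lambda case: case["amount"] > 1000,     ("check_credit?", "keep")),
    (lambda case: case["amount"] <= 1000,    ("check_credit?", "drop")),
]

def materialize(template, rules, case):
    decisions = {task: action for cond, (task, action) in rules if cond(case)}
    instance = []
    for task in template:
        if task.endswith("?"):                       # optional task: rules decide
            if decisions.get(task) == "keep":
                instance.append(task.rstrip("?"))
        else:
            instance.append(task)
    return instance

print(materialize(TEMPLATE, RULES, {"amount": 250}))    # credit check dropped
print(materialize(TEMPLATE, RULES, {"amount": 5000}))   # credit check kept
```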
Execution Of Systems Integration Principles During Systems Engineering Design
2016-09-01
This thesis discusses integration failures observed in DOD and non-DOD systems, such as inadequate stakeholder analysis, incomplete problem space and design ... design, development, test and deployment of a system. A lifecycle structure consists of phases within a methodology or process model. There are many ... investigate design decisions without the need to commit to physical forms; "experimental investigation using a model yields design or operational ...
2013-04-01
Forces can be computed at specific angular positions, and geometrical parameters can be evaluated. Much higher resolution models are required, along ... composition engines (C#, C++, Python, Java). Desert operates on the CyPhy model, converting from a design space alternative structure to a set of design ... consists of scripts to execute Dymola, post-processing of results to create metrics, and general management of the job sequence. An earlier version created ...
Airland Battlefield Environment (ALBE) Tactical Decision Aid (TDA) Demonstration Program,
1987-11-12
Management System (DBMS) software, GKS graphics libraries, and user interface software. These components of the ATB system software architecture will be ... knowledge base and augment the decision making process by providing information useful in the formulation and execution of battlefield strategies ... Topographic Laboratories as an Engineer. Ms. Capps is managing the software development of the AirLand Battlefield Environment (ALBE) geographic ...
Thermodynamic considerations on Ca2+-induced biochemical reactions in living cells
NASA Astrophysics Data System (ADS)
Lucia, Umberto; Ponzetto, Antonio
2016-02-01
Cells can be regarded as complex engines that execute a series of chemical reactions. Energy transformations, thermo-electro-chemical processes and transport phenomena can occur across cell membranes. Different but related thermo-electro-biochemical behaviours can occur in health and disease states. Analysis of the irreversibility related to ion fluxes can represent a new approach to studying and controlling the biochemical behaviour of living cells.
75 FR 44725 - Airworthiness Directives; Pratt & Whitney PW4000 Series Turbofan Engines; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-29
... Airworthiness Directives; Pratt & Whitney PW4000 Series Turbofan Engines; Correction. AGENCY: Federal Aviation Administration (FAA), DOT. Contact: Engineer, Engine Certification Office, FAA, Engine and Propeller Directorate, 12 New England Executive Park ... Massachusetts, on July 23, 2010. Francis A. Favara, Manager, Engine and Propeller Directorate, Aircraft ...
Practical Application of Model-based Programming and State-based Architecture to Space Missions
NASA Technical Reports Server (NTRS)
Horvath, Gregory A.; Ingham, Michel D.; Chung, Seung; Martin, Oliver; Williams, Brian
2006-01-01
Innovative systems and software engineering solutions are required to meet the increasingly challenging demands of deep-space robotic missions. While recent advances in the development of an integrated systems and software engineering approach have begun to address some of these issues, they are still, at their core, highly manual and, therefore, error-prone. This paper describes a task aimed at infusing MIT's model-based executive, Titan, into JPL's Mission Data System (MDS), a unified state-based architecture, systems engineering process, and supporting software framework. Results of the task are presented, including a discussion of the benefits and challenges associated with integrating mature model-based programming techniques and technologies into a rigorously-defined domain specific architecture.
NASA Technical Reports Server (NTRS)
Pieper, Jerry L.; Walker, Richard E.
1993-01-01
During the past three decades, an enormous amount of resources was expended in the design and development of Liquid Oxygen/Hydrocarbon and Hydrogen (LOX/HC and LOX/H2) rocket engines. A significant portion of these resources was used to develop and demonstrate the performance and combustion stability of each new engine. During these efforts, many analytical and empirical models were developed that characterize design parameters and combustion processes that influence performance and stability. Many of these models are suitable as design tools, but they have not been assembled into an industry-wide usable analytical design methodology. The objective of this program was to assemble existing performance and combustion stability models into a usable methodology capable of producing high performing and stable LOX/hydrocarbon and LOX/hydrogen propellant booster engines.
16 CFR 1000.29 - Directorate for Engineering Sciences.
Code of Federal Regulations, 2014 CFR
2014-01-01
Commercial Practices; ORGANIZATION AND FUNCTIONS; § 1000.29 Directorate for Engineering Sciences. The Directorate for Engineering Sciences, which is managed by the Associate Executive Director for Engineering Sciences, is responsible for ...
16 CFR 1000.29 - Directorate for Engineering Sciences.
Code of Federal Regulations, 2011 CFR
2011-01-01
Commercial Practices; ORGANIZATION AND FUNCTIONS; § 1000.29 Directorate for Engineering Sciences. The Directorate for Engineering Sciences, which is managed by the Associate Executive Director for Engineering Sciences, is responsible for ...
An Architecture-Centric Approach for Acquiring Software-Reliant Systems
2011-04-30
Architecture Acquisition session, Wednesday, May 11, 2011, 11:15 a.m. to 12:45 p.m. Chair: Christopher Deegan, Executive Director, Program Executive Office for ... Christopher Deegan, Executive Director, Program Executive Officer, Integrated Warfare Systems (PEO IWS). Mr. Deegan directs the development, acquisition, and ... Mr. Deegan holds a Bachelor of Science degree in Industrial Engineering from Penn State University, University Park, Pennsylvania, and a Master of ...
Engineering Technical Review Planning Briefing
NASA Technical Reports Server (NTRS)
Gardner, Terrie
2012-01-01
The general topics covered in the engineering technical planning briefing are 1) overviews of NASA, Marshall Space Flight Center (MSFC), and Engineering, 2) the NASA Systems Engineering (SE) Engine and its implementation, 3) the NASA Project Life Cycle, 4) MSFC Technical Management Branch Services in relation to the SE Engine and the Project Life Cycle, 5) Technical Reviews, 6) NASA Human Factor Design Guidance, and 7) the MSFC Human Factors Team. The engineering technical review portion of the presentation is the primary focus of the overall presentation and will address the definition of a design review, execution guidance, the essential stages of a technical review, and the overall review planning life cycle. Examples of technical review plan content, review approaches, review schedules, and the review process will be provided and discussed. The human factors portion of the presentation will focus on the NASA guidance for human factors. Human factors definition, categories, design guidance, and human factor specialist roles will be addressed. In addition, the NASA Systems Engineering Engine description, definition, and application will be reviewed as background leading into the NASA Project Life Cycle overview and technical review planning discussion.
Shuttle Abort Flight Management (SAFM) - Application Overview
NASA Technical Reports Server (NTRS)
Hu, Howard; Straube, Tim; Madsen, Jennifer; Ricard, Mike
2002-01-01
One of the most demanding tasks that must be performed by the Space Shuttle flight crew is the process of determining whether, when and where to abort the vehicle should engine or system failures occur during ascent or entry. Current Shuttle abort procedures involve paging through complicated paper checklists to decide on the type of abort and where to abort. Additional checklists then lead the crew through a series of actions to execute the desired abort. This process is even more difficult and time consuming in the absence of ground communications, since the ground flight controllers have analysis tools and information that are currently not available in the Shuttle cockpit. Crew workload, specifically for abort procedures, will be greatly reduced with the implementation of the Space Shuttle Cockpit Avionics Upgrade (CAU) project. The intent of CAU is to maximize crew situational awareness and reduce flight workload through enhanced controls and displays, and onboard abort assessment and determination capability. SAFM was developed to help satisfy the CAU objectives by providing the crew with dynamic information about the capability of the vehicle to perform a variety of abort options during ascent and entry. This paper presents an overview of the SAFM application. As shown in Figure 1, SAFM processes the vehicle navigation state and other guidance information to provide the CAU displays with evaluations of abort options, as well as landing site recommendations. This is accomplished by three main SAFM components: the Sequencer Executive, the Powered Flight Function, and the Glided Flight Function. The Sequencer Executive dispatches the Powered and Glided Flight Functions to evaluate the vehicle's capability to execute the current mission (or current abort), as well as more than 15 hypothetical abort options or scenarios. Scenarios are sequenced and evaluated throughout powered and glided flight. Abort scenarios evaluated include Abort to Orbit (ATO), Transatlantic Abort Landing (TAL), East Coast Abort Landing (ECAL) and Return to Launch Site (RTLS). Sequential and simultaneous engine failures are assessed, and landing footprint information is provided during actual entry scenarios as well as hypothetical "loss of thrust now" scenarios during ascent.
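A hedged sketch of the sequencer idea only: a table of hypothetical abort scenarios is evaluated against a simplified vehicle state, and a display layer would then rank the feasible options. The scenario checks, thresholds and state fields below are invented and bear no relation to real Shuttle abort boundaries.

```python
# Evaluate a set of illustrative abort scenarios against a simplified state.
SCENARIOS = {
    "ATO":  lambda s: s["velocity"] > 6500,                      # Abort to Orbit
    "TAL":  lambda s: 2500 < s["velocity"] <= 6500,              # Transatlantic Abort Landing
    "RTLS": lambda s: s["velocity"] <= 2500 and s["engines_out"] <= 1,
}

def sequencer_executive(state):
    """Dispatch every scenario evaluation and collect feasibility results."""
    return {name: check(state) for name, check in SCENARIOS.items()}

state = {"velocity": 3000, "engines_out": 1}                     # m/s, count (illustrative)
print(sequencer_executive(state))                                # {'ATO': False, 'TAL': True, 'RTLS': False}
```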
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monozov, Dmitriy; Lukie, Zarija
2016-04-01
Modern scientific and engineering simulations track the time evolution of billions of elements. For such large runs, storing most time steps for later analysis is not a viable strategy. It is far more efficient to analyze the simulation data while it is still in memory. The developers present a novel design for running multiple codes in situ: using coroutines and position-independent executables they enable cooperative multitasking between simulation and analysis, allowing the same executables to post-process simulation output, as well as to process it on the fly, both in situ and in transit. They present Henson, an implementation of our design, and illustrate its versatility by tackling analysis tasks with different computational requirements. Our design differs significantly from the existing frameworks and offers an efficient and robust approach to integrating multiple codes on modern supercomputers. The presented techniques can also be integrated into other in situ frameworks.
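The cooperative-multitasking idea can be illustrated with plain Python generators, which is only an analogy: Henson itself relies on coroutines and position-independent executables, not on the toy code below. The "simulation" yields each time step and the "analysis" runs on the data while it is still in memory.

```python
# Generator-based analogy of in situ analysis: control alternates between codes.
def simulation(steps):
    state = [0.0] * 8
    for t in range(steps):
        state = [x + t for x in state]        # stand-in for real physics
        yield t, state                        # hand control to the analysis

def analysis(run):
    for t, state in run:                      # resumes the simulation each loop
        print(f"step {t}: mean = {sum(state) / len(state):.2f}")

analysis(simulation(steps=3))
```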
Running SINDA '85/FLUINT interactive on the VAX
NASA Technical Reports Server (NTRS)
Simmonds, Boris
1992-01-01
Computer software tools for engineering are typically run in three modes: batch, demand, and interactive. The first two are the most popular in the SINDA world. The third is not so popular, probably due to users' lack of access to the command procedure files for running SINDA '85, or lack of familiarity with the SINDA '85 execution process (pre-processor, processor, compilation, linking, execution, and all of the file assignments, creations, deletions and de-assignments). Interactive is the mode that makes thermal analysis with SINDA '85 a real-time design tool. This paper explains a sufficient command procedure (the minimum modifications required in an existing demand command procedure) to run SINDA '85 on the VAX in interactive mode. To exercise the procedure, a sample problem is presented that exemplifies the mode and additional programming capabilities available in SINDA '85. Following the same guidelines, the process can be extended to other computer platforms on which SINDA '85 resides.
Virtual manufacturing work cell for engineering
NASA Astrophysics Data System (ADS)
Watanabe, Hideo; Ohashi, Kazushi; Takahashi, Nobuyuki; Kato, Kiyotaka; Fujita, Satoru
1997-12-01
The life cycles of products have been getting shorter. To meet this rapid turnover, manufacturing systems must be frequently changed as well. Engineering a manufacturing system involves several tasks such as process planning, layout design, programming, and final testing using actual machines. This development of manufacturing systems takes a long time and is expensive. To aid this engineering process, we have developed the virtual manufacturing workcell (VMW). This paper describes the concept of the VMW and a design method, computer-aided manufacturing engineering using the VMW (CAME-VMW), related to the above engineering tasks. The VMW holds all design data and reproduces the behavior of equipment and devices using a simulator. The simulator has logical and physical functionality: the former simulates sequence control, and the latter simulates motion control and shape movement in 3D space. The simulator can execute the same control software written for the actual machines, so the behavior can be verified precisely before the manufacturing workcell is constructed. The VMW creates an engineering workspace for several engineers and offers debugging tools such as virtual equipment and virtual controllers. We applied the VMW to the development of a transfer workcell for a vaporization machine in an actual manufacturing system producing plasma display panels (PDPs) and confirmed its effectiveness.
Survey of Command Execution Systems for NASA Spacecraft and Robots
NASA Technical Reports Server (NTRS)
Verma, Vandi; Jonsson, Ari; Simmons, Reid; Estlin, Tara; Levinson, Rich
2005-01-01
NASA spacecraft and robots operate at long distances from Earth. Command sequences, generated manually or by automated planners on Earth, must eventually be executed autonomously onboard the spacecraft or robot. Software systems that execute commands onboard are known variously as execution systems, virtual machines, or sequence engines. Every robotic system requires some sort of execution system, but the level of autonomy and the type of control they are designed for vary greatly. This paper presents a survey of execution systems with a focus on systems relevant to NASA missions.
16 CFR § 1000.29 - Directorate for Engineering Sciences.
Code of Federal Regulations, 2013 CFR
2013-01-01
Commercial Practices; ORGANIZATION AND FUNCTIONS; § 1000.29 Directorate for Engineering Sciences. The Directorate for Engineering Sciences, which is managed by the Associate Executive Director for Engineering Sciences, is responsible for ...
Autonomous Real Time Requirements Tracing
NASA Technical Reports Server (NTRS)
Plattsmier, George I.; Stetson, Howard K.
2014-01-01
One of the more challenging aspects of software development is the ability to verify and validate the functional software requirements dictated by the Software Requirements Specification (SRS) and the Software Detail Design (SDD). Ensuring the software has achieved the intended requirements is the responsibility of the Software Quality team and the Software Test team. The utilization of Timeliner-TLX(sup TM) Auto-Procedures for relocating ground operations positions to ISS automated on-board operations has begun the transition that would be required for manned deep space missions with minimal crew requirements. This transition also moves the auto-procedures from the procedure realm into the flight software arena, and as such the operational requirements and testing will be more structured and rigorous. The auto-procedures would be required to meet NASA software standards as specified in the Software Safety Standard (NASA-STD-8719), the Software Engineering Requirements (NPR 7150), the Software Assurance Standard (NASA-STD-8739) and also the Human Rating Requirements (NPR-8705). The Autonomous Fluid Transfer System (AFTS) test-bed utilizes the Timeliner-TLX(sup TM) language for development of autonomous command and control software. The Timeliner-TLX(sup TM) system has the unique feature of providing the current line of the statement in execution during real-time execution of the software. This execution line number reporting unlocks the capability of monitoring the execution autonomously by use of a companion Timeliner-TLX(sup TM) sequence, as the line number reporting is embedded inside the Timeliner-TLX(sup TM) execution engine. This negates I/O processing of this type of data, as the line number status of executing sequences is built in as a function reference. This paper will outline the design and capabilities of the AFTS Autonomous Requirements Tracker, which traces and logs SRS requirements as they are being met during real-time execution of the targeted system. It is envisioned that real-time requirements tracing will greatly assist the movement of auto-procedures to flight software, enhancing the software assurance of auto-procedures and also their acceptance as reliable commanders.
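An analogy of "execution line number reporting" in Python, not the Timeliner-TLX mechanism itself: sys.settrace reports each executed line, and a companion tracer marks requirements as met when their mapped lines run. The requirement IDs, the mapping and the traced function are all hypothetical.

```python
# Trace executed line numbers and mark mapped requirements as satisfied.
import sys

REQUIREMENT_MAP = {}        # line number -> requirement id (filled in below)
SATISFIED = set()

def tracer(frame, event, arg):
    if event == "line":
        req = REQUIREMENT_MAP.get(frame.f_lineno)
        if req:
            SATISFIED.add(req)
    return tracer

def transfer_fluid():
    open_valve = True       # suppose SRS-101 requires the valve to open
    flow = 42               # suppose SRS-102 requires flow to be commanded
    return open_valve, flow

# map the two lines of interest to requirement ids
base = transfer_fluid.__code__.co_firstlineno
REQUIREMENT_MAP[base + 1] = "SRS-101"
REQUIREMENT_MAP[base + 2] = "SRS-102"

sys.settrace(tracer)
transfer_fluid()
sys.settrace(None)
print("requirements traced during execution:", sorted(SATISFIED))
```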
Rapid Prototyping and the Human Factors Engineering Process
2016-08-29
... without the effort and cost associated with conventional man-in-the-loop simulation. Advocates suggest that rapid prototyping is compatible with ... use should be made of man-in-the-loop simulation to supplement those analyses, but such simulation is expensive and time consuming, precluding ... conventional man-in-the-loop simulation. Rapid prototyping involves the construction and use of an executable model of a human-machine interface.
Systematic and Scalable Testing of Concurrent Programs
2013-12-16
The evaluation of CHESS [107] checked eight different programs ranging from process management libraries to a distributed execution engine to a research ... tool (§3.1) targets systematic testing of scheduling nondeterminism in multi-threaded components of the Omega cluster management system [129], while ... tool for systematic testing of multi-threaded components of the Omega cluster management system [129]. In particular, §3.1.1 defines a model for ...
Long term trending of engineering data for the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Cox, Ross M.
1993-01-01
A major goal in spacecraft engineering analysis is the detection of component failures before the fact. Trending is the process of monitoring subsystem states to discern unusual behaviors. This involves reducing vast amounts of data about a component or subsystem into a form that helps humans discern underlying patterns and correlations. A long term trending system has been developed for the Hubble Space Telescope. Besides processing the data for 988 distinct telemetry measurements each day, it produces plots of 477 important parameters for the entire 24 hours. Daily updates to the trend files also produce 339 thirty-day trend plots each month. The total system combines command procedures to control the execution of the C-based data processing program, user-written FORTRAN routines, and commercial off-the-shelf plotting software. This paper includes a discussion of the performance of the trending system and of its limitations.
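A minimal sketch of the reduction step behind such trending, assuming an invented parameter name and sample values: raw telemetry samples for one parameter are collapsed into a daily record (min / mean / max) of the kind that feeds long-term trend plots.

```python
# Reduce raw telemetry samples into daily min/mean/max trend records.
from collections import defaultdict
from statistics import mean

samples = [  # (day, parameter, value) -- illustrative data only
    ("1993-06-01", "BATTERY_TEMP", 21.0),
    ("1993-06-01", "BATTERY_TEMP", 23.5),
    ("1993-06-02", "BATTERY_TEMP", 22.1),
]

daily = defaultdict(list)
for day, parameter, value in samples:
    daily[(day, parameter)].append(value)

trend_file = {key: (min(vals), mean(vals), max(vals)) for key, vals in daily.items()}
for (day, parameter), (lo, avg, hi) in sorted(trend_file.items()):
    print(f"{day} {parameter}: min={lo} mean={avg:.2f} max={hi}")
```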
40 CFR 35.937-12 - Subcontracts under subagreements for architectural or engineering services.
Code of Federal Regulations, 2011 CFR
2011-07-01
Protection of Environment; § 35.937-12 Subcontracts under subagreements for architectural or engineering services. (a) Neither award and execution of subcontracts under a prime contract for architectural or engineering services, nor the procurement and negotiation procedures used by the engineer in ...
40 CFR 35.937-12 - Subcontracts under subagreements for architectural or engineering services.
Code of Federal Regulations, 2012 CFR
2012-07-01
Protection of Environment; § 35.937-12 Subcontracts under subagreements for architectural or engineering services. (a) Neither award and execution of subcontracts under a prime contract for architectural or engineering services, nor the procurement and negotiation procedures used by the engineer in ...
40 CFR 35.937-12 - Subcontracts under subagreements for architectural or engineering services.
Code of Federal Regulations, 2013 CFR
2013-07-01
Protection of Environment; § 35.937-12 Subcontracts under subagreements for architectural or engineering services. (a) Neither award and execution of subcontracts under a prime contract for architectural or engineering services, nor the procurement and negotiation procedures used by the engineer in ...
40 CFR 35.937-12 - Subcontracts under subagreements for architectural or engineering services.
Code of Federal Regulations, 2010 CFR
2010-07-01
Protection of Environment; § 35.937-12 Subcontracts under subagreements for architectural or engineering services. (a) Neither award and execution of subcontracts under a prime contract for architectural or engineering services, nor the procurement and negotiation procedures used by the engineer in ...
40 CFR 35.937-12 - Subcontracts under subagreements for architectural or engineering services.
Code of Federal Regulations, 2014 CFR
2014-07-01
Protection of Environment; § 35.937-12 Subcontracts under subagreements for architectural or engineering services. (a) Neither award and execution of subcontracts under a prime contract for architectural or engineering services, nor the procurement and negotiation procedures used by the engineer in ...
Optical Measurements at the Combustor Exit of the HIFiRE 2 Ground Test Engine
NASA Technical Reports Server (NTRS)
Brown, Michael S.; Herring, Gregory C.; Cabell, Karen; Hass, Neal; Barhorst, Todd F.; Gruber, Mark
2012-01-01
The development of optical techniques capable of measuring in-stream flow properties of air-breathing hypersonic engines is a goal of the Aerospace Propulsion Division at AFRL. Of particular interest are techniques such as tunable diode laser absorption spectroscopy that can be implemented in both ground and flight test efforts. We recently executed a measurement campaign at the exit of the combustor of the HIFiRE 2 ground test engine during Phase II operation of the engine. Data was collected in anticipation of similar data sets to be collected during the flight experiment. The ground test optical data provides a means to evaluate signal processing algorithms, particularly those associated with limited line-of-sight tomography. Equally important, this in-stream data was collected to complement data acquired with surface-mounted instrumentation and the accompanying flowpath modeling efforts, both CFD and lower-order modeling. Here we discuss the specifics of hardware and data collection along with a coarse-grained look at the acquired data and our approach to processing and analyzing it.
Davis, J P; Akella, S; Waddell, P H
2004-01-01
Having greater computational power on the desktop for processing taxa data sets has been a dream of biologists/statisticians involved in phylogenetics data analysis. Many existing algorithms have been highly optimized; one example is Felsenstein's PHYLIP code, written in C, for the UPGMA and neighbor joining algorithms. However, conventional computers still cannot process more than a few tens of taxa in a reasonable amount of time, making it difficult for phylogenetics practitioners to quickly explore data sets, such as might be done from a laptop computer. We discuss the application of custom computing techniques to phylogenetics. In particular, we apply this technology to speed up UPGMA algorithm execution by a factor of one hundred relative to the PHYLIP code running on the same PC. We report on these experiments and discuss how custom computing techniques can be used to accelerate phylogenetics algorithm performance not only on the desktop but also on larger, high-performance computing engines, thus enabling the high-speed processing of data sets involving thousands of taxa.
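For reference, a plain software sketch of the UPGMA kernel that such hardware accelerates, with an invented three-taxon distance matrix and no attempt at FPGA-style optimization: at each step the closest pair of clusters is merged and distances to the merged cluster are recomputed as size-weighted averages.

```python
# Software-only UPGMA sketch on a tiny, illustrative distance matrix.
def upgma(labels, dist):
    """dist[(a, b)] = distance between leaves a and b (one ordering per pair)."""
    clusters = {l: 1 for l in labels}             # cluster -> number of leaves
    d = dict(dist)
    while len(clusters) > 1:
        (a, b), _ = min(((p, v) for p, v in d.items()
                         if p[0] in clusters and p[1] in clusters),
                        key=lambda kv: kv[1])     # closest active pair
        merged, size = f"({a},{b})", clusters[a] + clusters[b]
        for c in clusters:
            if c in (a, b):
                continue
            dac = d.get((a, c), d.get((c, a)))
            dbc = d.get((b, c), d.get((c, b)))
            d[(merged, c)] = (clusters[a] * dac + clusters[b] * dbc) / size
        del clusters[a], clusters[b]
        clusters[merged] = size
    return next(iter(clusters))

labels = ["A", "B", "C"]
dist = {("A", "B"): 2.0, ("A", "C"): 6.0, ("B", "C"): 6.0}
print(upgma(labels, dist))        # -> ((A,B),C)
```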
NASA Technical Reports Server (NTRS)
Miller, R. E., Jr.; Hansen, S. D.; Redhed, D. D.; Southall, J. W.; Kawaguchi, A. S.
1974-01-01
Evaluation of the cost-effectiveness of integrated analysis/design systems, with particular attention to the Integrated Program for Aerospace-Vehicle Design (IPAD) project. An analysis of all the ingredients of IPAD indicates the feasibility of a significant cost and flowtime reduction in the product design process involved. It is also concluded that an IPAD-supported design process will provide a framework for configuration control, whereby the engineering costs for design, analysis and testing can be controlled during the air vehicle development cycle.
Executive control systems in the engineering design environment
NASA Technical Reports Server (NTRS)
Hurst, P. W.; Pratt, T. W.
1985-01-01
Executive Control Systems (ECSs) are software structures for the unification of various engineering design application programs into comprehensive systems with a central user interface (uniform access) method and a data management facility. Attention is given here to the most significant findings of a research program covering 24 ECSs used in government and industry engineering design environments to integrate CAD/CAE application programs. Characterizations are given for the systems' major architectural components and the alternative design approaches considered in their development. Attention is also given to ECS development prospects in the areas of interdisciplinary usage, standardization, knowledge utilization, and computer science technology transfer.
Huser, Vojtech; Rasmussen, Luke V; Oberg, Ryan; Starren, Justin B
2011-04-10
Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present an application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment, and the challenge of user-friendly representation of clinical logic. We present our implementation of a workflow engine technology that addresses these two challenges in delivering clinical decision support. Our system is based on a cross-industry standard, the XML (extensible markup language) process definition language (XPDL). The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to the lack of standardization of EHR systems in this area. We present results of our evaluation of the flowchart-based graphical notation as well as an architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture. We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform.
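A hedged sketch of the retrospective-testing idea, not XPDL or the authors' suite: the same flowchart-like rule, expressed here as a short list of steps, is applied to retrospective records to see how often the alert would have fired before any prospective deployment. The field names, threshold and message are invented.

```python
# Apply one flowchart-style decision rule to retrospective records.
RULE_FLOW = [
    ("check", lambda p: p["ldl"] > 190 and not p["on_statin"]),
    ("alert", "Consider statin therapy"),
]

def run_flow(flow, patient):
    (_, condition), (_, message) = flow
    return message if condition(patient) else None

retrospective = [
    {"id": 1, "ldl": 210, "on_statin": False},
    {"id": 2, "ldl": 150, "on_statin": False},
]

# retrospective test: how often would the alert have fired?
fired = [p["id"] for p in retrospective if run_flow(RULE_FLOW, p)]
print(f"alert would have fired for {len(fired)}/{len(retrospective)} patients: {fired}")
```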
Task Management in the New ATLAS Production System
NASA Astrophysics Data System (ADS)
De, K.; Golubkov, D.; Klimentov, A.; Potekhin, M.; Vaniachine, A.; Atlas Collaboration
2014-06-01
This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top level workflow manager which translates physicists' needs for production level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload increased in volume and complexity in recent years (the ATLAS production tasks count is above one million, with each task containing hundreds or thousands of jobs) there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.
NASA Technical Reports Server (NTRS)
Mellish, J. A.
1980-01-01
Engine control techniques were established and new technology requirements were identified. The designs of the components and engine were prepared in sufficient depth to calculate engine and component weights and envelopes, turbopump efficiencies and recirculation leakage rates, and engine performance. Engine design assumptions are presented along with the structural design criteria.
Real-time control for manufacturing space shuttle main engines: Work in progress
NASA Technical Reports Server (NTRS)
Ruokangas, Corinne C.
1988-01-01
During the manufacture of space-based assemblies such as Space Shuttle Main Engines, flexibility is required due to the high-cost and low-volume nature of the end products. Various systems have been developed pursuing the goal of adaptive, flexible manufacturing for several space applications, including an Advanced Robotic Welding System for the manufacture of complex components of the Space Shuttle Main Engines. The Advanced Robotic Welding System (AROWS) is an on-going joint effort, funded by NASA, between NASA/Marshall Space Flight Center, and two divisions of Rockwell International: Rocketdyne and the Science Center. AROWS includes two levels of flexible control of both motion and process parameters: Off-line programming using both geometric and weld-process data bases, and real-time control incorporating multiple sensors during weld execution. Both control systems were implemented using conventional hardware and software architectures. The feasibility of enhancing the real-time control system using the problem-solving architecture of Schemer is investigated and described.
NASA Astrophysics Data System (ADS)
Abisset-Chavanne, Emmanuelle; Duval, Jean Louis; Cueto, Elias; Chinesta, Francisco
2018-05-01
Traditionally, Simulation-Based Engineering Sciences (SBES) has relied on the use of static data inputs (model parameters, initial or boundary conditions, … obtained from adequate experiments) to perform simulations. A new paradigm in the field of Applied Sciences and Engineering has emerged in the last decade. Dynamic Data-Driven Application Systems [9, 10, 11, 12, 22] allow the linkage of simulation tools with measurement devices for real-time control of simulations and applications, entailing the ability to dynamically incorporate additional data into an executing application, and in reverse, the ability of an application to dynamically steer the measurement process. It is in that context that traditional "digital twins" are giving rise to a new generation of goal-oriented data-driven application systems, also known as "hybrid twins", embracing models based on physics and models exclusively based on data adequately collected and assimilated to fill the gap between usual model predictions and measurements. Within this framework, new methodologies based on model learners, machine learning and kinetic goal-oriented design are defining a new paradigm in materials, processes and systems engineering.
NASA Technical Reports Server (NTRS)
1981-01-01
The objective of the study was to generate the system design of a performance-optimized, advanced LOX/hydrogen expander cycle space engine. The engine requirements are summarized, and the development and operational experience with the expander cycle RL10 engine were reviewed. The engine development program is outlined.
ATLAS Distributed Computing Experience and Performance During the LHC Run-2
NASA Astrophysics Data System (ADS)
Filipčič, A.;
2017-10-01
ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. All stable ATLAS sites are now able to store unique or primary copies of the datasets. ATLAS Distributed Computing is further evolving to speed up request processing by introducing network awareness, using machine learning and optimisation of the latencies during the execution of the full chain of tasks. The Event Service, a new workflow and job execution engine, is designed around check-pointing at the level of event processing to use opportunistic resources more efficiently. ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results. The exploitation of opportunistic resources was at an early stage throughout 2015, at the level of 10% of the total ATLAS computing power, but in the next few years it is expected to deliver much more. In addition, demonstrating the ability to use an opportunistic resource can lead to securing ATLAS allocations on the facility, hence the importance of this work goes beyond merely the initial CPU cycles gained. In this paper, we give an overview and compare the performance, development effort, flexibility and robustness of the various approaches.
OPAD-EDIFIS Real-Time Processing
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1997-01-01
The Optical Plume Anomaly Detection (OPAD) system detects engine hardware degradation in flight vehicles by identifying and quantifying the elemental species found in the plume, analyzing the plume emission spectra in real time. Real-time performance of OPAD relies on extensive software that must report metal amounts in the plume faster than once every 0.5 sec. OPAD software previously written by NASA scientists performed most necessary functions at speeds far below what is needed for real-time operation. The research presented in this report improved the execution speed of the software by optimizing the code without changing the algorithms and by converting it into a parallelized form executed on a shared-memory multiprocessor system. The resulting code was subjected to extensive timing analysis. The report also provides suggestions for further performance improvement by (1) identifying areas of algorithm optimization, (2) recommending commercially available multiprocessor architectures and operating systems to support real-time execution, and (3) presenting an initial study of fault-tolerance requirements.
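The abstract describes parallelizing existing spectral-analysis code for a shared-memory multiprocessor without changing the algorithms. The sketch below illustrates that idea only in spirit and in Python rather than the original code: a spectrum is split into wavelength windows that are fitted in parallel by a process pool. The fit_window routine and the 0.5 s budget check are invented stand-ins, not the actual OPAD routines.

```python
# Illustrative sketch only: parallelizing per-window spectral fits across cores,
# in the spirit of the shared-memory parallelization described above.
# fit_window() is a hypothetical stand-in for the real per-segment analysis.
import time
from multiprocessing import Pool

def fit_window(window):
    """Estimate a metal contribution from one wavelength window (toy model)."""
    wavelengths, intensities = window
    total = sum(intensities)
    centroid = sum(w * i for w, i in zip(wavelengths, intensities)) / total
    return {"centroid": centroid, "signal": total}

def process_spectrum(windows, workers=4):
    """Fit all windows in parallel and report the elapsed wall-clock time."""
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        results = pool.map(fit_window, windows)
    return results, time.perf_counter() - start

if __name__ == "__main__":
    # Fake spectrum split into 8 windows of 1000 samples each.
    windows = [([w / 1000.0 for w in range(1000)],
                [1.0 + (w % 7) for w in range(1000)]) for _ in range(8)]
    results, elapsed = process_spectrum(windows)
    print(len(results), "windows fitted in", round(elapsed, 3), "s (budget 0.5 s)")
```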
ERIC Educational Resources Information Center
Pierce, Preston E., Comp.
A compilation of resources is provided for those interested in examining action taken by the executive branch of the federal government to foster scientific and engineering excellence in the United States in the nineteenth century. The resources are intended for use by pre-college secondary science and social studies teachers. Each of the…
Human factors opportunities to improve Ohio's transportation system : executive summary report.
DOT National Transportation Integrated Search
2005-06-01
Human factors engineering, or ergonomics, is the area of engineering concerned with the human-machine interface. As Ohio's road systems are driven on by people, human factors engineering is certainly relevant. However, human factors have oft...
2012-09-20
CAPE CANAVERAL, Fla. -- At NASA’s Kennedy Space Center in Florida, a groundbreaking was held to mark the start of construction on the Antenna Test Bed Array for the Ka-Band Objects Observation and Monitoring, or Ka-BOOM system. Using ceremonial shovels to mark the site, from left, are Michael Le, lead design engineer and construction manager; Sue Vingris, Cape Design Engineer Co. project manager; Kannan Rengarajan, chief executive officer of Cape Design Engineer Co.; Lutfi Mized, president of Cape Design Engineer Co.; David Roelandt, construction site superintendent with Cape Design Engineer Co.; Marc Seibert, NASA project manager; Michael Miller, NASA project manager; Peter Aragona, KSC’s Electromagnetic Lab manager; Stacy Hopper, KSC’s master planning supervisor; Dr. Bary Geldzabler, NASA chief scientist; and KSC’s Chief Technologist Karen Thompson. The construction site is near the former Vertical Processing Facility, which has been demolished. Workers will begin construction on the pile foundations for the 40-foot-diameter dish antenna arrays and their associated utilities, and prepare the site for the operations command center facility. Photo credit: NASA/Charisse Nahser
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by the need for a study assessing the process improvement, quality management, and analytical techniques taught to students in U.S. undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, and process improvement methods, and how they relate to process performance. Organizations are starting to embrace the Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as de facto process improvement frameworks for improving business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques and process-performance modeling to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The study identifies and discusses in detail the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. universities' systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis that identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, and quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
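The abstract mentions a Monte Carlo model used to baseline and predict customer satisfaction index scores. A toy sketch of that kind of calculation is shown below; the three drivers, their weights, and the uncertainty ranges are invented for illustration and are not taken from the ACSI model or the dissertation.

```python
# Toy Monte Carlo sketch: propagate uncertainty in driver scores and weights
# into a distribution of a satisfaction-index outcome. All numbers are
# illustrative, not the ACSI specification.
import random
import statistics

def simulate_index(n_trials=10000, seed=1):
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        # Driver scores (0-100 scale) drawn around assumed baselines.
        quality = rng.gauss(82, 4)
        expectations = rng.gauss(78, 5)
        value = rng.gauss(75, 6)
        # Weights perturbed around assumed nominal values, then normalized.
        w = [rng.uniform(0.4, 0.5), rng.uniform(0.25, 0.35), rng.uniform(0.2, 0.3)]
        score = (w[0] * quality + w[1] * expectations + w[2] * value) / sum(w)
        results.append(score)
    return results

scores = sorted(simulate_index())
print("mean:", round(statistics.mean(scores), 1),
      "5th-95th percentile:", round(scores[500], 1), "-", round(scores[9500], 1))
```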
Design and implementation of the GLIF3 guideline execution engine.
Wang, Dongwen; Peleg, Mor; Tu, Samson W; Boxwala, Aziz A; Ogunyemi, Omolola; Zeng, Qing; Greenes, Robert A; Patel, Vimla L; Shortliffe, Edward H
2004-10-01
We have developed the GLIF3 Guideline Execution Engine (GLEE) as a tool for executing guidelines encoded in the GLIF3 format. In addition to serving as an interface to the GLIF3 guideline representation model to support the specified functions, GLEE provides defined interfaces to electronic medical records (EMRs) and other clinical applications to facilitate its integration with the clinical information system at a local institution. The execution model of GLEE takes the "system suggests, user controls" approach. A tracing system is used to record an individual patient's state when a guideline is applied to that patient. GLEE can also support an event-driven execution model once it is linked to the clinical event monitor in a local environment. Evaluation has shown that GLEE can be used effectively for proper execution of guidelines encoded in the GLIF3 format. When using it to execute each guideline in the evaluation, GLEE's performance duplicated that of the reference systems implementing the same guideline but taking different approaches. The execution flexibility and generality provided by GLEE, and its integration with a local environment, need to be further evaluated in clinical settings. Integration of GLEE with a specific event-monitoring and order-entry environment is the next step of our work to demonstrate its use for clinical decision support. Potential uses of GLEE also include quality assurance, guideline development, and medical education.
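As a rough illustration of the "system suggests, user controls" execution model and the per-patient trace described above, the sketch below walks an encoded guideline one step at a time, proposes the next action, and only advances after a confirmation callback. The guideline steps and data structures are invented examples and do not reflect the GLIF3 representation or GLEE's actual interfaces.

```python
# Minimal sketch of a "system suggests, user controls" step loop with a
# per-patient trace. Steps below are invented, not the GLIF3/GLEE model.
def run_guideline(steps, patient, confirm):
    trace = []  # records what was suggested and what the user decided
    for step in steps:
        if not step["applies"](patient):
            trace.append((step["name"], "skipped"))
            continue
        suggestion = step["suggest"](patient)
        approved = confirm(step["name"], suggestion)   # the user stays in control
        trace.append((step["name"], "done" if approved else "declined"))
        if approved:
            step.get("apply", lambda p: None)(patient)
    return trace

steps = [
    {"name": "check_bp",
     "applies": lambda p: True,
     "suggest": lambda p: "Measure blood pressure"},
    {"name": "start_treatment",
     "applies": lambda p: p.get("systolic", 0) > 140,
     "suggest": lambda p: "Consider antihypertensive therapy"},
]
patient = {"systolic": 152}
print(run_guideline(steps, patient, confirm=lambda name, text: True))
```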
Personality Characteristics of Engineers
ERIC Educational Resources Information Center
van der Molen, Henk T.; Schmidt, Henk G.; Kruisman, Gerard
2007-01-01
The objective of the current study was to investigate the personality characteristics of a group of engineers with a variety of years of experience. It was executed to remedy shortcomings of the literature concerning this issue and to produce suggestions for a postgraduate training programme for engineers. A total of 103 engineers were tested with…
77 FR 66767 - Airworthiness Directives; Pratt & Whitney Canada Corp. Turboshaft Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-07
... Canada Corp. Turboshaft Engines AGENCY: Federal Aviation Administration (FAA), DOT. ACTION: Notice of..., PW207D2, and PW207E turboshaft engines. This proposed AD was prompted by the discovery that certain power... the FAA, Engine & Propeller Directorate, 12 New England Executive Park, Burlington, MA. For...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-12
...; Information Collection; Architect-Engineer Qualifications (SF 330) AGENCIES: Department of Defense (DOD... approve an extension of a currently approved information collection requirement for the Architect-Engineer... Standard Form 330, Part I is used by all Executive agencies to obtain information from architect-engineer...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
...; Submission for OMB Review; Architect-Engineer Qualifications (SF 330) AGENCIES: Department of Defense (DOD... extension of a previously approved information collection requirement for the Architect-Engineer... 330, Part I is used by all Executive agencies to obtain information from architect-engineer firms...
77 FR 9869 - Airworthiness Directives; Rolls-Royce Deutschland Ltd & Co KG (RRD) Turbofan Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-21
... Deutschland Ltd & Co KG (RRD) Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... directive (AD) for RRD BR700-715A1-30, BR700-715B1-30, and BR700-715C1-30 turbofan engines. The existing AD... Engineer, Engine Certification Office, FAA, Engine & Propeller Directorate, 12 New England Executive Park...
Lean Mixture Engines Testing and Evaluation Program : Volume 1. Executive Summary.
DOT National Transportation Integrated Search
1975-01-01
This report is aimed at defining analytically and demonstrating experimentally the potential of the 'lean-burn concept'. Fuel consumption and emissions data are obtained on the engine dynamometer for the baseline engine, and two lean-burn configurati...
Passenger Car Spark Ignition Data Base : Volume 1. Executive Summary.
DOT National Transportation Integrated Search
1979-12-01
Test data was obtained from spark ignition production and preproduction engines at the engine and vehicle level. The engines were applicable for vehicles 2000 to 3000 pounds in weight. The data obtained provided trade-offs between fuel economy, power...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzo, Davinia B.; Blackburn, Mark R.
2018-03-30
As systems become more complex, systems engineers rely on experts to inform decisions. There are few experts and limited data in many complex new technologies. This challenges systems engineers as they strive to plan activities such as qualification in an environment where technical constraints are coupled with the traditional cost, risk, and schedule constraints. Bayesian network (BN) models provide a framework to aid systems engineers in planning qualification efforts with complex constraints by harnessing expert knowledge and incorporating technical factors. By quantifying causal factors, a BN model can provide data about the risk of implementing a decision supplemented with information on driving factors. This allows a systems engineer to make informed decisions and examine “what-if” scenarios. This paper discusses a novel process developed to define a BN model structure based primarily on expert knowledge supplemented with extremely limited data (25 data sets or less). The model was developed to aid qualification decisions—specifically to predict the suitability of six degrees of freedom (6DOF) vibration testing for qualification. The process defined the model structure with expert knowledge in an unbiased manner. Finally, validation during the process execution and of the model provided evidence the process may be an effective tool in harnessing expert knowledge for a BN model.
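To make the idea of a small expert-elicited Bayesian network concrete, here is a toy two-parent network evaluated by direct enumeration. The variables, conditional probability tables, and the "suitability" question are invented placeholders and do not come from the 6DOF qualification model in the paper.

```python
# Toy Bayesian network evaluated by enumeration. Structure and CPTs are
# invented placeholders for expert-elicited values, not the paper's model.
# Variables: F = fixture fidelity adequate, C = control accuracy adequate,
#            S = 6DOF test suitable for qualification.
p_F = {True: 0.7, False: 0.3}
p_C = {True: 0.8, False: 0.2}
# P(S=True | F, C), elicited-style numbers chosen purely for illustration.
p_S_given = {(True, True): 0.9, (True, False): 0.5,
             (False, True): 0.4, (False, False): 0.1}

def prob_suitable(evidence=None):
    """P(S=True | evidence) by summing over the unobserved parents."""
    evidence = evidence or {}
    num = den = 0.0
    for f in (True, False):
        if "F" in evidence and evidence["F"] != f:
            continue
        for c in (True, False):
            if "C" in evidence and evidence["C"] != c:
                continue
            joint = p_F[f] * p_C[c]
            num += joint * p_S_given[(f, c)]
            den += joint
    return num / den

print("P(suitable) =", round(prob_suitable(), 3))
print("P(suitable | poor fixture) =", round(prob_suitable({"F": False}), 3))
```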
Spacecraft systems engineering: An introduction to the process at GSFC
NASA Technical Reports Server (NTRS)
Fragomeni, Tony; Ryschkewitsch, Michael G.
1993-01-01
The main objective in systems engineering is to devise a coherent total system design capable of achieving the stated requirements. Requirements should be rigid. However, they should be continuously challenged, rechallenged and/or validated. The systems engineer must specify every requirement in order to design, document, implement and conduct the mission. Each and every requirement must be logically considered, traceable and evaluated through various analysis and trade studies in a total systems design. Margins must be determined to be realistic as well as adequate. The systems engineer must also continuously close the loop and verify system performance against the requirements. The fundamental role of the systems engineer, however, is to engineer, not manage. Yet, in large, complex missions, where more than one systems engineer is required, someone needs to manage the systems engineers, and we call them 'systems managers.' Systems engineering management is an overview function which plans, guides, monitors and controls the technical execution of a project as implemented by the systems engineers. As the project moves on through Phases A and B into Phase C/D, the systems engineering tasks become a small portion of the total effort. The systems management role increases since discipline subsystem engineers are conducting analyses and reviewing test data for final review and acceptance by the systems managers.
Control Data ICEM: A vendor's IPAD-like system
NASA Technical Reports Server (NTRS)
Feldman, H. D.
1984-01-01
The IPAD program's goal, which was to integrate the aerospace applications used in support of the engineering design process, is discussed. It is still the key goal, and it has evolved into a design centered on the use of database management, networking, and global user executive technology. An integrated CAD/CAM system, modeled in part after the IPAD program and containing elements of the program's goals, was developed. The Integrated Computer-Aided Engineering and Manufacturing (ICEM) program started with the acquisition of AD-2000 and Synthavision. AD-2000 has evolved into a production geometry creation and drafting system called CD/2000. Synthavision has grown into a full-scale three-dimensional modeling system, the ICEM Modeler.
Bridging the gap: simulations meet knowledge bases
NASA Astrophysics Data System (ADS)
King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.
2003-09-01
Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Action (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers, making them an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.
Japanese project aims at supercomputer that executes 10 gflops
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burskey, D.
1984-05-03
Dubbed supercom by its multicompany design team, the decade-long project's goal is an engineering supercomputer that can execute 10 billion floating-point operations/s, about 20 times faster than today's supercomputers. The project, guided by Japan's Ministry of International Trade and Industry (MITI) and the Agency of Industrial Science and Technology, encompasses three parallel research programs, each aimed at one aspect of the supercomputer. One program should lead to superfast logic and memory circuits, another to a system architecture that will afford the best performance, and the last to the software that will ultimately control the computer. The work on logic and memory chips is based on GaAs circuits, Josephson junction devices, and high-electron-mobility transistor structures. The architecture will involve parallel processing.
76 FR 55553 - Airworthiness Standards; Rotor Overspeed Requirements; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-08
... concerning this final rule, contact Tim Mouzakis, Engine and Propeller Directorate Standards Staff, ANE-111, Engine and Propeller Directorate, Federal Aviation Administration, 12 New England Executive Park...
Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine
NASA Astrophysics Data System (ADS)
Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.
2006-12-01
The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* &Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client &server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.
Hands-on Summer Camp to Attract K-12 Students to Engineering Fields
ERIC Educational Resources Information Center
Yilmaz, Muhittin; Ren, Jianhong; Custer, Sheryl; Coleman, Joyce
2010-01-01
This paper explains the organization and execution of a summer engineering outreach camp designed to attract and motivate high school students as well as increase their awareness of various engineering fields. The camp curriculum included hands-on, competitive design-oriented engineering projects from several disciplines: the electrical,…
NASA Technical Reports Server (NTRS)
Waid, Michael
2011-01-01
Manufacturing process, milestones and inputs are unknowns to first-time users of the manufacturing facilities. The Manufacturing Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their project engineering personnel in manufacturing planning and execution. Material covered includes a roadmap of the manufacturing process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, products, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
2003-06-01
Data Access (1980s): "What were unit sales in New England last March?" Relational databases (RDBMS), Structured Query Language (SQL) ... macros written in Visual Basic for Applications (VBA). [Figure 20, "Iteration two class diagram": Tech OASIS export script, import filter, data processing method, MS Excel, and VBA macro components.]
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-01
... DEPARTMENT OF COMMERCE International Trade Administration Executive-led Business Development... Commerce's International Trade Administration is organizing a business development trade mission to Kabul... sectors include: construction (including engineering, architecture, transportation and logistics, and...
Executive control systems in the engineering design environment. M.S. Thesis
NASA Technical Reports Server (NTRS)
Hurst, P. W.
1985-01-01
An executive control system (ECS) is a software structure for unifying various application codes into a comprehensive system. It provides a library of applications, a uniform access method through a central user interface, and a data management facility. A survey of twenty-four executive control systems designed to unify various CAD/CAE applications for use in diverse engineering design environments within government and industry was conducted. The goals of this research were to establish system requirements, to survey state-of-the-art architectural design approaches, and to provide an overview of the historical evolution of these systems. Foundations for design are presented and include environmental settings, system requirements, major architectural components, and a system classification scheme based on knowledge of the supported engineering domain(s). An overview of the design approaches used in developing the major architectural components of an ECS is presented with examples taken from the surveyed systems. Attention is drawn to four major areas of ECS development: interdisciplinary usage; standardization; knowledge utilization; and computer science technology transfer.
A BPMN solution for chaining OGC services to quality assure location-based crowdsourced data
NASA Astrophysics Data System (ADS)
Meek, Sam; Jackson, Mike; Leibovici, Didier G.
2016-02-01
The Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard enables access to a centralized repository of processes and services from compliant clients. A crucial part of the standard includes the provision to chain disparate processes and services to form a reusable workflow. To date this has been realized by methods such as embedding XML requests, using Business Process Execution Language (BPEL) engines, and using other external orchestration engines. Although these allow the user to define tasks and data artifacts as web services, they are often considered inflexible and complicated, often due to vendor-specific solutions and inaccessible documentation. This paper introduces a new method of flexible service chaining using the Business Process Model and Notation (BPMN) standard. A prototype system has been built upon an existing open-source BPMN suite to illustrate the advantages of the approach. The motivation for the software design is qualification of crowdsourced data for use in policy-making. The software is tested as part of a project that seeks to qualify, assure, and add value to crowdsourced data in a biological monitoring use case.
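The abstract contrasts BPMN-based chaining with approaches such as embedded XML requests and BPEL. As background, a WPS process is typically invoked through an Execute request; the sketch below assembles two key-value-pair Execute URLs and feeds a (hypothetical) output reference from the first process into the second, which is the basic chaining step any orchestration approach has to express. The endpoint, process identifiers, parameter names, and output reference are all invented.

```python
# Sketch of chaining two hypothetical OGC WPS processes via key-value-pair
# Execute requests: the output reference of the first call becomes an input
# of the second. Endpoint and process/parameter names are invented.
from urllib.parse import urlencode

WPS_ENDPOINT = "https://example.org/wps"  # hypothetical server

def execute_url(identifier, data_inputs):
    """Build a WPS 1.0.0-style KVP Execute URL (inputs joined with ';')."""
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": identifier,
        "datainputs": ";".join(f"{k}={v}" for k, v in data_inputs.items()),
    }
    return WPS_ENDPOINT + "?" + urlencode(params)

# Step 1: a positional-accuracy check on a crowdsourced observation set.
step1 = execute_url("qa.PositionCheck", {"observations": "obs_2016_02.gml",
                                         "tolerance_m": "25"})
# Step 2: feed the (assumed) output reference of step 1 into an attribute check.
step1_output_ref = "https://example.org/outputs/position_checked.gml"  # placeholder
step2 = execute_url("qa.AttributeCheck", {"observations": step1_output_ref,
                                          "schema": "species_list.xsd"})
print(step1)
print(step2)
```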
SHIWA Services for Workflow Creation and Sharing in Hydrometeorology
NASA Astrophysics Data System (ADS)
Terstyanszky, Gabor; Kiss, Tamas; Kacsuk, Peter; Sipos, Gergely
2014-05-01
Researchers want to run scientific experiments on Distributed Computing Infrastructures (DCI) to access large pools of resources and services. To run these experiments requires specific expertise that they may not have. Workflows can hide resources and services as a virtualisation layer providing a user interface that researchers can use. There are many scientific workflow systems but they are not interoperable. To learn a workflow system and create workflows may require significant efforts. Considering these efforts it is not reasonable to expect that researchers will learn new workflow systems if they want to run workflows developed in other workflow systems. To overcome it requires creating workflow interoperability solutions to allow workflow sharing. The FP7 'Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs' (SHIWA) project developed the Coarse-Grained Interoperability concept (CGI). It enables recycling and sharing workflows of different workflow systems and executing them on different DCIs. SHIWA developed the SHIWA Simulation Platform (SSP) to implement the CGI concept integrating three major components: the SHIWA Science Gateway, the workflow engines supported by the CGI concept and DCI resources where workflows are executed. The science gateway contains a portal, a submission service, a workflow repository and a proxy server to support the whole workflow life-cycle. The SHIWA Portal allows workflow creation, configuration, execution and monitoring through a Graphical User Interface using the WS-PGRADE workflow system as the host workflow system. The SHIWA Repository stores the formal description of workflows and workflow engines plus executables and data needed to execute them. It offers a wide-range of browse and search operations. To support non-native workflow execution the SHIWA Submission Service imports the workflow and workflow engine from the SHIWA Repository. This service either invokes locally or remotely pre-deployed workflow engines or submits workflow engines with the workflow to local or remote resources to execute workflows. The SHIWA Proxy Server manages certificates needed to execute the workflows on different DCIs. Currently SSP supports sharing of ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflows. Further workflow systems can be added to the simulation platform as required by research communities. The FP7 'Building a European Research Community through Interoperable Workflows and Data' (ER-flow) project disseminates the achievements of the SHIWA project to build workflow user communities across Europe. ER-flow provides application supports to research communities within (Astrophysics, Computational Chemistry, Heliophysics and Life Sciences) and beyond (Hydrometeorology and Seismology) to develop, share and run workflows through the simulation platform. The simulation platform supports four usage scenarios: creating and publishing workflows in the repository, searching and selecting workflows in the repository, executing non-native workflows and creating and running meta-workflows. The presentation will outline the CGI concept, the SHIWA Simulation Platform, the ER-flow usage scenarios and how the Hydrometeorology research community runs simulations on SSP.
DOT National Transportation Integrated Search
2011-06-01
The Executive Steering Group (ESG) of the National Executive Committee (EXCOM) for Space-Based Positioning, Navigation, and Timing (PNT) directed the National Space-Based PNT Systems Engineering Forum (NPEF) to conduct an assessment of the effect...
Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik
2015-06-09
Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described at a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
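The key point of the abstract is that expressing a sampling algorithm as a dataflow (explicit dependencies between tasks) lets a framework run independent parts in parallel. The sketch below is a generic, much-simplified dataflow executor, not Copernicus itself: tasks declare their inputs, and a task runs as soon as all of its inputs are available. The task names and functions are invented.

```python
# Much-simplified dataflow execution: each task declares its inputs, and a
# task runs once all of its inputs exist. This is a generic illustration,
# not the Copernicus API; task names and functions are invented.
def run_dataflow(tasks):
    data, pending = {}, dict(tasks)
    while pending:
        ready = [n for n, (inputs, _) in pending.items()
                 if all(i in data for i in inputs)]
        if not ready:
            raise RuntimeError("cyclic or unsatisfiable dependencies")
        # 'ready' tasks are mutually independent and could run in parallel.
        for name in ready:
            inputs, fn = pending.pop(name)
            data[name] = fn(*[data[i] for i in inputs])
    return data

tasks = {
    "seed_structures": ([], lambda: ["conf_a", "conf_b"]),
    "run_a": (["seed_structures"], lambda seeds: {"traj": seeds[0], "score": 0.7}),
    "run_b": (["seed_structures"], lambda seeds: {"traj": seeds[1], "score": 0.4}),
    "combine": (["run_a", "run_b"],
                lambda a, b: max(a, b, key=lambda r: r["score"])["traj"]),
}
print(run_dataflow(tasks))  # 'combine' picks the better-sampled region to refine
```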
Mission Data System Java Edition Version 7
NASA Technical Reports Server (NTRS)
Reinholtz, William K.; Wagner, David A.
2013-01-01
The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures including violations of safety constraints expressed in the plan can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies, performance overhead, and improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (try an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.
Implementation of jump-diffusion algorithms for understanding FLIR scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1995-07-01
Our pattern theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those from Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/Reality Engine.
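As a rough illustration of alternating jump and diffusion moves over a posterior, the sketch below infers the number and positions of 1-D "objects" from noisy data: a jump move proposes adding or removing an object (accepted by a Metropolis ratio), and a diffusion move perturbs the positions. It is a toy analogue of the scene-inference loop, not the FLIR algorithm; the likelihood, prior penalty, and proposal settings are invented, and the trans-dimensional acceptance rule is deliberately simplified.

```python
# Toy analogue of a jump-diffusion scene-inference loop (not the FLIR system):
# infer how many 1-D "objects" are present, and where, from noisy data.
import math
import random

rng = random.Random(0)
GRID = [i / 50.0 for i in range(51)]           # observation grid on [0, 1]
TRUE_OBJECTS = [0.3, 0.7]
SIGMA = 0.05                                   # measurement noise

def render(positions):
    """Simulated 'scene': sum of Gaussian bumps at the object positions."""
    return [sum(math.exp(-((x - p) ** 2) / 0.005) for p in positions) for x in GRID]

observed = [v + rng.gauss(0, SIGMA) for v in render(TRUE_OBJECTS)]

def log_post(positions):
    resid = sum((o - m) ** 2 for o, m in zip(observed, render(positions)))
    return -resid / (2 * SIGMA ** 2) - 3.0 * len(positions)   # penalty on count

def metropolis(current, proposal):
    delta = log_post(proposal) - log_post(current)
    return proposal if delta >= 0 or rng.random() < math.exp(delta) else current

positions = []                                 # start with an empty scene
for step in range(3000):
    if step % 2 == 0:                          # jump move: add or remove an object
        if positions and rng.random() < 0.5:
            k = rng.randrange(len(positions))
            prop = positions[:k] + positions[k + 1:]
        else:
            prop = positions + [rng.random()]
    else:                                      # diffusion move: perturb positions
        prop = [p + rng.gauss(0, 0.02) for p in positions]
    positions = metropolis(positions, prop)

print("estimated objects:", sorted(round(p, 2) for p in positions))
```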
Students Compete in NASA's Student Launch Competition
2018-03-30
NASA's Student Launch competition challenges middle school, high school and college teams to design, build, test and fly a high-powered, reusable rocket to an altitude of one mile above ground level while carrying a payload. During the eight-month process, the selected teams will go through a series of design, test and readiness reviews that resemble the real-world process of rocket development. In addition to building and preparing their rocket and payload, the teams must also create and execute an education and outreach program that will share their work with their communities and help inspire the next generation of scientists, engineers and explorers. Student Launch is hosted by NASA's Marshall Space Flight Center in Huntsville, Alabama, and is managed by Marshall's Academic Affairs Office to further NASA’s major education goal of attracting and encouraging students to pursue degrees and careers in the STEM fields of science, technology, engineering and mathematics.
2011-01-01
Background: Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present an application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment and the challenge of user-friendly representation of clinical logic. Results: We present our implementation of a workflow engine technology that addresses the two above-described challenges in delivering clinical decision support. Our system is based on the cross-industry XML (extensible markup language) Process Definition Language (XPDL) standard. The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to lack of standardization of EHR systems in this area. We present results of our evaluation of the flowchart-based graphical notation as well as an architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture. Conclusions: We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform. PMID:21477364
The Action Execution Process Implemented in Different Cognitive Architectures: A Review
NASA Astrophysics Data System (ADS)
Dong, Daqi; Franklin, Stan
2014-12-01
An agent achieves its goals by interacting with its environment, cyclically choosing and executing suitable actions. An action execution process is a reasonable and critical part of an entire cognitive architecture, because the process of generating executable motor commands is not only driven by low-level environmental information, but is also initiated and affected by the agent's high-level mental processes. This review focuses on cognitive models of action, or more specifically, of the action execution process, as implemented in a set of popular cognitive architectures. We examine the representations and procedures inside the action execution process, as well as the cooperation between action execution and other high-level cognitive modules. We finally conclude with some general observations regarding the nature of action execution.
40 CFR 52.2465 - Original identification of plan section.
Code of Federal Regulations, 2011 CFR
2011-07-01
... engineering practice (GEP) stack height,” “hazardous air pollutant,” “nearby,” “stationary source” and... good engineering practice (GEP) stack height requirements submitted on May 12, 1986 by the Virginia... Executive Director, Virginia State Air Pollution Control Board, transmitting the revised good engineering...
77 FR 42677 - Special Conditions: General Electric CT7-2E1 Turboshaft Engine
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-20
... Mihail, ANE-111, Engine and Propeller Directorate, Aircraft Certification Service, 12 New England..., ANE-7 Engine and Propeller Directorate, Aircraft Certification Service, 12 New England Executive Park... additional requirements for the rating's definition, overspeed, controls system, and endurance test because...
ERIC Educational Resources Information Center
Shantha, S.; Mekala, S.
2017-01-01
The mastery of speaking skills in English has become a major requisite in engineering industry. Engineers are expected to possess speaking skills for executing their routine activities and career prospects. The article focuses on the experimental study conducted to improve English spoken proficiency of Indian engineering students using task-based…
Developing knowledge intensive ideas in engineering education: the application of camp methodology
NASA Astrophysics Data System (ADS)
Heidemann Lassen, Astrid; Løwe Nielsen, Suna
2011-11-01
Background: Globalization, technological advancement, environmental problems, etc. challenge organizations not just to consider cost-effectiveness, but also to develop new ideas in order to build competitive advantages. Hence, methods to deliberately enhance creativity and facilitate its processes of development must also play a central role in engineering education. However, so far the engineering education literature provides little attention to the important discussion of how to develop knowledge intensive ideas based on creativity methods and concepts. Purpose: The purpose of this article is to investigate how to design creative camps from which knowledge intensive ideas can unfold. Design/method/sample: A framework on integration of creativity and knowledge intensity is first developed, and then tested through the planning, execution and evaluation of a specialized creativity camp with focus on supply chain management. Detailed documentation of the learning processes of the participating 49 engineering and business students is developed through repeated interviews during the process as well as a survey. Results: The research illustrates the process of development of ideas, and how the participants through interdisciplinary collaboration, cognitive flexibility and joint ownership develop highly innovative and knowledge-intensive ideas, with direct relevance for the four companies whose problems they address. Conclusions: The article demonstrates how the creativity camp methodology holds the potential of combining advanced academic knowledge and creativity, to produce knowledge intensive ideas, when the design is based on ideas of experiential learning as well as creativity principles. This makes the method a highly relevant learning approach for engineering students in the search for skills to both develop and implement innovative ideas.
De La Flor, Grace; Ojaghi, Mobin; Martínez, Ignacio Lamata; Jirotka, Marina; Williams, Martin S; Blakeborough, Anthony
2010-09-13
When transitioning local laboratory practices into distributed environments, the interdependent relationship between experimental procedure and the technologies used to execute experiments becomes highly visible and a focal point for system requirements. We present an analysis of ways in which this reciprocal relationship is reconfiguring laboratory practices in earthquake engineering as a new computing infrastructure is embedded within three laboratories in order to facilitate the execution of shared experiments across geographically distributed sites. The system has been developed as part of the UK Network for Earthquake Engineering Simulation e-Research project, which links together three earthquake engineering laboratories at the universities of Bristol, Cambridge and Oxford. We consider the ways in which researchers have successfully adapted their local laboratory practices through the modification of experimental procedure so that they may meet the challenges of coordinating distributed earthquake experiments.
Cusack, Rhodri; Vicente-Grabovetsky, Alejandro; Mitchell, Daniel J; Wild, Conor J; Auer, Tibor; Linke, Annika C; Peelle, Jonathan E
2014-01-01
Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, from simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
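The core mechanism described above, modules with tracked dependencies where only missing work gets (re)run, can be illustrated with a tiny generic sketch. It is not the aa API; the module names and the "completed" bookkeeping are invented.

```python
# Generic illustration of a dependency-tracked pipeline in which a module is
# re-run only if its output is missing. Not the aa API; names are invented.
completed = {}   # module name -> output (stands in for files on disk)

MODULES = [
    ("realign",    [],            lambda: "realigned_images"),
    ("normalise",  ["realign"],   lambda realign: "normalised_" + realign),
    ("smooth",     ["normalise"], lambda norm: "smoothed_" + norm),
    ("firstlevel", ["smooth"],    lambda sm: "stats_from_" + sm),
]

def run_pipeline(modules):
    for name, deps, fn in modules:              # modules listed in dependency order
        if name in completed:
            print(f"skip {name} (already done)")
            continue
        missing = [d for d in deps if d not in completed]
        if missing:
            raise RuntimeError(f"{name} is missing upstream results: {missing}")
        completed[name] = fn(*[completed[d] for d in deps])
        print(f"ran  {name} -> {completed[name]}")

run_pipeline(MODULES)
completed.pop("smooth")        # pretend the smoothing output was invalidated
completed.pop("firstlevel")
run_pipeline(MODULES)          # only the affected downstream modules re-run
```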
Model-Driven Engineering of Machine Executable Code
NASA Astrophysics Data System (ADS)
Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira
Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as the target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.
Development of a Computer Architecture to Support the Optical Plume Anomaly Detection (OPAD) System
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1996-01-01
The NASA OPAD spectrometer system relies heavily on extensive software which repetitively extracts spectral information from the engine plume and reports the amounts of metals which are present in the plume. The development of this software is at a sufficiently advanced stage where it can be used in actual engine tests to provide valuable data on engine operation and health. This activity will continue and, in addition, the OPAD system is planned to be used in flight aboard space vehicles. The two implementations, test-stand and in-flight, may have some differing requirements. For example, the data stored during a test-stand experiment are much more extensive than in the in-flight case. In both cases though, the majority of the requirements are similar. New data from the spectrograph is generated at a rate of once every 0.5 sec or faster. All processing must be completed within this period of time to maintain real-time performance. Every 0.5 sec, the OPAD system must report the amounts of specific metals within the engine plume, given the spectral data. At present, the software in the OPAD system performs this function by solving the inverse problem. It uses powerful physics-based computational models (the SPECTRA code), which receive amounts of metals as inputs to produce the spectral data that would have been observed, had the same metal amounts been present in the engine plume. During the experiment, for every spectrum that is observed, an initial approximation is performed using neural networks to establish an initial metal composition which approximates as accurately as possible the real one. Then, using optimization techniques, the SPECTRA code is repetitively used to produce a fit to the data, by adjusting the metal input amounts until the produced spectrum matches the observed one to within a given level of tolerance. This iterative solution to the original problem of determining the metal composition in the plume requires a relatively long period of time to execute the software in a modern single-processor workstation, and therefore real-time operation is currently not possible. A different number of iterations may be required to perform spectral data fitting per spectral sample. Yet, the OPAD system must be designed to maintain real-time performance in all cases. Although faster single-processor workstations are available for execution of the fitting and SPECTRA software, this option is unattractive due to the excessive cost associated with very fast workstations and also due to the fact that such hardware is not easily expandable to accommodate future versions of the software which may require more processing power. Initial research has already demonstrated that the OPAD software can take advantage of a parallel computer architecture to achieve the necessary speedup. Current work has improved the software by converting it into a form which is easily parallelizable. Timing experiments have been performed to establish the computational complexity and execution speed of major components of the software. This work provides the foundation of future work which will create a fully parallel version of the software executing in a shared-memory multiprocessor system.
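The fitting loop described above (an initial guess followed by iterative adjustment of metal amounts until the modeled spectrum matches the observation) can be sketched generically. In the sketch below, forward_model() is a trivial linear stand-in for the physics-based SPECTRA code, the basis spectra and amounts are invented, and the coordinate-descent refinement is an illustrative choice rather than the optimizer actually used by OPAD.

```python
# Illustrative inverse-problem loop: adjust metal amounts until the modeled
# spectrum matches the observed one. forward_model() is a linear stand-in for
# the SPECTRA code; all numbers are invented.
BASIS = {"Fe": [0.9, 0.4, 0.1, 0.0], "Cr": [0.1, 0.7, 0.6, 0.2], "Ni": [0.0, 0.2, 0.5, 0.9]}

def forward_model(amounts):
    return [sum(amounts[m] * BASIS[m][i] for m in BASIS) for i in range(4)]

def misfit(amounts, observed):
    return sum((o - s) ** 2 for o, s in zip(observed, forward_model(amounts)))

def fit(observed, initial, step=0.5, tol=1e-8, max_iter=200):
    amounts = dict(initial)                  # e.g. a neural-network first guess
    for _ in range(max_iter):
        improved = False
        for metal in amounts:                # simple coordinate descent
            for delta in (+step, -step):
                trial = dict(amounts)
                trial[metal] = max(0.0, trial[metal] + delta)
                if misfit(trial, observed) < misfit(amounts, observed) - tol:
                    amounts, improved = trial, True
        step = step if improved else step * 0.5
        if step < 1e-4:
            break
    return amounts

true_amounts = {"Fe": 1.2, "Cr": 0.3, "Ni": 0.8}
observed = forward_model(true_amounts)
print(fit(observed, initial={"Fe": 1.0, "Cr": 0.5, "Ni": 0.5}))
```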
Butt, Muhammad Arif; Akram, Muhammad
2016-01-01
We present a new intuitionistic fuzzy rule-based decision-making system, based on intuitionistic fuzzy sets, for the process scheduler of a batch operating system. Our proposed intuitionistic fuzzy scheduling algorithm inputs the nice value and burst time of all available processes in the ready queue, intuitionistically fuzzifies the input values, triggers the appropriate rules of our intuitionistic fuzzy inference engine, and finally calculates the dynamic priority (dp) of every process in the ready queue. Once the dp of every process is calculated, the ready queue is sorted in decreasing order of dp. The process with the maximum dp value is sent to the central processing unit for execution. Finally, we show the complete working of our algorithm on two different data sets and give comparisons with some standard non-preemptive process schedulers.
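The overall flow described in the abstract (fuzzify nice value and burst time, fire rules, compute a dynamic priority, sort the ready queue, dispatch the head) is sketched schematically below. The membership functions, the hesitation margin, the rule, and the dp formula are invented stand-ins; the paper defines its own intuitionistic fuzzy rule base.

```python
# Schematic stand-in for the scheduler's flow (fuzzify -> rules -> dp -> sort).
# Membership functions, rule, and dp formula are invented, not the paper's.
def fuzzify(value, lo, hi):
    """Return a (membership, non-membership) pair for 'value is high'."""
    mu = min(1.0, max(0.0, (value - lo) / (hi - lo)))
    nu = max(0.0, 1.0 - mu - 0.1)          # leave a small hesitation margin
    return mu, nu

def dynamic_priority(nice, burst):
    # 'High priority' is supported by a low nice value and a short burst time.
    mu_nice, nu_nice = fuzzify(-nice, -19, 20)      # lower nice -> higher support
    mu_burst, nu_burst = fuzzify(-burst, -50, 0)    # shorter burst -> higher support
    mu = min(mu_nice, mu_burst)                     # rule firing strength (AND)
    nu = max(nu_nice, nu_burst)
    return mu * (1.0 - nu)                          # defuzzified dp score

ready_queue = [("editor", 0, 12), ("backup", 10, 40), ("compiler", -5, 25)]
ready_queue.sort(key=lambda p: dynamic_priority(p[1], p[2]), reverse=True)
print("dispatch order:", [name for name, _, _ in ready_queue])
```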
A Response Surface Methodology for Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Altus, Troy David; Sobieski, Jaroslaw (Technical Monitor)
2002-01-01
The report describes a new method for optimization of engineering systems such as aerospace vehicles whose design must harmonize a number of subsystems and various physical phenomena, each represented by a separate computer code, e.g., aerodynamics, structures, propulsion, performance, etc. To represent the system internal couplings, the codes receive output from other codes as part of their inputs. The system analysis and optimization task is decomposed into subtasks that can be executed concurrently, each subtask conducted using local state and design variables and holding constant a set of the system-level design variables. The subtask results are stored in the form of response surfaces (RS) fitted in the space of the system-level variables, to be used as subtask surrogates in a system-level optimization whose purpose is to optimize the system objective(s) and reconcile the system internal couplings. By virtue of decomposition and execution concurrency, the method enables a broad workfront in the organization of an engineering project involving a number of specialty groups that might be geographically dispersed, and it exploits the contemporary computing technology of massively concurrent and distributed processing. The report includes a demonstration test case of supersonic business jet design.
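The role of a response surface in the method above is to act as a cheap surrogate for a subtask, fitted over the shared system-level variables. The sketch below fits a quadratic surrogate to a handful of invented subtask results over a single system-level variable and then picks the best point on the surrogate; the real method works in many dimensions and reconciles couplings between subsystems.

```python
# Toy response-surface surrogate: fit a quadratic to a few subtask evaluations
# over one system-level variable, then optimize on the cheap surrogate.
# The sample data are invented; BLISS itself uses multi-dimensional surfaces.
import numpy as np

# Pretend each pair is (system-level variable, expensive subtask objective).
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
f = np.array([3.10, 2.45, 2.20, 2.38, 3.05])

coeffs = np.polyfit(x, f, 2)          # quadratic response surface
surrogate = np.poly1d(coeffs)

grid = np.linspace(0.0, 1.0, 201)     # optimize on the surrogate, not the subtask
x_best = grid[np.argmin(surrogate(grid))]
print("surrogate minimum near x =", round(float(x_best), 3),
      "predicted objective =", round(float(surrogate(x_best)), 3))
```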
Using Wearable Computers in Shuttle Processing: A Feasibility Study
NASA Technical Reports Server (NTRS)
Centeno, Martha A.; Correa, Daisy; Groh-Hammond, Marcia
2001-01-01
Shuttle processing operations are performed following prescribed instructions compiled in a Work Authorization Document (WAD). Until very recently, WADs were printed so that they could be properly executed, including the buy off of each and every step by the appropriate authorizing agent. However, with the development of EPICs, Maximo, and PeopleSoft applications, some of these documents are now available in electronic format; hence, it is possible for technicians and engineers to access them on line and buy off the steps electronically. To take full advantage of these developments, technicians need access to such documents at the point of job execution. Body wearable computers present an opportunity to develop a WAD delivery system that enables access while preserving technician's mobility, safety levels, and quality of work done. The primary objectives of this project were to determine if body wearable computers are a feasible delivery system for WADs. More specifically, identify and recommend specific brands of body wearable computers readily available on the market. Thus, this effort has field-tested this technology in two areas of shuttle processing, and it has examined the usability of the technology. Results of two field tests and a Human Factors Usability Test are presented. Section 2 provides a description of the body wearable computer technology. Section 3 presents the test at the Space Shuttle Main Engine (SSME) Shop. Section 4 presents the results of the integration test at the Solid Rocket Boosters Assembly and Refurbishing Facility (SRBARF). Section 5 presents the results of the usability test done at the Operations Support Building (OSB).
1983-06-01
TFE731 from Garrett Turbine Engine Company ... NASA QCGAT (Quiet, Clean General-Aviation Turbofan) ... engines, with as much as 3.67 for the Garrett TFE731 engine. Increasing the axial spacing between rotor and stator stages reduces turbomachinery ... envelope. Except for the TFE731, none of the engines for business/executive jets had absorptive duct linings within the engine envelope. Because the
Teaching Problem-Solving Skills to Nuclear Engineering Students
ERIC Educational Resources Information Center
Waller, E.; Kaye, M. H.
2012-01-01
Problem solving is an essential skill for nuclear engineering graduates entering the workforce. Training in qualitative and quantitative aspects of problem solving allows students to conceptualise and execute solutions to complex problems. Solutions to problems in high consequence fields of study such as nuclear engineering require rapid and…
77 FR 4736 - Nonconformance Penalties for On-Highway Heavy-Duty Diesel Engines
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-31
... entire model year 2012 production. This manufacturer intends to use a different technology to meet the NO.... (2) Baseline Engine Technology Most manufacturers generally have never had production engines at 0.50... Risks'' H. Executive Order 13211 (Energy Effects) I. National Technology Transfer Advancement Act J...
75 FR 55393 - Aviation Rulemaking Advisory Committee Meeting on Transport Airplane and Engine Issues
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-10
... Committee Meeting on Transport Airplane and Engine Issues AGENCY: Federal Aviation Administration (FAA), DOT... Rulemaking Advisory Committee (ARAC) to discuss transport airplane and engine (TAE) issues. DATES: The...: Opening Remarks, Review Agenda and Minutes. FAA Report. ARAC Executive Committee Report. Transport Canada...
Enhanced and Conventional Project-Based Learning in an Engineering Design Module
ERIC Educational Resources Information Center
Chua, K. J.; Yang, W. M.; Leo, H. L.
2014-01-01
Engineering education focuses chiefly on students' ability to solve problems. While most engineering students are proficient in solving paper questions, they may not be proficient at providing optimal solutions to pragmatic project-based problems that require systematic learning strategy, innovation, problem-solving, and execution. The…
Engineering the ATLAS TAG Browser
NASA Astrophysics Data System (ADS)
Zhang, Qizhi; ATLAS Collaboration
2011-12-01
ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary services are discussed. We also describe strategies for dealing with data that may vary over time, such as run-dependent trigger decision decoding. Along with examples, we illustrate how programming techniques in multiple languages (PHP, JAVASCRIPT, XML, AJAX, and PL/SQL) have been blended to achieve the required results. Finally, we evaluate features of the ELSSI service in terms of functionality, scalability, and performance.
EOS MLS Level 1B Data Processing, Version 2.2
NASA Technical Reports Server (NTRS)
Perun, Vincent; Jarnot, Robert; Pickett, Herbert; Cofield, Richard; Schwartz, Michael; Wagner, Paul
2009-01-01
A computer program performs level-1B processing (the term 1B is explained below) of data from observations of the limb of the Earth by the Earth Observing System (EOS) Microwave Limb Sounder (MLS), which is an instrument aboard the Aura spacecraft. This software accepts, as input, the raw EOS MLS scientific and engineering data and the Aura spacecraft ephemeris and attitude data. Its output consists of calibrated instrument radiances and associated engineering and diagnostic data. [This software is one of several computer programs, denoted product generation executives (PGEs), for processing EOS MLS data. Starting from level 0 (representing the aforementioned raw data), the PGEs and their data products are denoted by alphanumeric labels (e.g., 1B and 2) that signify the successive stages of processing.] At the time of this reporting, this software is at version 2.2 and incorporates improvements over a prior version that make the code more robust, improve calibration, provide more diagnostic outputs, improve the interface with the Level 2 PGE, and effect a 15-percent reduction in file sizes by use of data compression.
Method for operating a spark-ignition, direct-injection internal combustion engine
Narayanaswamy, Kushal; Koch, Calvin K.; Najt, Paul M.; Szekely, Jr., Gerald A.; Toner, Joel G.
2015-06-02
A spark-ignition, direct-injection internal combustion engine is coupled to an exhaust aftertreatment system including a three-way catalytic converter upstream of an NH3-SCR catalyst. A method for operating the engine includes operating the engine in a fuel cutoff mode and coincidentally executing a second fuel injection control scheme upon detecting an engine load that permits operation in the fuel cutoff mode.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wynne, Adam S.
2011-05-05
In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation from other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing require suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF), which is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
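To make the pipeline notion in the preceding abstract concrete, here is a minimal, illustrative Python sketch of a linear pipeline whose stages are connected by queues. It is a generic stand-in for the ideas described above, not the MeDICi (MIF) API; all names in it are invented for the example.

```python
# Illustrative sketch only: a minimal linear pipeline with queued stages.
# Names (Stage, run_pipeline) are hypothetical, not the MIF API.
import queue
import re
import threading

def strip_html(doc: str) -> str:
    """First stage: remove HTML tags, leaving only the raw text."""
    return re.sub(r"<[^>]+>", "", doc)

def tokenize(text: str) -> list:
    """Second stage: break raw text into constituent tokens."""
    return text.split()

class Stage(threading.Thread):
    """Runs one processing function, reading from an input queue and writing to an output queue."""
    def __init__(self, func, inbox, outbox):
        super().__init__(daemon=True)
        self.func, self.inbox, self.outbox = func, inbox, outbox

    def run(self):
        while True:
            item = self.inbox.get()
            if item is None:              # sentinel: propagate shutdown downstream
                self.outbox.put(None)
                break
            self.outbox.put(self.func(item))

def run_pipeline(docs, funcs):
    """Wire the stages together with queues and push documents through."""
    queues = [queue.Queue() for _ in range(len(funcs) + 1)]
    stages = [Stage(f, queues[i], queues[i + 1]) for i, f in enumerate(funcs)]
    for s in stages:
        s.start()
    for d in docs:
        queues[0].put(d)
    queues[0].put(None)
    results = []
    while True:
        out = queues[-1].get()
        if out is None:
            break
        results.append(out)
    return results

print(run_pipeline(["<p>Alice met Bob in Paris.</p>"], [strip_html, tokenize]))
```

A fork/join topology would simply give a stage more than one inbox or outbox; the queueing and sentinel logic stays the same.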
75 FR 883 - Environmental Impact Statement; Maricopa County, AZ
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-06
...: Kenneth Davis, Senior Engineering Manager for Operations, Federal Highway Administration, 4000 N. Central..., 2009. Kenneth H. Davis, Senior Engineering Manager for Operations, Federal Highway Administration... Research, Planning and Construction. The regulations implementing Executive Order 12372 regarding...
Antenna Test Facility (ATF): User Test Planning Guide
NASA Technical Reports Server (NTRS)
Lin, Greg
2011-01-01
Test process, milestones and inputs are unknowns to first-time users of the ATF. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Chamber B Thermal/Vacuum Chamber: User Test Planning Guide
NASA Technical Reports Server (NTRS)
Montz, Mike E.
2012-01-01
Test process, milestones and inputs are unknowns to first-time users of Chamber B. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Audio Development Laboratory (ADL) User Test Planning Guide
NASA Technical Reports Server (NTRS)
Romero, Andy
2012-01-01
Test process, milestones and inputs are unknowns to first-time users of the ADL. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Radiant Heat Test Facility (RHTF): User Test Planning Guide
NASA Technical Reports Server (NTRS)
DelPapa, Steven
2011-01-01
Test process, milestones and inputs are unknowns to first-time users of the RHTF. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Electronic Systems Test Laboratory (ESTL) User Test Planning Guide
NASA Technical Reports Server (NTRS)
Robinson, Neil
2011-01-01
Test process, milestones and inputs are unknowns to first-time users of the ESTL. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Communication Systems Simulation Laboratory (CSSL): Simulation Planning Guide
NASA Technical Reports Server (NTRS)
Schlesinger, Adam
2012-01-01
The simulation process, milestones and inputs are unknowns to first-time users of the CSSL. The Simulation Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their engineering personnel in simulation planning and execution. Material covered includes a roadmap of the simulation process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, facility interfaces, and inputs necessary to define scope, cost, and schedule are included as an appendix to the guide.
Advanced Materials Laboratory User Test Planning Guide
NASA Technical Reports Server (NTRS)
Orndoff, Evelyne
2012-01-01
Test process, milestones and inputs are unknowns to first-time users of the Advanced Materials Laboratory. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Structures Test Laboratory (STL). User Test Planning Guide
NASA Technical Reports Server (NTRS)
Zipay, John J.
2011-01-01
Test process, milestones and inputs are unknowns to first-time users of the STL. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Computational Electromagnetics (CEM) Laboratory: Simulation Planning Guide
NASA Technical Reports Server (NTRS)
Khayat, Michael A.
2011-01-01
The simulation process, milestones and inputs are unknowns to first-time users of the CEM Laboratory. The Simulation Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their engineering personnel in simulation planning and execution. Material covered includes a roadmap of the simulation process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, facility interfaces, and inputs necessary to define scope, cost, and schedule are included as an appendix to the guide.
NASA Astrophysics Data System (ADS)
Nowotarski, Piotr; Paslawski, Jerzy; Wysocki, Bartosz
2017-12-01
Ground works are among the first processes involved in erecting structures. Based on ground conditions such as the type of soil or the level of groundwater, different types of foundations and foundation solutions are designed. Foundations are the base of a building, and their proper design and execution are key to the long and faultless use of the whole structure and may influence the future cost of eventual repairs (especially when the groundwater level is high and no proper waterproofing has been installed). The article presents the introduction of selected Lean Management tools for quality improvement of the ground works process, based on an analysis carried out on the construction site of a vehicle control station located in Poznan, Poland. The processes are assessed from different perspectives, taking into account that three main groups of workers were directly involved: blue-collar workers, the site manager, and site engineers. In addition, the three points of view are compared with respect to the problems that might occur during this type of work, with a detailed analysis of their causes. The authors also present the change in attitude of the workers directly involved in these processes following the introduction of the Lean Management methodology, which illustrates the scepticism toward new ideas among people used to performing work in the traditional way. Using the Lean Management philosophy in construction is a good way to streamline company processes, eliminate constantly recurring problems, and thereby improve the productivity and quality of executed activities. The analysis showed that different groups of people have very different views on the problems connected with executing the same process (ground works), and only with a full picture of the situation (especially in construction processes) can management take proper preventive actions, which in turn can reduce the amount of waste generated on the construction site and thus benefit the external environment.
Implementation of Service-Learning in Engineering and Its Impact on Students' Attitudes and Identity
ERIC Educational Resources Information Center
Dukhan, N.; Schumack, M. R.; Daniels, J. J.
2008-01-01
The current paper outlines a concise engineering service-learning model and describes its implementation and logistics in the context of a typical heat transfer course for undergraduate engineering students. The project was executed in collaboration with a not-for-profit organisation. Summative reflections were conducted by the students by…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-28
... activities that advance the state-of-the-art as well as the scientific, technology, engineering and... utilizing science, technology, engineering and mathematics; (c) to increase the competitiveness of..., UNCFSP-RDC, in care of Engineering and Management Executive, Inc. (EME), 101 South Whiting Street, Suite...
Detecting Heap-Spraying Code Injection Attacks in Malicious Web Pages Using Runtime Execution
NASA Astrophysics Data System (ADS)
Choi, Younghan; Kim, Hyoungchun; Lee, Donghoon
The growing use of web services is increasing web browser attacks exponentially. Most attacks use a technique called heap spraying because of its high success rate. Heap spraying executes a malicious code without indicating the exact address of the code by copying it into many heap objects. For this reason, the attack has a high potential to succeed provided that the vulnerability is exploited. Thus, attackers have recently begun using this technique because it is easy to use JavaScript to allocate the heap memory area. This paper proposes a novel technique that detects heap spraying attacks by executing a heap object in a real environment, irrespective of the version and patch status of the web browser. This runtime execution is used to detect various forms of heap spraying attacks, such as encoding and polymorphism. Heap objects are executed after being filtered on the basis of patterns of heap spraying attacks in order to reduce the overhead of the runtime execution. Patterns of heap spraying attacks are based on analysis of how a web browser accesses benign web sites. The heap objects are executed forcibly by changing the instruction register to their address after they are loaded into memory. Thus, we can execute the malicious code without having to consider the version and patch status of the browser. An object is considered to contain a malicious code if the execution reaches a call instruction and then the instruction accesses the API of system libraries, such as kernel32.dll and ws_32.dll. To change registers and monitor execution flow, we used a debugger engine. A prototype, named HERAD (HEap spRAying Detector), is implemented and evaluated. In experiments, HERAD detects various forms of exploit code that an emulation cannot detect, and some heap spraying attacks that NOZZLE cannot detect. Although it has an execution overhead, HERAD produces a low number of false alarms. The processing time of several minutes is negligible because our research focuses on detecting heap spraying. This research can be applied to existing systems that collect malicious codes, such as honeypots.
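As an illustration of the kind of pattern-based pre-filtering the abstract describes, the following Python sketch flags heap buffers that contain long runs of sled-like filler bytes. The byte values and threshold are assumptions chosen for the example, not HERAD's actual rules; only flagged objects would then be handed to the far more expensive forced-execution step driven by the debugger engine.

```python
# Illustrative sketch only: crude pre-filter for spray-like heap objects.
# SUSPICIOUS_RUN and SLED_BYTES are assumed values, not HERAD's rules.
SUSPICIOUS_RUN = 256               # minimum run length of a repeated "sled" byte
SLED_BYTES = {0x90, 0x0c, 0x0d}    # e.g. NOP and common NOP-equivalent fillers

def looks_like_spray(heap_object: bytes) -> bool:
    """Return True if the buffer contains a long run of sled-like bytes."""
    run_byte, run_len = None, 0
    for b in heap_object:
        if b == run_byte and b in SLED_BYTES:
            run_len += 1
            if run_len >= SUSPICIOUS_RUN:
                return True
        else:
            run_byte, run_len = b, 1
    return False

# Only objects flagged here would be passed on to forced execution.
print(looks_like_spray(b"\x90" * 512 + b"\xcc\xcc"))   # True
print(looks_like_spray(bytes(range(256)) * 4))          # False
```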
Generic-distributed framework for cloud services marketplace based on unified ontology.
Hasan, Samer; Valli Kumari, V
2017-11-01
Cloud computing is a pattern for delivering ubiquitous, on-demand computing resources based on a pay-as-you-use financial model. Typically, cloud providers advertise cloud service descriptions in various formats on the Internet. Cloud consumers, on the other hand, use available search engines (Google and Yahoo) to explore cloud service descriptions and find an adequate service. Unfortunately, general-purpose search engines are not designed to provide a small and complete set of results, which makes the process a big challenge. This paper presents a generic distributed framework for a cloud services marketplace to automate the cloud service discovery and selection process and remove the barriers between service providers and consumers. Additionally, this work implements two instances of the generic framework by adopting two different matching algorithms: a dominant and recessive attributes algorithm borrowed from genetics, and a semantic similarity algorithm based on a unified cloud service ontology. Finally, this paper presents a unified cloud services ontology and models real-life cloud services according to the proposed ontology. To the best of the authors' knowledge, this is the first attempt to build a cloud services marketplace where cloud providers and cloud consumers can trade cloud services as utilities. In comparison with existing work, the semantic approach reduced the execution time by 20% and maintained the same values for all other parameters, while the dominant and recessive attributes approach reduced the execution time by 57% but showed a lower value for recall.
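The paper's matching algorithms are not spelled out in the abstract, so the following Python sketch shows only the general idea of scoring advertised services against a consumer request by attribute overlap. The scoring scheme and the catalogue entries are invented for illustration and do not reproduce the dominant/recessive or ontology-based algorithms of the paper.

```python
# Illustrative sketch only: rank services by average per-attribute set overlap
# with a consumer request. Generic stand-in, not the paper's algorithms.
def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity between two attribute value sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_services(request: dict, services: dict) -> list:
    """Score each service by averaging attribute overlap over all requested attributes."""
    scores = {}
    for name, attrs in services.items():
        common = set(request) & set(attrs)
        per_attr = [jaccard(set(request[k]), set(attrs[k])) for k in common]
        scores[name] = sum(per_attr) / len(request) if per_attr else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

request = {"os": {"linux"}, "region": {"eu"}, "storage": {"ssd"}}
catalog = {
    "svcA": {"os": {"linux", "windows"}, "region": {"eu"}, "storage": {"hdd"}},
    "svcB": {"os": {"windows"}, "region": {"us"}},
}
print(rank_services(request, catalog))   # svcA ranks above svcB
```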
Somogyi, Endre; Glazier, James A.
2017-01-01
Biological cells are the prototypical example of active matter. Cells sense and respond to mechanical, chemical and electrical environmental stimuli with a range of behaviors, including dynamic changes in morphology and mechanical properties, chemical uptake and secretion, cell differentiation, proliferation, death, and migration. Modeling and simulation of such dynamic phenomena poses a number of computational challenges. A modeling language describing cellular dynamics must naturally represent complex intra and extra-cellular spatial structures and coupled mechanical, chemical and electrical processes. Domain experts will find a modeling language most useful when it is based on concepts, terms and principles native to the problem domain. A compiler must then be able to generate an executable model from this physically motivated description. Finally, an executable model must efficiently calculate the time evolution of such dynamic and inhomogeneous phenomena. We present a spatial hybrid systems modeling language, compiler and mesh-free Lagrangian based simulation engine which will enable domain experts to define models using natural, biologically motivated constructs and to simulate time evolution of coupled cellular, mechanical and chemical processes acting on a time varying number of cells and their environment. PMID:29303160
Software engineering and the role of Ada: Executive seminar
NASA Technical Reports Server (NTRS)
Freedman, Glenn B.
1987-01-01
The objective was to introduce the basic terminology and concepts of software engineering and Ada. The life cycle model is reviewed, and the goals and principles of software engineering are applied. An introductory understanding of the features of the Ada language is gained. Topics addressed include: the software crisis; the mandate of the Space Station Program; the software life cycle model; software engineering; and Ada under the software engineering umbrella.
Swanson, H L
1999-01-01
This investigation explores the contribution of two working memory systems (the articulatory loop and the central executive) to the performance differences between learning-disabled (LD) and skilled readers. Performances of LD, chronological age (CA) matched, and reading level (RL) matched children were compared on measures of phonological processing accuracy and speed (articulatory system), long-term memory (LTM) accuracy and speed, and executive processing. The results indicated that (a) LD readers were inferior on measures of articulatory, LTM, and executive processing; (b) LD readers were superior to RL readers on measures of executive processing, but were comparable to RL readers on measures of the articulatory and LTM system; (c) executive processing differences remained significant between LD and CA-matched children when measures of reading comprehension, articulatory processes, and LTM processes were partialed from the analysis; and (d) executive processing contributed significant variance to reading comprehension when measures of the articulatory and LTM systems were entered into a hierarchical regression model. In summary, LD readers experience constraints in the articulatory and LTM system, but these constraints mediate only some of the influence of executive processing on reading comprehension. Further, LD readers suffer executive processing problems nonspecific to their reading comprehension problems. Copyright 1999 Academic Press.
Flexible workflow sharing and execution services for e-scientists
NASA Astrophysics Data System (ADS)
Kacsuk, Péter; Terstyanszky, Gábor; Kiss, Tamas; Sipos, Gergely
2013-04-01
The sequence of computational and data manipulation steps required to perform a specific scientific analysis is called a workflow. Workflows that orchestrate data and/or compute intensive applications on Distributed Computing Infrastructures (DCIs) recently became standard tools in e-science. At the same time the broad and fragmented landscape of workflows and DCIs slows down the uptake of workflow-based work. The development, sharing, integration and execution of workflows remain a challenge for many scientists. The FP7 "Sharing Interoperable Workflow for Large-Scale Scientific Simulation on Available DCIs" (SHIWA) project significantly improved the situation, with a simulation platform that connects different workflow systems, different workflow languages, different DCIs and workflows into a single, interoperable unit. The SHIWA Simulation Platform is a service package, already used by various scientific communities, and used as a tool by the recently started ER-flow FP7 project to expand the use of workflows among European scientists. The presentation will introduce the SHIWA Simulation Platform and the services that ER-flow provides, based on the platform, to space and earth science researchers. The SHIWA Simulation Platform includes: 1. SHIWA Repository: A database where workflows and meta-data about workflows can be stored. The database is a central repository to discover and share workflows within and among communities. 2. SHIWA Portal: A web portal that is integrated with the SHIWA Repository and includes a workflow executor engine that can orchestrate various types of workflows on various grid and cloud platforms. 3. SHIWA Desktop: A desktop environment that provides access capabilities similar to those of the SHIWA Portal, but runs on the users' desktops/laptops instead of a portal server. 4. Workflow engines: the ASKALON, Galaxy, GWES, Kepler, LONI Pipeline, MOTEUR, Pegasus, P-GRADE, ProActive, Triana, Taverna and WS-PGRADE workflow engines are already integrated with the execution engine of the SHIWA Portal. Other engines can be added when required. Through the SHIWA Portal one can define and run simulations on the SHIWA Virtual Organisation, an e-infrastructure that gathers computing and data resources from various DCIs, including the European Grid Infrastructure. The Portal, via third party workflow engines, provides support for the most widely used academic workflow engines and can be extended with other engines on demand. Such extensions translate between workflow languages and facilitate the nesting of workflows into larger workflows even when those are written in different languages and require different interpreters for execution. Through the workflow repository and the portal, individual scientists and scientific collaborations can share and offer workflows for reuse and execution. Given the integrated nature of the SHIWA Simulation Platform, the shared workflows can be executed online, without installing any special client environment or downloading workflows. The FP7 "Building a European Research Community through Interoperable Workflows and Data" (ER-flow) project disseminates the achievements of the SHIWA project and uses these achievements to build workflow user communities across Europe. ER-flow provides application support to research communities within and beyond the project consortium to develop, share and run workflows with the SHIWA Simulation Platform.
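As a minimal illustration of what a workflow is in this context, the Python sketch below runs a set of named steps in dependency order. It is a toy, generic example and does not represent the SHIWA or WS-PGRADE models or APIs; step names and dependencies are invented.

```python
# Illustrative sketch only: a toy workflow of named steps with dependencies,
# executed in topological order (requires Python 3.9+ for graphlib).
from graphlib import TopologicalSorter

def run_workflow(steps: dict, deps: dict) -> dict:
    """steps: name -> callable(results); deps: name -> set of prerequisite step names."""
    results = {}
    for name in TopologicalSorter(deps).static_order():
        results[name] = steps[name](results)   # each step may read earlier results
    return results

steps = {
    "fetch":  lambda r: [3, 1, 4, 1, 5],
    "clean":  lambda r: sorted(set(r["fetch"])),
    "stats":  lambda r: {"n": len(r["clean"]), "max": max(r["clean"])},
    "report": lambda r: f"{r['stats']['n']} unique values, max {r['stats']['max']}",
}
deps = {"fetch": set(), "clean": {"fetch"}, "stats": {"clean"}, "report": {"stats"}}
print(run_workflow(steps, deps)["report"])   # "4 unique values, max 5"
```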
Electrical Maxwell Demon and Szilard Engine Utilizing Johnson Noise, Measurement, Logic and Control
Kish, Laszlo Bela; Granqvist, Claes-Göran
2012-01-01
We introduce a purely electrical version of Maxwell's demon which does not involve mechanically moving parts such as trapdoors, etc. It consists of a capacitor, resistors, amplifiers, logic circuitry and electronically controlled switches and uses thermal noise in resistors (Johnson noise) to pump heat. The only types of energy of importance in this demon are electrical energy and heat. We also demonstrate an entirely electrical version of Szilard's engine, i.e., an information-controlled device that can produce work by employing thermal fluctuations. The only moving part is a piston that executes work; the engine has purely electronic controls and is free of the major weakness of the original Szilard engine in that it does not require removing and repositioning the piston at the end of the cycle. For both devices, the energy dissipation in the memory and other binary informatics components is insignificant compared to the exponentially large energy dissipation in the analog part responsible for creating new information by measurement and decision. This result contradicts the view that the energy dissipation in the memory during erasure is the most essential dissipation process in a demon. Nevertheless, the dissipation in the memory and information processing parts is sufficient to secure the Second Law of Thermodynamics. PMID:23077525
Cepeda, Nicholas J.; Blackwell, Katharine A.; Munakata, Yuko
2012-01-01
The rate at which people process information appears to influence many aspects of cognition across the lifespan. However, many commonly accepted measures of “processing speed” may require goal maintenance, manipulation of information in working memory, and decision-making, blurring the distinction between processing speed and executive control and resulting in overestimation of processing-speed contributions to cognition. This concern may apply particularly to studies of developmental change, as even seemingly simple processing speed measures may require executive processes to keep children and older adults on task. We report two new studies and a re-analysis of a published study, testing predictions about how different processing speed measures influence conclusions about executive control across the life span. We find that the choice of processing speed measure affects the relationship observed between processing speed and executive control, in a manner that changes with age, and that choice of processing speed measure affects conclusions about development and the relationship among executive control measures. Implications for understanding processing speed, executive control, and their development are discussed. PMID:23432836
Space transportation booster engine configuration study. Volume 1: Executive Summary
NASA Technical Reports Server (NTRS)
1989-01-01
The objective of the Space Transportation Booster Engine (STBE) Configuration Study is to contribute to the Advanced Launch System (ALS) development effort by providing highly reliable, low cost booster engine concepts for both expendable and reusable rocket engines. The objectives of the Space Transportation Booster Engine (STBE) Configuration Study were to identify engine configurations which enhance vehicle performance and provide operational flexibility at low cost, and to explore innovative approaches to the follow-on full-scale development (FSD) phase for the STBE.
Macniven, J A B; Davis, C; Ho, M-Y; Bradshaw, C M; Szabadi, E; Constantinescu, C S
2008-09-01
Cognitive impairments in information processing speed, attention and executive functioning are widely reported in patients with multiple sclerosis (MS). Several studies have identified impaired performance on the Stroop test in people with MS, yet uncertainty remains over the cause of this phenomenon. In this study, 25 patients with MS were assessed with a neuropsychological test battery including a computerized Stroop test and a computerized test of information processing speed, the Graded Conditional Discrimination Tasks (GCDT). The patient group was compared with an individually age, sex and estimated premorbid IQ-matched healthy control group. The patients' reaction times (RTs) were significantly longer than those of the controls on all Stroop test trials and there was a significantly enhanced absolute (RT(incongruent)-RT(neutral)) and relative (100 x [RT(incongruent)-RT(neutral)]/RT(neutral)) Stroop interference effect for the MS group. The linear function relating RT to stimulus complexity in the GCDT was significantly steeper in the patient group, indicating slowed information processing. The results are discussed with reference to the difference engine model, a theory of diversity in speeded cognition. It is concluded that, in the assessment of people with MS, great caution must be used in the interpretation of performance on neuropsychological tests which rely on RT as the primary measure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iodice, Paolo, E-mail: paolo.iodice@unina.it; Senatore, Adolfo
In recent years the contribution of powered two-wheelers to air pollutant emissions has been noteworthy all over the world, even though advances in internal combustion engines have considerably reduced both the fuel consumption and the exhaust emissions of SI engines. Nowadays these vehicles are a common means of everyday transport, meeting daily urban mobility needs while having a significant environmental impact on air quality. Moreover, the emission behavior of two-wheelers measured under fixed legislative driving standards (rather than under local driving conditions) may not be sufficiently representative of real-world motorcycle riding. The purpose of this investigation is a deeper study of the emission levels of in-use motorcycles equipped with latest-generation SI engines under real-world driving behavior. In order to analyze the effect of instantaneous vehicle speed and acceleration on emission behavior, instantaneous emissions of CO, HC and NOx were measured in the exhaust of a four-stroke motorcycle equipped with a three-way catalyst and belonging to the Euro-3 legislative category. Experimental tests were executed on a chassis dynamometer bench in the laboratories of the National Research Council (Italy) during the Type Approval test cycle, at constant speed, and under real-world driving cycles. This analytical-experimental investigation was carried out with a methodology that improves vehicle emission assessment in comparison with modeling approaches based on fixed legislative driving standards. The resulting statistical results are also very useful for improving the databases of emission models commonly used to estimate emissions from the road transport sector, and they can be used to evaluate the environmental impact of latest-generation medium-size motorcycles under real driving behavior.
Genova, Helen M.; DeLuca, John; Chiaravalloti, Nancy; Wylie, Glenn
2014-01-01
The primary purpose of the current study was to examine the relationship between performance on executive tasks and white matter integrity, assessed by diffusion tensor imaging (DTI) in Multiple Sclerosis (MS). A second aim was to examine how processing speed affects the relationship between executive functioning and FA. This relationship was examined in two executive tasks that rely heavily on processing speed: the Color-Word Interference Test and Trail-Making Test (Delis-Kaplan Executive Function System). It was hypothesized that reduced fractional anisotropy (FA) is related to poor performance on executive tasks in MS, but that this relationship would be affected by the statistical correction of processing speed from the executive tasks. 15 healthy controls and 25 persons with MS participated. Regression analyses were used to examine the relationship between executive functioning and FA, both before and after processing speed was removed from the executive scores. Before processing speed was removed from the executive scores, reduced FA was associated with poor performance on Color-Word Interference Test and Trail-Making Test in a diffuse network including corpus callosum and superior longitudinal fasciculus. However, once processing speed was removed, the relationship between executive functions and FA was no longer significant on the Trail Making test, and significantly reduced and more localized on the Color-Word Interference Test. PMID:23777468
Cognitive styles of Forest Service scientists and managers in the Pacific Northwest.
Andrew B. Carey
1997-01-01
Preferences of executives, foresters, and biologists of the Pacific Northwest Research Station and executives, District Rangers, foresters, engineers, and biologists of the Pacific Northwest Region, National Forest System (USDA Forest Service), were compared for various thinking styles. Herrmann brain dominance profiles from 230 scientists and managers were drawn from...
76 FR 26976 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-10
..., identified by Docket No. FEMA-B-1193, to Luis Rodriguez, Chief, Engineering Management Branch, Federal... Rodriguez, Chief, Engineering Management Branch, Federal Insurance and Mitigation Administration, Federal... Order 12988, Civil Justice Reform. This proposed rule meets the applicable standards of Executive Order...
75 FR 78664 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-16
... submit comments, identified by Docket No. FEMA-B-1169, to Luis Rodriguez, Chief, Engineering Management... INFORMATION CONTACT: Luis Rodriguez, Chief, Engineering Management Branch, Federal Insurance and Mitigation..., Civil Justice Reform. This proposed rule meets the applicable standards of Executive Order 12988. List...
76 FR 3590 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
..., identified by Docket No. FEMA-B-1171, to Luis Rodriguez, Chief, Engineering Management Branch, Federal... Rodriguez, Chief, Engineering Management Branch, Federal Insurance and Mitigation Administration, Federal..., Civil Justice Reform. This proposed rule meets the applicable standards of Executive Order 12988. List...
76 FR 59960 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-28
..., identified by Docket No. FEMA-B-1220, to Luis Rodriguez, Chief, Engineering Management Branch, Federal... Rodriguez, Chief, Engineering Management Branch, Federal Insurance and Mitigation Administration, Federal..., Civil Justice Reform. This proposed rule meets the applicable standards of Executive Order 12988. [[Page...
76 FR 19018 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
..., identified by Docket No. FEMA-B-1179, to Luis Rodriguez, Chief, Engineering Management Branch, Federal... Rodriguez, Chief, Engineering Management Branch, Federal Insurance and Mitigation Administration, Federal..., Civil Justice Reform. This proposed rule meets the applicable standards of Executive Order 12988. List...
76 FR 19005 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
..., identified by Docket No. FEMA-B-1187, to Luis Rodriguez, Chief, Engineering Management Branch, Federal... Rodriguez, Chief, Engineering Management Branch, Federal Insurance and Mitigation Administration, Federal..., Civil Justice Reform. This proposed rule meets the applicable standards of Executive Order 12988. List...
76 FR 66887 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-28
..., identified by Docket No. FEMA-B-1226, to Luis Rodriguez, Chief, Engineering Management Branch, Federal...: Luis Rodriguez, Chief, Engineering Management Branch, Federal Insurance and Mitigation Administration..., Civil Justice Reform. This proposed rule meets the applicable standards of Executive Order 12988. [[Page...
DOT National Transportation Integrated Search
2006-03-01
This document is an executive summary of the report "Driver attitudes and behaviors at intersections and potential effectiveness of engineering countermeasures", FHWA-HRT-05-078. The objective of the focus group study was to identify driver attitudes...
NASA Technical Reports Server (NTRS)
1972-01-01
An overview is presented of the results of the analyses conducted in support of the selected engine system for the pressure-fed booster stage. During initial phases of the project, a gimbaled, regeneratively cooled, fixed thrust engine having a coaxial pintle injector was selected as optimum for this configuration.
Relations between Short-term Memory Deficits, Semantic Processing, and Executive Function
Allen, Corinne M.; Martin, Randi C.; Martin, Nadine
2012-01-01
Background Previous research has suggested separable short-term memory (STM) buffers for the maintenance of phonological and lexical-semantic information, as some patients with aphasia show better ability to retain semantic than phonological information and others show the reverse. Recently, researchers have proposed that deficits to the maintenance of semantic information in STM are related to executive control abilities. Aims The present study investigated the relationship of executive function abilities with semantic and phonological short-term memory (STM) and semantic processing in such patients, as some previous research has suggested that semantic STM deficits and semantic processing abilities are critically related to specific or general executive function deficits. Method and Procedures 20 patients with aphasia and STM deficits were tested on measures of short-term retention, semantic processing, and both complex and simple executive function tasks. Outcome and Results In correlational analyses, we found no relation between semantic STM and performance on simple or complex executive function tasks. In contrast, phonological STM was related to executive function performance in tasks that had a verbal component, suggesting that performance in some executive function tasks depends on maintaining or rehearsing phonological codes. Although semantic STM was not related to executive function ability, performance on semantic processing tasks was related to executive function, perhaps due to similar executive task requirements in both semantic processing and executive function tasks. Conclusions Implications for treatment and interpretations of executive deficits are discussed. PMID:22736889
NASA Astrophysics Data System (ADS)
Darmawan, Tofiq Dwiki; Priadythama, Ilham; Herdiman, Lobes
2018-02-01
Welding and drilling are the main processes in making a chair frame from metal. Chair frame constructions commonly include many arcs, which make welding and drilling difficult. In the UNS industrial engineering integrated practicum there are welding fixtures used to fix the positions of frame components for welding. To achieve exact hole positions for assembly, manual drilling was performed after the frame was joined. Unfortunately, after welding the frame material becomes hard, which increases drilling tool wear and reduces hole position accuracy. The previous welding fixture was not equipped with a clamping system and could not accommodate the drilling process. To solve this problem, our idea is to reorder the drilling process so that it can be executed before welding. This research therefore aims to propose a conceptual design of a modular fixture that integrates the welding and drilling processes. We used the Generic Product Development Process to develop the design concept and collected design requirements from three sources: jig and fixture theory, user requirements, and clamping part standards. Of two alternative fixture tables, we propose the first, which is equipped with mounting slots instead of holes. We tested the concept by building a full-sized prototype and performing the welding and drilling of a student chair frame. Results from the welding and drilling trials showed that the holes were in precise positions after welding. Based on this result, we conclude that the concept can be considered for application in the UNS Industrial Engineering Integrated Practicum.
ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows
NASA Technical Reports Server (NTRS)
McCann, Karen M.; Yarrow, Maurice; DeVivo, Adrian; Mehrotra, Piyush
2004-01-01
With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However, what scientists need is an environment that will allow them to specify their application runs at a high organizational level, and then support efficient execution across any given set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine.
Development of a novel cold forging process to manufacture eccentric shafts
NASA Astrophysics Data System (ADS)
Pasler, Lukas; Liewald, Mathias
2018-05-01
Since the commercial introduction of compact combustion engines, eccentric shafts have been used to transform translational into rotational motion. Over the years, several processes to manufacture these eccentric shafts or crankshafts have been developed. Especially for single-cylinder engines manufactured in small quantities, built crankshafts offer advantages regarding tooling costs and performance. These manufacturing processes have one thing in common: they are all executed at elevated temperatures to enable the material to be formed to a high degree of deformation. In this paper, a newly developed cold forging process is presented, which combines lateral extrusion and shifting to manufacture a crank in one forming operation at room temperature. In comparison to the established upsetting and shifting methods for manufacturing such components, the tool cavity or crank web thickness remains constant. The newly developed process presented in this paper therefore consists of a combination of shifting and extrusion of the billet, which allows material to be pushed into the forming zone during shifting. In order to reduce the tensile stresses induced by the shifting process, compressive stresses are superimposed. It is expected that the process limits will be expanded regarding horizontal displacement and form filling. In the following report, the simulation and design of the tooling concept are presented. Experiments were conducted and afterwards compared with the corresponding simulation results.
Automated planning for intelligent machines in energy-related applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisbin, C.R.; de Saussure, G.; Barhen, J.
1984-01-01
This paper discusses the current activities of the Center for Engineering Systems Advanced Research (CESAR) program related to plan generation and execution by an intelligent machine. The system architecture for the CESAR mobile robot (named HERMIES-1) is described. The minimal cut-set approach is developed to reduce the tree search time of conventional backward chaining planning techniques. Finally, a real-time concept of an Intelligent Machine Operating System is presented in which planning and reasoning are embedded in a system for resource allocation and process management.
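For readers unfamiliar with the term, the following Python sketch shows backward chaining in its simplest propositional form: a goal is reduced to sub-goals until only known facts remain. The rule base and goal are invented for illustration and have nothing to do with the actual HERMIES-1 planner or the minimal cut-set refinement described above.

```python
# Illustrative sketch only: naive backward chaining over propositional rules.
# The rules and facts are made-up examples, not the CESAR/HERMIES-1 planner.
RULES = {                       # conclusion -> list of alternative premise sets
    "at_goal":      [{"path_planned", "motors_ok"}],
    "path_planned": [{"map_loaded", "goal_known"}],
}
FACTS = {"map_loaded", "goal_known", "motors_ok"}

def prove(goal: str) -> bool:
    """Recursively reduce a goal to sub-goals until only known facts remain."""
    if goal in FACTS:
        return True
    for premises in RULES.get(goal, []):
        if all(prove(p) for p in premises):
            return True
    return False

print(prove("at_goal"))   # True: both premise chains bottom out in FACTS
```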
2017-09-01
acquisition of goods or services . The acquisition of SETA support can be accomplished using the same source selection processes and procedures available to...professional services , and education & training (OUSD[AT&L]), 2012). The USD(AT&L) issued a memorandum on 4 March 2015 to the Secretaries of the...and financial services (GSA, n.d.-b.). The GSA’s IT Schedule 70 provides access to over 5,000 vendors offering an expansive variety of IT products
Organizational transformation to improve operational efficiency at Gemini South
NASA Astrophysics Data System (ADS)
van der Hoeven, M.; Maltes, Diego; Rogers, Rolando
2016-07-01
In this paper we will describe how the Gemini South Engineering team has been reorganized from different functional units into a cross-disciplinary team while executing a transition plan that imposes several staff reductions, driven by budget reductions. Several factors are of critical importance to the success of any change in organization. Budgetary processes, staff diversity, leadership style, skill sets and planning are all important factors to take into account to achieve a successful outcome. We will analyze the organizational alignment by using some proven management models and concepts.
2011-03-31
evidence based medicine into clinical practice. It will decrease costs and enable multiple stakeholders to work in an open content/source environment to exchange clinical content, develop and test technology and explore processes in applied CDS. Design: Comparative study between the KMR infrastructure and capabilities developed as an open source, vendor agnostic solution for aCPG execution within AHLTA and the current DoD/MHS standard evaluating: H1: An open source, open standard KMR and Clinical Decision Support Engine can enable organizations to share domain
Decision problems in management of construction projects
NASA Astrophysics Data System (ADS)
Szafranko, E.
2017-10-01
In a construction business, one must oftentimes make decisions during all stages of a building process, from planning a new construction project through its execution to the stage of using the finished structure. As a rule, the decision making process is made more complicated by certain conditions specific to civil engineering. With such diverse decision situations, it is recommended to apply various decision making support methods. Both literature and hands-on experience suggest several methods based on analytical and computational procedures, some less and some more complex. This article presents methods which can be helpful in supporting decision making processes in the management of civil engineering projects. These are multi-criteria methods, such as MCE, AHP or indicator methods. Because the methods have different advantages and disadvantages, and decision situations have their own specific nature, a brief summary of the methods, alongside some recommendations regarding their practical application, is given at the end of the paper. The main aim of this article is to review decision support methods and analyse their possible use in the construction industry.
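As an illustration of one of the multi-criteria methods mentioned (AHP), the Python sketch below derives criteria weights from a pairwise comparison matrix using the common row geometric-mean approximation. The criteria and judgment values are invented for the example; the full AHP also includes a consistency check not shown here.

```python
# Illustrative sketch only: the weighting step of the Analytic Hierarchy Process.
# Weights are approximated by normalized row geometric means of a reciprocal
# pairwise comparison matrix; criteria and judgments are invented.
from math import prod

def ahp_weights(pairwise):
    """pairwise[i][j] = how strongly criterion i is preferred over criterion j."""
    n = len(pairwise)
    geo_means = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Criteria: cost, duration, quality (judgments purely illustrative).
matrix = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
for name, w in zip(["cost", "duration", "quality"], ahp_weights(matrix)):
    print(f"{name}: {w:.2f}")      # roughly 0.65, 0.23, 0.12
```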
NASA Astrophysics Data System (ADS)
Merticariu, Vlad; Misev, Dimitar; Baumann, Peter
2017-04-01
While Python has developed into the lingua franca of data science, there is often a paradigm break when accessing specialized tools. In particular, for one of the core data categories in science and engineering, massive multi-dimensional arrays, out-of-memory solutions typically employ their own, different models. We discuss this situation using the example of the scalable open-source array engine rasdaman ("raster data manager"), which offers access to and processing of Petascale multi-dimensional arrays through an SQL-style array query language, rasql. Such queries are executed in the server on a storage engine utilizing adaptive array partitioning and based on a processing engine implementing a "tile streaming" paradigm to allow processing of arrays massively larger than server RAM. The rasdaman QL has acted as blueprint for forthcoming ISO Array SQL and the Open Geospatial Consortium (OGC) geo analytics language, Web Coverage Processing Service, adopted in 2008. Not surprisingly, rasdaman is OGC and INSPIRE Reference Implementation for their "Big Earth Data" standards suite. Recently, rasdaman has been augmented with a Python interface which allows transparent interaction with the database (credits go to Siddharth Shukla's Master Thesis at Jacobs University). Programmers do not need to know the rasdaman query language, as the operators are silently transformed, through lazy evaluation, into queries. Arrays delivered are likewise automatically transformed into their Python representation. In the talk, the rasdaman concept will be illustrated with the help of large-scale real-life examples of operational satellite image and weather data services, and sample Python code.
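The following Python sketch illustrates the lazy-evaluation idea described above: array operations are recorded rather than executed, and only when a result is requested is a single server-side query string produced. The class and method names are hypothetical and the generated string only loosely imitates a rasql-style query; this is not the actual rasdaman Python client API.

```python
# Illustrative sketch only: lazy evaluation turning Python-side array
# expressions into one server-side query string. Names are hypothetical,
# not the rasdaman client API; the output only imitates rasql style.
class LazyArray:
    def __init__(self, expr: str):
        self.expr = expr            # accumulated expression, nothing evaluated yet

    def __getitem__(self, box: str) -> "LazyArray":        # spatial subsetting
        return LazyArray(f"{self.expr}[{box}]")

    def __add__(self, other: "LazyArray") -> "LazyArray":  # cell-wise addition
        return LazyArray(f"({self.expr} + {other.expr})")

    def to_query(self) -> str:
        # Only here would a request be sent to the server; everything above
        # merely records the expression.
        return f"select {self.expr} from collection as c"

a = LazyArray("c.band1")["0:999, 0:999"]
b = LazyArray("c.band2")["0:999, 0:999"]
print((a + b).to_query())
# select (c.band1[0:999, 0:999] + c.band2[0:999, 0:999]) from collection as c
```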
Executive Functions in Learning Processes: Do They Benefit from Physical Activity?
ERIC Educational Resources Information Center
Barenberg, Jonathan; Berse, Timo; Dutke, Stephan
2011-01-01
As executive functions play an essential role in learning processes, approaches capable of enhancing executive functioning are of particular interest to educational psychology. Recently, the hypothesis has been advanced that executive functioning may benefit from changes in neurobiological processes induced by physical activity. The present…
Inspection of Construction Works According to Polish Construction Law
NASA Astrophysics Data System (ADS)
Czemplik, A.
2015-11-01
Construction regulations still differ across many European countries, even though European Union directives have unified many acts concerning construction works and construction products in the member countries. The scheme of the construction process presented in the paper could be valid for most countries, regardless of the detailed regulations of their legal systems. The number of construction regulations that must be followed in order to obtain a Construction Permit in Poland is rather large, so the time between the start of the investment process and the day the Construction Permit is issued can be several months. Only licensed professional engineers may act as site managers, site inspectors, and designers registered for a given construction project. The duties and responsibilities (civil liability) of these engineers are strictly defined by regulations. The obligatory inspection of construction works must be executed by licensed site inspectors. Moreover, the works can be inspected incidentally by the Authority, banks, insurance companies, or designers. Foreign designers and foreign site engineers, in order to be allowed by the respective Authority to play official roles on Polish construction sites, must present documents proving that they are entitled to perform the same roles in their own countries under the regulations in force there.
Shi, Zhenyu; Vickers, Claudia E
2016-12-01
Molecular Cloning Designer Simulator (MCDS) is a powerful new all-in-one cloning and genetic engineering design, simulation and management software platform developed for complex synthetic biology and metabolic engineering projects. In addition to standard functions, it has a number of features that are either unique, or are not found in combination in any one software package: (1) it has a novel interactive flow-chart user interface for complex multi-step processes, allowing an integrated overview of the whole project; (2) it can perform a user-defined workflow of cloning steps in a single execution of the software; (3) it can handle multiple types of genetic recombineering, a technique that is rapidly replacing classical cloning for many applications; (4) it includes experimental information to conveniently guide wet lab work; and (5) it can store results and comments to allow the tracking and management of the whole project in one platform. MCDS is freely available from https://mcds.codeplex.com.
NASA Technical Reports Server (NTRS)
1988-01-01
As the NASA Center responsible for assembly, checkout, servicing, launch, recovery and operational support of Space Transportation System elements and payloads, Kennedy Space Center is placing emphasis on its research and technology program. In addition to strengthening those areas of engineering and operations technology that contribute to safer, more efficient, and more economical execution of our current mission, we are developing the technological tools needed to execute the Center's mission relative to future programs. The Engineering Development Directorate encompasses most of the laboratories and other Center resources that are key elements of research and technology program implementation, and is responsible for implementation of the majority of the projects in this Kennedy Space Center 1988 Annual Report.
Engineering Analysis Using a Web-based Protocol
NASA Technical Reports Server (NTRS)
Schoeffler, James D.; Claus, Russell W.
2002-01-01
This paper reviews the development of a web-based framework for engineering analysis. A one-dimensional, high-speed analysis code called LAPIN was used in this study, but the approach can be generalized to any engineering analysis tool. The web-based framework enables users to store, retrieve, and execute an engineering analysis from a standard web-browser. We review the encapsulation of the engineering data into the eXtensible Markup Language (XML) and various design considerations in the storage and retrieval of application data.
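As a small illustration of the approach described above (encapsulating engineering analysis data in XML for storage and retrieval through a web framework), the sketch below serializes a set of input parameters and parses them back before a run. The element names and parameter values are hypothetical, not the schema actually used with LAPIN.

```python
import xml.etree.ElementTree as ET

# Hypothetical input parameters for a one-dimensional analysis case;
# element and attribute names are illustrative only.
run = ET.Element("analysisRun", attrib={"code": "LAPIN", "case": "inlet-01"})
inputs = ET.SubElement(run, "inputs")
for name, value, unit in [("machNumber", "2.35", ""),
                          ("totalPressure", "101325", "Pa"),
                          ("gridPoints", "401", "")]:
    p = ET.SubElement(inputs, "parameter", attrib={"name": name, "unit": unit})
    p.text = value

# Serialize for storage in a web-accessible repository ...
xml_text = ET.tostring(run, encoding="unicode")

# ... and parse it back when the analysis is retrieved and executed.
restored = ET.fromstring(xml_text)
params = {p.get("name"): float(p.text) for p in restored.find("inputs")}
print(params)
```

Storing inputs this way lets a standard web browser or service submit, retrieve, and re-execute analysis cases without knowledge of the analysis code's native input format.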
Vibration and Acoustic Test Facility (VATF): User Test Planning Guide
NASA Technical Reports Server (NTRS)
Fantasia, Peter M.
2011-01-01
Test process, milestones and inputs are unknowns to first-time users of the VATF. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Six-Degree-of-Freedom Dynamic Test System (SDTS) User Test Planning Guide
NASA Technical Reports Server (NTRS)
Stokes, LeBarian
2012-01-01
Test process, milestones and inputs are unknowns to first-time users of the SDTS. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Materials and Nondestructive Evaluation Laboratories: User Test Planning Guide
NASA Technical Reports Server (NTRS)
Schaschl, Leslie
2011-01-01
The Materials and Nondestructive Evaluation Laboratory process, milestones and inputs are unknowns to first-time users. The Materials and Nondestructive Evaluation Laboratory Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware developers. It is intended to assist their project engineering personnel in materials analysis planning and execution. Material covered includes a roadmap of the analysis process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, products, and inputs necessary to define scope of analysis, cost, and schedule are included as an appendix to the guide.
Specialized Environmental Chamber Test Complex: User Test Planning Guide
NASA Technical Reports Server (NTRS)
Montz, Michael E.
2011-01-01
Test process, milestones and inputs are unknowns to first-time users of the Specialized Environmental Test Complex. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Atmospheric Reentry Materials and Structures Evaluation Facility (ARMSEF). User Test Planning Guide
NASA Technical Reports Server (NTRS)
2011-01-01
Test process, milestones and inputs are unknowns to first-time users of the ARMSEF. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Energy Systems Test Area (ESTA) Battery Test Operations User Test Planning Guide
NASA Technical Reports Server (NTRS)
Salinas, Michael
2012-01-01
Test process, milestones and inputs are unknowns to first-time users of the ESTA Battery Test Operations. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
NASA Technical Reports Server (NTRS)
Scully, Robert C.
2011-01-01
Test process, milestones and inputs are unknowns to first-time users of the EMI/EMC Test Facility. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
ERIC Educational Resources Information Center
Corbett, Christianne; Hill, Catherine
2015-01-01
During the 2014 White House Science Fair, President Barack Obama used a sports metaphor to explain why we must address the shortage of women in science, technology, engineering, and mathematics (STEM), particularly in the engineering and computing fields: "Half our team, we're not even putting on the field. We've got to change those…
NASA Women's History Month - Erin Waggoner (AFRC)
2018-03-20
Erin Waggoner is an Aerospace Engineer in the Aerodynamics and Propulsion Branch at NASA Armstrong Flight Research Center. Erin has a BS in Aerospace Engineering from Wichita State University and an MS in Aeronautics and Astronautics from Purdue University. Her work includes planning, coordinating, and executing ground tests; analyzing data; writing papers; and serving as a Flight Test Engineer onboard test aircraft.
ERIC Educational Resources Information Center
Jain, Ajay K.; Moreno, Ana
2015-01-01
Purpose: The study aims at investigating the impact of organizational learning (OL) on the firm's performance and knowledge management (KM) practices in a heavy engineering organization in India. Design/Methodology/Approach: The data were collected from 205 middle and senior executives working in the project engineering management division of a…
Defense Acquisitions Acronyms and Terms
2012-12-01
Computer-Aided Design; CADD Computer-Aided Design and Drafting; CAE Component Acquisition Executive; Computer-Aided Engineering; CAIV Cost As an... Radiation to Ordnance; HFE Human Factors Engineering; HHA Health Hazard Assessment; HNA Host-Nation Approval; HNS Host-Nation Support; HOL High-Order... Engineering Change Proposal; VHSIC Very High Speed Integrated Circuit; VLSI Very Large Scale Integration; VOC Volatile Organic Compound; W; WAN Wide
Engine Component Retirement for Cause. Volume 1. Executive Summary
1987-08-01
components of all future engines. A major factor in the success of this program in taking Retirement for Cause from a concept to reality was the high level of... engine was chosen as the demonstration/validation vehicle for the Retirement for Cause (RFC) program. It is an augmented turbofan engine in the... inspections using surface replication; aspect ratios were determined from post-test fractography. The crack size observed from the testing was compared to
78 FR 15597 - Special Conditions: GE Aviation CT7-2E1 Turboshaft Engine Model
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-12
..., Aircraft Certification Service, 12 New England Executive Park, Burlington, Massachusetts 01803-5299... concerning this rule, contact Vincent Bennett, ANE-7, Engine and Propeller Directorate, Aircraft... the rating's definition, overspeed, controls system, and endurance test, because the applicable...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-09
... OFFICE OF SCIENCE AND TECHNOLOGY POLICY Nanoscale Science, Engineering and Technology Subcommittee Committee on Technology, National Science and Technology Council; Public Meetings AGENCY: Executive Office of the President, Office of Science and Technology Policy. ACTION: Notice of Public Meetings. SUMMARY...
A preliminary study of the performance and characteristics of a supersonic executive aircraft
NASA Technical Reports Server (NTRS)
Mascitti, V. R.
1977-01-01
The impact of advanced supersonic technologies on the performance and characteristics of a supersonic executive aircraft was studied in four configurations with different engine locations and wing/body blending and an advanced nonafterburning turbojet or variable cycle engine. An M 2.2 design Douglas scaled arrow-wing was used with Learjet 35 accommodations. All four configurations with turbojet engines meet the performance goals of 5926 km (3200 n.mi.) range, 1981 meters (6500 feet) takeoff field length, and 77 meters per second (150 knots) approach speed. The noise levels of the turbojet configurations studied are excessive; however, a turbojet with a mechanical suppressor was not studied. The variable cycle engine configuration is deficient in range by 555 km (300 n.mi.) but nearly meets subsonic noise rules (FAR 36, 1977 edition), if coannular noise relief is assumed. All configurations are in the 33566 to 36287 kg (74,000 to 80,000 lbm) takeoff gross weight class when incorporating current titanium manufacturing technology.
Ren, Xuezhu; Altmeyer, Michael; Reiss, Siegbert; Schweizer, Karl
2013-02-01
Perceptual attention and executive attention represent two higher-order types of attention and are associated with distinctly different ways of information processing. It is hypothesized that these two types of attention implicate different cognitive processes, which are assumed to account for the differential effects of perceptual attention and executive attention on fluid intelligence. Specifically, an encoding process is assumed to be crucial in completing the tasks of perceptual attention, while two executive processes, updating and shifting, are stimulated in completing the tasks of executive attention. The proposed hypothesis was tested by means of an integrative approach combining experimental manipulations and psychometric modeling. In a sample of 210 participants the encoding process proved indispensable in completing the tasks of perceptual attention, and this process accounted for a considerable part of fluid intelligence as assessed by two figural reasoning tests. In contrast, the two executive processes, updating and shifting, turned out to be necessary for performance on the tasks of executive attention, and these processes accounted for a larger part of the variance in fluid intelligence than the processes underlying perceptual attention did. Copyright © 2012 Elsevier B.V. All rights reserved.
Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram
2018-06-08
Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined where near optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has variable range of compositions. Copyright © 2018. Published by Elsevier Inc.
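To make the screening-design idea above concrete, the sketch below fits a simple main-effects model to a small set of coded factor settings and responses. The factors, design points, and response values are fabricated for illustration and are not the study's eight-factor definitive screening design or its data.

```python
import numpy as np

# Coded levels (-1, 0, +1) for three hypothetical factors (shelf temperature,
# chamber pressure, protein concentration) and an illustrative response
# (primary drying time, h). All numbers are made up for demonstration.
X = np.array([
    [-1, -1,  0],
    [ 1, -1, -1],
    [-1,  1, -1],
    [ 1,  1,  1],
    [ 0, -1,  1],
    [ 0,  1, -1],
    [-1,  0,  1],
    [ 1,  0, -1],
    [ 0,  0,  0],
])
y = np.array([62., 41., 58., 44., 50., 49., 63., 40., 51.])

# Main-effects regression: y ~ b0 + b1*x1 + b2*x2 + b3*x3
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, b in zip(["intercept", "shelf T", "pressure", "conc."], coef):
    print(f"{name:10s} {b:6.2f}")
```

Factors with large estimated coefficients would be carried forward as the critical parameters driving drying time, which is the screening role the DSD plays in the abstract.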
Performance enhancement of various real-time image processing techniques via speculative execution
NASA Astrophysics Data System (ADS)
Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.
1996-03-01
In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
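The following sketch illustrates the speculative-execution idea described above in miniature: an optimistic computation proceeds with a predicted value while the true value is still being computed, and the speculative result is rolled back (discarded and recomputed) if the prediction turns out to be wrong. The threshold example and numbers are illustrative, not code from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def slow_threshold_estimate(image):
    # Stand-in for an expensive analysis that produces the true threshold.
    return sum(image) / len(image)

def binarize(image, threshold):
    return [1 if px >= threshold else 0 for px in image]

image = [10, 200, 120, 30, 240, 90]
predicted = 128                            # speculation based on prior frames

with ThreadPoolExecutor() as pool:
    actual_future = pool.submit(slow_threshold_estimate, image)
    speculative = pool.submit(binarize, image, predicted)   # optimistic work

    actual = actual_future.result()
    if abs(actual - predicted) < 5:        # prediction close enough: keep result
        result = speculative.result()
    else:                                  # rollback: redo with the true value
        result = binarize(image, actual)

print(result)
```

When the prediction is usually right, the speculative branch hides the latency of the slow computation, which is the average-case gain the paper targets.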
A fungicide-responsive kinase as a tool for synthetic cell fate regulation.
Furukawa, Kentaro; Hohmann, Stefan
2015-08-18
Engineered biological systems that precisely execute defined tasks have major potential for medicine and biotechnology. For instance, gene- or cell-based therapies targeting pathogenic cells may replace time- and resource-intensive drug development. Engineering signal transduction systems is a promising, yet presently underexplored approach. Here, we exploit a fungicide-responsive heterologous histidine kinase for pathway engineering and synthetic cell fate regulation in the budding yeast Saccharomyces cerevisiae. Rewiring the osmoregulatory Hog1 MAPK signalling system generates yeast cells programmed to execute three different tasks. First, a synthetic negative feedback loop implemented by employing the fungicide-responsive kinase and a fungicide-resistant derivative reshapes the Hog1 activation profile, demonstrating how signalling dynamics can be engineered. Second, combinatorial integration of different genetic parts including the histidine kinases, a pathway activator and chemically regulated promoters enables control of yeast growth and/or gene expression in a two-input Boolean logic manner. Finally, we implemented a genetic 'suicide attack' system, in which engineered cells eliminate target cells and themselves in a specific and controllable manner. Taken together, fungicide-responsive kinases can be applied in different constellations to engineer signalling behaviour. Sensitizing engineered cells to existing chemicals may be generally useful for future medical and biotechnological applications. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Environmental Testing of the NEXT PM1R Ion Engine
NASA Technical Reports Server (NTRS)
Snyder, John S.; Anderson, John R.; VanNoord, Jonathan L.; Soulas, George C.
2007-01-01
The NEXT propulsion system is an advanced ion propulsion system presently under development that is oriented towards robotic exploration of the solar system using solar electric power. The subsystem includes an ion engine, power processing unit, feed system components, and thruster gimbal. The Prototype Model engine PM1 was subjected to qualification-level environmental testing in 2006 to demonstrate compatibility with environments representative of anticipated mission requirements. Although the testing was largely successful, several issues were identified including the fragmentation of potting cement on the discharge and neutralizer cathode heater terminations during vibration which led to abbreviated thermal testing, and generation of particulate contamination from manufacturing processes and engine materials. The engine was reworked to address most of these findings, renamed PM1R, and the environmental test sequence was repeated. Thruster functional testing was performed before and after the vibration and thermal-vacuum tests. Random vibration testing, conducted with the thruster mated to the breadboard gimbal, was executed at 10.0 Grms for 2 min in each of three axes. Thermal-vacuum testing included three thermal cycles from 120 to 215 C with hot engine re-starts. Thruster performance was nominal throughout the test program, with minor variations in a few engine operating parameters likely caused by facility effects. There were no significant changes in engine performance as characterized by engine operating parameters, ion optics performance measurements, and beam current density measurements, indicating no significant changes to the hardware as a result of the environmental testing. The NEXT PM1R engine and the breadboard gimbal were found to be well-designed against environmental requirements based on the results reported herein. The redesigned cathode heater terminations successfully survived the vibration environments. Based on the results of this test program and confidence in the engineering solutions available for the remaining findings of the first test program, specifically the particulate contamination, the hardware environmental qualification program can proceed with confidence
A 3D character animation engine for multimodal interaction on mobile devices
NASA Astrophysics Data System (ADS)
Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo
2005-03-01
Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques make it possible to overcome these issues, guaranteeing smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted to the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Owen, Jeffrey E.
1988-01-01
A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.
Predicting Academic Performance of Master's Students in Engineering Management
ERIC Educational Resources Information Center
Calisir, Fethi; Basak, Ecem; Comertoglu, Sevinc
2016-01-01
The purpose of this study is to investigate the factors affecting academic achievement of the master's students who are enrolling in the executive engineering management master's programs in Turkey. These factors include admission requirements (entrance examination, undergraduate grade point average, English proficiency) and demographic attributes…
NASA Technical Reports Server (NTRS)
Violett, Rebeca S.
1989-01-01
The analysis performed on the Main Injector LOX Inlet Assembly located on the Space Shuttle Main Engine is summarized. An ANSYS finite element model of the inlet assembly was built and executed. Static stress analysis was also performed.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-24
... OFFICE OF SCIENCE AND TECHNOLOGY POLICY Nanoscale Science, Engineering, and Technology Subcommittee; Committee on Technology, National Science and Technology Council; Notice of Public Meeting AGENCY: Executive Office of the President, Office of Science and Technology Policy. ACTION: Notice of Public Meeting...
Environmental Testing of the NEXT PM1 Ion Engine
NASA Technical Reports Server (NTRS)
Synder, John S.; Anderson, John R.; VanNoord, Jonathan L.; Soulas, George C.
2008-01-01
The NEXT propulsion system is an advanced ion propulsion system presently under development that is oriented towards robotic exploration of the solar system using solar electric power. The Prototype Model engine PM1 was subjected to qualification-level environmental testing to demonstrate compatibility with environments representative of anticipated mission requirements. Random vibration testing, conducted with the thruster mated to the breadboard gimbal, was executed at 10.0 Grms for 2 minutes in each of three axes. Thermal-vacuum testing included a deep cold soak of the engine to temperatures of -168 C and thermal cycling from -120 to 203 C. Although the testing was largely successful, several issues were identified including the fragmentation of potting cement on the discharge and neutralizer cathode heater terminations during vibration which led to abbreviated thermal testing, and generation of particulate contamination from manufacturing processes and engine materials. Thruster performance was nominal throughout the test program, with minor variations in some engine operating parameters likely caused by facility effects. In general, the NEXT PM1 engine and the breadboard gimbal were found to be well-designed against environmental requirements based on the results reported herein. After resolution of the findings from this test program the hardware environmental qualification program can proceed with confidence.
The opto-mechanical design process: from vision to reality
NASA Astrophysics Data System (ADS)
Kvamme, E. Todd; Stubbs, David M.; Jacoby, Michael S.
2017-08-01
The design process for an opto-mechanical sub-system is discussed from requirements development through test. The process begins with a proper mission understanding and the development of requirements for the system. Preliminary design activities are then discussed with iterative analysis and design work being shared between the design, thermal, and structural engineering personnel. Readiness for preliminary review and the path to a final design review are considered. The value of prototyping and risk mitigation testing is examined with a focus on when it makes sense to execute a prototype test program. System level margin is discussed in general terms, and the practice of trading margin in one area of performance to meet another area is reviewed. Requirements verification and validation is briefly considered. Testing and its relationship to requirements verification concludes the design process.
2016-06-28
Held in Colorado Springs, Colorado, from May 31 to June 4, 2015. ONR support in the amount of $15,000 was provided to support the planning, execution, and dissemination of... support to assist TMS in carrying out the various necessary phases of the planning, execution, and result-dissemination efforts of the Congress. In
NASA Astrophysics Data System (ADS)
Aronoff, H. I.; Leslie, J. J.; Mittleman, A. N.; Holt, S.
1983-11-01
This manual describes a Shared Time Engineering Program (STEP) conducted by the New England Apparel Manufacturers Association (NEAMA), headquartered in Fall River, Massachusetts, and funded by the Office of Trade Adjustment Assistance of the U.S. Department of Commerce. It is addressed to industry association executives, industrial engineers, and others interested in examining an innovative model of industrial engineering assistance to small plants which might be adapted to their particular needs.
Manufacturing Methods and Technology Program Plan. Update.
1981-11-01
Industrial Base Engineering Activity, Rock Island, Illinois 61299. I. Introduction: The MMT Program Plan Update... Industry Guide... obtained from that Plan, extra copies of which are available upon request from the Industrial Base Engineering Activity. Other sources for this data are... Major Subcommands (SUBMACOMs). The SUBMACOMs plan, formulate, budget, and execute individual projects. The Industrial Base Engineering Activity
Impacts and Opportunities for Engineering in the Era of Cloud Computing Systems
2012-01-31
A Report to the U.S. Department... Contents include 2.1.7 Engineering of Computational Behavior and 2.2 How the Cloud Will Impact Systems... Executive Summary: This report discusses the impact of cloud computing and the broader revolution in computing on systems, on the disciplines of
Chen, Yu-Xue; Liu, Zheng-Ren; Yu, Ying; Yao, En-Sheng; Liu, Xing-Hua; Liu, Lu
2017-10-01
The purpose of this study was to investigate the existence and extent of cognitive impairment in adult diabetes mellitus (DM) patients with episodes of recurrent severe hypoglycemia, by using meta-analysis to synthesize data across studies. PubMed, EMBASE and Cochrane Library search engines were used to identify studies on cognitive performance in DM patients with recurrent severe hypoglycemia. Random-effects meta-analysis was performed on seven eligible studies using an inverse-variance method. Effect sizes, which are the standardized differences between the experimental group and the control group, were calculated. Of the 853 studies identified, 7 met the inclusion criteria. Compared with control subjects, adult DM patients with episodes of recurrent severe hypoglycemia demonstrated significantly lower memory performance in both types of DM, and poorer processing speed in type 2 DM patients. There was no significant difference between adult DM patients with and those without severe hypoglycemia in other cognitive domains such as general intelligence, executive function, processing speed and psychomotor efficiency. Our results seem to confirm the hypothesis that cognitive dysfunction is characterized by worse memory and processing speed in adult DM patients with a history of recurrent severe hypoglycemia, whereas general intelligence, executive function, and psychomotor efficiency are spared.
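For readers unfamiliar with the inverse-variance random-effects method named above, the sketch below shows the standard DerSimonian-Laird pooling computation. The effect sizes and variances are illustrative placeholders, not the data from the seven included studies.

```python
import numpy as np

# Illustrative standardized mean differences and within-study variances.
d = np.array([-0.45, -0.30, -0.52, -0.18, -0.40])
v = np.array([0.040, 0.055, 0.048, 0.060, 0.052])

# Fixed-effect weights and Cochran's Q for heterogeneity.
w_fixed = 1.0 / v
pooled_fixed = np.sum(w_fixed * d) / w_fixed.sum()
q = np.sum(w_fixed * (d - pooled_fixed) ** 2)
df = len(d) - 1
c = w_fixed.sum() - np.sum(w_fixed ** 2) / w_fixed.sum()
tau2 = max(0.0, (q - df) / c)                 # between-study variance (DL)

# Random-effects weights and pooled effect with 95% confidence interval.
w_rand = 1.0 / (v + tau2)
pooled = np.sum(w_rand * d) / w_rand.sum()
se = np.sqrt(1.0 / w_rand.sum())
print(f"pooled effect = {pooled:.3f}, "
      f"95% CI = [{pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f}]")
```

A negative pooled effect in this setup would indicate poorer performance in the hypoglycemia group relative to controls.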
Income, neural executive processes, and preschool children's executive control.
Ruberry, Erika J; Lengua, Liliana J; Crocker, Leanna Harris; Bruce, Jacqueline; Upshaw, Michaela B; Sommerville, Jessica A
2017-02-01
This study aimed to specify the neural mechanisms underlying the link between low household income and diminished executive control in the preschool period. Specifically, we examined whether individual differences in the neural processes associated with executive attention and inhibitory control accounted for income differences observed in performance on a neuropsychological battery of executive control tasks. The study utilized a sample of preschool-aged children (N = 118) whose families represented the full range of income, with 32% of families at/near poverty, 32% lower income, and 36% middle to upper income. Children completed a neuropsychological battery of executive control tasks and then completed two computerized executive control tasks while EEG data were collected. We predicted that differences in the event-related potential (ERP) correlates of executive attention and inhibitory control would account for income differences observed on the executive control battery. Income and ERP measures were related to performance on the executive control battery. However, income was unrelated to ERP measures. The findings suggest that income differences observed in executive control during the preschool period might relate to processes other than executive attention and inhibitory control.
Towards a Decision Support System for Space Flight Operations
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Hogle, Charles; Ruszkowski, James
2013-01-01
The Mission Operations Directorate (MOD) at the Johnson Space Center (JSC) has put in place a Model Based Systems Engineering (MBSE) technological framework for the development and execution of the Flight Production Process (FPP). This framework has provided much added value and return on investment to date. This paper describes a vision for a model based Decision Support System (DSS) for the development and execution of the FPP and its design and development process. The envisioned system extends the existing MBSE methodology and technological framework which is currently in use. The MBSE technological framework currently in place enables the systematic collection and integration of data required for building an FPP model for a diverse set of missions. This framework includes the technology, people and processes required for rapid development of architectural artifacts. It is used to build a feasible FPP model for the first flight of spacecraft and for recurrent flights throughout the life of the program. This model greatly enhances our ability to effectively engage with a new customer. It provides a preliminary work breakdown structure, data flow information and a master schedule based on its existing knowledge base. These artifacts are then refined and iterated upon with the customer for the development of a robust end-to-end, high-level integrated master schedule and its associated dependencies. The vision is to enhance this framework to enable its application for uncertainty management, decision support and optimization of the design and execution of the FPP by the program. Furthermore, this enhanced framework will enable the agile response and redesign of the FPP based on observed system behavior. The discrepancy of the anticipated system behavior and the observed behavior may be due to the processing of tasks internally, or due to external factors such as changes in program requirements or conditions associated with other organizations that are outside of MOD. The paper provides a roadmap for the three increments of this vision. These increments include (1) hardware and software system components and interfaces with the NASA ground system, (2) uncertainty management and (3) re-planning and automated execution. Each of these increments provide value independently; but some may also enable building of a subsequent increment.
Study of solid rocket motors for a space shuttle booster. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
1972-01-01
An analysis of the solid propellant rocket engines for use with the space shuttle booster was conducted. A definition of the specific solid propellant rocket engine stage designs, development program requirements, production requirements, launch requirements, and cost data for each program phase were developed.
40 CFR 72.94 - Units with repowering extension plans.
Code of Federal Regulations, 2014 CFR
2014-07-01
... plans. (a) Design and engineering and contract requirements. No later than January 1, 2000, the... and the permitting authority: (1) Satisfactory documentation of a preliminary design and engineering effort. (2) A binding letter agreement for the executed and binding contract (or for each in a series of...
40 CFR 72.94 - Units with repowering extension plans.
Code of Federal Regulations, 2010 CFR
2010-07-01
... plans. (a) Design and engineering and contract requirements. No later than January 1, 2000, the... and the permitting authority: (1) Satisfactory documentation of a preliminary design and engineering effort. (2) A binding letter agreement for the executed and binding contract (or for each in a series of...
40 CFR 72.94 - Units with repowering extension plans.
Code of Federal Regulations, 2011 CFR
2011-07-01
... plans. (a) Design and engineering and contract requirements. No later than January 1, 2000, the... and the permitting authority: (1) Satisfactory documentation of a preliminary design and engineering effort. (2) A binding letter agreement for the executed and binding contract (or for each in a series of...
40 CFR 72.94 - Units with repowering extension plans.
Code of Federal Regulations, 2012 CFR
2012-07-01
... plans. (a) Design and engineering and contract requirements. No later than January 1, 2000, the... and the permitting authority: (1) Satisfactory documentation of a preliminary design and engineering effort. (2) A binding letter agreement for the executed and binding contract (or for each in a series of...
40 CFR 72.94 - Units with repowering extension plans.
Code of Federal Regulations, 2013 CFR
2013-07-01
... plans. (a) Design and engineering and contract requirements. No later than January 1, 2000, the... and the permitting authority: (1) Satisfactory documentation of a preliminary design and engineering effort. (2) A binding letter agreement for the executed and binding contract (or for each in a series of...
33 CFR 210.2 - Notice of award.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 210.2 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCUREMENT ACTIVITIES OF THE CORPS OF ENGINEERS § 210.2 Notice of award. The successful bidder... accompany the contract papers which are forwarded for execution. To avoid error, or confusing the notice of...
33 CFR 210.2 - Notice of award.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 210.2 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCUREMENT ACTIVITIES OF THE CORPS OF ENGINEERS § 210.2 Notice of award. The successful bidder... accompany the contract papers which are forwarded for execution. To avoid error, or confusing the notice of...
33 CFR 210.2 - Notice of award.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 210.2 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCUREMENT ACTIVITIES OF THE CORPS OF ENGINEERS § 210.2 Notice of award. The successful bidder... accompany the contract papers which are forwarded for execution. To avoid error, or confusing the notice of...
33 CFR 210.2 - Notice of award.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 210.2 Navigation and Navigable Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE PROCUREMENT ACTIVITIES OF THE CORPS OF ENGINEERS § 210.2 Notice of award. The successful bidder... accompany the contract papers which are forwarded for execution. To avoid error, or confusing the notice of...
47 CFR 2.911 - Written application required.
Code of Federal Regulations, 2011 CFR
2011-10-01
... representative who shall indicate his title, such as plant manager, project engineer, etc. (d) Technical test... signature; however, the Office of Engineering and Technology may allow signature by any symbol executed or... computer-generated electronic impulses. [39 FR 5919, Feb. 15, 1974, as amended at 39 FR 27802, Aug. 1, 1974...
47 CFR 2.911 - Written application required.
Code of Federal Regulations, 2010 CFR
2010-10-01
... representative who shall indicate his title, such as plant manager, project engineer, etc. (d) Technical test... signature; however, the Office of Engineering and Technology may allow signature by any symbol executed or... computer-generated electronic impulses. [39 FR 5919, Feb. 15, 1974, as amended at 39 FR 27802, Aug. 1, 1974...
Air Force Institute of Technology, Civil Engineering School: Environmental Protection Course.
ERIC Educational Resources Information Center
Air Force Inst. of Tech., Wright-Patterson AFB, OH. School of Engineering.
This document contains information assembled by the Civil Engineering School to meet the initial requirements of NEPA 1969 and Executive Orders which required the Air Force to implement an effective environmental protection program. This course presents the various aspects of Air Force environmental protection problems which military personnel…
47 CFR 2.911 - Written application required.
Code of Federal Regulations, 2013 CFR
2013-10-01
... representative who shall indicate his title, such as plant manager, project engineer, etc. (d) Technical test... signature; however, the Office of Engineering and Technology may allow signature by any symbol executed or... computer-generated electronic impulses. [39 FR 5919, Feb. 15, 1974, as amended at 39 FR 27802, Aug. 1, 1974...
47 CFR 2.911 - Written application required.
Code of Federal Regulations, 2012 CFR
2012-10-01
... representative who shall indicate his title, such as plant manager, project engineer, etc. (d) Technical test... signature; however, the Office of Engineering and Technology may allow signature by any symbol executed or... computer-generated electronic impulses. [39 FR 5919, Feb. 15, 1974, as amended at 39 FR 27802, Aug. 1, 1974...
47 CFR 2.911 - Written application required.
Code of Federal Regulations, 2014 CFR
2014-10-01
... representative who shall indicate his title, such as plant manager, project engineer, etc. (d) Technical test... signature; however, the Office of Engineering and Technology may allow signature by any symbol executed or... computer-generated electronic impulses. [39 FR 5919, Feb. 15, 1974, as amended at 39 FR 27802, Aug. 1, 1974...
Developing and Implementing an International Engineering Program.
ERIC Educational Resources Information Center
Jain, Ravi K.; Elliott, Gayle G.; Jain, Terumi Takahashi
The goals of the Trans European Mobility Program for University Students (TEMPUS) project include developing curriculum and implementing language and culture training programs with a focus on German and Japanese, and training engineers who have a global perspective. This document contains an executive summary in addition to the full length report…
Superconducting gravity gradiometer mission. Volume 1: Study team executive summary
NASA Technical Reports Server (NTRS)
Morgan, Samuel H. (Editor); Paik, Ho Jung (Editor)
1989-01-01
An executive summary is presented based upon the scientific and engineering studies and developments performed or directed by a Study Team composed of various Federal and University activities involved with the development of a three-axis Superconducting Gravity Gradiometer integrated with a six-axis superconducting accelerometer. This instrument is being developed for a future orbital mission to make precise global gravity measurements. The scientific justification and requirements for such a mission are discussed. This includes geophysics, the primary mission objective, as well as secondary objectives, such as navigation and tests of fundamental laws of physics, i.e., a null test of the inverse square law of gravitation and tests of general relativity. The instrument design and status along with mission analysis, engineering assessments, and preliminary spacecraft concepts are discussed. In addition, critical spacecraft systems and required technology advancements are examined. The mission requirements and an engineering assessment of a precursor flight test of the instrument are discussed.
Kashyap, Vipul; Morales, Alfredo; Hongsermeier, Tonya
2006-01-01
We present an approach and architecture for implementing scalable and maintainable clinical decision support at the Partners HealthCare System. The architecture integrates a business rules engine that executes declarative if-then rules stored in a rule-base referencing objects and methods in a business object model. The rules engine executes object methods by invoking services implemented on the clinical data repository. Specialized inferences that support classification of data and instances into classes are identified, and an approach to implement these inferences using an OWL-based ontology engine is presented. Alternative representations of these specialized inferences as if-then rules or OWL axioms are explored and their impact on the scalability and maintenance of the system is presented. Architectural alternatives for integration of clinical decision support functionality with the invoking application and the underlying clinical data repository, and their associated trade-offs, are discussed.
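As a minimal sketch of the declarative if-then pattern described above (rules referencing a business object model rather than raw repository rows), the example below runs a tiny forward-chaining rule set over a patient object. The class, attributes, and rules are hypothetical and are not Partners HealthCare's rule base.

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    age: int
    ldl: float                      # mg/dL
    on_statin: bool
    recommendations: list = field(default_factory=list)

# Each rule is a (condition, action) pair over the object model.
RULES = [
    (lambda p: p.ldl >= 190 and not p.on_statin,
     lambda p: p.recommendations.append("Consider high-intensity statin")),
    (lambda p: p.age >= 65 and p.ldl >= 130,
     lambda p: p.recommendations.append("Schedule lipid follow-up")),
]

def run_rules(patient: Patient) -> Patient:
    # Single-pass forward chaining: fire every rule whose condition holds.
    for condition, action in RULES:
        if condition(patient):
            action(patient)
    return patient

print(run_rules(Patient(age=70, ldl=200, on_statin=False)).recommendations)
```

Keeping the conditions declarative and separate from the object model is what lets rule content be maintained independently of the services that fetch data from the clinical repository.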
NASA Technical Reports Server (NTRS)
1986-01-01
As the NASA Center responsible for assembly, checkout, servicing, launch, recovery, and operational support of Space Transportation System elements and payloads, Kennedy Space Center is placing increasing emphasis on the Center's research and technology program. In addition to strengthening those areas of engineering and operations technology that contribute to safer, more efficient, and more economical execution of our current mission, we are developing the technological tools needed to execute the Center's mission relative to future programs. The Engineering Development Directorate encompasses most of the laboratories and other Center resources that are key elements of research and technology program implementation, and is responsible for implementation of the majority of the projects in this Kennedy Space Center 1986 Annual Report.
Research and technology at Kennedy Space Center
NASA Technical Reports Server (NTRS)
1989-01-01
As the NASA Center responsible for assembly, checkout, servicing, launch, recovery, and operational support of Space Transportation System elements and payloads, Kennedy Space Center is placing increasing emphasis on the Center's research and technology program. In addition to strengthening those areas of engineering and operations technology that contribute to safer, more efficient, and more economical execution of the current mission, the technological tools needed to execute the Center's mission relative to future programs are being developed. The Engineering Development Directorate encompasses most of the laboratories and other Center resources that are key elements of research and technology program implementation and is responsible for implementation of the majority of the projects in this Kennedy Space Center 1989 Annual Report.
Research and technology 1991 annual report
NASA Technical Reports Server (NTRS)
1991-01-01
As the NASA Center responsible for assembly, checkout, servicing, launch, recovery, and operational support of Space Transportation System elements and payloads, NASA Kennedy is placing increasing emphasis on the center's research and technology program. In addition to strengthening those areas of engineering and operations technology that contribute to safer, more efficient, and more economical execution of the current mission, the technical tools needed to execute the center's mission relative to future programs are being developed. The Engineering Development Directorate encompasses most of the labs and other center resources that are key elements of research and technology program implementation and is responsible for implementation of the majority of the projects in this Kennedy Space Center 1991 annual report.
Baudouin, Alexia; Clarys, David; Vanneste, Sandrine; Isingrini, Michel
2009-12-01
The aim of the present study was to examine executive dysfunctioning and decreased processing speed as potential mediators of age-related differences in episodic memory. We compared the performances of young and elderly adults in a free-recall task. Participants were also given tests to measure executive functions and perceptual processing speed and a coding task (the Digit Symbol Substitution Test, DSST). More precisely, we tested the hypothesis that executive functions would mediate the age-related differences observed in the free-recall task better than perceptual speed. We also tested the assumption that a coding task, assumed to involve both executive processes and perceptual speed, would be the best mediator of age-related differences in memory. Findings first confirmed that the DSST combines executive processes and perceptual speed. Secondly, they showed that executive functions are a significant mediator of age-related differences in memory, and that DSST performance is the best predictor.
Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling
NASA Technical Reports Server (NTRS)
Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw
2005-01-01
The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
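The sketch below illustrates the surrogate-based pattern the ISLOCE framework relies on: an "expensive" subsystem model is sampled offline, a small neural network is fit to the samples, and a system-level optimizer then searches the cheap approximation. The subsystem model, bounds, and network settings are made up for illustration; this is not the ISLOCE implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def subsystem_mass(x):
    # Hypothetical tank mass as a function of radius and wall thickness.
    r, t = x
    return 50.0 * r**2 * t + 5.0 / (r * t)

# Sample the expensive subsystem model offline.
rng = np.random.default_rng(0)
X = rng.uniform([0.5, 0.01], [2.0, 0.05], size=(200, 2))
y = np.array([subsystem_mass(x) for x in X])

# Fit a small neural-network surrogate to the samples.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, y)

# System-level optimization runs against the fast surrogate, not the model.
res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=[1.0, 0.03], bounds=[(0.5, 2.0), (0.01, 0.05)])
print("surrogate optimum:", res.x, "predicted mass:", res.fun)
```

Because the surrogate evaluates in microseconds, this kind of optimization can plausibly run as a background process during a concurrent engineering session, which is the trade the paper examines.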
Checkpointing for a hybrid computing node
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cher, Chen-Yong
2016-03-08
According to an aspect, a method for checkpointing in a hybrid computing node includes executing a task in a processing accelerator of the hybrid computing node. A checkpoint is created in a local memory of the processing accelerator. The checkpoint includes state data to restart execution of the task in the processing accelerator upon a restart operation. Execution of the task is resumed in the processing accelerator after creating the checkpoint. The state data of the checkpoint are transferred from the processing accelerator to a main processor of the hybrid computing node while the processing accelerator is executing the task.
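The toy sketch below mirrors the checkpointing flow described in the abstract: the accelerator-side task snapshots its state locally, resumes computing immediately, and the snapshot is shipped to main-processor storage in the background. The names and the simulated transfer are illustrative only, not the patented implementation.

```python
import threading, copy, time

main_processor_store = {}          # stand-in for main-processor memory

def transfer_checkpoint(step, snapshot):
    time.sleep(0.1)                # stand-in for a slow device-to-host transfer
    main_processor_store[step] = snapshot

def accelerator_task(steps=10, checkpoint_every=3):
    state = {"step": 0, "accum": 0.0}
    for step in range(1, steps + 1):
        state["step"], state["accum"] = step, state["accum"] + step * 0.5
        if step % checkpoint_every == 0:
            snapshot = copy.deepcopy(state)          # checkpoint in local memory
            threading.Thread(target=transfer_checkpoint,
                             args=(step, snapshot)).start()
        # Execution resumes immediately after the snapshot is taken.
    return state

final = accelerator_task()
time.sleep(0.5)                    # allow background transfers to finish
print("final state:", final)
print("checkpoints on main processor:", sorted(main_processor_store))
```

The key property being illustrated is that the task does not stall while its saved state migrates off the accelerator, so a restart can resume from the most recent transferred checkpoint.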
Looby, Mairead; Ibarra, Neysi; Pierce, James J; Buckley, Kevin; O'Donovan, Eimear; Heenan, Mary; Moran, Enda; Farid, Suzanne S; Baganz, Frank
2011-01-01
This study describes the application of quality by design (QbD) principles to the development and implementation of a major manufacturing process improvement for a commercially distributed therapeutic protein produced in Chinese hamster ovary cell culture. The intent of this article is to focus on QbD concepts, and provide guidance and understanding on how the various components combine together to deliver a robust process in keeping with the principles of QbD. A fed-batch production culture and a virus inactivation step are described as representative examples of upstream and downstream unit operations that were characterized. A systematic approach incorporating QbD principles was applied to both unit operations, involving risk assessment of potential process failure points, small-scale model qualification, design and execution of experiments, definition of operating parameter ranges and process validation acceptance criteria followed by manufacturing-scale implementation and process validation. Statistical experimental designs were applied to the execution of process characterization studies evaluating the impact of operating parameters on product quality attributes and process performance parameters. Data from process characterization experiments were used to define the proven acceptable range and classification of operating parameters for each unit operation. Analysis of variance and Monte Carlo simulation methods were used to assess the appropriateness of process design spaces. Successful implementation and validation of the process in the manufacturing facility and the subsequent manufacture of hundreds of batches of this therapeutic protein verifies the approaches taken as a suitable model for the development, scale-up and operation of any biopharmaceutical manufacturing process. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
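As a small illustration of the Monte Carlo assessment of a design space mentioned above, the sketch below samples operating parameters within assumed proven acceptable ranges, predicts a quality attribute with a simple response model, and reports the fraction of simulated batches meeting an acceptance limit. The parameter ranges, response model, and limit are all fabricated for illustration and are not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical operating parameters for a virus-inactivation step.
ph = rng.uniform(3.4, 3.8, n)            # pH within its assumed acceptable range
hold_min = rng.uniform(60, 90, n)        # hold time, minutes
temp = rng.normal(20.0, 0.5, n)          # room temperature, deg C

# Made-up response model: residual impurity (%) vs. the operating parameters.
impurity = 0.8 - 0.9*(3.8 - ph) - 0.004*(hold_min - 60) - 0.01*(temp - 20)

acceptable = impurity <= 0.75            # hypothetical acceptance criterion
print(f"fraction of simulated batches within limit: {acceptable.mean():.3f}")
```

A high in-specification fraction across the sampled ranges is the kind of evidence used to argue that the proposed design space is appropriate before committing to manufacturing-scale validation.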
McKenna, Róisín; Rushe, T.; Woodcock, Kate A.
2017-01-01
The structure of executive function (EF) has been the focus of much debate for decades. What is more, the complexity and diversity provided by the developmental period only adds to this contention. The development of executive function plays an integral part in the expression of children's behavioral, cognitive, social, and emotional capabilities. Understanding how these processes are constructed during development allows for effective measurement of EF in this population. This meta-analysis aims to contribute to a better understanding of the structure of executive function in children. A coordinate-based meta-analysis was conducted (using BrainMap GingerALE 2.3), which incorporated studies administering functional magnetic resonance imaging (fMRI) during inhibition, switching, and working memory updating tasks in typical children (aged 6–18 years). The neural activation common across all executive tasks was compared to that shared by tasks pertaining only to inhibition, switching or updating, which are commonly considered to be fundamental executive processes. Results support the existence of partially separable but partially overlapping inhibition, switching, and updating executive processes at a neural level, in children over 6 years. Further, the shared neural activation across all tasks (associated with a proposed “unitary” component of executive function) overlapped to different degrees with the activation associated with each individual executive process. These findings provide evidence to support the suggestion that one of the most influential structural models of executive functioning in adults can also be applied to children of this age. However, the findings also call for careful consideration and measurement of both specific executive processes, and unitary executive function in this population. Furthermore, a need is highlighted for a new systematic developmental model, which captures the integrative nature of executive function in children. PMID:28439231
1991-06-01
Validation And Reconstruction - Phase 1: System Architecture Study. Phase I Final Report (Contract NAS3-25883, CR-187124). Contents include an introduction, executive summary, and technical discussion covering a review of SSME test data and validation procedures, the elements of the Sensor Data Validation and Signal Reconstruction system, and the current NASA MSFC data review process.
NASA Technical Reports Server (NTRS)
Gonzalez, Guillermo A.; Lucy, Melvin H.; Massie, Jeffrey J.
2013-01-01
The NASA Langley Research Center, Engineering Directorate, Electronic System Branch, is responsible for providing pyrotechnic support capabilities to Langley Research Center unmanned flight and ground test projects. These capabilities include device selection, procurement, testing, problem solving, firing system design, fabrication and testing; ground support equipment design, fabrication and testing; and checkout procedures and procedures training for pyro technicians. This technical memorandum will serve as a guideline for the design, fabrication and testing of electropyrotechnic firing systems. The guidelines will discuss the entire process, beginning with requirements definition and ending with development and execution.
Energy self-sufficiency in Northampton, Massachusetts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The study is not an engineering analysis but begins the process of exploring the potential for conservation and local renewable-resource development in a specific community, Northampton, Massachusetts, with the social, institutional, and environmental factors in that community taken into account. Section I is an extensive executive summary of the full study, and Section II is a detailed examination of the potential for increased local energy self-sufficiency in Northampton, including current and future demand estimates, the possible role of conservation and renewable resources, and a discussion of the economic and social implications of alternative energy systems. (MOW)
Using AUTORAD for Cassini File Uplinks: Incorporating Automated Commanding into Mission Operations
NASA Technical Reports Server (NTRS)
Goo, Sherwin
2014-01-01
As the Cassini spacecraft embarked on the Solstice Mission in October 2010, the flight operations team faced a significant challenge in planning and executing the continuing tour of the Saturnian system. Faced with budget cuts that reduced the science and engineering staff by over a third in size, new and streamlined processes had to be developed to allow the Cassini mission to maintain a high level of science data return with a lower amount of available resources while still minimizing the risk. Automation was deemed an important key in enabling mission operations with reduced workforce and the Cassini flight team has made this goal a priority for the Solstice Mission. The operations team learned about a utility called AUTORAD which would give the flight operations team the ability to program selected command files for radiation up to seven days in advance and help minimize the need for off-shift support that could deplete available staffing during the prime shift hours. This paper will describe how AUTORAD is being utilized by the Cassini flight operations team and the processes that were developed or modified to ensure that proper oversight and verification is maintained in the generation and execution of radiated command files.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clifford, David J.; Harris, James M.
2014-12-01
This is the IDC Re-Engineering Phase 2 project Integrated Master Plan (IMP). The IMP presents the major accomplishments planned over time to re-engineer the IDC system. The IMP and the associated Integrated Master Schedule (IMS) are used for planning, scheduling, executing, and tracking the project technical work efforts. Revision history: V1.0, 12/2014, initial delivery, prepared by the IDC Re-engineering Project Team, authorized by M. Harris.
Sub-Saharan Africa Report: No. 2788
1983-04-21
Contents include: 'Technip CEO Jacques Celerier Interview'; 'Technip Operations'; 'Famine Threatens 50 Million Africans' (WEST AFRICA, 11 Apr 83). Excerpts: Jacques Celerier, chief executive officer of the Technip group, the leading French engineering company which has just been awarded the... Interview with Technip CEO (Dakar AFRICA, in French, Mar 83, pp 79-81; interview with Jacques Celerier, chief executive officer of Technip: 'Africa Has One...').
Maximizing Total QoS-Provisioning of Image Streams with Limited Energy Budget
NASA Astrophysics Data System (ADS)
Lee, Wan Yeon; Kim, Kyong Hoon; Ko, Young Woong
To fully utilize the limited battery energy of mobile electronic devices, we propose an adaptive adjustment method of processing quality for multiple image stream tasks running with widely varying execution times. This adjustment method completes the worst-case executions of the tasks within a given budget of energy, and maximizes the total reward value of processing quality obtained during their executions by exploiting the probability distribution of task execution times. The proposed method derives the maximum reward value for tasks executable with arbitrary processing quality, and a near-maximum value for tasks executable with a finite number of processing qualities. Our evaluation on a prototype system shows that the proposed method achieves larger reward values, by up to 57%, than the previous method.
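As a rough illustration of the kind of trade the abstract describes, the following Python sketch exhaustively assigns one of several processing-quality levels to each image-stream task so that worst-case energy stays within a budget while expected reward is maximized. The power draw, reward model, and task distributions are invented for illustration; this is not the authors' algorithm.

```python
import itertools

# Illustrative sketch only: brute-force quality assignment that maximizes total
# expected reward under a worst-case energy budget. All numbers are hypothetical.

POWER_WATTS = 2.0  # assumed processing power draw

def expected_reward(quality, exec_time_dist):
    """Toy reward model: quality weighted by the mean execution time."""
    mean_time = sum(t * p for t, p in exec_time_dist)
    return quality * mean_time

def worst_case_energy(quality, exec_time_dist):
    """Energy is budgeted for the worst-case execution time at this quality."""
    wcet = max(t for t, _ in exec_time_dist)
    return POWER_WATTS * wcet * quality

def best_assignment(tasks, quality_levels, energy_budget):
    """Pick one quality level per task; exhaustive search is fine for a few tasks."""
    best = (None, -1.0)
    for combo in itertools.product(quality_levels, repeat=len(tasks)):
        energy = sum(worst_case_energy(q, d) for q, d in zip(combo, tasks))
        if energy > energy_budget:
            continue
        reward = sum(expected_reward(q, d) for q, d in zip(combo, tasks))
        if reward > best[1]:
            best = (combo, reward)
    return best

if __name__ == "__main__":
    # Each task: list of (execution_time_seconds, probability) pairs.
    tasks = [
        [(0.010, 0.7), (0.030, 0.3)],
        [(0.005, 0.5), (0.020, 0.5)],
    ]
    qualities = [0.25, 0.5, 0.75, 1.0]
    print(best_assignment(tasks, qualities, energy_budget=0.08))
```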
ERIC Educational Resources Information Center
Garcia-Madruga, Juan A.; Elosua, Maria Rosa; Gil, Laura; Gomez-Veiga, Isabel; Vila, Jose Oscar; Orjales, Isabel; Contreras, Antonio; Rodriguez, Raquel; Melero, Maria Angeles; Duque, Gonzalo
2013-01-01
Reading comprehension is a highly demanding task that involves the simultaneous process of extracting and constructing meaning in which working memory's executive processes play a crucial role. In this article, a training program on working memory's executive processes to improve reading comprehension is presented and empirically tested in two…
Energy Systems Test Area (ESTA) Electrical Power Systems Test Operations: User Test Planning Guide
NASA Technical Reports Server (NTRS)
Salinas, Michael J.
2012-01-01
Test process, milestones and inputs are unknowns to first-time users of the ESTA Electrical Power Systems Test Laboratory. The User Test Planning Guide aids in establishing expectations for both NASA and non-NASA facility customers. The potential audience for this guide includes both internal and commercial spaceflight hardware/software developers. It is intended to assist their test engineering personnel in test planning and execution. Material covered includes a roadmap of the test process, roles and responsibilities of facility and user, major milestones, facility capabilities, and inputs required by the facility. Samples of deliverables, test article interfaces, and inputs necessary to define test scope, cost, and schedule are included as an appendix to the guide.
Using Decision Structures for Policy Analysis in Software Product-line Evolution - A Case Study
NASA Astrophysics Data System (ADS)
Sarang, Nita; Sanglikar, Mukund A.
Project management decisions are the primary basis for project success (or failure). Most such decisions are based on an intuitive understanding of the underlying software engineering and management process and are therefore liable to be misjudged. Our problem domain is product-line evolution. We model the dynamics of the process by incorporating feedback loops appropriate to two decision structures: staffing policy, and the forces of growth associated with long-term software evolution. The model is executable and supports project managers in assessing the long-term effects of possible actions. Our work also corroborates results from earlier studies of E-type systems, in particular the FEAST project and the rules for software evolution, planning and management.
Code of Federal Regulations, 2010 CFR
2010-01-01
... in the science, engineering, and technology fields. Far too many women lack health insurance, and... same educational and career opportunities as our sons, that affects entire communities, our economy... women in the science, engineering, and technology workforce, and to ensure that Federal programs and...
Research Talent in the Natural Sciences and Engineering: Supply and Demand Projections to 1990.
ERIC Educational Resources Information Center
Natural Sciences and Engineering Research Council, Ottawa (Ontario).
This report presents conditional forecasts of the research talent required for the Canadian government's economic growth and research and development (R&D) targets. A number of alternative scenarios are also assessed. The study limits itself to postgraduate manpower in the natural sciences and engineering. Following an executive summary and…
Some research advances in computer graphics that will enhance applications to engineering design
NASA Technical Reports Server (NTRS)
Allan, J. J., III
1975-01-01
Research in man/machine interactions and graphics hardware/software that will enhance applications to engineering design was described. Research aspects of executive systems, command languages, and networking used in the computer applications laboratory are mentioned. Finally, a few areas where little or no research is being done were identified.
AdaFF: Adaptive Failure-Handling Framework for Composite Web Services
NASA Astrophysics Data System (ADS)
Kim, Yuna; Lee, Wan Yeon; Kim, Kyong Hoon; Kim, Jong
In this paper, we propose a novel Web service composition framework which dynamically accommodates various failure recovery requirements. In the proposed framework, called Adaptive Failure-handling Framework (AdaFF), failure-handling submodules are prepared during the design of a composite service, and some of them are systematically selected and automatically combined with the composite Web service at service instantiation in accordance with the requirements of individual users. In contrast, existing frameworks cannot adapt the failure-handling behaviors to users' requirements. AdaFF rapidly delivers a composite service supporting the requirement-matched failure handling without manual development, and contributes to a flexible composite Web service design in that service architects need not be concerned with failure handling or the varying requirements of users. For proof of concept, we implement a prototype system of AdaFF, which automatically generates a composite service instance with Web Services Business Process Execution Language (WS-BPEL) according to the user's requirements specified in XML format and executes the generated instance on the ActiveBPEL engine.
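The following Python sketch illustrates the general idea of binding a requirement-matched failure-handling submodule to a composite-service instance at instantiation time. It is a conceptual stand-in only: the handler names, requirement labels, and service structure are hypothetical and do not reflect the actual AdaFF or WS-BPEL artifacts.

```python
# Conceptual sketch (not the actual AdaFF code): failure-handling submodules are
# prepared at design time; the matching one is bound to a composite-service
# instance once the user's requirement is known. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict

def retry_handler(step: str) -> str:
    return f"retrying failed step '{step}' up to 3 times"

def compensate_handler(step: str) -> str:
    return f"running compensation activities for work completed before '{step}'"

def ignore_handler(step: str) -> str:
    return f"logging failure of '{step}' and continuing"

# Submodules prepared during composite-service design.
FAILURE_SUBMODULES: Dict[str, Callable[[str], str]] = {
    "retry": retry_handler,
    "compensate": compensate_handler,
    "best-effort": ignore_handler,
}

@dataclass
class CompositeServiceInstance:
    name: str
    on_failure: Callable[[str], str]

    def invoke(self, step: str, ok: bool) -> str:
        return f"{step}: ok" if ok else self.on_failure(step)

def instantiate(name: str, user_requirement: str) -> CompositeServiceInstance:
    """Combine the composite service with the requirement-matched submodule."""
    handler = FAILURE_SUBMODULES[user_requirement]
    return CompositeServiceInstance(name=name, on_failure=handler)

if __name__ == "__main__":
    svc = instantiate("book-trip", user_requirement="compensate")
    print(svc.invoke("reserve-hotel", ok=False))
```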
Goal-directed or aimless? EEG differences during the preparation of a reach-and-touch task.
Pereira, Joana; Ofner, Patrick; Muller-Putz, Gernot R
2015-08-01
The natural control of neuroprostheses is currently a challenge in both rehabilitation engineering and brain-computer interfaces (BCIs) research. One of the recurrent problems is to know exactly when to activate such devices. For the execution of the most common activities of daily living, these devices only need to be active when in the presence of a goal. Therefore, we believe that the distinction between the planning of goal-directed and aimless movements, using non-invasive recordings, can be useful for the implementation of a simple and effective activation method for these devices. We investigated whether those differences are detectable during a reach-and-touch task, using electroencephalography (EEG). Event-related potentials and oscillatory activity changes were studied. Our results show that there are statistically significant differences between both types of movement. Combining this information with movement decoding would allow a natural control strategy for BCIs, exclusively relying on the cognitive processes behind movement preparation and execution.
NASA Astrophysics Data System (ADS)
Umbarkar, A. J.; Balande, U. T.; Seth, P. D.
2017-06-01
The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), fireflies are ranked using a sorting algorithm. The original FA uses bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used is the set of unconstrained benchmark functions from CEC 2005 [22]. The comparison of FA using bubble sort and FA using quick sort is performed with respect to best, worst, mean, standard deviation, number of comparisons and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the problem dimension is varied, the algorithm performs better at lower dimensions than at higher dimensions.
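A minimal Python sketch of the comparison-count aspect of this study is given below: fireflies are ranked by brightness once with bubble sort and once with a quicksort-style partition, and the number of pairwise comparisons is tallied. The brightness values are random and the code is generic, not the authors' implementation.

```python
# Illustrative comparison of the two ranking strategies mentioned in the abstract:
# counting pairwise comparisons needed to rank fireflies by brightness.

import random

def bubble_sort_rank(values):
    a, comparisons = list(values), 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

def quick_sort_rank(values):
    comparisons = 0
    def qs(a):
        nonlocal comparisons
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        comparisons += len(rest)              # one comparison per element vs. pivot
        left = [x for x in rest if x <= pivot]
        right = [x for x in rest if x > pivot]
        return qs(left) + [pivot] + qs(right)
    return qs(list(values)), comparisons

if __name__ == "__main__":
    brightness = [random.random() for _ in range(50)]   # 50 fireflies
    _, bubble_cmp = bubble_sort_rank(brightness)
    _, quick_cmp = quick_sort_rank(brightness)
    print(f"bubble sort comparisons: {bubble_cmp}, quick sort comparisons: {quick_cmp}")
```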
Computer Program for the Design and Off-Design Performance of Turbojet and Turbofan Engine Cycles
NASA Technical Reports Server (NTRS)
Morris, S. J.
1978-01-01
The rapid computer program is designed to be run in a stand-alone mode or operated within a larger program. The computation is based on a simplified one-dimensional gas turbine cycle. Each component in the engine is modeled thermodynamically. The component efficiencies used in the thermodynamic modeling are scaled for the off-design conditions from input design point values using empirical trends which are included in the computer code. The engine cycle program is capable of producing reasonable engine performance predictions with a minimum of computer execution time. The current computer execution time on the IBM 360/67 for one Mach number, one altitude, and one power setting is about 0.1 seconds. The principal assumption used in the calculation is that the compressor is operated along a line of maximum adiabatic efficiency on the compressor map. The fluid properties are computed for the combustion mixture, but dissociation is not included. The procedure included in the program is only for the combustion of JP-4, methane, or hydrogen.
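As an example of the component-level thermodynamic relations such a cycle program evaluates, the Python sketch below computes the exit temperature and specific work of a compressor stage from a pressure ratio and an assumed adiabatic efficiency. The gas properties and efficiency value are illustrative assumptions, not values from the NASA program.

```python
# Minimal sketch, not the NASA program itself: the kind of one-dimensional,
# component-by-component relation such a cycle code evaluates for a compressor.

GAMMA = 1.4        # ratio of specific heats for air (assumed)
CP = 1005.0        # specific heat at constant pressure, J/(kg*K) (assumed)

def compressor_exit(t_in_k: float, pressure_ratio: float, eta_c: float):
    """Return exit temperature (K) and specific work (J/kg) for a compressor stage."""
    t_ideal = t_in_k * pressure_ratio ** ((GAMMA - 1.0) / GAMMA)  # isentropic exit T
    delta_t = (t_ideal - t_in_k) / eta_c                          # real rise is larger
    t_out = t_in_k + delta_t
    work = CP * delta_t                                           # work absorbed per kg of air
    return t_out, work

if __name__ == "__main__":
    t_out, w = compressor_exit(t_in_k=288.15, pressure_ratio=20.0, eta_c=0.85)
    print(f"compressor exit temperature: {t_out:.1f} K, specific work: {w/1000:.1f} kJ/kg")
```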
An Ontology-Based Conceptual Model For Accumulating And Reusing Knowledge In A DMAIC Process
NASA Astrophysics Data System (ADS)
Nguyen, ThanhDat; Kifor, Claudiu Vasile
2015-09-01
DMAIC (Define, Measure, Analyze, Improve, and Control) is an important process used to enhance the quality of processes based on knowledge. However, it is difficult to access DMAIC knowledge. Conventional approaches struggle with structuring and reusing DMAIC knowledge, mainly because DMAIC knowledge is not represented and organized systematically. In this article, we overcome this problem by means of a conceptual model that combines the DMAIC process, knowledge management, and ontology engineering. The main idea of our model is to utilize ontologies to represent the knowledge generated by each of the DMAIC phases. We build five different knowledge bases for storing all knowledge of the DMAIC phases with the support of necessary tools and appropriate techniques in the Information Technology area. Consequently, these knowledge bases make knowledge available to experts, managers, and web users during or after DMAIC execution in order to share and reuse existing knowledge.
Using AI and Semantic Web Technologies to attack Process Complexity in Open Systems
NASA Astrophysics Data System (ADS)
Thompson, Simon; Giles, Nick; Li, Yang; Gharib, Hamid; Nguyen, Thuc Duong
Recently many vendors and groups have advocated using BPEL and WS-BPEL as a workflow language to encapsulate business logic. While encapsulating workflow and process logic in one place is a sensible architectural decision, the implementation of complex workflows suffers from the same problems that made managing and maintaining hierarchical procedural programs difficult. BPEL lacks constructs for logical modularity, such as the requirements construct from the STL [12] or the ability to adapt constructs like pure abstract classes for the same purpose. We describe a system that uses semantic web and agent concepts to implement an abstraction layer for BPEL based on the notion of Goals and service typing. AI planning was used to enable process engineers to create and validate systems that used services and goals as first-class concepts and compiled processes at run time for execution.
Systems engineering principles for the design of biomedical signal processing systems.
Faust, Oliver; Acharya U, Rajendra; Sputh, Bernhard H C; Min, Lim Choo
2011-06-01
Systems engineering aims to produce reliable systems which function according to specification. In this paper we follow a systems engineering approach to design a biomedical signal processing system. We discuss requirements capturing, specification definition, implementation and testing of a classification system. These steps are executed as formally as possible. The requirements, which motivate the system design, are based on diabetes research. The main requirement for the classification system is to be a reliable component of a machine which controls diabetes. Reliability is very important, because uncontrolled diabetes may lead to hyperglycaemia (raised blood sugar) and over a period of time may cause serious damage to many of the body systems, especially the nerves and blood vessels. In a second step, these requirements are refined into a formal CSP‖B model. The formal model expresses the system functionality in a clear and semantically strong way. Subsequently, the proven system model was translated into an implementation. This implementation was tested with use cases and failure cases. Formal modeling and automated model checking gave us deep insight into the system functionality. This insight enabled us to create a reliable and trustworthy implementation. With extensive tests we established trust in the reliability of the implementation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Kiefer, Markus
2012-01-01
Unconscious priming is a prototypical example of an automatic process, which is initiated without deliberate intention. Classical theories of automaticity assume that such unconscious automatic processes occur in a purely bottom-up driven fashion independent of executive control mechanisms. In contrast to these classical theories, our attentional sensitization model of unconscious information processing proposes that unconscious processing is susceptible to executive control and is only elicited if the cognitive system is configured accordingly. It is assumed that unconscious processing depends on attentional amplification of task-congruent processing pathways as a function of task sets. This article provides an overview of the latest research on executive control influences on unconscious information processing. I introduce refined theories of automaticity with a particular focus on the attentional sensitization model of unconscious cognition which is specifically developed to account for various attentional influences on different types of unconscious information processing. In support of the attentional sensitization model, empirical evidence is reviewed demonstrating executive control influences on unconscious cognition in the domains of visuo-motor and semantic processing: subliminal priming depends on attentional resources, is susceptible to stimulus expectations and is influenced by action intentions and task sets. This suggests that even unconscious processing is flexible and context-dependent as a function of higher-level executive control settings. I discuss that the assumption of attentional sensitization of unconscious information processing can accommodate conflicting findings regarding the automaticity of processes in many areas of cognition and emotion. This theoretical view has the potential to stimulate future research on executive control of unconscious processing in healthy and clinical populations. PMID:22470329
Do attentional capacities and processing speed mediate the effect of age on executive functioning?
Gilsoul, Jessica; Simon, Jessica; Hogge, Michaël; Collette, Fabienne
2018-02-06
The executive processes are well known to decline with age, and similar data also exists for attentional capacities and processing speed. Therefore, we investigated whether these two last nonexecutive variables would mediate the effect of age on executive functions (inhibition, shifting, updating, and dual-task coordination). We administered a large battery of executive, attentional and processing speed tasks to 104 young and 71 older people, and we performed mediation analyses with variables showing a significant age effect. All executive and processing speed measures showed age-related effects while only the visual scanning task performance (selective attention) was explained by age when controlled for gender and educational level. Regarding mediation analyses, visual scanning partially mediated the age effect on updating while processing speed partially mediated the age effect on shifting, updating and dual-task coordination. In a more exploratory way, inhibition was also found to partially mediate the effect of age on the three other executive functions. Attention did not greatly influence executive functioning in aging while, in agreement with the literature, processing speed seems to be a major mediator of the age effect on these processes. Interestingly, the global pattern of results seems also to indicate an influence of inhibition but further studies are needed to confirm the role of that variable as a mediator and its relative importance by comparison with processing speed.
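The product-of-coefficients logic behind this kind of mediation analysis can be sketched in a few lines of Python on synthetic data, as below: regress the mediator on age (path a), the outcome on the mediator and age (path b and the direct effect), and compare the indirect effect a*b with the total effect. The variable names and simulated effect sizes are assumptions for illustration only, not the authors' analysis pipeline.

```python
# Illustrative product-of-coefficients mediation sketch (age -> processing speed
# -> executive score) on synthetic data; not the study's statistical software.

import numpy as np

rng = np.random.default_rng(0)
n = 175
age = rng.uniform(20, 80, n)
speed = 0.05 * age + rng.normal(0, 1, n)                 # mediator worsens with age
executive = 0.8 * speed + 0.01 * age + rng.normal(0, 1, n)

def slope(y, *predictors):
    """Regression coefficients of y on an intercept plus the predictors."""
    X = np.column_stack([np.ones_like(y)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]                                      # drop the intercept

(a,) = slope(speed, age)                  # path a: age -> mediator
b, direct = slope(executive, speed, age)  # path b and direct effect c'
(total,) = slope(executive, age)          # total effect c

print(f"indirect (a*b): {a*b:.3f}, direct (c'): {direct:.3f}, total (c): {total:.3f}")
```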
NASA Technical Reports Server (NTRS)
Henke, Luke
2010-01-01
The ICARE method is a flexible, widely applicable method for systems engineers to solve problems and resolve issues in a complete and comprehensive manner. The method can be tailored by diverse users for direct application to their function (e.g. system integrators, design engineers, technical discipline leads, analysts, etc.). The clever acronym, ICARE, instills the attitude of accountability, safety, technical rigor and engagement in the problem resolution: Identify, Communicate, Assess, Report, Execute (ICARE). This method was developed through observation of the approach taken by Space Shuttle Propulsion Systems Engineering and Integration (PSE&I) office personnel, in an attempt to succinctly describe the actions of an effective systems engineer. Additionally, it evolved from an effort to make a broadly-defined checklist for a PSE&I worker to perform their responsibilities in an iterative and recursive manner. The National Aeronautics and Space Administration (NASA) Systems Engineering Handbook states that engineering of NASA systems requires a systematic and disciplined set of processes that are applied recursively and iteratively for the design, development, operation, maintenance, and closeout of systems throughout the life cycle of the programs and projects. ICARE is a method that can be applied within the boundaries and requirements of NASA's systems engineering set of processes to provide an elevated sense of duty and responsibility to crew and vehicle safety. The importance of a disciplined set of processes and a safety-conscious mindset increases with the complexity of the system. Moreover, the larger the system and the larger the workforce, the more important it is to encourage the usage of the ICARE method as widely as possible. According to the NASA Systems Engineering Handbook, elements of a system can include people, hardware, software, facilities, policies and documents; all things required to produce system-level results, qualities, properties, characteristics, functions, behavior and performance. The ICARE method can be used to improve all elements of a system and, consequently, the system-level functional, physical and operational performance. Even though ICARE was specifically designed for a systems engineer, any person whose job is to examine another person, product, or process can use the ICARE method to improve effectiveness, implementation, usefulness, value, capability, efficiency, integration, design, and/or marketability. This paper provides the details of the ICARE method, emphasizing the method's application to systems engineering. In addition, a sample of other, non-systems engineering applications is briefly discussed to demonstrate how ICARE can be tailored to a variety of diverse jobs (from project management to parenting).
Antonini, Tanya N; Ris, M Douglas; Grosshans, David R; Mahajan, Anita; Okcu, M Fatih; Chintagumpala, Murali; Paulino, Arnold; Child, Amanda E; Orobio, Jessica; Stancel, Heather H; Kahalley, Lisa S
2017-07-01
This study examines attention, processing speed, and executive functioning in pediatric brain tumor survivors treated with proton beam radiation therapy (PBRT). We examined 39 survivors (age 6-19 years) who were 3.61 years post-PBRT on average. Craniospinal (CSI; n=21) and focal (n=18) subgroups were analyzed. Attention, processing speed, and executive functioning scores were compared to population norms, and clinical/demographic risk factors were examined. As a group, survivors treated with focal PBRT exhibited attention, processing speed, and executive functioning that did not differ from population norms (all p>0.05). Performance in the CSI group across attention scales was normative (all p>0.05), but areas of relative weakness were identified on one executive functioning subtest and several processing speed subtests (all p<0.01). Survivors treated with PBRT may exhibit relative resilience in cognitive domains traditionally associated with radiation late effects. Attention, processing speed, and executive functioning remained intact and within normal limits for survivors treated with focal PBRT. Among survivors treated with CSI, a score pattern emerged that was suggestive of difficulties in underlying component skills (i.e., processing speed) rather than true executive dysfunction. No evidence of profound cognitive impairment was found in either group. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Menrad, Robert J.; Larson, Wiley J.
2008-01-01
This paper shares the findings of NASA's Integrated Learning and Development Program (ILDP) in its effort to reinvigorate the hands-on practice of space systems engineering and project/program management through focused coursework, training opportunities, on-the-job learning and special assignments. Prior to March 2005, NASA responsibility for technical workforce development (the program/project manager, systems engineering, discipline engineering and associated communities) was executed by two parallel organizations. In March 2005 these organizations merged. The resulting program-ILDP-was chartered to implement an integrated competency-based development model capable of enhancing NASA's technical workforce performance as they face the complex challenges of Earth science, space science, aeronautics and human spaceflight missions. Results developed in collaboration with NASA Field Centers are reported. This work led to definition of the agency's first integrated technical workforce development model known as the Requisite Occupation Competence and Knowledge (the ROCK). Critical processes and products are presented including: 'validation' techniques to guide model development, the Design-A-CUrriculuM (DACUM) process, and creation of the agency's first systems engineering body-of-knowledge. Findings were validated via nine focus groups from industry and government and with over 17 space-related organizations, at an estimated cost exceeding $300,000 (US). Masters-level programs and training programs have evolved to address the needs of these practitioner communities based upon these results. The ROCK reintroduced rigor and depth to the practitioner's development in these critical disciplines, enabling their ability to take mission concepts from imagination to reality.
NASA Technical Reports Server (NTRS)
Gibbel, Mark; Larson, Timothy
2000-01-01
An Engineering-of-Failure approach was applied to the design and execution of an accelerated product qualification test in support of a risk assessment of a "work-around" necessitated by an on-orbit failure of another piece of hardware on the Mars Global Surveyor spacecraft. The proposed work-around involved exceeding the previous qualification experience both in terms of extreme cold exposure level and in terms of demonstrated low cycle fatigue life for the power shunt assemblies. An analysis was performed to identify potential failure sites, modes and associated failure mechanisms consistent with the new use conditions. A test was then designed and executed which accelerated the failure mechanisms identified by analysis. Verification of the resulting failure mechanism concluded the effort.
Research and technology 1987 annual report of the Kennedy Space Center
NASA Technical Reports Server (NTRS)
1987-01-01
As the NASA Center responsible for assembly, checkout, servicing, launch, recovery, and operational support of Space Transportation System elements and payloads, Kennedy Space Center is placing increasing emphasis on the Center's research and technology program. In addition to strengthening those areas of engineering and operations technology that contribute to safer, more efficient, and more economical execution of our current mission, we are developing the technological tools needed to execute the Center's mission relative to future programs. The Engineering Development Directorate encompasses most of the laboratories and other Center resources that are key elements of research and technology program implementation, and is responsible for implementation of the majority of the projects of this Kennedy Space Center 1987 Annual Report.
77 FR 2096 - Proposal Review Panel for Materials Research; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-13
... Engineering Center (MRSEC) at Massachusetts Institute of Technology (MIT) by the Division of Materials... 16, 2012 7:15 a.m.-8:15 a.m. Closed--Executive Session 8:15 a.m.-5 p.m. Open--Review of the MIT MRSEC... session 9 a.m.-10 a.m. Open--Review of the MIT MRSEC 10 a.m.-4:30 p.m. Closed--Executive Session, Report...
Using EPIC to Find Conflicts, Inconsistencies, and Gaps in Department of Defense Policies
2013-01-01
documentation; or deliver preliminary findings. All RAND reports undergo rigorous peer review to ensure that they meet high standards for research quality...responsibilities and the products that result from their execution. Once the high-level framework was defined, successive lower layers were developed to further...Lead or Chief Engineer, Component Acquisition Executive (CAE), Managers, Configuration Steering Board, Materiel developer, Contractor, Milestone Decision...
Yoo, Terry S; Ackerman, Michael J; Lorensen, William E; Schroeder, Will; Chalana, Vikram; Aylward, Stephen; Metaxas, Dimitris; Whitaker, Ross
2002-01-01
We present the detailed planning and execution of the Insight Toolkit (ITK), an application programmers interface (API) for the segmentation and registration of medical image data. This public resource has been developed through the NLM Visible Human Project, and is in beta test as an open-source software offering under cost-free licensing. The toolkit concentrates on 3D medical data segmentation and registration algorithms, multimodal and multiresolution capabilities, and portable platform independent support for Windows, Linux/Unix systems. This toolkit was built using current practices in software engineering. Specifically, we embraced the concept of generic programming during the development of these tools, working extensively with C++ templates and the freedom and flexibility they allow. Software development tools for distributed consortium-based code development have been created and are also publicly available. We discuss our assumptions, design decisions, and some lessons learned.
1999-03-06
Watching the 1999 FIRST Southeastern Regional robotic competition held at KSC are (left to right) FIRST representative Vince Wilczynski and Executive Director of FIRST David Brown, Center Director Roy Bridges, former KSC Director of Shuttle Processing Robert Sieck (pointing), and astronaut David Brown. FIRST is a nonprofit organization, For Inspiration and Recognition of Science and Technology. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. Brown and Sieck served as judges for the event that pits gladiator robots against each other in an athletic-style competition. Powered by 12-volt batteries and operated by remote control, the robotic gladiators spend two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Teams play defense by taking away competitors' pillows and generally harassing opposing machines. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers
NASA Technical Reports Server (NTRS)
Negrut, Dan; Mazhar, Hammad; Melanz, Daniel; Lamb, David; Jayakumar, Paramsothy; Letherwood, Michael; Jain, Abhinandan; Quadrelli, Marco
2012-01-01
This paper is concerned with the physics-based simulation of light tracked vehicles operating on rough deformable terrain. The focus is on small autonomous vehicles, which weigh less than 100 lb and move on deformable and rough terrain that is feature rich and no longer representable using a continuum approach. A scenario of interest is, for instance, the simulation of a reconnaissance mission for a high mobility lightweight robot where objects such as a boulder or a ditch that could otherwise be considered small for a truck or tank, become major obstacles that can impede the mobility of the light autonomous vehicle and negatively impact the success of its mission. Analyzing and gauging the mobility and performance of these light vehicles is accomplished through a modeling and simulation capability called Chrono::Engine. Chrono::Engine relies on parallel execution on Graphics Processing Unit (GPU) cards.
Diamond Eye: a distributed architecture for image data mining
NASA Astrophysics Data System (ADS)
Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem
1999-02-01
Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.
Intelligent user interface concept for space station
NASA Technical Reports Server (NTRS)
Comer, Edward; Donaldson, Cameron; Bailey, Elizabeth; Gilroy, Kathleen
1986-01-01
The space station computing system must interface with a wide variety of users, from highly skilled operations personnel to payload specialists from all over the world. The interface must accommodate a wide variety of operations from the space platform, ground control centers and from remote sites. As a result, there is a need for a robust, highly configurable and portable user interface that can accommodate the various space station missions. The concept of an intelligent user interface executive, written in Ada, that would support a number of advanced human interaction techniques, such as windowing, icons, color graphics, animation, and natural language processing is presented. The user interface would provide intelligent interaction by understanding the various user roles, the operations and mission, the current state of the environment and the current working context of the users. In addition, the intelligent user interface executive must be supported by a set of tools that would allow the executive to be easily configured and to allow rapid prototyping of proposed user dialogs. This capability would allow human engineering specialists acting in the role of dialog authors to define and validate various user scenarios. The set of tools required to support development of this intelligent human interface capability is discussed and the prototyping and validation efforts required for development of the Space Station's user interface are outlined.
A New Capability for Nuclear Thermal Propulsion Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Benjamin W.; Nuclear and Radiological Engineering Department, University of Florida, Gainesville, FL 32611; Kapernick, Richard J.
2007-01-30
This paper describes a new capability for Nuclear Thermal Propulsion (NTP) design that has been developed, and presents the results of some analyses performed with this design tool. The purpose of the tool is to design to specified mission and material limits, while maximizing system thrust to weight. The head end of the design tool utilizes the ROCket Engine Transient Simulation (ROCETS) code to generate a system design and system design requirements as inputs to the core analysis. ROCETS is a modular system level code which has been used extensively in the liquid rocket engine industry for many years. The core design tool performs high-fidelity reactor core nuclear and thermal-hydraulic design analysis. At the heart of this process are two codes, TMSS-NTP and NTPgen, which together greatly automate the analysis, providing the capability to rapidly produce designs that meet all specified requirements while minimizing mass. A PERL based command script, called CORE DESIGNER, controls the execution of these two codes, and checks for convergence throughout the process. TMSS-NTP is executed first, to produce a suite of core designs that meet the specified reactor core mechanical, thermal-hydraulic and structural requirements. The suite of designs consists of a set of core layouts and, for each core layout, specific designs that span a range of core fuel volumes. NTPgen generates MCNPX models for each of the core designs from TMSS-NTP. Iterative analyses are performed in NTPgen until a reactor design (fuel volume) is identified for each core layout that meets cold and hot operation reactivity requirements and that is zoned to meet a radial core power distribution requirement.
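A heavily simplified Python sketch of the kind of convergence loop such a controller script might run is shown below. The two analysis functions are hypothetical stand-ins for the thermal-hydraulic sizing and neutronics steps, and the numbers are invented, so this illustrates only the control flow, not the real CORE DESIGNER, TMSS-NTP, or NTPgen behavior.

```python
# Hedged sketch: bisect on fuel volume until a reactivity-like margin is met.
# Both analysis functions below are invented placeholders, not the real codes.

def thermal_hydraulic_sizing(fuel_volume: float) -> dict:
    """Pretend sizing step: returns a core mass that grows with fuel volume."""
    return {"fuel_volume": fuel_volume, "core_mass": 900.0 + 40.0 * fuel_volume}

def neutronics_check(design: dict) -> float:
    """Pretend neutronics step: reactivity margin improves with fuel volume."""
    return 0.02 * design["fuel_volume"] - 1.0   # must be >= 0 to be acceptable

def converge(v_low: float, v_high: float, tol: float = 1e-3) -> dict:
    """Bisect on fuel volume until the margin is small and non-negative."""
    for _ in range(100):                        # hard iteration cap, as a guard
        v_mid = 0.5 * (v_low + v_high)
        design = thermal_hydraulic_sizing(v_mid)
        margin = neutronics_check(design)
        if 0.0 <= margin < tol:                 # converged: just meets reactivity
            design["reactivity_margin"] = margin
            return design
        if margin < 0.0:
            v_low = v_mid                       # too little fuel: raise lower bound
        else:
            v_high = v_mid                      # excess reactivity: shave fuel volume
    raise RuntimeError("did not converge within the iteration cap")

if __name__ == "__main__":
    print(converge(v_low=10.0, v_high=100.0))
```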
Collaborative engineering and design management for the Hobby-Eberly Telescope tracker upgrade
NASA Astrophysics Data System (ADS)
Mollison, Nicholas T.; Hayes, Richard J.; Good, John M.; Booth, John A.; Savage, Richard D.; Jackson, John R.; Rafal, Marc D.; Beno, Joseph H.
2010-07-01
The engineering and design of systems as complex as the Hobby-Eberly Telescope's new tracker require that multiple tasks be executed in parallel, overlapping efforts. When the design of individual subsystems is distributed among multiple organizations, teams, and individuals, challenges can arise with respect to managing design productivity and coordinating successful collaborative exchanges. This paper focuses on design management issues and current practices for the tracker design portion of the Hobby-Eberly Telescope Wide Field Upgrade project. The scope of the tracker upgrade requires engineering contributions and input from numerous fields including optics, instrumentation, electromechanics, software controls engineering, and site-operations. Successful system-level integration of tracker subsystems and interfaces is critical to the telescope's ultimate performance in astronomical observation. Software and process controls for design information and workflow management have been implemented to assist the collaborative transfer of tracker design data. The tracker system architecture and selection of subsystem interfaces have also proven to be a determining factor in design task formulation and team communication needs. Interface controls and requirements change controls will be discussed, and critical team interactions are recounted (a group-participation Failure Modes and Effects Analysis [FMEA] is one of special interest). This paper will be of interest to engineers, designers, and managers engaging in multi-disciplinary and parallel engineering projects that require coordination among multiple individuals, teams, and organizations.
Promoting a Culture of Tailoring for Systems Engineering Policy Expectations
NASA Technical Reports Server (NTRS)
Blankenship, Van A.
2016-01-01
NASA's Marshall Space Flight Center (MSFC) has developed an integrated systems engineering approach to promote a culture of tailoring for program and project policy requirements. MSFC's culture encourages and supports tailoring, with an emphasis on risk-based decision making, for enhanced affordability and efficiency. MSFC's policy structure integrates the various Agency requirements into a single, streamlined implementation approach which serves as a "one-stop-shop" for our programs and projects to follow. The engineers gain an enhanced understanding of policy and technical expectations, as well as lessons learned from MSFC's history of spaceflight and science missions, to enable them to make appropriate, risk-based tailoring recommendations. The tailoring approach utilizes a standard methodology to classify projects into predefined levels using selected mission and programmatic scaling factors related to risk tolerance. Policy requirements are then selectively applied and tailored, with appropriate rationale, and approved by the governing authorities, to support risk-informed decisions to achieve the desired cost and schedule efficiencies. The policy is further augmented by implementation tools and lifecycle planning aids which help promote and support the cultural shift toward more tailoring. The MSFC Customization Tool is an integrated spreadsheet that ties together everything that projects need to understand, navigate, and tailor the policy. It helps them classify their project, understand the intent of the requirements, determine their tailoring approach, and document the necessary governance approvals. It also helps them plan for and conduct technical reviews throughout the lifecycle. Policy tailoring is thus established as a normal part of project execution, with the tools provided to facilitate and enable the tailoring process. MSFC's approach to changing the culture emphasizes risk-based tailoring of policy to achieve increased flexibility, efficiency, and effectiveness in project execution, while maintaining appropriate rigor to ensure mission success.
29 CFR 215.6 - The Model Agreement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... representatives of the Railway Labor Executives' Association, Brotherhood of Locomotive Engineers, Brotherhood of Railway and Airline Clerks and International Association of Machinists and Aerospace Workers. The...
Geohazard assessment lifecycle for a natural gas pipeline project
NASA Astrophysics Data System (ADS)
Lekkakis, D.; Boone, M. D.; Strassburger, E.; Li, Z.; Duffy, W. P.
2015-09-01
This paper is a walkthrough of the geohazard risk assessment performed for the Front End Engineering Design (FEED) of a planned large-diameter natural gas pipeline, extending from Eastern Europe to Western Asia for a total length of approximately 1,850 km. The geohazards discussed herein include liquefaction-induced pipe buoyancy, cyclic softening, lateral spreading, slope instability, groundwater rise-induced pipe buoyancy, and karst. The geohazard risk assessment lifecycle comprised four stages: initially, a desktop study was carried out to describe the geologic setting along the alignment and to conduct a preliminary assessment of the geohazards. The development of a comprehensive Digital Terrain Model, together with topography and aerial photography data, was fundamental in this process. Subsequently, field geohazard mapping was conducted with the deployment of 8 teams of geoprofessionals, to investigate the proposed major reroutes and delve into areas of poor or questionable data. During the third stage, a geotechnical subsurface site investigation was executed based on the results of the above study and mapping efforts in order to obtain sufficient data tailored for risk quantification. Lastly, all gathered and processed information was overlain into a Geographical Information database towards a final determination of the critical reaches of the pipeline alignment. Input from Subject Matter Experts (SME) in the fields of landslides, karst and fluvial geomorphology was incorporated during the second and fourth stages of the assessment. Their experience in that particular geographical region was key to making appropriate decisions based on engineering judgment. As the design evolved through the above stages, the pipeline corridor was narrowed from a 2-km wide corridor, to a 500-m corridor and finally to a fixed alignment. Where the geohazard risk was high, rerouting of the pipeline was generally selected as a mitigation measure. In some cases of high uncertainty in the assessment, further exploration was proposed. In cases where rerouting was constrained, mitigation via structural measures was proposed. This paper further discusses the cost, schedule and resource challenges of planning and executing such a large-scale geotechnical investigation, the interfaces between the various disciplines involved during the assessment, the innovative tools employed for the field mapping, the classifications developed for mapping landslides, karst geology, and trench excavatability, determining liquefaction stretches and the process for the site localization of the Above Ground Installations (AGI). It finally discusses the objectives of the FEED study in terms of providing a route, a ± 20% project cost estimate and a schedule, and the additional engineering work foreseen to take place in the detailed engineering phase of the project.
NASA Technical Reports Server (NTRS)
1972-01-01
The activities leading to a tentative concept selection for a pressure-fed engine and propulsion support are outlined. Multiple engine concepts were evaluated through parallel engine major component and system analyses. Booster vehicle coordination, tradeoffs, and technology/development aspects are included. The concept selected for further evaluation has a regeneratively cooled combustion chamber and nozzle in conjunction with an impinging element injector. The propellants chosen are LOX/RP-1, and combustion stabilizing baffles are used to assure dynamic combustion stability.
NASA Technical Reports Server (NTRS)
Power, Gloria B.; Violett, Rebeca S.
1989-01-01
The analysis performed on the High Pressure Oxidizer Turbopump (HPOTP) preburner pump bearing assembly located on the Space Shuttle Main Engine (SSME) is summarized. An ANSYS finite element model for the inlet assembly was built and executed. Thermal and static analyses were performed.
NASA Astrophysics Data System (ADS)
Hunter, Geoffrey
2004-01-01
A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic light controller) are executable by a Finite Automaton, whereas the most general kind of computation requires a Turing Machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution; i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree-structure; this tree-structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates; it may be allocated as its execution commences, and deallocated as its execution terminates, and if the amount of this local memory is not known until just before execution commencement, then it is essential that it be allocated dynamically as the first action of its execution. This dynamically allocated/deallocated storage of each subprogram's intermediate values conforms with the stack discipline; i.e. last allocated = first to be deallocated, an incidental benefit of which is automatic overlaying of variables. This stack-based dynamic memory allocation was a semantic implication of the nested block structure that originated in the ALGOL-60 programming language. ALGOL-60 was a TM language, because the amount of memory allocated on subprogram (block/procedure) entry (for arrays, etc) was computable at execution time. A more general requirement of a Turing machine process is for code generation at run-time; this mandates access to the source language processor (compiler/interpreter) during execution of the process. This fundamental aspect of computer science is important to the future of system design, because it has been overlooked throughout the 55 years since modern computing began in 1948. The popular computer systems of this first half-century of computing were constrained by compile-time (or even operating system boot-time) memory allocation, and were thus limited to executing FA processes.
The practical effect was that the distinction between the data-invariant program and its variable data was blurred; programmers had to make trial and error executions, modifying the program's compile-time constants (array dimensions) to iterate towards the values required at run-time by the data being processed. This era of trial and error computing still persists; it pervades the culture of current (2003) computing practice.
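The contrast the author draws can be illustrated with a short Python sketch: one routine fixes its working storage before the data is seen (and must be edited and rerun when the data outgrows it), while the other sizes its local storage on entry and releases it on return, so the program text never changes with the data. The function names and the fixed capacity are illustrative assumptions.

```python
# Minimal sketch of FA-style fixed storage versus TM-style run-time allocation.
# The FA-style version must be edited when the data outgrows its buffer; the
# TM-style version sizes local storage on entry and frees it on return
# (stack discipline), so the program text is invariant over the data.

FIXED_CAPACITY = 16                       # FA-style: chosen before the data is seen

def running_means_fixed(samples):
    buffer = [0.0] * FIXED_CAPACITY       # storage predetermined, independent of input
    if len(samples) > FIXED_CAPACITY:
        raise MemoryError("edit FIXED_CAPACITY and rerun")  # the trial-and-error era
    for i, x in enumerate(samples):
        buffer[i] = x
    return [sum(buffer[: i + 1]) / (i + 1) for i in range(len(samples))]

def running_means_dynamic(samples):
    buffer = list(samples)                # local storage sized on entry, freed on return
    return [sum(buffer[: i + 1]) / (i + 1) for i in range(len(buffer))]

if __name__ == "__main__":
    data = list(range(1, 101))            # 100 values: breaks the fixed version
    print(running_means_dynamic(data)[-1])
    try:
        running_means_fixed(data)
    except MemoryError as err:
        print("fixed-capacity version failed:", err)
```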
de Bruin, Jeroen S; Adlassnig, Klaus-Peter; Leitich, Harald; Rappelsberger, Andrea
2018-01-01
Evidence-based clinical guidelines have a major positive effect on the physician's decision-making process. Computer-executable clinical guidelines allow for automated guideline marshalling during a clinical diagnostic process, thus improving the decision-making process. Implementation of a digital clinical guideline for the prevention of mother-to-child transmission of hepatitis B as a computerized workflow, thereby separating business logic from medical knowledge and decision-making. We used the Business Process Model and Notation language system Activiti for business logic and workflow modeling. Medical decision-making was performed by an Arden-Syntax-based medical rule engine, which is part of the ARDENSUITE software. We succeeded in creating an electronic clinical workflow for the prevention of mother-to-child transmission of hepatitis B, where institution-specific medical decision-making processes could be adapted without modifying the workflow business logic. Separation of business logic and medical decision-making results in more easily reusable electronic clinical workflows.
U.S. Seismic Design Maps Web Application
NASA Astrophysics Data System (ADS)
Martinez, E.; Fee, J.
2015-12-01
The application computes earthquake ground motion design parameters compatible with the International Building Code and other seismic design provisions. It is the primary method for design engineers to obtain ground motion parameters for multiple building codes across the country. When designing new buildings and other structures, engineers around the country use the application. Users specify the design code of interest, location, and other parameters to obtain necessary ground motion information consisting of a high-level executive summary as well as detailed information including maps, data, and graphs. Results are formatted such that they can be directly included in a final engineering report. In addition to single-site analysis, the application supports a batch mode for simultaneous consideration of multiple locations. Finally, an application programming interface (API) is available which allows other application developers to integrate this application's results into larger applications for additional processing. Development on the application has proceeded in an iterative manner working with engineers through email, meetings, and workshops. Each iteration provided new features, improved performance, and usability enhancements. This development approach positioned the application to be integral to the structural design process and is now used to produce over 1800 reports daily. Recent efforts have enhanced the application to be a data-driven, mobile-first, responsive web application. Development is ongoing, and source code has recently been published into the open-source community on GitHub. Open-sourcing the code facilitates improved incorporation of user feedback to add new features ensuring the application's continued success.
Relating MBSE to Spacecraft Development: A NASA Pathfinder
NASA Technical Reports Server (NTRS)
Othon, Bill
2016-01-01
The NASA Engineering and Safety Center (NESC) has sponsored a Pathfinder Study to investigate how Model Based Systems Engineering (MBSE) and Model Based Engineering (MBE) techniques can be applied by NASA spacecraft development projects. The objectives of this Pathfinder Study included analyzing both the products of the modeling activity and the process and tool chain through which the spacecraft design activities are executed. Several aspects of MBSE methodology and process were explored. Adoption and consistent use of the MBSE methodology within an existing development environment can be difficult. The Pathfinder Team evaluated the possibility that an "MBSE Template" could be developed as both a teaching tool and a baseline that future NASA projects could leverage. Elements of this template include spacecraft system component libraries, data dictionaries and ontology specifications, as well as software services that operate on the models themselves. The Pathfinder Study also evaluated the tool chain aspects of development. Two chains were considered: 1. the Development tool chain, through which SysML model development was performed and controlled, and 2. the Analysis tool chain, through which both static and dynamic system analysis is performed. Of particular interest was the ability to exchange data between SysML and other engineering tools, such as CAD and dynamic simulation tools. For this study, the team selected a Mars Lander vehicle as the element to be designed. The paper will discuss what system models were developed, how data was captured and exchanged, and what analyses were conducted.
1981-08-01
Excerpts from the table of contents and text: Section 5.5 covers the execution of transactions (5.5.2, Attached Execution of Transactions; 5.5.3, The Choice of Transaction Execution for Access Control). The report describes the basic access control mechanism for statistical security and value-dependent security and, in Section 5.5, the process of request execution with access control for insert and non-insert requests in MDBS (see Chapter 4).
Bottom-up laboratory testing of the DKIST Visible Broadband Imager (VBI)
NASA Astrophysics Data System (ADS)
Ferayorni, Andrew; Beard, Andrew; Cole, Wes; Gregory, Scott; Wöeger, Friedrich
2016-08-01
The Daniel K. Inouye Solar Telescope (DKIST) is a 4-meter solar observatory under construction at Haleakala, Hawaii [1]. The Visible Broadband Imager (VBI) is a first-light instrument that will record images at the highest possible spatial and temporal resolution of the DKIST at a number of scientifically important wavelengths [2]. The VBI is a pathfinder for DKIST instrumentation and a test bed for developing processes and procedures in the areas of unit, systems integration, and user acceptance testing. These test procedures have been developed and repeatedly executed during VBI construction in the lab as part of a "test early and test often" philosophy aimed at identifying and resolving issues early, thus saving cost during integration, test, and commissioning on the summit. The VBI team recently completed a bottom-up, end-to-end system test of the instrument in the lab that allowed the instrument's functionality, performance, and usability to be validated against documented system requirements. The bottom-up testing approach includes four levels of testing, each introducing another layer in the control hierarchy that is tested before moving to the next level. First, the instrument mechanisms are tested for positioning accuracy and repeatability using a laboratory position-sensing detector (PSD). Second, the real-time motion controls are used to drive the mechanisms to verify that speed and timing synchronization requirements are being met. Next, the high-level software is introduced and the instrument is driven through a series of end-to-end tests that exercise the mechanisms, cameras, and simulated data processing. Finally, user acceptance testing is performed on operational and engineering use cases through the instrument engineering graphical user interface (GUI). In this paper we present the VBI bottom-up test plan, procedures, example test cases and tools used, as well as results from test execution in the laboratory. We also discuss the benefits realized through completion of this testing and share lessons learned from the bottom-up testing process.
NASA Astrophysics Data System (ADS)
1981-09-01
Main elements of the design are identified and explained, and the rationale behind them is reviewed. Major systems and plant facilities are listed and discussed. Construction cost and schedule estimates are presented, and the engineering issues that should be reexamined are identified. The latest (1980-1981) information from the MHD technology program is integrated with the elements of a conventional steam power electric generating plant.
Final Environmental Assessment: Base-Wide Building Demolition Arnold Air Force Base, Tennessee
2006-02-01
Excerpts list facilities addressed by the base-wide demolition, including the Engine Test Facility (ETF)-B Exhauster, ETF-A Airside, ETF-A Exhauster, ETF-A Reefer, the CE Facility, Rocket Storage, the Von Karman Gas … facility, the Fabrication Shop, the Natural Resources Building, the Salt Storage Building, and the Administration Building, along with acronym definitions (Executive Order; ESA, Endangered Species Act; ETF, Engine Test Facility; FamCamp, Family Camping Area).
NASA Technical Reports Server (NTRS)
1981-01-01
Main elements of the design are identified and explained, and the rationale behind them is reviewed. Major systems and plant facilities are listed and discussed. Construction cost and schedule estimates are presented, and the engineering issues that should be reexamined are identified. The latest (1980-1981) information from the MHD technology program is integrated with the elements of a conventional steam power electric generating plant.
[The role adaptation process of the executive director of nursing department].
Kang, Sung-Ye; Park, Kwang-Ok; Kim, Jong-Kyung
2010-12-01
The purpose of this study was to identify the role adaptation process experienced by executive directors of nursing departments of general hospitals. Data were collected from 9 executive nursing directors through in-depth interviews about their experiences. The main question was "How do you describe your experience of the process of role adaptation as an executive nursing director?" Qualitative data from field notes and transcribed notes were analyzed using Strauss & Corbin's grounded theory methodology. The core category of the experience of role adaptation as an executive nursing director was identified as "entering the center with pushing and pulling". The participants used five interactional strategies: 'maintaining modest attitudes', 'inquiring about trends of popular feeling', 'making each person a faithful follower', 'collecting & displaying power', and 'leading with initiative'. The consequences of role adaptation in executive nursing directors were 'coexisting with others', 'immersing in one's new role with dedication', and 'having capacity for high tolerance'. The types of role adaptation of executive directors in nursing departments were the friendly type, the propulsive type, and the accommodating type. The results of this study provide useful information for executive nursing directors on designing a self-managerial program to enhance role adaptation based on interactional strategies.
NASA Technical Reports Server (NTRS)
Moerder, Dan
1994-01-01
The electronic engineering notebook (EEN) consists of a free-form research notebook, implemented in a commercial package for distributed hypermedia, which includes utilities for graphics capture, formatting and display of LaTeX constructs, and interfaces to the host operating system. The latter capability consists of an information computer-aided software engineering (CASE) tool and a means to associate executable scripts with source objects. The EEN runs on Sun and HP workstations. In day-to-day use, the EEN can be used in much the same manner as the research notes most researchers keep during the development of projects: graphics can be pasted in, equations can be entered via LaTeX, and so on. In addition, because the EEN is hypermedia, it permits easy management of 'context'; e.g., derivations and data can contain easily formed links to other supporting derivations and data. The CASE tool also permits development and maintenance of source code directly in the notebook, with access to its derivations and data.
NASA Astrophysics Data System (ADS)
Kravchenko, Iulia; Luhmann, Thomas; Shults, Roman
2016-06-01
For the preparation of modern specialists in the acquisition and processing of three-dimensional data, a broad and detailed study of the related modern methods and technologies is necessary. One of the most progressive and effective methods for acquiring and analyzing spatial data is terrestrial laser scanning. The study of methods and technologies for terrestrial laser scanning is of great importance not only for GIS specialists, but also for surveying engineers who make decisions in traditional engineering tasks (monitoring, executive surveys, etc.). Forming the right approach to preparing new professionals requires the development of a modern and flexible educational program. This educational program must provide effective practical and laboratory work as well as student coursework. The knowledge gained from the study should form the basis for the practical or research work of young engineers. In 2014, the Institute of Applied Sciences (Jade University Oldenburg, Germany) and Kyiv National University of Construction and Architecture (Kiev, Ukraine) launched a joint educational project for the introduction of terrestrial laser scanning technology for the collection and processing of spatial data. As a result of this project, practical recommendations have been developed for the organization of educational processes in the use of terrestrial laser scanning. An advanced project-oriented educational program was developed, which is presented in this paper. In order to demonstrate the effectiveness of the program, a 3D model of the large and complex main campus of Kyiv National University of Construction and Architecture has been generated.
Language and Memory Improvements following tDCS of Left Lateral Prefrontal Cortex.
Hussey, Erika K; Ward, Nathan; Christianson, Kiel; Kramer, Arthur F
2015-01-01
Recent research demonstrates that performance on executive-control measures can be enhanced through brain stimulation of lateral prefrontal regions. Separate psycholinguistic work emphasizes the importance of left lateral prefrontal cortex executive-control resources during sentence processing, especially when readers must override early, incorrect interpretations when faced with temporary ambiguity. Using transcranial direct current stimulation, we tested whether stimulation of left lateral prefrontal cortex had discriminate effects on language and memory conditions that rely on executive-control (versus cases with minimal executive-control demands, even in the face of task difficulty). Participants were randomly assigned to receive Anodal, Cathodal, or Sham stimulation of left lateral prefrontal cortex while they (1) processed ambiguous and unambiguous sentences in a word-by-word self-paced reading task and (2) performed an n-back memory task that, on some trials, contained interference lure items reputed to require executive-control. Across both tasks, we parametrically manipulated executive-control demands and task difficulty. Our results revealed that the Anodal group outperformed the remaining groups on (1) the sentence processing conditions requiring executive-control, and (2) only the most complex n-back conditions, regardless of executive-control demands. Together, these findings add to the mounting evidence for the selective causal role of left lateral prefrontal cortex for executive-control tasks in the language domain. Moreover, we provide the first evidence suggesting that brain stimulation is a promising method to mitigate processing demands encountered during online sentence processing.
Language and Memory Improvements following tDCS of Left Lateral Prefrontal Cortex
Hussey, Erika K.; Ward, Nathan; Christianson, Kiel; Kramer, Arthur F.
2015-01-01
Recent research demonstrates that performance on executive-control measures can be enhanced through brain stimulation of lateral prefrontal regions. Separate psycholinguistic work emphasizes the importance of left lateral prefrontal cortex executive-control resources during sentence processing, especially when readers must override early, incorrect interpretations when faced with temporary ambiguity. Using transcranial direct current stimulation, we tested whether stimulation of left lateral prefrontal cortex had discriminate effects on language and memory conditions that rely on executive-control (versus cases with minimal executive-control demands, even in the face of task difficulty). Participants were randomly assigned to receive Anodal, Cathodal, or Sham stimulation of left lateral prefrontal cortex while they (1) processed ambiguous and unambiguous sentences in a word-by-word self-paced reading task and (2) performed an n-back memory task that, on some trials, contained interference lure items reputed to require executive-control. Across both tasks, we parametrically manipulated executive-control demands and task difficulty. Our results revealed that the Anodal group outperformed the remaining groups on (1) the sentence processing conditions requiring executive-control, and (2) only the most complex n-back conditions, regardless of executive-control demands. Together, these findings add to the mounting evidence for the selective causal role of left lateral prefrontal cortex for executive-control tasks in the language domain. Moreover, we provide the first evidence suggesting that brain stimulation is a promising method to mitigate processing demands encountered during online sentence processing. PMID:26528814
Understanding and Mitigating Protests of Department of Defense Acquisition Contracts
2010-08-01
of delivery time that can lock out a rejected offeror from a market. Sixth, more complex contracts, like services versus products, generate more… The engineers, attorneys, or head of a business unit need to explain to the team that spent time working on a bid why the company lost. Executives… agency executives have to explain to their team, who also spent time working on the source solicitation, evaluation, and selection, why the company
Revisiting software specification and design for large astronomy projects
NASA Astrophysics Data System (ADS)
Wiant, Scott; Berukoff, Steven
2016-07-01
The separation of science and engineering in the delivery of software systems overlooks the true nature of the problem being solved and the organization that will solve it. A systems engineering approach that manages the requirements flow between these two groups as between a customer and a contractor has been used, with varying degrees of success, by well-known entities such as the U.S. Department of Defense. However, treating science as the customer and engineering as the contractor fosters unfavorable consequences that can be avoided, and opportunities are missed. For example, the "problem" being solved is only partially specified through the requirements generation process, since that process focuses on detailed specification guiding the parties to a technical solution. Equally important is the portion of the problem that will be solved through the definition of processes and the staff interacting through them. This interchange between people and processes is often underrepresented and underappreciated. By concentrating on the full problem and collaborating on a strategy for its solution, a science-implementing organization can realize the benefits of driving towards common goals (not just requirements) and a cohesive solution to the entire problem. The initial phase of any project, when well executed, is often the most difficult yet most critical, and thus it is essential to employ a methodology that reinforces collaboration and leverages the full suite of capabilities within the team. This paper describes an integrated approach to specifying the needs induced by a problem and the design of its solution.
NPTool: Towards Scalability and Reliability of Business Process Management
NASA Astrophysics Data System (ADS)
Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton
Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more pronounced when the execution control must handle countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data-flow management.
Flight Results from the HST SM4 Relative Navigation Sensor System
NASA Technical Reports Server (NTRS)
Naasz, Bo; Eepoel, John Van; Queen, Steve; Southward, C. Michael; Hannah, Joel
2010-01-01
On May 11, 2009, Space Shuttle Atlantis roared off of Launch Pad 39A en route to the Hubble Space Telescope (HST) to undertake its final servicing of HST, Servicing Mission 4. Onboard Atlantis was a small payload called the Relative Navigation Sensor experiment, which included three cameras of varying focal ranges and avionics to record images and estimate, in real time, the relative position and attitude (aka "pose") of the telescope during rendezvous and deploy. The avionics package, known as SpaceCube and developed at the Goddard Space Flight Center, performed image processing using field programmable gate arrays to accelerate this process and, in addition, executed two different pose algorithms in parallel: the Goddard Natural Feature Image Recognition and the ULTOR Passive Pose and Position Engine (P3E) algorithms.
NASA Astrophysics Data System (ADS)
Quintana, Virgilio
For many years, 3D models and 2D drawings have been the main basic elements that together form and carry a product's definition throughout its lifecycle. With the advent of the Digital Product Definition trend, the Aerospace and Automotive industries have been very interested in adopting a Model-based Definition (MBD) approach that promises reduced time-to-market and improved product quality. Its main purpose is to improve and accelerate the design, manufacturing and inspection processes by integrating drawing annotations directly onto a 3D model, thereby minimizing the need to generate engineering drawings. Even though CAD tools and international standards support the MBD concept, its implementation throughout the whole product lifecycle has not yet been fully adopted; traditional engineering drawings still play an essential part in the capture and distribution of non-geometric data (tolerances, notes, etc.), in the long-term storage of product definitions, as well as in the management of engineering changes. This is especially so within the Engineering Change Management (ECM) process, which involves the study, review, annotation, validation, approval and release of engineering drawings. The exploration of alternatives to reengineer the ECM process in the absence of drawings is therefore a necessary step before the MBD approach can be broadly accepted. The objective of this research project was to propose a solution to conduct the ECM process in a drawing-less environment and to quantify its potential gains. Two Canadian aerospace companies participated in this project. First, the main barriers to be overcome in order to fully implement the MBD initiative were identified. Our observations were based on forty-one interviews conducted within the Engineering, Drafting, Configuration Management, Airworthiness, Certification, Manufacturing, Inspection and Knowledge Management departments from the two participating companies. The results indicated that there is a need to define how the Product Definition will be carried in this drawing-less environment while supporting all of the downstream users' specific requirements. Next, a solution to conduct an MBD-driven Engineering Change Management Process (ECM) was developed and evaluated based on the process requirements from both companies. The solution consists of the definition of a dataset composed of the MBD model (generated by the CAD system) and a lightweight distribution file (generated and exploited by the visualization application). The ECM process was then reengineered to support its execution when working with MBD datasets. Finally, the gains from administering the MBD-driven ECM process were determined using empirical and experimental data within a discrete-event simulation approach. Based on a case study conducted in a Canadian aerospace company, our results show that a reduction of about 11% can be achieved in both the average overall processing time and in the average cost.
Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy
2017-10-06
The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
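The object-level parallelization trade-off described above can be illustrated with a generic Python sketch; this is not the QIFE's MATLAB implementation, and the toy feature function and object format are invented for the example.

```python
# Sketch of object-level parallelism: each object (e.g., a segmented tumor) is
# processed independently, so worker processes trade memory for wall-clock time.
from multiprocessing import Pool

import numpy as np

def compute_features(obj):
    """Toy stand-in for a feature extractor run on one object."""
    voxels = np.asarray(obj["voxels"], dtype=float)
    return {
        "id": obj["id"],
        "volume": voxels.size,
        "mean_intensity": float(voxels.mean()),
        "std_intensity": float(voxels.std()),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    objects = [{"id": i, "voxels": rng.normal(size=1000)} for i in range(8)]

    with Pool(processes=4) as pool:               # object-level parallelization
        features = pool.map(compute_features, objects)

    for row in features:
        print(row)
```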
Cassini Maneuver Experience for the Fourth Year of the Solstice Mission
NASA Technical Reports Server (NTRS)
Vaquero, Mar; Hahn, Yungsun; Stumpf, Paul; Valerino, Powtawche; Wagner, Sean; Wong, Mau
2014-01-01
After sixteen years of successful mission operations and invaluable scientific discoveries, the Cassini orbiter continues to tour Saturn on the most complex gravity-assist trajectory ever flown. To ensure that the end-of-mission target of September 2017 is achieved, propellant preservation is highly prioritized over maneuver cycle minimization. Thus, the maneuver decision process, which includes determining whether a maneuver is performed or canceled, designing a targeting strategy and selecting the engine for execution, is being continuously re-evaluated. This paper summarizes the maneuver experience throughout the fourth year of the Solstice Mission highlighting 27 maneuvers targeted to nine Titan flybys.
''Virtual Welding,'' a new aid for teaching Manufacturing Process Engineering
NASA Astrophysics Data System (ADS)
Portela, José M.; Huerta, María M.; Pastor, Andrés; Álvarez, Miguel; Sánchez-Carrilero, Manuel
2009-11-01
Overcrowding in the classroom is a serious problem in universities, particularly in specialties that require a certain type of teaching practice. These practices often require expenditure on consumables and a space large enough to hold the necessary materials and the materials that have already been used. Apart from the budget, another problem concerns the attention paid to each student. The use of simulation systems in the early learning stages of the welding technique can prove very beneficial thanks to the error detection functions installed in the system, which provide the student with feedback during the execution of the practice session, and the significant savings in both consumables and energy.
Resource utilization during software development
NASA Technical Reports Server (NTRS)
Zelkowitz, Marvin V.
1988-01-01
This paper discusses resource utilization over the life cycle of software development and discusses the role that the current 'waterfall' model plays in the actual software life cycle. Software production in the NASA environment was analyzed to measure these differences. The data from 13 different projects were collected by the Software Engineering Laboratory at NASA Goddard Space Flight Center and analyzed for similarities and differences. The results indicate that the waterfall model is not very realistic in practice, and that as technology introduces further perturbations to this model with concepts like executable specifications, rapid prototyping, and wide-spectrum languages, we need to modify our model of this process.
A Graphical User-Interface for Propulsion System Analysis
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Ryall, Kathleen
1992-01-01
NASA LeRC uses a series of computer codes to calculate installed propulsion system performance and weight. The need to evaluate more advanced engine concepts with a greater degree of accuracy has resulted in an increase in complexity of this analysis system. Therefore, a graphical user interface was developed to allow the analyst to more quickly and easily apply these codes. The development of this interface and the rationale for the approach taken are described. The interface consists of a method of pictorially representing and editing the propulsion system configuration, forms for entering numerical data, on-line help and documentation, post processing of data, and a menu system to control execution.
A graphical user-interface for propulsion system analysis
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Ryall, Kathleen
1993-01-01
NASA LeRC uses a series of computer codes to calculate installed propulsion system performance and weight. The need to evaluate more advanced engine concepts with a greater degree of accuracy has resulted in an increase in complexity of this analysis system. Therefore, a graphical user interface was developed to allow the analyst to more quickly and easily apply these codes. The development of this interface and the rationale for the approach taken are described. The interface consists of a method of pictorially representing and editing the propulsion system configuration, forms for entering numerical data, on-line help and documentation, post processing of data, and a menu system to control execution.
NASA Astrophysics Data System (ADS)
Lauinger, Norbert
2004-10-01
The human eye is a good model for the engineering of optical correlators. Three prominent intelligent functionalities in human vision could in the near future be realized by a new diffractive-optical hardware design of optical imaging sensors: (1) illuminant-adaptive RGB-based color vision, (2) monocular 3D vision based on RGB data processing, and (3) patchwise Fourier-optical object classification and identification. The hardware design of the human eye has specific diffractive-optical elements (DOEs) in the aperture and in image space and seems to execute the three jobs at -- or not far behind -- the loci of the images of objects.
Smith, Kelsey E.; Schatz, Jeffrey
2017-01-01
Children with sickle cell disease (SCD) are at risk for working memory deficits due to multiple disease processes. We assessed working memory abilities and related functions in 32 school-age children with SCD and 85 matched comparison children using Baddeley’s working memory model as a framework. Children with SCD performed worse than controls for working memory, central executive function, and processing/rehearsal speed. Central executive function was found to mediate the relationship between SCD status and working memory, but processing speed did not. Cognitive remediation strategies that focus on central executive processes may be important for remediating working memory deficits in SCD. PMID:27759435
NASA Technical Reports Server (NTRS)
Kennedy, J. R.; Fitzpatrick, W. S.
1971-01-01
The computer executive functional system design concepts derived from study of the Space Station/Base are presented. Information Management System hardware configuration as directly influencing the executive design is reviewed. The hardware configuration and generic executive design requirements are considered in detail in a previous report (System Configuration and Executive Requirements Specifications for Reusable Shuttle and Space Station/Base, 9/25/70). This report defines basic system primitives and delineates processes and process control. Supervisor states are considered for describing basic multiprogramming and multiprocessing systems. A high-level computer executive including control of scheduling, allocation of resources, system interactions, and real-time supervisory functions is defined. The description is oriented to provide a baseline for a functional simulation of the computer executive system.
An Experimental Framework for Executing Applications in Dynamic Grid Environments
NASA Technical Reports Server (NTRS)
Huedo, Eduardo; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Grid opens up opportunities for resource-starved scientists and engineers to harness highly distributed computing resources. A number of Grid middleware projects are currently available to support the simultaneous exploitation of heterogeneous resources distributed in different administrative domains. However, efficient job submission and management remain far from accessible to ordinary scientists and engineers due to the dynamic and complex nature of the Grid. This report describes a new Globus framework that allows an easier and more efficient execution of jobs in a 'submit and forget' fashion. Adaptation to dynamic Grid conditions is achieved by supporting automatic application migration following performance degradation, 'better' resource discovery, requirement change, owner decision, or remote resource failure. The report also includes experimental results of the behavior of our framework on the TRGP testbed.
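A simplified sketch of the 'submit and forget' behavior with migration on failure or degradation follows; the resource names and job-monitoring interface are invented for illustration and are not the framework's Globus-based API.

```python
# Sketch: submit a job, watch it, and migrate to another resource on failure or
# performance degradation; the user only submits and forgets.
import random
import time

RESOURCES = ["cluster-a", "cluster-b", "cluster-c"]   # hypothetical Grid resources

def submit(job, resource):
    print(f"submitting {job} to {resource}")

def poll(job, resource):
    # Toy stand-in for monitoring; a real framework would query middleware services.
    return random.choice(["running", "done", "failed", "degraded"])

def submit_and_forget(job):
    candidates = list(RESOURCES)
    resource = candidates.pop(0)
    submit(job, resource)
    while True:
        state = poll(job, resource)
        if state == "done":
            print(f"{job} finished on {resource}")
            return
        if state in ("failed", "degraded"):
            if not candidates:
                raise RuntimeError(f"{job} could not be completed on any resource")
            resource = candidates.pop(0)        # migrate to a 'better' resource
            submit(job, resource)
        time.sleep(0.1)

submit_and_forget("cfd-simulation")
```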
Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P; Zijdenbos, Alex P; Evans, Alan C
2012-01-01
The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install, and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources.
Bellec, Pierre; Lavoie-Courchesne, Sébastien; Dickinson, Phil; Lerch, Jason P.; Zijdenbos, Alex P.; Evans, Alan C.
2012-01-01
The analysis of neuroimaging databases typically involves a large number of inter-connected steps called a pipeline. The pipeline system for Octave and Matlab (PSOM) is a flexible framework for the implementation of pipelines in the form of Octave or Matlab scripts. PSOM does not introduce new language constructs to specify the steps and structure of the workflow. All steps of analysis are instead described by a regular Matlab data structure, documenting their associated command and options, as well as their input, output, and cleaned-up files. The PSOM execution engine provides a number of automated services: (1) it executes jobs in parallel on a local computing facility as long as the dependencies between jobs allow for it and sufficient resources are available; (2) it generates a comprehensive record of the pipeline stages and the history of execution, which is detailed enough to fully reproduce the analysis; (3) if an analysis is started multiple times, it executes only the parts of the pipeline that need to be reprocessed. PSOM is distributed under an open-source MIT license and can be used without restriction for academic or commercial projects. The package has no external dependencies besides Matlab or Octave, is straightforward to install, and supports a variety of operating systems (Linux, Windows, Mac). We ran several benchmark experiments on a public database including 200 subjects, using a pipeline for the preprocessing of functional magnetic resonance images (fMRI). The benchmark results showed that PSOM is a powerful solution for the analysis of large databases using local or distributed computing resources. PMID:22493575
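The idea of describing a pipeline as a plain data structure and letting an engine run each job only once its dependencies are satisfied can be sketched as follows; this is a simplified Python illustration with invented job names, not PSOM's actual Octave/Matlab interface.

```python
# Sketch: a pipeline as plain data (command + declared input/output files), with a
# tiny engine that runs each job only after the jobs producing its inputs.
# POSIX shell commands are assumed for the toy jobs.
import subprocess

pipeline = {
    "make_data": {"command": "echo 42 > data.txt",        "files_in": [],
                  "files_out": ["data.txt"]},
    "transform": {"command": "cat data.txt > result.txt", "files_in": ["data.txt"],
                  "files_out": ["result.txt"]},
    "report":    {"command": "cat result.txt",            "files_in": ["result.txt"],
                  "files_out": []},
}

def run(pipeline):
    produced, done = set(), set()
    while len(done) < len(pipeline):
        progressed = False
        for name, job in pipeline.items():
            if name in done or not all(f in produced for f in job["files_in"]):
                continue
            subprocess.run(job["command"], shell=True, check=True)
            produced.update(job["files_out"])
            done.add(name)
            progressed = True
        if not progressed:
            raise RuntimeError("unsatisfied dependencies in pipeline")

run(pipeline)
```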
The entropy reduction engine: Integrating planning, scheduling, and control
NASA Technical Reports Server (NTRS)
Drummond, Mark; Bresina, John L.; Kedar, Smadar T.
1991-01-01
The Entropy Reduction Engine, an architecture for the integration of planning, scheduling, and control, is described. The architecture is motivated, presented, and analyzed in terms of its different components; namely, problem reduction, temporal projection, and situated control rule execution. Experience with this architecture has motivated the recent integration of learning. The learning methods are described along with their impact on architecture performance.
2012-06-01
executed a concerted effort to employ reliability standards and testing from the design phase through fielding. Reliability programs remain standard… performed flight test engineer duties on several developmental flight test programs and served as Chief Engineer for a flight test squadron. Major… Quant is an acquisition professional with over 250 flight test hours in various aircraft, including the F-16, Airborne Laser, and HH-60. She holds a
Enterprise Architecture Tradespace Analysis
2014-02-21
EXECUTIVE SUMMARY: The Department of Defense (DoD)'s Science & Technology (S&T) priority for Engineered Resilient Systems (ERS) calls for adaptable designs with diverse systems models that can easily be… Department of Defense [Holland, 2012]. Some explicit goals are: establish baseline resiliency of current capabilities; more complete and robust…
1984-12-01
December 1984. Approved for public release; distribution unlimited. U.S. Army Human Engineering Laboratory and U.S. Army Ballistic Research Laboratory, Aberdeen… INTRODUCTION. A. Background: In March 1982, the HELBAT (Human Engineering Laboratory Battalion Artillery Test) Executive Committee agreed that the Ballistic… tactical equipment and its human operators. FOSCE mimicked the actions of the platoon forward observers that work for the FIST HQ while the FDS
NASA Astrophysics Data System (ADS)
Ahmadi, Mohammad H.; Ahmadi, Mohammad-Ali; Pourfayaz, Fathollah
2015-09-01
Developing new technologies like nanotechnology improves the performance of the energy industries. Consequently, emerging new groups of thermal cycles at the nano-scale can revolutionize the future of energy systems. This paper presents a thermodynamic study of a nano-scale irreversible Stirling engine cycle with the aim of optimizing the performance of the Stirling engine cycle. In the Stirling engine cycle the working fluid is an ideal Maxwell-Boltzmann gas. Moreover, two different strategies are proposed for a multi-objective optimization problem, and the outcomes of each strategy are evaluated separately. The first strategy is proposed to maximize the ecological coefficient of performance (ECOP), the dimensionless ecological function (ecf), and the dimensionless thermo-economic objective function (F). The second strategy is suggested to maximize the thermal efficiency (η), the dimensionless ecological function (ecf), and the dimensionless thermo-economic objective function (F). All the strategies in the present work are executed via a multi-objective evolutionary algorithm based on the NSGA-II method. Finally, to achieve the final answer in each strategy, three well-known decision makers are applied. Lastly, the deviations of the outcomes obtained in each strategy and with each decision maker are evaluated separately.
ETICS: the international software engineering service for the grid
NASA Astrophysics Data System (ADS)
Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.
2008-07-01
The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.
The VATES-Diamond as a Verifier's Best Friend
NASA Astrophysics Data System (ADS)
Glesner, Sabine; Bartels, Björn; Göthel, Thomas; Kleine, Moritz
Within a model-based software engineering process it needs to be ensured that properties of abstract specifications are preserved by transformations down to executable code. This is even more important in the area of safety-critical real-time systems where additionally non-functional properties are crucial. In the VATES project, we develop formal methods for the construction and verification of embedded systems. We follow a novel approach that allows us to formally relate abstract process algebraic specifications to their implementation in a compiler intermediate representation. The idea is to extract a low-level process algebraic description from the intermediate code and to formally relate it to previously developed abstract specifications. We apply this approach to a case study from the area of real-time operating systems and show that this approach has the potential to seamlessly integrate modeling, implementation, transformation and verification stages of embedded system development.
CPAS Preflight Drop Test Analysis Process
NASA Technical Reports Server (NTRS)
Englert, Megan E.; Bledsoe, Kristin J.; Romero, Leah M.
2015-01-01
Throughout the Capsule Parachute Assembly System (CPAS) drop test program, the CPAS Analysis Team has developed a simulation and analysis process to support drop test planning and execution. This process includes multiple phases focused on developing test simulations and communicating results to all groups involved in the drop test. CPAS Engineering Development Unit (EDU) series drop test planning begins with the development of a basic operational concept for each test. Trajectory simulation tools include the Flight Analysis and Simulation Tool (FAST) for single bodies, and the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulation for the mated vehicle. Results are communicated to the team at the Test Configuration Review (TCR) and Test Readiness Review (TRR), as well as at Analysis Integrated Product Team (IPT) meetings in earlier and intermediate phases of the pre-test planning. The ability to plan and communicate efficiently with rapidly changing objectives and tight schedule constraints is a necessity for safe and successful drop tests.
The pervasive nature of unconscious social information processing in executive control
Prabhakaran, Ranjani; Gray, Jeremy R.
2012-01-01
Humans not only have impressive executive abilities, but we are also fundamentally social creatures. In the cognitive neuroscience literature, it has long been assumed that executive control mechanisms, which play a critical role in guiding goal-directed behavior, operate on consciously processed information. Although more recent evidence suggests that unconsciously processed information can also influence executive control, most of this literature has focused on visual masked priming paradigms. However, the social psychological literature has demonstrated that unconscious influences are pervasive, and social information can unintentionally influence a wide variety of behaviors, including some that are likely to require executive abilities. For example, social information can unconsciously influence attention processes, such that simply instructing participants to describe a previous situation in which they had power over someone or someone else had power over them has been shown to unconsciously influence their attentional focus abilities, a key aspect of executive control. In the current review, we consider behavioral and neural findings from a variety of paradigms, including priming of goals and social hierarchical roles, as well as interpersonal interactions, in order to highlight the pervasive nature of social influences on executive control. These findings suggest that social information can play a critical role in executive control, and that this influence often occurs in an unconscious fashion. We conclude by suggesting further avenues of research for investigation of the interplay between social factors and executive control. PMID:22557956
ERIC Educational Resources Information Center
Baudouin, Alexia; Clarys, David; Vanneste, Sandrine; Isingrini, Michel
2009-01-01
The aim of the present study was to examine executive dysfunctioning and decreased processing speed as potential mediators of age-related differences in episodic memory. We compared the performances of young and elderly adults in a free-recall task. Participants were also given tests to measure executive functions and perceptual processing speed…
Automotive technology status and projections. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Dowdy, M.; Burke, A.; Schneider, H.; Edmiston, W.; Klose, G. J.; Heft, R.
1978-01-01
Fuel economy, exhaust emissions, multifuel capability, advanced materials and cost/manufacturability for both conventional and advanced alternative power systems were assessed. To insure valid comparisons of vehicles with alternative power systems, the concept of an Otto-Engine-Equivalent (OEE) vehicle was utilized. Each engine type was sized to provide equivalent vehicle performance. Sensitivity to different performance criteria was evaluated. Fuel economy projections are made for each engine type considering both the legislated emission standards and possible future emissions requirements.
NASA Technical Reports Server (NTRS)
Shannon, Robert V., Jr.
1989-01-01
The model generation and structural analysis performed for the High Pressure Oxidizer Turbopump (HPOTP) preburner pump volute housing, located on the main pump end of the HPOTP in the space shuttle main engine, are summarized. An ANSYS finite element model of the volute housing was built and executed. A static structural analysis was performed on the Engineering Analysis and Data System (EADS) Cray-XMP supercomputer.
A reusable rocket engine intelligent control
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Lorenzo, Carl F.
1988-01-01
An intelligent control system for reusable space propulsion systems for future launch vehicles is described. The system description includes a framework for the design. The framework consists of an execution level with high-speed control and diagnostics, and a coordination level which marries expert system concepts with traditional control. A comparison is made between air breathing and rocket engine control concepts to assess the relative levels of development and to determine the applicability of air breathing control concepts to future reusable rocket engine systems.
A reusable rocket engine intelligent control
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Lorenzo, Carl F.
1988-01-01
An intelligent control system for reusable space propulsion systems for future launch vehicles is described. The system description includes a framework for the design. The framework consists of an execution level with high-speed control and diagnostics, and a coordination level which marries expert system concepts with traditional control. A comparison is made between air breathing and rocket engine control concepts to assess the relative levels of development and to determine the applicability of air breathing control concepts to future reusable rocket engine systems.
2009-04-09
technical faculty for the Master in Software Engineering program at CMU. Grace holds a B.Sc. in Systems Engineering and an Executive MBA from Icesi… University in Cali, Colombia; and a Master in Software Engineering from Carnegie Mellon University. Resources and Training: SMART Report, http://www.sei.cmu.edu/publications/documents/08.reports/08tn008.html; Public Courses: Migration of Legacy
Programmable full-adder computations in communicating three-dimensional cell cultures.
Ausländer, David; Ausländer, Simon; Pierrat, Xavier; Hellmann, Leon; Rachid, Leila; Fussenegger, Martin
2018-01-01
Synthetic biologists have advanced the design of trigger-inducible gene switches and their assembly into input-programmable circuits that enable engineered human cells to perform arithmetic calculations reminiscent of electronic circuits. By designing a versatile plug-and-play molecular-computation platform, we have engineered nine different cell populations with genetic programs, each of which encodes a defined computational instruction. When assembled into 3D cultures, these engineered cell consortia execute programmable multicellular full-adder logics in response to three trigger compounds.
1989-06-01
resulted in an increase of the intermediate seal purge pressure, revised redlines, and a design change from a lift-off seal to a labyrinth seal design. This… engine 0003 caused failure of the primary lox seal and an uncontained engine fire. The redline cut was set by an HPOTP overspeed. This failure… occurred as a result of undetected internal HEX damage caused during arc welding, which resulted in an engine fire. HEX coil leakage resulted in an
Engineer: The Professional Bulletin of Army Engineers. Volume 41, September-December 2011
2011-12-01
and its support to the joint and coalition forces that will remain in contact as far into the future as we can see, executing a unique blend of war… U.S. Army Engineer School: As the regiment makes adjustments based on lessons learned during the last 10 years, the backbone of our Army is rapidly… adjusting to support these improvements. This requires engaged NCOs at all levels. Junior NCOs should continue to provide the lessons learned to
NASA Risk Management Handbook. Version 1.0
NASA Technical Reports Server (NTRS)
Dezfuli, Homayoon; Benjamin, Allan; Everett, Christopher; Maggio, Gaspare; Stamatelatos, Michael; Youngblood, Robert; Guarro, Sergio; Rutledge, Peter; Sherrard, James; Smith, Curtis;
2011-01-01
The purpose of this handbook is to provide guidance for implementing the Risk Management (RM) requirements of NASA Procedural Requirements (NPR) document NPR 8000.4A, Agency Risk Management Procedural Requirements [1], with a specific focus on programs and projects, and applying to each level of the NASA organizational hierarchy as requirements flow down. This handbook supports RM application within the NASA systems engineering process, and is a complement to the guidance contained in NASA/SP-2007-6105, NASA Systems Engineering Handbook [2]. Specifically, this handbook provides guidance that is applicable to the common technical processes of Technical Risk Management and Decision Analysis established by NPR 7123.1A, NASA Systems Engineering Process and Requirements [3]. These processes are part of the "Systems Engineering Engine" (Figure 1) that is used to drive the development of the system and associated work products to satisfy stakeholder expectations in all mission execution domains, including safety, technical, cost, and schedule. Like NPR 7123.1A, NPR 8000.4A is a discipline-oriented NPR that intersects with product-oriented NPRs such as NPR 7120.5D, NASA Space Flight Program and Project Management Requirements [4]; NPR 7120.7, NASA Information Technology and Institutional Infrastructure Program and Project Management Requirements [5]; and NPR 7120.8, NASA Research and Technology Program and Project Management Requirements [6]. In much the same way that the NASA Systems Engineering Handbook is intended to provide guidance on the implementation of NPR 7123.1A, this handbook is intended to provide guidance on the implementation of NPR 8000.4A. 1.2 Scope and Depth: This handbook provides guidance for conducting RM in the context of NASA program and project life cycles, which produce derived requirements in accordance with existing systems engineering practices that flow down through the NASA organizational hierarchy. The guidance in this handbook is not meant to be prescriptive. Instead, it is meant to be general enough, and contain a sufficient diversity of examples, to enable the reader to adapt the methods as needed to the particular risk management issues that he or she faces. The handbook highlights major issues to consider when managing programs and projects in the presence of potentially significant uncertainty, so that the user is better able to recognize and avoid pitfalls that might otherwise be experienced.
NASA Technical Reports Server (NTRS)
Levack, Daniel J. H.
2000-01-01
The Alternate Propulsion Subsystem Concepts contract had seven tasks defined that are reported under this contract deliverable. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, SSME Upper Stage Use, CERs for Liquid Propellant Rocket Engines, Advanced Low Cost Engines, and Tripropellant Comparison Study. The two restart studies, F-1A and J-2S, generated program plans for restarting production of each engine. Special emphasis was placed on determining changes to individual parts due to obsolete materials, changes in OSHA and environmental concerns, new processes available, and any configuration changes to the engines. The Propulsion Database Development task developed a database structure and format which is easy to use and modify while also being comprehensive in the level of detail available. The database structure included extensive engine information and allows for parametric data generation for conceptual engine concepts. The SSME Upper Stage Use task examined the changes needed or desirable to use the SSME as an upper stage engine, both in a second stage and in a translunar injection stage. The CERs for Liquid Engines task developed qualitative parametric cost estimating relationships at the engine and major subassembly level for estimating development and production costs of chemical propulsion liquid rocket engines. The Advanced Low Cost Engines task examined propulsion systems for SSTO applications, including engine concept definition, mission analysis, trade studies, operating point selection, turbomachinery alternatives, life cycle cost, weight definition, and point design conceptual drawings and component design. The task concentrated on bipropellant engines, but also examined tripropellant engines. The Tripropellant Comparison Study task provided an unambiguous comparison among various tripropellant implementation approaches and cycle choices, and then compared them to similarly designed bipropellant engines in the SSTO mission. This volume overviews each of the tasks, giving its objectives, main results, and conclusions. More detailed final task reports are available on each individual task.
NASA Astrophysics Data System (ADS)
Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan
A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.
29 CFR 541.402 - Executive and administrative computer employees.
Code of Federal Regulations, 2010 CFR
2010-07-01
... planning, scheduling, and coordinating activities required to develop systems to solve complex business, scientific or engineering problems of the employer or the employer's customers. Similarly, a senior or lead...
Sensory Processing in Preterm Preschoolers and Its Association with Executive Function
Adams, Jenna N.; Feldman, Heidi M.; Huffman, Lynne C.; Loe, Irene M.
2015-01-01
Background: Symptoms of abnormal sensory processing have been related to preterm birth, but have not yet been studied specifically in preterm preschoolers. The degree of association between sensory processing and other domains is important for understanding the role of sensory processing symptoms in the development of preterm children. Aims: To test two related hypotheses: (1) preterm preschoolers have more sensory processing symptoms than full term preschoolers and (2) sensory processing is associated with both executive function and adaptive function in preterm preschoolers. Study Design: Cross-sectional study. Subjects: Preterm children (≤34 weeks of gestation; n = 54) and full term controls (≥37 weeks of gestation; n = 73) ages 3-5 years. Outcome Measures: Sensory processing was assessed with the Short Sensory Profile. Executive function was assessed with (1) parent ratings on the Behavior Rating Inventory of Executive Function-Preschool version and (2) a performance-based battery of tasks. Adaptive function was assessed with the Vineland Adaptive Behavior Scales-II. Results: Preterm preschoolers showed significantly more sensory symptoms than full term controls. A higher percentage of preterm than full term preschoolers had elevated numbers of sensory symptoms (37% vs. 12%). Sensory symptoms in preterm preschoolers were associated with scores on executive function measures, but were not significantly associated with adaptive function. Conclusions: Preterm preschoolers exhibited more sensory symptoms than full term controls. Preterm preschoolers with elevated numbers of sensory symptoms also showed executive function impairment. Future research should further examine whether sensory processing and executive function should be considered independent or overlapping constructs. PMID:25706317
From the desktop to the grid: scalable bioinformatics via workflow conversion.
de la Garza, Luis; Veit, Johannes; Szolek, Andras; Röttig, Marc; Aiche, Stephan; Gesing, Sandra; Reinert, Knut; Kohlbacher, Oliver
2016-03-12
Reproducibility is one of the tenets of the scientific method. Scientific experiments often comprise complex data flows, selection of adequate parameters, and analysis and visualization of intermediate and end results. Breaking down the complexity of such experiments into the joint collaboration of small, repeatable, well-defined tasks, each with well-defined inputs, parameters, and outputs, offers immediate benefits such as identifying bottlenecks and pinpointing sections that could benefit from parallelization. Workflows rest upon the notion of splitting complex work into the joint effort of several manageable tasks. There are several engines that give users the ability to design and execute workflows. Each engine was created to address certain problems of a specific community; therefore, each one has its advantages and shortcomings. Furthermore, not all features of all workflow engines are royalty-free, an aspect that could potentially drive away members of the scientific community. We have developed a set of tools that enables the scientific community to benefit from workflow interoperability. We developed a platform-free structured representation of the parameters, inputs, and outputs of command-line tools in so-called Common Tool Descriptor documents. We have also overcome the shortcomings and combined the features of two royalty-free workflow engines with a substantial user community: the Konstanz Information Miner, an engine which we see as a formidable workflow editor, and the Grid and User Support Environment, a web-based framework able to interact with several high-performance computing resources. We have thus created a free and highly accessible way to design workflows on a desktop computer and execute them on high-performance computing resources. Our work will not only reduce time spent on designing scientific workflows, but also make executing workflows on remote high-performance computing resources more accessible to technically inexperienced users. We strongly believe that our efforts not only decrease the turnaround time to obtain scientific results but also have a positive impact on reproducibility, thus elevating the quality of obtained scientific results.
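As a rough illustration of the idea behind a platform-free tool description, the sketch below uses a hypothetical Python dataclass rather than the actual Common Tool Descriptor XML schema; the tool name, flags, and file names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ToolDescriptor:
    """Illustrative stand-in for a Common Tool Descriptor: a structured,
    engine-neutral description of a command-line tool's interface."""
    executable: str
    parameters: Dict[str, str] = field(default_factory=dict)  # flag -> value
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)

    def to_command(self) -> List[str]:
        """Render the description as a concrete command line any engine could run."""
        cmd = [self.executable]
        for flag, value in self.parameters.items():
            cmd += [flag, value]
        cmd += ["-i"] + self.inputs + ["-o"] + self.outputs
        return cmd

peak_picker = ToolDescriptor(
    executable="PeakPicker",                  # hypothetical tool name
    parameters={"--signal_to_noise": "2.0"},
    inputs=["spectra.mzML"],
    outputs=["peaks.mzML"],
)
print(peak_picker.to_command())
```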
NASA Astrophysics Data System (ADS)
Osman, Sharifah; Mohammad, Shahrin; Abu, Mohd Salleh
2015-05-01
Mathematics and engineering are inexorably and significantly linked, and both are essential for analyzing and assessing the thinking required to make good judgments when dealing with complex and varied engineering problems. A study within the current engineering education curriculum exploring how critical thinking and mathematical thinking relate to one another is therefore timely and crucial. Unfortunately, there is not much information available explicating the link. This paper aims to report the findings of a critical review and to provide a brief description of ongoing research investigating the dispositions of critical thinking and the relationship and integration between critical thinking and mathematical thinking during the execution of civil engineering tasks. The first part of the paper reports an in-depth review of these matters based on rather limited resources. The review showed a considerable degree of congruency between these two perspectives of thinking, along with some prevalent trends in engineering workplace tasks, problems, and challenges. The second part describes ongoing research to be conducted by the researcher to investigate rigorously the relationship and integration between these two types of thinking within the perspective of civil engineering tasks. Close non-participant observations and semi-structured interviews will be carried out for the pilot and main stages of the study. The data will be analyzed using constant comparative analysis, adopting a grounded theory methodology. The findings will serve as useful grounding for constructing a substantive theory revealing the integral relationship between critical thinking and mathematical thinking in the real civil engineering practice context. The substantive theory is expected to contribute additional useful information to engineering program outcomes and engineering education instruction, in line with the expectations of engineering program outcomes set by the Engineering Accreditation Council.
NASA Astrophysics Data System (ADS)
Zeng, Qingtian; Liu, Cong; Duan, Hua
2016-09-01
Correctness of an emergency response process specification is critical to emergency mission success. Therefore, errors in the specification should be detected and corrected at build time. In this paper, we propose a resource conflict detection approach and removal strategy for emergency response processes constrained by resources and time. In this kind of emergency response process, there are two timing functions representing the minimum and maximum execution time for each activity, respectively, and many activities require resources to be executed. Based on the RT_ERP_Net, the earliest time to start each activity and the ideal execution time of the process can be obtained. To detect and remove the resource conflicts in the process, conflict detection algorithms and a priority-activity-first resolution strategy are given. In this way, the real execution time for each activity is obtained and a conflict-free RT_ERP_Net is constructed by adding virtual activities. Experiments show that the proposed resolution strategy can shorten the execution time of the whole process considerably.
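The following simplified sketch, which is not the paper's RT_ERP_Net formalism, illustrates the two building blocks described above: detecting activities that compete for the same unit-capacity resource in overlapping time windows, and resolving the conflict by delaying the lower-priority activity. Activity names, durations, and priorities are invented.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Activity:
    name: str
    earliest_start: float   # derived elsewhere, e.g. from the process net
    duration: float         # a single nominal duration instead of [min, max]
    resource: str           # the unit-capacity resource the activity needs
    priority: int           # larger value = scheduled first on conflict

def detect_conflicts(acts: List[Activity]):
    """Return pairs of activities that need the same resource in overlapping windows."""
    conflicts = []
    for i, a in enumerate(acts):
        for b in acts[i + 1:]:
            if a.resource == b.resource:
                a_end = a.earliest_start + a.duration
                b_end = b.earliest_start + b.duration
                if a.earliest_start < b_end and b.earliest_start < a_end:
                    conflicts.append((a, b))
    return conflicts

def resolve_priority_first(acts: List[Activity]):
    """Priority-activity-first style resolution: delay the lower-priority activity
    until the higher-priority one releases the shared resource (single pass)."""
    for a, b in detect_conflicts(acts):
        high, low = (a, b) if a.priority >= b.priority else (b, a)
        low.earliest_start = max(low.earliest_start, high.earliest_start + high.duration)

acts = [Activity("evacuate", 0, 3, "rescue_team", 2),
        Activity("triage", 1, 2, "rescue_team", 1)]
resolve_priority_first(acts)
print([(x.name, x.earliest_start) for x in acts])  # triage delayed to t=3
```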
Sheehy, Eamon J.; Vinardell, Tatiana; Toner, Mary E.; Buckley, Conor T.; Kelly, Daniel J.
2014-01-01
Cartilaginous tissues engineered using mesenchymal stem cells (MSCs) can be leveraged to generate bone in vivo by executing an endochondral program, leading to increased interest in the use of such hypertrophic grafts for the regeneration of osseous defects. During normal skeletogenesis, canals within the developing hypertrophic cartilage play a key role in facilitating endochondral ossification. Inspired by this developmental feature, the objective of this study was to promote endochondral ossification of an engineered cartilaginous construct through modification of scaffold architecture. Our hypothesis was that the introduction of channels into MSC-seeded hydrogels would firstly facilitate the in vitro development of scaled-up hypertrophic cartilaginous tissues, and secondly would accelerate vascularisation and mineralisation of the graft in vivo. MSCs were encapsulated into hydrogels containing either an array of micro-channels, or into non-channelled ‘solid’ controls, and maintained in culture conditions known to promote a hypertrophic cartilaginous phenotype. Solid constructs accumulated significantly more sGAG and collagen in vitro, while channelled constructs accumulated significantly more calcium. In vivo, the channels acted as conduits for vascularisation and accelerated mineralisation of the engineered graft. Cartilaginous tissue within the channels underwent endochondral ossification, producing lamellar bone surrounding a hematopoietic marrow component. This study highlights the potential of utilising engineering methodologies, inspired by developmental skeletal processes, in order to enhance endochondral bone regeneration strategies. PMID:24595316
Risk-Informed Decision Making: Application to Technology Development Alternative Selection
NASA Technical Reports Server (NTRS)
Dezfuli, Homayoon; Maggio, Gaspare; Everett, Christopher
2010-01-01
NASA NPR 8000.4A, Agency Risk Management Procedural Requirements, defines risk management in terms of two complementary processes: Risk-informed Decision Making (RIDM) and Continuous Risk Management (CRM). The RIDM process is used to inform decision making by emphasizing proper use of risk analysis to make decisions that impact all mission execution domains (e.g., safety, technical, cost, and schedule) for program/projects and mission support organizations. The RIDM process supports the selection of an alternative prior to program commitment. The CRM process is used to manage risk associated with the implementation of the selected alternative. The two processes work together to foster proactive risk management at NASA. The Office of Safety and Mission Assurance at NASA Headquarters has developed a technical handbook to provide guidance for implementing the RIDM process in the context of NASA risk management and systems engineering. This paper summarizes the key concepts and procedures of the RIDM process as presented in the handbook, and also illustrates how the RIDM process can be applied to the selection of technology investments as NASA's new technology development programs are initiated.
Peterson, Kevin J.; Pathak, Jyotishman
2014-01-01
Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Centers for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459
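A minimal map/reduce sketch of the scalability idea is shown below; it is not the HQMF/QDM-to-Drools pipeline, and the patient fields and the measure are hypothetical. Because the reduce step is associative, partial counts from separate patient partitions can be merged in any order, which is what makes horizontal scaling straightforward.

```python
from functools import reduce

# Toy patient records; in practice these would come from EHR extracts.
patients = [
    {"id": 1, "age": 67, "diabetic": True,  "hba1c_tested": True},
    {"id": 2, "age": 45, "diabetic": True,  "hba1c_tested": False},
    {"id": 3, "age": 71, "diabetic": False, "hba1c_tested": False},
]

def map_measure(patient):
    """Map step: emit (denominator, numerator) flags for one hypothetical eCQM,
    'diabetic patients with an HbA1c test'."""
    in_denominator = patient["diabetic"]
    in_numerator = in_denominator and patient["hba1c_tested"]
    return (int(in_denominator), int(in_numerator))

def reduce_counts(a, b):
    """Reduce step: sum partial counts; associative, so partitions merge in any order."""
    return (a[0] + b[0], a[1] + b[1])

denominator, numerator = reduce(reduce_counts, map(map_measure, patients), (0, 0))
print(f"measure rate: {numerator}/{denominator}")
```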
Executive roundtable on coal-fired generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2009-09-15
Power Engineering magazine invited six industry executives from the coal-fired sector to discuss issues affecting current and future prospects of coal-fired generation. The executives are Tim Curran, head of Alstom Power for the USA and Senior Vice President and General Manager of Boilers North America; Ray Kowalik, President and General Manager of Burns and McDonnell Energy Group; Jeff Holmstead, head of Environmental Strategies for the Bracewell Giuliani law firm; Jim Mackey, Vice President of Fluor Power Group's Solid Fuel business line; Tom Shelby, President of Kiewit Power Inc.; and David Wilks, President of Energy Supply for Excel Energy Group. Steve Blankinship, the magazine's Associate Editor, was the moderator. 6 photos.
cFE/CFS (Core Flight Executive/Core Flight System)
NASA Technical Reports Server (NTRS)
Wildermann, Charles P.
2008-01-01
This viewgraph presentation describes in detail the requirements and goals of the Core Flight Executive (cFE) and the Core Flight System (CFS). The Core Flight Software System is a mission-independent, platform-independent Flight Software (FSW) environment integrating a reusable core flight executive (cFE). The CFS goals include: 1) Reduce time to deploy high quality flight software; 2) Reduce project schedule and cost uncertainty; 3) Directly facilitate formalized software reuse; 4) Enable collaboration across organizations; 5) Simplify sustaining engineering (a.k.a. FSW maintenance); 6) Scale from small instruments to System of Systems; 7) Platform for advanced concepts and prototyping; and 8) Common standards and tools across the branch and NASA-wide.
Perrotin, Audrey; Isingrini, Michel; Souchay, Céline; Clarys, David; Taconnat, Laurence
2006-05-01
This research investigated adult age differences in a metamemory monitoring task (episodic feeling-of-knowing, FOK) and in an episodic memory task (cued recall). Executive functioning and processing speed were examined as mediators of these age differences. Young and elderly adults were administered an episodic FOK task, a cued recall task, executive tests and speed tests. Age-related decline was observed on all the measures. Correlation analyses revealed a pattern of double dissociation which indicates a specific relationship between executive score and FOK accuracy, and between speed score and cued recall. When executive functioning and processing speed were evaluated concurrently on FOK and cued recall variables, hierarchical regression analyses showed that executive score was a better mediator of age-related variance in FOK, and that speed score was the better mediator of age-related variance in cued recall.
Engineering a humanized bone organ model in mice to study bone metastases.
Martine, Laure C; Holzapfel, Boris M; McGovern, Jacqui A; Wagner, Ferdinand; Quent, Verena M; Hesami, Parisa; Wunner, Felix M; Vaquette, Cedryck; De-Juan-Pardo, Elena M; Brown, Toby D; Nowlan, Bianca; Wu, Dan Jing; Hutmacher, Cosmo Orlando; Moi, Davide; Oussenko, Tatiana; Piccinini, Elia; Zandstra, Peter W; Mazzieri, Roberta; Lévesque, Jean-Pierre; Dalton, Paul D; Taubenberger, Anna V; Hutmacher, Dietmar W
2017-04-01
Current in vivo models for investigating human primary bone tumors and cancer metastasis to the bone rely on the injection of human cancer cells into the mouse skeleton. This approach does not mimic species-specific mechanisms occurring in human diseases and may preclude successful clinical translation. We have developed a protocol to engineer humanized bone within immunodeficient hosts, which can be adapted to study the interactions between human cancer cells and a humanized bone microenvironment in vivo. A researcher trained in the principles of tissue engineering will be able to execute the protocol and yield study results within 4-6 months. Additive biomanufactured scaffolds seeded and cultured with human bone-forming cells are implanted ectopically in combination with osteogenic factors into mice to generate a physiological bone 'organ', which is partially humanized. The model comprises human bone cells and secreted extracellular matrix (ECM); however, other components of the engineered tissue, such as the vasculature, are of murine origin. The model can be further humanized through the engraftment of human hematopoietic stem cells (HSCs) that can lead to human hematopoiesis within the murine host. The humanized organ bone model has been well characterized and validated and allows dissection of some of the mechanisms of the bone metastatic processes in prostate and breast cancer.
The difference engine: a model of diversity in speeded cognition.
Myerson, Joel; Hale, Sandra; Zheng, Yingye; Jenkins, Lisa; Widaman, Keith F
2003-06-01
A theory of diversity in speeded cognition, the difference engine, is proposed, in which information processing is represented as a series of generic computational steps. Some individuals tend to perform all of these computations relatively quickly and other individuals tend to perform them all relatively slowly, reflecting the existence of a general cognitive speed factor, but the time required for response selection and execution is assumed to be independent of cognitive speed. The difference engine correctly predicts the positively accelerated form of the relation between diversity of performance, as measured by the standard deviation for the group, and task difficulty, as indexed by the mean response time (RT) for the group. In addition, the difference engine correctly predicts approximately linear relations between the RTs of any individual and average performance for the group, with the regression lines for fast individuals having slopes less than 1.0 (and positive intercepts) and the regression lines for slow individuals having slopes greater than 1.0 (and negative intercepts). Similar predictions are made for comparisons of slow, average, and fast subgroups, regardless of whether those subgroups are formed on the basis of differences in ability, age, or health status. These predictions are consistent with evidence from studies of healthy young and older adults as well as from studies of depressed and age-matched control groups.
The roles of associative and executive processes in creative cognition.
Beaty, Roger E; Silvia, Paul J; Nusbaum, Emily C; Jauk, Emanuel; Benedek, Mathias
2014-10-01
How does the mind produce creative ideas? Past research has pointed to important roles of both executive and associative processes in creative cognition. But such work has largely focused on the influence of one ability or the other-executive or associative-so the extent to which both abilities may jointly affect creative thought remains unclear. Using multivariate structural equation modeling, we conducted two studies to determine the relative influences of executive and associative processes in domain-general creative cognition (i.e., divergent thinking). Participants completed a series of verbal fluency tasks, and their responses were analyzed by means of latent semantic analysis (LSA) and scored for semantic distance as a measure of associative ability. Participants also completed several measures of executive function-including broad retrieval ability (Gr) and fluid intelligence (Gf). Across both studies, we found substantial effects of both associative and executive abilities: As the average semantic distance between verbal fluency responses and cues increased, so did the creative quality of divergent-thinking responses (Study 1 and Study 2). Moreover, the creative quality of divergent-thinking responses was predicted by the executive variables-Gr (Study 1) and Gf (Study 2). Importantly, the effects of semantic distance and the executive function variables remained robust in the same structural equation model predicting divergent thinking, suggesting unique contributions of both constructs. The present research extends recent applications of LSA in creativity research and provides support for the notion that both associative and executive processes underlie the production of novel ideas.
Thermal/Structural Tailoring of Engine Blades (T/STAEBL). Theoretical Manual
NASA Technical Reports Server (NTRS)
Brown, K. W.; Clevenger, W. B.
1994-01-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual describes the T/STAEBL data block structure and system organization. The approximate analysis and optimization modules are detailed, and a validation test case is provided.
Thermal/structural tailoring of engine blades (T/STAEBL). Theoretical manual
NASA Astrophysics Data System (ADS)
Brown, K. W.; Clevenger, W. B.
1994-03-01
The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual describes the T/STAEBL data block structure and system organization. The approximate analysis and optimization modules are detailed, and a validation test case is provided.
An Engineering Report in Civil Engineering and Management.
1987-12-01
programs as the Apollo program and the Canaveral program. Progress in the late 70s and the 80s has seen advancements in the application of sophisticated...other forces in military operations; subsequent combat service support ashore and defense against overt or clandestine enemy attacks directed toward...construction execution plans; assigns construction projects to NCF units; monitors progress and assures adherence to quality standards: directs
13th Annual Systems Engineering Conference. Volume 3
2010-10-28
Case for Considering Acquisition Program Executability Prior to Materiel Development Decision (MDD), Mr. Gregory Laushine, SAIC · 10810...David Asiello, Office Deputy Under Secretary of Defense (I&E) · 10907 - A Case Study of an Evolving ESOH Program — One Company’s Perspective, Mr...10732 - R&D Transition Interface with Early Systems Engineering: SEALION and Open Systems Case Studies, Mr. Michael Bosworth, Naval Sea Systems
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-05
... new airworthiness directive (AD) for the products listed above. This AD was prompted by four reports of unrecoverable engine stalls, during hover in a left-roll attitude. This AD requires the... service information at the FAA, Engine & Propeller Directorate, 12 New England Executive Park, Burlington...
Method and apparatus for fault tolerance
NASA Technical Reports Server (NTRS)
Masson, Gerald M. (Inventor); Sullivan, Gregory F. (Inventor)
1993-01-01
A method and apparatus for achieving fault tolerance in a computer system having at least a first central processing unit and a second central processing unit. The method comprises the steps of first executing a first algorithm in the first central processing unit on input which produces a first output as well as a certification trail. Next, executing a second algorithm in the second central processing unit on the input and on at least a portion of the certification trail which produces a second output. The second algorithm has a faster execution time than the first algorithm for a given input. Then, comparing the first and second outputs such that an error result is produced if the first and second outputs are not the same. The step of executing a first algorithm and the step of executing a second algorithm preferably takes place over essentially the same time period.
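A minimal sketch of the certification-trail idea (an illustration using sorting, not the patented apparatus) is shown below: the first algorithm produces both the output and a trail, the second algorithm uses the trail to reproduce the output faster than recomputing from scratch, and the two outputs are compared.

```python
def first_algorithm(data):
    """Primary computation: sort the input and emit a certification trail
    (the permutation of original indices that produces the sorted order)."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    output = [data[i] for i in trail]
    return output, trail

def second_algorithm(data, trail):
    """Secondary computation: use the trail to rebuild the result in O(n),
    checking that the trail is a valid permutation and yields a sorted sequence."""
    if sorted(trail) != list(range(len(data))):
        raise ValueError("certification trail is not a permutation")
    output = [data[i] for i in trail]
    if any(output[i] > output[i + 1] for i in range(len(output) - 1)):
        raise ValueError("certification trail does not certify a sorted order")
    return output

data = [5, 1, 4, 2]
out1, trail = first_algorithm(data)   # would run on the first CPU
out2 = second_algorithm(data, trail)  # would run on the second CPU, faster than re-sorting
assert out1 == out2, "fault detected: outputs disagree"
print(out1)
```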
Checking Flight Rules with TraceContract: Application of a Scala DSL for Trace Analysis
NASA Technical Reports Server (NTRS)
Barringer, Howard; Havelund, Klaus; Morris, Robert A.
2011-01-01
Typically during the design and development of a NASA space mission, rules and constraints are identified to help reduce reasons for failure during operations. These flight rules are usually captured in a set of indexed tables, containing rule descriptions, rationales for the rules, and other information. Flight rules can be part of manual operations procedures carried out by humans. However, they can also be automated, and either implemented as on-board monitors, or as ground-based monitors that are part of a ground data system. In the case of automated flight rules, one considerable expense to be addressed for any mission is the extensive process by which system engineers express flight rules in prose, software developers translate these requirements into code, and then both experts verify that the resulting application is correct. This paper explores the potential benefits of using an internal Scala DSL for general trace analysis, named TRACECONTRACT, to write executable specifications of flight rules. TRACECONTRACT can generally be applied, for example, to the analysis of log files or to the monitoring of executing systems online.
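The sketch below is not the TraceContract Scala DSL; it is a small Python illustration of the underlying idea of checking an executable flight-rule specification against an event trace. The rule ("a subsystem must acknowledge a command before receiving another one") and the event format are assumptions made for the example.

```python
def check_command_ack(trace):
    """Monitor a hypothetical flight rule over an event trace:
    a subsystem must acknowledge a command before it receives another one.
    Events are (kind, subsystem) tuples with kind in {'cmd', 'ack'}."""
    pending = set()     # subsystems with an unacknowledged command
    violations = []
    for step, (kind, subsystem) in enumerate(trace):
        if kind == "cmd":
            if subsystem in pending:
                violations.append((step, subsystem))
            pending.add(subsystem)
        elif kind == "ack":
            pending.discard(subsystem)
    return violations

trace = [("cmd", "camera"), ("ack", "camera"),
         ("cmd", "antenna"), ("cmd", "antenna")]   # second antenna cmd violates the rule
print(check_command_ack(trace))                    # [(3, 'antenna')]
```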
Hospital downsizing and workforce reduction strategies: some inner workings.
Weil, Thomas P
2003-02-01
Downsizing, manpower reductions, re-engineering, and resizing are used extensively in the United States to reduce cost and to evaluate the effectiveness and efficiency of various functions and processes. Published studies report that these managerial strategies result in a minimal impact on access to services, quality of care, and the ability to reduce costs. But, these approaches certainly alienate employees. These findings are usually explained by the significant difficulties experienced in eliminating nursing and other similar direct patient care-oriented positions and in terminating white-collar employees. Possibly an equally plausible reason why hospitals and physician practices react so poorly to these management strategies is their cost structure-high fixed (85%) and low variable (15%)-and that simply generating greater volume does not necessarily achieve economies of scale. More workable alternatives for health executives to effectuate cost reductions consist of simplifying prepayment, decreasing the overall availability and centralizing tertiary services at academic health centres, and closing superfluous hospitals and other health facilities. America's pluralistic values and these proposals having serious political repercussions for health executives and elected officials often present serious barriers in their implementation.
Modeling Constellation Virtual Missions Using the Vdot(Trademark) Process Management Tool
NASA Technical Reports Server (NTRS)
Hardy, Roger; ONeil, Daniel; Sturken, Ian; Nix, Michael; Yanez, Damian
2011-01-01
The authors have identified a software tool suite that will support NASA's Virtual Mission (VM) effort. This is accomplished by transforming a spreadsheet database of mission events, task inputs and outputs, timelines, and organizations into process visualization tools and a Vdot process management model that includes embedded analysis software as well as requirements and information related to data manipulation and transfer. This paper describes the progress to date, the application of the Virtual Mission not only to Constellation but to other architectures, and its pertinence to other aerospace applications. Vdot's intuitive visual interface brings VMs to life by turning static, paper-based processes into active, electronic processes that can be deployed, executed, managed, verified, and continuously improved. A VM can be executed using a computer-based, human-in-the-loop, real-time format, under the direction and control of the NASA VM Manager. Engineers in the various disciplines will not have to be Vdot-proficient but rather can fill out on-line, Excel-type databases with the mission information discussed above. The authors' tool suite converts this database into several process visualization tools for review and into Microsoft Project, which can be imported directly into Vdot. Many tools can be embedded directly into Vdot, and when the necessary data/information is received from a preceding task, the analysis can be initiated automatically. Other NASA analysis tools are too complex for this process but Vdot automatically notifies the tool user that the data has been received and analysis can begin. The VM can be simulated from end-to-end using the authors' tool suite. The planned approach for the Vdot-based process simulation is to generate the process model from a database; other advantages of this semi-automated approach are that participants can be geographically remote and that, after refining the process models via the human-in-the-loop simulation, the system can evolve into a process management server for the actual process.
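A minimal sketch of the spreadsheet-to-process-model transformation is given below; it is not the Vdot format or the authors' actual tool suite. Tasks are linked whenever one task's output artifact is another task's input, which yields the dependency edges a process engine could schedule. Task and artifact names are invented.

```python
import csv, io

# A stand-in for the mission spreadsheet: task name, inputs, outputs.
table = """task,inputs,outputs
TrajectoryAnalysis,mission_events,trajectory
MassProperties,trajectory,mass_report
ThermalAnalysis,trajectory,thermal_report
"""

def build_dependency_graph(csv_text):
    """Link tasks by shared artifacts: if task A produces X and task B consumes X,
    add an edge A -> B."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    producers = {}
    for row in rows:
        for out in row["outputs"].split(";"):
            producers[out] = row["task"]
    edges = []
    for row in rows:
        for inp in row["inputs"].split(";"):
            if inp in producers:
                edges.append((producers[inp], row["task"]))
    return edges

print(build_dependency_graph(table))
# [('TrajectoryAnalysis', 'MassProperties'), ('TrajectoryAnalysis', 'ThermalAnalysis')]
```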
Bio-engineering for land stabilization : executive summary report.
DOT National Transportation Integrated Search
2010-06-30
Soil-bioengineering, or simply bioengineering, is the use of vegetation for slope stabilization. Currently, a large number of slopes near Ohio highways are experiencing stability problems. These failures usually begin as local erosion...
NASA Technical Reports Server (NTRS)
1985-01-01
As the NASA Center responsible for assembly, checkout, servicing, launch, recovery, and operational support of Space Transportation System elements and payloads, Kennedy Space Center is placing increasing emphasis on the Center's research and technology program. In addition to strengthening those areas of engineering and operations technology that contribute to safe, more efficient, and more economical execution of our current mission, we are developing the technological tools needed to execute the Center's mission relative to Space Station and other future programs. The Engineering Development Directorate encompasses most of the laboratories and other Center resources that are key elements of research and technology program implementation and is responsible for implementation of the majority of the projects in this Kennedy Space Center 1985 Annual Report. The report contains brief descriptions of research and technology projects in major areas of Kennedy Space Center's disciplinary expertise.
Engine structures modeling software system: Computer code. User's manual
NASA Technical Reports Server (NTRS)
1992-01-01
ESMOSS is a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components and substructures which can be transferred to finite element analysis programs such as NASTRAN. The software architecture of ESMOSS is designed in modular form with a central executive module through which the user controls and directs the development of the analytical model. Modules consist of a geometric shape generator, a library of discretization procedures, interfacing modules to join both geometric and discrete models, a deck generator to produce input for NASTRAN and a 'recipe' processor which generates geometric models from parametric definitions. ESMOSS can be executed both in interactive and batch modes. Interactive mode is considered to be the default mode and that mode will be assumed in the discussion in this document unless stated otherwise.
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A
2013-10-29
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
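The conceptual sketch below (plain Python over a toy instruction list, not the patented compiler mechanism or real SIMD intrinsics) illustrates the placement idea: wherever a SIMD operation consumes a scalar register, a splat that broadcasts the scalar into a vector register is inserted and the operation is rewritten to use it.

```python
def insert_splats(ops):
    """Toy IR pass: ops are (name, dest, sources, kind) with kind 'scalar' or 'simd'.
    Whenever a SIMD op consumes a scalar register, insert a splat that broadcasts
    the scalar into a vector register and rewrite the SIMD op to use it."""
    out, vector_of = [], {}
    for name, dest, sources, kind in ops:
        if kind == "simd":
            new_sources = []
            for src in sources:
                if src.startswith("s"):            # scalar register, by naming convention
                    if src not in vector_of:
                        vreg = f"v_{src}"
                        out.append(("splat", vreg, [src], "simd"))
                        vector_of[src] = vreg
                    new_sources.append(vector_of[src])
                else:
                    new_sources.append(src)
            out.append((name, dest, new_sources, kind))
        else:
            out.append((name, dest, sources, kind))
    return out

ops = [("load", "s1", ["mem0"], "scalar"),
       ("vmul", "v2", ["v1", "s1"], "simd")]
for op in insert_splats(ops):
    print(op)
# ('load', 's1', ['mem0'], 'scalar')
# ('splat', 'v_s1', ['s1'], 'simd')
# ('vmul', 'v2', ['v1', 'v_s1'], 'simd')
```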
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY
2012-08-28
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
Geometric modeling for computer aided design
NASA Technical Reports Server (NTRS)
Schwing, James L.; Olariu, Stephen
1995-01-01
The primary goal of this grant has been the design and implementation of software to be used in the conceptual design of aerospace vehicles, particularly focused on the elements of geometric design, graphical user interfaces, and the interaction of the multitude of software typically used in this engineering environment. This has resulted in the development of several analysis packages and design studies. These include two major software systems currently used in the conceptual level design of aerospace vehicles. These tools are SMART, the Solid Modeling Aerospace Research Tool, and EASIE, the Environment for Software Integration and Execution. Additional software tools were designed and implemented to address the needs of the engineer working in the conceptual design environment. SMART provides conceptual designers with a rapid prototyping capability and several engineering analysis capabilities. In addition, SMART has a carefully engineered user interface that makes it easy to learn and use. Finally, a number of specialty characteristics have been built into SMART which allow it to be used efficiently as a front-end geometry processor for other analysis packages. EASIE provides a set of interactive utilities that simplify the task of building and executing computer aided design systems consisting of diverse, stand-alone analysis codes. This results in a streamlined exchange of data between programs, reducing errors and improving efficiency. EASIE provides both a methodology and a collection of software tools to ease the task of coordinating engineering design and analysis codes.
Ruckdeschel, J
2001-01-01
He didn't like math. Loved biology. So he ditched his plans to become an engineer and ended up pursuing a career in medicine and hospital administration. He led a sleepy cancer center to new heights of cutting-edge research and progressive types of treatment. And he did it all on his own terms--a mix of practicality and instinct that serves up some interesting perspectives for fellow physician executives to consider. Meet John Ruckdeschel, MD.
Multiscale Issues and Simulation-Based Science and Engineering for Materials-by-Design
2010-05-15
planning and execution of programs to achieve the vision of "material-by-design". A key part of this effort has been to examine modeling at the mesoscale...
Clustering execution in a processing system to increase power savings
Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.; Vega, Augusto J.
2018-03-20
Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.
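A minimal sketch of the clustering idea, not the patented method, is shown below: given a task graph with data dependencies and a component assignment, the scheduler greedily keeps consecutive tasks on the same component whenever dependencies allow, which lengthens the idle phases of the other components. Task names and components are illustrative.

```python
from collections import deque

def clustered_order(tasks, deps, component):
    """tasks: list of task names; deps: dict task -> set of prerequisite tasks;
    component: dict task -> system component. Produce a dependency-respecting
    order that greedily keeps consecutive tasks on the same component."""
    remaining = {t: set(deps.get(t, ())) for t in tasks}
    ready = deque(t for t, d in remaining.items() if not d)
    order, current = [], None
    while ready:
        # Prefer a ready task assigned to the component we are already using.
        pick = next((t for t in ready if component[t] == current), ready[0])
        ready.remove(pick)
        order.append(pick)
        current = component[pick]
        for t, d in remaining.items():
            if pick in d:
                d.discard(pick)
                if not d and t not in order and t not in ready:
                    ready.append(t)
    return order

tasks = ["a", "b", "c", "d"]
deps = {"c": {"a"}, "d": {"b"}}
component = {"a": "cpu", "b": "gpu", "c": "cpu", "d": "gpu"}
print(clustered_order(tasks, deps, component))  # ['a', 'c', 'b', 'd']
```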
The role of executive functions in social impairment in Autism Spectrum Disorder.
Leung, Rachel C; Vogan, Vanessa M; Powell, Tamara L; Anagnostou, Evdokia; Taylor, Margot J
2016-01-01
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized by socio-communicative impairments. Executive dysfunction may explain some key characteristics of ASD, both social and nonsocial hallmarks. Limited research exists exploring the relations between executive function and social impairment in ASD and few studies have used a comparison control group. Thus, the objective of the present study was to investigate the relations between executive functioning using the Behavioral Rating Inventory of Executive Functioning (BRIEF), social impairment as measured by the Social Responsiveness Scale (SRS), and overall autistic symptomology as measured by the Autism Diagnostic Observation Schedule (ADOS) in children and adolescents with and without ASD. Seventy children and adolescents diagnosed with ASD and 71 typically developing controls were included in this study. Findings showed that behavioral regulation executive processes (i.e., inhibition, shifting, and emotional control) predicted social function in all children. However, metacognitive executive processes (i.e., initiation, working memory, planning, organization, and monitoring) predicted social function only in children with ASD and not in typically developing children. Our findings suggest a distinct metacognitive executive function-social symptom link in ASD that is not present in the typical population. Understanding components of executive functioning that contribute to the autistic symptomology, particularly in the socio-communicative domain, is crucial for developing effective interventions that target key executive processes as well as underlying behavioral symptoms.
Knowles, Emma E M; Weiser, Mark; David, Anthony S; Glahn, David C; Davidson, Michael; Reichenberg, Abraham
2015-12-01
Substantial impairment in performance on the digit-symbol substitution task in patients with schizophrenia is well established, which has been widely interpreted as denoting a specific impairment in processing speed. However, other higher order cognitive functions might be more critical to performance on this task. To date, this idea has not been rigorously investigated in patients with schizophrenia. Neuropsychological measures of processing speed, memory, and executive functioning were completed by 125 patients with schizophrenia and 272 control subjects. We implemented a series of confirmatory factor and structural regression models to build an integrated model of processing speed, memory, and executive function with which to deconstruct the digit-symbol substitution task and characterize discrepancies between patients with schizophrenia and control subjects. The overall structure of the processing speed, memory, and executive function model was the same across groups (χ² = 208.86, p > .05), but the contribution of the specific cognitive domains to coding task performance differed significantly. When completing the task, control subjects relied on executive function and, indirectly, on working memory ability, whereas patients with schizophrenia used an alternative set of cognitive operations whereby they relied on the same processes required to complete verbal fluency tasks. Successful coding task performance relies predominantly on executive function, rather than processing speed or memory. Patients with schizophrenia perform poorly on this task because of an apparent lack of appropriate executive function input; they rely instead on an alternative cognitive pathway. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Bedrax-Weiss, Tania; Jonsson, Ari K.; Frank, Jeremy D.; McGann, Conor
2003-01-01
Generating plans for execution imposes a different set of requirements on the planning process than those imposed by planning alone. In highly unpredictable execution environments, a fully-grounded plan may become inconsistent frequently when the world fails to behave as expected. Intelligent execution permits making decisions when the most up-to-date information is available, ensuring fewer failures. Planning should acknowledge the capabilities of the execution system, both to ensure robust execution in the face of uncertainty and to relieve the planner of the burden of making premature commitments. We present Plan Identification Functions (PIFs), which formalize what it means for a plan to be executable, and are used in conjunction with a complete model of system behavior to halt the planning process when an executable plan is found. We describe the implementation of plan identification functions for a temporal, constraint-based planner. This particular implementation allows the description of many different plan identification functions. Depending on the characteristics of the execution environment, the best plan to hand to the execution system will contain more or less commitment and information.
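The sketch below is a loose illustration of what a plan identification function can look like, not the authors' constraint-based planner: each PIF is a predicate over a partial plan that tells the planner it can stop because the executive is capable of resolving whatever flexibility remains. The plan representation and the two criteria are assumptions for the example.

```python
def fully_grounded_pif(plan):
    """Classic criterion: every step has a fixed start time."""
    return all(isinstance(step["start"], (int, float)) for step in plan)

def temporally_flexible_pif(plan, executive_can_dispatch=True):
    """More permissive criterion: steps may carry (earliest, latest) start windows,
    as long as the executive is able to dispatch within windows at run time."""
    def ok(step):
        start = step["start"]
        if isinstance(start, tuple):
            earliest, latest = start
            return executive_can_dispatch and earliest <= latest
        return isinstance(start, (int, float))
    return all(ok(step) for step in plan)

plan = [{"action": "warm_up_camera", "start": 0},
        {"action": "take_image", "start": (5, 20)}]   # flexible window left to the executive

print(fully_grounded_pif(plan))        # False: planner would have to keep grounding
print(temporally_flexible_pif(plan))   # True: halt planning, hand the plan to execution
```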
Liaw, Siaw-Teng; Deveny, Elizabeth; Morrison, Iain; Lewis, Bryn
2006-09-01
Using a factorial vignette survey and modeling methodology, we developed clinical and information models - incorporating evidence base, key concepts, relevant terms, decision-making and workflow needed to practice safely and effectively - to guide the development of an integrated rule-based knowledge module to support prescribing decisions in asthma. We identified workflows, decision-making factors, factor use, and clinician information requirements. The Unified Modeling Language (UML) and public domain software and knowledge engineering tools (e.g. Protégé) were used, with the Australian GP Data Model as the starting point for expressing information needs. A Web Services service-oriented architecture approach was adopted within which to express functional needs, and clinical processes and workflows were expressed in the Business Process Execution Language (BPEL). This formal analysis and modeling methodology to define and capture the process and logic of prescribing best practice in a reference implementation is fundamental to tackling deficiencies in prescribing decision support software.
ERIC Educational Resources Information Center
Rosen, Sonia M.; Boyle, Joseph R.; Cariss, Kaitlyn; Forchelli, Gina A.
2014-01-01
Students with learning disabilities have been reported to have difficulty in a number of different executive function processes that affect their academic performance (Singer & Bashir, 1999). Executive function difficulties for students with learning disabilities have been implicated as the reason why these students struggle with complex…
ERIC Educational Resources Information Center
Cartwright, Kelly B.
2012-01-01
Research Findings: Executive function begins to develop in infancy and involves an array of processes, such as attention, inhibition, working memory, and cognitive flexibility, which provide the means by which individuals control their own behavior, work toward goals, and manage complex cognitive processes. Thus, executive function plays a…
A Discussion of the Discrete Fourier Transform Execution on a Typical Desktop PC
NASA Technical Reports Server (NTRS)
White, Michael J.
2006-01-01
This paper will discuss and compare the execution times of three examples of the Discrete Fourier Transform (DFT). The first two examples will demonstrate the direct implementation of the algorithm. In the first example, the Fourier coefficients are generated at the execution of the DFT. In the second example, the coefficients are generated prior to execution and the DFT coefficients are indexed at execution. The last example will demonstrate the Cooley-Tukey algorithm, better known as the Fast Fourier Transform. All examples were written in C and executed on a PC using a Pentium 4 running at 1.7 GHz. As a function of N, the total complex data size, the direct implementation of the DFT executes, as expected, at order N^2, and the FFT executes at order N log2 N. At N=16K, there is an increase in processing time beyond what is expected. This is not caused by implementation but is a consequence of the effect that machine architecture and memory hierarchy have on implementation. This paper will include a brief overview of digital signal processing, along with a discussion of contemporary work with discrete Fourier processing.
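The three cases can be mirrored in a few lines of NumPy, as in the sketch below: twiddle factors computed during execution, a precomputed coefficient table indexed at execution, and a library FFT. Timings will of course differ from the paper's Pentium 4 figures.

```python
import numpy as np, time

def dft_inline(x):
    """Direct DFT, twiddle factors computed during execution: O(N^2)."""
    N = len(x)
    return np.array([sum(x[n] * np.exp(-2j * np.pi * k * n / N) for n in range(N))
                     for k in range(N)])

def dft_precomputed(x):
    """Direct DFT with a precomputed coefficient table, indexed at execution."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N twiddle-factor table
    return W @ x

x = np.random.randn(256) + 1j * np.random.randn(256)
for name, fn in [("inline DFT", dft_inline),
                 ("precomputed DFT", dft_precomputed),
                 ("FFT", np.fft.fft)]:
    t0 = time.perf_counter()
    fn(x)
    print(f"{name:16s} {time.perf_counter() - t0:.4f} s")

assert np.allclose(dft_precomputed(x), np.fft.fft(x))
```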
NASA Astrophysics Data System (ADS)
Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii
2017-02-01
Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large scale crop mapping requires processing and management of large amounts of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a "Big Data" problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and to multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high resolution (30 m) crop classification map for a large territory (approximately 28,100 km2 and 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in effectively executing the complex workflows of satellite data processing required for large scale applications such as crop mapping. The study discusses strengths and weaknesses of classifiers, assesses accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to the benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that Google Earth Engine (GEE) provides very good performance in terms of enabling access to the remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed support vector machine (SVM), decision tree and random forest classifiers available in GEE.
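For readers unfamiliar with the platform, the sketch below outlines the kind of GEE workflow involved, written against the Earth Engine Python API. It requires an authenticated Earth Engine account; the training asset, class property name, and region are placeholders, and the exact calls and dataset identifiers should be checked against current GEE documentation rather than taken as the study's actual script.

```python
import ee
ee.Initialize()

# Placeholder inputs: a Landsat-8 composite over a region and labeled training points.
region = ee.Geometry.Rectangle([30.0, 50.0, 31.5, 51.0])            # illustrative bounds
composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
             .filterDate("2013-04-01", "2013-10-31")
             .filterBounds(region)
             .median())
bands = ["B2", "B3", "B4", "B5", "B6", "B7"]
training_points = ee.FeatureCollection("users/example/crop_training")  # hypothetical asset

# Sample the composite at the labeled points and train a classifier available in GEE.
samples = composite.select(bands).sampleRegions(
    collection=training_points, properties=["crop_class"], scale=30)  # "crop_class" assumed
classifier = ee.Classifier.smileRandomForest(100).train(
    features=samples, classProperty="crop_class", inputProperties=bands)

crop_map = composite.select(bands).classify(classifier).clip(region)
```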
NASA Astrophysics Data System (ADS)
Jayamani, E.; Perera, D. S.; Soon, K. H.; Bakri, M. K. B.
2017-04-01
A systematic method of material analysis aiming for fuel efficiency improvement through the utilization of natural fiber reinforced polymer matrix composites in the automobile industry is proposed. A multi-factor decision criterion based on the Analytical Hierarchy Process (AHP) was used and executed through MATLAB to achieve improved fuel efficiency through the weight reduction of vehicular components, by effective comparison between two engine hood designs. The reduction was simulated by utilizing natural fiber polymer composites with thermoplastic polypropylene (PP) as the matrix polymer and benchmarked against a synthetic-based composite component. Results showed that PP with 35% flax fiber loading achieved a 0.4% improvement in fuel efficiency, the highest among the 27 candidate fibers.
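The AHP weighting step can be sketched in a few lines, as below: the priority weights are the normalized principal eigenvector of a pairwise-comparison matrix, and a consistency ratio checks the judgments. The criteria and comparison values are illustrative, not the paper's.

```python
import numpy as np

# Pairwise comparisons (Saaty scale) for three illustrative criteria:
# weight reduction, cost, and tensile strength. A[i, j] = importance of i over j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio (CR < 0.1 is the usual acceptance threshold); random index for n=3 is 0.58.
n = A.shape[0]
consistency_index = (eigvals.real[principal] - n) / (n - 1)
consistency_ratio = consistency_index / 0.58

print("criterion weights:", np.round(weights, 3))
print("consistency ratio:", round(consistency_ratio, 3))
```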
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1991-01-01
The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs that solve scientific and engineering applications in a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.
NASA Astrophysics Data System (ADS)
Olson, Richard F.
2013-05-01
Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
The AGINAO Self-Programming Engine
NASA Astrophysics Data System (ADS)
Skaba, Wojciech
2013-01-01
The AGINAO is a project to create a human-level artificial general intelligence system (HL AGI) embodied in the Aldebaran Robotics' NAO humanoid robot. The dynamical and open-ended cognitive engine of the robot is represented by an embedded and multi-threaded control program, that is self-crafted rather than hand-crafted, and is executed on a simulated Universal Turing Machine (UTM). The actual structure of the cognitive engine emerges as a result of placing the robot in a natural preschool-like environment and running a core start-up system that executes self-programming of the cognitive layer on top of the core layer. The data from the robot's sensory devices supplies the training samples for the machine learning methods, while the commands sent to actuators enable testing hypotheses and getting a feedback. The individual self-created subroutines are supposed to reflect the patterns and concepts of the real world, while the overall program structure reflects the spatial and temporal hierarchy of the world dependencies. This paper focuses on the details of the self-programming approach, limiting the discussion of the applied cognitive architecture to a necessary minimum.
Practical Application of Model-based Programming and State-based Architecture to Space Missions
NASA Technical Reports Server (NTRS)
Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian
2006-01-01
A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps
Conception preliminaire de disques de turbine axiale pour moteurs d'aeronefs
NASA Astrophysics Data System (ADS)
Ouellet, Yannick
The preliminary design phase of a turbine rotor has an important impact on the architecture of a new engine definition, as it sets the technical orientation right from the start and provides a good estimate of product performance, weight and cost. In addition, the execution speed at this preliminary phase has become critical to capturing business opportunities. Improving upfront accuracy also alleviates downstream detailed design work and therefore reduces overall product development cycle time. This preliminary phase contains elements slowing down its process, including low interoperability of currently used systems, incompatibility of software and ineffective management of data. In order to overcome these barriers, we have developed the first module of a new Design and Analysis (D&A) platform for the rotor disc. This complete platform ensures integration of different tools running in batch mode, and is driven from a single graphical user interface. The platform developed has been linked with different optimization methods (algorithms, configuration) in order to automate the disc design and propose best practices for rotor structural optimization. This methodology allowed a reduction in design cycle time and an improvement in performance. It was applied to two reference P&WC axial discs. The platform's architecture was also used in the development of reference charts to better understand disc performance within a given design space. Four high pressure rotor discs of P&WC turbofan and turboprop engines were used to generate the technical charts and understand the effect of various parameters. The new tools supporting disc D&A, combined with the optimization process and reference charts, have proven to be profitable in terms of component performance and engineering effort inputs.
Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Koo, Michelle; Cao, Yu
Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework, using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
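A toy version of the log-analysis step is sketched below with pandas; the column names and the outlier rule are illustrative and do not reflect the framework's actual schema or statistics.

```python
import pandas as pd

# Illustrative job-log records; a real ingest would parse scheduler and system logs.
logs = pd.DataFrame([
    {"job_id": 1, "node": "n01", "runtime_s": 310,  "read_mb": 4800},
    {"job_id": 2, "node": "n01", "runtime_s": 295,  "read_mb": 4700},
    {"job_id": 3, "node": "n02", "runtime_s": 2905, "read_mb": 4750},
    {"job_id": 4, "node": "n02", "runtime_s": 320,  "read_mb": 4820},
])

# Per-node performance features.
features = logs.groupby("node")["runtime_s"].agg(["mean", "std", "count"])

# Flag jobs whose runtime is far above the typical value (simple median-based rule).
median = logs["runtime_s"].median()
outliers = logs[logs["runtime_s"] > 2 * median]

print(features)
print(outliers[["job_id", "node", "runtime_s"]])
```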
Bottrighi, Alessio; Terenziani, Paolo
2016-09-01
Several different computer-assisted management systems for computer-interpretable guidelines (CIGs) have been developed by the Artificial Intelligence in Medicine community. Each CIG system is characterized by a specific formalism to represent CIGs, and usually provides a manager to acquire, consult and execute them. Though most formalisms in the literature share many commonalities, each has its own peculiarities. The goal of our work is to provide flexible support for the extension or definition of CIG formalisms and of their acquisition and execution engines. Instead of defining "yet another CIG formalism and its manager", we propose META-GLARE (META Guideline Acquisition, Representation, and Execution), a "meta"-system for defining new CIG systems. We try to capture the commonalities among current CIG approaches by providing (i) a general manager for the acquisition, consultation and execution of hierarchical graphs (representing the control flow of actions in CIGs), parameterized over the types of nodes and arcs constituting them, and (ii) a library of elementary components of guideline nodes (actions) and arcs, in which each type definition specifies how objects of that type can be acquired, consulted and executed. We provide generality and flexibility by allowing free aggregation of such elementary components to define new primitive node and arc types. We have carried out several experiments in which we used META-GLARE to build a CIG system (Experiment 1 in Section 8) or to extend one (Experiments 2 and 3). These experiments show that META-GLARE provides useful and easy-to-use support for such tasks. For instance, re-building the Guideline Acquisition, Representation, and Execution (GLARE) system using META-GLARE required less than one day (Experiment 1). META-GLARE is a meta-system for CIGs supporting fast prototyping. Since it provides acquisition and execution engines that are parametric over the specific CIG formalism, it supports easy updating and construction of CIG systems. Copyright © 2016 Elsevier B.V. All rights reserved.
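As a rough illustration of the "engine parameterized over node types" idea (a sketch of the concept, not META-GLARE's actual architecture or API; all class and type names are invented for the example), a registry of node types with per-type acquire/execute hooks might look like this:

```python
# A minimal sketch, assuming a simplified reading of the META-GLARE idea:
# a generic engine parameterized over node types, where each node type declares
# how its instances are acquired and executed. All names are illustrative.
class NodeType:
    """Elementary component: knows how to acquire and execute its nodes."""
    def __init__(self, name, acquire, execute):
        self.name, self.acquire, self.execute = name, acquire, execute

class TypeLibrary:
    def __init__(self):
        self._types = {}
    def register(self, node_type):
        self._types[node_type.name] = node_type
    def get(self, name):
        return self._types[name]

class GuidelineEngine:
    """Generic execution engine: walks a node list, dispatching on node type."""
    def __init__(self, library):
        self.library = library
    def execute(self, nodes, context):
        for type_name, payload in nodes:
            self.library.get(type_name).execute(payload, context)
        return context

# Defining a new "CIG formalism" amounts to registering node types.
library = TypeLibrary()
library.register(NodeType("query",
                          acquire=lambda spec: spec,
                          execute=lambda payload, ctx: ctx.setdefault("data", []).append(payload)))
library.register(NodeType("decision",
                          acquire=lambda spec: spec,
                          execute=lambda payload, ctx: ctx.update(
                              branch=payload["if_true"] if ctx.get("data") else payload["if_false"])))

engine = GuidelineEngine(library)
print(engine.execute([("query", "blood pressure"),
                      ("decision", {"if_true": "treat", "if_false": "monitor"})], {}))
```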
Education Program for Ph.D. Course to Cultivate Literacy and Competency
NASA Astrophysics Data System (ADS)
Yokono, Yasuyuki; Mitsuishi, Mamoru
The program aims to cultivate internationally competitive young researchers equipped with Fundamental attainment (mathematics, physics, chemistry and biology, and fundamental social sciences), Specialized knowledge (mechanical dynamics, mechanics of materials, hydrodynamics, thermodynamics, design engineering, manufacturing engineering and material engineering, and bird's-eye-view knowledge of technology, society and the environment), Literacy (language, information literacy, technological literacy and knowledge of the law) and Competency (creativity, problem identification and solution, planning and execution, self-management, teamwork, leadership, sense of responsibility and sense of duty) to become future leaders in industry and academia.
History of the Fluids Engineering Division
Cooper, Paul; Martin, C. Samuel; O'Hern, Timothy J.
2016-08-03
The 90th Anniversary of the Fluids Engineering Division (FED) of ASME will be celebrated on July 10–14, 2016 in Washington, DC. The venue is ASME's Summer Heat Transfer Conference (SHTC), Fluids Engineering Division Summer Meeting (FEDSM), and International Conference on Nanochannels and Microchannels (ICNMM). The occasion is an opportune time to celebrate and reflect on the origin of FED and its predecessor—the Hydraulic Division (HYD), which existed from 1926–1963. Furthermore, the FED Executive Committee decided that it would be appropriate to publish concurrently a history of the HYD/FED.
NASA Technical Reports Server (NTRS)
Bekele, Gete
2002-01-01
This document explores the use of advanced computer technologies, with an emphasis on object-oriented design, applied to the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is on phase one of this project, the smart start sequence module. The objectives are: 1) to use current, sound software engineering practices, namely object orientation; 2) to improve software development time, maintenance, execution and management; 3) to provide an alternate design choice for control, implementation, and performance.
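As a purely illustrative sketch of what an object-oriented start-sequence module could look like (the report's actual design is not given here, and all step names, state fields and thresholds below are invented), each step can be modeled as an object with a precondition check and an action, run by a sequencer that aborts on any failed check:

```python
# A hedged, purely illustrative sketch of an object-oriented start-sequence
# module (not the actual engine software): each step is an object with a
# check/action interface, and the sequencer aborts safely on any failure.
class StartStep:
    def __init__(self, name, check, action):
        self.name, self.check, self.action = name, check, action

class StartSequencer:
    def __init__(self, steps):
        self.steps = steps
    def run(self, engine_state):
        for step in self.steps:
            if not step.check(engine_state):
                print(f"abort: precondition failed at '{step.name}'")
                return False
            step.action(engine_state)
            print(f"completed: {step.name}")
        return True

# Illustrative steps and state; thresholds are made-up values.
state = {"tank_pressure": 32.0, "igniter_ok": True, "valves_open": False}
sequence = StartSequencer([
    StartStep("pressurize check", lambda s: s["tank_pressure"] > 30.0, lambda s: None),
    StartStep("open valves", lambda s: s["igniter_ok"], lambda s: s.update(valves_open=True)),
    StartStep("ignite", lambda s: s["valves_open"], lambda s: s.update(ignited=True)),
])
print("start ok:", sequence.run(state))
```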
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cusick, Lesley T.
When the Environmental Management (EM) Program at the Oak Ridge Office of the Department of Energy (DOE) began its major decontamination and decommissioning (D and D) program activities in the mid-1990s, it was understood that the work to demolish the gaseous diffusion process buildings at the K-25 Site, as it was then known, would be challenging. Nothing of that size and breadth had ever been done within the DOE complex, and the job brought about a full menu of unique attributes: radiological contamination with enriched materials entrained in certain areas of the system, a facility that was never designed not to operate but had been shut down since 1964, and a loyal following of individuals and organizations who were committed to the physical preservation of at least some portion of the historic Manhattan Project property. DOE was able to solve and resolve the issues related to nuclear materials management, contamination control, and determining the best way to safely and efficiently deconstruct the massive building. However, for a variety of reasons, resolution of the historic preservation questions - what and how much to preserve, how to preserve it, where to preserve it, how to interpret it, how much to spend on preservation, and by and for whom preservation should occur - remained open to debate for over a decade. After a dozen years, countless meetings, phone calls, discussions and other exchanges, and four National Historic Preservation Act (NHPA) [1] Memoranda of Agreement (MOA), a final MOA [2] has been executed. The final executed MOA's measures are robust, creative, substantive, and will be effective. They include a multi-story replica of a portion of the K-25 Building, the dedication of the K-25 Building footprint for preservation purposes, an equipment building to house authentic Manhattan Project and Cold War equipment, a virtual museum, an on-site history center, a grant to preserve a historically significant Manhattan Project-era hotel in Oak Ridge, and more. The MOA was designed to offer something for everyone. The MOA for the K-25 Building and interpretation of the East Tennessee Technology Park (ETTP; formerly the K-25 Site) was executed by all of the signatory parties on August 7, 2012 - almost 67 years to the day after the 'product' of the K-25 process building became known to more than just a small group of scientists and engineers working on a secret project for the Army Corps of Engineers Manhattan District. (authors)
NASA Technical Reports Server (NTRS)
Stark, Michael; Hennessy, Joseph F. (Technical Monitor)
2002-01-01
My assertion is that not only are product lines a relevant research topic, but that the tools used by empirical software engineering researchers can address observed practical problems. Our experience at NASA has been that there are often externally proposed solutions available, but that we have had difficulties applying them in our particular context. We have also focused on return-on-investment issues when evaluating product lines, and while these are important, one cannot obtain objective data on success or failure until several applications from a product family have been deployed. The use of the Quality Improvement Paradigm (QIP) can address these issues: (1) Planning an adoption path from an organization's current state to a product line approach; (2) Constructing a development process to fit the organization's adoption path; (3) Evaluating product line development processes as the project is being developed. The QIP consists of the following six steps: (1) Characterize the project and its environment; (2) Set quantifiable goals for successful project performance; (3) Choose the appropriate process models, supporting methods, and tools for the project; (4) Execute the process, analyze interim results, and provide real-time feedback for corrective action; (5) Analyze the results of completed projects and recommend improvements; and (6) Package the lessons learned as updated and refined process models. A figure shows the QIP in detail. The iterative nature of the QIP supports an incremental development approach to product lines, and the project learning and feedback provide the necessary early evaluations.
TAMU: Blueprint for A New Space Mission Operations System Paradigm
NASA Technical Reports Server (NTRS)
Ruszkowski, James T.; Meshkat, Leila; Haensly, Jean; Pennington, Al; Hogle, Charles
2011-01-01
The Transferable, Adaptable, Modular and Upgradeable (TAMU) Flight Production Process (FPP) is a System of Systems (SoS) framework which cuts across multiple organizations and their associated facilities, which are, in the most general case, in geographically dispersed locations, to develop the architecture and associated workflow processes of products for a broad range of flight projects. Further, TAMU FPP provides for the automatic execution and re-planning of the workflow processes as they become operational. This paper provides the blueprint for the TAMU FPP paradigm. This blueprint presents a complete, coherent technique, process and tool set that results in an infrastructure usable for full-lifecycle design and decision making during the flight production process. Building on many years of experience with the Space Shuttle Program (SSP) and the International Space Station (ISS), and taking the now-cancelled Constellation Program, which aimed at returning humans to the moon, as a starting point, a modern model-based Systems Engineering infrastructure has been under development to re-engineer the FPP. This infrastructure uses a structured modeling and architecture development approach to optimize the system design, thereby reducing sustaining costs and improving system efficiency, reliability, robustness and maintainability metrics. With the advent of the new vision for human space exploration, it is now necessary to further generalize this framework to take into consideration a broad range of missions and the participation of multiple organizations outside of the MOD; hence the Transferable, Adaptable, Modular and Upgradeable (TAMU) concept.
Test plan : Branson TRIP system/historical data analysis
DOT National Transportation Integrated Search
2000-06-28
The focus of this data collection effort centers on the following six factors specifically articulated by the Federal Lands Highway, Executive Quality Council. They are as follows: Level of Contracting Out--identify what Preliminary Engineering ...
REPORT ON 2017 MnROAD CONSTRUCTION ACTIVITIES
DOT National Transportation Integrated Search
2018-05-01
The National Road Research Alliance (NRRA), a multi-state pooled-fund program, exists to provide strategic implementation of pavement engineering solutions through cooperative research. NRRA is led by an Executive Committee of state DOT partners, and...
Carrasco, Juan A; Dormido, Sebastián
2006-04-01
The use of industrial control systems in simulators facilitates the execution of engineering activities related to the installation and optimization of control systems in real plants. "Industrial control system" is used here as a general term covering all the control systems that can be installed in an industrial plant, ranging from complex distributed control systems and SCADA packages to small single control devices. This paper summarizes the current alternatives for the development of simulators of industrial plants and presents an analysis of the process of integrating an industrial control system into a simulator, with the aim of helping in the installation of real control systems in simulators.
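To make the integration idea concrete, here is a hedged toy sketch (not from the paper) of the basic coupling loop between a simulated plant and a controller: at each step the control system reads the simulated measurement and the simulator advances the plant with the resulting control signal. The tank model, gains and setpoint are illustrative assumptions.

```python
# A minimal sketch, under simplifying assumptions, of coupling a control system
# to a simulated plant: a first-order tank model advanced in fixed steps, with a
# PI controller standing in for the external industrial control system.
def plant_step(level, inflow, dt=1.0, outflow_coeff=0.05):
    """First-order tank: level rises with inflow, drains proportionally to level."""
    return level + dt * (inflow - outflow_coeff * level)

class PIController:
    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki, self.setpoint, self.integral = kp, ki, setpoint, 0.0
    def output(self, measurement, dt=1.0):
        error = self.setpoint - measurement
        self.integral += error * dt
        return max(0.0, self.kp * error + self.ki * self.integral)

level, controller = 0.0, PIController(kp=0.1, ki=0.01, setpoint=10.0)
for t in range(200):
    inflow = controller.output(level)   # control system reads the simulated sensor
    level = plant_step(level, inflow)   # simulator advances the plant model
print(f"final level: {level:.2f} (setpoint 10.0)")
```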
Calculation and use of an environment's characteristic software metric set
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Selby, Richard W., Jr.
1985-01-01
Since both cost/quality and production environments differ, this study presents an approach for customizing a characteristic set of software metrics to an environment. The approach is applied in the Software Engineering Laboratory (SEL), a NASA Goddard production environment, to 49 candidate process and product metrics of 652 modules from six (51,000 to 112,000 lines) projects. For this particular environment, the method yielded the characteristic metric set (source lines, fault correction effort per executable statement, design effort, code effort, number of I/O parameters, number of versions). The uses examined for a characteristic metric set include forecasting the effort for development, modification, and fault correction of modules based on historical data.
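One use named above is forecasting development or modification effort from historical data; as a hedged toy example (not the SEL's actual models), a single metric from the characteristic set, such as source lines, can be regressed against recorded effort and used to predict effort for a new module. The data values below are invented.

```python
# A hedged toy illustration of effort forecasting from a historical metric:
# fit a simple linear model on past modules, then predict for a new one.
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical historical data: (source lines, effort hours) per module.
lines = [120, 340, 560, 800, 1050]
effort = [10, 22, 41, 55, 78]
a, b = fit_line(lines, effort)
print(f"forecast for a 600-line module: {a + b * 600:.1f} hours")
```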
ARDesigner: a web-based system for allosteric RNA design.
Shu, Wenjie; Liu, Ming; Chen, Hebing; Bo, Xiaochen; Wang, Shengqi
2010-12-01
RNA molecules play vital informational, structural, and functional roles in molecular biology, making them ideal targets for synthetic biology. However, several challenges remain in engineering novel allosteric RNA molecules, and the development of efficient computational design techniques is vitally needed. Here we describe the development of Allosteric RNA Designer (ARDesigner), a user-friendly and freely available web-based system for allosteric RNA design that incorporates mutational robustness into the design process. The system output includes detailed design information in a graphical HTML format. We used ARDesigner to engineer a temperature-sensitive AR and found that the resulting design satisfied the prescribed properties. ARDesigner provides a simple means for researchers to design allosteric RNAs with specific properties. With its versatile framework and possibilities for further enhancement, ARDesigner may serve as a useful tool for synthetic biology and therapeutic design. ARDesigner and its executable version are freely available at http://biotech.bmi.ac.cn/ARDesigner. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.
Combined Engineering Education Based on Regional Needs Aiming for Design Education
NASA Astrophysics Data System (ADS)
Hama, Katsumi; Yaegashi, Kosuke; Kobayashi, Junya
The importance of design education that cultivates integrated competences has been emphasized in higher-education engineering programs in connection with quality assurance of engineering education. However, it has also been pointed out that cooperative education in collaboration with the community should be stressed, because educational institutions alone cannot fully provide design education. This paper outlines the practical engineering education being carried out, based on regional needs, in the project-based learning at Hakodate National College of Technology, and reports the results of the activity as a model education program for fusion and combination.
2012-05-07
... executions are all the executions of the first except the single infinite execution stuttering around s0. And because of this exception, s0 is not bisimilar ... maximal paths in the diagram, and that whose executions are all the executions of the first system except the infinite execution stuttering around s0 ... advance to s1, from where on it behaves just like the first one. What sets the behaviour of the two processes apart is, of course, the infinite stuttering
De Neys, Wim
2006-06-01
Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).
Computing Cooling Flows in Turbines
NASA Technical Reports Server (NTRS)
Gauntner, J.
1986-01-01
Algorithm developed for calculating both quantity of compressor bleed flow required to cool turbine and resulting decrease in efficiency due to cooling air injected into gas stream. Program intended for use with axial-flow, air-breathing, jet-propulsion engines with variety of airfoil-cooling configurations. Algorithm results compared extremely well with figures given by major engine manufacturers for given bulk-metal temperatures and cooling configurations. Program written in FORTRAN IV for batch execution.
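For a sense of the kind of calculation involved (this is a crude textbook-style heat-balance illustration, not the NASA algorithm, and every number and coefficient below is made up), the required coolant flow and an efficiency penalty can be roughed out as follows:

```python
# A toy first-order estimate, under textbook heat-balance assumptions, of the two
# quantities the program computes; NOT the NASA algorithm, just an illustration
# of the kind of calculation involved. All values are made up.
def coolant_flow(h, area, t_gas, t_metal, cp, t_cool_in):
    """Coolant flow needed if the coolant absorbs the convective load and exits at metal temperature."""
    q = h * area * (t_gas - t_metal)          # convective heat load, W
    return q / (cp * (t_metal - t_cool_in))   # kg/s

def efficiency_penalty(bleed_fraction, k=0.5):
    """Crude assumption: each unit fraction of core flow bled costs k of efficiency."""
    return k * bleed_fraction

m_cool = coolant_flow(h=2000.0, area=0.05, t_gas=1600.0, t_metal=1150.0, cp=1005.0, t_cool_in=650.0)
print(f"coolant flow ~ {m_cool:.3f} kg/s, efficiency penalty ~ {efficiency_penalty(m_cool / 60.0) * 100:.2f}%")
```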
Computer-Aided Design Of Turbine Blades And Vanes
NASA Technical Reports Server (NTRS)
Hsu, Wayne Q.
1988-01-01
Quasi-three-dimensional method for determining aerothermodynamic configuration of turbine uses computer-interactive analysis and design and computer-interactive graphics. Design procedure executed rapidly so designer easily repeats it to arrive at best performance, size, structural integrity, and engine life. Sequence of events in aerothermodynamic analysis and design starts with engine-balance equations and ends with boundary-layer analysis and viscous-flow calculations. Analysis-and-design procedure interactive and iterative throughout.
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
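A minimal sketch of the idea, not the DeMAID implementation: treat an ordering of the processes as a permutation, score it by how many couplings feed backward (forcing iteration), and evolve the ordering with a permutation genetic algorithm. The processes and couplings below are invented for illustration.

```python
# A hedged sketch of ordering coupled design processes with a permutation GA:
# minimize the number of couplings whose consumer runs before its producer.
import random

COUPLINGS = [("aero", "structures"), ("structures", "weights"),
             ("weights", "aero"), ("aero", "performance"),
             ("weights", "performance")]  # (producer, consumer), illustrative

def feedback_count(order):
    pos = {p: i for i, p in enumerate(order)}
    return sum(1 for producer, consumer in COUPLINGS if pos[producer] > pos[consumer])

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in parent b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [p for p in b if p not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(order, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def evolve(processes, pop_size=30, generations=50):
    pop = [random.sample(processes, len(processes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=feedback_count)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(crossover(*random.sample(survivors, 2)))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=feedback_count)

best = evolve(["aero", "structures", "weights", "performance"])
print(best, "feedbacks:", feedback_count(best))
```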
ERIC Educational Resources Information Center
Meltzer, Lynn
2013-01-01
Success in our 21st century schools is linked with students' mastery of a wide range of academic and technological skills that rely heavily on executive function processes. This article describes a theoretical paradigm for understanding, assessing, and teaching that emphasizes the central importance of six executive function processes: goal…
Clustering execution in a processing system to increase power savings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.
Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.
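A hedged sketch of the clustering idea (this is not the patented method, and the task graph is invented): pick ready tasks so that work on the same component is grouped back-to-back wherever the data dependencies allow, which lengthens the idle phases of the other components.

```python
# A hedged sketch of clustering execution: schedule a task graph so tasks on the
# same system component run back-to-back where the dependencies allow it.
# task -> (component, prerequisites); purely illustrative.
TASKS = {
    "decode_a": ("accelerator", []),
    "decode_b": ("accelerator", []),
    "cpu_post_a": ("cpu", ["decode_a"]),
    "cpu_post_b": ("cpu", ["decode_b"]),
    "merge": ("cpu", ["cpu_post_a", "cpu_post_b"]),
}

def clustered_order(tasks):
    done, order, last_component = set(), [], None
    while len(done) < len(tasks):
        ready = [t for t, (_, deps) in tasks.items()
                 if t not in done and all(d in done for d in deps)]
        # Prefer a ready task on the component we are already using.
        same = [t for t in ready if tasks[t][0] == last_component]
        chosen = (same or ready)[0]
        order.append(chosen)
        done.add(chosen)
        last_component = tasks[chosen][0]
    return order

print(clustered_order(TASKS))
```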
Framework for Integrating Science Data Processing Algorithms Into Process Control Systems
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Crichton, Daniel J.; Chang, Albert Y.; Foster, Brian M.; Freeborn, Dana J.; Woollard, David M.; Ramirez, Paul M.
2011-01-01
A software framework called PCS Task Wrapper is responsible for standardizing the setup, process initiation, execution, and file management tasks surrounding the execution of science data algorithms, which are referred to by NASA as Product Generation Executives (PGEs). PGEs codify a scientific algorithm, some step in the overall scientific process involved in a mission science workflow. The PCS Task Wrapper provides a stable operating environment to the underlying PGE during its execution lifecycle. If the PGE requires a file, or metadata regarding the file, the PCS Task Wrapper is responsible for delivering that information to the PGE in a manner that meets its requirements. If the PGE requires knowledge of upstream or downstream PGEs in a sequence of executions, that information is also made available. Finally, if information regarding disk space, or node information such as CPU availability, etc., is required, the PCS Task Wrapper provides this information to the underlying PGE. After this information is collected, the PGE is executed, and its output Product file and Metadata generation is managed via the PCS Task Wrapper framework. The innovation is responsible for marshalling output Products and Metadata back to a PCS File Management component for use in downstream data processing and pedigree. In support of this, the PCS Task Wrapper leverages the PCS Crawler Framework to ingest (during pipeline processing) the output Product files and Metadata produced by the PGE. The architectural components of the PCS Task Wrapper framework include PGE Task Instance, PGE Config File Builder, Config File Property Adder, Science PGE Config File Writer, and PCS Met file Writer. This framework is the unifying bridge between the execution of a step in the overall processing pipeline and the available PCS component services, as well as the information that they collectively manage.
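The wrapper pattern the abstract describes (stage inputs and metadata, hand them to the PGE, run it, then collect its products) can be sketched as below. This is a hedged illustration only; the class, file names and config format are assumptions, not the actual PCS Task Wrapper API.

```python
# A hedged sketch of the wrapper pattern described above; class and method names
# are illustrative assumptions, not the actual PCS Task Wrapper API.
import json, subprocess, tempfile, pathlib

class TaskWrapper:
    """Standardizes setup, execution, and output collection around one PGE run."""
    def __init__(self, pge_command, inputs, metadata):
        self.pge_command, self.inputs, self.metadata = pge_command, inputs, metadata

    def build_config(self, workdir):
        # Hand the PGE everything it needs (input files, upstream metadata) in one file.
        config = {"inputs": self.inputs, "metadata": self.metadata, "workdir": str(workdir)}
        path = workdir / "pge_config.json"
        path.write_text(json.dumps(config))
        return path

    def run(self):
        workdir = pathlib.Path(tempfile.mkdtemp(prefix="pge_"))
        config = self.build_config(workdir)
        result = subprocess.run(self.pge_command + [str(config)],
                                capture_output=True, text=True)
        # "Crawl" the working directory for products the PGE wrote.
        products = [p for p in workdir.iterdir() if p.name != config.name]
        return result.returncode, products

# Example: wrap a trivial stand-in PGE (here just the system's `true` command).
wrapper = TaskWrapper(["true"], inputs=["granule_001.dat"], metadata={"orbit": 42})
print(wrapper.run())
```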
Kluwe-Schiavon, Bruno; Viola, Thiago W; Sanvicente-Vieira, Breno; Malloy-Diniz, Leandro F; Grassi-Oliveira, Rodrigo
2016-01-01
Recently, there has been growing interest in understanding how executive functions are conceptualized in psychopathology. Since several models have been proposed, the major issue lies within the definition of executive functioning itself. Theoretical discussions have emerged, narrowing the boundaries between "hot" and "cold" executive functions or between self-regulation and cognitive control. Nevertheless, the definition of executive functions is far from a consensual proposition and it has been suggested that these models might be outdated. Current efforts indicate that human behavior and cognition are by-products of many brain systems operating and interacting at different levels, and therefore, it is very simplistic to assume a dualistic perspective of information processing. Based upon an adaptive perspective, we discuss how executive functions could emerge from the ability to solve immediate problems and to generalize successful strategies, as well as from the ability to synthesize and to classify environmental information in order to predict context and future. We present an executive functioning perspective that emerges from the dynamic balance between automatic-controlled behaviors and an emotional-salience state. According to our perspective, the adaptive role of executive functioning is to automatize efficient solutions simultaneously with cognitive demand, enabling individuals to engage such processes with increasingly complex problems. Understanding executive functioning as a mediator of stress and cognitive engagement not only fosters discussions concerning individual differences, but also offers an important paradigm to understand executive functioning as a continuum process rather than a categorical and multicomponent structure.
Quality management of manufacturing process based on manufacturing execution system
NASA Astrophysics Data System (ADS)
Zhang, Jian; Jiang, Yang; Jiang, Weizhuo
2017-04-01
Quality control elements in the manufacturing process are elaborated, and the approach to quality management of the manufacturing process based on a manufacturing execution system (MES) is discussed. Finally, the functions of MES for a microcircuit production line are introduced.
Hinault, T; Lemaire, P
2016-01-01
In this review, we provide an overview of how age-related changes in executive control influence aging effects in arithmetic processing. More specifically, we consider the role of executive control in strategic variations with age during arithmetic problem solving. Previous studies found that age-related differences in arithmetic performance are associated with strategic variations. That is, when they accomplish arithmetic problem-solving tasks, older adults use fewer strategies than young adults, use strategies in different proportions, and select and execute strategies less efficiently. Here, we review recent evidence, suggesting that age-related changes in inhibition, cognitive flexibility, and working memory processes underlie age-related changes in strategic variations during arithmetic problem solving. We discuss both behavioral and neural mechanisms underlying age-related changes in these executive control processes. © 2016 Elsevier B.V. All rights reserved.
Naber, Marnix; Vedder, Anneke; Brown, Stephen B R E; Nieuwenhuis, Sander
2016-01-01
The Stroop task is a popular neuropsychological test that measures executive control. Strong Stroop interference is commonly interpreted in neuropsychology as a diagnostic marker of impairment in executive control, possibly reflecting executive dysfunction. However, popular models of the Stroop task indicate that several other aspects of color and word processing may also account for individual differences in the Stroop task, independent of executive control. Here we use new approaches to investigate the degree to which individual differences in Stroop interference correlate with the relative processing speed of word and color stimuli, and the lateral inhibition between visual stimuli. We conducted an electrophysiological and behavioral experiment to measure (1) how quickly an individual's brain processes words and colors presented in isolation (P3 latency), and (2) the strength of an individual's lateral inhibition between visual representations with a visual illusion. Both measures explained at least 40% of the variance in Stroop interference across individuals. As these measures were obtained in contexts not requiring any executive control, we conclude that the Stroop effect also measures an individual's pre-set way of processing visual features such as words and colors. This study highlights the important contributions of stimulus processing speed and lateral inhibition to individual differences in Stroop interference, and challenges the general view that the Stroop task primarily assesses executive control.
Affective and executive network processing associated with persuasive antidrug messages.
Ramsay, Ian S; Yzer, Marco C; Luciana, Monica; Vohs, Kathleen D; MacDonald, Angus W
2013-07-01
Previous research has highlighted brain regions associated with socioemotional processes in persuasive message encoding, whereas cognitive models of persuasion suggest that executive brain areas may also be important. The current study aimed to identify lateral prefrontal brain areas associated with persuasive message viewing and understand how activity in these executive regions might interact with activity in the amygdala and medial pFC. Seventy adolescents were scanned using fMRI while they watched 10 strongly convincing antidrug public service announcements (PSAs), 10 weakly convincing antidrug PSAs, and 10 advertisements (ads) unrelated to drugs. Antidrug PSAs compared with nondrug ads more strongly elicited arousal-related activity in the amygdala and medial pFC. Within antidrug PSAs, those that were prerated as strongly persuasive versus weakly persuasive showed significant differences in arousal-related activity in executive processing areas of the lateral pFC. In support of the notion that persuasiveness involves both affective and executive processes, functional connectivity analyses showed greater coactivation between the lateral pFC and amygdala during PSAs known to be strongly (vs. weakly) convincing. These findings demonstrate that persuasive messages elicit activation in brain regions responsible for both emotional arousal and executive control and represent a crucial step toward a better understanding of the neural processes responsible for persuasion and subsequent behavior change.
Graphical Language for Data Processing
NASA Technical Reports Server (NTRS)
Alphonso, Keith
2011-01-01
A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as LIDAR, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
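As a concrete, hedged illustration of the interpreted execution model described above (a sketch only, not the system's .NET implementation; node names, ports and the tiny lidar-like example are invented), a graph can hold nodes and wires and execute a node as soon as all of its wired inputs are available:

```python
# A minimal sketch of the interpreted execution model: nodes hold a processing
# function, wires carry the output of one node to a named input of another, and
# the executor walks the graph, passing data along the wires.
class Node:
    def __init__(self, name, func):
        self.name, self.func = name, func

class Graph:
    def __init__(self):
        self.nodes, self.wires = {}, []   # wires: (src_node, dst_node, dst_port)
    def add(self, node):
        self.nodes[node.name] = node
        return node
    def wire(self, src, dst, port):
        self.wires.append((src, dst, port))
    def execute(self):
        results, remaining = {}, list(self.nodes)
        while remaining:
            for name in list(remaining):
                needed = [(s, p) for s, d, p in self.wires if d == name]
                if all(s in results for s, _ in needed):   # all upstream data present
                    node = self.nodes[name]
                    kwargs = {p: results[s] for s, p in needed}
                    results[name] = node.func(**kwargs)
                    remaining.remove(name)
        return results

g = Graph()
g.add(Node("load", lambda: [1.0, 2.0, 5.0]))            # e.g. read lidar returns
g.add(Node("filter", lambda points: [p for p in points if p > 1.5]))
g.add(Node("stats", lambda points: sum(points) / len(points)))
g.wire("load", "filter", "points")
g.wire("filter", "stats", "points")
print(g.execute())   # {'load': [...], 'filter': [2.0, 5.0], 'stats': 3.5}
```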