Measuring the software process and product: Lessons learned in the SEL
NASA Technical Reports Server (NTRS)
Basili, V. R.
1985-01-01
The software development process and product can and should be measured. The software measurement process at the Software Engineering Laboratory (SEL) has taught a major lesson: develop a goal-driven paradigm (also characterized as a goal/question/metric paradigm) for data collection. Project analysis under this paradigm leads to a design for evaluating and improving the methodology of software development and maintenance.
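As a rough illustration of the goal/question/metric idea (the goal, question, and metric below are invented for this sketch, not SEL's actual instruments), a GQM template can be captured as a small data structure:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str            # e.g., "defects found per KLOC in system test"
    unit: str

@dataclass
class Question:
    text: str            # a question whose answer interprets the metrics
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str         # analyze <object> ...
    quality_focus: str   # ... with respect to <focus> ...
    viewpoint: str       # ... from the viewpoint of <who>
    questions: list[Question] = field(default_factory=list)

# Hypothetical example: one goal refined into a question answered by a metric.
goal = Goal(
    purpose="analyze the code inspection process",
    quality_focus="defect detection effectiveness",
    viewpoint="project manager",
    questions=[
        Question(
            text="What fraction of defects is caught before system test?",
            metrics=[Metric("defects found in inspection / total defects", "ratio")],
        )
    ],
)
print(goal.questions[0].metrics[0].name)
```

Data collection is then driven top-down from the goal, so every number gathered has a question it answers.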
Developing sustainable software solutions for bioinformatics by the “Butterfly” paradigm
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2014-01-01
Software design and sustainable software engineering are essential for the long-term development of bioinformatics software. Typical challenges in an academic environment are short-term contracts, island solutions, pragmatic approaches and loose documentation. Upcoming new challenges are big data, complex data sets, software compatibility and rapid changes in data representation. Our approach to cope with these challenges consists of iterative intertwined cycles of development (the “Butterfly” paradigm) for key steps in scientific software engineering. User feedback is valued, as is planning software in a sustainable and interoperable way. Tool usage should be easy and intuitive. A middleware supports a user-friendly Graphical User Interface (GUI) as well as database/tool development independently. We validated the approach in our own software development and compared the different design paradigms in various software solutions. PMID:25383181
An Object-Oriented Approach to Writing Computational Electromagnetics Codes
NASA Technical Reports Server (NTRS)
Zimmerman, Martin; Mallasch, Paul G.
1996-01-01
Presently, most computer software development in the Computational Electromagnetics (CEM) community employs the structured programming paradigm, particularly using the Fortran language. Other segments of the software community began switching to an Object-Oriented Programming (OOP) paradigm in recent years to help ease the design and development of highly complex codes. This paper examines the design of a time-domain numerical analysis CEM code using the OOP paradigm, comparing OOP code and structured programming code in terms of software maintenance, portability, flexibility, and speed.
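To make the contrast concrete, here is a minimal, hypothetical sketch (in Python rather than Fortran or C++, purely for brevity) of how an OOP design encapsulates a time-domain field update behind a class interface, where structured code would instead pass arrays through subroutines:

```python
class YeeGrid1D:
    """Minimal 1-D FDTD-style field container (illustrative only)."""
    def __init__(self, n_cells, courant=0.5):
        self.ez = [0.0] * n_cells        # electric field samples
        self.hy = [0.0] * n_cells        # magnetic field samples
        self.courant = courant

    def step(self):
        """Advance fields one time step (free-space update, no sources or boundaries)."""
        for k in range(len(self.hy) - 1):
            self.hy[k] += self.courant * (self.ez[k + 1] - self.ez[k])
        for k in range(1, len(self.ez)):
            self.ez[k] += self.courant * (self.hy[k] - self.hy[k - 1])

grid = YeeGrid1D(200)
grid.ez[100] = 1.0                        # crude initial excitation
for _ in range(50):
    grid.step()                           # callers never touch the update loops directly
```

The maintenance argument in the abstract comes down to this encapsulation: the grid's data layout can change without touching any calling code.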
Software development: A paradigm for the future
NASA Technical Reports Server (NTRS)
Basili, Victor R.
1989-01-01
A new paradigm for software development that treats software development as an experimental activity is presented. It provides built-in mechanisms for learning how to develop software better and reusing previous experience in the forms of knowledge, processes, and products. It uses models and measures to aid in the tasks of characterization, evaluation and motivation. An organization scheme is proposed for separating the project-specific focus from the organization's learning and reuse focuses of software development. The implications of this approach for corporations, research and education are discussed and some research activities currently underway at the University of Maryland that support this approach are presented.
Formal verification of mathematical software
NASA Technical Reports Server (NTRS)
Sutherland, D.
1984-01-01
Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.
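For illustration only, the Verification Condition approach turns an annotated program into proof obligations; a generic instance (not the asymptotic-paradigm formalization itself, with fl(.) and epsilon standing in for whatever rounding model that paradigm prescribes) looks like:

```latex
% Hoare triple {P} S {Q}; for an assignment, wp(y := e, Q) = Q[y -> e],
% so the triple below generates the verification condition P => Q[y -> e].
\{\, |x| \le 1 \,\} \quad y := x \cdot x \quad \{\, |y - x^2| \le \varepsilon \,\}
\;\;\rightsquigarrow\;\;
|x| \le 1 \;\Rightarrow\; |\mathrm{fl}(x \cdot x) - x^2| \le \varepsilon
```

The interesting work lies in discharging the right-hand side, which is exactly where a formal model of floating point arithmetic is needed.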
Object oriented development of engineering software using CLIPS
NASA Technical Reports Server (NTRS)
Yoon, C. John
1991-01-01
Engineering applications involve numeric complexity and manipulation of large amounts of data. Traditionally, numeric computation has been the chief concern in developing engineering software. As engineering application software became larger and more complex, management of resources such as data, rather than numeric complexity, has become the major software design problem. Object oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The CLIPS Object Oriented Language's (COOL) object oriented features are more versatile than C++'s. A software design methodology, based on object oriented and procedural approaches appropriate for engineering software and to be implemented in CLIPS, was outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.
Documenting the decision structure in software development
NASA Technical Reports Server (NTRS)
Wild, J. Christian; Maly, Kurt; Shen, Stewart N.
1990-01-01
Current software development paradigms focus on the products of the development process. Much of the decision making process which produces these products is outside the scope of these paradigms. The Decision-Based Software Development (DBSD) paradigm views the design process as a series of interrelated decisions which involve the identification and articulation of problems, alternatives, solutions and justifications. Decisions made by programmers and analysts are recorded in a project data base. Unresolved problems are also recorded, and resources for their resolution are allocated by management according to the overall development strategy. This decision structure is linked to the products affected by the relevant decision and provides a process oriented view of the resulting system. Software maintenance uses this decision view of the system to understand the rationale behind the decisions affecting the part of the system to be modified. D-HyperCase, a prototype Decision-Based Hypermedia System, is described, and results of applying the DBSD approach during its development are presented.
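A minimal sketch of the kind of decision record such a project data base could hold (the field names and example content are invented here; the paper's actual schema may differ):

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    problem: str                         # the articulated problem
    alternatives: list = field(default_factory=list)
    chosen: str = ""                     # selected alternative (empty if unresolved)
    justification: str = ""
    affected_products: list = field(default_factory=list)  # links to design/code artifacts

    @property
    def resolved(self):
        return bool(self.chosen)

project_db = [
    Decision(
        problem="How should telemetry records be persisted?",
        alternatives=["flat files", "relational DB"],
        chosen="relational DB",
        justification="ad hoc queries are needed during maintenance",
        affected_products=["storage_module.design", "db_schema.sql"],
    ),
    Decision(problem="Which checksum algorithm for downlink frames?"),
]

unresolved = [d for d in project_db if not d.resolved]   # management allocates resources to these
print(len(unresolved), "open problem(s)")
```

Maintenance then navigates from an affected product back to the decisions (and justifications) that shaped it.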
A proposed research program in information processing
NASA Technical Reports Server (NTRS)
Schorr, Herbert
1992-01-01
The goal of the Formalized Software Development (FSD) project was to demonstrate improvements in the productivity of software development and maintenance through the use of a new software lifecycle paradigm. The paradigm calls for the mechanical, but human-guided, derivation of software implementations from formal specifications of the desired software behavior. It relies on altering a system's specification and rederiving its implementation as the standard technology for software maintenance. A system definition for this paradigm is composed of a behavioral specification together with a body of annotations that control the derivation of executable code from the specification. Annotations generally achieve the selection of certain data representations and/or algorithms that are consistent with, but not mandated by, the behavioral specification. In doing this, they may yield systems which exhibit only certain behaviors among multiple alternatives permitted by the behavioral specification. The FSD project proposed to construct a testbed in which to explore the realization of this new paradigm. The testbed was to provide an operational support environment for software design, implementation, and maintenance. The testbed was to provide highly automated support for individual programmers ('programming in the small'), but not to address the additional needs of programming teams ('programming in the large'). The testbed was to focus on supporting rapid construction and evolution of useful prototypes of software systems, as opposed to focusing on the problems of achieving production quality performance of systems.
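A toy rendering of the specification-plus-annotation idea (an invented example in Python; the FSD testbed used formal specification notations, and its derivation machinery was far more sophisticated): the behavioral specification fixes only observable behavior, and the "annotation" selects one of several consistent representations from which an implementation is produced.

```python
# Behavioral specification (what): a bag of numbers with insert and a running mean.
SPEC = {"operations": ["insert(x)", "mean()"],
        "laws": ["mean() == sum of inserted values / count of inserted values"]}

# Annotation (how): pick a representation consistent with, but not mandated by, SPEC.
ANNOTATION = {"representation": "running-sums"}   # alternative: "keep-all-elements"

def derive(annotation):
    """Mechanically select an implementation consistent with SPEC."""
    if annotation["representation"] == "running-sums":
        class Bag:
            def __init__(self): self.total, self.n = 0.0, 0
            def insert(self, x): self.total += x; self.n += 1
            def mean(self): return self.total / self.n
    else:
        class Bag:
            def __init__(self): self.items = []
            def insert(self, x): self.items.append(x)
            def mean(self): return sum(self.items) / len(self.items)
    return Bag

Bag = derive(ANNOTATION)
b = Bag(); b.insert(2.0); b.insert(4.0)
assert b.mean() == 3.0   # behavior mandated by the spec, independent of representation
```

Maintenance in this paradigm amounts to editing SPEC or ANNOTATION and rederiving, rather than patching the derived code.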
Spacecraft Avionics Software Development Then and Now: Different but the Same
NASA Technical Reports Server (NTRS)
Mangieri, Mark L.; Garman, John (Jack); Vice, Jason
2012-01-01
NASA has always been in the business of balancing new technologies and techniques to achieve human space travel objectives. NASA's historic Software Production Facility (SPF) was developed to serve complex avionics software solutions during an era dominated by mainframes, tape drives, and lower level programming languages. These systems have proven themselves resilient enough to serve the Shuttle Orbiter avionics life cycle for decades. The SPF and its predecessor, the Software Development Lab (SDL), at NASA's Johnson Space Center (JSC) hosted flight software (FSW) engineering, development, simulation, and test. It was active from the beginning of Shuttle Orbiter development in 1972 through the end of the shuttle program in the summer of 2011, almost 40 years. NASA's Kedalion engineering analysis lab is on the forefront of validating and using many contemporary avionics HW/SW development and integration techniques, which represent new paradigms to NASA's heritage culture in avionics software engineering. Kedalion has validated many of the Orion project's HW/SW engineering techniques borrowed from the adjacent commercial aircraft avionics environment, inserting new techniques and skills into the Multi-Purpose Crew Vehicle (MPCV) Orion program. Using contemporary agile techniques, COTS products, early rapid prototyping, in-house expertise and tools, and customer collaboration, NASA has adopted a cost effective paradigm that is currently serving Orion effectively. This paper will explore and contrast differences in technology employed over the years of NASA's space program, due largely to technological advances in hardware and software systems, while acknowledging that the basic software engineering and integration paradigms share many similarities.
Integrating automated support for a software management cycle into the TAME system
NASA Technical Reports Server (NTRS)
Sunazuka, Toshihiko; Basili, Victor R.
1989-01-01
Software managers are interested in the quantitative management of software quality, cost and progress. An integrated software management methodology, which can be applied throughout the software life cycle for any number of purposes, is required. The TAME (Tailoring A Measurement Environment) methodology is based on the improvement paradigm and the goal/question/metric (GQM) paradigm. This methodology helps generate a software engineering process and measurement environment based on the project characteristics. The SQMAR (software quality measurement and assurance technology) is a software quality metric system and methodology applied to the development processes. It is based on the feed forward control principle. Quality target setting is carried out before the plan-do-check-action activities are performed. These methodologies are integrated to realize goal oriented measurement, process control and visual management. A metric setting procedure based on the GQM paradigm, a management system called the software management cycle (SMC), and its application to a case study based on NASA/SEL data are discussed. The expected effects of SMC are quality improvement, managerial cost reduction, accumulation and reuse of experience, and a highly visual management reporting system.
The maturing of the quality improvement paradigm in the SEL
NASA Technical Reports Server (NTRS)
Basili, Victor R.
1993-01-01
The Software Engineering Laboratory uses a paradigm for improving the software process and product, called the quality improvement paradigm. This paradigm has evolved over the past 18 years, along with our software development processes and product. Since 1976, when we first began the SEL, we have learned a great deal about improving the software process and product, making a great many mistakes along the way. The quality improvement paradigm, as it is currently defined, can be broken up into six steps: characterize the current project and its environment with respect to the appropriate models and metrics; set the quantifiable goals for successful project performance and improvement; choose the appropriate process model and supporting methods and tools for this project; execute the processes, construct the products, and collect, validate, and analyze the data to provide real-time feedback for corrective action; analyze the data to evaluate the current practices, determine problems, record findings, and make recommendations for future project improvements; and package the experience gained in the form of updated and refined models and other forms of structured knowledge gained from this and prior projects, and save it in an experience base to be reused on future projects.
The Knowledge-Based Software Assistant: Beyond CASE
NASA Technical Reports Server (NTRS)
Carozzoni, Joseph A.
1993-01-01
This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.
ERIC Educational Resources Information Center
Reyes Alamo, Jose M.
2010-01-01
The Service Oriented Computing (SOC) paradigm defines services as software artifacts whose implementations are separated from their specifications. Application developers rely on services to simplify design and to reduce development time and cost. Within the SOC paradigm, different Service Oriented Architectures (SOAs) have been developed.…
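A minimal sketch of the separation the abstract refers to, a service specification that clients depend on with the implementation supplied separately (the service name and data here are hypothetical):

```python
from abc import ABC, abstractmethod

class GeocodingService(ABC):
    """Specification: what the service promises, not how it is implemented."""
    @abstractmethod
    def locate(self, address: str) -> tuple:   # (latitude, longitude)
        ...

class LocalTableGeocoder(GeocodingService):
    """One interchangeable implementation bound behind the specification."""
    _TABLE = {"1600 Amphitheatre Pkwy": (37.42, -122.08)}
    def locate(self, address: str) -> tuple:
        return self._TABLE.get(address, (0.0, 0.0))

def application(service: GeocodingService):
    # The application is written against the specification only.
    return service.locate("1600 Amphitheatre Pkwy")

print(application(LocalTableGeocoder()))
```

Any other implementation of the same specification can be substituted without changing the application.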
Collaboration Between NASA Centers of Excellence on Autonomous System Software Development
NASA Technical Reports Server (NTRS)
Goodrich, Charles H.; Larson, William E.; Delgado, H. (Technical Monitor)
2001-01-01
Software for space systems flight operations has its roots in the early days of the space program, when computer systems were incapable of supporting highly complex and flexible control logic. Control systems relied on fast data acquisition and supervisory control from a roomful of systems engineers on the ground. Even though computer hardware and software have become many orders of magnitude more capable, space systems have largely adhered to this original paradigm. In an effort to break this mold, Kennedy Space Center (KSC) has invested in the development of model-based diagnosis and control applications for ten years, gaining broad experience in both ground and spacecraft systems and software. KSC has now partnered with Ames Research Center (ARC), NASA's Center of Excellence in Information Technology, to create a new paradigm for the control of dynamic space systems. ARC has developed model-based diagnosis and intelligent planning software that enables spacecraft to handle most routine problems automatically and allocate resources in a flexible way to realize mission objectives. ARC demonstrated the utility of onboard diagnosis and planning with an experiment aboard Deep Space 1 in 1999. This paper highlights the software control system collaboration between KSC and ARC. KSC has developed a Mars In-situ Resource Utilization testbed based on the Reverse Water Gas Shift (RWGS) reaction. This plant, built in KSC's Applied Chemistry Laboratory, is capable of producing the large amounts of oxygen that would be needed to support a human Mars mission. KSC and ARC are cooperating to develop an autonomous, fault-tolerant control system for RWGS to meet the need for autonomy on deep space missions. The paper will also describe how the new system software paradigm will be applied to Vehicle Health Monitoring, tested on the new X vehicles, and integrated into future launch processing systems.
National Cycle Program (NCP) Common Analysis Tool for Aeropropulsion
NASA Technical Reports Server (NTRS)
Follen, G.; Naiman, C.; Evans, A.
1999-01-01
Through the NASA/Industry Cooperative Effort (NICE) agreement, NASA Lewis and industry partners are developing a new engine simulation, called the National Cycle Program (NCP), which is the initial framework of NPSS. NCP is the first phase toward achieving the goal of NPSS. This new software supports the aerothermodynamic system simulation process for the full life cycle of an engine. The National Cycle Program (NCP) was written following the Object Oriented Paradigm (C++, CORBA). The software development process used was also based on the Object Oriented paradigm. Software reviews, configuration management, test plans, requirements, and design were all a part of the process used in developing NCP. Due to the many contributors to NCP, the stated software process was mandatory for building a common tool intended for use by so many organizations. The U.S. aircraft and airframe companies recognize NCP as the future industry standard for propulsion system modeling.
Knowledge-based requirements analysis for automating software development
NASA Technical Reports Server (NTRS)
Markosian, Lawrence Z.
1988-01-01
We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally-stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications and eventually implementations are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention is in the form of providing problem- and domain-specific engineering knowledge, not in writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.
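A toy rendering of the idea using the paper's own buffer example (the actual system used a domain-specific requirements language and transformational synthesis; the dictionary "requirement" and the generator below are invented for illustration):

```python
from collections import deque

# An informal requirement, captured in a machine-understandable form.
requirement = {"artifact": "buffer", "capacity": 8, "overflow_policy": "drop-oldest"}

def synthesize_buffer(req):
    """Derive an implementation from the captured requirement."""
    if req["artifact"] == "buffer" and req["overflow_policy"] == "drop-oldest":
        return deque(maxlen=req["capacity"])      # bounded FIFO, oldest items discarded
    raise NotImplementedError("no transformation rule for this requirement yet")

buf = synthesize_buffer(requirement)
for i in range(10):
    buf.append(i)
print(list(buf))   # [2, 3, 4, 5, 6, 7, 8, 9]: capacity 8, oldest two items dropped
```

Changing the requirement (say, to a larger capacity) regenerates the implementation instead of editing it by hand.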
NASA Astrophysics Data System (ADS)
Kumlander, Deniss
The globalization of companies' operations and the competition between software vendors demand improved quality of delivered software and a decreased overall cost. At the same time, these factors introduce many problems into the software development process, as they produce distributed organizations that break the co-location rule of modern software development methodologies. Here we propose a reformulation of the ambassador position, increasing its productivity in order to bridge the communication and workflow gap by managing the entire communication process rather than concentrating purely on the communication result.
Field Test of Route Planning Software for Lunar Polar Missions
NASA Astrophysics Data System (ADS)
Horchler, A. D.; Cunningham, C.; Jones, H. L.; Arnett, D.; Fang, E.; Amoroso, E.; Otten, N.; Kitchell, F.; Holst, I.; Rock, G.; Whittaker, W.
2017-10-01
A novel field test paradigm has been developed to demonstrate and validate route planning software in the stark low-angled light and sweeping shadows a rover would experience at the poles of the Moon. Software, ConOps, and test results are presented.
Developing ICALL Tools Using GATE
ERIC Educational Resources Information Center
Wood, Peter
2008-01-01
This article discusses the use of the General Architecture for Text Engineering (GATE) as a tool for the development of ICALL and NLP applications. It outlines a paradigm shift in software development, which is mainly influenced by projects such as the Free Software Foundation. It looks at standards that have been proposed to facilitate the…
The Package-Based Development Process in the Flight Dynamics Division
NASA Technical Reports Server (NTRS)
Parra, Amalia; Seaman, Carolyn; Basili, Victor; Kraft, Stephen; Condon, Steven; Burke, Steven; Yakimovich, Daniil
1997-01-01
The Software Engineering Laboratory (SEL) has been operating for more than two decades in the Flight Dynamics Division (FDD) and has adapted to the constant movement of the software development environment. The SEL's Improvement Paradigm shows that process improvement is an iterative process. Understanding, Assessing and Packaging are the three steps that are followed in this cyclical paradigm. As the improvement process cycles back to the first step, after having packaged some experience, the level of understanding will be greater. In the past, products resulting from the packaging step have been large process documents, guidebooks, and training programs. As the technical world moves toward more modularized software, we have made a move toward more modularized software development process documentation; as such, the products of the packaging step are becoming smaller and more frequent. In this manner, the QIP takes on a more spiral approach rather than a waterfall one. This paper describes the state of the FDD in the area of software development processes, as revealed through the understanding and assessing activities conducted by the COTS study team. The insights presented include: (1) a characterization of a typical FDD Commercial Off the Shelf (COTS) intensive software development life-cycle process, (2) lessons learned through the COTS study interviews, and (3) a description of changes in the SEL due to the changing and accelerating nature of software development in the FDD.
Coupling Sensing Hardware with Data Interrogation Software for Structural Health Monitoring
Farrar, Charles R.; Allen, David W.; Park, Gyuhae; ...
2006-01-01
The process of implementing a damage detection strategy for aerospace, civil and mechanical engineering infrastructure is referred to as structural health monitoring (SHM). The authors' approach is to address the SHM problem in the context of a statistical pattern recognition paradigm. In this paradigm, the process can be broken down into four parts: (1) Operational Evaluation, (2) Data Acquisition and Cleansing, (3) Feature Extraction and Data Compression, and (4) Statistical Model Development for Feature Discrimination. These processes must be implemented through hardware or software and, in general, some combination of these two approaches will be used. This paper will discuss each portion of the SHM process with particular emphasis on the coupling of a general purpose data interrogation software package for structural health monitoring with a modular wireless sensing and processing platform. More specifically, this paper will address the need to take an integrated hardware/software approach to developing SHM solutions.
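A skeletal sketch of the four-part paradigm as a software pipeline (the feature, baseline model, and threshold below are generic placeholders, not the authors' algorithms):

```python
import random
import statistics

def acquire_and_cleanse(raw):                  # (2) data acquisition and cleansing
    return [x for x in raw if x is not None]

def extract_features(signal, window=32):       # (3) feature extraction / data compression
    return [statistics.pstdev(signal[i:i + window])
            for i in range(0, len(signal) - window + 1, window)]

def fit_baseline(features):                    # (4) statistical model of the undamaged state
    return statistics.mean(features), statistics.pstdev(features)

def damage_indicator(feature, baseline, k=3.0):
    mean, sigma = baseline
    return abs(feature - mean) > k * sigma     # flag outliers relative to the baseline

# (1) Operational evaluation (what to measure, what damage matters) is a human step, omitted here.
random.seed(0)
healthy = acquire_and_cleanse([0.1 + random.gauss(0, 0.01) for _ in range(256)])
baseline = fit_baseline(extract_features(healthy))
suspect = acquire_and_cleanse([0.1 + random.gauss(0, 0.05) for _ in range(64)])
print(any(damage_indicator(f, baseline) for f in extract_features(suspect)))
```

The hardware/software coupling question in the paper is essentially where in this chain each step runs: on the wireless sensing node or in the data interrogation package.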
A Framework for Teaching Software Development Methods
ERIC Educational Resources Information Center
Dubinsky, Yael; Hazzan, Orit
2005-01-01
This article presents a study that aims at constructing a teaching framework for software development methods in higher education. The research field is a capstone project-based course, offered by the Technion's Department of Computer Science, in which Extreme Programming is introduced. The research paradigm is an Action Research that involves…
NASA Astrophysics Data System (ADS)
Dulo, D. A.
Safety critical software systems permeate spacecraft, and in a long term venture like a starship would be pervasive in every system of the spacecraft. Yet software failure today continues to plague both the systems and the organizations that develop them, resulting in the loss of life, time, money, and valuable system platforms. A starship cannot afford this type of software failure in long journeys away from home. A single software failure could have catastrophic results for the spaceship and the crew onboard. This paper will offer a new approach to developing safe, reliable software systems by focusing not on the traditional safety/reliability engineering paradigms but rather on a new paradigm: Resilience and Failure Obviation Engineering. The foremost objective of this approach is the obviation of failure, coupled with the ability of a software system to prevent failure or to adapt to complex, changing conditions in real time, acting as a safety valve to ensure safe system continuity should failure occur. Through this approach, safety is ensured through foresight to anticipate failure and to adapt to risk in real time before failure occurs. In a starship, this type of software engineering is vital. Through software developed in a resilient manner, a starship would have reduced or eliminated software failure, and would have the ability to rapidly adapt should a software system become unstable or unsafe. As a result, long term software safety, reliability, and resilience would be present for a successful long term starship mission.
1990-05-01
Sanders Associates, Inc. A demonstration of knowledge-based support for the evolutionary development of software system requirements using ... Conference Committee ... Spin-Off Technologies. AN OVERVIEW OF RADC'S KNOWLEDGE BASED SOFTWARE ASSISTANT PROGRAM, Donald M. Elefante, Rome Air ... The Knowledge-Based Software Assistant is a formally based, computer-mediated paradigm for the specification, development, evolution, and long-term...
2010-10-01
facial trustworthiness; facial displays of anger) presented subliminally. Furthermore, the responsiveness of these regions to subliminal stimulation ... develop, or program the computerized stimulation paradigms for use during functional neuroimaging (i.e., MJT; BMAT; EFAT). These paradigms will be ... programming began on the computerized functional MRI stimulation paradigms using E-Prime software. • Quarter #2: Programming of all computerized functional
ERIC Educational Resources Information Center
Barajas-Saavedra, Arturo; Álvarez-Rodriguez, Francisco J.; Mendoza-González, Ricardo; Oviedo-De-Luna, Ana C.
2015-01-01
Development of digital resources is difficult due to their particular complexity, which relies on pedagogical aspects. Another issue is the lack of well-defined development processes, documented experiences, and standard methodologies to guide and organize game development. Added to this, there is no documented technique to ensure correct…
Use of Field Programmable Gate Array Technology in Future Space Avionics
NASA Technical Reports Server (NTRS)
Ferguson, Roscoe C.; Tate, Robert
2005-01-01
Fulfilling NASA's new vision for space exploration requires the development of sustainable, flexible and fault tolerant spacecraft control systems. The traditional development paradigm consists of the purchase or fabrication of hardware boards with fixed processor and/or Digital Signal Processing (DSP) components interconnected via a standardized bus system. This is followed by the purchase and/or development of software. This paradigm has several disadvantages for the development of systems to support NASA's new vision. Building a system to be fault tolerant increases the complexity and decreases the performance of included software. Standard bus design and conventional implementation produces natural bottlenecks. Configuring hardware components in systems containing common processors and DSPs is difficult initially and expensive or impossible to change later. The existence of Hardware Description Languages (HDLs), the recent increase in performance, density and radiation tolerance of Field Programmable Gate Arrays (FPGAs), and Intellectual Property (IP) Cores provide the technology for reprogrammable Systems on a Chip (SOC). This technology supports a paradigm better suited for NASA's vision. Hardware and software production are melded for more effective development; they can both evolve together over time. Designers incorporating this technology into future avionics can benefit from its flexibility. Systems can be designed with improved fault isolation and tolerance using hardware instead of software. Also, these designs can be protected from obsolescence problems where maintenance is compromised via component and vendor availability. To investigate the flexibility of this technology, the core of the Central Processing Unit and Input/Output Processor of the Space Shuttle AP101S Computer were prototyped in Verilog HDL and synthesized into an Altera Stratix FPGA.
A software development and evolution model based on decision-making
NASA Technical Reports Server (NTRS)
Wild, J. Christian; Dong, Jinghuan; Maly, Kurt
1991-01-01
Design is a complex activity whose purpose is to construct an artifact which satisfies a set of constraints and requirements. However, the design process is not well understood. The software design and evolution process is the focus of interest, and a three dimensional software development space organized around a decision-making paradigm is presented. An initial instantiation of this model, called 3DPM(sub p), which was partly implemented, is presented. Discussion of the use of this model in software reuse and process management is given.
The Need for a Cooperative Paradigm to Meet Business' Key Microcomputer Training Requirements.
ERIC Educational Resources Information Center
Hubbard, Gary R.
1985-01-01
The growing awareness and availability of business application software at small business prices and the creation of a unique national computer training consortium has motivated one community college district to promote more non-credit, short-term training opportunities in accounting software. Rationale for and development of these opportunities…
NASA Technical Reports Server (NTRS)
Condon, Steven; Hendrick, Robert; Stark, Michael E.; Steger, Warren
1997-01-01
The Flight Dynamics Division (FDD) of NASA's Goddard Space Flight Center (GSFC) recently embarked on a far-reaching revision of its process for developing and maintaining satellite support software. The new process relies on an object-oriented software development method supported by a domain specific library of generalized components. This Generalized Support Software (GSS) Domain Engineering Process is currently in use at the NASA GSFC Software Engineering Laboratory (SEL). The key facets of the GSS process are (1) an architecture for rapid deployment of FDD applications, (2) a reuse asset library for FDD classes, and (3) a paradigm shift from developing software to configuring software for mission support. This paper describes the GSS architecture and process, results of fielding the first applications, lessons learned, and future directions.
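A minimal sketch of "configuring rather than developing": a mission application assembled from a reuse library driven by a configuration table (the class names and keys are invented for illustration, not the actual GSS asset library):

```python
# Hypothetical reuse library of generalized flight-dynamics components.
class TwoBodyPropagator:
    def __init__(self, step_s): self.step_s = step_s

class BatchLeastSquaresEstimator:
    def __init__(self, max_iterations): self.max_iterations = max_iterations

LIBRARY = {"propagator.two_body": TwoBodyPropagator,
           "estimator.batch_lsq": BatchLeastSquaresEstimator}

# Mission support becomes editing a configuration, not writing new code.
MISSION_CONFIG = {
    "propagator.two_body": {"step_s": 10.0},
    "estimator.batch_lsq": {"max_iterations": 20},
}

def configure(config):
    """Instantiate library components as specified by the mission configuration."""
    return {name: LIBRARY[name](**kwargs) for name, kwargs in config.items()}

application = configure(MISSION_CONFIG)
print(sorted(application))
```

Supporting a new mission then means writing a new MISSION_CONFIG, and only genuinely new functionality goes back into the library.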
Model Based Analysis and Test Generation for Flight Software
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep
2009-01-01
We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.
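The flavor of branch-directed test generation can be sketched with a toy example (a hand-enumerated path table, not the model checking or symbolic execution tools the framework actually leverages; the guidance function is invented):

```python
def guidance_mode(altitude_m, velocity_mps):
    # Toy model-derived code fragment with two branch conditions.
    if altitude_m < 1000:
        return "terminal_descent" if velocity_mps > 50 else "hover"
    return "cruise"

# One representative input per feasible path condition; a symbolic executor
# would derive and solve these constraints automatically.
path_cases = [
    ({"altitude_m": 500,  "velocity_mps": 60}, "terminal_descent"),  # alt < 1000 and v > 50
    ({"altitude_m": 500,  "velocity_mps": 10}, "hover"),             # alt < 1000 and v <= 50
    ({"altitude_m": 2000, "velocity_mps": 60}, "cruise"),            # alt >= 1000
]

for inputs, expected in path_cases:
    assert guidance_mode(**inputs) == expected   # one generated test per covered path
print(f"{len(path_cases)} branch-covering test cases pass")
```

The common intermediate representation mentioned in the abstract is what lets the same kind of exploration be applied to both MathWorks and UML-derived models.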
NASA Astrophysics Data System (ADS)
Georgiev, Bozhidar; Georgieva, Adriana
2013-12-01
In this paper, some possibilities concerning the implementation of test-driven development as a programming method are presented. A different point of view is offered for the creation of advanced programming techniques (building tests before programming the source, with all necessary software tools and modules respectively). This nontraditional approach of building tests first to ease the programmer's work is therefore a preferable way of software development. The approach allows comparatively simple programming (applied with different object-oriented programming languages such as JAVA, XML, PYTHON, etc.). It is a predictable way to develop software tools and to help create better software that is also easier to maintain. Test-driven programming is able to replace more complicated casual paradigms used by many programmers.
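A small, hypothetical illustration of the test-first workflow (shown here with Python's unittest rather than the JAVA/XML/PYTHON tooling named above): the test is written before the function it exercises, and the implementation is then written just to make it pass.

```python
import unittest

# Step 1: the test exists first and initially fails, because to_roman is not yet written.
class RomanTest(unittest.TestCase):
    def test_single_symbols(self):
        self.assertEqual(to_roman(1), "I")
        self.assertEqual(to_roman(10), "X")

    def test_subtractive_forms(self):
        self.assertEqual(to_roman(4), "IV")
        self.assertEqual(to_roman(9), "IX")

# Step 2: the simplest implementation that makes the tests pass.
def to_roman(n):
    symbols = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = ""
    for value, glyph in symbols:
        while n >= value:
            out += glyph
            n -= value
    return out

if __name__ == "__main__":
    unittest.main()
```

Each new requirement repeats the cycle: add a failing test, make it pass, then refactor with the tests as a safety net.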
Object-oriented design of medical imaging software.
Ligier, Y; Ratib, O; Logean, M; Girard, C; Perrier, R; Scherrer, J R
1994-01-01
A special software package for interactive display and manipulation of medical images was developed at the University Hospital of Geneva, as part of a hospital wide Picture Archiving and Communication System (PACS). This software package, called Osiris, was especially designed to be easily usable and adaptable to the needs of noncomputer-oriented physicians. The Osiris software has been developed to allow the visualization of medical images obtained from any imaging modality. It provides generic manipulation tools, processing tools, and analysis tools more specific to clinical applications. This software, based on an object-oriented paradigm, is portable and extensible. Osiris is available on two different operating systems: the Unix X-11/OSF-Motif based workstations, and the Macintosh family.
A Probabilistic Software System Attribute Acceptance Paradigm for COTS Software Evaluation
NASA Technical Reports Server (NTRS)
Morris, A. Terry
2005-01-01
Standard software requirement formats are written from top-down perspectives only, that is, from an ideal notion of a client's needs. Despite the exactness of the standard format, software and system errors in designed systems have abounded. Bad and inadequate requirements have resulted in cost overruns, schedule slips and lost profitability. Commercial off-the-shelf (COTS) software components are even more troublesome than designed systems because they are often provided as is and subsequently delivered with unsubstantiated validation of described capabilities. For COTS software, there needs to be a way to express the client's software needs in a consistent and formal manner using software system attributes derived from software quality standards. Additionally, the format needs to be amenable to software evaluation processes that integrate observable evidence garnered from historical data. This paper presents a paradigm that effectively bridges the gap between what a client desires (top-down) and what has been demonstrated (bottom-up) for COTS software evaluation. The paradigm addresses the specification of needs before the software evaluation is performed and can be used to increase the shared understanding between clients and software evaluators about what is required and what is technically possible.
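One way to picture the bridge between a stated need and demonstrated evidence (a simplified frequency-count sketch, not the paper's acceptance mathematics; the attribute, figures, and thresholds are invented):

```python
def acceptance_probability(observed_values, required_threshold):
    """Fraction of historical observations of a software attribute that met the need."""
    met = sum(1 for v in observed_values if v >= required_threshold)
    return met / len(observed_values)

# Top-down: the client requires availability >= 0.999, supported in at least 90% of evidence.
required_threshold, required_confidence = 0.999, 0.90

# Bottom-up: hypothetical monthly availability figures reported for a COTS component.
history = [0.9995, 0.9991, 0.9987, 0.9996, 0.9993, 0.9999, 0.9992, 0.9990]

p = acceptance_probability(history, required_threshold)
print(f"evidence supports the need in {p:.0%} of observations ->",
      "accept" if p >= required_confidence else "reject or investigate further")
```

The point is that the requirement and the evidence are expressed over the same attribute, so acceptance becomes a check rather than a judgment call.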
Report on a Knowledge-Based Software Assistant.
1983-08-01
maintainers, project managers, and end-users). In this paradigm, software activities, including definition, management, and validation will be... project management. This report also presents a plan for the development of the KBSA, along with a description of the necessary supporting technology... Contents include: Activity Coordination; 3.2 Project Management and Documentation; 3.2.1 Project Management Facet.
NASA Astrophysics Data System (ADS)
Melton, R.; Thomas, J.
With the rapid growth in the number of space actors, there has been a marked increase in the complexity and diversity of software systems utilized to support SSA target tracking, indication, warning, and collision avoidance. Historically, most SSA software has been constructed with "closed" proprietary code, which limits interoperability, inhibits the code transparency that some SSA customers need to develop domain expertise, and prevents the rapid injection of innovative concepts into these systems. Open-source aerospace software, a rapidly emerging, alternative trend in code development, is based on open collaboration, which has the potential to bring greater transparency, interoperability, flexibility, and reduced development costs. Open-source software is easily adaptable, geared to rapidly changing mission needs, and can generally be delivered at lower costs to meet mission requirements. This paper outlines Ball's COSMOS C2 system, a fully open-source, web-enabled, command-and-control software architecture which provides several unique capabilities to move the current legacy SSA software paradigm to an open source model that effectively enables pre- and post-launch asset command and control. Among the unique characteristics of COSMOS is the ease with which it can integrate with diverse hardware. This characteristic enables COSMOS to serve as the command-and-control platform for the full life-cycle development of SSA assets, from board test, to box test, to system integration and test, to on-orbit operations. The use of a modern scripting language, Ruby, also permits automated procedures to provide highly complex decision making for the tasking of SSA assets based on both telemetry data and data received from outside sources. Detailed logging enables quick anomaly detection and resolution. Integrated real-time and offline data graphing renders the visualization of both ground and on-orbit assets simple and straightforward.
Software Assurance Challenges for the Commercial Crew Program
NASA Technical Reports Server (NTRS)
Cuyno, Patrick; Malnick, Kathy D.; Schaeffer, Chad E.
2015-01-01
This paper will provide a description of some of the challenges NASA is facing in providing software assurance within the new commercial space services paradigm, namely with the Commercial Crew Program (CCP). The CCP will establish safe, reliable, and affordable access to the International Space Station (ISS) by purchasing a ride from commercial companies. The CCP providers have varying experience with software development in safety-critical space systems. NASA's role in providing effective software assurance support to the CCP providers is critical to the success of CCP. These challenges include funding multiple vehicles that execute in parallel and have different rules of engagement, multiple providers with unique proprietary concerns, providing equivalent guidance to all providers, permitting alternates to NASA standards, and a large number of diverse stakeholders. It is expected that these challenges will exist in future programs, especially if the CCP paradigm proves successful. The proposed CCP approach to address these challenges includes a risk-based assessment with varying degrees of engagement and a distributed assurance model. This presentation will describe NASA IV&V Program's software assurance support and responses to these challenges.
SoftLab: A Soft-Computing Software for Experimental Research with Commercialization Aspects
NASA Technical Reports Server (NTRS)
Akbarzadeh-T, M.-R.; Shaikh, T. S.; Ren, J.; Hubbell, Rob; Kumbla, K. K.; Jamshidi, M
1998-01-01
SoftLab is a software environment for research and development in intelligent modeling/control using soft-computing paradigms such as fuzzy logic, neural networks, genetic algorithms, and genetic programs. SoftLab addresses the inadequacies of the existing soft-computing software by supporting comprehensive multidisciplinary functionalities from management tools to engineering systems. Furthermore, the built-in features help the user process/analyze information more efficiently by a friendly yet powerful interface, and will allow the user to specify user-specific processing modules, hence adding to the standard configuration of the software environment.
ERIC Educational Resources Information Center
Nakata, Tatsuya
2011-01-01
The present study aims to conduct a comprehensive investigation of flashcard software for learning vocabulary in a second language. Nine flashcard programs were analysed using 17 criteria derived from previous studies on flashcard learning as well as paired-associate learning. Results suggest that in general, most programs have been developed in a…
Formal verification of AI software
NASA Technical Reports Server (NTRS)
Rushby, John; Whitehurst, R. Alan
1989-01-01
The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.
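A toy example of treating rule-base consistency as constraint satisfaction (a brute-force search over truth assignments; the paper's formal definition of consistency and its approximate semantics are more general, and the rules below are invented):

```python
from itertools import product

# Propositions used by a tiny expert system.
VARS = ("pressure_high", "valve_open", "alarm")

# Each rule is encoded as a predicate over an assignment; the rule base is
# consistent iff some assignment satisfies every rule simultaneously.
RULES = [
    lambda a: (not a["pressure_high"]) or a["alarm"],        # pressure_high -> alarm
    lambda a: (not a["alarm"]) or a["valve_open"],           # alarm -> valve_open
    lambda a: not (a["pressure_high"] and a["valve_open"]),  # never both at once
]

def consistent(rules):
    """Return (True, model) if some assignment satisfies all rules, else (False, None)."""
    for values in product([False, True], repeat=len(VARS)):
        assignment = dict(zip(VARS, values))
        if all(rule(assignment) for rule in rules):
            return True, assignment
    return False, None

print(consistent(RULES))   # a satisfying model exists, e.g. everything False
```

Adding a rule that forces pressure_high to be true would make the search fail, exposing the inconsistency before the expert system is fielded.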
A Paradigm Shift in Nuclear Spectrum Analysis
NASA Astrophysics Data System (ADS)
Westmeier, Wolfram; Siemon, Klaus
2012-08-01
An overview of the latest developments in quantitative spectrometry software is presented. The new strategies and algorithms introduced are characterized by the buzzwords “Physics, no numerology”, “Fuzzy logic” and “Repeated analyses”. With the implementation of these new ideas one arrives at software capabilities that were unreachable before and which are now realized in the GAMMA-W, SODIGAM and ALPS packages.
Architecture of a framework for providing information services for public transport.
García, Carmelo R; Pérez, Ricardo; Lorenzo, Alvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino
2012-01-01
This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained.
Kedalion: NASA's Adaptable and Agile Hardware/Software Integration and Test Lab
NASA Technical Reports Server (NTRS)
Mangieri, Mark L.; Vice, Jason
2011-01-01
NASA's Kedalion engineering analysis lab at Johnson Space Center is on the forefront of validating and using many contemporary avionics hardware/software development and integration techniques, which represent new paradigms to heritage NASA culture. Kedalion has validated many of the Orion hardware/software engineering techniques borrowed from the adjacent commercial aircraft avionics solution space, with the intention to build upon such techniques to better align with today's aerospace market. Using agile techniques, commercial products, early rapid prototyping, in-house expertise and tools, and customer collaboration, Kedalion has demonstrated that cost effective contemporary paradigms hold the promise to serve future NASA endeavors within a diverse range of system domains. Kedalion provides a readily adaptable solution for medium/large scale integration projects. The Kedalion lab is currently serving as an in-line resource for the project and the Multipurpose Crew Vehicle (MPCV) program.
Improving automation standards via semantic modelling: Application to ISA88.
Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès
2017-03-01
Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software that rely on the efficient modelling of the addressed systems. The work presented here is part of the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and support the improvement of technical document consistency. The formalization of conceptual models and the subsequent writing of technical standards are simultaneously analyzed, and guidelines proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects that are worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency checking method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
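A schematic example of the kind of machine-checkable model involved (the concept names follow ISA88's procedural control hierarchy, but the tiny ontology encoding and its consistency rule are invented for illustration):

```python
# Part of ISA88's procedural hierarchy captured as subclass-of and part-of assertions.
ONTOLOGY = {
    "subclass_of": {"UnitProcedure": "ProceduralElement",
                    "Operation": "ProceduralElement",
                    "Phase": "ProceduralElement"},
    "part_of": {"UnitProcedure": "Procedure",
                "Operation": "UnitProcedure",
                "Phase": "Operation"},
}

def check_acyclic(part_of):
    """One simple consistency rule: the part-of hierarchy must not contain cycles."""
    for start in part_of:
        seen, node = set(), start
        while node in part_of:
            if node in seen:
                return False, start     # a cycle would indicate an inconsistent document
            seen.add(node)
            node = part_of[node]
    return True, None

print(check_acyclic(ONTOLOGY["part_of"]))   # (True, None): this fragment is consistent
```

Once the standard's text is mapped into such assertions, consistency checking becomes a mechanical pass over the model rather than a manual reading exercise.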
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1991-01-01
The ongoing debate over the role of formalism and formal specifications in software features many speakers with diverse positions. Yet, in the end, they share the conviction that the requirements of a software system can be unambiguously specified, that acceptable software is a product demonstrably meeting the specifications, and that the design process can be carried out with little interaction between designers and users once the specification has been agreed to. This conviction is part of a larger paradigm prevalent in American management thinking, which holds that organizations are systems that can be precisely specified and optimized. This paradigm, which traces historically to the works of Frederick Taylor in the early 1900s, is no longer sufficient for organizations and software systems today. In the domain of software, a new paradigm, called user-centered design, overcomes the limitations of pure formalism. Pioneered in Scandinavia, user-centered design is spreading through Europe and is beginning to make its way into the U.S.
Enhancing instruction scheduling with a block-structured ISA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melvin, S.; Patt, Y.
It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.
Supporting metabolomics with adaptable software: design architectures for the end-user.
Sarpe, Vladimir; Schriemer, David C
2017-02-01
Large and disparate sets of LC-MS data are generated by modern metabolomics profiling initiatives, and while useful software tools are available to annotate and quantify compounds, the field requires continued software development in order to sustain methodological innovation. Advances in software development practices allow for a new paradigm in tool development for metabolomics, where increasingly the end-user can develop or redeploy utilities ranging from simple algorithms to complex workflows. Resources that provide an organized framework for development are described and illustrated with LC-MS processing packages that have leveraged their design tools. Full access to these resources depends in part on coding experience, but the emergence of workflow builders and pluggable frameworks strongly reduces the skill level required. Developers in the metabolomics community are encouraged to use these resources and design content for uptake and reuse. Copyright © 2016 Elsevier Ltd. All rights reserved.
Advances in knowledge-based software engineering
NASA Technical Reports Server (NTRS)
Truszkowski, Walt
1991-01-01
The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.
Architecture of a Framework for Providing Information Services for Public Transport
García, Carmelo R.; Pérez, Ricardo; Lorenzo, Álvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino
2012-01-01
This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the reliable software principles used in critical contexts, such as automotive systems, are also considered by the framework. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained. PMID:22778585
The Personal Software Process: Downscaling the factory
NASA Technical Reports Server (NTRS)
Roy, Daniel M.
1994-01-01
It is argued that the next wave of software process improvement (SPI) activities will be based on a people-centered paradigm. The most promising such paradigm, Watts Humphrey's personal software process (PSP), is summarized and its advantages are listed. The concepts of the PSP are shown also to fit a down-scaled version of Basili's experience factory. The author's data and lessons learned while practicing the PSP are presented along with personal experience, observations, and advice from the perspective of a consultant and teacher for the personal software process.
The TAME Project: Towards improvement-oriented software environments
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Rombach, H. Dieter
1988-01-01
Experience from a dozen years of analyzing software engineering processes and products is summarized as a set of software engineering and measurement principles that argue for software engineering process models that integrate sound planning and analysis into the construction process. In the TAME (Tailoring A Measurement Environment) project at the University of Maryland, such an improvement-oriented software engineering process model was developed that uses the goal/question/metric paradigm to integrate the constructive and analytic aspects of software development. The model provides a mechanism for formalizing the characterization and planning tasks, controlling and improving projects based on quantitative analysis, learning in a deeper and more systematic way about the software process and product, and feeding the appropriate experience back into the current and future projects. The TAME system is an instantiation of the TAME software engineering process model as an ISEE (integrated software engineering environment). The first in a series of TAME system prototypes has been developed. An assessment of experience with this first limited prototype is presented including a reassessment of its initial architecture.
S-Cube: Enabling the Next Generation of Software Services
NASA Astrophysics Data System (ADS)
Metzger, Andreas; Pohl, Klaus
The Service Oriented Architecture (SOA) paradigm is increasingly adopted by industry for building distributed software systems. However, when designing, developing and operating innovative software services and service-based systems, several challenges exist. Those challenges include how to manage the complexity of those systems, how to establish, monitor and enforce Quality of Service (QoS) and Service Level Agreements (SLAs), as well as how to build those systems such that they can proactively adapt to dynamically changing requirements and context conditions. Developing foundational solutions for those challenges requires joint efforts of different research communities such as Business Process Management, Grid Computing, Service Oriented Computing and Software Engineering. This paper provides an overview of S-Cube, the European Network of Excellence on Software Services and Systems. S-Cube brings together researchers from leading research institutions across Europe, who join their competences to develop foundations, theories as well as methods and tools for future service-based systems.
ACS from development to operations
NASA Astrophysics Data System (ADS)
Caproni, Alessandro; Colomer, Pau; Jeram, Bogdan; Sommer, Heiko; Chiozzi, Gianluca; Mañas, Miguel M.
2016-08-01
The ALMA Common Software (ACS) provides the infrastructure of the distributed software system of ALMA and other projects. ACS, built on top of CORBA and Data Distribution Service (DDS) middleware, is based on a Component-Container paradigm and hides the complexity of the middleware, allowing the developer to focus on domain specific issues. The transition of the ALMA observatory from construction to operations brings with it a shift of ACS effort toward scalability, stability and robustness rather than new features. The transition came together with a shorter release cycle and more extensive testing. For scalability, the most problematic area has been the CORBA notification service, used to implement the publisher-subscriber pattern; because of the asynchronous nature of the paradigm, a lot of effort has been spent to improve its stability and recovery from run time errors. The original bulk data mechanism, implemented using the CORBA Audio/Video Streaming Service, showed its limitations and has been replaced with a more performant and scalable DDS implementation. Operational needs soon showed the difference between release cycles for online software (i.e. used during observations) and offline software, which requires much more frequent releases. This paper attempts to describe the impact the transition from construction to operations had on ACS, the solutions adopted so far, and a look into future evolution.
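The publisher-subscriber interaction at the heart of the notification-service discussion, in miniature (a synchronous, in-process toy in Python, not the CORBA/DDS machinery ACS actually wraps; the channel and event contents are invented):

```python
class Channel:
    """Toy event channel: publishers push events, subscribers receive them via callbacks."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        for callback in self._subscribers:
            try:
                callback(event)                 # a real notification service delivers asynchronously
            except Exception as err:            # a broken consumer must not take down the channel
                print("subscriber error suppressed:", err)

temperature_channel = Channel()
temperature_channel.subscribe(lambda e: print("archiver stored", e))
temperature_channel.subscribe(lambda e: print("GUI updated with", e))
temperature_channel.publish({"antenna": "DA41", "temp_C": 3.2})
```

The operational pain points described above (slow consumers, reconnection after run time errors, scaling to many channels) are exactly the parts this toy glosses over and the middleware has to get right.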
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... Environmental Services, Inc., Dupont Direct Financial Holdings, Inc., New Paradigm Software Corp. (n/k/a Brunton... concerning the securities of Commodore Environmental Services, Inc. because it has not filed any periodic... accurate information concerning the securities of New Paradigm Software Corp. (n/k/a Brunton Vineyards...
EMMA: a new paradigm in configurable software
Nogiec, J. M.; Trombly-Freytag, K.
2017-11-23
EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. As a result, it provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.
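A compact sketch of the composition style described: independent components that know nothing about each other, wired together only through events (the bus, component names, and topics are invented for illustration, not EMMA's actual API):

```python
class EventBus:
    def __init__(self): self._handlers = {}
    def on(self, topic, handler): self._handlers.setdefault(topic, []).append(handler)
    def emit(self, topic, payload):
        for handler in self._handlers.get(topic, []):
            handler(payload)

# Independent components: each only emits or consumes events on the shared bus.
def reader(bus):
    bus.emit("measurement", {"probe": "hall-1", "field_T": 1.02})

def converter(bus):
    bus.on("measurement", lambda m: bus.emit("report", f"{m['probe']}: {m['field_T']} T"))

def logger(bus):
    bus.on("report", print)

# "Configuration" is just choosing which components to wire onto the bus.
bus = EventBus()
for component in (converter, logger):    # swap, add, or remove components freely
    component(bus)
reader(bus)
```

Because components share only event topics, a measurement system can be reconfigured by editing this wiring list rather than by changing any component's code.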
EMMA: A New Paradigm in Configurable Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nogiec, J. M.; Trombly-Freytag, K.
EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. It provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.
EMMA: a new paradigm in configurable software
NASA Astrophysics Data System (ADS)
Nogiec, J. M.; Trombly-Freytag, K.
2017-10-01
EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. It provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.
Development of a case tool to support decision based software development
NASA Technical Reports Server (NTRS)
Wild, Christian J.
1993-01-01
A summary of the accomplishments of the research over the past year is presented. Achievements include: demonstrated DHC, a prototype supporting the decision based software development (DBSD) methodology, for Paramax personnel at ODU; met with Paramax personnel to discuss DBSD issues, the process of integrating DBSD and Refinery, and the porting process model; completed and submitted a paper describing the DBSD paradigm to IFIP '92; completed and presented a paper describing the approach for software reuse at the Software Reuse Workshop in April 1993; continued to extend DHC with a project agenda, a facility necessary for better project management; completed a primary draft of the re-engineering process model for porting; created a logging form to trace all the activities involved in solving the re-engineering problem; and developed a primary chart of the problems involved in the re-engineering process.
The Experience Factory: Strategy and Practice
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Caldiera, Gianluigi
1995-01-01
The quality movement, which in recent years has had a dramatic impact on all industrial sectors, has now reached the systems and software industry. Although some concepts of quality management originally developed for other product types can be applied to software, its specificity as a product that is developed rather than produced requires a special approach. This paper introduces a quality paradigm specifically tailored to the problems of the systems and software industry. Reuse of products, processes and experiences originating from the system life cycle is seen today as a feasible solution to the problem of developing higher quality systems at a lower cost. In fact, quality improvement is very often achieved by defining and developing an appropriate set of strategic capabilities and the core competencies to support them. A strategic capability is, in this context, a corporate goal defined by the business position of the organization and implemented by key business processes. Strategic capabilities are supported by core competencies, which are aggregate technologies tailored to the specific needs of the organization in performing the needed business processes. Core competencies are non-transitional, have a consistent evolution, and are typically fueled by multiple technologies. Their selection and development require commitment, investment and leadership. The paradigm introduced in this paper for developing core competencies is the Quality Improvement Paradigm, which consists of six steps: (1) characterize the environment, (2) set the goals, (3) choose the process, (4) execute the process, (5) analyze the process data, and (6) package experience. The process must be supported by a goal-oriented approach to measurement and control, and by an organizational infrastructure called the Experience Factory. The Experience Factory is a logical and physical organization distinct from the project organizations it supports. Its goal is the development and support of core competencies through capitalization and reuse of its life cycle experience and products. The paper introduces the major concepts of the proposed approach, discusses their relationship with other approaches used in the industry, and presents a case in which those concepts have been successfully applied.
A Novel Method for Mining SaaS Software Tag via Community Detection in Software Services Network
NASA Astrophysics Data System (ADS)
Qin, Li; Li, Bing; Pan, Wei-Feng; Peng, Tao
The number of online software services based on the SaaS paradigm is increasing. However, users usually find it hard to get exactly the software services they need. At present, tags are widely used to annotate specific software services and to facilitate searching for them. Currently these tags are arbitrary and ambiguous, since most of them are generated manually by service developers. This paper proposes a method for mining tags from the help documents of software services. By extracting terms from the help documents and calculating the similarity between the terms, we construct a software similarity network where nodes represent software services, edges denote the similarity relationships between software services, and the weights of the edges are the similarity degrees. A hierarchical clustering algorithm is used for community detection in this software similarity network. At the final stage, tags are mined for each of the communities and stored as an ontology.
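The paper's own implementation is not shown in the abstract; the sketch below only illustrates the general pipeline it describes (terms from help documents, a term-similarity network, hierarchical clustering as community detection, and candidate tags per community). The toy help documents, the TF-IDF/cosine-similarity choice, and the cluster count are assumptions made for illustration.

```python
# Hedged sketch of the pipeline described above; inputs and parameters are illustrative only.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
import numpy as np

help_docs = {                                      # hypothetical service help documents
    "svcA": "store and query tabular data in the cloud database",
    "svcB": "cloud database service with SQL query support",
    "svcC": "send email notifications and alert messages",
}
names = list(help_docs)

tfidf = TfidfVectorizer(stop_words="english").fit_transform(help_docs.values())
sim = cosine_similarity(tfidf)                     # weighted "similarity network" as an adjacency matrix

dist = 1.0 - sim                                   # hierarchical clustering needs distances, not similarities
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=2, criterion="maxclust")       # two "communities", chosen arbitrarily here

for community in sorted(set(labels)):
    members = [n for n, lab in zip(names, labels) if lab == community]
    words = [w for w in " ".join(help_docs[n] for n in members).split() if len(w) > 3]
    tags = [w for w, _ in Counter(words).most_common(3)]
    print(f"community {community}: services={members} candidate tags={tags}")
```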
Open Source software and social networks: disruptive alternatives for medical imaging.
Ratib, Osman; Rosset, Antoine; Heuberger, Joris
2011-05-01
In recent decades several major changes in computer and communication technology have pushed the limits of imaging informatics and PACS beyond the traditional system architecture, providing new perspectives and an innovative approach to a traditionally conservative medical community. Disruptive technologies such as the world-wide web, wireless networking, Open Source software and the recent emergence of cyber communities and social networks have imposed an accelerated pace and major quantum leaps in the progress of the computer and technology infrastructure applicable to medical imaging applications. This paper reviews the impact and potential benefits of two major trends in consumer market software development and how they will influence the future of medical imaging informatics. Open Source software is emerging as an attractive and cost effective alternative to traditional commercial software development, and collaborative social networks provide a new model of communication that is better suited to the needs of the medical community. Evidence shows that successful Open Source software tools have penetrated the medical market and have proven to be more robust and cost effective than their commercial counterparts. Written by developers who are themselves part of the user community, and developed and tested by a large number of contributing users, these tools are usually better adapted to users' needs and more robust than traditional software programs. This context allows a much faster and more appropriate development and evolution of the software platforms. Similarly, communication technology has opened up to the general public in a way that has changed social behavior and habits, adding a new dimension to the way people communicate and interact with each other. The new paradigms have also slowly penetrated the professional market and ultimately the medical community. Secure social networks that allow groups of people to easily communicate and exchange information are a new model particularly suitable for specific groups of healthcare professionals and for physicians. They have also changed the expectations of how patients wish to communicate with their physicians. Emerging disruptive technologies and innovative paradigms such as Open Source software are leading the way to a new generation of information systems that will slowly change the way physicians, healthcare providers and patients interact and communicate in the future. The impact of these new technologies is particularly evident in image communication, PACS and teleradiology. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Software Reviews Since Acquisition Reform - The Artifact Perspective
2004-01-01
Truncated excerpt (slide fragments): Risk Management, OLD vs. NEW (Slide 13, Acquisition of Software Intensive Systems 2004 – Peter Hantos); single, basic software paradigm; single processor; low...software risk mitigation related trade-offs must be done together; Integral Software Engineering Activities; Process Maturity and Quality Frameworks; Quality
Open Source Paradigm: A Synopsis of The Cathedral and the Bazaar for Health and Social Care.
Benson, Tim
2016-07-04
Open source software (OSS) is becoming more fashionable in health and social care, although the ideas are not new. However progress has been slower than many had expected. The purpose is to summarise the Free/Libre Open Source Software (FLOSS) paradigm in terms of what it is, how it impacts users and software engineers and how it can work as a business model in health and social care sectors. Much of this paper is a synopsis of Eric Raymond's seminal book The Cathedral and the Bazaar, which was the first comprehensive description of the open source ecosystem, set out in three long essays. Direct quotes from the book are used liberally, without reference to specific passages. The first part contrasts open and closed source approaches to software development and support. The second part describes the culture and practices of the open source movement. The third part considers business models. A key benefit of open source is that users can access and collaborate on improving the software if they wish. Closed source code may be regarded as a strategic business risk that that may be unacceptable if there is an open source alternative. The sharing culture of the open source movement fits well with that of health and social care.
1992-12-01
Truncated excerpt (table-of-contents and body fragments): ...(OOD) Paradigm ... 2-7; 2.4.3 Feature-Oriented Domain Analysis (FODA) ... 2-7; 2.4.4 Hierarchical Software Systems ... 2-7. ...Domain analysis (FODA) is one approach to domain analysis whose primary goal is to make domain products reusable (20:47). A domain model describes ... 2-5 ... 7), among others. 2.4.3 Feature-Oriented Domain Analysis (FODA): Kang and others used the complete FODA methodology to successfully develop a window...
Towards Cross-Organizational Innovative Business Process Interoperability Services
NASA Astrophysics Data System (ADS)
Karacan, Ömer; Del Grosso, Enrico; Carrez, Cyril; Taglino, Francesco
This paper presents the vision and initial results of the COIN (FP7-IST-216256) European project for the development of open source Collaborative Business Process Interoperability (CBPip) in cross-organisational business collaboration environments following the Software-as-a-Service Utility (SaaS-U) paradigm.
ERIC Educational Resources Information Center
Rutherford, Teomara; Kibrick, Melissa; Burchinal, Margaret; Richland, Lindsey; Conley, AnneMarie; Osborne, Keara; Schneider, Stephanie; Duran, Lauren; Coulson, Andrew; Antenore, Fran; Daniels, Abby; Martinez, Michael E.
2010-01-01
This paper describes the background, methodology, preliminary findings, and anticipated future directions of a large-scale multi-year randomized field experiment addressing the efficacy of ST Math [Spatial-Temporal Math], a fully-developed math curriculum that uses interactive animated software. ST Math's unique approach minimizes the use of…
Library Development Handbook. Central Archive for Reusable Defense Software (CARDS)
1993-10-29
features. This feature benefits the individual not versed in the terminology of the domain. When class requirements become part of the domain criteria, they... franchisee - Group to whom a franchise is granted. generic architecture - A collection of high-level paradigms and constraints that characterize the
STOP-IT: Windows executable software for the stop-signal paradigm.
Verbruggen, Frederick; Logan, Gordon D; Stevens, Michaël A
2008-05-01
The stop-signal paradigm is a useful tool for the investigation of response inhibition. In this paradigm, subjects are instructed to respond as fast as possible to a stimulus unless a stop signal is presented after a variable delay. However, programming the stop-signal task is typically considered to be difficult. To overcome this issue, we present software called STOP-IT for running the stop-signal task, as well as an additional analysis program called ANALYZE-IT. The main advantage of both programs is that they are precompiled executables, so for basic use there is no need for additional programming. STOP-IT and ANALYZE-IT are completely based on free software, are distributed under the GNU General Public License, and are available at the personal Web sites of the first two authors or at expsy.ugent.be/tscope/stop.html.
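STOP-IT is distributed as a precompiled executable, so its code is not reproduced here; the snippet below is only a hedged sketch of the core stop-signal logic described above, including the 1-up/1-down adjustment of the stop-signal delay (SSD) that such tasks commonly use. All timing values and the simulated reaction times are invented.

```python
import random

ssd = 250            # initial stop-signal delay in ms (illustrative)
step = 50            # staircase step size in ms (illustrative)

def run_trial(is_stop_trial: bool, ssd_ms: int) -> bool:
    """Simulate one trial; returns True if the subject responded."""
    go_rt = random.gauss(450, 80)                   # simulated go reaction time in ms
    if not is_stop_trial:
        return True
    # Horse-race logic: the subject responds if the go process finishes
    # before the stop process (stop signal at ssd_ms plus a stopping latency).
    return go_rt < ssd_ms + random.gauss(200, 30)

for trial in range(20):
    stop_trial = random.random() < 0.25             # ~25% stop trials, as is typical
    responded = run_trial(stop_trial, ssd)
    if stop_trial:
        # 1-up/1-down tracking: harder after a successful stop, easier after a failed stop.
        ssd = ssd + step if not responded else max(0, ssd - step)
    print(f"trial {trial:2d} stop={stop_trial} responded={responded} next SSD={ssd} ms")
```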
Eleven quick tips for architecting biomedical informatics workflows with cloud computing.
Cole, Brian S; Moore, Jason H
2018-03-01
Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.
Eleven quick tips for architecting biomedical informatics workflows with cloud computing
Moore, Jason H.
2018-01-01
Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world’s largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction. PMID:29596416
A Software Engineering Paradigm for Quick-turnaround Earth Science Data Projects
NASA Astrophysics Data System (ADS)
Moore, K.
2016-12-01
As is generally the case with applied science professional and educational programs, participants can come from a variety of technical backgrounds. In the NASA DEVELOP National Program, the participants constitute an interdisciplinary set of backgrounds, with varying levels of experience in computer programming. DEVELOP makes use of geographically explicit data sets, and it is necessary to use geographic information systems and geospatial image processing environments. As data sets cover longer time spans and include more complex sets of parameters, automation is becoming an increasingly prevalent feature. Though platforms such as ArcGIS, ERDAS Imagine, and ENVI facilitate the batch processing of geospatial imagery, these environments naturally constrict the user by limiting him or her to the tools that are available. Users must then turn to "homemade" scripting in more traditional programming languages such as Python, JavaScript, or R to automate workflows. However, in the context of quick-turnaround projects like those in DEVELOP, the programming learning curve may be prohibitively steep. In this work, we consider how best to design a software development paradigm that addresses two major constants: an arbitrarily experienced programmer and quick-turnaround project timelines.
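As a small illustration of the kind of "homemade" batch scripting the abstract refers to (not DEVELOP's actual code), the sketch below loops a placeholder per-scene processing step over a directory of imagery; the directory layout, file pattern, and process_scene function are hypothetical.

```python
import glob
import os

def process_scene(path: str) -> str:
    """Placeholder for a per-scene step (clip, reproject, compute an index, etc.)."""
    out_path = os.path.splitext(path)[0] + "_processed.tif"
    # Real work would go here, e.g. via GDAL, rasterio, or an ArcPy/ENVI batch call.
    return out_path

# Hypothetical input directory and pattern; adjust to the actual archive layout.
for scene in sorted(glob.glob("data/landsat/*.tif")):
    result = process_scene(scene)
    print(f"{scene} -> {result}")
```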
ERIC Educational Resources Information Center
Mitchell, Susan Marie
2012-01-01
Uncontrollable costs, schedule overruns, and poor end-product quality continue to plague the software engineering field. Innovations formulated with the expectation of minimizing or eliminating cost, schedule, and quality problems have generally fallen into one of three categories: programming paradigms, software tools, and software process…
Agile IT: Thinking in User-Centric Models
NASA Astrophysics Data System (ADS)
Margaria, Tiziana; Steffen, Bernhard
We advocate a new teaching direction for modern CS curricula: extreme model-driven development (XMDD), a new development paradigm designed to continuously involve the customer/application expert throughout the whole system's life cycle. Based on the 'One-Thing Approach', which works by successively enriching and refining one single artifact, system development becomes in essence a user-centric orchestration of intuitive service functionality. XMDD differs radically from classical software development, which, in our opinion, is no longer adequate for the bulk of application programming - in particular when it comes to heterogeneous, cross-organizational systems which must adapt to rapidly changing market requirements. Thus there is a need for new curricula addressing this model-driven, lightweight, and cooperative development paradigm that puts the user process at the center of the development and the application expert in control of the process evolution.
NASA Astrophysics Data System (ADS)
Ames, D.; Kadlec, J.; Horsburgh, J. S.; Maidment, D. R.
2009-12-01
The Consortium of Universities for the Advancement of Hydrologic Sciences (CUAHSI) Hydrologic Information System (HIS) project includes extensive development of data storage and delivery tools and standards, including WaterML (a language for sharing hydrologic data sets via web services) and HIS Server (a software tool set for delivering WaterML from a server). These and other CUAHSI HIS tools have been under development and deployment for several years and together present a relatively complete software "stack" to support the consistent storage and delivery of hydrologic and other environmental observation data. This presentation describes the development of a new HIS software tool called "HydroDesktop" and the development of an online open source software development community to update and maintain the software. HydroDesktop is a local (i.e. not server-based) client-side software tool that ultimately will run on multiple operating systems and will provide a highly usable level of access to HIS services. The software provides many key capabilities including data query, map-based visualization, data download, local data maintenance, editing, graphing, data export to selected model-specific data formats, linkage with integrated modeling systems such as OpenMI, and ultimately upload to HIS servers from the local desktop software. As the software is presently in the early stages of development, this presentation focuses on the design approach and paradigm and is viewed as an opportunity to encourage participation in the open development community. Indeed, recognizing the value of community-based code development as a means of ensuring end-user adoption, this project has adopted an "iterative" or "spiral" software development approach, which is also described in this presentation.
ERIC Educational Resources Information Center
Boticki, I.; Katic, M.; Martin,S.
2013-01-01
This paper explores the educational benefits of introducing the aspect-oriented programming paradigm into a programming course in a study on a sample of 75 undergraduate software engineering students. It discusses how using the aspect-oriented paradigm, in addition to the object-oriented programming paradigm, affects students' programs, their exam…
Building an experience factory for maintenance
NASA Technical Reports Server (NTRS)
Valett, Jon D.; Condon, Steven E.; Briand, Lionel; Kim, Yong-Mi; Basili, Victor R.
1994-01-01
This paper reports the preliminary results of a study of the software maintenance process in the Flight Dynamics Division (FDD) of the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). This study is being conducted by the Software Engineering Laboratory (SEL), a research organization sponsored by the Software Engineering Branch of the FDD, which investigates the effectiveness of software engineering technologies when applied to the development of applications software. This software maintenance study began in October 1993 and is being conducted using the Quality Improvement Paradigm (QIP), a process improvement strategy based on three iterative steps: understanding, assessing, and packaging. The preliminary results represent the outcome of the understanding phase, during which SEL researchers characterized the maintenance environment, product, and process. Findings indicate that a combination of quantitative and qualitative analysis is effective for studying the software maintenance process, that additional measures should be collected for maintenance (as opposed to new development), and that characteristics such as effort, error rate, and productivity are best considered on a 'release' basis rather than on a project basis. The research thus far has documented some basic differences between new development and software maintenance. It lays the foundation for further application of the QIP to investigate means of improving the maintenance process and product in the FDD.
Software Systems for High-performance Quantum Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; Britt, Keith A
Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.
Long-term Preservation of Data Analysis Capabilities
NASA Astrophysics Data System (ADS)
Gabriel, C.; Arviset, C.; Ibarra, A.; Pollock, A.
2015-09-01
While the long-term preservation of scientific data obtained by large astrophysics missions is ensured through science archives, the issue of data analysis software preservation has hardly been addressed. Efforts by large data centres have contributed so far to maintain some instrument or mission-specific data reduction packages on top of high-level general purpose data analysis software. However, it is always difficult to keep software alive without support and maintenance once the active phase of a mission is over. This is especially difficult in the budgetary model followed by space agencies. We discuss the importance of extending the lifetime of dedicated data analysis packages and review diverse strategies under development at ESA using new paradigms such as Virtual Machines, Cloud Computing, and Software as a Service for making possible full availability of data analysis and calibration software for decades at minimal cost.
Paradigms of Evaluation in Natural Language Processing: Field Linguistics for Glass Box Testing
ERIC Educational Resources Information Center
Cohen, Kevin Bretonnel
2010-01-01
Although software testing has been well-studied in computer science, it has received little attention in natural language processing. Nonetheless, a fully developed methodology for glass box evaluation and testing of language processing applications already exists in the field methods of descriptive linguistics. This work lays out a number of…
Running R Statistical Computing Environment Software on the Peregrine
Truncated excerpt: ...for the development of new statistical methodologies and enjoys a large user base. ... R is a collaborative project ... programming paradigms to better leverage modern HPC systems. The CRAN task view for High Performance Computing...
Parallel Processing with Digital Signal Processing Hardware and Software
NASA Technical Reports Server (NTRS)
Swenson, Cory V.
1995-01-01
The assembly and testing of a parallel processing system are described. The system will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices, and will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described, followed by the installation procedure, research topics, and initial program development.
Thin-Slice Measurement of Wisdom
Hu, Chao S.; Ferrari, Michel; Wang, Qiandong; Woodruff, Earl
2017-01-01
Objective: Measurement of wisdom within a short period of time is vital for both the public interest (e.g., understanding a presidential election) and research (e.g., testing factors that facilitate wisdom development). A measurement of emotion associated with wisdom would be especially informative; therefore, a novel Thin-Slice measurement of wisdom was developed based on the Berlin Paradigm. For about 2 min, participants imagined the lens of a camera as the eyes of a friend/teacher whom they advised about a life dilemma. Verbal responses and facial expressions were both recorded by a camera: verbal responses were then rated on both the Berlin Wisdom criteria and newly developed Chinese wisdom criteria; facial expressions were analyzed by the iMotion FACET software module. Results showed acceptable inter-rater and inter-item reliability for this novel paradigm. Moreover, both wisdom ratings were not significantly correlated with social desirability, and the Berlin wisdom rating was significantly negatively correlated with neuroticism; feeling of surprise was significantly positively correlated with both wisdom criteria ratings. Our results provide the first evidence of this Thin-Slice Wisdom Paradigm's reliability, its immunity to social desirability, and its validity for assessing candidates' wisdom within a short timeframe. Although still awaiting further development, this novel paradigm contributes to an emerging universal wisdom paradigm applicable across cultures. PMID:28861016
Software and the future of programming languages.
Aho, Alfred V
2004-02-27
Although software is the key enabler of the global information infrastructure, the amount and extent of software in use in the world today are not widely understood, nor are the programming languages and paradigms that have been used to create the software. The vast size of the embedded base of existing software and the increasing costs of software maintenance, poor security, and limited functionality are posing significant challenges for the software R&D community.
Teaching Software Componentization: A Bar Chart Java Bean
ERIC Educational Resources Information Center
Mitri, Michel
2010-01-01
In the current object-oriented paradigm, software construction increasingly involves creating and utilizing "software components". These components can serve a variety of functions, from common algorithmic processes to database connectivity to graphical interfaces. The advantage of component architectures is that programmers can use pre-existing…
Internet-Assisted Real-Time Experiments Using the Internet--Hardware and Software Considerations
ERIC Educational Resources Information Center
Singh, R. Paul; Circelli, Diego
2005-01-01
The spectacular increase in Internet-based applications during the past decade has had a significant impact on the education delivery paradigms. The user interactivity aspect of the Internet has provided new opportunities to instructors to incorporate its use in developing new learning systems. The use of the Internet in carrying out live…
1991-07-30
Truncated excerpt (slide fragments): Management reviews, engineering and WBS - Spiral 0-5; Risk Management Planning - Spiral 0-5; proper initial planning - Spiral 0.1... Reusability issues for trusted systems are associated closely with maintenance issues. Reuse theory and practice for highly trusted systems will require...
A Case Study of Coordination in Distributed Agile Software Development
NASA Astrophysics Data System (ADS)
Hole, Steinar; Moe, Nils Brede
Global Software Development (GSD) has gained significant popularity as an emerging paradigm. Companies also show interest in applying agile approaches to distributed development to combine the advantages of both. However, in their most radical forms, agile and GSD can be placed at opposite ends of a plan-based/agile spectrum because of how work is coordinated. We describe how three GSD projects applying agile methods coordinate their work. We found that trust is needed to reduce the need for standardization and direct supervision when coordinating work in a GSD project, and that electronic chatting supports mutual adjustment. Further, co-location and modularization mitigate communication problems, enable agility in at least part of a GSD project, and render the implementation of Scrum of Scrums possible.
Automatic generation of randomized trial sequences for priming experiments.
Ihrke, Matthias; Behrendt, Jörg
2011-01-01
In most psychological experiments, a randomized presentation of successive displays is crucial for the validity of the results. For some paradigms this is not a trivial issue because trials are interdependent, e.g., in priming paradigms. We present software that automatically generates optimized trial sequences for (negative-) priming experiments. Our implementation is based on an optimization heuristic known as genetic algorithms, which allow for an intuitive interpretation due to their similarity to natural evolution. The program features a graphical user interface that allows the user to generate trial sequences and to interactively improve them. The software is based on freely available software and is released under the GNU General Public License.
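The program itself is GUI-based and is not reproduced here; the following is only a toy sketch of the genetic-algorithm idea the abstract describes, with a made-up fitness function that penalizes immediate repetitions of the same trial type. Population size, mutation rate, and the constraint are illustrative assumptions.

```python
import random

CONDITIONS = ["prime", "probe", "control", "filler"]   # hypothetical trial types
SEQ_LEN, POP_SIZE, GENERATIONS = 40, 30, 200

def fitness(seq):
    """Toy constraint: penalize immediate repetitions of the same condition."""
    return -sum(a == b for a, b in zip(seq, seq[1:]))

def mutate(seq, rate=0.05):
    return [random.choice(CONDITIONS) if random.random() < rate else t for t in seq]

def crossover(a, b):
    cut = random.randrange(1, SEQ_LEN - 1)
    return a[:cut] + b[cut:]

population = [[random.choice(CONDITIONS) for _ in range(SEQ_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]                     # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("constraint violations:", -fitness(best))
print(best)
```

A real trial-sequence generator would encode paradigm-specific constraints (e.g., balanced condition counts or controlled transition probabilities) in the fitness function instead of this single toy rule.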
System and Software Reliability (C103)
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g. OO) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component developed over the last decade, there is a great need to bring them into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.
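SMERFS'3 itself is not shown here; as a generic illustration of the kind of software reliability growth model such a tool fits, the sketch below fits the classical Goel-Okumoto model m(t) = a(1 - exp(-b t)) to invented cumulative failure counts using scipy. The data and starting values are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Expected cumulative number of failures by time t under the Goel-Okumoto NHPP model."""
    return a * (1.0 - np.exp(-b * t))

# Invented test data: weeks of testing vs. cumulative failures observed.
weeks = np.arange(1, 13, dtype=float)
cum_failures = np.array([5, 11, 16, 20, 24, 26, 29, 30, 32, 33, 33, 34], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(40.0, 0.1))
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"predicted residual faults after week 12: {a_hat - goel_okumoto(12.0, a_hat, b_hat):.1f}")
```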
Socio-Cultural Challenges in Global Software Engineering Education
ERIC Educational Resources Information Center
Hoda, Rashina; Babar, Muhammad Ali; Shastri, Yogeshwar; Yaqoob, Humaa
2017-01-01
Global software engineering education (GSEE) is aimed at providing software engineering (SE) students with knowledge, skills, and understanding of working in globally distributed arrangements so they can be prepared for the global SE (GSE) paradigm. It is important to understand the challenges involved in GSEE for improving the quality and…
ENCOMPASS: A SAGA based environment for the compositon of programs and specifications, appendix A
NASA Technical Reports Server (NTRS)
Terwilliger, Robert B.; Campbell, Roy H.
1985-01-01
ENCOMPASS is an example integrated software engineering environment being constructed by the SAGA project. ENCOMPASS supports the specification, design, construction and maintenance of efficient, validated, and verified programs in a modular programming language. The life cycle paradigm, schema of software configurations, and hierarchical library structure used by ENCOMPASS are presented. In ENCOMPASS, the software life cycle is viewed as a sequence of developments, each of which reuses components from the previous ones. Each development proceeds through the phases of planning, requirements definition, validation, design, implementation, and system integration. The components in a software system are modeled as entities which have relationships between them. An entity may have different versions, and different views of the same project are allowed. The simple entities supported by ENCOMPASS may be combined into modules, which may be collected into projects. ENCOMPASS supports multiple programmers and projects using a hierarchical library system containing a workspace for each programmer, a project library for each project, and a global library common to all projects.
Wilson, James C; Kesler, Mitch; Pelegrin, Sara-Lynn E; Kalvi, LeAnna; Gruber, Aaron; Steenland, Hendrik W
2015-09-30
The physical distance between predator and prey is a primary determinant of behavior, yet few paradigms exist to study this reliably in rodents. The utility of a robotically controlled laser in a predator-prey-like (PPL) paradigm was explored in rats. This involved the construction of a robotic two-dimensional gimbal to dynamically position a laser beam in a behavioral test chamber. Custom software was used to control the trajectory and final laser position in response to user input on a console. The software also detected the locations of the laser beam and the rodent continuously so that the dynamics of the distance between them could be analyzed. When the animal and laser beam came within a fixed distance of each other, the animal would either be rewarded with electrical brain stimulation or shocked subcutaneously. Animals that received rewarding electrical brain stimulation could learn to chase the laser beam, while animals that received aversive subcutaneous shock learned to actively avoid the laser beam in the PPL paradigm. Mathematical computations are presented which describe the dynamic interaction of the laser and rodent. The robotic laser offers a neutral stimulus to train rodents in an open field and is the first device versatile enough to assess the distance between predator and prey in real time. With ongoing behavioral testing, this tool will permit the neurobiological investigation of predator-prey-like relationships in rodents, and may have future implications for prosthetic limb development through brain-machine interfaces. Copyright © 2015 Elsevier B.V. All rights reserved.
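The authors' acquisition and control software is not included in the abstract; the snippet below is only a schematic of the distance-triggered contingency described above: when the tracked rodent and the laser spot come within a fixed distance, a reward (or shock) event is issued. The coordinates, threshold, and deliver_outcome placeholder are hypothetical.

```python
import math
import random

THRESHOLD_CM = 5.0          # hypothetical contact distance

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def deliver_outcome(kind: str) -> None:
    """Placeholder for triggering brain-stimulation reward or cutaneous shock hardware."""
    print(f"outcome delivered: {kind}")

# Simulated tracking loop; in the real paradigm these positions would come from
# video tracking of the rat and detection of the laser spot.
for frame in range(10):
    rat = (random.uniform(0, 40), random.uniform(0, 40))
    laser = (random.uniform(0, 40), random.uniform(0, 40))
    d = distance(rat, laser)
    if d <= THRESHOLD_CM:
        deliver_outcome("reward")        # or "shock", depending on the training group
    print(f"frame {frame}: distance = {d:.1f} cm")
```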
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were examined to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project is approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during the acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The number of programming errors at the beginning of acceptance testing can be significantly reduced. The results of the existing development methodology are examined for ways of improvement. A basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.
C-Language Integrated Production System, Version 5.1
NASA Technical Reports Server (NTRS)
Riley, Gary; Donnell, Brian; Ly, Huyen-Anh VU; Culbert, Chris; Savely, Robert T.; Mccoy, Daniel J.; Giarratano, Joseph
1992-01-01
CLIPS 5.1 provides cohesive software tool for handling wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming provides representation of knowledge by use of heuristics. Object-oriented programming enables modeling of complex systems as modular components. Procedural programming enables CLIPS to represent knowledge in ways similar to those allowed in such languages as C, Pascal, Ada, and LISP. Working with CLIPS 5.1, one can develop expert-system software by use of rule-based programming only, object-oriented programming only, procedural programming only, or combinations of the three.
Software Reviews. PC Software for Artificial Intelligence Applications.
ERIC Educational Resources Information Center
Epp, Helmut; And Others
1988-01-01
Contrasts artificial intelligence and conventional programming languages. Reviews Personal Consultant Plus, Smalltalk/V, and Nexpert Object, which are PC-based products inspired by problem-solving paradigms. Provides information on background and operation of each. (RT)
NASA Technical Reports Server (NTRS)
Basili, Victor R.
1992-01-01
The concepts of quality improvement have permeated many businesses. It is clear that the nineties will be the quality era for software, and there is a growing need to develop or adapt quality improvement approaches to the software business. Thus we must understand software as an artifact and software as a business. Since the business we are dealing with is software, we must understand the nature of software and software development. The software discipline is evolutionary and experimental; it is a laboratory science. Software is development, not production. The technologies of the discipline are human based. There is a lack of models that allow us to reason about the process and the product. All software is not the same; process is a variable, goals are variable, etc. Packaged, reusable experiences require additional resources in the form of organization, processes, people, etc. A variety of organizational frameworks have been proposed to improve quality for various businesses. The ones discussed in this presentation include: Plan-Do-Check-Act, a quality improvement process based upon a feedback cycle for optimizing a single process model/production line; the Experience Factory/Quality Improvement Paradigm, continuous improvement through the experimentation, packaging, and reuse of experiences based upon a business's needs; Total Quality Management, a management approach to long-term success through customer satisfaction based on the participation of all members of an organization; the SEI Capability Maturity Model, a staged process improvement based upon assessment with regard to a set of key process areas until level 5 is reached, which represents continuous process improvement; and Lean (software) Development, a principle supporting the concentration of production on 'value added' activities and the elimination or reduction of 'non-value-added' activities.
Journals May Soon Use Anti-Plagiarism Software on Their Authors
ERIC Educational Resources Information Center
Rampell, Catherine
2008-01-01
This spring, academic journals may turn the anti-plagiarism software that professors have been using against their students on the professors themselves. CrossRef, a publishing industry association, and the software company iParadigms announced a deal last week to create CrossCheck, an anti-plagiarism program for academic journals. The software…
Roca, Alberto I
2014-01-01
The 2013 BioVis Contest provided an opportunity to evaluate different paradigms for visualizing protein multiple sequence alignments. Such data sets are becoming extremely large and thus are taxing current visualization paradigms. Sequence Logos represent consensus sequences but have limitations for protein alignments. As an alternative, ProfileGrids are a new protein sequence alignment visualization paradigm, representing an alignment as a color-coded matrix of the residue frequency occurring at every homologous position in the aligned protein family. The JProfileGrid software program was used to analyze the BioVis contest data sets to generate figures for comparison with the Sequence Logo reference images. The ProfileGrid representation allows for the clear and effective analysis of protein multiple sequence alignments. This includes both a general overview of the conservation and diversity sequence patterns as well as the interactive ability to query the details of the protein residue distributions in the alignment. The JProfileGrid software is free and available from http://www.ProfileGrid.org.
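JProfileGrid itself is a Java application and its code is not part of the abstract; the snippet below only sketches the underlying ProfileGrid idea as described above: a matrix of residue (and gap) frequencies at every aligned column. The toy alignment is invented.

```python
from collections import Counter

# Toy protein alignment (rows = sequences, columns = homologous positions); '-' is a gap.
alignment = [
    "MK-LV",
    "MKALV",
    "MRALI",
    "MKAL-",
]

columns = list(zip(*alignment))
residues = sorted({r for col in columns for r in col})

# ProfileGrid-style matrix: frequency of each residue (and gap) at every column.
profile = {r: [Counter(col)[r] / len(col) for col in columns] for r in residues}

print("res " + " ".join(f"c{i + 1:>4d}" for i in range(len(columns))))
for r in residues:
    print(f"{r:>3s} " + " ".join(f"{f:5.2f}" for f in profile[r]))
```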
NASA Astrophysics Data System (ADS)
Furuya, Haruhisa; Hiratsuka, Mitsuyoshi
This article gives an overview of the historical transition of legal protection of computer software contracts in the United States and presents how such protection should function under the Uniform Commercial Code and its amended Article 2B, the Uniform Computer Information Transactions Act, and the recently approved “Principles of the Law of Software Contracts”.
Toward Reusable Graphics Components in Ada
1993-03-01
Then alternatives for obtaining well-engineered reusable software components were examined. Finally, the alternatives were analyzed, and the most...reusable software components. Chapter 4 describes detailed design and implementation strategies in building a well-engineered reusable set of components in...study. 2.2 The Object-Oriented Paradigm; 2.2.1 The Need for Object-Oriented Techniques. Among software engineers the software crisis is a well known...
2012-01-01
computerized stimulation paradigms for use during functional neuroimaging (i.e., MSIT). Accomplishments: • The following computer tasks were...and Stability Test. • Programming of all computerized functional MRI stimulation paradigms and assessment tasks using E-prime software was completed...Computer stimulation paradigms were tested in the scanner environment to ensure that they could be presented and seen by subjects in the scanner
An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems
NASA Technical Reports Server (NTRS)
Follen, Gregory J.
2003-01-01
Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine, three-dimensional computational fluid dynamics propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT).
Hellrung, Lydia; Hollmann, Maurice; Zscheyge, Oliver; Schlumm, Torsten; Kalberlah, Christian; Roggenhofer, Elisabeth; Okon-Singer, Hadas; Villringer, Arno; Horstmann, Annette
2015-01-01
In this work we present a new open source software package offering a unified framework for the real-time adaptation of fMRI stimulation procedures. The software provides a straightforward setup and a highly flexible approach to adapt fMRI paradigms while the experiment is running. The general framework comprises the inclusion of parameters reflecting the subject's compliance, such as directing gaze to visually presented stimuli, and physiological fluctuations, like blood pressure or pulse. Additionally, this approach yields possibilities to investigate complex scientific questions, for example the influence of EEG rhythms or of fMRI signal results themselves. To prove the concept of this approach, we used our software in a usability example for an fMRI experiment where the presentation of emotional pictures was dependent on the subject's gaze position. This can have a significant impact on the results. So far, if this is taken into account during fMRI data analysis, it is commonly done by the post-hoc removal of erroneous trials. Here, we propose an a priori adaptation of the paradigm during the experiment's runtime. Our fMRI findings clearly show the benefits of an adapted paradigm in terms of statistical power and higher effect sizes in emotion-related brain regions. This can be of special interest for all experiments with low statistical power due to a limited number of subjects, a limited amount of time, costs or available data to analyze, as is the case with real-time fMRI. PMID:25837719
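The package's actual interfaces are not given in the abstract; the loop below is only a schematic of the gaze-contingent adaptation it describes, in which a picture is presented only when the current gaze sample falls inside the stimulus region and is otherwise re-queued rather than discarded post hoc. The gaze source, region bounds, and trial names are hypothetical.

```python
from collections import deque
import random

STIM_REGION = (300, 200, 700, 600)      # hypothetical x_min, y_min, x_max, y_max in screen pixels

def current_gaze():
    """Placeholder for one sample from the eye tracker."""
    return random.uniform(0, 1024), random.uniform(0, 768)

def gaze_on_stimulus(gaze, region=STIM_REGION) -> bool:
    x, y = gaze
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

queue = deque(["neutral_01", "negative_07", "negative_12", "neutral_04"])   # hypothetical trial list
shown, attempts = [], 0
while queue and attempts < 50:           # attempt cap keeps the toy loop finite
    attempts += 1
    picture = queue.popleft()
    if gaze_on_stimulus(current_gaze()):
        shown.append(picture)            # subject was looking: present and keep the trial
    else:
        queue.append(picture)            # adapt a priori: re-queue instead of discarding post hoc
print("presented:", shown)
```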
Models and Frameworks: A Synergistic Association for Developing Component-Based Applications
Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A.; Álvarez, Bárbara
2014-01-01
The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development. PMID:25147858
Models and frameworks: a synergistic association for developing component-based applications.
Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A; Álvarez, Bárbara
2014-01-01
The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development.
Test Driven Development: Lessons from a Simple Scientific Model
NASA Astrophysics Data System (ADS)
Clune, T. L.; Kuo, K.
2010-12-01
In the commercial software industry, unit testing frameworks have emerged as a disruptive technology that has permanently altered the process by which software is developed. Unit testing frameworks significantly reduce traditional barriers, both practical and psychological, to creating and executing tests that verify software implementations. A new development paradigm, known as test driven development (TDD), has emerged from unit testing practices, in which low-level tests (i.e. unit tests) are created by developers prior to implementing new pieces of code. Although somewhat counter-intuitive, this approach actually improves developer productivity. In addition to reducing the average time for detecting software defects (bugs), the requirement to provide procedure interfaces that enable testing frequently leads to superior design decisions. Although TDD is widely accepted in many software domains, its applicability to scientific modeling still warrants reasonable skepticism. While the technique is clearly relevant for infrastructure layers of scientific models such as the Earth System Modeling Framework (ESMF), numerical and scientific components pose a number of challenges to TDD that are not often encountered in commercial software. Nonetheless, our experience leads us to believe that the technique has great potential not only for developer productivity, but also as a tool for understanding and documenting the basic scientific assumptions upon which our models are implemented. We will provide a brief introduction to test driven development and then discuss our experience in using TDD to implement a relatively simple numerical model that simulates the growth of snowflakes. Many of the lessons learned are directly applicable to larger scientific models.
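The snowflake model itself is not given in the abstract, so the example below only illustrates the test-first rhythm on a stand-in: unit tests written before a deliberately trivial mass-growth function. The growth law and its parameters are invented for illustration.

```python
import unittest

# --- tests written first (they fail until grow_mass is implemented) ---
class TestSnowflakeGrowth(unittest.TestCase):
    def test_mass_increases_monotonically(self):
        masses = [grow_mass(m0=1e-9, rate=0.1, steps=n) for n in range(5)]
        self.assertTrue(all(b > a for a, b in zip(masses, masses[1:])))

    def test_zero_steps_returns_initial_mass(self):
        self.assertAlmostEqual(grow_mass(m0=1e-9, rate=0.1, steps=0), 1e-9)

# --- minimal implementation added afterwards to make the tests pass ---
def grow_mass(m0: float, rate: float, steps: int) -> float:
    """Toy exponential growth of snowflake mass; stands in for the real microphysics."""
    mass = m0
    for _ in range(steps):
        mass *= (1.0 + rate)
    return mass

if __name__ == "__main__":
    unittest.main()
```

The point of the exercise is the ordering, not the physics: the tests encode the scientific assumption (mass grows monotonically from its initial value) before any implementation exists.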
Automated support for system's engineering and operations - The development of new paradigms
NASA Technical Reports Server (NTRS)
Truszkowski, Walt; Hall, Gardiner A.; Jaworski, Allan; Zoch, David
1992-01-01
Technological developments in spacecraft ground operations are reviewed. The technological, operations-oriented, managerial, and economic factors driving the evolution of the Mission Operations Control Center (MOCC) and its predecessor, the Operational Control Center, are examined. The functional components of the various MOCC subsystems are outlined. A brief overview is given of the concepts behind the Knowledge-Based Software Engineering Environment, the Generic Spacecraft Analysis Assistant, and the Knowledge From Pictures tool.
Foundations for Security Aware Software Development Education
2005-11-22
depending on the budget, that support robustness. We discuss the educational customer base, projected lifetime, and complexity of the paradigm shift that should... [remainder of excerpt consists of scrambled reference fragments: Millennial Perspectives in Computer Science (essays in honour of Sir Tony Hoare); Cheetham and Ferraiolo, "The Systems Security ... Capability Maturity Model", 21st National Information Systems Security Conference; Schwartz, "Object Oriented Extensions to ..."]
Programming model for distributed intelligent systems
NASA Technical Reports Server (NTRS)
Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.
1988-01-01
A programming model and architecture developed for the design and implementation of complex, heterogeneous measurement and control systems are described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared-memory based parallel computational models, and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.
Using Docker Compose for the Simple Deployment of an Integrated Drug Target Screening Platform.
List, Markus
2017-06-10
Docker virtualization allows software tools to be executed in an isolated and controlled environment referred to as a container. In Docker containers, dependencies are provided exactly as intended by the developer and, consequently, they simplify the distribution of scientific software and foster reproducible research. The Docker paradigm is that each container encapsulates one particular software tool. However, to analyze complex biomedical data sets, it is often necessary to combine several software tools into elaborate workflows. To address this challenge, several Docker containers need to be instantiated and properly integrated, which complicates the software deployment process unnecessarily. Here, we demonstrate how an extension to Docker, Docker Compose, can be used to mitigate these problems by providing a unified setup routine that deploys several tools in an integrated fashion. We demonstrate the power of this approach by example of a Docker Compose setup for a drug target screening platform consisting of five integrated web applications and shared infrastructure, deployable in just two lines of code.
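The platform's actual compose file is not part of the abstract; below is only a generic sketch of the pattern it describes: a single docker-compose.yml declaring several services plus shared infrastructure, brought up with one command. It is written as a short Python script that writes the file and then invokes the Docker CLI; the service names, images, and ports are invented.

```python
import subprocess
from textwrap import dedent

# Hypothetical multi-service setup: two web applications plus a shared database.
compose_yaml = dedent("""\
    services:
      webapp-a:
        image: example/webapp-a:latest
        ports: ["8080:80"]
        depends_on: [db]
      webapp-b:
        image: example/webapp-b:latest
        ports: ["8081:80"]
        depends_on: [db]
      db:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: example
""")

with open("docker-compose.yml", "w") as fh:
    fh.write(compose_yaml)

# "Two lines" in spirit: write the file, then bring everything up with one command.
# Requires a local Docker installation with the Compose plugin.
subprocess.run(["docker", "compose", "up", "-d"], check=True)
```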
Exploring the Role of Value Networks for Software Innovation
NASA Astrophysics Data System (ADS)
Morgan, Lorraine; Conboy, Kieran
This paper describes research in progress that aims to explore the applicability and implications of open innovation practices in two firms: one that employs agile development methods and another that utilizes open source software. The open innovation paradigm has a lot in common with open source and agile development methodologies. A particular strength of agile approaches is that they move away from 'introverted' development, involving only the development personnel, and intimately involve the customer in all areas of software creation, supposedly leading to the development of a more innovative and hence more valuable information system. Open source software (OSS) development also shares two key elements of the open innovation model, namely the collaborative development of the technology and shared rights to the use of the technology. However, one shortfall of agile development in particular is the narrow focus on a single customer representative. In response to this, we argue that current thinking regarding innovation needs to be extended to include multiple stakeholders both across and outside the organization. Additionally, for firms utilizing open source, it has been found that their position in a network of potential complementors determines the amount of superior value they create for their customers. Thus, this paper aims to gain a better understanding of the applicability and implications of open innovation practices in firms that employ open source and agile development methodologies. In particular, a conceptual framework is derived for further testing.
A Testbed for Evaluating Lunar Habitat Autonomy Architectures
NASA Technical Reports Server (NTRS)
Lawler, Dennis G.
2008-01-01
A lunar outpost will involve a habitat with an integrated set of hardware and software that will maintain a safe environment for human activities. There is a desire for a paradigm shift whereby crew will be the primary mission operators, not ground controllers. There will also be significant periods when the outpost is uncrewed. This will require that significant automation software be resident in the habitat to maintain all system functions and respond to faults. JSC is developing a testbed to allow for early testing and evaluation of different autonomy architectures. This will allow evaluation of different software configurations in order to: 1) understand different operational concepts; 2) assess the impact of failures and perturbations on the system; and 3) mitigate software and hardware integration risks. The testbed will provide an environment in which habitat hardware simulations can interact with autonomous control software. Faults can be injected into the simulations and different mission scenarios can be scripted. The testbed allows for logging, replaying and re-initializing mission scenarios. An initial testbed configuration has been developed by combining an existing life support simulation and an existing simulation of the space station power distribution system. Results from this initial configuration will be presented along with suggested requirements and designs for the incremental development of a more sophisticated lunar habitat testbed.
Achieving Operability via the Mission System Paradigm
NASA Technical Reports Server (NTRS)
Hammer, Fred J.; Kahr, Joseph R.
2006-01-01
In the past, flight and ground systems have been developed largely independently, with the flight system taking the lead and dominating the development process. Operability issues have been addressed poorly in planning, requirements, design, I&T, and system-contracting activities. In many cases, as documented in lessons learned, this has resulted in significant avoidable increases in cost and risk. With complex missions and systems, operability is being recognized as an important end-to-end design issue. Nevertheless, lessons learned and operability concepts remain, in many cases, poorly understood and sporadically applied. A key to effective application of operability concepts is adopting a 'mission system' paradigm. In this paradigm, flight and ground systems are treated, from an engineering and management perspective, as inter-related elements of a larger mission system. The mission system consists of flight hardware, flight software, telecom services, ground data system, testbeds, flight teams, science teams, flight operations processes, procedures, and facilities. The system is designed in functional layers, which span flight and ground. It is designed in response to project-level requirements, mission design and an operations concept, and is developed incrementally, with early and frequent integration of flight and ground components.
Parallel computation and the Basis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, G.R.
1992-12-16
A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communication costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
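The master-and-slaves decomposition that PROTOPAR demonstrates on top of Basis and PVM can be sketched, purely as an illustration of the paradigm, using Python's standard multiprocessing module in place of PVM message passing; the "subdomain" arithmetic below is invented for the example.

```python
# Illustrative master/worker (master-and-slaves) decomposition.
# multiprocessing stands in for the PVM message passing used by PROTOPAR,
# and the per-subdomain work is an invented placeholder.
from multiprocessing import Pool


def solve_subdomain(subdomain):
    """Worker: perform the (hypothetical) local computation for one subdomain."""
    lo, hi = subdomain
    return sum(x * x for x in range(lo, hi))


def master(n_points=1_000, n_workers=4):
    # The master partitions the domain and distributes the pieces to workers,
    # then combines the partial results.
    step = n_points // n_workers
    subdomains = [(i * step, (i + 1) * step) for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(solve_subdomain, subdomains)
    return sum(partials)


if __name__ == "__main__":
    print(master())
```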
Parallel computation and the basis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, G.R.
1993-05-01
A software package has been written that can facilitate efforts to develop powerful, flexible, and easy-to-use programs that can run in single-processor, massively parallel, and distributed computing environments. Particular attention has been given to the difficulties posed by a program consisting of many science packages that represent subsystems of a complicated, coupled system. Methods have been found to maintain independence of the packages by hiding data structures without increasing the communications costs in a parallel computing environment. Concepts developed in this work are demonstrated by a prototype program that uses library routines from two existing software systems, Basis and Parallel Virtual Machine (PVM). Most of the details of these libraries have been encapsulated in routines and macros that could be rewritten for alternative libraries that possess certain minimum capabilities. The prototype software uses a flexible master-and-slaves paradigm for parallel computation and supports domain decomposition with message passing for partitioning work among slaves. Facilities are provided for accessing variables that are distributed among the memories of slaves assigned to subdomains. The software is named PROTOPAR.
Goal Structuring Notation in a Radiation Hardening Assurance Case for COTS-Based Spacecraft
NASA Technical Reports Server (NTRS)
Witulski, A.; Austin, R.; Evans, J.; Mahadevan, N.; Karsai, G.; Sierawski, B.; LaBel, K.; Reed, R.; Schrimpf, R.
2016-01-01
A systematic approach is presented to constructing a radiation assurance case using Goal Structuring Notation (GSN) for spacecraft containing COTS parts. The GSN paradigm is applied to an SRAM single-event upset experiment board designed to fly on a CubeSat in January 2017. A custom software language for development of a GSN assurance case is under development at Vanderbilt. Construction of a radiation assurance case without use of hardened parts or extensive radiation testing is discussed.
The Design of Fault Tolerant Quantum Dot Cellular Automata Based Logic
NASA Technical Reports Server (NTRS)
Armstrong, C. Duane; Humphreys, William M.; Fijany, Amir
2002-01-01
As transistor geometries are reduced, quantum effects begin to dominate device performance. At some point, transistors cease to have the properties that make them useful computational components. New computing elements must be developed in order to keep pace with Moore's Law. Quantum dot cellular automata (QCA) represent an alternative paradigm to transistor-based logic. QCA architectures that are robust to manufacturing tolerances and defects must be developed. We are developing software that allows the exploration of fault-tolerant QCA gate architectures by automating the specification, simulation, analysis, and documentation processes.
An evolutionary solution to anesthesia automated record keeping.
Bicker, A A; Gage, J S; Poppers, P J
1998-08-01
In the course of five years, the development of an automated anesthesia record keeper has evolved through nearly a dozen stages, each marked by new features and sophistication. Commodity PC hardware and software minimized development costs. Object-oriented analysis, design, and programming supported the process of change. In addition, we developed an evolutionary strategy that optimized motivation and risk management and maximized return on investment. Besides providing record keeping services, the system supports educational and research activities and, through a flexible plotting paradigm, supports each anesthesiologist's focus on physiological data during and after anesthesia.
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; anMey, Dieter; Hatay, Ferhat F.
2003-01-01
With the advent of parallel hardware and software technologies users are faced with the challenge to choose a programming paradigm best suited for the underlying computer architecture. With the current trend in parallel computer architectures towards clusters of shared memory symmetric multi-processors (SMP), parallel programming techniques have evolved to support parallelism beyond a single level. Which programming paradigm is the best will depend on the nature of the given problem, the hardware architecture, and the available software. In this study we will compare different programming paradigms for the parallelization of a selected benchmark application on a cluster of SMP nodes. We compare the timings of different implementations of the same CFD benchmark application employing the same numerical algorithm on a cluster of Sun Fire SMP nodes. The rest of the paper is structured as follows: In section 2 we briefly discuss the programming models under consideration. We describe our compute platform in section 3. The different implementations of our benchmark code are described in section 4 and the performance results are presented in section 5. We conclude our study in section 6.
2014-01-01
Background The 2013 BioVis Contest provided an opportunity to evaluate different paradigms for visualizing protein multiple sequence alignments. Such data sets are becoming extremely large and thus taxing current visualization paradigms. Sequence Logos represent consensus sequences but have limitations for protein alignments. As an alternative, ProfileGrids are a new protein sequence alignment visualization paradigm that represents an alignment as a color-coded matrix of the residue frequency occurring at every homologous position in the aligned protein family. Results The JProfileGrid software program was used to analyze the BioVis contest data sets to generate figures for comparison with the Sequence Logo reference images. Conclusions The ProfileGrid representation allows for the clear and effective analysis of protein multiple sequence alignments. This includes both a general overview of the conservation and diversity sequence patterns as well as the interactive ability to query the details of the protein residue distributions in the alignment. The JProfileGrid software is free and available from http://www.ProfileGrid.org. PMID:25237393
A Layered Solution for Supercomputing Storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grider, Gary
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage—based on inexpensive, failure-prone disk drives—between disk drives and tape archives.
Discrete mathematics, formal methods, the Z schema and the software life cycle
NASA Technical Reports Server (NTRS)
Bown, Rodney L.
1991-01-01
The proper role and scope for the use of discrete mathematics and formal methods in support of engineering the security and integrity of components within deployed computer systems are discussed. It is proposed that the Z schema can be used as the specification language to capture the precise definition of system and component interfaces. This can be accomplished with an object oriented development paradigm.
Optical phase plates as a creative medium for special effects in images
NASA Astrophysics Data System (ADS)
Shaoulov, Vesselin I.; Meyer, Catherine; Argotti, Yann; Rolland, Jannick P.
2001-12-01
A new paradigm and methods for special effects in images were recently proposed by artist and movie producer Steven Hylen. Based on these methods, images resembling painting may be formed using optical phase plates. The role of the mathematical and optical properties of the phase plates is studied in the development of these new art forms. Results of custom software as well as ASAP simulations are presented.
Maintaining Quality and Confidence in Open-Source, Evolving Software: Lessons Learned with PFLOTRAN
NASA Astrophysics Data System (ADS)
Frederick, J. M.; Hammond, G. E.
2017-12-01
Software evolution in an open-source framework poses a major challenge to a geoscientific simulator, but when properly managed, the pay-off can be enormous for both the developers and the community at large. Developers must juggle implementing new scientific process models, adopting increasingly efficient numerical methods and programming paradigms, and changing funding sources (or a total lack of funding), while also ensuring that legacy code remains functional and reported bugs are fixed in a timely manner. With robust software engineering and a plan for long-term maintenance, a simulator can evolve over time, incorporating and leveraging many advances in the computational and domain sciences. In this positive light, what practices in software engineering and code maintenance can be employed within open-source development to maximize the positive aspects of software evolution and community contributions while minimizing its negative side effects? This presentation discusses steps taken in the development of PFLOTRAN (www.pflotran.org), an open source, massively parallel subsurface simulator for multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. As PFLOTRAN's user base and development team continues to grow, it has become increasingly important to implement strategies which ensure sustainable software development while maintaining software quality and community confidence. In this presentation, we will share our experiences and "lessons learned" within the context of our open-source development framework and community engagement efforts. Topics discussed will include how we've leveraged both standard software engineering principles, such as coding standards, version control, and automated testing, as well as the unique advantages of object-oriented design in process model coupling, to ensure software quality and confidence. We will also be prepared to discuss the major challenges faced by most open-source software teams, such as on-boarding new developers or one-time contributions, dealing with competitors or lookie-loos, and other downsides of complete transparency, as well as our approach to community engagement, including a user group email list, hosting short courses and workshops for new users, and maintaining a website. SAND2017-8174A
Software for Intelligent System Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Trevino, Luis C.
2004-01-01
The slide presentation is a briefing in four areas: overview of health management paradigms; overview of the ARC-Houston Software Engineering Technology Workshop held on April 20-22, 2004; identified technologies relevant to technical themes of intelligent system health management; and the author's thoughts on these topics.
Extreme Programming: A Kuhnian Revolution?
NASA Astrophysics Data System (ADS)
Northover, Mandy; Northover, Alan; Gruner, Stefan; Kourie, Gerrick G.; Boake, Andrew
This paper critically assesses the extent to which the Agile Software community's use of Thomas Kuhn's theory of revolutionary scientific change is justified. It will be argued that Kuhn's concepts of "scientific revolution" and "paradigm shift" cannot adequately explain the change from one type of software methodology to another.
NASA Technical Reports Server (NTRS)
Withey, James V.
1986-01-01
The validity of real-time software is determined by its ability to execute on a computer within the time constraints of the physical system it is modeling. In many applications the time constraints are so critical that the details of process scheduling are elevated to the requirements analysis phase of the software development cycle. It is not uncommon to find specifications for a real-time cyclic executive program included in or assumed by such requirements. It was found that preliminary designs structured around this implementation obscure the data flow of the real-world system being modeled, and that it is consequently difficult and costly to maintain, update, and reuse the resulting software. A cyclic executive is a software component that schedules and implicitly synchronizes the real-time software through periodic and repetitive subroutine calls. Therefore a design method is sought that allows the deferral of process scheduling to the later stages of design. The appropriate scheduling paradigm must be chosen given the performance constraints, the target environment, and the software's lifecycle. The concept of process inversion is explored with respect to the cyclic executive.
ANTS: Applying A New Paradigm for Lunar and Planetary Exploration
NASA Technical Reports Server (NTRS)
Clark, P. E.; Curtis, S. A.; Rilee, M. L.
2002-01-01
ANTS (Autonomous Nano-Technology Swarm), a mission architecture consisting of a large (1000-member) swarm of pico-class (1 kg) totally autonomous spacecraft with both adaptable and evolvable heuristic systems, is being developed as a NASA advanced mission concept and is examined here as a paradigm for lunar surface exploration. As the capacity and complexity of hardware and software, demands for bandwidth, and the sophistication of goals for lunar and planetary exploration have increased, greater cost constraints have led to fewer resources and thus the need to operate spacecraft with less frequent human contact. At present, autonomous operation of spacecraft systems gives spacecraft a great capability to 'safe' themselves and survive when conditions threaten spacecraft safety. To further develop spacecraft capability, NASA is at the forefront of development of new mission architectures which involve the use of Intelligent Software Agents (ISAs), performing experiments in space and on the ground to advance deliberative and collaborative autonomous control techniques. Selected missions in current planning stages require small groups of spacecraft weighing tens, instead of hundreds, of kilograms to cooperate at a tactical level to select and schedule measurements to be made by appropriate instruments onboard. Such missions will be characterizing rapidly unfolding real-time events on a routine basis. The next level of development, which we are considering here, is the use of autonomous systems at the strategic level to explore remote terranes, potentially involving large surveys or detailed reconnaissance.
Risk as a Resource - A New Paradigm
NASA Technical Reports Server (NTRS)
Gindorf, Thomas E.
1996-01-01
NASA must change dramatically because of the current United States federal budget climate. The American people and their elected officials have mandated a smaller, more efficient and effective government. For the past decade, NASA's budget had grown at or slightly above the rate of inflation. In that era, taking all steps to avoid the risk of failure was the rule. Spacecraft development was characterized by extensive analyses, numerous reviews, and multiple conservative tests. This methodology was consistent with the long available schedules for developing hardware and software for very large, billion dollar spacecraft. Those days are over. The time when every identifiable step was taken to avoid risk is being replaced by a new paradigm which manages risk in much the same way as other resources (schedule, performance, or dollars) are managed. While success is paramount to survival, it can no longer be bought with a large growing NASA budget.
XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle
2002-05-01
We developed a multi-modality image presentation software for display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projections of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goal of this pilot project is to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.
Supporting the Growing Needs of the GIS Industry
NASA Technical Reports Server (NTRS)
2003-01-01
Visual Learning Systems, Inc. (VLS), of Missoula, Montana, has developed a commercial software application called Feature Analyst. Feature Analyst was conceived under a Small Business Innovation Research (SBIR) contract with NASA's Stennis Space Center, and through the Montana State University TechLink Center, an organization funded by NASA and the U.S. Department of Defense to link regional companies with Federal laboratories for joint research and technology transfer. The software provides a paradigm shift to automated feature extraction, as it utilizes spectral, spatial, temporal, and ancillary information to model the feature extraction process; presents the ability to remove clutter; incorporates advanced machine learning techniques to supply unparalleled levels of accuracy; and includes an exceedingly simple interface for feature extraction.
Go Ahead of Malware’s Infections and Controls: Towards New Techniques for Proactive Cyber Defense
2016-12-08
in SDN (such as topology poisoning attacks and data-to-control-plane saturation attacks) and developed new defenses for SDN (such as TopoGuard and ...). As part of our research on discovering new vulnerabilities in this future networking paradigm, we demonstrate that these new attacks can effectively poison the network topology information and then further successfully ...
A Layered Solution for Supercomputing Storage
Grider, Gary
2018-06-13
To solve the supercomputing challenge of memory keeping up with processing speed, a team at Los Alamos National Laboratory developed two innovative memory management and storage technologies. Burst buffers peel off data onto flash memory to support the checkpoint/restart paradigm of large simulations. MarFS adds a thin software layer enabling a new tier for campaign storage, based on inexpensive, failure-prone disk drives, between disk drives and tape archives.
2015-05-30
scalable application of cutting-edge technologies. 4. Responding to changing resources: with likely significant resource reductions, the depot ... deal with underutilized organic capability while continuing to increase outsourcing of depot workload. In addition, the study states that ... the unique organic skills that TYAD could bring to the software sustainment mission could be valuable based on the specific type of software
PyMUS: Python-Based Simulation Software for Virtual Experiments on Motor Unit System
Kim, Hojeong; Kim, Minjung
2018-01-01
We constructed a physiologically plausible, computationally efficient model of a motor unit and developed simulation software that allows for integrative investigations of the input–output processing in the motor unit system. The model motor unit was first built by coupling the motoneuron model and muscle unit model to a simplified axon model. To build the motoneuron model, we used a recently reported two-compartment modeling approach that accurately captures the key cell-type-related electrical properties under both passive conditions (somatic input resistance, membrane time constant, and signal attenuation properties between the soma and the dendrites) and active conditions (rheobase current and afterhyperpolarization duration at the soma and plateau behavior at the dendrites). To construct the muscle unit, we used a recently developed muscle modeling approach that reflects the experimentally identified dependencies of muscle activation dynamics on isometric, isokinetic and dynamic variation in muscle length over a full range of stimulation frequencies. Then, we designed the simulation software based on the object-oriented programming paradigm and developed the software using the open-source Python language to be fully operational using graphical user interfaces. Using the developed software, separate simulations could be performed for a single motoneuron, muscle unit and motor unit under a wide range of experimental input protocols, and a hierarchical analysis could be performed from a single channel to the entire system behavior. Our model motor unit and simulation software may represent efficient tools not only for researchers studying the neural control of force production from a cellular perspective but also for instructors and students in motor physiology classroom settings. PMID:29695959
PyMUS: Python-Based Simulation Software for Virtual Experiments on Motor Unit System.
Kim, Hojeong; Kim, Minjung
2018-01-01
We constructed a physiologically plausible, computationally efficient model of a motor unit and developed simulation software that allows for integrative investigations of the input-output processing in the motor unit system. The model motor unit was first built by coupling the motoneuron model and muscle unit model to a simplified axon model. To build the motoneuron model, we used a recently reported two-compartment modeling approach that accurately captures the key cell-type-related electrical properties under both passive conditions (somatic input resistance, membrane time constant, and signal attenuation properties between the soma and the dendrites) and active conditions (rheobase current and afterhyperpolarization duration at the soma and plateau behavior at the dendrites). To construct the muscle unit, we used a recently developed muscle modeling approach that reflects the experimentally identified dependencies of muscle activation dynamics on isometric, isokinetic and dynamic variation in muscle length over a full range of stimulation frequencies. Then, we designed the simulation software based on the object-oriented programming paradigm and developed the software using the open-source Python language to be fully operational using graphical user interfaces. Using the developed software, separate simulations could be performed for a single motoneuron, muscle unit and motor unit under a wide range of experimental input protocols, and a hierarchical analysis could be performed from a single channel to the entire system behavior. Our model motor unit and simulation software may represent efficient tools not only for researchers studying the neural control of force production from a cellular perspective but also for instructors and students in motor physiology classroom settings.
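The object-oriented composition described above, a motoneuron model and a muscle unit model coupled through a simplified axon, might be organized roughly as in the sketch below; the class names and the trivial placeholder dynamics are assumptions for illustration, not the published PyMUS equations.

```python
# Rough sketch of the object-oriented coupling described in the abstract.
# Class names and placeholder dynamics are illustrative assumptions only.
class Motoneuron:
    def __init__(self, rheobase_nA=5.0):
        self.rheobase_nA = rheobase_nA

    def fires(self, input_current_nA):
        # Placeholder threshold rule standing in for the two-compartment model.
        return input_current_nA >= self.rheobase_nA


class MuscleUnit:
    def twitch_force(self, activated):
        # Placeholder activation-to-force mapping.
        return 1.0 if activated else 0.0


class MotorUnit:
    """Couples a motoneuron to a muscle unit via a simplified 'axon'."""

    def __init__(self):
        self.neuron = Motoneuron()
        self.muscle = MuscleUnit()

    def step(self, input_current_nA):
        spike = self.neuron.fires(input_current_nA)
        return self.muscle.twitch_force(spike)


if __name__ == "__main__":
    unit = MotorUnit()
    print([unit.step(i) for i in (2.0, 8.0)])  # [0.0, 1.0]
```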
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics; and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
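As a point of reference for the count-data regression component, a plain Poisson regression on defect counts can be fit in a few lines; this baseline is ordinary GLM machinery, not the max-margin Bayesian model proposed in the paper, and the metric names and synthetic data are invented for illustration.

```python
# Baseline Poisson regression of (synthetic) software metrics vs. defect counts.
# Ordinary GLM machinery, not the max-margin Bayesian model of the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
loc = rng.uniform(50, 500, n)          # hypothetical "lines of code" metric
complexity = rng.uniform(1, 10, n)     # hypothetical complexity metric
rate = np.exp(0.002 * loc + 0.15 * complexity)
defects = rng.poisson(rate)            # simulated defect counts

X = sm.add_constant(np.column_stack([loc, complexity]))
model = sm.GLM(defects, X, family=sm.families.Poisson()).fit()
print(model.params)          # intercept and per-metric coefficients
print(model.predict(X[:5]))  # expected defect counts for the first five modules
```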
NASA Astrophysics Data System (ADS)
Conboy, Kieran; Lang, Michael
This chapter outlines the alternative perspectives of "rationalism" and "improvisation" within information systems development and describes the major shortcomings of each. It then discusses how these shortcomings manifested themselves within an e-government case study where a "structured" requirements management method was employed. Although this method was very prescriptive and firmly rooted in the "rational" paradigm, it was observed that users often resorted to improvised behaviour, such as privately making decisions on how certain aspects of the method should or should not be implemented.
NASA Astrophysics Data System (ADS)
Segret, Boris; Semery, Alain; Vannitsen, Jordan; Mosser, Benoît.; Miau, Jiun-Jih; Juang, Jyh-Ching; Deleflie, Florent
2014-08-01
The AGILE principles of the software industry seem well adapted to the paradigm of CubeSat missions that involve students in the development of space missions. Some well-known engineering and program processes are revisited using the example of an interplanetary CubeSat mission profile that has been developed by several teams of students in various countries and at various educational levels since 02/2013. The lessons learned in adapting traditional space mission methods are emphasized, and they produce a metaphoric image of paving stones.
Multimission Software Reuse in an Environment of Large Paradigm Shifts
NASA Technical Reports Server (NTRS)
Wilson, Robert K.
1996-01-01
The ground data systems provided for NASA space mission support are discussed. As space missions expand, the ground systems requirements become more complex. Current ground data systems provide for telemetry, command, and uplink and downlink processing capabilities. The new millennium project (NMP) technology testbed for 21st century NASA missions is discussed. The program demonstrates spacecraft and ground system technologies. The paradigm shift from detailed ground sequencing to a goal oriented planning approach is considered. The work carried out to meet this paradigm for the Deep Space-1 (DS-1) mission is outlined.
Olfactory Cued Learning Paradigm.
Liu, Gary; McClard, Cynthia K; Tepe, Burak; Swanson, Jessica; Pekarek, Brandon; Panneerselvam, Sugi; Arenkiel, Benjamin R
2017-05-05
Sensory stimulation leads to structural changes within the CNS (Central Nervous System), thus providing the fundamental mechanism for learning and memory. The olfactory circuit offers a unique model for studying experience-dependent plasticity, partly due to a continuous supply of integrating adult-born neurons. Our lab has recently implemented an olfactory cued learning paradigm in which specific odor pairs are coupled to either a reward or punishment to study downstream circuit changes. The following protocol outlines the basic setup for our learning paradigm. Here, we describe the equipment setup, programming of software, and method of behavioral training.
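The odor-to-outcome pairing at the heart of the paradigm can be expressed as a small trial-schedule generator; the odor names, outcomes, and trial count below are hypothetical, and the real protocol drives olfactometer and reward/punishment hardware that this sketch does not model.

```python
# Hypothetical trial-schedule generator for an odor-cued learning session.
# Odor names, outcomes, and trial count are illustrative assumptions only.
import random

ODOR_OUTCOME = {"odor_A": "reward", "odor_B": "punishment"}  # assumed pairing


def build_session(n_trials=20, seed=0):
    rng = random.Random(seed)
    odors = [rng.choice(list(ODOR_OUTCOME)) for _ in range(n_trials)]
    return [(i + 1, odor, ODOR_OUTCOME[odor]) for i, odor in enumerate(odors)]


for trial, odor, outcome in build_session(5):
    print(f"trial {trial}: present {odor} -> deliver {outcome}")
```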
The Future of the Web, Intelligent Devices, and Education.
ERIC Educational Resources Information Center
Strauss, Howard
1999-01-01
Examines past trends in hardware, software, networking, and education, in an attempt to determine where they are going and what their broad implications might be. Speculates on what will replace the World Wide Web. Describes new applications and telematons along with a new paradigm for education called SMILE (Software-Managed Instruction,…
A Survey of Middleware for Sensor and Network Virtualization
Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd.
2014-01-01
Wireless Sensor Networks (WSNs) are leading to a new paradigm, the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a single application. However, the future of WSNs lies in the aggregation and allocation of resources to serve diverse applications. WSN virtualization by middleware is an emerging concept that enables aggregation of multiple independent heterogeneous devices, networks, radios, and software platforms, and enhances application development. WSN virtualization middleware can further be categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges, such as efficient decoupling of networks, devices, and software. This paper provides an overview of previous and current middleware designs for WSN virtualization, covering design goals, software architectures, abstracted services, testbeds, and programming techniques. Furthermore, it presents the proposed model, challenges, and future opportunities for further research in middleware designs for WSN virtualization. PMID:25615737
A survey of middleware for sensor and network virtualization.
Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd
2014-12-12
Wireless Sensor Networks (WSNs) are leading to a new paradigm, the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a single application. However, the future of WSNs lies in the aggregation and allocation of resources to serve diverse applications. WSN virtualization by middleware is an emerging concept that enables aggregation of multiple independent heterogeneous devices, networks, radios, and software platforms, and enhances application development. WSN virtualization middleware can further be categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges, such as efficient decoupling of networks, devices, and software. This paper provides an overview of previous and current middleware designs for WSN virtualization, covering design goals, software architectures, abstracted services, testbeds, and programming techniques. Furthermore, it presents the proposed model, challenges, and future opportunities for further research in middleware designs for WSN virtualization.
Security Risks of Cloud Computing and Its Emergence as 5th Utility Service
NASA Astrophysics Data System (ADS)
Ahmad, Mushtaq
Cloud computing is being projected by the major cloud service provider IT companies, such as IBM, Google, Yahoo, and Amazon, as the fifth utility, in which clients gain access to processing for applications and software projects that need very high processing speed and huge data capacity, whether for compute-intensive scientific and engineering research problems or for e-business and data content network applications. These services are provided to different types of clients under DASM (Direct Access Service Management), based on virtualization of hardware and software and very high bandwidth Internet (Web 2.0) communication. The paper reviews these developments in cloud computing and the hardware/software configuration of the cloud paradigm. It also examines the vital aspects of the security risks projected by IT industry experts and cloud clients, and highlights the cloud providers' responses to those security risks.
NASA Technical Reports Server (NTRS)
Steib, Michael
1991-01-01
The APD software features include on-line help, a three-level architecture (logic environments, setup/application environment, data environment), an explanation capability, and file handling. The kinds of experimentation and record keeping that lead to effective expert systems are facilitated by: (1) a library of inferencing modules (in the logic environment); (2) an explanation capability which reveals logic strategies to users; (3) automated file naming conventions; (4) an information retrieval system; and (5) on-line help. These aid effective use of knowledge, debugging, and experimentation. Since the APD software anticipates the logical rules becoming complicated, it is embedded in a production system language (CLIPS) to ensure the full power of the CLIPS production system paradigm and the availability of the procedural language C. The development of the APD software is discussed along with three example applications: a toy, an experimental, and an operational prototype for submarine maintenance predictions.
Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G.
2017-01-01
Objective Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. Approach In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. Main Results We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. Significance We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms. PMID:29349070
Blum, Sarah; Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G
2017-01-01
Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms.
A new paradigm on battery powered embedded system design based on User-Experience-Oriented method
NASA Astrophysics Data System (ADS)
Wang, Zhuoran; Wu, Yue
2014-03-01
Battery sustainable time, determined by battery capacity and power consumption, has recently been an active research topic in the development of battery-powered embedded products such as tablets and smartphones. Despite numerous efforts to improve battery capacity in the field of materials engineering, power consumption also plays an important role in delivering a desirable user experience and is easier to ameliorate, especially considering the modest advancement of batteries over recent decades. In this study, a new top-down modelling method, the User-Experience-Oriented Battery Powered Embedded System Design Paradigm, is proposed to estimate the target average power consumption, to guide the hardware and software design, and eventually to approach the theoretical lowest power consumption at which the application can still provide full functionality. Starting from the 10-hour sustainable-time standard, a target average working current is derived from the battery design capacity. An implementation is then illustrated from both the hardware perspective, summarized as Auto-Gating power management, and the software perspective, which introduces a new algorithm, SleepVote, to guide system task design and scheduling.
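The starting point of the paradigm, deriving a target average current from the design capacity and the 10-hour sustainable-time standard, is simple arithmetic; the 3000 mAh capacity below is an assumed example value, not a figure from the paper.

```python
# Target average current budget for a 10-hour sustainable time.
# The 3000 mAh capacity is an assumed example, not a value from the paper.
def target_average_current_ma(capacity_mah, target_hours=10.0):
    return capacity_mah / target_hours


print(target_average_current_ma(3000))  # 300.0 mA average budget
```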
A real-time detector system for precise timing of audiovisual stimuli.
Henelius, Andreas; Jagadeesan, Sharman; Huotilainen, Minna
2012-01-01
The successful recording of neurophysiologic signals, such as event-related potentials (ERPs) or event-related magnetic fields (ERFs), relies on precise information about stimulus presentation times. We have developed an accurate and flexible audiovisual sensor solution operating in real time for on-line use in both auditory and visual ERP and ERF paradigms. The sensor functions independently of the audio or video stimulus presentation tools and signal acquisition system used. The sensor solution consists of two independent sensors: one for sound and one for light. The microcontroller-based audio sensor incorporates a novel approach to the detection of natural sounds such as multipart audio stimuli, using an adjustable dead time. This aids in producing exact markers for complex auditory stimuli and reduces the number of false detections. The analog photosensor circuit detects changes in light intensity on the screen and produces a marker for changes exceeding a threshold. The microcontroller software for the audio sensor is free and open source, allowing other researchers to customise the sensor for use in specific auditory ERP/ERF paradigms. The hardware schematics and software for the audiovisual sensor are freely available from the webpage of the authors' lab.
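The audio sensor's core idea, a threshold detector with an adjustable dead time so that a multipart sound yields a single marker, can be sketched in software as below; the threshold, sample rate, and dead-time values are assumed example numbers, and the real device implements this on a microcontroller rather than in Python.

```python
# Threshold detection with an adjustable dead (refractory) time, mirroring the
# audio sensor's behaviour in software. Threshold, sample rate, and dead time
# are assumed example values.
def detect_onsets(samples, threshold=0.5, fs=44_100, dead_time_s=0.2):
    dead_samples = int(dead_time_s * fs)
    markers, last_onset = [], -dead_samples
    for i, amp in enumerate(samples):
        if abs(amp) >= threshold and i - last_onset >= dead_samples:
            markers.append(i / fs)   # onset time in seconds
            last_onset = i           # suppress re-triggering during dead time
    return markers


# Two bursts roughly 50 ms apart are reported as a single stimulus onset.
burst = [0.0] * 100 + [0.9] * 10 + [0.0] * 2200 + [0.9] * 10
print(detect_onsets(burst))
```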
Comparing the OpenMP, MPI, and Hybrid Programming Paradigm on an SMP Cluster
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Hao-Qiang; anMey, Dieter; Hatay, Ferhat F.
2003-01-01
Clusters of SMP (Symmetric Multi-Processors) nodes provide support for a wide range of parallel programming paradigms. The shared address space within each node is suitable for OpenMP parallelization. Message passing can be employed within and across the nodes of a cluster. Multiple levels of parallelism can be achieved by combining message passing and OpenMP parallelization. Which programming paradigm is the best will depend on the nature of the given problem, the hardware components of the cluster, the network, and the available software. In this study we compare the performance of different implementations of the same CFD benchmark application, using the same numerical algorithm but employing different programming paradigms.
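A minimal sketch of the hybrid idea, message passing across nodes combined with shared-memory parallelism within each node, is shown below using mpi4py plus a Python thread pool as stand-ins for the paper's MPI/OpenMP codes; it illustrates only the two-level structure, not the CFD benchmark itself, and the script name in the run comment is hypothetical.

```python
# Hybrid two-level parallelism sketch: mpi4py across processes, a thread pool
# within each process. This only illustrates the structure of the MPI+OpenMP
# combination discussed above (the GIL limits pure-Python thread speedup).
# Run with e.g.:  mpiexec -n 4 python hybrid_sketch.py
from concurrent.futures import ThreadPoolExecutor

from mpi4py import MPI


def local_work(chunk):
    return sum(x * x for x in chunk)


comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Outer level: each MPI rank owns one slice of the global index range.
n = 1_000_000
data = list(range(rank * n // size, (rank + 1) * n // size))

# Inner level: shared-memory-style parallelism within the rank.
chunks = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    local_sum = sum(pool.map(local_work, chunks))

total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("global sum of squares:", total)
```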
Assessment and Rehabilitation of Central Sensory Impairments for Balance in mTBI
2016-10-01
place; 95% complete. ● Purchasing and testing software for the Opal sensors; awaiting release of a newer, updated sensor from APDM to determine the need for more sensors ... 2016. ● Develop a new algorithm to automatically quantify head movements from the Opal sensor; 100% complete 23-Sep-2016. ● Set up and test gait paradigm ... Interaction in Balance (mCTSIB), Modified Balance Error Scoring System (mBESS), and walking tests, subjects wear five Opal inertial sensors (APDM, Inc.)
Paramedir: A Tool for Programmable Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Labarta, Jesus; Gimenez, Judit
2004-01-01
Performance analysis of parallel scientific applications is time consuming and requires great expertise in areas such as programming paradigms, system software, and computer hardware architectures. In this paper we describe a tool that facilitates the programmability of performance metric calculations thereby allowing the automation of the analysis and reducing the application development time. We demonstrate how the system can be used to capture knowledge and intuition acquired by advanced parallel programmers in order to be transferred to novice users.
SimVascular: An Open Source Pipeline for Cardiovascular Simulation.
Updegrove, Adam; Wilson, Nathan M; Merkow, Jameson; Lan, Hongzhi; Marsden, Alison L; Shadden, Shawn C
2017-03-01
Patient-specific cardiovascular simulation has become a paradigm in cardiovascular research and is emerging as a powerful tool in basic, translational and clinical research. In this paper we discuss the recent development of a fully open-source SimVascular software package, which provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation and analysis. This package serves as a research tool for cardiovascular modeling and simulation, and has contributed to numerous advances in personalized medicine, surgical planning and medical device design. The SimVascular software has recently been refactored and expanded to enhance functionality, usability, efficiency and accuracy of image-based patient-specific modeling tools. Moreover, SimVascular previously required several licensed components that hindered new user adoption and code management and our recent developments have replaced these commercial components to create a fully open source pipeline. These developments foster advances in cardiovascular modeling research, increased collaboration, standardization of methods, and a growing developer community.
Unidata Cyberinfrastructure in the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.; Young, J. W.
2016-12-01
Data services, software, and user support are critical components of geosciences cyberinfrastructure that help researchers advance science. With its maturity and significant advances, cloud computing has recently emerged as an alternative paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now usable enough in many areas of science and education to bring the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA, and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in its own private or public cloud setting. Specifically, Unidata has developed Docker containers for its applications, making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to interactively connect data servers, Python scientific libraries, scripts, and workflows; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.
Type Safe Extensible Programming
NASA Astrophysics Data System (ADS)
Chae, Wonseok
2009-10-01
Software products evolve over time. Sometimes they evolve by adding new features, and sometimes by either fixing bugs or replacing outdated implementations with new ones. When software engineers fail to anticipate such evolution during development, they will eventually be forced to re-architect or re-build from scratch. Therefore, it has been common practice to prepare for changes so that software products are extensible over their lifetimes. However, making software extensible is challenging because it is difficult to anticipate successive changes and to provide adequate abstraction mechanisms over potential changes. Such extensibility mechanisms, furthermore, should not compromise any existing functionality during extension. Software engineers would benefit from a tool that provides a way to add extensions in a reliable way. It is natural to expect programming languages to serve this role. Extensible programming is one effort to address these issues. In this thesis, we present type safe extensible programming using the MLPolyR language. MLPolyR is an ML-like functional language whose type system provides type-safe extensibility mechanisms at several levels. After presenting the language, we will show how these extensibility mechanisms can be put to good use in the context of product line engineering. Product line engineering is an emerging software engineering paradigm that aims to manage variations, which originate from successive changes in software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; Sadlier, Ronald J
We show how to extend the paradigm of software-defined communication to include quantum communication systems. We introduce the decomposition of a quantum communication terminal into layers separating the concerns of the hardware, software, and middleware. We provide detailed descriptions of how each component operates and we include results of an implementation of the super-dense coding protocol. We argue that the versatility of software-defined quantum communication test beds can be useful for exploring new regimes in communication and rapidly prototyping new systems.
An SSME High Pressure Oxidizer Turbopump diagnostic system using G2 real-time expert system
NASA Technical Reports Server (NTRS)
Guo, Ten-Huei
1991-01-01
An expert system which diagnoses various seal leakage faults in the High Pressure Oxidizer Turbopump of the SSME was developed using G2 real-time expert system. Three major functions of the software were implemented: model-based data generation, real-time expert system reasoning, and real-time input/output communication. This system is proposed as one module of a complete diagnostic system for the SSME. Diagnosis of a fault is defined as the determination of its type, severity, and likelihood. Since fault diagnosis is often accomplished through the use of heuristic human knowledge, an expert system based approach has been adopted as a paradigm to develop this diagnostic system. To implement this approach, a software shell which can be easily programmed to emulate the human decision process, the G2 Real-Time Expert System, was selected. Lessons learned from this implementation are discussed.
NASA Astrophysics Data System (ADS)
Tokareva, Victoria
2018-04-01
New-generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure for using it in everyday clinical practice, the so-called Picture Archiving and Communication Systems (PACS). Developing a distributed PACS is a challenging task for today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large, high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for server-side image reconstruction, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and as well adapted to the needs of end users as possible.
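The split the author proposes, mapping over independent pieces of an image and reducing the partial results, can be illustrated in plain Python; on Hadoop the mapper and reducer would run as separate tasks over distributed data, and the tile-averaging "reconstruction" below is an invented placeholder for the actual reconstruction step.

```python
# Plain-Python illustration of the map/reduce split proposed for server-side
# image reconstruction. The tile-averaging step is an invented placeholder;
# on Hadoop, the mapper and reducer would be separate distributed tasks.
from functools import reduce


def mapper(tile):
    """Emit (pixel_count, pixel_sum) for one image tile."""
    pixels = [p for row in tile for p in row]
    return (len(pixels), sum(pixels))


def reducer(acc, partial):
    """Combine partial (count, sum) pairs produced by the mappers."""
    return (acc[0] + partial[0], acc[1] + partial[1])


tiles = [
    [[10, 12], [11, 13]],       # tile 1
    [[200, 202], [201, 203]],   # tile 2
]
count, total = reduce(reducer, map(mapper, tiles), (0, 0))
print("mean intensity:", total / count)
```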
Experimental Internet Environment Software Development
NASA Technical Reports Server (NTRS)
Maddux, Gary A.
1998-01-01
Geographically distributed project teams need an Internet-based collaborative work environment, or "Intranet." The Virtual Research Center (VRC) is an experimental Intranet server that combines several services such as desktop conferencing, file archives, on-line publishing, and security. Using the World Wide Web (WWW) as a shared space paradigm, the Graphical User Interface (GUI) presents users with images of a lunar colony. Each project has a wing of the colony and each wing has a conference room, library, laboratory, and mail station. In FY95, the VRC development team proved the feasibility of this shared space concept by building a prototype using a Netscape commerce server and several public domain programs. Successful demonstrations of the prototype resulted in approval for a second phase. Phase 2, documented by this report, will produce a seamlessly integrated environment by introducing new technologies such as Java and Adobe Web Links to replace less efficient interface software.
An SSME high pressure oxidizer turbopump diagnostic system using G2(TM) real-time expert system
NASA Technical Reports Server (NTRS)
Guo, Ten-Huei
1991-01-01
An expert system which diagnoses various seal leakage faults in the High Pressure Oxidizer Turbopump of the SSME was developed using G2(TM) real-time expert system. Three major functions of the software were implemented: model-based data generation, real-time expert system reasoning, and real-time input/output communication. This system is proposed as one module of a complete diagnostic system for Space Shuttle Main Engine. Diagnosis of a fault is defined as the determination of its type, severity, and likelihood. Since fault diagnosis is often accomplished through the use of heuristic human knowledge, an expert system based approach was adopted as a paradigm to develop this diagnostic system. To implement this approach, a software shell which can be easily programmed to emulate the human decision process, the G2 Real-Time Expert System, was selected. Lessons learned from this implementation are discussed.
Pynamic: the Python Dynamic Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, G L; Ahn, D H; de Supinksi, B R
2007-07-10
Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file IO and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
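For readers unfamiliar with the behavior class Pynamic emulates, the toy sketch below simply times the dynamic loading of many shared libraries from Python; it is our own illustration, not part of the benchmark, and the library paths are placeholders.

```python
# Illustrative only: time the dynamic loading of a large set of shared libraries,
# the behavior class that Pynamic emulates at scale. Paths are placeholders.
import ctypes
import glob
import time

libs = sorted(glob.glob("./generated_libs/libmodule_*.so"))  # hypothetical DLLs
start = time.perf_counter()
handles = [ctypes.CDLL(path) for path in libs]               # dlopen() each library
elapsed = time.perf_counter() - start
print(f"loaded {len(handles)} DLLs in {elapsed:.3f} s")
```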
MAX - An advanced parallel computer for space applications
NASA Technical Reports Server (NTRS)
Lewis, Blair F.; Bunker, Robert L.
1991-01-01
MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous event and data driven environment. A large grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamic allocation of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.
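In its simplest form, the redundant-execution-with-voting feature described above reduces to majority voting over replica outputs. The sketch below is a generic illustration of that idea, not MAX flight code.

```python
# Minimal majority voter for redundant task execution (illustrative, not MAX code).
from collections import Counter

def vote(results):
    """Return the majority value among replica results, or raise if none exists."""
    value, count = Counter(results).most_common(1)[0]
    if count * 2 <= len(results):
        raise RuntimeError("no majority -- fault cannot be masked")
    return value

# Three redundant executions of the same critical task; one replica is faulty.
print(vote([42, 42, 41]))   # -> 42
```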
NASA Astrophysics Data System (ADS)
Nasrollahi, Amir; Ma, Zhaoyun; Rizzo, Piervincenzo
2017-04-01
In this paper we present a structural health monitoring (SHM) paradigm based on the simultaneous use of ultrasounds and electromechanical impedance (EMI) to monitor waveguides. The paradigm uses guided ultrasonic waves (GUWs) in pitch-catch mode and EMI simultaneously. The two methodologies are driven by the same sensing/hardware/software unit. To assess the feasibility of this unified system an aluminum plate was monitored for varying damage location. Damage was simulated by adding small masses to the plate. The results associated with pitch-catch GUW testing mode were used in ultrasonic tomography, and statistical analysis was used to detect the damages using the EMI measurements. The results of GUW and EMI monitoring show that the proposed system is robust and can be developed further to address the challenges associated with the SHM of complex structures.
Analyzing the Web Services and UniFrame Paradigms
2003-04-01
paradigm from a centralized one to a distributed one. Hence, the target environment is no longer centrally managed, but concerned with collaboration...lever (business logic level) and provide a new platform to build software for a distributed environment. UniFrame is a research project that aims to...EAI solutions provide tends to be complex and expensive, despite improving the overall communication. In addition, the EAI interfaces are not reusable
1991-12-01
abstract data type is, what an object-oriented design is and how to apply "software engineering" principles to the design of both of them. I owe a great... Program (ASVP), a research and development effort by two aerospace contractors to redesign and implement subsets of two existing flight simulators in...effort addresses how to implement a simulator designed using the SEI OOD Paradigm on a distributed, parallel, multiple instruction, multiple data (MIMD
NASA Technical Reports Server (NTRS)
Goodrich, Charles C.
1993-01-01
The goal of this project is to investigate the use of visualization software based on the visual programming and data-flow paradigms to meet the needs of the SPOF and, through it, the International Solar Terrestrial Physics (ISTP) science community. Specific needs we address include science planning, data interpretation, comparisons of data with simulation and model results, and data acquisition. Our accomplishments during the twelve-month grant period are discussed below.
QUEST/Ada: Query utility environment for software testing of Ada
NASA Technical Reports Server (NTRS)
Brown, David B.
1989-01-01
Results of research and development efforts are presented for Task 1, Phase 2 of a general project entitled, The Development of a Program Analysis Environment for Ada. A prototype of the QUEST/Ada system was developed to collect data to determine the effectiveness of the rule-based testing paradigm. The prototype consists of five parts: the test data generator, the parser/scanner, the test coverage analyzer, a symbolic evaluator, and a data management facility, known as the Librarian. These components are discussed at length. Also presented is an experimental design for the evaluations, an overview of the project, and a schedule for its completion.
NAS-current status and future plans
NASA Technical Reports Server (NTRS)
Bailey, F. R.
1987-01-01
The Numerical Aerodynamic Simulation (NAS) has met its first major milestone, the NAS Processing System Network (NPSN) Initial Operating Configuration (IOC). The program has met its goal of providing a national supercomputer facility capable of greatly enhancing the Nation's research and development efforts. Furthermore, the program is fulfilling its pathfinder role by defining and implementing a paradigm for supercomputing system environments. The IOC is only the beginning, and the NAS Program will aggressively continue to develop and implement emerging supercomputer, communications, storage, and software technologies to strengthen computations as a critical element in supporting the Nation's leadership role in aeronautics.
NASA Astrophysics Data System (ADS)
Tošić, Saša; Mitrović, Dejan; Ivanović, Mirjana
2013-10-01
Agent-oriented programming languages are designed to simplify the development of software agents, especially those that exhibit complex, intelligent behavior. This paper presents recent improvements of AgScala, an agent-oriented programming language based on Scala. AgScala includes declarative constructs for managing beliefs, actions and goals of intelligent agents. Combined with object-oriented and functional programming paradigms offered by Scala, it aims to be an efficient framework for developing both purely reactive, and more complex, deliberate agents. Instead of the Prolog back-end used initially, the new version of AgScala relies on Agent Planning Package, a more advanced system for automated planning and reasoning.
ERIC Educational Resources Information Center
Read, Brock
2008-01-01
A parallel between plagiarism and corporate crime raises eyebrows--and ire--on campuses, but for John Barrie, the comparison is a perfectly natural one. In the 10 years since he founded iParadigms, which sells the antiplagiarism software Turnitin, he has argued--forcefully, and at times combatively--that academic plagiarism is growing, and that…
2006-11-27
clever, but I see that there was nothing in it, after all" – said to Sherlock Holmes – "I begin to think that I make a mistake in explaining... The criticism from software (cont.): software complexity and performance is improving, especially in the key area of pattern
Rainsford, M; Palmer, M A; Paine, G
2018-04-01
Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.
Process and information integration via hypermedia
NASA Technical Reports Server (NTRS)
Hammen, David G.; Labasse, Daniel L.; Myers, Robert M.
1990-01-01
Success stories for advanced automation prototypes abound in the literature but the deployments of practical large systems are few in number. There are several factors that militate against the maturation of such prototypes into products. Here, the integration of advanced automation software into large systems is discussed. Advanced automation systems tend to be specific applications that need to be integrated and aggregated into larger systems. Systems integration can be achieved by providing expert user-developers with verified tools to efficiently create small systems that interface to large systems through standard interfaces. The use of hypermedia as such a tool in the context of the ground control centers that support Shuttle and space station operations is explored. Hypermedia can be an integrating platform for data, conventional software, and advanced automation software, enabling data integration through the display of diverse types of information and through the creation of associative links between chunks of information. Further, hypermedia enables process integration through graphical invoking of system functions. Through analysis and examples, researchers illustrate how diverse information and processing paradigms can be integrated into a single software platform.
NASA Astrophysics Data System (ADS)
Shichkina, Y. A.; Kupriyanov, M. S.; Moldachev, S. O.
2018-05-01
Descriptions of new Internet-connected devices appear on the Internet with increasing frequency. For the Industrial Internet of Things to operate efficiently, data must be processed at a modern level along the entire path from acquisition at the devices to the return of processed results to those devices. Current Internet of Things solutions focus mainly on centralized designs, projecting the Internet of Things onto a set of cloud-based platforms that are open but limit the ability of participants of the Internet of Things to adapt these systems to their own problems. It is therefore often necessary to create specialized software for specific Internet of Things application areas. This article describes a solution to the problem of virtualizing a system of devices using Docker. The solution allows developers to test any software on any number of devices forming a mesh.
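At the plumbing level, a Docker-based mesh of emulated devices can be approximated by launching one container per device. The sketch below uses the plain docker CLI; the image name and port scheme are placeholders of ours, not artifacts from the article.

```python
# Illustrative only: start N containers, one per emulated IoT device, using the
# plain `docker run` CLI. Image name and port mapping are hypothetical placeholders.
import subprocess

IMAGE = "iot-device-emulator:latest"   # hypothetical image

def start_device(index):
    name = f"device-{index:03d}"
    subprocess.run(
        ["docker", "run", "-d", "--rm",
         "--name", name,
         "-p", f"{9000 + index}:8080",   # expose each emulated device on its own port
         IMAGE],
        check=True,
    )
    return name

mesh = [start_device(i) for i in range(10)]
print("started:", ", ".join(mesh))
```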
A Federated Design for a Neurobiological Simulation Engine: The CBI Federated Software Architecture
Cornelis, Hugo; Coop, Allan D.; Bower, James M.
2012-01-01
Simulator interoperability and extensibility has become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsoleted components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components. PMID:22242154
Comparison of methods for quantitative evaluation of endoscopic distortion
NASA Astrophysics Data System (ADS)
Wang, Quanzeng; Castro, Kurt; Desai, Viraj N.; Cheng, Wei-Chung; Pfefer, Joshua
2015-03-01
Endoscopy is a well-established paradigm in medical imaging, and emerging endoscopic technologies such as high resolution, capsule and disposable endoscopes promise significant improvements in effectiveness, as well as patient safety and acceptance of endoscopy. However, the field lacks practical standardized test methods to evaluate key optical performance characteristics (OPCs), in particular the geometric distortion caused by fisheye lens effects in clinical endoscopic systems. As a result, it has been difficult to evaluate an endoscope's image quality or assess its changes over time. The goal of this work was to identify optimal techniques for objective, quantitative characterization of distortion that are effective and not burdensome. Specifically, distortion measurements from a commercially available distortion evaluation/correction software package were compared with a custom algorithm based on a local magnification (ML) approach. Measurements were performed using a clinical gastroscope to image square grid targets. Recorded images were analyzed with the ML approach and the commercial software where the results were used to obtain corrected images. Corrected images based on the ML approach and the software were compared. The study showed that the ML method could assess distortion patterns more accurately than the commercial software. Overall, the development of standardized test methods for characterizing distortion and other OPCs will facilitate development, clinical translation, manufacturing quality and assurance of performance during clinical use of endoscopic technologies.
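To make the local magnification (ML) idea concrete, the toy sketch below compares grid-intersection spacing near a point with the spacing at the image centre on a synthetic grid; this is our simplified reduction of the concept, not the authors' algorithm.

```python
# Simplified illustration of a local-magnification style distortion measure:
# the ratio of grid-square spacing near a point to the spacing at the image centre.
# This is our toy reduction of the idea, not the algorithm evaluated in the paper.
import numpy as np

def local_magnification(points, i, j, center_spacing):
    """points[i, j] = detected (x, y) of grid intersection (i, j) in the image."""
    dx = np.linalg.norm(points[i, j + 1] - points[i, j])   # local horizontal spacing
    dy = np.linalg.norm(points[i + 1, j] - points[i, j])   # local vertical spacing
    return 0.5 * (dx + dy) / center_spacing

# Synthetic 5x5 grid with mild compression away from the centre (fisheye-like).
ii, jj = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
r2 = (ii - 2) ** 2 + (jj - 2) ** 2
scale = 1.0 - 0.02 * r2
points = np.stack([jj * 10.0 * scale, ii * 10.0 * scale], axis=-1)

center = np.linalg.norm(points[2, 3] - points[2, 2])
print(local_magnification(points, 0, 0, center))   # < 1 away from the centre
```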
Object technology: A white paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, S.R.; Arrowood, L.F.; Cain, W.D.
1992-05-11
Object-Oriented Technology (OOT), although not a new paradigm, has recently been prominently featured in the trade press and even general business publications. Indeed, the promises of object technology are alluring: the ability to handle complex design and engineering information through the full manufacturing production life cycle or to manipulate multimedia information, and the ability to improve programmer productivity in creating and maintaining high quality software. Groups at a number of the DOE facilities have been exploring the use of object technology for engineering, business, and other applications. In this white paper, the technology is explored thoroughly and compared with previous means of developing software and storing databases of information. Several specific projects within the DOE Complex are described, and the state of the commercial marketplace is indicated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Andrs; Ray Berry; Derek Gaston
The document contains the simulation results of a steady state model PWR problem with the RELAP-7 code. The RELAP-7 code is the next generation nuclear reactor system safety analysis code being developed at Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework - MOOSE (Multi-Physics Object-Oriented Simulation Environment). This report summarizes the initial results of simulating a model steady-state single phase PWR problem using the current version of the RELAP-7 code. The major purpose of this demonstration simulation is to show that the RELAP-7 code can be rapidly developed to simulate single-phase reactor problems. RELAP-7 is a new project started on October 1st, 2011. It will become the main reactor systems simulation toolkit for RISMC (Risk Informed Safety Margin Characterization) and the next generation tool in the RELAP reactor safety/systems analysis application series (the replacement for RELAP5). The key to the success of RELAP-7 is the simultaneous advancement of physical models, numerical methods, and software design while maintaining a solid user perspective. Physical models include both PDEs (Partial Differential Equations) and ODEs (Ordinary Differential Equations) and experimentally based closure models. RELAP-7 will eventually utilize well posed governing equations for multiphase flow, which can be strictly verified. Closure models used in RELAP5 and newly developed models will be reviewed and selected to reflect the progress made during the past three decades. RELAP-7 uses modern numerical methods, which allow implicit time integration, higher order schemes in both time and space, and strongly coupled multi-physics simulations. RELAP-7 is written in the object-oriented programming language C++. Its development follows modern software design paradigms. The code is easy to read, develop, maintain, and couple with other codes. Most importantly, the modern software design allows the RELAP-7 code to evolve with time. RELAP-7 is a MOOSE-based application. MOOSE (Multiphysics Object-Oriented Simulation Environment) is a framework for solving computational engineering problems in a well-planned, managed, and coordinated way. By leveraging millions of lines of open source software packages, such as PETSC (a nonlinear solver developed at Argonne National Laboratory) and LibMesh (a Finite Element Analysis package developed at University of Texas), MOOSE significantly reduces the expense and time required to develop new applications. Numerical integration methods and mesh management for parallel computation are provided by MOOSE. Therefore, RELAP-7 code developers only need to focus on physics and user experience. By using the MOOSE development environment, RELAP-7 code is developed by following the same modern software design paradigms used for other MOOSE development efforts. There are currently over 20 different MOOSE based applications ranging from 3-D transient neutron transport and detailed 3-D transient fuel performance analysis to long-term material aging. Multi-physics and multi-dimensional analysis capabilities can be obtained by coupling RELAP-7 and other MOOSE based applications and by leveraging capabilities developed by other DOE programs. This allows restricting the focus of RELAP-7 to systems analysis-type simulations and gives priority to retaining and significantly extending RELAP5's capabilities.
NASA Technical Reports Server (NTRS)
Guarro, Sergio B.
2010-01-01
This report validates and documents the detailed features and practical application of the framework for software-intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioners. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner which is entirely compatible and integrated with the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center.
ERIC Educational Resources Information Center
Arar, Khalid
2016-01-01
The study traced the assimilation of new administrative software in an Arab school, assisted by collaboration between the school and an Arab academic teacher-training college in Israel. The research used a mixed-method paradigm. A questionnaire consisting of 81 items was administered to 55 of the school's teachers in two stages to elicit their…
ScriptingRT: A Software Library for Collecting Response Latencies in Online Studies of Cognition
Schubert, Thomas W.; Murteira, Carla; Collins, Elizabeth C.; Lopes, Diniz
2013-01-01
ScriptingRT is a new open source tool to collect response latencies in online studies of human cognition. ScriptingRT studies run as Flash applets in enabled browsers. ScriptingRT provides the building blocks of response latency studies, which are then combined with generic Apache Flex programming. Six studies evaluate the performance of ScriptingRT empirically. Studies 1–3 use specialized hardware to measure variance of response time measurement and stimulus presentation timing. Studies 4–6 implement a Stroop paradigm and run it both online and in the laboratory, comparing ScriptingRT to other response latency software. Altogether, the studies show that Flash programs developed in ScriptingRT show a small lag and an increased variance in response latencies. However, this did not significantly influence measured effects: The Stroop effect was reliably replicated in all studies, and the found effects did not depend on the software used. We conclude that ScriptingRT can be used to test response latency effects online. PMID:23805326
NADIR: A Flexible Archiving System Current Development
NASA Astrophysics Data System (ADS)
Knapic, C.; De Marco, M.; Smareglia, R.; Molinaro, M.
2014-05-01
The New Archiving Distributed InfrastructuRe (NADIR) is under development at the Italian center for Astronomical Archives (IA2) to increase the performance of the current archival software tools at the data center. Traditional software tools usually offer simple and robust solutions for data archiving and distribution but are awkward to adapt and reuse in projects that have different purposes. Data evolution in terms of data model, format, publication policy, version, and meta-data content is the main threat to re-usage. NADIR, using stable and mature framework features, addresses these very challenging issues. Its main characteristics are a configuration database; a multi-threading and multi-language environment (C++, Java, Python); special features to guarantee high scalability, modularity, robustness, and error tracking; and tools to monitor with confidence the status of each project at each archiving site. In this contribution, the development of the core components is presented, commenting also on some performance and innovative features (multi-cast and publisher-subscriber paradigms). NADIR is planned to be developed as simply as possible, with default configurations for every project, first of all for LBT and other IA2 projects.
NASA Astrophysics Data System (ADS)
Dervilllé, A.; Labrosse, A.; Zimmermann, Y.; Foucher, J.; Gronheid, R.; Boeckx, C.; Singh, A.; Leray, P.; Halder, S.
2016-03-01
The dimensional scaling in IC manufacturing strongly drives the demands on CD and defect metrology techniques and their measurement uncertainties. Defect review has become as important as CD metrology, and together they create a new metrology paradigm because they generate a completely new need for flexible, robust and scalable metrology software. Current software architectures and metrology algorithms are performant, but they must be pushed to a higher level in order to keep pace with roadmap speed and requirements: for example, handling defects and CD in a single-step algorithm, customizing algorithms and output features for each R&D team environment, and providing software updates every day or every week so that R&D teams can easily explore various development strategies. The final goal is to avoid spending hours and days manually tuning algorithms to analyze metrology data and to allow R&D teams to stay focused on their expertise. The benefits are drastic cost reductions, more efficient R&D teams and better process quality. In this paper, we propose a new generation of software platform and development infrastructure which can integrate specific metrology business modules. For example, we will show the integration of a chemistry module dedicated to electronic materials such as Directed Self-Assembly features. We will show a new generation of image analysis algorithms able to handle defect rates, image classifications, CD and roughness measurements at the same time, with high-throughput performance compatible with HVM. In a second part, we will assess the reliability and customizability of the algorithms and the capability of the software platform to meet new semiconductor metrology software requirements: flexibility, robustness, high throughput and scalability. Finally, we will demonstrate how such an environment has allowed a drastic reduction of data analysis cycle time.
Atlas : A library for numerical weather prediction and climate modelling
NASA Astrophysics Data System (ADS)
Deconinck, Willem; Bauer, Peter; Diamantakis, Michail; Hamrud, Mats; Kühnlein, Christian; Maciel, Pedro; Mengaldo, Gianmarco; Quintino, Tiago; Raoult, Baudouin; Smolarkiewicz, Piotr K.; Wedi, Nils P.
2017-11-01
The algorithms underlying numerical weather prediction (NWP) and climate models that have been developed in the past few decades face an increasing challenge caused by the paradigm shift imposed by hardware vendors towards more energy-efficient devices. In order to provide a sustainable path to exascale High Performance Computing (HPC), applications become increasingly restricted by energy consumption. As a result, the emerging diverse and complex hardware solutions have a large impact on the programming models traditionally used in NWP software, triggering a rethink of design choices for future massively parallel software frameworks. In this paper, we present Atlas, a new software library that is currently being developed at the European Centre for Medium-Range Weather Forecasts (ECMWF), with the scope of handling data structures required for NWP applications in a flexible and massively parallel way. Atlas provides a versatile framework for the future development of efficient NWP and climate applications on emerging HPC architectures. The applications range from full Earth system models, to specific tools required for post-processing weather forecast products. The Atlas library thus constitutes a step towards affordable exascale high-performance simulations by providing the necessary abstractions that facilitate the application in heterogeneous HPC environments by promoting the co-design of NWP algorithms with the underlying hardware.
ERIC Educational Resources Information Center
Morgan, Becka S.
2012-01-01
Open Source Software (OSS) communities are homogenous and their lack of diversity is of concern to many within this field. This problem is becoming more pronounced as it is the practice of many technology companies to use OSS participation as a factor in the hiring process, disadvantaging those who are not a part of this community. We should…
Open Source Seismic Software in NOAA's Next Generation Tsunami Warning System
NASA Astrophysics Data System (ADS)
Hellman, S. B.; Baker, B. I.; Hagerty, M. T.; Leifer, J. M.; Lisowski, S.; Thies, D. A.; Donnelly, B. K.; Griffith, F. P.
2014-12-01
The Tsunami Information Technology Modernization (TIM) is a project spearheaded by the National Oceanic and Atmospheric Administration to update the United States' Tsunami Warning System software currently employed at the Pacific Tsunami Warning Center (Ewa Beach, Hawaii) and the National Tsunami Warning Center (Palmer, Alaska). This entirely open source software project will integrate various seismic processing utilities with the National Weather Service Weather Forecast Office's core software, AWIPS2. For the real-time and near real-time seismic processing aspect of this project, NOAA has elected to integrate the open source portions of GFZ's SeisComP 3 (SC3) processing system into AWIPS2. To provide for better tsunami threat assessments, we are developing open source tools for magnitude estimations (e.g., moment magnitude, energy magnitude, surface wave magnitude), detection of slow earthquakes with the Theta discriminant, moment tensor inversions (e.g. W-phase and teleseismic body waves), finite fault inversions, and array processing. With our reliance on common data formats such as QuakeML and seismic community standard messaging systems, all new facilities introduced into AWIPS2 and SC3 will be available as stand-alone tools or could be easily integrated into other real-time seismic monitoring systems such as Earthworm, Antelope, etc. Additionally, we have developed a template-based design paradigm so that the developer or scientist can efficiently create upgrades, replacements, and/or new metrics to the seismic data processing with only a cursory knowledge of the underlying SC3.
An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics
2010-01-01
Background Bioinformatics researchers are now confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. Description An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date. Conclusions Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis on commodity Linux clusters, and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase; and due to the effectiveness and ease-of-use of the MapReduce method in parallelization of many data analysis algorithms. PMID:21210976
An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics.
Taylor, Ronald C
2010-12-21
Bioinformatics researchers are now confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date. Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis on commodity Linux clusters, and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase; and due to the effectiveness and ease-of-use of the MapReduce method in parallelization of many data analysis algorithms.
Towards a flexible middleware for context-aware pervasive and wearable systems.
Muro, Marco; Amoretti, Michele; Zanichelli, Francesco; Conte, Gianni
2012-11-01
Ambient intelligence and wearable computing call for innovative hardware and software technologies, including a highly capable, flexible and efficient middleware, allowing for the reuse of existing pervasive applications when developing new ones. In the considered application domain, middleware should also support self-management, interoperability among different platforms, efficient communications, and context awareness. In the on-going "everything is networked" scenario, scalability appears as a very important issue, for which the peer-to-peer (P2P) paradigm emerges as an appealing solution for connecting software components in an overlay network, allowing for efficient and balanced data distribution mechanisms. In this paper, we illustrate how all these concepts can be placed into a theoretical tool, called networked autonomic machine (NAM), implemented into a NAM-based middleware, and evaluated against practical problems of pervasive computing.
Prep-ME Software Implementation and Enhancement
DOT National Transportation Integrated Search
2017-09-01
Highway agencies across the United States are moving from empirical design procedures towards the mechanistic-empirical (ME) based pavement design. Even though the Pavement ME Design presents a new paradigm shift with several dramatic improvements, i...
Automatic Evolution of Molecular Nanotechnology Designs
NASA Technical Reports Server (NTRS)
Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)
1998-01-01
This paper describes strategies for automatically generating designs for analog circuits at the molecular level. Software maps out the edges and vertices of potential nanotechnology systems on graphs, then selects appropriate ones through evolutionary or genetic paradigms.
Mission Analysis, Operations, and Navigation Toolkit Environment (Monte) Version 040
NASA Technical Reports Server (NTRS)
Sunseri, Richard F.; Wu, Hsi-Cheng; Evans, Scott E.; Evans, James R.; Drain, Theodore R.; Guevara, Michelle M.
2012-01-01
Monte is a software set designed for use in mission design and spacecraft navigation operations. The system can process measurement data, design optimal trajectories and maneuvers, and do orbit determination, all in one application. For the first time, a single software set can be used for mission design and navigation operations. This eliminates problems due to different models and fidelities used in legacy mission design and navigation software. The unique features of Monte 040 include a blowdown thruster model for GRAIL (Gravity Recovery and Interior Laboratory) with associated pressure models, as well as an updated optimal-search capability (COSMIC) that facilitated mission design for ARTEMIS. Existing legacy software lacked the capabilities necessary for these two missions. There is also a mean orbital element propagator and an osculating to mean element converter that allows long-term orbital stability analysis for the first time in compiled code. The optimized trajectory search tool COSMIC allows users to place constraints and controls on their searches without any restrictions. Constraints may be user-defined and depend on trajectory information either forward or backwards in time. In addition, a long-term orbit stability analysis tool (morbiter) existed previously as a set of scripts on top of Monte. Monte is becoming the primary tool for navigation operations, a core competency at JPL. The mission design capabilities in Monte are becoming mature enough for use in project proposals as well as post-phase A mission design. Monte has three distinct advantages over existing software. First, it is being developed in a modern paradigm: object-oriented C++ and Python. Second, the software has been developed as a toolkit, which allows users to customize their own applications and allows the development team to implement requirements quickly, efficiently, and with minimal bugs. Finally, the software is managed in accordance with the CMMI (Capability Maturity Model Integration), where it has been appraised at maturity level 3.
Evolution of a Reconfigurable Processing Platform for a Next Generation Space Software Defined Radio
NASA Technical Reports Server (NTRS)
Kacpura, Thomas J.; Downey, Joseph A.; Anderson, Keffery R.; Baldwin, Keith
2014-01-01
The National Aeronautics and Space Administration (NASA)/Harris Ka-Band Software Defined Radio (SDR) is the first fully reprogrammable, space-qualified SDR operating in the Ka-Band frequency range. Providing substantially higher data communication rates than previously possible, this SDR offers in-orbit reconfiguration, multi-waveform operation, and fast deployment due to its highly modular hardware and software architecture. Currently in operation on the International Space Station (ISS), this new paradigm of reconfigurable technology is enabling experimenters to investigate navigation and networking in the space environment. The modular SDR and the NASA-developed Space Telecommunications Radio System (STRS) architecture standard are the basis for Harris' reusable digital signal processing space platform, trademarked as AppSTAR. Two new space radio products have already resulted: a synthetic aperture radar payload and an Automatic Dependent Surveillance-Broadcast (ADS-B) receiver. In addition, Harris is currently developing many new products similar to the Ka-Band software defined radio for other applications. For NASA's next-generation flight Ka-Band radio development, leveraging these advancements could lead to a more robust and more capable software defined radio. The space environment has special considerations, different from terrestrial applications, that must be addressed for any system operated in space. Each mission also has unique requirements, which can lead to products that are expensive and see limited reuse. Space systems put a premium on size, weight and power. A key trade is the amount of reconfigurability in a space system: the more reconfigurable the hardware platform, the easier it is to adapt the platform to the next mission, which reduces non-recurring engineering costs; however, more reconfigurable platforms often consume more spacecraft resources. Software has similar considerations to hardware. Having an architecture standard promotes reuse of software and firmware. Space platforms have limited processor capability, which makes the trade on the amount of flexibility paramount.
Livnat, Yarden; Galli, Nathan; Samore, Matthew H; Gundlapalli, Adi V
2012-01-01
Advances in surveillance science have supported public health agencies in tracking and responding to disease outbreaks. Increasingly, epidemiologists have been tasked with interpreting multiple streams of heterogeneous data arising from varied surveillance systems. As a result, public health personnel have experienced an overload of plots and charts as information visualization techniques have not kept pace with the rapid expansion in data availability. This study sought to advance the science of public health surveillance data visualization by conceptualizing a visual paradigm that provides an 'epidemiological canvas' for detection, monitoring, exploration and discovery of regional infectious disease activity and developing a software prototype of an 'infectious disease weather map'. Design objectives were elucidated and the conceptual model was developed using cognitive task analysis with public health epidemiologists. The software prototype was pilot tested using retrospective data from a large, regional pediatric hospital, and gastrointestinal and respiratory disease outbreaks were re-created as a proof of concept. PMID:22358039
Object-oriented approach for gas turbine engine simulation
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Felder, James L.
1995-01-01
An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules will interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report is the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
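As a generic illustration of the component-based, port-connected style such a simulation implies (not the NPSS prototype's actual class hierarchy), a minimal sketch might look like this:

```python
# Generic sketch of a component-based engine model with port connections.
# Illustrative of the object-oriented paradigm described above; not NPSS code.
from dataclasses import dataclass

@dataclass
class FlowStation:
    mass_flow: float = 0.0          # kg/s
    total_temp: float = 288.15      # K
    total_press: float = 101325.0   # Pa

class Component:
    def __init__(self, name):
        self.name = name
        self.inlet = FlowStation()
        self.outlet = FlowStation()

    def link_to(self, downstream):
        # Inter-component communication: downstream inlet aliases this outlet.
        downstream.inlet = self.outlet

    def execute(self):
        raise NotImplementedError

class Compressor(Component):
    def __init__(self, name, pressure_ratio=10.0, efficiency=0.85):
        super().__init__(name)
        self.pr, self.eff = pressure_ratio, efficiency

    def execute(self):
        gamma = 1.4
        self.outlet.mass_flow = self.inlet.mass_flow
        self.outlet.total_press = self.inlet.total_press * self.pr
        ideal = self.inlet.total_temp * self.pr ** ((gamma - 1) / gamma)
        self.outlet.total_temp = (self.inlet.total_temp
                                  + (ideal - self.inlet.total_temp) / self.eff)

inlet = Component("inlet")
comp = Compressor("hpc")
inlet.outlet.mass_flow = 50.0
inlet.link_to(comp)
comp.execute()
print(f"{comp.name}: T_out = {comp.outlet.total_temp:.1f} K, "
      f"P_out = {comp.outlet.total_press/1e3:.0f} kPa")
```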
The cloud paradigm applied to e-Health.
Vilaplana, Jordi; Solsona, Francesc; Abella; Filgueira, Rosa; Rius, Josep
2013-03-14
Cloud computing is a new paradigm that is changing how enterprises, institutions and people understand, perceive and use current software systems. With this paradigm, the organizations have no need to maintain their own servers, nor host their own software. Instead, everything is moved to the cloud and provided on demand, saving energy, physical space and technical staff. Cloud-based system architectures provide many advantages in terms of scalability, maintainability and massive data processing. We present the design of an e-health cloud system, modelled by an M/M/m queue with QoS capabilities, i.e. maximum waiting time of requests. Detailed results for the model formed by a Jackson network of two M/M/m queues from the queueing theory perspective are presented. These results show a significant performance improvement when the number of servers increases. Platform scalability becomes a critical issue since we aim to provide the system with high Quality of Service (QoS). In this paper we define an architecture capable of adapting itself to different diseases and growing numbers of patients. This platform could be applied to the medical field to greatly enhance the results of those therapies that have an important psychological component, such as addictions and chronic diseases.
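For reference, the mean waiting time in an M/M/m queue follows from the standard Erlang C formula. The sketch below evaluates it for illustrative parameters of our choosing, showing how adding servers drives the waiting time toward a QoS target; it is not the Jackson-network model analyzed in the paper.

```python
# Erlang C waiting-time model for an M/M/m queue (standard formula; example
# parameters are ours, not taken from the paper).
from math import factorial

def erlang_c(m, lam, mu):
    """Probability that an arriving request must wait (requires lam < m*mu)."""
    a = lam / mu                      # offered load
    rho = a / m
    inner = sum(a ** k / factorial(k) for k in range(m))
    top = (a ** m / factorial(m)) / (1 - rho)
    return top / (inner + top)

def mean_wait(m, lam, mu):
    """Mean waiting time in queue, W_q = C(m, a) / (m*mu - lam)."""
    return erlang_c(m, lam, mu) / (m * mu - lam)

lam, mu = 40.0, 5.0                   # 40 req/s arriving, 5 req/s per server
for m in (9, 12, 16):
    print(f"m={m:2d}  W_q={mean_wait(m, lam, mu)*1000:6.1f} ms")
```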
NASA Technical Reports Server (NTRS)
Elrad, Tzilla (Editor); Filman, Robert E. (Editor); Bader, Atef (Editor)
2001-01-01
Computer science has experienced an evolution in programming languages and systems from the crude assembly and machine codes of the earliest computers through concepts such as formula translation, procedural programming, structured programming, functional programming, logic programming, and programming with abstract data types. Each of these steps in programming technology has advanced our ability to achieve clear separation of concerns at the source code level. Currently, the dominant programming paradigm is object-oriented programming - the idea that one builds a software system by decomposing a problem into objects and then writing the code of those objects. Such objects abstract together behavior and data into a single conceptual and physical entity. Object-orientation is reflected in the entire spectrum of current software development methodologies and tools - we have OO methodologies, analysis and design tools, and OO programming languages. Writing complex applications such as graphical user interfaces, operating systems, and distributed applications while maintaining comprehensible source code has been made possible with OOP. Success at developing simpler systems leads to aspirations for greater complexity. Object orientation is a clever idea, but has certain limitations. We are now seeing that many requirements do not decompose neatly into behavior centered on a single locus. Object technology has difficulty localizing concerns involving global constraints and pandemic behaviors, appropriately segregating concerns, and applying domain-specific knowledge. Post-object programming (POP) mechanisms that look to increase the expressiveness of the OO paradigm are a fertile arena for current research. Examples of POP technologies include domain-specific languages, generative programming, generic programming, constraint languages, reflection and metaprogramming, feature-oriented development, views/viewpoints, and asynchronous message brokering. (Czarnecki and Eisenecker's book includes a good survey of many of these technologies).
Quantitative reactive modeling and verification.
Henzinger, Thomas A
Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
A software methodology for compiling quantum programs
NASA Astrophysics Data System (ADS)
Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias
2018-04-01
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
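A toy example of one such layer of abstraction (lowering a hardware-agnostic gate to a native gate set) is sketched below; the single rewrite rule and the gate names are our simplification, not the compiler described in the paper.

```python
# Toy lowering pass: rewrite CZ gates into the {H, CNOT} native set using the
# identity CZ(c, t) = H(t) . CNOT(c, t) . H(t). Illustrative only; not the
# compilation stack described in the paper.
def lower_to_native(circuit):
    native = []
    for gate, *qubits in circuit:
        if gate == "CZ":
            c, t = qubits
            native += [("H", t), ("CNOT", c, t), ("H", t)]
        else:
            native.append((gate, *qubits))
    return native

high_level = [("H", 0), ("CZ", 0, 1), ("MEASURE", 0), ("MEASURE", 1)]
print(lower_to_native(high_level))
```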
2015-05-30
study used quantitative and qualitative analytical methods in the examination of software versus hardware maintenance trends and forecasts, human and...financial resources at TYAD and SEC, and overall compliance with Title 10 mandates (e.g., 10 USC 2466). Quantitative methods were executed by...Systems (PEO EIS). These methods will provide quantitative-based analysis on which to base and justify trends and gaps, as well as qualitative methods
A Vision of the Future Air Traffic Control System
NASA Technical Reports Server (NTRS)
Erzberger, Heinz
2000-01-01
The air transportation system is on the verge of gridlock, with delays and cancelled flights this summer reaching all time highs. As demand for air transportation continues to increase, the capacity needed to accommodate the growth in traffic is falling farther and farther behind. Moreover, it has become increasingly apparent that the present system cannot be scaled up to provide the capacity increases needed to meet demand over the next 25 years. NASA, working with the Federal Aviation Administration and industry, is pursuing a major research program to develop air traffic management technologies that have the ultimate goal of doubling capacity while increasing safety and efficiency. This seminar will describe how the current system operates, what its limitations are and why a revolutionary "shift in paradigm" is needed to overcome fundamental limitations in capacity and safety. For the near term, NASA has developed a portfolio of software tools for air traffic controllers, called the Center-TRACON Automation System (CTAS), that provides modest gains in capacity and efficiency while staying within the current paradigm. The outline of a concept for the long term, with a deployment date of 2015 at the earliest, has recently been formulated and presented by NASA to a select group of industry and government stakeholders. Automated decision making software, combined with an Internet in the sky that enables sharing of information and distributes control between the cockpit and the ground, is key to this concept. However, its most revolutionary feature is a fundamental change in the roles and responsibilities assigned to air traffic controllers.
Activity-Centric Approach to Distributed Programming
NASA Technical Reports Server (NTRS)
Levy, Renato; Satapathy, Goutam; Lang, Jun
2004-01-01
The first phase of an effort to develop a NASA version of the Cybele software system has been completed. To give meaning to even a highly abbreviated summary of the modifications to be embodied in the NASA version, it is necessary to present the following background information on Cybele: Cybele is a proprietary software infrastructure for use by programmers in developing agent-based application programs [complex application programs that contain autonomous, interacting components (agents)]. Cybele provides support for event handling from multiple sources, multithreading, concurrency control, migration, and load balancing. A Cybele agent follows a programming paradigm, called activity-centric programming, that enables an abstraction over system-level thread mechanisms. Activity centric programming relieves application programmers of the complex tasks of thread management, concurrency control, and event management. In order to provide such functionality, activity-centric programming demands support of other layers of software. This concludes the background information. In the first phase of the present development, a new architecture for Cybele was defined. In this architecture, Cybele follows a modular service-based approach to coupling of the programming and service layers of software architecture. In a service-based approach, the functionalities supported by activity-centric programming are apportioned, according to their characteristics, among several groups called services. A well-defined interface among all such services serves as a path that facilitates the maintenance and enhancement of such services without adverse effect on the whole software framework. The activity-centric application-program interface (API) is part of a kernel. The kernel API calls the services by use of their published interface. This approach makes it possible for any application code written exclusively under the API to be portable for any configuration of Cybele.
fMRI paradigm designing and post-processing tools
James, Jija S; Rajesh, PG; Chandran, Anuvitha VS; Kesavadas, Chandrasekharan
2014-01-01
In this article, we first review some aspects of functional magnetic resonance imaging (fMRI) paradigm designing for major cognitive functions by using stimulus delivery systems like Cogent, E-Prime, Presentation, etc., along with their technical aspects. We also review the stimulus presentation possibilities (block, event-related) for visual or auditory paradigms and their advantages in both clinical and research settings. The second part mainly focuses on various fMRI data post-processing tools such as Statistical Parametric Mapping (SPM) and Brain Voyager, and discusses the particulars of the various preprocessing steps involved (realignment, co-registration, normalization, smoothing) in these software packages, as well as the statistical analysis principles of General Linear Modeling for the final interpretation of a functional activation result. PMID:24851001
Information technologies for astrophysics circa 2001
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is less easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large datasets. Three limiting paradigms are saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage and retrieval off the shelf; and the linear model of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.
Artificial Intelligence and Information Retrieval.
ERIC Educational Resources Information Center
Teodorescu, Ioana
1987-01-01
Compares artificial intelligence and information retrieval paradigms for natural language understanding, reviews progress to date, and outlines the applicability of artificial intelligence to question answering systems. A list of principal artificial intelligence software for database front end systems is appended. (CLB)
Behavior Models for Software Architecture
2014-11-01
…MP. Existing process modeling frameworks (BPEL, BPMN [Grosskopf et al. 2009], IDEF) usually follow the “single flowchart” paradigm. MP separates…
A software tool for dataflow graph scheduling
NASA Technical Reports Server (NTRS)
Jones, Robert L., III
1994-01-01
A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on multiple processors. The dataflow paradigm is very useful in exposing the parallelism inherent in algorithms. It provides a graphical and mathematical model which describes a partial ordering of algorithm tasks based on data precedence.
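As a rough illustration of scheduling under data precedence (not the algorithm used by the tool described above), the sketch below encodes a small dataflow graph as a dictionary and produces a greedy list schedule on a fixed number of processors; the task names and durations are hypothetical.

```python
from collections import deque

# Hypothetical dataflow graph: task -> (duration, list of successor tasks)
graph = {
    "read":  (2, ["fft"]),
    "fft":   (4, ["mag", "phase"]),
    "mag":   (1, ["out"]),
    "phase": (1, ["out"]),
    "out":   (2, []),
}

def list_schedule(graph, processors=2):
    """Greedy list scheduling that respects the data-precedence partial order."""
    preds = {t: 0 for t in graph}
    for _, (_, succs) in graph.items():
        for s in succs:
            preds[s] += 1
    finish = {}                       # task -> finish time
    free = [0.0] * processors         # next free time of each processor
    ready = deque(t for t, n in preds.items() if n == 0)
    while ready:
        task = ready.popleft()
        dur, succs = graph[task]
        # earliest start = latest finish time among this task's data producers
        earliest = max((finish[pred] for pred, (_, ss) in graph.items()
                        if task in ss), default=0.0)
        p = min(range(processors), key=lambda i: free[i])
        start = max(free[p], earliest)
        finish[task] = start + dur
        free[p] = finish[task]
        for s in succs:
            preds[s] -= 1
            if preds[s] == 0:
                ready.append(s)
    return finish

print(list_schedule(graph))
```

The partial ordering shows up in the `preds` counters: a task becomes schedulable only after every task that produces data for it has finished.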
Designing a Software Tool for Fuzzy Logic Programming
NASA Astrophysics Data System (ADS)
Abietar, José M.; Morcillo, Pedro J.; Moreno, Ginés
2007-12-01
Fuzzy Logic Programming is an interesting and still growing research area that agglutinates the efforts for introducing fuzzy logic into logic programming (LP), in order to incorporate more expressive resources into such languages for dealing with uncertainty and approximated reasoning. The multi-adjoint logic programming approach is a recent and extremely flexible fuzzy logic paradigm for which, unfortunately, we have not found practical tools implemented so far. In this work, we describe a prototype system which is able to directly translate fuzzy logic programs into Prolog code in order to safely execute these residual programs inside any standard Prolog interpreter in a completely transparent way for the final user. We think that the development of such fuzzy languages and programming tools might play an important role in the design of advanced software applications for computational physics, chemistry, mathematics, medicine, industrial control and so on.
TRSkit: A Simple Digital Library Toolkit
NASA Technical Reports Server (NTRS)
Nelson, Michael L.; Esler, Sandra L.
1997-01-01
This paper introduces TRSkit, a simple and effective toolkit for building digital libraries on the World Wide Web. The toolkit was developed for the creation of the Langley Technical Report Server and the NASA Technical Report Server, but is applicable to most simple distribution paradigms. TRSkit contains a handful of freely available software components designed to be run under the UNIX operating system and served via the World Wide Web. The intended customer is the person that must continuously and synchronously distribute anywhere from 100 - 100,000's of information units and does not have extensive resources to devote to the problem.
Program Helps Simulate Neural Networks
NASA Technical Reports Server (NTRS)
Villarreal, James; Mcintire, Gary
1993-01-01
Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
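The generalized delta rule mentioned above can be written out in a few lines of NumPy. This is a generic one-hidden-layer sketch, not the NNETS/Transputer implementation; the layer sizes, learning rate, and random toy data are all made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((8, 3))                     # 8 toy samples, 3 inputs
T = rng.random((8, 2))                     # toy target outputs

W1 = rng.standard_normal((3, 4)) * 0.5     # input -> hidden weights
W2 = rng.standard_normal((4, 2)) * 0.5     # hidden -> output weights
eta = 0.5                                  # learning rate

for _ in range(1000):
    H = sigmoid(X @ W1)                            # forward pass, hidden layer
    Y = sigmoid(H @ W2)                            # forward pass, output layer
    delta_out = (T - Y) * Y * (1 - Y)              # delta rule at the output layer
    delta_hid = (delta_out @ W2.T) * H * (1 - H)   # deltas back-propagated to hidden
    W2 += eta * H.T @ delta_out                    # weight updates
    W1 += eta * X.T @ delta_hid

print(float(np.mean((T - Y) ** 2)))                # final mean squared error
```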
An object-oriented framework for medical image registration, fusion, and visualization.
Zhu, Yang-Ming; Cochoff, Steven M
2006-06-01
An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show the effectiveness of this framework: the first one is for volume image grouping and re-sampling, the second one is for 2D registration and fusion, and the last one is for visualization of single images as well as registered volume images.
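For readers unfamiliar with the model-view-controller paradigm the framework is built on, a minimal skeleton is sketched below; it is written in Python for brevity, and its class names and behavior are illustrative only, not taken from the framework itself.

```python
class VolumeModel:
    """Model: holds the image volumes and notifies attached views of changes."""
    def __init__(self):
        self._volumes, self._views = [], []

    def attach(self, view):
        self._views.append(view)

    def add_volume(self, name):
        self._volumes.append(name)
        for view in self._views:            # push change notifications
            view.refresh(self._volumes)

class FusionView:
    """View: renders the current model state (here, it just prints it)."""
    def refresh(self, volumes):
        print("displaying fused volumes:", volumes)

class RegistrationController:
    """Controller: translates user actions into model updates."""
    def __init__(self, model):
        self.model = model

    def on_load_clicked(self, name):
        self.model.add_volume(name)

model = VolumeModel()
model.attach(FusionView())
RegistrationController(model).on_load_clicked("CT_head")
```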
Vassilev, Apostol; Mouha, Nicky; Brandão, Luís
2018-01-01
The security of encrypted data depends not only on the theoretical properties of cryptographic primitives but also on the robustness of their implementations in software and hardware. Threshold cryptography introduces a computational paradigm that enables higher assurance for such implementations. PMID:29576634
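The abstract does not name a particular scheme, but the flavor of the threshold paradigm can be conveyed with the canonical example of Shamir's t-of-n secret sharing over a prime field. The sketch below is a teaching illustration only: the prime, the parameters, and the use of Python's `random` module are assumptions, and it is not suitable for production cryptography.

```python
import random

PRIME = 2**61 - 1   # a Mersenne prime, large enough for a demo secret

def split_secret(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```

The threshold property is what gives the higher implementation assurance the authors refer to: no single share (and hence no single compromised component) reveals the secret.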
Imaging and the completion of the omics paradigm in breast cancer.
Leithner, D; Horvat, J V; Ochoa-Albiztegui, R E; Thakur, S; Wengert, G; Morris, E A; Helbich, T H; Pinker, K
2018-06-08
Within the field of oncology, "omics" strategies-genomics, transcriptomics, proteomics, metabolomics-have many potential applications and may significantly improve our understanding of the underlying processes of cancer development and progression. Omics strategies aim to develop meaningful imaging biomarkers for breast cancer (BC) by rapid assessment of large datasets with different biological information. In BC the paradigm of omics technologies has always favored the integration of multiple layers of omics data to achieve a complete portrait of BC. Advances in medical imaging technologies, image analysis, and the development of high-throughput methods that can extract and correlate multiple imaging parameters with "omics" data have ushered in a new direction in medical research. Radiogenomics is a novel omics strategy that aims to correlate imaging characteristics (i. e., the imaging phenotype) with underlying gene expression patterns, gene mutations, and other genome-related characteristics. Radiogenomics not only represents the evolution in the radiology-pathology correlation from the anatomical-histological level to the molecular level, but it is also a pivotal step in the omics paradigm in BC in order to fully characterize BC. Armed with modern analytical software tools, radiogenomics leads to new discoveries of quantitative and qualitative imaging biomarkers that offer hitherto unprecedented insights into the complex tumor biology and facilitate a deeper understanding of cancer development and progression. The field of radiogenomics in breast cancer is rapidly evolving, and results from previous studies are encouraging. It can be expected that radiogenomics will play an important role in the future and has the potential to revolutionize the diagnosis, treatment, and prognosis of BC patients. This article aims to give an overview of breast radiogenomics, its current role, future applications, and challenges.
The cloud paradigm applied to e-Health
2013-01-01
Background: Cloud computing is a new paradigm that is changing how enterprises, institutions and people understand, perceive and use current software systems. With this paradigm, the organizations have no need to maintain their own servers, nor host their own software. Instead, everything is moved to the cloud and provided on demand, saving energy, physical space and technical staff. Cloud-based system architectures provide many advantages in terms of scalability, maintainability and massive data processing. Methods: We present the design of an e-health cloud system, modelled by an M/M/m queue with QoS capabilities, i.e. maximum waiting time of requests. Results: Detailed results for the model formed by a Jackson network of two M/M/m queues from the queueing theory perspective are presented. These results show a significant performance improvement when the number of servers increases. Conclusions: Platform scalability becomes a critical issue since we aim to provide the system with high Quality of Service (QoS). In this paper we define an architecture capable of adapting itself to different diseases and growing numbers of patients. This platform could be applied to the medical field to greatly enhance the results of those therapies that have an important psychological component, such as addictions and chronic diseases. PMID:23496912
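To make the queueing model concrete, the snippet below computes the Erlang-C probability of waiting and the mean waiting time for an M/M/m queue, which is the kind of QoS-versus-server-count check the paper describes; the arrival and service rates used are arbitrary example values, not figures from the paper.

```python
from math import factorial

def mm_m_wait(lam, mu, m):
    """Return (P_wait, mean_wait) for an M/M/m queue via the Erlang C formula."""
    a = lam / mu                    # offered load
    rho = a / m                     # per-server utilization, must be < 1
    assert rho < 1, "queue is unstable"
    tail = a**m / (factorial(m) * (1 - rho))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(m)) + tail)
    mean_wait = p_wait / (m * mu - lam)
    return p_wait, mean_wait

# Example: requests arrive at 40/s, each server completes 10 requests/s
for m in (5, 6, 8):
    p, w = mm_m_wait(40.0, 10.0, m)
    print(f"m={m}: P(wait)={p:.3f}, E[wait]={w * 1000:.1f} ms")
```

Running this shows the effect the abstract reports: adding servers sharply reduces both the probability of waiting and the mean waiting time, which is how a maximum-waiting-time QoS target translates into a required server count.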
Adding intelligent services to an object oriented system
NASA Technical Reports Server (NTRS)
Robideaux, Bret R.; Metzler, Theodore A.
1994-01-01
As today's software becomes increasingly complex, the need grows for intelligence of one sort or another to become part of the application, often an intelligence that does not readily fit the paradigm of one's software development. There are many methods of developing software, but at this time, the most promising is the object-oriented (OO) method. This method involves an analysis to abstract the problem into separate 'objects' that are unique in the data that describe them and the behavior that they exhibit, and eventually to convert this analysis into computer code using a programming language that was designed (or retrofitted) for OO implementation. This paper discusses the creation of three different applications that are analyzed, designed, and programmed using the Shlaer/Mellor method of OO development and C++ as the programming language. All three, however, require the use of an expert system to provide an intelligence that C++ (or any other 'traditional' language) is not directly suited to supply. The flexibility of CLIPS permitted us to make modifications to it that allow seamless integration with any of our applications that require an expert system. We illustrate this integration with the following applications: (1) an after action review (AAR) station that assists a reviewer in watching a simulated tank battle and developing an AAR to critique the performance of the participants in the battle; (2) an embedded training system and over-the-shoulder coach for howitzer crewmen; and (3) a system to identify various chemical compounds from their infrared absorption spectra.
Enriching and improving the quality of linked data with GIS
NASA Astrophysics Data System (ADS)
Iwaniak, Adam; Kaczmarek, Iwona; Strzelecki, Marek; Lukowicz, Jaromar; Jankowski, Piotr
2016-06-01
Standardization of methods for data exchange in GIS has a long history predating the creation of the World Wide Web. The advent of the World Wide Web brought the emergence of new solutions for data exchange and sharing, including, more recently, standards proposed by the W3C for data exchange involving Semantic Web technologies and linked data. Despite the growing interest in integration, GIS and linked data are still two separate paradigms for describing and publishing spatial data on the Web. At the same time, both paradigms offer complementary ways of representing real world phenomena and means of analysis using different processing functions. The complementarity of linked data and GIS can be leveraged to synergize both paradigms, resulting in richer data content and more powerful inferencing. The article presents an approach aimed at integrating linked data with GIS. The approach relies on the use of GIS tools for integration, verification and enrichment of linked data. The GIS tools are employed to enrich linked data by furnishing access to collections of data resources, defining relationships between data resources, and subsequently facilitating GIS data integration with linked data. The proposed approach is demonstrated with examples using data from DBpedia, OSM, and tools developed by the authors for standard GIS software.
Information technologies for astrophysics circa 2001
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1991-01-01
It is easy to extrapolate current trends to see where technologies relating to information systems in astrophysics and other disciplines will be by the end of the decade. These technologies include miniaturization, multiprocessing, software technology, networking, databases, graphics, pattern computation, and interdisciplinary studies. It is less easy to see what limits our current paradigms place on our thinking about technologies that will allow us to understand the laws governing very large systems about which we have large data sets. Three limiting paradigms are as follows: saving all the bits collected by instruments or generated by supercomputers; obtaining technology for information compression, storage, and retrieval off the shelf; and the linear model of innovation. We must extend these paradigms to meet our goals for information technology at the end of the decade.
Technological choices for mobile clinical applications.
Ehrler, Frederic; Issom, David; Lovis, Christian
2011-01-01
The rise of cheaper and more powerful mobile devices makes them a new and attractive platform for clinical applications. The interaction paradigm and portability of the device facilitate bedside human-machine interactions. Better accessibility to information and decision support anywhere in the hospital improves the efficiency and the safety of care processes. In this study, we attempt to find out which Operating System (OS) and Software Development Kit (SDK) are most appropriate to support the development of clinical applications on mobile devices. The Android platform is a Linux-based, open source platform that has many advantages. Two main SDKs are available on this platform: the native Android and the Adobe Flex SDK. Both of them have interesting features, but the latter has been preferred due to its portability at comparable performance and ease of development.
Lost in the Cloud - New Challenges for Teaching GIS
NASA Astrophysics Data System (ADS)
Bellman, C. J.; Pupedis, G.
2016-06-01
As cloud-based services move towards becoming the dominant paradigm in many areas of information technology, GIS has also moved into 'the Cloud', creating new opportunities for professionals and students alike, while at the same time presenting a range of new challenges and opportunities for GIS educators. Learning for many students in the geospatial science disciplines has been based on desktop software for GIS, building their skills from basic data handling and manipulation to advanced spatial analysis and database storage. Cloud-based systems challenge this paradigm in many ways, with some of the skills being replaced by clever and capable software tools, while the ubiquitous nature of the computing environment offers access and processing from anywhere, on any device. This paper describes our experiences over the past two years in developing and delivering a new course incorporating cloud-based technologies for GIS and illustrates the many benefits and pitfalls of a cloud-based approach to teaching. Throughout the course, students were encouraged to provide regular feedback on the course through the use of online journals. This allowed students to critique the approach to teaching, the learning materials available and to describe their own level of comfort and engagement with the material in an honest and non-confrontational manner. Many of the students did not have a strong information technology background and the journals provided great insight into the views of the students and the challenges they faced in mastering this technology.
A MATLAB-based eye tracking control system using non-invasive helmet head restraint in the macaque.
De Luna, Paolo; Mohamed Mustafar, Mohamed Faiz Bin; Rainer, Gregor
2014-09-30
Tracking eye position is vital for behavioral and neurophysiological investigations in systems and cognitive neuroscience. Infrared camera systems which are now available can be used for eye tracking without the need to surgically implant magnetic search coils. These systems are generally employed using rigid head fixation in monkeys, which maintains the eye in a constant position and facilitates eye tracking. We investigate the use of non-rigid head fixation using a helmet that constrains only general head orientation and allows some freedom of movement. We present a MATLAB software solution to gather and process eye position data, present visual stimuli, interact with various devices, provide experimenter feedback and store data for offline analysis. Our software solution achieves excellent timing performance due to the use of data streaming, instead of the traditionally employed data storage mode for processing analog eye position data. We present behavioral data from two monkeys, demonstrating that adequate performance levels can be achieved on a simple fixation paradigm and show how performance depends on parameters such as fixation window size. Our findings suggest that non-rigid head restraint can be employed for behavioral training and testing on a variety of gaze-dependent visual paradigms, reducing the need for rigid head restraint systems for some applications. While developed for macaque monkey, our system of course can work equally well for applications in human eye tracking where head constraint is undesirable. Copyright © 2014. Published by Elsevier B.V.
Dataflow Design Tool: User's Manual
NASA Technical Reports Server (NTRS)
Jones, Robert L., III
1996-01-01
The Dataflow Design Tool is a software tool for selecting a multiprocessor scheduling solution for a class of computational problems. The problems of interest are those that can be described with a dataflow graph and are intended to be executed repetitively on a set of identical processors. Typical applications include signal processing and control law problems. The software tool implements graph-search algorithms and analysis techniques based on the dataflow paradigm. Dataflow analyses provided by the software are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool provides performance optimization through the inclusion of artificial precedence constraints among the schedulable tasks. The user interface and tool capabilities are described. Examples are provided to demonstrate the analysis, scheduling, and optimization functions facilitated by the tool.
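To illustrate what performance bounds mean for a dataflow graph executed repetitively on identical processors, the sketch below computes two classical lower bounds, the critical-path latency and the work-based bound on the iteration period, for a hypothetical graph; it is not the Design Tool's own algorithm, and the node names and times are invented.

```python
import math

# Hypothetical dataflow graph: node -> (execution time, successor nodes)
graph = {
    "A": (3, ["B", "C"]),
    "B": (4, ["D"]),
    "C": (2, ["D"]),
    "D": (1, []),
}

def critical_path(graph):
    """Longest time-weighted path: a lower bound on the latency of one iteration."""
    memo = {}
    def longest_from(node):
        if node not in memo:
            t, succs = graph[node]
            memo[node] = t + max((longest_from(s) for s in succs), default=0)
        return memo[node]
    return max(longest_from(n) for n in graph)

def period_bound(graph, processors):
    """Total work divided by processor count: a throughput-side lower bound."""
    total = sum(t for t, _ in graph.values())
    return math.ceil(total / processors)

print("latency bound:", critical_path(graph))              # 3 + 4 + 1 = 8
print("period bound on 2 CPUs:", period_bound(graph, 2))   # ceil(10 / 2) = 5
```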
Chambaron, Stéphanie; Ginhac, Dominique; Perruchet, Pierre
2008-05-01
Serial reaction time tasks and, more generally, the visual-motor sequential paradigms are increasingly popular tools in a variety of research domains, from studies on implicit learning in laboratory contexts to the assessment of residual learning capabilities of patients in clinical settings. A consequence of this success, however, is the increased variability in paradigms and the difficulty inherent in respecting the methodological principles that two decades of experimental investigations have made more and more stringent. The purpose of the present article is to address those problems. We present a user-friendly application that simplifies running classical experiments, but is flexible enough to permit a broad range of nonstandard manipulations for more specific objectives. Basic methodological guidelines are also provided, as are suggestions for using the software to explore unconventional directions of research. The most recent version of gSRT-Soft may be obtained for free by contacting the authors.
Performance analysis of a large-grain dataflow scheduling paradigm
NASA Technical Reports Server (NTRS)
Young, Steven D.; Wills, Robert W.
1993-01-01
A paradigm for scheduling computations on a network of multiprocessors using large-grain data flow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself will be dynamic (i.e., determined at run time). Many applications characterized by static flow exist, and they include real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables the maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while they maintain maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.
Andrew, C G
1996-08-01
Manufacturing managers and practitioners alike are at long last realizing that the heartbeat of competitive advantage springs from peopleware, not hardware and software. But despite this heightened awareness, the problem persists even among manufacturing professionals--they may talk a good game about prioritizing people and quality, but all too many have precious little idea of how to go about it with constancy of purpose. This article bridges the gap and addresses the key issues in adopting the powerful new peopleware paradigm that provides the positive motivational climate for the improvement-change journey toward world-class performance through teamwork, innovation, and continuous improvement.
NASA Astrophysics Data System (ADS)
Newman, Andrew J.; Richardson, Casey L.; Kain, Sean M.; Stankiewicz, Paul G.; Guseman, Paul R.; Schreurs, Blake A.; Dunne, Jeffrey A.
2016-05-01
This paper introduces the game of reconnaissance blind multi-chess (RBMC) as a paradigm and test bed for understanding and experimenting with autonomous decision making under uncertainty and in particular managing a network of heterogeneous Intelligence, Surveillance and Reconnaissance (ISR) sensors to maintain situational awareness informing tactical and strategic decision making. The intent is for RBMC to serve as a common reference or challenge problem in fusion and resource management of heterogeneous sensor ensembles across diverse mission areas. We have defined a basic rule set and a framework for creating more complex versions, developed a web-based software realization to serve as an experimentation platform, and developed some initial machine intelligence approaches to playing it.
JACOB: an enterprise framework for computational chemistry.
Waller, Mark P; Dresselhaus, Thomas; Yang, Jack
2013-06-15
Here, we present just a collection of beans (JACOB): an integrated batch-based framework designed for the rapid development of computational chemistry applications. The framework expedites developer productivity by handling the generic infrastructure tier, and can be easily extended by user-specific scientific code. Paradigms from enterprise software engineering were rigorously applied to create a scalable, testable, secure, and robust framework. A centralized web application is used to configure and control the operation of the framework. The application-programming interface provides a set of generic tools for processing large-scale noninteractive jobs (e.g., systematic studies), or for coordinating systems integration (e.g., complex workflows). The code for the JACOB framework is open sourced and is available at: www.wallerlab.org/jacob. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Robins, N. S.; Rutter, H. K.; Dumpleton, S.; Peach, D. W.
2005-01-01
Groundwater investigation has long depended on the process of developing a conceptual flow model as a precursor to developing a mathematical model, which, in complex aquifers, may in turn lead to the development of a numerical approximation model. The assumptions made in the development of the conceptual model depend heavily on the geological framework defining the aquifer, and if the conceptual model is inappropriate then subsequent modelling will also be incorrect. Paradoxically, the development of a robust conceptual model remains difficult, not least because this 3D paradigm is usually reduced to 2D plans and sections. 3D visualisation software is now available to facilitate the development of the conceptual model, to make the model more robust and defensible and to assist in demonstrating the hydraulics of the aquifer system. Case studies are presented to demonstrate the role and cost-effectiveness of the visualisation process.
Unidata cyberinfrastructure in the cloud: A progress report
NASA Astrophysics Data System (ADS)
Ramamurthy, Mohan
2016-04-01
Data services, software, and committed support are critical components of geosciences cyber-infrastructure that can help scientists address problems of unprecedented complexity, scale, and scope. Unidata is currently working on innovative ideas, new paradigms, and novel techniques to complement and extend its offerings. Our goal is to empower users so that they can tackle major, heretofore difficult problems. Unidata recognizes that its products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. To realize the above vision, Unidata is working toward:
* Providing access to many types of data from a cloud (e.g., TDS, RAMADDA and EDEX);
* Deploying data-proximate tools to easily process, analyze and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time;
* Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker-based "containerized applications", making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools;
* Fostering partnerships with NOAA and public cloud vendors (e.g., Amazon) to harness their capabilities and resources for the benefit of the academic community.
Temporal and contextual knowledge in model-based expert systems
NASA Technical Reports Server (NTRS)
Toth-Fejel, Tihamer; Heher, Dennis
1987-01-01
A basic paradigm that allows representation of physical systems with a focus on context and time is presented. Paragon provides the capability to quickly capture an expert's knowledge in a cognitively resonant manner. From that description, Paragon creates a simulation model in LISP, which, when executed, verifies that the domain expert did not make any mistakes. The Achilles' heel of rule-based systems has been the lack of a systematic methodology for testing, and Paragon's developers are certain that the model-based approach overcomes that problem. The reason this testing is now possible is that software, which is very difficult to test, has in essence been transformed into hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-01-11
GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.
On architecting and composing engineering information services to enable smart manufacturing
Ivezic, Nenad; Srinivasan, Vijay
2016-01-01
Engineering information systems play an important role in the current era of digitization of manufacturing, which is a key component to enable smart manufacturing. Traditionally, these engineering information systems spanned the lifecycle of a product by providing interoperability of software subsystems through a combination of open and proprietary exchange of data. But research and development efforts are underway to replace this paradigm with engineering information services that can be composed dynamically to meet changing needs in the operation of smart manufacturing systems. This paper describes the opportunities and challenges in architecting such engineering information services and composing them to enable smarter manufacturing. PMID:27840595
Giving pandas ROOT to chew on: experiences with the XENON1T Dark Matter experiment
NASA Astrophysics Data System (ADS)
Remenska, D.; Tunnell, C.; Aalbers, J.; Verhoeven, S.; Maassen, J.; Templon, J.
2017-10-01
In preparation for the XENON1T Dark Matter data acquisition, we have prototyped and implemented a new computing model. The XENON signal and data processing software is developed fully in Python 3, and makes extensive use of generic scientific data analysis libraries, such as the SciPy stack. A certain tension between modern “Big Data” solutions and existing HEP frameworks is typically experienced in smaller particle physics experiments. ROOT is still the “standard” data format in our field, defined by large experiments (ATLAS, CMS). To ease the transition, our computing model caters to both analysis paradigms, leaving the choice of using ROOT-specific C++ libraries, or alternatively, Python and its data analytics tools, as a front-end choice for developing physics algorithms. We present our path toward harmonizing these two ecosystems, which allowed us to use off-the-shelf software libraries (e.g., NumPy, SciPy, scikit-learn, matplotlib) and lower the cost of development and maintenance. To analyse the data, our software allows researchers to easily create “mini-trees”: small, tabular ROOT structures for Python analysis, which can be read directly into pandas DataFrame structures. One of our goals was making ROOT available as a cross-platform binary for an easy installation from the Anaconda Cloud (without going through “dependency hell”). In addition to helping us discover dark matter interactions, lowering this barrier helps shift particle physics toward non-domain-specific code.
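As an illustration of the "mini-tree into pandas" step, the snippet below reads a flat ROOT TTree into a DataFrame using the uproot library; uproot is one common way to do this but is not named in the abstract, and the file name, tree name, and branch name are assumptions.

```python
import uproot   # pip install uproot pandas (pandas is required for library="pd")

# Hypothetical mini-tree file, tree name, and branch name
with uproot.open("minitree_run0001.root") as f:
    df = f["events"].arrays(library="pd")   # flat branches -> pandas DataFrame

print(df.describe())                         # ordinary pandas analysis from here on
if "cs1" in df.columns:                      # "cs1" is an assumed example branch
    print(len(df[df["cs1"] < 50.0]))
```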
Software implementation of the SKIPSM paradigm under PIP
NASA Astrophysics Data System (ADS)
Hack, Ralf; Waltz, Frederick M.; Batchelor, Bruce G.
1997-09-01
SKIPSM (separated-kernel image processing using finite state machines) is a technique for implementing large-kernel binary-morphology operators and many other operations. While earlier papers on SKIPSM concentrated mainly on implementations using pipelined hardware, there is considerable scope for achieving major speed improvements in software systems. Using identical control software, one-pass binary erosion and dilation structuring elements (SEs) ranging from the trivial (3 by 3) to the gigantic (51 by 51, or even larger) are readily available. Processing speed is independent of the size of the SE, making the SKIPSM approach practical for work with very large SEs on ordinary desktop computers. PIP (Prolog image processing) is an interactive machine vision prototyping environment developed at the University of Wales Cardiff. It consists of a large number of image processing operators embedded within the standard AI language Prolog. This paper describes the SKIPSM implementation of binary morphology operators within PIP. A large set of binary erosion and dilation operations (circles, squares, diamonds, octagons, etc.) is available to the user through a command-line driven dialogue, via pull-down menus, or incorporated into standard (Prolog) programs. Little has been done thus far to optimize speed on this first software implementation of SKIPSM. Nevertheless, the results are impressive. The paper describes sample applications and presents timing figures. Readers have the opportunity to try out these operations on demonstration software written by the University of Wales, or via their WWW home page at http://bruce.cs.cf.ac.uk/bruce/index.html .
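The core of the separated-kernel idea, decomposing a large 2-D structuring element into a row pass followed by a column pass whose per-pixel cost does not depend on kernel size, can be sketched in plain NumPy. The code below implements separable binary erosion by a k x k square SE with a simple run-length "state machine"; it illustrates the principle only and is not the actual SKIPSM state tables or the PIP integration.

```python
import numpy as np

def runlength_pass(img, k):
    """One-dimensional pass along rows: the state is the length of the current
    run of 1s, and the output fires once the run reaches k (erosion by a 1 x k
    horizontal segment). Cost per pixel is independent of k."""
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        run = 0                       # state of the row machine
        for j in range(img.shape[1]):
            run = run + 1 if img[i, j] else 0
            out[i, j] = 1 if run >= k else 0
    return out

def erode_square(img, k):
    """Separable binary erosion by a k x k square SE
    (result anchored at the bottom-right corner of the SE)."""
    rows = runlength_pass(img, k)             # row pass
    return runlength_pass(rows.T, k).T        # column pass on the row result

img = np.ones((8, 8), dtype=np.uint8)
img[3, 4] = 0                                 # punch a hole in the foreground
print(erode_square(img, 3))
```

Because each pass touches every pixel exactly once regardless of k, the running time does not grow with the structuring element, which is the property the abstract emphasizes.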
Parallel Computing for Probabilistic Response Analysis of High Temperature Composites
NASA Technical Reports Server (NTRS)
Sues, R. H.; Lua, Y. J.; Smith, M. D.
1994-01-01
The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.
NASA Astrophysics Data System (ADS)
Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.
Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.
Metrics of a Paradigm for Intelligent Control
NASA Technical Reports Server (NTRS)
Hexmoor, Henry
1999-01-01
We present metrics for quantifying organizational structures of complex control systems intended for controlling long-lived robotic or other autonomous applications commonly found in space applications. Such advanced control systems are often called integration platforms or agent architectures. Reported metrics span concerns about time, resources, software engineering, and complexities in the world.
The Rise of the Super Experiment
ERIC Educational Resources Information Center
Stamper, John C.; Lomas, Derek; Ching, Dixie; Ritter, Steve; Koedinger, Kenneth R.; Steinhart, Jonathan
2012-01-01
Traditional experimental paradigms have focused on executing experiments in a lab setting and eventually moving successful findings to larger experiments in the field. However, data from field experiments can also be used to inform new lab experiments. Now, with the advent of large student populations using internet-based learning software, online…
Integrated web system of geospatial data services for climate research
NASA Astrophysics Data System (ADS)
Okladnikov, Igor; Gordov, Evgeny; Titov, Alexander
2016-04-01
Georeferenced datasets are currently actively used for modeling, interpretation and forecasting of climatic and ecosystem changes on different spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets as well as their huge size (up to tens of terabytes for a single dataset), special software supporting studies in the climate and environmental change areas is required. An approach for integrated analysis of georeferenced climatological data sets, based on a combination of web and GIS technologies in the framework of the spatial data infrastructure paradigm, is presented. According to this approach, a dedicated data-processing web system for integrated analysis of heterogeneous georeferenced climatological and meteorological data is being developed. It is based on Open Geospatial Consortium (OGC) standards and involves many modern solutions such as an object-oriented programming model, modular composition, and JavaScript libraries based on the GeoExt library, ExtJS Framework and OpenLayers software. This work is supported by the Ministry of Education and Science of the Russian Federation, Agreement #14.613.21.0037.
Xu, Dongrong; Hao, Xuejun; Wang, Zhishun; Duan, Yunsuo; Liu, Feng; Marsh, Rachel; Yu, Shan; Peterson, Bradley S.
2015-01-01
An increasing number of functional brain imaging studies are employing computer-based virtual reality (VR) to study changes in brain activity during the performance of high-level psychological and cognitive tasks. We report the development of a VR radial arm maze that adapts, for human use in a scanning environment, the same general experimental design of behavioral tasks that has been used with remarkable effectiveness for the study of multiple memory systems in rodents. The software platform is independent of specific computer hardware and operating systems, as we aim to provide shared access to this technology by the research community. We hope that doing so will provide greater standardization of the software platform and study paradigm, which will reduce variability and improve the comparability of findings across studies. We report the details of the design and implementation of this platform and provide information for downloading the system for demonstration and research applications. PMID:26366052
Incorporating BDI Agents into Human-Agent Decision Making Research
NASA Astrophysics Data System (ADS)
Kamphorst, Bart; van Wissen, Arlette; Dignum, Virginia
Artificial agents, people, institutes and societies all have the ability to make decisions. Decision making as a research area therefore involves a broad spectrum of sciences, ranging from Artificial Intelligence to economics to psychology. The Colored Trails (CT) framework is designed to aid researchers in all fields in examining decision making processes. It is developed both to study interaction between multiple actors (humans or software agents) in a dynamic environment, and to study and model the decision making of these actors. However, agents in the current implementation of CT lack the explanatory power to help understand the reasoning processes involved in decision making. The BDI paradigm, which has been proposed in the agent research area to describe rational agents, enables the specification of agents that reason in abstract concepts such as beliefs, goals, plans and events. In this paper, we present CTAPL: an extension to CT that allows BDI software agents that are written in the practical agent programming language 2APL to reason about and interact with a CT environment.
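For readers unfamiliar with the BDI vocabulary used above (beliefs, goals, plans, events), a deliberately tiny deliberation loop is sketched below. It is generic Python, not 2APL or CTAPL syntax, and the domain facts and plan bodies are invented for illustration.

```python
# Minimal belief-desire-intention deliberation loop (illustrative only)
beliefs = {"location": "start", "has_chip": False}
goals = ["deliver_chip"]                      # the agent's desires

# Plan library: goal -> list of (context condition, plan body) options
plans = {
    "deliver_chip": [
        (lambda b: b["has_chip"], ["move_to_board", "place_chip"]),
        (lambda b: not b["has_chip"],
         ["achieve:get_chip", "move_to_board", "place_chip"]),
    ],
    "get_chip": [
        (lambda b: True, ["move_to_store", "pick_chip"]),
    ],
}

def act(action, beliefs):
    """Execute a primitive action and update beliefs (the resulting 'event')."""
    print("executing", action)
    if action == "pick_chip":
        beliefs["has_chip"] = True
    if action.startswith("move_to_"):
        beliefs["location"] = action[len("move_to_"):]

def achieve(goal, beliefs):
    """Select the first plan whose context condition holds and run its body."""
    for context, body in plans[goal]:
        if context(beliefs):
            for step in body:
                if step.startswith("achieve:"):   # subgoal -> nested deliberation
                    achieve(step.split(":", 1)[1], beliefs)
                else:
                    act(step, beliefs)
            return True
    return False

for g in goals:
    achieve(g, beliefs)
print(beliefs)
```

The explanatory power the paper asks for comes from exactly this structure: the agent's behavior can be traced back to which beliefs held, which goal was active, and which plan was selected.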
Model for Simulating a Spiral Software-Development Process
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Curley, Charles; Nayak, Umanath
2010-01-01
A discrete-event simulation model, and a computer program that implements the model, have been developed as means of analyzing a spiral software-development process. This model can be tailored to specific development environments for use by software project managers in making quantitative cases for deciding among different software-development processes, courses of action, and cost estimates. A spiral process can be contrasted with a waterfall process, which is a traditional process that consists of a sequence of activities that include analysis of requirements, design, coding, testing, and support. A spiral process is an iterative process that can be regarded as a repeating modified waterfall process. Each iteration includes assessment of risk, analysis of requirements, design, coding, testing, delivery, and evaluation. A key difference between a spiral and a waterfall process is that a spiral process can accommodate changes in requirements at each iteration, whereas in a waterfall process, requirements are considered to be fixed from the beginning and, therefore, a waterfall process is not flexible enough for some projects, especially those in which requirements are not known at the beginning or may change during development. For a given project, a spiral process may cost more and take more time than does a waterfall process, but may better satisfy a customer's expectations and needs. Models for simulating various waterfall processes have been developed previously, but until now, there have been no models for simulating spiral processes. The present spiral-process-simulating model and the software that implements it were developed by extending a discrete-event simulation process model of the IEEE 12207 Software Development Process, which was built using commercially available software known as the Process Analysis Tradeoff Tool (PATT). Typical inputs to PATT models include industry-average values of product size (expressed as number of lines of code), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.
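A toy calculation in the spirit of the inputs the model takes (product size in lines of code, productivity, defect density) is sketched below. The numbers and the simple accumulation over spiral iterations are illustrative assumptions only; they do not reproduce the PATT discrete-event model or its outputs.

```python
# Toy spiral-process estimate (illustrative assumptions, not the PATT model)
iterations = [0.30, 0.30, 0.25, 0.15]   # assumed fraction of the product per cycle
total_sloc = 50_000                     # product size, lines of code
productivity = 4.0                      # lines of code per person-hour
defects_per_ksloc = 6.0                 # assumed injected defect density
detect_rate = 0.8                       # fraction of defects found in each cycle
rework_hours_per_defect = 1.5           # assumed rework cost

effort_hours, escaped = 0.0, 0.0
for frac in iterations:
    sloc = total_sloc * frac
    effort_hours += sloc / productivity          # development effort this cycle
    injected = sloc / 1000 * defects_per_ksloc + escaped
    found = detect_rate * injected               # testing within the cycle
    effort_hours += found * rework_hours_per_defect
    escaped = injected - found                   # defects carried to the next cycle

print(f"effort ~ {effort_hours:.0f} person-hours, escaping defects ~ {escaped:.1f}")
```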
Orthographic learning and the role of text-to-speech software in Dutch disabled readers.
Staels, Eva; Van den Broeck, Wim
2015-01-01
In this study, we examined whether orthographic learning can be demonstrated in disabled readers learning to read in a transparent orthography (Dutch). In addition, we tested the effect of the use of text-to-speech software, a new form of direct instruction, on orthographic learning. Both research goals were investigated by replicating Share's self-teaching paradigm. A total of 65 disabled Dutch readers were asked to read eight stories containing embedded homophonic pseudoword targets (e.g., Blot/Blod), with or without the support of text-to-speech software. The amount of orthographic learning was assessed 3 or 7 days later by three measures of orthographic learning. First, the results supported the presence of orthographic learning during independent silent reading by demonstrating that target spellings were correctly identified more often, named more quickly, and spelled more accurately than their homophone foils. Our results support the hypothesis that all readers, even poor readers of transparent orthographies, are capable of developing word-specific knowledge. Second, a negative effect of text-to-speech software on orthographic learning was demonstrated in this study. This negative effect was interpreted as the consequence of passively listening to the auditory presentation of the text. We clarify how these results can be interpreted within current theoretical accounts of orthographic learning and briefly discuss implications for remedial interventions. © Hammill Institute on Disabilities 2013.
Angelcare mobile system: homecare patient monitoring using bluetooth and GPRS.
Ribeiro, Anna G D; Maitelli, Andre L; Valentim, Ricardo A M; Brandao, Glaucio B; Guerreiro, Ana M G
2010-01-01
Rapid progress in technology has brought new paradigms to computing, and with them many benefits to society. The paradigm of ubiquitous computing brings innovations that apply computing to people's daily lives without being noticed. To do so, it combines several existing technologies, such as wireless communication and sensors. Several of these benefits have reached the medical field, bringing new methods for surgery, appointments and examinations. This work presents telemedicine software that adds the idea of ubiquity to the medical field, changing the relationship between doctor and patient. It also brings security and confidence to a patient being monitored in homecare.
Development of a Free-Flight Simulation Infrastructure
NASA Technical Reports Server (NTRS)
Miles, Eric S.; Wing, David J.; Davis, Paul C.
1999-01-01
In anticipation of a projected rise in demand for air transportation, NASA and the FAA are researching new air-traffic-management (ATM) concepts that fall under the paradigm known broadly as "free flight". This paper documents the software development and engineering efforts in progress by Seagull Technology to develop a free-flight simulation (FFSIM) that is intended to help NASA researchers test mature-state concepts for free flight, otherwise referred to in this paper as distributed air/ground traffic management (DAG TM). Under development is a distributed, human-in-the-loop simulation tool that is comprehensive in its consideration of current and envisioned communication, navigation and surveillance (CNS) components, and will allow evaluation of critical air and ground traffic management technologies from an overall systems perspective. The FFSIM infrastructure is designed to incorporate all three major components of the ATM triad: aircraft flight decks, air traffic control (ATC), and (eventually) airline operational control (AOC) centers.
OpenFOAM: Open source CFD in research and industry
NASA Astrophysics Data System (ADS)
Jasak, Hrvoje
2009-12-01
The current focus of development in industrial Computational Fluid Dynamics (CFD) is integration of CFD into Computer-Aided product development, geometrical optimisation, robust design and similar. On the other hand, research in CFD aims to extend the boundaries of practical engineering use into "non-traditional" areas. Requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components, and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object-oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of partial differential equations in software, with code functionality provided in library form. The open-source deployment and development model allows the user to achieve the desired versatility in physical modeling without sacrificing complex geometry support or execution efficiency.
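The "equation mimicking" idea is that solver code is written so that it visually resembles the governing partial differential equation. A generic scalar transport equation of the kind such code mimics, for a transported quantity T carried by a velocity field u with diffusivity nu and source S_T (a standard textbook form, not taken from this paper), is:

\[
\frac{\partial T}{\partial t} + \nabla \cdot (\mathbf{u}\, T) - \nabla \cdot (\nu\, \nabla T) = S_T
\]

In libraries of this kind, each term of the equation maps onto a library call, so the discretized statement in code reads in the same order as the terms above.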
Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.
2011-01-01
Summary: The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252
Using hybrid expert system approaches for engineering applications
NASA Technical Reports Server (NTRS)
Allen, R. H.; Boarnet, M. G.; Culbert, C. J.; Savely, R. T.
1987-01-01
In this paper, the use of hybrid expert system shells and hybrid (i.e., algorithmic and heuristic) approaches for solving engineering problems is reported. Aspects of various engineering problem domains are reviewed for a number of examples with specific applications made to recently developed prototype expert systems. Based on this prototyping experience, critical evaluations of and comparisons between commercially available tools, and some research tools, in the United States and Australia, and their underlying problem-solving paradigms are made. Characteristics of the implementation tool and the engineering domain are compared and practical software engineering issues are discussed with respect to hybrid tools and approaches. Finally, guidelines are offered with the hope that expert system development will be less time consuming, more effective, and more cost-effective than it has been in the past.
Hyperspectral Cubesat Constellation for Rapid Natural Hazard Response
NASA Astrophysics Data System (ADS)
Mandl, D.; Huemmrich, K. F.; Ly, V. T.; Handy, M.; Ong, L.; Crum, G.
2015-12-01
With the advent of high performance space networks that provide total coverage for Cubesats, the paradigm of low cost, high temporal coverage with hyperspectral instruments becomes more feasible. The combination of ground cloud computing resources, high-performance low-power onboard processing, total coverage for the cubesats and social media provides an opportunity for an architecture that delivers cost-effective hyperspectral data products for natural hazard response and decision support. This paper describes a series of pathfinder efforts to create a scalable Intelligent Payload Module (IPM) that has flown on a variety of airborne vehicles including Cessna airplanes, Citation jets and a helicopter, and will fly on an Unmanned Aerial System (UAS) hexacopter to monitor natural phenomena. The IPMs developed thus far were built on platforms that emulate a satellite environment and use real satellite flight software and real ground software. In addition, science processing software has been developed that performs hyperspectral processing onboard using various parallel processing techniques to enable creation of onboard hyperspectral data products while consuming low power. A cubesat design was developed that is low cost and that is scalable to larger constellations and thus can provide daily hyperspectral observations for any spot on Earth. The design was based on the existing IPM prototypes and metrics that were developed over the past few years and a shrunken IPM that can sustain up to 800 Mbps throughput. Thus this constellation of hyperspectral cubesats could be constantly monitoring spectra with spectral angle mappers after Level 0, Level 1 Radiometric Correction, and Atmospheric Correction processing. This provides the opportunity for daily monitoring of any spot on Earth at 30 meter resolution, which is not available today.
Long live the Data Scientist, but can he/she persist?
NASA Astrophysics Data System (ADS)
Wyborn, L. A.
2011-12-01
In recent years, the fourth paradigm of data intensive science has slowly taken hold as the increased capacity of instruments and an increasing number of instruments (in particular sensor networks) have changed how fundamental research is undertaken. Most modern scientific research is about digital capture of data direct from instruments, processing it by computers, storing the results on computers and only publishing a small fraction of data in hard copy publications. At the same time, the rapid increase in capacity of supercomputers, particularly at petascale, means that far larger data sets can be analysed, and at greater resolution, than previously possible. The new cloud computing paradigm, which allows distributed data, software and compute resources to be linked by seamless workflows, is creating new opportunities for processing high volumes of data for an increasingly large number of researchers. However, to take full advantage of these compute resources, data sets for analysis have to be aggregated from multiple sources to create high performance data sets. These new technology developments require that scientists must become more skilled in data management and/or have a higher degree of computer literacy. In almost every science discipline there is now an X-informatics branch and a computational X branch (e.g., Geoinformatics and Computational Geoscience): both require a new breed of researcher that has skills in both the science fundamentals and also knowledge of some ICT aspects (computer programming, database design and development, data curation, software engineering). People who can operate in both science and ICT are increasingly known as 'data scientists'. Data scientists are a critical element of many large scale earth and space science informatics projects, particularly those that are tackling current grand challenges at an international level on issues such as climate change, hazard prediction and sustainable development of our natural resources. These projects by their very nature require the integration of multiple digital data sets from multiple sources. Often the preparation of the data for computational analysis can take months and requires painstaking attention to detail to ensure that anomalies identified are real and are not just artefacts of the data preparation and/or the computational analysis. Although data scientists are increasingly vital to successful data intensive earth and space science projects, unless they are recognised for their capabilities in both the science and the computational domains they are likely to migrate to either a science role or an ICT role as their career advances. Most reward and recognition systems do not recognise those with skills in both; hence, getting trained data scientists to persist beyond one or two projects can be a challenge. Those data scientists who persist in the profession are characteristically committed and enthusiastic people who have the support of their organisations to take on this role. They also tend to be people who share developments and are critical to the success of the open source software movement. However, the fact remains that survival of the data scientist as a species is being threatened unless something is done to recognise their invaluable contributions to the new fourth paradigm of science.
Data to Pictures to Data: Outreach Imaging Software and Metadata
NASA Astrophysics Data System (ADS)
Levay, Z.
2011-07-01
A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities and among the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and application of photo/image-processing techniques. Some additional effort is needed to close the loop and enable this imagery to be conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography are now complying with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be usable in more sophisticated, imaginative ways exemplified by Sky in Google Earth and World Wide Telescope.
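As a small, hedged illustration of carrying world-coordinate information alongside outreach imagery, the sketch below reads a FITS header with astropy and converts pixel positions to sky coordinates, the same kind of information that AVM-style photo metadata can carry forward; the file name and pixel positions are hypothetical, and this is not the FITS Liberator or Photoshop workflow itself.

```python
from astropy.io import fits
from astropy.wcs import WCS

# Hypothetical FITS file; any image with a valid celestial WCS would do.
with fits.open("example_image.fits") as hdul:
    header = hdul[0].header
    wcs = WCS(header)

# Convert a few (x, y) pixel positions to sky coordinates (RA/Dec).
pixel_positions = [(0, 0), (512, 512), (1023, 1023)]
for x, y in pixel_positions:
    sky = wcs.pixel_to_world(x, y)
    print(f"pixel ({x}, {y}) -> {sky.to_string('hmsdms')}")
```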
Distance Learners' Perspective on User-Friendly Instructional Materials at the University of Zambia
ERIC Educational Resources Information Center
Simui, F.; Thompson, L. C.; Mundende, K.; Mwewa, G.; Kakana, F.; Chishiba, A.; Namangala, B.
2017-01-01
This case study focuses on print-based instructional materials available to distance education learners at the University of Zambia. Using the Visual Paradigm Software, we model distance education learners' voices into sociograms to make a contribution to the ongoing discourse on quality distance learning in poorly resourced communities. Emerging…
On Parallel Software Engineering Education Using Python
ERIC Educational Resources Information Center
Marowka, Ami
2018-01-01
Python is gaining popularity in academia as the preferred language to teach novices serial programming. The syntax of Python is clean, easy, and simple to understand. At the same time, it is a high-level programming language that supports multi programming paradigms such as imperative, functional, and object-oriented. Therefore, by default, it is…
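To make the multi-paradigm point concrete, here is a small, illustrative comparison (not taken from the article) of the same computation, summing the squares of the even numbers below 10, written in imperative, functional, and object-oriented style in Python.

```python
# Imperative style: explicit loop and mutable accumulator.
total = 0
for n in range(10):
    if n % 2 == 0:
        total += n * n

# Functional style: a generator expression composing filter and map.
total_functional = sum(n * n for n in range(10) if n % 2 == 0)

# Object-oriented style: the computation wrapped in a small class.
class SquareSummer:
    def __init__(self, limit):
        self.limit = limit

    def sum_even_squares(self):
        return sum(n * n for n in range(self.limit) if n % 2 == 0)

assert total == total_functional == SquareSummer(10).sum_even_squares() == 120
```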
ERIC Educational Resources Information Center
Cramer, Sharon F.; Tetewsky, Sheldon J.; Marczynski, Kelly S.
2010-01-01
Implementations of new or major upgrades of existing student information systems require incorporation of new paradigms and the exchange of familiar routines for new methods. As a result, implementations are almost always time consuming and expensive. Many people in the field of information technology have identified some of the challenges faced…
Educational Simulation in Practice: A Teaching Experience Using a Flight Simulator
ERIC Educational Resources Information Center
Ruiz, Sergio; Aguado, Carlos; Moreno, Romualdo
2014-01-01
The use of appropriate Educational Simulation systems (software and hardware for learning purposes) may contribute to the application of the "Learning by Doing" (LbD) paradigm in the classroom, thus helping students to assimilate the theoretical concepts of a subject and acquire certain pre-defined competencies in a more didactical way.…
ERIC Educational Resources Information Center
Conn, Samuel S.; Reichgelt, Han
2013-01-01
Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…
Determining the Most Suitable E-Learning Delivery Mode for TUT Students
ERIC Educational Resources Information Center
Odunaike, Solomon Adeyemi; Chuene, Daniel
2011-01-01
Traditionally, in education and business environments, Information Technology has been seen as purely a support or operational tool. Advances in computing, information storage, software, and networking are all leading to new tools for teaching and learning and are also changing the paradigm for new initiatives in classroom teaching. The Internet…
NASA Technical Reports Server (NTRS)
Davis, Bruce E.; Elliot, Gregory
1989-01-01
Jackson State University recently established the Center for Spatial Data Research and Applications, a Geographical Information System (GIS) and remote sensing laboratory. Taking advantage of new technologies and new directions in the spatial (geographic) sciences, JSU is building a Center of Excellence in Spatial Data Management. New opportunities for research, applications, and employment are emerging. GIS requires fundamental shifts and new demands in traditional computer science and geographic training. The Center is not merely another computer lab but is one setting the pace in a new applied frontier. GIS and its associated technologies are discussed. The Center's facilities are described. An ARC/INFO GIS runs on a VAX mainframe, with numerous workstations. Image processing packages include ELAS, LIPS, VICAR, and ERDAS. A host of hardware and software peripherals are used in support. Numerous projects are underway, such as the construction of a Gulf of Mexico environmental database, development of AI in image processing, a land use dynamics study of metropolitan Jackson, and others. A new academic interdisciplinary program in Spatial Data Management is under development, combining courses in Geography and Computer Science. The broad range of JSU's GIS and remote sensing activities is addressed. The impacts on changing paradigms in the university and in the professional world conclude the discussion.
Applications of software-defined radio (SDR) technology in hospital environments.
Chávez-Santiago, Raúl; Mateska, Aleksandra; Chomu, Konstantin; Gavrilovska, Liljana; Balasingham, Ilangko
2013-01-01
A software-defined radio (SDR) is a radio communication system where the major part of its functionality is implemented by means of software in a personal computer or embedded system. Such a design paradigm has the major advantage of producing devices that can receive and transmit widely different radio protocols based solely on the software used. This flexibility opens several application opportunities in hospital environments, where a large number of wired and wireless electronic devices must coexist in confined areas like operating rooms and intensive care units. This paper outlines some possible applications in the 2360-2500 MHz frequency band. These applications include the integration of wireless medical devices in a common communication platform for seamless interoperability, and cognitive radio (CR) for body area networks (BANs) and wireless sensor networks (WSNs) for medical environmental surveillance. The description of a proof-of-concept CR prototype is also presented.
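As a minimal, generic illustration of the "functionality in software" idea (not the hospital platform described in the paper), the sketch below demodulates a frequency-modulated complex baseband signal purely in software with NumPy; the sample rate and deviation are arbitrary assumptions.

```python
import numpy as np

fs = 240_000          # assumed sample rate of the complex baseband stream (Hz)
deviation = 5_000     # assumed FM deviation (Hz)
t = np.arange(fs) / fs

# Synthesize a test signal: a 1 kHz tone frequency-modulated onto baseband.
message = np.sin(2 * np.pi * 1_000 * t)
phase = 2 * np.pi * deviation * np.cumsum(message) / fs
iq = np.exp(1j * phase)                       # what an SDR front end would deliver

# "Radio" functionality implemented in software: a polar discriminator.
# The phase difference between consecutive samples is proportional to
# the instantaneous frequency, i.e. to the original message.
demod = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2 * np.pi * deviation)

# demod now approximates the 1 kHz message tone (up to edge effects).
```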
Revel8or: Model Driven Capacity Planning Tool Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Liu, Yan; Bui, Ngoc B.
2007-05-31
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
A Specification for a Godunov-type Eulerian 2-D Hydrocode, Revision 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nystrom, William D; Robey, Jonathan M
2012-05-01
The purpose of this code specification is to describe an algorithm for solving the Euler equations of hydrodynamics in a 2D rectangular region in sufficient detail to allow a software developer to produce an implementation on their target platform using their programming language of choice without requiring detailed knowledge and experience in the field of computational fluid dynamics. It should be possible for a software developer who is proficient in the programming language of choice and is knowledgeable of the target hardware to produce an efficient implementation of this specification if they also possess a thorough working knowledge of parallel programming and have some experience in scientific programming using fields and meshes. On modern architectures, it will be important to focus on issues related to the exploitation of the fine grain parallelism and data locality present in this algorithm. This specification aims to make that task easier by presenting the essential details of the algorithm in a systematic and language neutral manner while also avoiding the inclusion of implementation details that would likely be specific to a particular type of programming paradigm or platform architecture.
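For reference, the 2D compressible Euler equations that such a Godunov-type specification targets can be written in conservative form as below (standard notation, not quoted from the specification itself), with density rho, velocity components u and v, pressure p, and total energy per unit volume E.

```latex
\[
\frac{\partial \mathbf{U}}{\partial t}
  + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
  + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = 0,
\qquad
\mathbf{U} = \begin{pmatrix} \rho \\ \rho u \\ \rho v \\ E \end{pmatrix},
\quad
\mathbf{F} = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ \rho u v \\ (E + p)\,u \end{pmatrix},
\quad
\mathbf{G} = \begin{pmatrix} \rho v \\ \rho u v \\ \rho v^2 + p \\ (E + p)\,v \end{pmatrix}.
\]
```

The system is closed with an equation of state, for an ideal gas p = (gamma - 1)(E - (1/2) rho (u^2 + v^2)).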
Parsons, Thomas D; McMahan, Timothy; Kane, Robert
2018-01-01
Clinical neuropsychologists have long underutilized computer technologies for neuropsychological assessment. Given the rapid advances in technology (e.g. virtual reality; tablets; iPhones) and the increased accessibility in the past decade, there is an on-going need to identify optimal specifications for advanced technologies while minimizing potential sources of error. Herein, we discuss concerns raised by a joint American Academy of Clinical Neuropsychology/National Academy of Neuropsychology position paper. Moreover, we proffer parameters for the development and use of advanced technologies in neuropsychological assessments. We aim to first describe software and hardware configurations that can impact a computerized neuropsychological assessment. This is followed by a description of best practices for developers and practicing neuropsychologists to minimize error in neuropsychological assessments using advanced technologies. We also discuss the relevance of weighing potential computer error in light of possible errors associated with traditional testing. Throughout there is an emphasis on the need for developers to provide bench test results for their software's performance on various devices and minimum specifications (documented in manuals) for the hardware (e.g. computer, monitor, input devices) in the neuropsychologist's practice. Advances in computerized assessment platforms offer both opportunities and challenges. The challenges can appear daunting but are manageable and require informed consumers who can appreciate the issues and ask pertinent questions in evaluating their options.
Client - server programs analysis in the EPOCA environment
NASA Astrophysics Data System (ADS)
Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano
1996-09-01
Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer has first to ensure that the implementation behaves correctly, in particular that it is deadlock free. Second, he has to guarantee that the program meets predefined performance requirements. This paper addresses the issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kind of qualitative and quantitative analysis supported by its tools.
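To illustrate the kind of qualitative check mentioned above (deadlock freedom) outside of EPOCA itself, here is a minimal, illustrative reachability search over an ordinary Petri net in Python; the net structure and marking are made-up examples, and GSPN timing and stochastic aspects are deliberately omitted.

```python
from collections import deque

def enabled(marking, pre):
    """Transitions whose input places all carry enough tokens."""
    return [t for t, needs in pre.items()
            if all(marking[p] >= n for p, n in needs.items())]

def fire(marking, t, pre, post):
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] -= n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

def find_deadlocks(initial, pre, post):
    """Breadth-first search of the reachability graph; return dead markings."""
    seen, dead = set(), []
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        ts = enabled(m, pre)
        if not ts:
            dead.append(m)
        for t in ts:
            queue.append(fire(m, t, pre, post))
    return dead

# Toy client-server net: a client sends a request, the server replies.
pre  = {"send":    {"client_idle": 1},
        "serve":   {"request": 1, "server_idle": 1},
        "receive": {"reply": 1}}
post = {"send":    {"request": 1},
        "serve":   {"reply": 1, "server_idle": 1},
        "receive": {"client_idle": 1}}
initial = {"client_idle": 1, "server_idle": 1, "request": 0, "reply": 0}

print(find_deadlocks(initial, pre, post))  # [] means no reachable deadlock
```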
User Interface Technology for Formal Specification Development
NASA Technical Reports Server (NTRS)
Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Formal specification development and modification are an essential component of the knowledge-based software life cycle. User interface technology is needed to empower end-users to create their own formal specifications. This paper describes the advanced user interface for AMPHION, a knowledge-based software engineering system that targets scientific subroutine libraries. AMPHION is a generic, domain-independent architecture that is specialized to an application domain through a declarative domain theory. Formal specification development and reuse is made accessible to end-users through an intuitive graphical interface that provides semantic guidance in creating diagrams denoting formal specifications in an application domain. The diagrams also serve to document the specifications. Automatic deductive program synthesis ensures that end-user specifications are correctly implemented. The tables that drive AMPHION's user interface are automatically compiled from a domain theory; portions of the interface can be customized by the end-user. The user interface facilitates formal specification development by hiding syntactic details, such as logical notation. It also turns some of the barriers for end-user specification development associated with strongly typed formal languages into active sources of guidance, without restricting advanced users. The interface is especially suited for specification modification. AMPHION has been applied to the domain of solar system kinematics through the development of a declarative domain theory. Testing over six months with planetary scientists indicates that AMPHION's interactive specification acquisition paradigm enables users to develop, modify, and reuse specifications at least an order of magnitude more rapidly than manual program development.
Astronomy Data Visualization with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2015-08-01
We present innovative methods and techniques for using Blender, a 3D software package, in the visualization of astronomical data. N-body simulations, data cubes, galaxy and stellar catalogs, and planetary surface maps can be rendered in high-quality videos for exploratory data analysis. Blender's API is Python-based, making it advantageous for use in astronomy with flexible libraries like astroPy. Examples will be exhibited that showcase the features of the software in astronomical visualization paradigms. 2D and 3D voxel texture applications, animations, camera movement, and composite renders are introduced to the astronomer's toolkit, along with how they mesh with different forms of data.
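As a hedged sketch of the catalog-rendering use case (to be run inside Blender's Python console or with blender --background --python, with the default startup scene and camera assumed), the snippet below places a small sphere at each catalog position; the coordinates are made up, and the scaling and material choices one would use for a real stellar or galaxy catalog are omitted.

```python
import bpy

# Hypothetical catalog: (x, y, z) positions already projected into scene units.
catalog = [(0.0, 0.0, 0.0), (1.5, 0.2, -0.7), (-2.0, 1.1, 0.4)]

for i, (x, y, z) in enumerate(catalog):
    # One low-resolution UV sphere per catalog entry.
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.05, location=(x, y, z),
                                         segments=16, ring_count=8)
    bpy.context.active_object.name = f"source_{i:04d}"

# Render a still frame with the scene's existing camera.
bpy.context.scene.render.filepath = "//catalog_render.png"
bpy.ops.render.render(write_still=True)
```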
Quantum Computing Architectural Design
NASA Astrophysics Data System (ADS)
West, Jacob; Simms, Geoffrey; Gyure, Mark
2006-03-01
Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.
Colomb, Julien; Reiter, Lutz; Blaszkiewicz, Jedrzej; Wessnitzer, Jan; Brembs, Bjoern
2012-01-01
Insects have been among the most widely used model systems for studying the control of locomotion by nervous systems. In Drosophila, we implemented a simple test for locomotion: in Buridan's paradigm, flies walk back and forth between two inaccessible visual targets [1]. Until today, the lack of easily accessible tools for tracking the fly position and analyzing its trajectory has probably contributed to the slow acceptance of Buridan's paradigm. We present here a package of open source software designed to track a single animal walking in a homogenous environment (Buritrack) and to analyze its trajectory. The Centroid Trajectory Analysis (CeTrAn) software is coded in the open source statistics project R. It extracts eleven metrics and includes correlation analyses and a Principal Components Analysis (PCA). It was designed to be easily customized to personal requirements. In combination with inexpensive hardware, these tools can readily be used for teaching and research purposes. We demonstrate the capabilities of our package by measuring the locomotor behavior of adult Drosophila melanogaster (whose wings were clipped), either in the presence or in the absence of visual targets, and comparing the latter to different computer-generated data. The analysis of the trajectories confirms that flies are centrophobic and shows that inaccessible visual targets can alter the orientation of the flies without changing their overall patterns of activity. Using computer generated data, the analysis software was tested, and chance values for some metrics (as well as chance value for their correlation) were set. Our results prompt the hypothesis that fixation behavior is observed only if negative phototaxis can overcome the propensity of the flies to avoid the center of the platform. Together with our companion paper, we provide new tools to promote Open Science as well as the collection and analysis of digital behavioral data.
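As an illustrative sketch (Python, not R) of the kind of trajectory metrics such an analysis extracts, the function below computes two simple quantities, total distance walked and the fraction of samples spent near the platform center, from x/y position samples; the metric definitions and threshold are assumptions for illustration, not CeTrAn's actual eleven metrics.

```python
import numpy as np

def trajectory_metrics(x, y, center=(0.0, 0.0), center_radius=0.25):
    """Simple locomotor metrics from equally sampled x/y positions.

    Positions are assumed normalized to a unit-radius arena, and
    center_radius is a fraction of that radius.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)

    # Total path length: sum of distances between consecutive samples.
    steps = np.hypot(np.diff(x), np.diff(y))
    total_distance = steps.sum()

    # Centrophobism proxy: fraction of samples within center_radius of the center.
    r = np.hypot(x - center[0], y - center[1])
    center_fraction = float(np.mean(r < center_radius))

    return {"total_distance": total_distance,
            "center_fraction": center_fraction}

# Example with a made-up circular walk along the arena edge.
theta = np.linspace(0, 4 * np.pi, 400)
metrics = trajectory_metrics(0.9 * np.cos(theta), 0.9 * np.sin(theta))
print(metrics)   # a low center_fraction is consistent with centrophobic behaviour
```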
NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.
Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul
2014-09-30
As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.
Katzman, Braden; Tang, Doris; Santella, Anthony; Bao, Zhirong
2018-04-04
AceTree, a software application first released in 2006, facilitates exploration, curation and editing of tracked C. elegans nuclei in 4-dimensional (4D) fluorescence microscopy datasets. Since its initial release, AceTree has been continuously used to interact with, edit and interpret C. elegans lineage data. In its 11-year lifetime, AceTree has been periodically updated to meet the technical and research demands of its community of users. This paper presents the newest iteration of AceTree, which contains extensive updates, demonstrates the new applicability of AceTree in other developmental contexts, and presents its evolutionary software development paradigm as a viable model for maintaining scientific software. Large-scale updates have been made to the user interface for an improved user experience. Tools have been grouped according to functionality and obsolete methods have been removed. Internal requirements have been changed to enable greater flexibility of use both in C. elegans contexts and in other model organisms. Additionally, the original 3-dimensional (3D) viewing window has been completely reimplemented. The new window provides a new suite of tools for data exploration. By responding to technical advancements and research demands, AceTree has remained a useful tool for scientific research for over a decade. The updates made to the codebase have extended AceTree's applicability beyond its initial use in C. elegans and enabled its usage with other model organisms. The evolution of AceTree demonstrates a viable model for maintaining scientific software over long periods of time.
NASA Technical Reports Server (NTRS)
Stark, Michael; Hennessy, Joseph F. (Technical Monitor)
2002-01-01
My assertion is that not only are product lines a relevant research topic, but that the tools used by empirical software engineering researchers can address observed practical problems. Our experience at NASA has been that there are often externally proposed solutions available, but that we have had difficulties applying them in our particular context. We have also focused on return-on-investment issues when evaluating product lines, and while these are important, one cannot attain objective data on success or failure until several applications from a product family have been deployed. The use of the Quality Improvement Paradigm (QIP) can address these issues: (1) Planning an adoption path from an organization's current state to a product line approach; (2) Constructing a development process to fit the organization's adoption path; (3) Evaluating product line development processes as the project is being developed. The QIP consists of the following six steps: (1) Characterize the project and its environment; (2) Set quantifiable goals for successful project performance; (3) Choose the appropriate process models, supporting methods, and tools for the project; (4) Execute the process, analyze interim results, and provide real-time feedback for corrective action; (5) Analyze the results of completed projects and recommend improvements; and (6) Package the lessons learned as updated and refined process models. A figure shows the QIP in detail. The iterative nature of the QIP supports an incremental development approach to product lines, and the project learning and feedback provide the necessary early evaluations.
The UMLS Knowledge Source Server: an experience in Web 2.0 technologies.
Thorn, Karen E; Bangalore, Anantha K; Browne, Allen C
2007-10-11
The UMLS Knowledge Source Server (UMLSKS), developed at the National Library of Medicine (NLM), makes the knowledge sources of the Unified Medical Language System (UMLS) available to the research community over the Internet. In 2003, the UMLSKS was redesigned utilizing state-of-the-art technologies available at that time. That design offered a significant improvement over the prior version but presented a set of technology-dependent issues that limited its functionality and usability. Four areas of desired improvement were identified: software interfaces, web interface content, system maintenance/deployment, and user authentication. By employing next generation web technologies, newer authentication paradigms and further refinements in modular design methods, these areas could be addressed and corrected to meet the ever increasing needs of UMLSKS developers. In this paper we detail the issues present with the existing system and describe the new system's design using new technologies considered entrants in the Web 2.0 development era.
Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M
2011-09-10
The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. Copyright © 2011 John Wiley & Sons, Ltd.
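For concreteness, the binary GLMM that these procedures fit can be written as follows (standard notation, not reproduced from the paper), with fixed effects beta and subject-specific random effects b_i:

```latex
\[
\operatorname{logit}\,\Pr\!\left(y_{ij} = 1 \mid \mathbf{b}_i\right)
  = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + \mathbf{z}_{ij}^{\top}\mathbf{b}_i,
\qquad
\mathbf{b}_i \sim N(\mathbf{0}, \mathbf{D}).
\]
```

Different procedures approximate the intractable integral over b_i in the marginal likelihood in different ways (for example, penalized quasi-likelihood versus adaptive quadrature), which is a typical source of the discrepancies seen when fitting correlated binary responses.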
GTest: a software tool for graphical assessment of empirical distributions' Gaussianity.
Barca, E; Bruno, E; Bruno, D E; Passarella, G
2016-03-01
In the present paper, the novel software GTest is introduced, designed for testing the normality of a user-specified empirical distribution. It has been implemented with two unusual characteristics: the first is the user option of selecting four different versions of the normality test, each of them suited to a specific dataset or goal, and the second is the inferential paradigm that informs the output of such tests: it is basically graphical and intrinsically self-explanatory. The concept of inference-by-eye is an emerging inferential approach which will likely find successful application in the near future, owing to the growing need to widen the audience of users of statistical methods to people with informal statistical skills. For instance, the latest European regulation concerning environmental issues introduced strict protocols for data handling (data quality assurance, outlier detection, etc.) and information exchange (areal statistics, trend detection, etc.) between regional and central environmental agencies. Therefore, more and more frequently, laboratory and field technicians will be requested to use complex software applications to subject data coming from monitoring, surveying or laboratory activities to specific statistical analyses. Unfortunately, inferential statistics, which actually influence the decision-making processes for the correct management of environmental resources, are often implemented in a way that expresses their outcomes in numerical form with brief comments in strict statistical jargon (degrees of freedom, level of significance, accepted/rejected H0, etc.). The interpretation of such outcomes is therefore often difficult for people with little statistical knowledge. In this framework, the paradigm of visual inference can help fill the gap by providing outcomes in self-explanatory graphical form with a brief comment in common language. The difficulties experienced by colleagues, and their request for an effective tool to address them, motivated us to adopt the inference-by-eye paradigm and implement an easy-to-use, quick and reliable statistical tool. GTest visualizes its outcomes as a modified version of the Q-Q plot. The application has been developed in Visual Basic for Applications (VBA) within MS Excel 2010, which proved to have all the characteristics of robustness and reliability needed. GTest provides true graphical normality tests which are as reliable as any quantitative statistical approach but much easier to understand. The Q-Q plots have been integrated with the outlining of an acceptance region around the representation of the theoretical distribution, defined in accordance with the alpha level of significance and the data sample size. The test decision rule is the following: if the empirical scatterplot falls completely within the acceptance region, then it can be concluded that the empirical distribution fits the theoretical one at the given alpha level. A comprehensive case study has been carried out with simulated and real-world data in order to check the robustness and reliability of the software.
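The following sketch (Python rather than VBA, and not GTest's exact construction) shows one common way to build a normal Q-Q plot with a pointwise acceptance band derived from the asymptotic standard error of the order statistics; the alpha level, band formula and plotting choices are illustrative assumptions.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

def qq_with_band(sample, alpha=0.05):
    """Normal Q-Q plot with a pointwise acceptance band; returns True if
    every standardized sample quantile lies inside the band."""
    x = np.sort((sample - np.mean(sample)) / np.std(sample, ddof=1))
    n = len(x)
    p = (np.arange(1, n + 1) - 0.5) / n           # plotting positions
    q = stats.norm.ppf(p)                          # theoretical quantiles

    # Pointwise band from the asymptotic SE of normal order statistics:
    # SE(q_i) ~ sqrt(p_i (1 - p_i) / n) / phi(q_i).
    se = np.sqrt(p * (1 - p) / n) / stats.norm.pdf(q)
    z = stats.norm.ppf(1 - alpha / 2)
    lower, upper = q - z * se, q + z * se

    plt.plot(q, x, "o", label="empirical quantiles")
    plt.plot(q, q, "-", label="theoretical line")
    plt.fill_between(q, lower, upper, alpha=0.3, label="acceptance band")
    plt.xlabel("theoretical quantiles")
    plt.ylabel("sample quantiles")
    plt.legend()
    plt.show()

    return bool(np.all((x >= lower) & (x <= upper)))

print(qq_with_band(np.random.default_rng(1).normal(size=200)))
```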
Looking Forward: Comment on Morgante, Zolfaghari, and Johnson
ERIC Educational Resources Information Center
Creel, Sarah C.
2012-01-01
Morgante et al. (in press) find inconsistencies in the time reporting of a Tobii T60XL eye tracker. Their study raises important questions about the use of the Tobii T-series in particular, and various software and hardware in general, in different infant eye tracking paradigms. It leaves open the question of the source of the inconsistencies.…
ERIC Educational Resources Information Center
Weld, Christopher
2014-01-01
Providing audio files in lieu of written remarks on graded assignments is arguably a more effective means of feedback, allowing students to better process and understand the critique and improve their future work. With emerging technologies and software, this audio feedback alternative to the traditional paradigm of providing written comments…
[Paradigm shift in health: forecasting and causation as a basis for risk management].
Denisov, E I; Prokopenko, L V; Golovaneva, G V; Stepanian, I V
2012-01-01
The problem of occupational risk management (ORM) is discussed using the evidence-based medicine approach and bio- and IT-technologies. The prognosis and causation of work-related health disorders are analyzed as components of the ORM system. The Web-based handbook "Occupational risk assessment", with software and information materials, is presented as a practical tool.
ERIC Educational Resources Information Center
Leventhal, Brian C.; Stone, Clement A.
2018-01-01
Interest in Bayesian analysis of item response theory (IRT) models has grown tremendously due to the appeal of the paradigm among psychometricians, advantages of these methods when analyzing complex models, and availability of general-purpose software. Possible models include models which reflect multidimensionality due to designed test structure,…
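As context for the kinds of models referred to above, a common Bayesian IRT building block is the two-parameter logistic model (a standard formulation, not taken from the article), with person ability theta_i, item discrimination a_j and difficulty b_j, and one typical prior choice shown:

```latex
\[
\Pr\!\left(y_{ij} = 1 \mid \theta_i, a_j, b_j\right)
  = \frac{1}{1 + \exp\!\left[-a_j\left(\theta_i - b_j\right)\right]},
\qquad
\theta_i \sim N(0, 1),\quad
a_j \sim \mathrm{LogNormal}(\mu_a, \sigma_a^2),\quad
b_j \sim N(\mu_b, \sigma_b^2).
\]
```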
Social Software: A Powerful Paradigm for Building Technology for Global Learning
ERIC Educational Resources Information Center
Wooding, Amy; Wooding, Kjell
2018-01-01
It is not difficult to imagine a world where internet-connected mobile devices are accessible to everyone. Can these technologies be used to help solve the challenges of global education? This was the challenge posed by the Global Learning XPRIZE--a $15 million grand challenge competition aimed at addressing this global teaching shortfall. In…
Enabling SDN in VANETs: What is the Impact on Security?
Di Maio, Antonio; Palattella, Maria Rita; Soua, Ridha; Lamorte, Luca; Vilajosana, Xavier; Alonso-Zarate, Jesus; Engel, Thomas
2016-01-01
The demand for safe and secure journeys over roads and highways has been growing at a tremendous pace over recent decades. At the same time, the smart city paradigm has emerged to improve citizens’ quality of life by developing the smart mobility concept. Vehicular Ad hoc NETworks (VANETs) are widely recognized to be instrumental in realizing such concept, by enabling appealing safety and infotainment services. Such networks come with their own set of challenges, which range from managing high node mobility to securing data and user privacy. The Software Defined Networking (SDN) paradigm has been identified as a suitable solution for dealing with the dynamic network environment, the increased number of connected devices, and the heterogeneity of applications. While some preliminary investigations have been already conducted to check the applicability of the SDN paradigm to VANETs, and its presumed benefits for managing resources and mobility, it is still unclear what impact SDN will have on security and privacy. Security is a relevant issue in VANETs, because of the impact that threats can have on drivers’ behavior and quality of life. This paper opens a discussion on the security threats that future SDN-enabled VANETs will have to face, and investigates how SDN could be beneficial in building new countermeasures. The analysis is conducted in real use cases (smart parking, smart grid of electric vehicles, platooning, and emergency services), which are expected to be among the vehicular applications that will most benefit from introducing an SDN architecture. PMID:27929443
Distributed Engine Control Empirical/Analytical Verification Tools
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan
2013-01-01
NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and providing decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to easily assemble a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating DCS (distributed engine control systems) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and the hardware simulator provide the capability to simulate virtual subcomponents, as well as to swap in actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the C-MAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include the communication data network, smart sensor and actuator nodes, the centralized control system (FADEC, full authority digital engine control), and the aircraft engine itself. The DECsim tool can allow simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed. Such factors include control system performance, reliability, weight, and bandwidth utilization.
NASA Astrophysics Data System (ADS)
Steiger, Damian S.; Haener, Thomas; Troyer, Matthias
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
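A minimal sketch of the compiler stack in use, written in ProjectQ's Python-embedded language and assuming the default simulator backend, allocates two qubits, prepares a Bell state and measures it:

```python
from projectq import MainEngine
from projectq.ops import H, CNOT, Measure, All

eng = MainEngine()                 # default: compile to the built-in simulator
qubits = eng.allocate_qureg(2)

H | qubits[0]                      # Hadamard on the first qubit
CNOT | (qubits[0], qubits[1])      # entangle: Bell state (|00> + |11>)/sqrt(2)
All(Measure) | qubits              # measure both qubits

eng.flush()                        # push the circuit through the compiler chain
print([int(q) for q in qubits])    # prints [0, 0] or [1, 1]
```

Swapping the backend passed to MainEngine is what retargets the same high-level program to different simulators or hardware.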
Assessment of physical server reliability in multi cloud computing system
NASA Astrophysics Data System (ADS)
Kalyani, B. J. D.; Rao, Kolasani Ramchand H.
2018-04-01
Business organizations nowadays function with more than one cloud provider. By spreading cloud deployment across multiple service providers, they create space for competitive prices that minimize the burden on enterprise spending budgets. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered, with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and then combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer, with the required algorithms, and explore the steps in the assessment of server reliability.
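One simple way to combine such layer-level figures, under the (strong) assumption that the layers fail independently and are all required, is a serial composition; redundant physical servers within the server layer can then be treated as a parallel block. This is a generic reliability-engineering sketch, not the specific model of the paper:

```latex
\[
R_{\text{application stack}} = R_{\text{application}} \cdot R_{\text{virtualization}} \cdot R_{\text{server}},
\qquad
R_{\text{server}} = 1 - \prod_{k=1}^{m}\left(1 - R_k\right),
\]
```

where R_k is the reliability of the k-th of m redundant physical servers.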
Visualizing Astronomical Data with Blender
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2014-01-01
We present methods for using the 3D graphics program Blender in the visualization of astronomical data. The software's forte for animating 3D data lends itself well to use in astronomy. The Blender graphical user interface and Python scripting capabilities can be utilized in the generation of models for data cubes, catalogs, simulations, and surface maps. We review methods for data import, 2D and 3D voxel texture applications, animations, camera movement, and composite renders. Rendering times can be improved by using graphic processing units (GPUs). A number of examples are shown using the software features most applicable to various kinds of data paradigms in astronomy.
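As a hedged illustration of the camera-movement and GPU-rendering points (to be run inside Blender, with the scene's active camera assumed to exist; object positions and frame numbers are arbitrary), the snippet below keyframes a simple camera move and switches the render engine to Cycles with GPU compute:

```python
import bpy

scene = bpy.context.scene
camera = scene.camera            # the scene's active camera object

# Keyframe a simple dolly move over 120 frames.
camera.location = (8.0, -8.0, 5.0)
camera.keyframe_insert(data_path="location", frame=1)
camera.location = (3.0, -3.0, 2.0)
camera.keyframe_insert(data_path="location", frame=120)

# Use the Cycles path tracer and request GPU compute
# (the available device type depends on the local Blender/driver setup).
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'

# Render the animation to the output path configured in the scene.
bpy.ops.render.render(animation=True)
```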
The MGDO software library for data analysis in Ge neutrinoless double-beta decay experiments
NASA Astrophysics Data System (ADS)
Agostini, M.; Detwiler, J. A.; Finnerty, P.; Kröninger, K.; Lenz, D.; Liu, J.; Marino, M. G.; Martin, R.; Nguyen, K. D.; Pandola, L.; Schubert, A. G.; Volynets, O.; Zavarise, P.
2012-07-01
The Gerda and Majorana experiments will search for neutrinoless double-beta decay of 76Ge using isotopically enriched high-purity germanium detectors. Although the experiments differ in conceptual design, they share many aspects in common, and in particular will employ similar data analysis techniques. The collaborations are jointly developing a C++ software library, MGDO, which contains a set of data objects and interfaces to encapsulate, store and manage physical quantities of interest, such as waveforms and high-purity germanium detector geometries. These data objects define a common format for persistent data, whether it is generated by Monte Carlo simulations or an experimental apparatus, to reduce code duplication and to ease the exchange of information between detector systems. MGDO also includes general-purpose analysis tools that can be used for the processing of measured or simulated digital signals. The MGDO design is based on the Object-Oriented programming paradigm and is very flexible, allowing for easy extension and customization of the components. The tools provided by the MGDO libraries are used by both Gerda and Majorana.
NASA Astrophysics Data System (ADS)
Moulds, S.; Buytaert, W.; Mijic, A.
2015-10-01
We present the lulcc software package, an object-oriented framework for land use change modelling written in the R programming language. The contribution of the work is to resolve the following limitations associated with the current land use change modelling paradigm: (1) the source code for model implementations is frequently unavailable, severely compromising the reproducibility of scientific results and making it impossible for members of the community to improve or adapt models for their own purposes; (2) ensemble experiments to capture model structural uncertainty are difficult because of fundamental differences between implementations of alternative models; and (3) additional software is required because existing applications frequently perform only the spatial allocation of change. The package includes a stochastic ordered allocation procedure as well as an implementation of the CLUE-S algorithm. We demonstrate its functionality by simulating land use change at the Plum Island Ecosystems site, using a data set included with the package. It is envisaged that lulcc will enable future model development and comparison within an open environment.
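To convey the idea behind the spatial allocation step in such models (written in Python for consistency with the other sketches here, so this is not lulcc's R interface), the function below performs a naive ordered allocation: each land use class claims its most suitable unallocated cells until its demanded area is met. The suitability values and demands are made up.

```python
import numpy as np

def ordered_allocation(suitability, demand):
    """Allocate cells to land use classes by descending suitability.

    suitability: (n_cells, n_classes) scores, e.g. from a statistical model
    demand:      number of cells each class must receive in the target year
    Returns an array of class indices (-1 where nothing was allocated).
    """
    n_cells, n_classes = suitability.shape
    allocation = np.full(n_cells, -1, dtype=int)
    remaining = list(demand)

    for cls in range(n_classes):                       # fixed class order
        order = np.argsort(-suitability[:, cls])       # most suitable first
        for cell in order:
            if remaining[cls] == 0:
                break
            if allocation[cell] == -1:                 # still unallocated
                allocation[cell] = cls
                remaining[cls] -= 1
    return allocation

rng = np.random.default_rng(0)
alloc = ordered_allocation(rng.random((100, 3)), demand=[40, 30, 20])
print(np.bincount(alloc[alloc >= 0]))   # [40 30 20]
```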
A practical approach to virtualization in HEP
NASA Astrophysics Data System (ADS)
Buncic, P.; Aguado Sánchez, C.; Blomer, J.; Harutyunyan, A.; Mudrinic, M.
2011-01-01
In the attempt to solve the problem of processing data coming from LHC experiments at CERN at a rate of 15 PB per year, for almost a decade the High Energy Physics (HEP) community has focused its efforts on the development of the Worldwide LHC Computing Grid. This generated large interest and expectations, promising to revolutionize computing. Meanwhile, having initially taken part in the Grid standardization process, industry has moved in a different direction and started promoting the Cloud Computing paradigm, which aims to solve problems on a similar scale and in an equally seamless way as was expected in the idealized Grid approach. A key enabling technology behind Cloud computing is server virtualization. In early 2008, an R&D project was established in the PH-SFT group at CERN to investigate how virtualization technology could be used to improve and simplify the daily interaction of physicists with experiment software frameworks and the Grid infrastructure. In this article we shall first briefly compare Grid and Cloud computing paradigms and then summarize the results of the R&D activity, pointing out where and how virtualization technology could be effectively used in our field in order to maximize practical benefits whilst avoiding potential pitfalls.
Higher-order neural network software for distortion invariant object recognition
NASA Technical Reports Server (NTRS)
Reid, Max B.; Spirkovska, Lilly
1991-01-01
The state-of-the-art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network, and does not have to be learned. Only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.
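For context, the output of a third-order unit of the kind described can be written as below (a standard higher-order network formulation; the invariance description is a summary, not a quotation): the response sums weighted products of input-pixel triples, and invariance is obtained by forcing all triples that form similar triangles (same interior angles) to share the same weight.

```latex
\[
y = \sigma\!\left(\sum_{i < j < k} w_{ijk}\, x_i\, x_j\, x_k\right),
\qquad
w_{ijk} = w\!\left(\alpha_{ijk}, \beta_{ijk}, \gamma_{ijk}\right),
\]
```

where x_i are input pixels, sigma is a threshold or sigmoid nonlinearity, and (alpha, beta, gamma) are the interior angles of the triangle formed by pixels i, j, k. Because those angles are unchanged by translation, scaling, and in-plane rotation, the weight sharing builds the invariance into the network rather than requiring it to be learned.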
The search for understanding: the role of paradigms.
Kelly, Marcella; Dowling, Maura; Millar, Michelle
2018-03-16
Kuhn's (1962) acknowledgement of a paradigm as a way that scientists make sense of their world and its reality gave recognition to the idea of 'paradigm shift'. This shift exposes the transience of paradigm development shaped by societal and scientific evolution. This ongoing evolutionary development provides the researcher with many paradigms to consider regarding how research is undertaken and the search for understanding achieved. An understanding of paradigm development is necessary when planning a study and can shape the search for understanding. It is hoped that the discussion presented here will assist novice and experienced researchers in articulating the rationales for their paradigm choices. An overview of the dominant paradigms is presented, reflecting ongoing paradigm development shaped by ontological, epistemological and methodological perspectives. Potential paradigm choices that shape research aims, objectives and focus in the search for understanding are considered. The inherent debates about paradigm shift, division, war and synthesis leave the researcher many perspectives to consider. Articulating the world views underpinning constructivism, interpretivism and pragmatism is particularly challenging because of the blurring of boundaries between them. The evolutionary nature of paradigmatic development has provided nurse researchers with the opportunity for methodological openness to the myriad research approaches, methods and designs that they may choose to answer their research question. However, it is imperative that researchers consider their ontological stances and the nature of their research questions. This is challenging in constructivism, interpretivism and pragmatism, where there is often an overlap of paradigm world views. ©2018 RCN Publishing Company Ltd. All rights reserved. Not to be copied, transmitted or recorded in any way, in whole or part, without prior permission of the publishers.
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1992-01-01
The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
The NASA Constellation Program Procedure System
NASA Technical Reports Server (NTRS)
Phillips, Robert G.; Wang, Lui
2010-01-01
NASA has used procedures to describe activities to be performed onboard vehicles by astronaut crew and on the ground by flight controllers since Apollo. Starting with later Space Shuttle missions and the International Space Station, NASA moved forward to electronic presentation of procedures. For the Constellation Program, another large step forward is being taken - to make procedures more interactive with the vehicle and to assist the crew in controlling the vehicle more efficiently and with less error. The overall name for the project is the Constellation Procedure Applications Software System (CxPASS). This paper describes some of the history behind this effort, the key concepts and operational paradigms that the work is based upon, and the actual products being developed to implement procedures for Constellation.
A Formal Approach to Domain-Oriented Software Design Environments
NASA Technical Reports Server (NTRS)
Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper describes a formal approach to domain-oriented software design environments, based on declarative domain theories, formal specifications, and deductive program synthesis. A declarative domain theory defines the semantics of a domain-oriented specification language and its relationship to implementation-level subroutines. Formal specification development and reuse is made accessible to end-users through an intuitive graphical interface that guides them in creating diagrams denoting formal specifications. The diagrams also serve to document the specifications. Deductive program synthesis ensures that end-user specifications are correctly implemented. AMPHION has been applied to the domain of solar system kinematics through the development of a declarative domain theory, which includes an axiomatization of JPL's SPICELIB subroutine library. Testing over six months with planetary scientists indicates that AMPHION's interactive specification acquisition paradigm enables users to develop, modify, and reuse specifications at least an order of magnitude more rapidly than manual program development. Furthermore, AMPHION synthesizes one to two page programs consisting of calls to SPICELIB subroutines from these specifications in just a few minutes. Test results obtained by metering AMPHION's deductive program synthesis component are examined. AMPHION has been installed at JPL and is currently undergoing further refinement in preparation for distribution to hundreds of SPICELIB users worldwide. Current work to support end-user customization of AMPHION's specification acquisition subsystem is briefly discussed, as well as future work to enable domain-expert creation of new AMPHION applications through development of suitable domain theories.
NASA Technical Reports Server (NTRS)
Smith, Kelly M.; Gay, Robert S.; Stachowiak, Susan J.
2013-01-01
In late 2014, NASA will fly the Orion capsule on a Delta IV-Heavy rocket for the Exploration Flight Test-1 (EFT-1) mission. For EFT-1, the Orion capsule will be flying with a new GPS receiver and new navigation software. Given the experimental nature of the flight, the flight software must be robust to the loss of GPS measurements. Once the high-speed entry is complete, the drogue parachutes must be deployed within the proper conditions to stabilize the vehicle prior to deploying the main parachutes. When GPS is available in nominal operations, the vehicle will deploy the drogue parachutes based on an altitude trigger. However, when GPS is unavailable, the navigated altitude errors become excessively large, driving the need for a backup barometric altimeter to improve altitude knowledge. In order to increase overall robustness, the vehicle also has an alternate method of triggering the parachute deployment sequence based on planet-relative velocity if both the GPS and the barometric altimeter fail. However, this backup trigger results in large altitude errors relative to the targeted altitude. Motivated by this challenge, this paper demonstrates how logistic regression may be employed to semi-automatically generate robust triggers based on statistical analysis. Logistic regression is used as a ground processor pre-flight to develop a statistical classifier. The classifier would then be implemented in flight software and executed in real-time. This technique offers improved performance even in the face of highly inaccurate measurements. Although the logistic regression-based trigger approach will not be implemented within EFT-1 flight software, the methodology can be carried forward for future missions and vehicles.
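To make the approach concrete, here is a hedged, toy version of the pre-flight classifier step using scikit-learn; the feature names, simulated dispersions and decision threshold are invented for illustration and are in no way the EFT-1 trigger design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Toy Monte Carlo dispersions: planet-relative velocity (m/s) and a noisy
# navigated altitude (m) at the candidate trigger time.
velocity = rng.normal(120.0, 15.0, n)
nav_altitude = rng.normal(7500.0, 900.0, n)

# Label each sample 1 if, in the simulation's "truth", conditions were
# acceptable for drogue deploy; here crudely tied to the true altitude.
true_altitude = nav_altitude + rng.normal(0.0, 600.0, n)
label = (true_altitude > 6000.0).astype(int)

# Ground processing, pre-flight: fit the classifier on simulated data.
features = np.column_stack([velocity, nav_altitude])
clf = LogisticRegression().fit(features, label)

# Flight-software side (conceptually): evaluate the fitted decision rule
# in real time on the current measurements.
current = np.array([[118.0, 7100.0]])
deploy = clf.predict_proba(current)[0, 1] > 0.95
print("drogue deploy trigger:", bool(deploy))
```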
An Agent Inspired Reconfigurable Computing Implementation of a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Weir, John M.; Wells, B. Earl
2003-01-01
Many software systems have been successfully implemented using an agent paradigm which employs a number of independent entities that communicate with one another to achieve a common goal. The distributed nature of such a paradigm makes it an excellent candidate for use in high-speed reconfigurable computing hardware environments such as those present in modern FPGAs. In this paper, a distributed genetic algorithm that can be applied to the agent-based reconfigurable hardware model is introduced. The effectiveness of this new algorithm is evaluated by comparing the quality of the solutions found by the new algorithm with those found by traditional genetic algorithms. The performance of a reconfigurable hardware implementation of the new algorithm on an FPGA is compared to traditional single processor implementations.
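The abstract does not give the algorithm's details, but the general island (agent-like) model it builds on can be sketched in a few lines: several independent populations evolve concurrently and periodically exchange their best individuals, loosely analogous to the communicating agents mapped onto FPGA logic. The following is a generic software sketch, not the hardware implementation described in the paper:

```python
import random

GENOME_LEN, POP, ISLANDS, GENERATIONS, MIGRATE_EVERY = 40, 30, 4, 60, 10

def fitness(g):                      # OneMax: count of 1-bits (toy objective)
    return sum(g)

def evolve(pop):
    """One generation: tournament selection, one-point crossover, bit-flip mutation."""
    nxt = []
    while len(nxt) < len(pop):
        a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
        cut = random.randrange(1, GENOME_LEN)
        child = a[:cut] + b[cut:]
        child = [bit ^ (random.random() < 0.02) for bit in child]
        nxt.append(child)
    return nxt

islands = [[[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
           for _ in range(ISLANDS)]

for gen in range(1, GENERATIONS + 1):
    islands = [evolve(p) for p in islands]
    if gen % MIGRATE_EVERY == 0:               # ring migration of each island's best
        best = [max(p, key=fitness) for p in islands]
        for i, p in enumerate(islands):
            p[p.index(min(p, key=fitness))] = best[(i - 1) % ISLANDS]

print("best fitness:", max(fitness(max(p, key=fitness)) for p in islands))
```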
A vision of network-centric military communications
NASA Astrophysics Data System (ADS)
Conklin, Ross, Jr.; Burbank, Jack; Nichols, Robert, Jr.
2005-05-01
This paper presents a vision for a future capability-based military communications system that considers user requirements. Historically, the military has developed and fielded many specialized communications systems. While these systems solved immediate communications problems, they were not designed to operate with other systems. As information has become more important to the execution of war, the "stove-pipe" nature of the communications systems deployed by the military is no longer acceptable. Realizing this, the military has begun the transformation of communications to a network-centric communications paradigm. However, the specialized communications systems were developed in response to the widely varying environments related to military communications. These environments, and the necessity for effective communications within these environments, do not disappear under the network-centric paradigm. In fact, network-centric communications allows for one message to cross many of these environments by transiting multiple networks. The military would also like one communications approach that is capable of working well in multiple environments. This paper presents preliminary work on the creation of a framework that allows for a reconfigurable device that is capable of adapting to the physical and network environments. The framework returns to the Open Systems Interconnect (OSI) architecture with the addition of a standardized intra-layer control interface for control information exchange, a standardized data interface and a proposed device architecture based on the software radio.
NASA Astrophysics Data System (ADS)
Herrick, Gregory Paul
The quest to accurately capture flow phenomena with length-scales both short and long and to accurately represent complex flow phenomena within disparately sized geometry inspires a need for an efficient, high-fidelity, multi-block structured computational fluid dynamics (CFD) parallel computational scheme. This research presents and demonstrates a more efficient computational method by which to perform multi-block structured CFD parallel computational simulations, thus facilitating higher-fidelity solutions of complicated geometries (due to the inclusion of grids for "small" flow areas which are often merely modeled) and their associated flows. This computational framework offers greater flexibility and user-control in allocating the resource balance between process count and wall-clock computation time. The principal modifications implemented in this revision consist of a "multiple grid block per processing core" software infrastructure and an analytic computation of viscous flux Jacobians. The development of this scheme is largely motivated by the desire to simulate axial compressor stall inception with more complete gridding of the flow passages (including rotor tip clearance regions) than has been previously done while maintaining high computational efficiency (i.e., minimal consumption of computational resources), and thus this paradigm shall be demonstrated with an examination of instability in a transonic axial compressor. However, the paradigm presented herein facilitates CFD simulation of myriad previously impractical geometries and flows and is not limited to detailed analyses of axial compressor flows. While the simulations presented herein were technically possible under the previous structure of the subject software, they were much less computationally efficient and thus not pragmatically feasible; the previous research using this software to perform three-dimensional, full-annulus, time-accurate, unsteady, full-stage (with sliding-interface) simulations of rotating stall inception in axial compressors utilized tip clearance periodic models, while the scheme here is demonstrated by a simulation of axial compressor stall inception utilizing gridded rotor tip clearance regions. As will be discussed, much previous research (experimental, theoretical, and computational) has suggested that understanding clearance flow behavior is critical to understanding stall inception, and previous computational research efforts which have used tip clearance models have begged the question, "What about the clearance flows?". This research begins to address that question.
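The "multiple grid block per processing core" idea amounts to a load-balancing problem: pack blocks of very different sizes onto a fixed number of cores so that no core is left nearly idle holding a single small tip-clearance block. Below is a minimal sketch of one common heuristic (greedy longest-processing-time assignment); the block names and cell counts are hypothetical, and this is not the scheme used in the subject CFD code:

```python
import heapq

def assign_blocks(block_sizes, n_cores):
    """Greedy LPT assignment: largest blocks first, each to the least-loaded core.

    block_sizes: dict of block name -> cell count (proxy for computational cost).
    Returns dict core index -> list of block names.
    """
    heap = [(0, core) for core in range(n_cores)]        # (current load, core id)
    heapq.heapify(heap)
    assignment = {core: [] for core in range(n_cores)}
    for name, size in sorted(block_sizes.items(), key=lambda kv: -kv[1]):
        load, core = heapq.heappop(heap)
        assignment[core].append(name)
        heapq.heappush(heap, (load + size, core))
    return assignment

# Hypothetical mix of a few large passage blocks and many small tip-clearance blocks.
blocks = {f"passage_{i}": 400_000 for i in range(8)}
blocks.update({f"tip_gap_{i}": 25_000 for i in range(32)})
print(assign_blocks(blocks, n_cores=6))
```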
An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Lytle, John K. (Technical Monitor)
2002-01-01
Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full engine 3 Dimensional Computational Fluid Dynamic propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS Architecture including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure and its computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms which has been the focus of NASA Ames. Additional features of the object oriented architecture that support MultiDisciplinary (MD) Coupling, computer aided design (CAD) access and MD coupling objects will be discussed. Included will be a discussion of the successes, challenges and benefits of implementing this architecture.
Parallel Algorithms for Computational Models of Geophysical Systems
NASA Astrophysics Data System (ADS)
Carrillo Ledesma, A.; Herrera, I.; de la Cruz, L. M.; Hernández, G.; Grupo de Modelacion Matematica y Computacional
2013-05-01
Mathematical models of many systems of interest, including very important continuous systems of Earth Sciences and Engineering, lead to a great variety of partial differential equations (PDEs) whose solution methods are based on the computational processing of large-scale algebraic systems. Furthermore, the incredible expansion experienced by the existing computational hardware and software has made amenable to effective treatment problems of an ever increasing diversity and complexity, posed by scientific and engineering applications. Parallel computing is outstanding among the new computational tools and, in order to effectively use the most advanced computers available today, massively parallel software is required. Domain decomposition methods (DDMs) have been developed precisely for effectively treating PDEs in parallel. Ideally, the main objective of domain decomposition research is to produce algorithms capable of 'obtaining the global solution by exclusively solving local problems', but up to now this has only been an aspiration; that is, a strong desire for achieving such a property, and so we call it 'the DDM-paradigm'. In recent times, numerically competitive DDM-algorithms are non-overlapping, preconditioned and necessarily incorporate constraints, which pose an additional challenge for achieving the DDM-paradigm. Recently a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm, was developed. To derive them a new discretization method, which uses a non-overlapping system of nodes (the derived-nodes), was introduced. This discretization procedure can be applied to any boundary-value problem, or system of such equations. In turn, the resulting system of discrete equations can be treated using any available DDM-algorithm. In particular, two of the four DVS-algorithms mentioned above were obtained by application of the well-known and very effective algorithms BDDC and FETI-DP; these will be referred to as the DVS-BDDC and DVS-FETI-DP algorithms. The other two, which will be referred to as the DVS-PRIMAL and DVS-DUAL algorithms, were obtained by application of two new algorithms that had not been previously reported in the literature. As said before, the four DVS-algorithms constitute a group of preconditioned and constrained algorithms that, for the first time, fulfill the DDM-paradigm. Both BDDC and FETI-DP are very well known, and both are highly efficient. Recently, it was established that these two methods are closely related and their numerical performance is quite similar. On the other hand, through numerical experiments, we have established that the numerical performances of the members of the DVS-algorithms group (DVS-BDDC, DVS-FETI-DP, DVS-PRIMAL and DVS-DUAL) are very similar too. Furthermore, we have carried out comparisons of the performances of the standard versions of BDDC and FETI-DP with DVS-BDDC and DVS-FETI-DP, and in all such numerical experiments the DVS algorithms have performed significantly better.
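The DVS, BDDC and FETI-DP algorithms themselves are far beyond a short snippet, but the underlying DDM idea of 'obtaining the global solution by exclusively solving local problems' can be illustrated with the classical alternating Schwarz iteration on a 1-D Poisson problem. This is only a generic overlapping-DDM sketch, not any of the algorithms discussed in the abstract:

```python
import numpy as np

# -u'' = 1 on (0,1), u(0)=u(1)=0; exact solution u(x) = x(1-x)/2.
N = 101                          # global grid points including boundaries
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)
u = np.zeros(N)                  # global iterate (boundary values stay 0)

# Two overlapping index ranges of interior unknowns.
left = list(range(1, 65))        # unknowns 1..64, Dirichlet data taken from node 65
right = list(range(36, N - 1))   # unknowns 36..99, Dirichlet data taken from node 35

def local_solve(u, idx):
    """Solve the local Dirichlet problem on idx, using the current u as boundary data."""
    m = len(idx)
    A = (np.diag(np.full(m, 2.0)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[idx].astype(float)
    b[0] += u[idx[0] - 1] / h**2          # Dirichlet data from the neighbouring subdomain
    b[-1] += u[idx[-1] + 1] / h**2
    u[idx] = np.linalg.solve(A, b)

for sweep in range(30):                   # alternating Schwarz sweeps
    local_solve(u, left)
    local_solve(u, right)

print("max error vs exact:", np.abs(u - x * (1 - x) / 2).max())
```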
INTEGRITY -- Integrated Human Exploration Mission Simulation Facility
NASA Astrophysics Data System (ADS)
Henninger, D.; Tri, T.; Daues, K.
It is proposed to develop a high-fidelity ground facility to carry out long-duration human exploration mission simulations. These would not be merely computer simulations - they would in fact comprise a series of actual missions that just happen to stay on earth. These missions would include all elements of an actual mission, using actual technologies that would be used for the real mission. These missions would also include such elements as extravehicular activities, robotic systems, telepresence and teleoperation, surface drilling technology--all using a simulated planetary landscape. A sequence of missions would be defined that get progressively longer and more robust, perhaps a series of five or six missions over a span of 10 to 15 years ranging in duration from 180 days up to 1000 days. This high-fidelity ground facility would operate hand-in-hand with a host of other terrestrial analog sites such as the Antarctic, Haughton Crater, and the Arizona desert. Of course, all of these analog mission simulations will be conducted here on earth in 1-g, and NASA will still need the Shuttle and ISS to carry out all the microgravity and hypogravity science experiments and technology validations. The proposed missions would have sufficient definition such that definitive requirements could be derived from them to serve as direction for all the program elements of the mission. Additionally, specific milestones would be established for the "launch" date of each mission so that R&D programs would have both good requirements and solid milestones from which to build their implementation plans. Mission aspects that could not be directly incorporated into the ground facility would be simulated via software. New management techniques would be developed for evaluation in this ground test facility program. These new techniques would have embedded metrics which would allow them to be continuously evaluated and adjusted so that by the time the sequence of missions is completed, the best management techniques will have been developed, implemented, and validated. A trained cadre of managers experienced with a large, complex program would then be available. Three other critical items of this approach are as follows: 1) International Cooperation/Collaboration. New paradigms and new techniques for international collaboration would be developed. These paradigms can be developed to include built-in metrics to allow for improvements ultimately to yield proven paradigms for application in the real mission. Note that since this approach is much lower cost than an actual flight mission, smaller countries that could not afford to participate in a program as large as the ISS can become partners. As a result, these nations--along with their citizens--become advocates for human space exploration as well. Since eventual human planetary exploration missions are likely to be truly international, the means for building the requisite working relationships are through cooperative research and technology development activities. 2) Commercial Partnering. Improved paradigms for commercial partnering would be developed - both U.S. and international commercial entities. An examination of what commercial entities would like to gain, what they would expect to contribute, and what NASA wants out of such a relationship would be determined to develop appropriate paradigms. Again, metrics would be included such that continual evaluations can be conducted and adjustments can be made to the working paradigms.
Then, after these ground missions are completed, a proven set of paradigms (and a cadre of people trained and comfortable with their use) would be available for the actual mission. Again, since this is a much lower cost program (lower than an actual flight mission), smaller domestic and international commercial entities can participate. 3) Academic Partnering. Improved paradigms for academic partnering can be developed -- both U.S. and international academic institutions. Academic institutions represent a tremendous pool of expertise and creative talent - just what is needed for a human planetary exploration mission. Academia would likely view this ground test facility as a tremendous teaching tool for a variety of disciplines, including science, engineering, medicine, and management.
NASA Astrophysics Data System (ADS)
Sewell, Stephen
This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
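The particle-sorting idea mentioned above can be shown with a plain NumPy sketch of the particle-to-grid (charge deposition) phase of a 1-D PIC step: sorting particles by cell index makes the grid accesses of neighbouring particles contiguous, which is what improves memory behaviour on a GPU. This is a CPU-side illustration, not the CUDA implementation described in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_part = 64, 100_000
dx = 1.0 / n_cells
x = rng.random(n_part)                    # particle positions in [0, 1)
q = np.full(n_part, 1.0 / n_part)         # normalized particle charge

# Sort particles by cell index so that particles touching the same grid nodes
# are contiguous in memory; on a GPU this improves memory coalescing.
cell = np.minimum((x / dx).astype(int), n_cells - 1)
order = np.argsort(cell, kind="stable")
x, q, cell = x[order], q[order], cell[order]

# Cloud-in-cell (linear) charge deposition onto a periodic grid.
frac = x / dx - cell                      # fractional position within the cell
rho = np.zeros(n_cells)
np.add.at(rho, cell, q * (1.0 - frac))            # left-node weight
np.add.at(rho, (cell + 1) % n_cells, q * frac)    # right-node weight
rho /= dx

print("total deposited charge:", rho.sum() * dx)  # should equal sum(q) = 1.0
```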
NASA Technical Reports Server (NTRS)
Lopez, Antonio M., Jr.
1993-01-01
Development of an Intelligent Information System (IIS) involves application of numerous artificial intelligence (AI) paradigms and advanced technologies. The National Aeronautics and Space Administration (NASA) is interested in an IIS that can automatically collect, classify, store and retrieve data, as well as develop, manipulate and restructure knowledge regarding the data and its application (Campbell et al., 1987, p.3). This interest stems in part from a NASA initiative in support of the interagency Global Change Research program. NASA's space data problems are so large and varied that scientific researchers will find it almost impossible to access the most suitable information from a software system if meta-information (metadata and meta-knowledge) is not embedded in that system. Even if more, faster, larger hardware is used, new innovative software systems will be required to organize, link, maintain, and properly archive the Earth Observing System (EOS) data that is to be stored and distributed by the EOS Data and Information System (EOSDIS) (Dozier, 1990). Although efforts are being made to specify the metadata that will be used in EOSDIS, meta-knowledge specification issues are not clear. With the expectation that EOSDIS might evolve into an IIS, this paper presents certain ideas on the concept of meta-knowledge and demonstrates how meta-knowledge might be represented in a pixel classification problem.
IntellWheels: modular development platform for intelligent wheelchairs.
Braga, Rodrigo Antonio Marques; Petry, Marcelo; Reis, Luis Paulo; Moreira, António Paulo
2011-01-01
Intelligent wheelchairs (IWs) can become an important solution to the challenge of assisting individuals who have disabilities and are thus unable to perform their daily activities using classic powered wheelchairs. This article describes the concept and design of IntellWheels, a modular platform to facilitate the development of IWs through a multiagent system paradigm. In fact, modularity is achieved not only in the software perspective, but also through a generic hardware framework that was designed to fit, in a straightforward manner, almost any commercial powered wheelchair. Experimental results demonstrate the successful integration of all modules in the platform, providing safe motion to the IW. Furthermore, the results achieved with a prototype running in autonomous mode in simulated and mixed-reality environments also demonstrate the potential of our approach. Although some future research is still necessary to fully accomplish our objectives, preliminary tests have shown that IntellWheels will effectively reduce users' limitations, offering them a much more independent life.
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
A composite computational model of liver glucose homeostasis. I. Building the composite model.
Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A
2012-04-07
A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
Web-based training: a new paradigm in computer-assisted instruction in medicine.
Haag, M; Maylein, L; Leven, F J; Tönshoff, B; Haux, R
1999-01-01
Computer-assisted instruction (CAI) programs based on internet technologies, especially on the world wide web (WWW), provide new opportunities in medical education. The aim of this paper is to examine different aspects of such programs, which we call 'web-based training (WBT) programs', and to differentiate them from conventional CAI programs. First, we will distinguish five different interaction types: presentation; browsing; tutorial dialogue; drill and practice; and simulation. In contrast to conventional CAI, there are four architectural types of WBT programs: client-based; remote data and knowledge; distributed teaching; and server-based. We will discuss the implications of the different architectures for developing WBT software. WBT programs have to meet other requirements than conventional CAI programs. The most important tools and programming languages for developing WBT programs will be listed and assigned to the architecture types. For the future, we expect a trend from conventional CAI towards WBT programs.
A computer architecture for intelligent machines
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, G. N.
1991-01-01
The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.
The ADEPT Framework for Intelligent Autonomy
NASA Technical Reports Server (NTRS)
Ricard, Michael; Kolitz, Stephan
2003-01-01
This paper describes the design and implementation of Draper Laboratory's All-Domain Execution and Planning Technology (ADEPT) architecture for intelligent autonomy. Intelligent autonomy is the ability to plan and execute complex activities in a manner that provides rapid, effective response to stochastic and dynamic mission events. Thus, intelligent autonomy enables the high-level reasoning and adaptive behavior for an unmanned vehicle that is provided by an operator in man-in-the-loop systems. Draper's intelligent autonomy architecture has evolved over a decade and a half, beginning in the mid-1980s and culminating in an operational experiment funded under DARPA's Autonomous Minehunting and Mapping Technologies (AMMT) unmanned undersea vehicle program. ADEPT continues to be refined through its application to current programs that involve air vehicles, satellites and higher-level planning used to direct multiple vehicles. The objective of ADEPT is to solidify a proven, dependable software approach that can be quickly applied to new vehicles and domains. The architecture can be viewed as a hierarchical extension of the sense-think-act paradigm of intelligence and has strong parallels with the military's Observe-Orient-Decide-Act (OODA) loop. The key elements of the architecture are planning and decision-making nodes comprising modules for situation assessment, plan generation, plan implementation and coordination. A reusable, object-oriented software framework has been developed that implements these functions. As the architecture is applied to new areas, only the application-specific software needs to be developed. This paper describes the core architecture in detail and discusses how this has been applied in the undersea, air, ground and space domains.
Cross-Paradigm Simulation Modeling: Challenges and Successes
2011-12-01
is also highlighted. 2.1 Discrete-Event Simulation Discrete-event simulation (DES) is a modeling method for stochastic, dynamic models where...which almost anything can be coded; models can be incredibly detailed. Most commercial DES software has a graphical interface which allows the user to...results. Although the above definition is the commonly accepted definition of DES, there are two different worldviews that dominate DES modeling today: a
Assisting Design Given Multiple Performance Criteria
1988-08-01
with uninstantiated operators is created then each operator's implementation is selected. Keywords: computer-aided design, artificial...IEEE Transactions on Software Engineering, SE-7(1), 1981. [BG86] Forrest D. Brewer and Daniel D. Gajski. An expert-system paradigm for design. In...Teukolsky, and William T. Vetterling. Numerical Recipes. Cambridge University Press, Cambridge, England, 1987. [RFS83] G. G. Rassweiler, M. D
Assessing Requirements Quality through Requirements Coverage
NASA Technical Reports Server (NTRS)
Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt
2008-01-01
In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
Biomedical and development paradigms in AIDS prevention.
Wolffers, I.
2000-01-01
In the fight against the HIV/AIDS pandemic different approaches can be distinguished, reflecting professional backgrounds, world views and political interests. One important distinction is between the biomedical and the development paradigms. The biomedical paradigm is characterized by individualization and the concept of "risk". This again is related to the concept of the market, where health is a product of services and progress a series of new discoveries that can be marketed. The development paradigm is characterized by participation of the different stakeholders and by community work. The concept "vulnerability" is important in the development paradigm and emphasis is placed on efforts to decrease this vulnerability in a variety of sustainable ways. Biomedical technology is definitely one of the tools in these efforts. In the beginning of the pandemic the biomedical approach was important for the discovery of the virus and understanding its epidemiology. Later, stakeholders became involved. In light of the absence of treatment or vaccines, the development paradigm became more important and the two approaches were more in balance. However, since the reports about effective treatment of AIDS and hope of development of vaccines, the biomedical paradigm has become a leading principle in many HIV/AIDS prevention programmes. There is a need for a better balance between the two paradigms. Especially in developing countries, where it is not realistic to think that sustainable biomedical interventions can be organized on a short-term basis, it would be counterproductive to base our efforts to deal with HIV/AIDS exclusively on the biomedical approach. PMID:10743300
Performance of the Heavy Flavor Tracker (HFT) detector in the STAR experiment at RHIC
NASA Astrophysics Data System (ADS)
Alruwaili, Manal
With the growing technology, the number of processors is becoming massive. Current supercomputer processing will be available on desktops in the next decade. For mass-scale application software development on the massive parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massive parallel computing and distributed memory models while retaining the user-friendliness. Currently available object-oriented languages for massive parallel computing such as Chapel, X10 and UPC++ exploit distributed computing, data-parallel computing and thread-parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not incorporate: 1) any extension for object distribution to exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; and 3) the programming paradigms that result from the integration of data- and thread-level parallelism and object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution and object migration; and integrate PGAS-based process constructs with these extensions on distributed objects, object cloning and object migration. Also, a new paradigm MIDD (Multiple Invocation Distributed Data) is presented in which different copies of the same class can be invoked, and work on different elements of a distributed data structure concurrently using remote method invocations. I present new constructs, their grammar and their behavior. The new constructs have been explained using simple programs utilizing these constructs.
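The proposed constructs are language extensions and cannot be reproduced here, but the MIDD idea (the same class invoked at several "places", each copy working on its own chunk of distributed data) can be approximated in ordinary Python with process-level parallelism. The class and chunking below are invented for the illustration:

```python
from multiprocessing import Pool

class Histogrammer:
    """One copy of this class runs at each 'place' on its local chunk of the data."""
    def __init__(self, chunk):
        self.chunk = chunk
    def invoke(self):                     # same method invoked at every place (MIDD style)
        counts = {}
        for v in self.chunk:
            counts[v] = counts.get(v, 0) + 1
        return counts

def run_place(chunk):                     # worker executed in a separate process
    return Histogrammer(chunk).invoke()

if __name__ == "__main__":
    data = [i % 7 for i in range(100_000)]
    n_places = 4
    chunks = [data[i::n_places] for i in range(n_places)]    # distribute the data
    with Pool(n_places) as pool:
        partials = pool.map(run_place, chunks)                # concurrent invocations
    total = {}
    for part in partials:                                     # reduce the partial results
        for k, v in part.items():
            total[k] = total.get(k, 0) + v
    print(total)
```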
Reliable and Fault-Tolerant Software-Defined Network Operations Scheme for Remote 3D Printing
NASA Astrophysics Data System (ADS)
Kim, Dongkyun; Gil, Joon-Min
2015-03-01
The recent wide expansion of applicable three-dimensional (3D) printing and software-defined networking (SDN) technologies has led to a great deal of attention being focused on efficient remote control of manufacturing processes. SDN is a renowned paradigm for network softwarization, which has helped facilitate remote manufacturing in association with high network performance, since SDN is designed to control network paths and traffic flows, guaranteeing improved quality of services by obtaining network requests from end-applications on demand through the separated SDN controller or control plane. However, current SDN approaches are generally focused on the controls and automation of the networks, which indicates that there is a lack of management plane development designed for a reliable and fault-tolerant SDN environment. Therefore, in addition to the inherent advantage of SDN, this paper proposes a new software-defined network operations center (SD-NOC) architecture to strengthen the reliability and fault-tolerance of SDN in terms of network operations and management in particular. The cooperation and orchestration between SDN and SD-NOC are also introduced for the SDN failover processes based on four principal SDN breakdown scenarios derived from the failures of the controller, SDN nodes, and connected links. The abovementioned SDN troubles significantly reduce the network reachability to remote devices (e.g., 3D printers, super high-definition cameras, etc.) and the reliability of relevant control processes. Our performance consideration and analysis results show that the proposed scheme can shrink operations and management overheads of SDN, which leads to the enhancement of responsiveness and reliability of SDN for remote 3D printing and control processes.
Dorst, J; Haag, A; Knake, S; Oertel, W H; Hamer, H M; Rosenow, F
2008-10-01
Functional transcranial Doppler sonography (fTCD) during word generation is well established for language lateralization. In this study, we evaluated a fTCD paradigm to reliably identify the non-dominant hemisphere. Twenty-nine right-handed healthy subjects (27.1 ± 7.6 years) performed the 'cube perspective test' [Stumpf, H., & Fay, E. (1983). Schlauchfiguren: Ein Test zur Beurteilung des räumlichen Vorstellungsvermögens. Verlag für Psychologie Dr. C. J. Hogrefe, Göttingen, Toronto, Zürich], a spatial orientation task, while the cerebral blood flow velocity (CBFV) was simultaneously measured in both middle cerebral arteries (MCAs). In addition, the established word generation paradigm for language lateralization was performed. Subjects with atypical language representation were excluded. Data were analysed offline with the software Average, which performed a heart-cycle integration and a baseline correction and calculated a lateralization index (LI) with its standard error of the mean increase in CBFV separately for both MCAs. Twenty-one of 29 subjects (72.4%) lateralized to the right hemisphere (χ2 = 5.828, p = 0.016). The mean LI of the spatial orientation paradigm pointed to the right hemisphere (mean = -1.9 ± 3.2) and was different from the LI of word generation (mean = 3.9 ± 2.2; p < 0.001). There was no correlation between the LI of spatial orientation and word generation (R = 0.095, p = 0.624). Age of the subjects did not correlate with the LI during spatial orientation (p > 0.05) but negatively with the LI during word generation (R = -0.468, p = 0.010). The maximum increase of CBFV was greater in the spatial orientation (14.0% ± 3.6%) than in the word generation paradigm (9.4% ± 4.0%; p < 0.001). In more than two thirds of the subjects with left-sided language dominance, the spatial orientation paradigm was able to identify the non-dominant hemisphere. The results suggest both paradigms to be independent of each other. The spatial orientation paradigm, therefore, appears to be a non-verbal fTCD paradigm with possible clinical relevance.
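The exact processing performed by the Average software is not specified beyond heart-cycle integration, baseline correction and LI computation, but a simplified lateralization index of the kind described (difference of mean relative CBFV increases, with its standard error) might be computed as follows; the sampling windows and sign convention are assumptions of this sketch:

```python
import numpy as np

def lateralization_index(cbfv_left, cbfv_right, baseline, window):
    """Simplified LI: mean relative CBFV increase (percent), left minus right.

    cbfv_left / cbfv_right: 2-D arrays (trials x samples) of velocity in each MCA.
    baseline, window: sample slices for the pre-stimulus baseline and task interval.
    Returns (LI, standard error of LI).
    """
    def rel_increase(v):
        base = v[:, baseline].mean(axis=1)
        return 100.0 * (v[:, window].mean(axis=1) - base) / base
    d = rel_increase(cbfv_left) - rel_increase(cbfv_right)    # per-trial lateralization
    return d.mean(), d.std(ddof=1) / np.sqrt(len(d))

# Toy example: 20 trials, 500 samples, right-MCA increase slightly larger (spatial task).
rng = np.random.default_rng(2)
left = 50 + rng.normal(0, 1, (20, 500));  left[:, 300:] += 4.0
right = 50 + rng.normal(0, 1, (20, 500)); right[:, 300:] += 6.0
li, sem = lateralization_index(left, right, slice(0, 200), slice(300, 500))
print(f"LI = {li:.2f} +/- {sem:.2f}  (negative -> right-hemisphere lateralization)")
```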
Fernandes, J P; Freire, M; Guiomar, N; Gil, A
2017-03-15
The present study deals with the development of systematic conservation planning as a management instrument in small oceanic islands, ensuring open systems of governance and able to integrate an informed and involved participation of the stakeholders. Marxan software was used to define management areas according to a set of alternative land use scenarios considering different conservation and management paradigms. Modeled conservation zones were interpreted and compared with the existing protected areas, providing better-integrated information for future trade-offs and stakeholder involvement. The results, allowing the identification of Target Management Units (TMUs) based on the consideration of different development scenarios, proved to be consistent with a feasible development of evaluation approaches able to support sound governance systems. Moreover, the detailed geographic identification of TMUs seems to be able to support participatory policies towards a more sustainable management of the entire island. Copyright © 2016 Elsevier Ltd. All rights reserved.
Designing normative open virtual enterprises
NASA Astrophysics Data System (ADS)
Garcia, Emilia; Giret, Adriana; Botti, Vicente
2016-03-01
There is an increasing interest in developing virtual enterprises in order to deal with the globalisation of the economy, the rapid growth of information technologies and the increase of competitiveness. In this paper we deal with the development of normative open virtual enterprises (NOVEs). They are systems with a global objective that are composed of a set of heterogeneous entities and enterprises that exchange services following a specific normative context. In order to analyse and design systems of this kind the multi-agent paradigm seems suitable because it offers a specific solution for supporting the social and contractual relationships between enterprises and for formalising their business processes. This paper presents how the Regulated Open Multi-agent systems (ROMAS) methodology, an agent-oriented software methodology, can be used to analyse and design NOVEs. ROMAS offers a complete development process that allows the identification and formalisation of the structure of NOVEs, their normative context and the interactions among their members. The use of ROMAS is exemplified by means of a case study that represents an automotive supply chain.
Enabling Flexible and Continuous Capability Invocation in Mobile Prosumer Environments
Alcarria, Ramon; Robles, Tomas; Morales, Augusto; López-de-Ipiña, Diego; Aguilera, Unai
2012-01-01
Mobile prosumer environments require the communication with heterogeneous devices during the execution of mobile services. These environments integrate sensors, actuators and smart devices, whose availability continuously changes. The aim of this paper is to design a reference architecture for implementing a model for continuous service execution and access to capabilities, i.e., the functionalities provided by these devices. The defined architecture follows a set of software engineering patterns and includes some communication paradigms to cope with the heterogeneity of sensors, actuators, controllers and other devices in the environment. In addition, we stress the importance of the flexibility in capability invocation by allowing the communication middleware to select the access technology and change the communication paradigm when dealing with smart devices, and by describing and evaluating two algorithms for resource access management. PMID:23012526
Integrating MPI and deduplication engines: a software architecture roadmap.
Baksi, Dibyendu
2009-03-01
The objective of this paper is to clarify the major concepts related to architecture and design of patient identity management software systems so that an implementor looking to solve a specific integration problem in the context of a Master Patient Index (MPI) and a deduplication engine can address the relevant issues. The ideas presented are illustrated in the context of a reference use case from Integrating the Health Enterprise Patient Identifier Cross-referencing (IHE PIX) profile. Sound software engineering principles using the latest design paradigm of model driven architecture (MDA) are applied to define different views of the architecture. The main contribution of the paper is a clear software architecture roadmap for implementors of patient identity management systems. Conceptual design in terms of static and dynamic views of the interfaces is provided as an example of platform independent model. This makes the roadmap applicable to any specific solutions of MPI, deduplication library or software platform. Stakeholders in need of integration of MPIs and deduplication engines can evaluate vendor specific solutions and software platform technologies in terms of fundamental concepts and can make informed decisions that preserve investment. This also allows freedom from vendor lock-in and the ability to kick-start integration efforts based on a solid architecture.
Verheggen, Kenneth; Raeder, Helge; Berven, Frode S; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
2017-09-13
Sequence database search engines are bioinformatics algorithms that identify peptides from tandem mass spectra using a reference protein sequence database. Two decades of development, notably driven by advances in mass spectrometry, have provided scientists with more than 30 published search engines, each with its own properties. In this review, we present the common paradigm behind the different implementations, and its limitations for modern mass spectrometry datasets. We also detail how the search engines attempt to alleviate these limitations, and provide an overview of the different software frameworks available to the researcher. Finally, we highlight alternative approaches for the identification of proteomic mass spectrometry datasets, either as a replacement for, or as a complement to, sequence database search engines. © 2017 Wiley Periodicals, Inc.
Run control techniques for the Fermilab DART data acquisition system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oleynik, G.; Engelfried, J.; Mengel, L.
1995-10-01
DART is the high-speed, Unix-based data acquisition system being developed by the Fermilab Computing Division in collaboration with eight High Energy Physics Experiments. This paper describes DART run-control, which implements flexible, distributed, extensible and portable paradigms for the control and monitoring of data acquisition systems. We discuss the unique and interesting aspects of the run-control - why we chose the concepts we did, the benefits we have seen from the choices we made, as well as our experiences in deploying and supporting it for experiments during their commissioning and sub-system testing phases. We emphasize the software and techniques we believe are extensible to future use, and potential future modifications and extensions for those we feel are not.
Understanding paradigms used for nursing research.
Weaver, Kathryn; Olson, Joanne K
2006-02-01
The aims of this paper are to add clarity to the discussion about paradigms for nursing research and to consider integrative strategies for the development of nursing knowledge. Paradigms are sets of beliefs and practices, shared by communities of researchers, which regulate inquiry within disciplines. The various paradigms are characterized by ontological, epistemological and methodological differences in their approaches to conceptualizing and conducting research, and in their contribution towards disciplinary knowledge construction. Researchers may consider these differences so vast that one paradigm is incommensurable with another. Alternatively, researchers may ignore these differences and either unknowingly combine paradigms inappropriately or neglect to conduct needed research. To accomplish the task of developing nursing knowledge for use in practice, there is a need for a critical, integrated understanding of the paradigms used for nursing inquiry. We describe the evolution and influence of positivist, postpositivist, interpretive and critical theory research paradigms. Using integrative review, we compare and contrast the paradigms in terms of their philosophical underpinnings and scientific contribution. A pragmatic approach to theory development through synthesis of cumulative knowledge relevant to nursing practice is suggested. This requires that inquiry start with assessment of existing knowledge from disparate studies to identify key substantive content and gaps. Knowledge development in under-researched areas could be accomplished through integrative strategies that preserve theoretical integrity and strengthen research approaches associated with various philosophical perspectives. These strategies may include parallel studies within the same substantive domain using different paradigms; theoretical triangulation to combine findings from paradigmatically diverse studies; integrative reviews; and mixed method studies. Nurse scholars are urged to consider the benefits and limitations of inquiry within each paradigm, and the theoretical needs of the discipline.
Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R
2012-01-01
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.
Building Software Agents for Planning, Monitoring, and Optimizing Travel
2004-01-01
defined as plans in the Theseus Agent Execution language (Barish et al. 2002). In the Web environment, sources can be quite slow and the latencies of...executor is based on a dataflow paradigm, actions are executed as soon as the data becomes available. Second, Theseus performs the actions in a...while Theseus provides an expressive language for defining information gathering and monitoring plans. The Theseus language supports capabilities
In-Storage Embedded Accelerator for Sparse Pattern Processing
2016-08-13
performance of RAM disk. Since this configuration offloads most of processing onto the FPGA, the host software consists of only two threads for...more. [Figure 13: Documents Processed vs. CPU Threads] Note that BlueDBM efficiency comes from our in-store processing paradigm that uses the FPGA... (Sang-Woo Jun, Huy T. Nguyen, Vijay Gadepally, and Arvind, MIT Lincoln Laboratory)
Is the workflow model a suitable candidate for an observatory supervisory control infrastructure?
NASA Astrophysics Data System (ADS)
Daly, Philip N.; Schumacher, Germán.
2016-08-01
This paper reports on the early investigation of using the workflow model for observatory infrastructure software. We researched several workflow engines and identified three for further detailed study: Bonita BPM, Activiti and Taverna. We discuss the business process model and how it relates to observatory operations and identify a path finder exercise to further evaluate the applicability of these paradigms.
Application of bayesian networks to real-time flood risk estimation
NASA Astrophysics Data System (ADS)
Garrote, L.; Molina, M.; Blasco, G.
2003-04-01
This paper presents the application of a computational paradigm taken from the field of artificial intelligence - the bayesian network - to model the behaviour of hydrologic basins during floods. The final goal of this research is to develop representation techniques for hydrologic simulation models in order to define, develop and validate a mechanism, supported by a software environment, oriented to build decision models for the prediction and management of river floods in real time. The emphasis is placed on providing decision makers with tools to incorporate their knowledge of basin behaviour, usually formulated in terms of rainfall-runoff models, in the process of real-time decision making during floods. A rainfall-runoff model is only a step in the process of decision making. If a reliable rainfall forecast is available and the rainfall-runoff model is well calibrated, decisions can be based mainly on model results. However, in most practical situations, uncertainties in rainfall forecasts or model performance have to be incorporated in the decision process. The computational paradigm adopted for the simulation of hydrologic processes is the bayesian network. A bayesian network is a directed acyclic graph that represents causal influences between linked variables. Under this representation, uncertain qualitative variables are related through causal relations quantified with conditional probabilities. The solution algorithm allows the computation of the expected probability distribution of unknown variables conditioned on the observations. An approach to represent hydrologic processes by bayesian networks with temporal and spatial extensions is presented in this paper, together with a methodology for the development of bayesian models using results produced by deterministic hydrologic simulation models.
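A minimal sketch of the kind of inference a bayesian network supports is given below: a three-node rainfall-runoff-flood chain with arbitrary illustrative conditional probability tables, evaluated by brute-force enumeration. This toy model is not the network developed in the paper:

```python
from itertools import product

# Tiny illustrative network: Rain -> Runoff -> Flood, with CPTs chosen arbitrarily.
p_rain = {"heavy": 0.3, "light": 0.7}
p_runoff = {  # P(runoff | rain)
    ("high", "heavy"): 0.8, ("low", "heavy"): 0.2,
    ("high", "light"): 0.1, ("low", "light"): 0.9,
}
p_flood = {  # P(flood | runoff)
    (True, "high"): 0.6, (False, "high"): 0.4,
    (True, "low"): 0.05, (False, "low"): 0.95,
}

def joint(rain, runoff, flood):
    return p_rain[rain] * p_runoff[(runoff, rain)] * p_flood[(flood, runoff)]

def prob_flood_given(evidence):
    """P(flood=True | evidence) by enumerating all consistent assignments."""
    num = den = 0.0
    for rain, runoff, flood in product(p_rain, ["high", "low"], [True, False]):
        assignment = {"rain": rain, "runoff": runoff, "flood": flood}
        if any(assignment[k] != v for k, v in evidence.items()):
            continue
        p = joint(rain, runoff, flood)
        den += p
        if flood:
            num += p
    return num / den

print(prob_flood_given({}))                      # prior flood risk
print(prob_flood_given({"rain": "heavy"}))       # risk given an observed heavy-rain forecast
```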
[Nursing knowledge: the evolution of scientific philosophies and paradigm trends].
Hung, Hsuan-Man; Wang, Hui-Ling; Chang, Yun-Hsuan; Chen, Chung-Hey
2010-02-01
Different aspects of philosophy are derived from different paradigms that contain various main points, some of which are repeated or overlap. Belief and practice are two components of a paradigm that provide perspective and framework and lead to nursing research. Changes in healthcare have popularized empirical and evidence-based research in the field of nursing research. However, the evidence-based study approach has given rise to a certain level of debate. Until now, no standard paradigm has been established for the nursing field, as different professionals use different paradigms in their studies. This presents certain limitations as well as advantages. The quantitative aspects of a nursing paradigm were developed by Peplau and Henderson (1950) and Orem (1980). These remained the standard until 1990, when Guba and Parse proposed qualitative viewpoints in contextual features. Therefore, the nursing paradigm has made great contributions to the development of knowledge in nursing care, although debate continues due to incomplete knowledge attributable to the presentation of knowledge and insight within individually developed paradigms. It is better to apply multiple paradigms to different research questions. It is suggested that better communication amongst experts regarding their individual points of view would help nursing members to integrate findings within the global pool of knowledge and allow replication over multiple studies.
NASA Technical Reports Server (NTRS)
Jaap, John; Meyer, Patrick; Davis, Elizabeth
1997-01-01
The experiments planned for the International Space Station promise to be complex, lengthy and diverse. The scarcity of the space station resources will cause significant competition for resources between experiments. The scheduling job facing the Space Station mission planning software requires a concise and comprehensive description of the experiments' requirements (to ensure a valid schedule) and a good description of the experiments' flexibility (to effectively utilize available resources). In addition, the continuous operation of the station, the wide geographic dispersion of station users, and the budgetary pressure to reduce operations manpower make a low-cost solution mandatory. A graphical representation of the scheduling requirements for station payloads implemented via an Internet-based application promises to be an elegant solution that addresses all of these issues. The graphical representation of experiment requirements permits a station user to describe his experiment by defining "activities" and "sequences of activities". Activities define the resource requirements (with alternatives) and other quantitative constraints of tasks to be performed. Activity definitions use an "outline" graphics paradigm. Sequences define the time relationships between activities. Sequences may also define time relationships with activities of other payloads or space station systems. Sequences of activities are described by a "network" graphics paradigm. The bulk of this paper will describe the graphical approach to representing requirements and provide examples that show the ease and clarity with which complex requirements can be represented. A Java applet, to run in a web browser, is being developed to support the graphical representation of payload scheduling requirements. Implementing the entry and editing of requirements via the web solves the problems introduced by the geographic dispersion of users. Reducing manpower is accomplished by developing a concise representation which eliminates the misunderstanding possible with verbose representations and which captures the complete requirements and flexibility of the experiments.
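A plain data-model sketch of the activity/sequence representation described above might look like the following; the field names and the example payload are hypothetical, and the actual applet's schema is not reproduced here:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ResourceRequirement:
    resource: str                 # e.g., "crew_time" or "power_W" (hypothetical names)
    amount: float
    alternatives: List[str] = field(default_factory=list)   # acceptable substitutes

@dataclass
class Activity:
    name: str
    duration_min: float
    requirements: List[ResourceRequirement] = field(default_factory=list)

@dataclass
class TemporalRelation:
    predecessor: str
    successor: str
    min_gap_min: float = 0.0             # earliest start of successor after predecessor ends
    max_gap_min: Optional[float] = None  # None means unconstrained

@dataclass
class Sequence:
    name: str
    activities: List[Activity]
    relations: List[TemporalRelation]

# Hypothetical payload: a sample run that must follow a warm-up within 30 minutes.
warmup = Activity("furnace_warmup", 45, [ResourceRequirement("power_W", 300)])
run = Activity("sample_run", 120, [ResourceRequirement("power_W", 450),
                                   ResourceRequirement("crew_time", 15)])
seq = Sequence("materials_experiment", [warmup, run],
               [TemporalRelation("furnace_warmup", "sample_run", 0, 30)])
print(seq)
```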
The Development of Genetics in the Light of Thomas Kuhn's Theory of Scientific Revolutions.
Portin, Petter
2015-01-01
The concept of a paradigm holds the key position in Thomas Kuhn's theory of scientific revolutions. A paradigm is the framework within which the results, concepts, hypotheses and theories of scientific research work are understood. According to Kuhn, a paradigm guides the working and efforts of scientists during the time period which he calls the period of normal science. Before long, however, normal science leads to unexplained matters, a situation that then leads the development of the scientific discipline in question to a paradigm shift--a scientific revolution. When a new theory is born, it has either gradually emerged as an extension of the past theory, or the old theory has become a borderline case in the new theory. In the former case, one can speak of a paradigm extension. According to the present author, the development of modern genetics has, until very recent years, been guided by a single paradigm, the Mendelian paradigm which Gregor Mendel launched 150 years ago, and under the guidance of this paradigm the development of genetics has proceeded in a normal fashion in the spirit of logical positivism. Modern discoveries in genetics have, however, created a situation which seems to be leading toward a paradigm shift. The most significant of these discoveries are the findings of adaptive mutations, the phenomenon of transgenerational epigenetic inheritance, and, above all, the present deeply critical state of the concept of the gene.
Integrated Service Provisioning in an Ipv6 over ATM Research Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eli Dart; Helen Chen; Jerry Friesen
1999-02-01
During the past few years, the worldwide Internet has grown at a phenomenal rate, which has spurred the proposal of innovative network technologies to support the fast, efficient and low-latency transport of a wide spectrum of multimedia traffic types. Existing network infrastructures have been plagued by their inability to provide for real-time application traffic as well as their general lack of resources and resilience to congestion. This work proposes to address these issues by implementing a prototype high-speed network infrastructure consisting of Internet Protocol Version 6 (IPv6) on top of an Asynchronous Transfer Mode (ATM) transport medium. Since ATM is connection-oriented whereas IP uses a connection-less paradigm, the efficient integration of IPv6 over ATM is especially challenging and has generated much interest in the research community. We propose, in collaboration with an industry partner, to implement IPv6 over ATM using a unique approach that integrates IP over fast ATM hardware while still preserving IP's connection-less paradigm. This is achieved by replacing ATM's control software with IP's routing code and by caching IP's forwarding decisions in ATM's VPI/VCI translation tables. Prototype "VR" and distributed-parallel-computing applications will also be developed to exercise the real-time capability of our IPv6 over ATM network.
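The key idea above, caching IP forwarding decisions in ATM's VPI/VCI translation tables, can be caricatured with a simple lookup cache: the first packet of a flow takes the slow (routed) path, and the decision is reused for subsequent packets. This is only an illustrative sketch under that assumption; the names and the routing function are hypothetical and unrelated to the actual prototype.

```python
# Illustrative sketch: reuse of a routing decision for later packets of a flow,
# loosely analogous to caching IP forwarding decisions in VPI/VCI tables.

forwarding_cache = {}  # maps a flow key -> (vpi, vci) "circuit" chosen for it

def route_slow_path(dst_prefix):
    """Hypothetical stand-in for the IP routing code; returns a virtual circuit."""
    return (hash(dst_prefix) % 256, hash(dst_prefix) % 65536)

def forward(packet):
    key = packet["dst_prefix"]
    if key not in forwarding_cache:
        forwarding_cache[key] = route_slow_path(key)  # routed once per flow
    return forwarding_cache[key]                      # then "switched" from the cache

print(forward({"dst_prefix": "2001:db8::/32"}))
```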
Rethinking School Effectiveness and Improvement: A Question of Paradigms
ERIC Educational Resources Information Center
Wrigley, Terry
2013-01-01
The purpose of this article is to contribute to progressive school change by developing a more systematic critique of school effectiveness (SE) and school improvement (SI) as paradigms. Diverse examples of paradigms and paradigm change in non-educational fields are used to create a model of paradigms for application to SE and SI, and to explore…
Thompson, J; Hogg, P; Thompson, S; Manning, D; Szczepura, K
2012-01-01
ROCView has been developed as an image display and response capture (IDRC) solution for image display and the consistent recording of reader responses under the free-response receiver operating characteristic paradigm. A web-based solution to IDRC for observer response studies allows observations to be completed from any location, assuming that display performance and viewing conditions are consistent with the study being completed. The straightforward functionality of the software allows observations to be completed without supervision. ROCView can display images from multiple modalities, in a randomised order if required. Following registration, observers are prompted to begin their image evaluation. All data are recorded via mouse clicks, one to localise (mark) and one to score confidence (rate) using either an ordinal or continuous rating scale. Up to nine “mark-rating” pairs can be made per image. Unmarked images are given a default score of zero. Upon completion of the study, both true-positive and false-positive reports can be downloaded and adapted for analysis. ROCView has the potential to be a useful tool in the assessment of modality performance differences for a range of imaging methods. PMID:22573294
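A minimal sketch of the mark-rating bookkeeping described above: each image can receive up to nine mark-rating pairs, and an image left unmarked contributes a default score of zero. Function and field names are assumptions for illustration, not the actual ROCView data model.

```python
# Hypothetical record-keeping for "mark-rating" pairs in a free-response observer study.
MAX_MARKS_PER_IMAGE = 9

responses = {}  # image_id -> list of (x, y, rating)

def record_mark(image_id, x, y, rating):
    marks = responses.setdefault(image_id, [])
    if len(marks) >= MAX_MARKS_PER_IMAGE:
        raise ValueError("no more than nine mark-rating pairs per image")
    marks.append((x, y, rating))

def score(image_id):
    """Unmarked images get the default score of zero."""
    marks = responses.get(image_id, [])
    return max((r for _, _, r in marks), default=0)

record_mark("case_017", x=212, y=340, rating=4)
print(score("case_017"), score("case_042"))  # -> 4 0
```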
Sample, Paul J.; Gaston, Kirk W.; Alfonzo, Juan D.; Limbach, Patrick A.
2015-01-01
Ribosomal ribonucleic acid (RNA), transfer RNA and other biological or synthetic RNA polymers can contain nucleotides that have been modified by the addition of chemical groups. Traditional Sanger sequencing methods cannot establish the chemical nature and sequence of these modified-nucleotide-containing oligomers. Mass spectrometry (MS) has become the conventional approach for determining the nucleotide composition, modification status and sequence of modified RNAs. Modified RNAs are analyzed by MS using collision-induced dissociation tandem mass spectrometry (CID MS/MS), which produces a complex dataset of oligomeric fragments that must be interpreted to identify and place modified nucleosides within the RNA sequence. Here we report the development of RoboOligo, an interactive software program for the robust analysis of data generated by CID MS/MS of RNA oligomers. There are three main functions of RoboOligo: (i) automated de novo sequencing via the local search paradigm; (ii) manual sequencing with real-time spectrum labeling and cumulative intensity scoring; and (iii) a hybrid approach, coined ‘variable sequencing’, which combines the user intuition of manual sequencing with the high-throughput sampling of automated de novo sequencing. PMID:25820423
Cloudbus Toolkit for Market-Oriented Cloud Computing
NASA Astrophysics Data System (ADS)
Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian
This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.
Agile hardware and software systems engineering for critical military space applications
NASA Astrophysics Data System (ADS)
Huang, Philip M.; Knuth, Andrew A.; Krueger, Robert O.; Garrison-Darrin, Margaret A.
2012-06-01
The Multi Mission Bus Demonstrator (MBD) is a successful demonstration of agile program management and system engineering in a high-risk technology application where utilizing and implementing new, untraditional development strategies were necessary. MBD produced two fully functioning spacecraft for a military/DOD application in a record-breaking time frame and at dramatically reduced costs. This paper discloses the adaptation and application of concepts developed in agile software engineering to hardware product and system development for critical military applications. This challenging spacecraft did not use existing key technology (heritage hardware) and created a large paradigm shift from traditional spacecraft development. The insertion of new technologies and methods in space hardware has long been a problem due to long build times, the desire to use heritage hardware, and lack of effective process. The role of momentum in the innovative process can be exploited to tackle ongoing technology disruptions, allowing risk interactions to be mitigated in a disciplined manner. Examples of how these concepts were used during the MBD program will be delineated. Maintaining project momentum was essential to assessing the constant non-recurring technological challenges which needed to be retired rapidly from the engineering risk liens. Development never slowed, owing to tactical assessment of the hardware and the adoption of the SCRUM technique. We adapted this concept as a representation of mitigation of technical risk while allowing for design freeze later in the program's development cycle. By using Agile Systems Engineering and Management techniques which enabled decisive action, the product development momentum was effectively used to produce two novel space vehicles in a fraction of the time with dramatically reduced cost.
Manyscale Computing for Sensor Processing in Support of Space Situational Awareness
NASA Astrophysics Data System (ADS)
Schmalz, M.; Chapman, W.; Hayden, E.; Sahni, S.; Ranka, S.
2014-09-01
Increasing image and signal data burden associated with sensor data processing in support of space situational awareness implies continuing computational throughput growth beyond the petascale regime. In addition to growing applications data burden and diversity, the breadth, diversity and scalability of high performance computing architectures and their various organizations challenge the development of a single, unifying, practicable model of parallel computation. Therefore, models for scalable parallel processing have exploited architectural and structural idiosyncrasies, yielding potential misapplications when legacy programs are ported among such architectures. In response to this challenge, we have developed a concise, efficient computational paradigm and software called Manyscale Computing to facilitate efficient mapping of annotated application codes to heterogeneous parallel architectures. Our theory, algorithms, software, and experimental results support partitioning and scheduling of application codes for envisioned parallel architectures, in terms of work atoms that are mapped (for example) to threads or thread blocks on computational hardware. Because of the rigor, completeness, conciseness, and layered design of our manyscale approach, application-to-architecture mapping is feasible and scalable for architectures at petascales, exascales, and above. Further, our methodology is simple, relying primarily on a small set of primitive mapping operations and support routines that are readily implemented on modern parallel processors such as graphics processing units (GPUs) and hybrid multi-processors (HMPs). In this paper, we overview the opportunities and challenges of manyscale computing for image and signal processing in support of space situational awareness applications. We discuss applications in terms of a layered hardware architecture (laboratory > supercomputer > rack > processor > component hierarchy). Demonstration applications include performance analysis and results in terms of execution time as well as storage, power, and energy consumption for bus-connected and/or networked architectures. The feasibility of the manyscale paradigm is demonstrated by addressing four principal challenges: (1) architectural/structural diversity, parallelism, and locality, (2) masking of I/O and memory latencies, (3) scalability of design as well as implementation, and (4) efficient representation/expression of parallel applications. Examples will demonstrate how manyscale computing helps solve these challenges efficiently on real-world computing systems.
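The notion of partitioning an application into "work atoms" that are then mapped to threads or thread blocks can be illustrated with a toy scheduler. The tile granularity and the pool-based assignment below are assumptions for illustration; the actual Manyscale mapping primitives are not reproduced here.

```python
# Toy illustration of mapping "work atoms" onto a fixed pool of workers
# (e.g. threads, or thread blocks on a GPU/HMP).
from concurrent.futures import ThreadPoolExecutor

def make_work_atoms(image, tile):
    """Partition a 2-D image (list of rows) into tile x tile work atoms."""
    h, w = len(image), len(image[0])
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            yield [row[c:c + tile] for row in image[r:r + tile]]

def process_atom(atom):
    # Stand-in kernel: sum of pixel values in the tile.
    return sum(sum(row) for row in atom)

image = [[(r * c) % 7 for c in range(64)] for r in range(64)]
atoms = list(make_work_atoms(image, tile=16))

with ThreadPoolExecutor(max_workers=4) as pool:   # the worker "pool" of the architecture
    partials = list(pool.map(process_atom, atoms))

print(len(atoms), sum(partials))
```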
Franz, Eelco; Delaquis, Pascal; Morabito, Stefano; Beutin, Lothar; Gobius, Kari; Rasko, David A; Bono, Jim; French, Nigel; Osek, Jacek; Lindstedt, Bjørn-Arne; Muniesa, Maite; Manning, Shannon; LeJeune, Jeff; Callaway, Todd; Beatson, Scott; Eppinger, Mark; Dallman, Tim; Forbes, Ken J; Aarts, Henk; Pearl, David L; Gannon, Victor P J; Laing, Chad R; Strachan, Norval J C
2014-09-18
The rates of foodborne disease caused by gastrointestinal pathogens continue to be a concern in both the developed and developing worlds. The growing world population, the increasing complexity of agri-food networks and the wide range of foods now associated with STEC are potential drivers for increased risk of human disease. It is vital that new developments in technology, such as whole genome sequencing (WGS), are effectively utilized to help address the issues associated with these pathogenic microorganisms. This position paper, arising from an OECD funded workshop, provides a brief overview of next generation sequencing technologies and software. It then uses the agent-host-environment paradigm as a basis to investigate the potential benefits and pitfalls of WGS in the examination of (1) the evolution and virulence of STEC, (2) epidemiology from bedside diagnostics to investigations of outbreaks and sporadic cases and (3) food protection from routine analysis of foodstuffs to global food networks. A number of key recommendations are made that include: validation and standardization of acquisition, processing and storage of sequence data including the development of an open access "WGSNET"; building up of sequence databases from both prospective and retrospective isolates; development of a suite of open-access software specific for STEC accessible to non-bioinformaticians that promotes understanding of both the computational and biological aspects of the problems at hand; prioritization of research funding to both produce and integrate genotypic and phenotypic information suitable for risk assessment; training to develop a supply of individuals working in bioinformatics/software development; training for clinicians, epidemiologists, the food industry and other stakeholders to ensure uptake of the technology and finally review of progress of implementation of WGS. Currently the benefits of WGS are being slowly teased out by academic, government, and industry or private sector researchers around the world. The next phase will require a coordinated international approach to ensure that its potential to contribute to the challenge of STEC disease can be realized in a cost-effective and timely manner. Copyright © 2014. Published by Elsevier B.V.
Evolution of Structural DNA Nanotechnology.
Nummelin, Sami; Kommeri, Juhana; Kostiainen, Mauri A; Linko, Veikko
2018-06-01
The research field entitled structural DNA nanotechnology emerged in the beginning of the 1980s as the first immobile synthetic nucleic acid junctions were postulated and demonstrated. Since then, the field has taken huge leaps toward advanced applications, especially during the past decade. This Progress Report summarizes how the controllable, custom, and accurate nanostructures have recently evolved together with powerful design and simulation software. Simultaneously they have provided a significant expansion of the shape space of the nanostructures. Today, researchers can select the most suitable fabrication methods, and design paradigms and software from a variety of options when creating unique DNA nanoobjects and shapes for a plethora of implementations in materials science, optics, plasmonics, molecular patterning, and nanomedicine. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A High-Availability, Distributed Hardware Control System Using Java
NASA Technical Reports Server (NTRS)
Niessner, Albert F.
2011-01-01
Two independent coronagraph experiments that require 24/7 availability with different optical layouts and different motion control requirements are commanded and controlled with the same Java software system executing on many geographically scattered computer systems interconnected via TCP/IP. High availability of a distributed system requires that the computers have a robust communication messaging system, making the mix of TCP/IP (a robust transport) and XML (a robust message) a natural choice. XML also adds configuration flexibility. Java then adds object-oriented paradigms, exception handling, heavily tested libraries, and many third party tools for implementation robustness. The result is a software system that provides users 24/7 access to two diverse experiments, with XML files defining the differences.
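A minimal sketch of the TCP/IP-plus-XML messaging pattern described above: a command is serialized as a small XML document, sent over a socket, and parsed on the other side. The message schema, element names, and port are hypothetical; the actual coronagraph control protocol is not specified here.

```python
# Illustrative command message: XML payload over a TCP socket (loopback demo).
import socket
import threading
import xml.etree.ElementTree as ET

HOST, PORT = "127.0.0.1", 9099  # hypothetical address/port
ready = threading.Event()

def server():
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # signal that the listener is up
        conn, _ = srv.accept()
        with conn:
            doc = ET.fromstring(conn.recv(4096).decode())
            print("server received:", doc.tag, doc.attrib)

t = threading.Thread(target=server, daemon=True)
t.start()
ready.wait()

cmd = ET.Element("command", {"device": "stage1", "action": "move", "position": "12.5"})
with socket.socket() as cli:
    cli.connect((HOST, PORT))
    cli.sendall(ET.tostring(cmd))        # the XML message is the unit of exchange
t.join(timeout=2)
```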
An Overview of Starfish: A Table-Centric Tool for Interactive Synthesis
NASA Technical Reports Server (NTRS)
Tsow, Alex
2008-01-01
Engineering is an interactive process that requires intelligent interaction at many levels. My thesis [1] advances an engineering discipline for high-level synthesis and architectural decomposition that integrates perspicuous representation, designer interaction, and mathematical rigor. Starfish, the software prototype for the design method, implements a table-centric transformation system for reorganizing control-dominated system expressions into high-level architectures. Based on the digital design derivation (DDD) system, a designer-guided synthesis technique that applies correctness-preserving transformations to synchronous data flow specifications expressed as co-recursive stream equations, Starfish enhances user interaction and extends the reachable design space by incorporating four innovations: behavior tables, serialization tables, data refinement, and operator retiming. Behavior tables express systems of co-recursive stream equations as a table of guarded signal updates. Developers and users of the DDD system used manually constructed behavior tables to help them decide which transformations to apply and how to specify them. These design exercises produced several formally constructed hardware implementations: the FM9001 microprocessor, an SECD machine for evaluating LISP, and the SchemEngine, a garbage-collected machine for interpreting a byte-code representation of compiled Scheme programs. Bose and Tuna, two of DDD's developers, have subsequently commercialized the design derivation methodology at Derivation Systems, Inc. (DSI). DSI has formally derived and validated PCI bus interfaces and a Java byte-code processor; they further executed a contract to prototype SPIDER, NASA's ultra-reliable communications bus. To date, most derivations from DDD and DRS have targeted hardware due to its synchronous design paradigm. However, Starfish expressions are independent of the synchronization mechanism; there is no commitment to hardware or globally broadcast clocks. Though software back-ends for design derivation are limited to the DDD stream-interpreter, targeting synchronous or real-time software is not substantively different from targeting hardware.
AutoDrug: fully automated macromolecular crystallography workflows for fragment-based drug discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Yingssu; McPhillips, Scott E.
New software has been developed for automating the experimental and data-processing stages of fragment-based drug discovery at a macromolecular crystallography beamline. A new workflow-automation framework orchestrates beamline-control and data-analysis software while organizing results from multiple samples. AutoDrug is software based upon the scientific workflow paradigm that integrates the Stanford Synchrotron Radiation Lightsource macromolecular crystallography beamlines and third-party processing software to automate the crystallography steps of the fragment-based drug-discovery process. AutoDrug screens a cassette of fragment-soaked crystals, selects crystals for data collection based on screening results and user-specified criteria and determines optimal data-collection strategies. It then collects and processes diffraction data, performs molecular replacement using provided models and detects electron density that is likely to arise from bound fragments. All processes are fully automated, i.e. are performed without user interaction or supervision. Samples can be screened in groups corresponding to particular proteins, crystal forms and/or soaking conditions. A single AutoDrug run is only limited by the capacity of the sample-storage dewar at the beamline: currently 288 samples. AutoDrug was developed in conjunction with RestFlow, a new scientific workflow-automation framework. RestFlow simplifies the design of AutoDrug by managing the flow of data and the organization of results and by orchestrating the execution of computational pipeline steps. It also simplifies the execution and interaction of third-party programs and the beamline-control system. Modeling AutoDrug as a scientific workflow enables multiple variants that meet the requirements of different user groups to be developed and supported. A workflow tailored to mimic the crystallography stages comprising the drug-discovery pipeline of CoCrystal Discovery Inc. has been deployed and successfully demonstrated. This workflow was run once on the same 96 samples that the group had examined manually and the workflow cycled successfully through all of the samples, collected data from the same samples that were selected manually and located the same peaks of unmodeled density in the resulting difference Fourier maps.
Using parallel computing for the display and simulation of the space debris environment
NASA Astrophysics Data System (ADS)
Möckel, M.; Wiedemann, C.; Flegel, S.; Gelhaus, J.; Vörsmann, P.; Klinkrad, H.; Krag, H.
2011-07-01
Parallelism is becoming the leading paradigm in today's computer architectures. In order to take full advantage of this development, new algorithms have to be specifically designed for parallel execution while many old ones have to be upgraded accordingly. One field in which parallel computing has been firmly established for many years is computer graphics. Calculating and displaying three-dimensional computer generated imagery in real time requires complex numerical operations to be performed at high speed on a large number of objects. Since most of these objects can be processed independently, parallel computing is applicable in this field. Modern graphics processing units (GPUs) have become capable of performing millions of matrix and vector operations per second on multiple objects simultaneously. As a side project, a software tool is currently being developed at the Institute of Aerospace Systems that provides an animated, three-dimensional visualization of both actual and simulated space debris objects. Due to the nature of these objects it is possible to process them individually and independently from each other. Therefore, an analytical orbit propagation algorithm has been implemented to run on a GPU. By taking advantage of all its processing power a huge performance increase, compared to its CPU-based counterpart, could be achieved. For several years efforts have been made to harness this computing power for applications other than computer graphics. Software tools for the simulation of space debris are among those that could profit from embracing parallelism. With recently emerged software development tools such as OpenCL it is possible to transfer the new algorithms used in the visualization outside the field of computer graphics and implement them, for example, into the space debris simulation environment. This way they can make use of parallel hardware such as GPUs and Multi-Core-CPUs for faster computation. In this paper the visualization software will be introduced, including a comparison between the serial and the parallel method of orbit propagation. Ways of how to use the benefits of the latter method for space debris simulation will be discussed. An introduction to OpenCL will be given as well as an exemplary algorithm from the field of space debris simulation.
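To make the serial-versus-parallel comparison concrete, the sketch below propagates a set of debris objects with a trivial two-body mean-motion model, once in a plain loop and once with a process pool. The propagation model is deliberately simplistic and the object data are invented; the GPU/OpenCL implementation discussed in the paper is not reproduced here.

```python
# Toy comparison of serial vs. parallel propagation of many independent objects.
import math
from multiprocessing import Pool

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def propagate(args):
    """Advance the mean anomaly of one object by dt seconds (simplified two-body motion)."""
    sma_km, mean_anomaly_rad, dt_s = args
    n = math.sqrt(MU / sma_km**3)          # mean motion, rad/s
    return (mean_anomaly_rad + n * dt_s) % (2 * math.pi)

objects = [(7000.0 + i * 0.5, 0.01 * i, 3600.0) for i in range(100_000)]

if __name__ == "__main__":
    serial = [propagate(o) for o in objects]          # one object at a time
    with Pool() as pool:                              # all cores; objects are independent
        parallel = pool.map(propagate, objects, chunksize=2_000)
    assert serial == parallel                         # same results, different execution model
```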
ERIC Educational Resources Information Center
Prew, Martin
2009-01-01
The article posits a paradigm for school development (SD) in the context of a developing country, which is somewhat different from the dominant SD and school improvement (SI) paradigm in the West. Within this paradigm the norm of a school-parent engagement over pedagogical issues as in the West is replaced by imperatives based on full community…
Martinez-Espronceda, Miguel; Martinez, Ignacio; Serrano, Luis; Led, Santiago; Trigo, Jesús Daniel; Marzo, Asier; Escayola, Javier; Garcia, José
2011-05-01
Traditionally, e-Health solutions were located at the point of care (PoC), while the new ubiquitous user-centered paradigm draws on standard-based personal health devices (PHDs). Such devices place strict constraints on computation and battery efficiency, which encouraged the International Organization for Standardization/IEEE 11073 (X73) standard for medical devices to evolve from X73PoC to X73PHD. In this context, low-voltage low-power (LV-LP) technologies meet the restrictions of X73PHD-compliant devices. Since X73PHD does not address the software architecture, the accomplishment of an efficient design falls directly on the software developer. Therefore, the computational and battery performance of such LV-LP-constrained devices can be further improved through an efficient X73PHD implementation design. In this context, this paper proposes a new methodology to implement X73PHD on microcontroller-based platforms with LV-LP constraints. This implementation methodology has been developed through a patterns-based approach and applied to a number of X73PHD-compliant agents (including weighing scale, blood pressure monitor, and thermometer specializations) and microprocessor architectures (8, 16, and 32 bits) as a proof of concept. As a reference, the results obtained for the weighing scale show that all features of X73PHD can run on an ARM7TDMI-based microcontroller architecture requiring only 168 B of RAM and 2546 B of flash memory.
NASA Astrophysics Data System (ADS)
Pimentel, Maria Da Graça C.; Cattelan, Renan G.; Melo, Erick L.; Freitas, Giliard B.; Teixeira, Cesar A.
In earlier work we proposed the Watch-and-Comment (WaC) paradigm as the seamless capture of multimodal comments made by one or more users while watching a video, resulting in the automatic generation of multimedia documents specifying annotated interactive videos. The aim is to allow services to be offered by applying document engineering techniques to the multimedia document generated automatically. The WaC paradigm was demonstrated with a WaCTool prototype application which supports multimodal annotation over video frames and segments, producing a corresponding interactive video. In this chapter, we extend the WaC paradigm to consider contexts in which several viewers may use their own mobile devices while watching and commenting on an interactive-TV program. We first review our previous work. Next, we discuss scenarios in which mobile users can collaborate via the WaC paradigm. We then present a new prototype application which allows users to employ their mobile devices to collaboratively annotate points of interest in video and interactive-TV programs. We also detail the current software infrastructure which supports our new prototype; the infrastructure extends the Ginga middleware for the Brazilian Digital TV with an implementation of the UPnP protocol - the aim is to provide the seamless integration of the users' mobile devices into the TV environment. As a result, the work reported in this chapter defines the WaC paradigm for the mobile-user as an approach to allow the collaborative annotation of the points of interest in video and interactive-TV programs.
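A minimal sketch of what a collaborative point-of-interest annotation might look like as data: each comment is tied to a media timestamp and an author device, and the collected annotations can later be serialized into a document. The field names are assumptions, not the WaCTool or Ginga/UPnP data model.

```python
# Hypothetical multimodal annotation records for a Watch-and-Comment session.
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class Annotation:
    media_time_s: float     # point of interest in the video/TV program
    author_device: str      # which viewer's mobile device produced it
    kind: str               # e.g. "text", "ink", "audio"
    payload: str            # the comment itself (or a reference to it)

session: List[Annotation] = []

def annotate(media_time_s, author_device, kind, payload):
    session.append(Annotation(media_time_s, author_device, kind, payload))

annotate(42.0, "phone-ana", "text", "Nice goal!")
annotate(42.5, "tablet-bob", "ink", "stroke:(10,10)->(40,12)")

# Automatic generation of a (very small) document from the collected comments.
print(json.dumps([asdict(a) for a in sorted(session, key=lambda a: a.media_time_s)], indent=2))
```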
Bujarski, Spencer; Ray, Lara A.
2016-01-01
In spite of high prevalence and disease burden, scientific consensus on the etiology and treatment of Alcohol Use Disorder (AUD) has yet to be reached. The development and utilization of experimental psychopathology paradigms in the human laboratory represents a cornerstone of AUD research. In this review, we describe and critically evaluate the major experimental psychopathology paradigms developed for AUD, with an emphasis on their implications, strengths, weaknesses, and methodological considerations. Specifically we review alcohol administration, self-administration, cue-reactivity, and stress-reactivity paradigms. We also provide an introduction to the application of experimental psychopathology methods to translational research including genetics, neuroimaging, pharmacological and behavioral treatment development, and translational science. Through refining and manipulating key phenotypes of interest, these experimental paradigms have the potential to elucidate AUD etiological factors, improve the efficiency of treatment developments, and refine treatment targets thus advancing precision medicine. PMID:27266992
Multi-Purpose, Application-Centric, Scalable I/O Proxy Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, M. C.
2015-06-15
MACSio is a Multi-purpose, Application-Centric, Scalable I/O proxy application. It is designed to support a number of goals with respect to parallel I/O performance testing and benchmarking including the ability to test and compare various I/O libraries and I/O paradigms, to predict scalable performance of real applications and to help identify where improvements in I/O performance can be made within the HPC I/O software stack.
NASA Astrophysics Data System (ADS)
Ali, Mufajjul
This paper proposes a Green Cloud model for mobile Cloud computing. The proposed model leverages the current trend of IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service), and looks at a new paradigm called "Network as a Service" (NaaS). The Green Cloud model proposes various Telco revenue-generating streams and services with CaaS (Cloud as a Service) for the near future.
A New Paradigm for Ovarian Sex Cord-Stromal Tumor Development
2016-05-01
Award Number: W81XWH-15-1-0082. Title: A New Paradigm for Ovarian Sex Cord-Stromal Tumor Development. Principal Investigator: Qinglei Li. The report describes progress on examining the role of sustained activation of oocyte TGFBR1 in ovarian tumor development using an in vitro approach. Keywords: ovarian tumor, sex cord.
Information Management and the Biological Warfare Threat
2002-03-01
The report discusses policies of openness and guardedness and the three paradigms (scientific, business, security) as a developing factor for information sharing in the context of the biological warfare threat. Its sections cover scientific-security and business-security paradigm interactions; gene patenting is cited as an example of the latter.
A development framework for semantically interoperable health information systems.
Lopez, Diego M; Blobel, Bernd G M E
2009-02-01
Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state of the art architecture development approaches and standards for health information systems as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and the HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported by formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.
Association Between Increased Vascular Density and Loss of Protective RAS in Early-Stage NPDR
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Raghunandan, Sneha; Vyas, Ruchi J.; Vu, Amanda C.; Bryant, Douglas; Yaqian, Duan; Knecht, Brenda E.; Grant, Maria B.; Chalam, K. V.; Parsons-Wingerter, Patricia
2016-01-01
Our hypothesis predicts that retinal blood vessels increase in density during early-stage progression to moderate nonproliferative diabetic retinopathy (NPDR). The prevailing paradigm of NPDR progression is that vessels drop out prior to abnormal, vision-impairing regrowth at late-stage proliferative diabetic retinopathy (DR). However, surprising results for our previous preliminary study 1 with NASA's VESsel GENeration Analysis (VESGEN) software showed that vessels proliferated considerably during moderate NPDR compared to drop out at both mild and severe NPDR. Validation of our hypothesis will support development of successful early-stage regenerative therapies such as vascular repair by circulating angiogenic cells (CACs). The renin-angiotensin system (RAS) is implicated in the pathogenesis of DR and in the function of CACs, a critical bone marrow-derived population that is instrumental in vascular repair.
Suciu, George; Suciu, Victor; Martian, Alexandru; Craciunescu, Razvan; Vulpe, Alexandru; Marcu, Ioana; Halunga, Simona; Fratu, Octavian
2015-11-01
Big data storage and processing are considered as one of the main applications for cloud computing systems. Furthermore, the development of the Internet of Things (IoT) paradigm has advanced the research on Machine to Machine (M2M) communications and enabled novel tele-monitoring architectures for E-Health applications. However, there is a need for converging current decentralized cloud systems, general software for processing big data and IoT systems. The purpose of this paper is to analyze existing components and methods of securely integrating big data processing with cloud M2M systems based on Remote Telemetry Units (RTUs) and to propose a converged E-Health architecture built on Exalead CloudView, a search based application. Finally, we discuss the main findings of the proposed implementation and future directions.
Unidata's Vision for Transforming Geoscience by Moving Data Services and Software to the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.; Fisher, W.; Yoksas, T.
2014-12-01
Universities are facing many challenges: shrinking budgets, rapidly evolving information technologies, exploding data volumes, multidisciplinary science requirements, and high student expectations. These changes are upending traditional approaches to accessing and using data and software. It is clear that Unidata's products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. Specifically, Unidata is working toward establishing a community-based development environment that supports the creation and use of software services to build end-to-end data workflows. The design encourages the creation of services that can be broken into small, independent chunks that provide simple capabilities. Chunks could be used individually to perform a task, or chained into simple or elaborate workflows. The services will also be portable, allowing their use in researchers' own cloud-based computing environments. In this talk, we present a vision for Unidata's future in cloud-enabled data services and discuss our initial efforts to deploy a subset of Unidata data services and tools in the Amazon EC2 and Microsoft Azure cloud environments, including the transfer of real-time meteorological data into its cloud instances, product generation using those data, and the deployment of TDS, McIDAS ADDE and AWIPS II data servers and the Integrated Data Viewer visualization tool.
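The idea of small, independent service "chunks" chained into workflows can be sketched as simple composable functions. Real Unidata services (TDS, ADDE, AWIPS II) expose far richer interfaces; the step names below are invented for illustration only.

```python
# Illustrative chaining of small, single-purpose "service chunks" into a workflow.
from functools import reduce

def fetch(request):
    request["data"] = [21.5, 22.0, 19.8]      # pretend remote data access
    return request

def quality_control(request):
    request["data"] = [x for x in request["data"] if x > 0]
    return request

def mean_product(request):
    request["product"] = sum(request["data"]) / len(request["data"])
    return request

def run_workflow(steps, request):
    # Each chunk takes the accumulated request/result and passes it along.
    return reduce(lambda acc, step: step(acc), steps, request)

result = run_workflow([fetch, quality_control, mean_product], {"station": "KBOU"})
print(result["product"])
```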
Strategic Intellectual Capital Development: A Defining Paradigm for HRD?
ERIC Educational Resources Information Center
Holton, Elwood F., III; Yamkovenko, Bogdan
2008-01-01
The performance paradigm of human resource development (HRD) practice has served the field well, particularly in enhancing the relevance and impact of HRD interventions. However, in this article, it is argued that the time has come for a new defining paradigm to advance the field of HRD to a higher level of organizational impact. This article…
Collaborative business processes for enhancing partnerships among software services providers
NASA Astrophysics Data System (ADS)
Heil Cancian, Maiara; Rabelo, Ricardo; Gresse von Wangenheim, Christiane
2015-08-01
Software services have represented a powerful view to support the realisation of the service-oriented architecture (SOA) paradigm. Using open standards and facilitating systems projects, they have increasingly been used as a corporate architectural approach to create interoperable, services-based software solutions that can more easily be reused and shared across disparate applications. In the context of software companies, most are small firms that have enormous difficulties remaining competitive. One strategy to enhance their sustainability is to enlarge partnerships among them at a more valuable level by jointly offering (web) services-based solutions. However, their culture of collaboration is low, and partnerships are usually made with the same companies and only sporadically. This article presents an approach to support more intense collaboration among software companies so that they can respond to business opportunities in a more agile way, joining capacities and capabilities which they would not have if they worked alone. This requires, however, some preparedness. From the perspective of business processes, they should understand how to carry out a collaboration more properly. This is essentially what this article is about. It presents a comprehensive list of collaborative business processes and base practices that can also act as a guide for service providers' managers to implement and manage the collaboration along its lifecycle. Processes have been validated and results are discussed.
Engineering paradigms and anthropogenic global change
NASA Astrophysics Data System (ADS)
Bohle, Martin
2016-04-01
This essay discusses 'paradigms' as means to conceive anthropogenic global change. Humankind alters earth systems because of the number of people, the patterns of consumption of resources, and the alterations of environments. This process of anthropogenic global change is a composite consisting of societal (in the 'noosphere') and natural (in the 'bio-geosphere') features. Engineering intercedes between these features; e.g. observing stratospheric ozone depletion has led to understanding it as a collateral artefact of a particular set of engineering choices. Beyond any specific use case, engineering works have a common function; e.g. civil engineering intersects economic activity and the geosphere. People conceive their actions in the noosphere, including giving purpose to their engineering. The 'noosphere' is the ensemble of social, cultural or political concepts ('shared subjective mental insights') of people. Among people's concepts are the paradigms for how to shape environments, production systems and consumption patterns given their societal preferences. In that context, engineering is a means to implement a given development path. Four paradigms for bringing about anthropogenic global change are currently distinguishable. Among these 'engineering paradigms', 'adaptation' is a paradigm for a business-as-usual scenario and steady development paths of societies. Applying this paradigm implies forecasting the change to come, designing engineering works appropriately, and maintaining as far as possible the current production and consumption patterns. An alternative would be to adjust the development paths of societies incrementally, namely to 'dovetail' anthropogenic and natural fluxes of matter and energy. To apply that paradigm, research has to identify 'natural boundaries', how to modify production and consumption patterns, and how to tackle processes in the noosphere to render alterations of common development paths acceptable. A further alternative, the paradigm of 'ecomodernism', implies accentuating some of the current development paths of societies with the goal of 'decoupling' anthropogenic and natural fluxes of matter and energy. Under the paradigm of 'geoengineering', engineering works 'modulate' natural fluxes of matter to counter the effect of anthropogenic fluxes of matter, instead of altering the development paths of societies. Thus, anthropogenic global change is a composite process in which engineering intercedes between the 'noosphere' and the 'bio-geosphere'. Paradigms for 'how to engineer earth systems' reflect different concepts ('shared subjective insights') of how to combine knowledge with use, function and purpose. Currently, four such paradigms are distinguishable; each conveys a recipe for how human activity and the bio-geosphere should intersect.
National Wind Distance Learning Collaborative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. James B. Beddow
2013-03-29
Executive Summary The energy development assumptions identified in the Department of Energy's position paper, 20% Wind Energy by 2030, projected an exploding demand for wind energy-related workforce development. These primary assumptions drove a secondary set of assumptions that early stage wind industry workforce development and training paradigms would need to undergo significant change if the workforce needs were to be met. The current training practice and culture within the wind industry is driven by a relatively small number of experts with deep field experience and knowledge. The current training methodology is dominated by face-to-face, classroom-based, instructor-present training. Given these assumptions and learning paradigms, the purpose of the National Wind Distance Learning Collaborative was to determine the feasibility of developing online learning strategies and products focused on training wind technicians. The initial project scope centered on (1) identifying resources that would be needed for development of subject matter and course design/delivery strategies for industry-based (non-academic) training, and (2) development of an appropriate Learning Management System (LMS). As the project unfolded, the initial scope was expanded to include development of learning products and the addition of an academic-based training partner. The core partners included two training entities, industry-based Airstreams Renewables and academic-based Lake Area Technical Institute. A third partner, Vision Video Interactive, Inc., provided technology-based learning platforms (hardware and software). The revised scope yielded an expanded set of results beyond the initial expectation. Eight learning modules were developed for the industry-based Electrical Safety course. These modules were subsequently redesigned and repurposed for test application in an academic setting. Software and hardware developments during the project's timeframe enabled a redesign providing for student access through the use of tablet devices such as iPads. Early prototype Learning Management Systems (LMS) featuring more student-centric access and interfaces with emerging social media were developed and utilized during the testing applications. The project also produced soft results involving cross learning between and among the partners regarding subject matter expertise, online learning pedagogy, and eLearning technology-based platforms. The partners believe that the most significant, overarching accomplishment of the project was the development and implementation of goals, activities, and outcomes that significantly exceeded those proposed in the initial grant application submitted in 2009. Key specific accomplishments include: (1) development of a set of 8 online learning modules addressing electrical safety as it relates to the work of wind technicians; (2) development of a flexible, open-ended Learning Management System (LMS); (3) creation of a robust body of learning (knowledge, experience, skills, and relationships). Project leaders have concluded that there is substantial resource equity that could be leveraged and recommend that it be carried forward to pursue a Next Stage Opportunity relating to development of an online core curriculum for institute and community college energy workforce development programs.
Bally, Jill M G
2012-07-01
The purpose of this study was to examine the strengths and limitations of common research paradigms used in the study of the hope of parents who have children with a variety of illnesses. Research findings on parental hope extracted from only one paradigm present limitations to related knowledge development. To take into account the contributions from each paradigm and to allow for a multidimensional understanding of parental hope, a multiparadigmatic approach is needed. The complementary findings from multiple research paradigms can lead to a comprehensive base of knowledge that can guide future research and develop effective, family-centered pediatric nursing care. © 2012, Wiley Periodicals, Inc.
WebGL and web audio software lightweight components for multimedia education
NASA Astrophysics Data System (ADS)
Chang, Xin; Yuksel, Kivanc; Skarbek, Władysław
2017-08-01
The paper presents the results of our recent work on the development of the contemporary computing platform DC2 for multimedia education using WebGL and Web Audio, the W3C standards. Using the literate programming paradigm, the WEBSA educational tools were developed. They offer the user (student) access to an expandable collection of WebGL shaders and Web Audio scripts. The unique feature of DC2 is the option of literate programming, offered to both the author and the reader in order to improve the interactivity of lightweight WebGL and Web Audio components. For instance, users can define: source audio nodes including synthetic sources, destination audio nodes, and nodes for audio processing such as sound wave shaping, spectral band filtering, convolution based modification, etc. In the case of WebGL, besides classic graphics effects based on mesh and fractal definitions, novel image processing analysis by shaders is offered, such as nonlinear filtering, histogram of gradients, and Bayesian classifiers.
[Problem based learning from the perspective of tutors].
Navarro Hernández, Nancy; Illesca P, Mónica; Cabezas G, Mirtha
2009-02-01
Problem-based learning is a student-centered learning technique that develops deductive, constructive and reasoning capacities among students. Teachers must adapt to this paradigm of constructing rather than transmitting knowledge. To interpret the importance of tutors in problem-based learning during a module of health research and management given to medical, nursing, physical therapy, midwifery, technology and nutrition students. Eight teachers who participated in a module using problem-based learning agreed to participate in an in-depth interview. The qualitative analysis of the recorded textual information was performed using the ATLAS software. We identified 662 meaning units, grouped into 29 descriptive categories, with eight emerging meta-categories. The sequential and cross-generated qualitative analysis produced four domains: competence among students, competence of teachers, student-centered learning and the evaluation process. Multiprofessional problem-based learning contributes to the development of generic competences among future health professionals, such as multidisciplinary work, critical capacity and social skills. Teachers must support the students in the context of their problems and social situation.
Managing resource capacity using hybrid simulation
NASA Astrophysics Data System (ADS)
Ahmad, Norazura; Ghani, Noraida Abdul; Kamil, Anton Abdulbasah; Tahar, Razman Mat
2014-12-01
Due to the diversity of patient flows and the interdependency of the emergency department (ED) with other units in the hospital, the use of analytical models is not practical for ED modeling. One effective approach to studying the dynamic complexity of ED problems is to develop a computer simulation model that can be used to understand the structure and behavior of the system. Attempting to build a holistic model using DES alone would be too complex, while using SD alone would lack the detailed characteristics of the system. This paper discusses the combination of DES and SD in order to get a better representation of the actual system than either modeling paradigm provides on its own. The model is developed using the AnyLogic software, which enables us to study patient flows and the complex interactions among hospital resources in ED operations. Results from the model show that patients' length of stay is influenced by laboratory turnaround times, the bed occupancy rate and the ward admission rate.
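A toy flavour of mixing the two paradigms: discrete events (arrivals, discharges) update an aggregate bed-occupancy stock that in turn gates admissions. The rates and capacity are invented, and this sketch is in no way a substitute for the AnyLogic DES/SD model described above.

```python
# Toy hybrid simulation: discrete events (arrivals/discharges) drive an
# aggregate stock (occupied beds), in the spirit of combined DES + SD.
import heapq
import random

random.seed(1)
CAPACITY = 20
occupied = 0                    # aggregate "stock" of occupied beds
blocked = 0                     # arrivals turned away because the stock is full
events = [(0.0, "arrival")]     # (time_h, kind) priority queue
now, horizon = 0.0, 24.0

while events and now < horizon:
    now, kind = heapq.heappop(events)
    if kind == "arrival":
        if occupied < CAPACITY:
            occupied += 1                                               # admit
            heapq.heappush(events, (now + random.expovariate(1 / 4.0), "discharge"))
        else:
            blocked += 1                                                # gated by occupancy
        heapq.heappush(events, (now + random.expovariate(2.0), "arrival"))
    else:                                                               # discharge frees a bed
        occupied -= 1

print(f"occupied beds at t={now:.1f} h: {occupied}, blocked arrivals: {blocked}")
```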
The Cambridge Structural Database in retrospect and prospect.
Groom, Colin R; Allen, Frank H
2014-01-13
The Cambridge Crystallographic Data Centre (CCDC) was established in 1965 to record numerical, chemical and bibliographic data relating to published organic and metal-organic crystal structures. The Cambridge Structural Database (CSD) now stores data for nearly 700,000 structures and is a comprehensive and fully retrospective historical archive of small-molecule crystallography. Nearly 40,000 new structures are added each year. As X-ray crystallography celebrates its centenary as a subject, and the CCDC approaches its own 50th year, this article traces the origins of the CCDC as a publicly funded organization and its onward development into a self-financing charitable institution. Principally, however, we describe the growth of the CSD and its extensive associated software system, and summarize its impact and value as a basis for research in structural chemistry, materials science and the life sciences, including drug discovery and drug development. Finally, the article considers the CCDC's funding model in relation to open access and open data paradigms. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A modeling paradigm for interdisciplinary water resources modeling: Simple Script Wrappers (SSW)
NASA Astrophysics Data System (ADS)
Steward, David R.; Bulatewicz, Tom; Aistrup, Joseph A.; Andresen, Daniel; Bernard, Eric A.; Kulcsar, Laszlo; Peterson, Jeffrey M.; Staggenborg, Scott A.; Welch, Stephen M.
2014-05-01
Holistic understanding of a water resources system requires tools capable of model integration. This team has developed an adaptation of the OpenMI (Open Modelling Interface) that allows easy interactions across the data passed between models. Capabilities have been developed to allow programs written in common languages such as matlab, python and scilab to share their data with other programs and accept other program's data. We call this interface the Simple Script Wrapper (SSW). An implementation of SSW is shown that integrates groundwater, economic, and agricultural models in the High Plains region of Kansas. Output from these models illustrates the interdisciplinary discovery facilitated through use of SSW implemented models. Reference: Bulatewicz, T., A. Allen, J.M. Peterson, S. Staggenborg, S.M. Welch, and D.R. Steward, The Simple Script Wrapper for OpenMI: Enabling interdisciplinary modeling studies, Environmental Modelling & Software, 39, 283-294, 2013. http://dx.doi.org/10.1016/j.envsoft.2012.07.006 http://code.google.com/p/simple-script-wrapper/
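The kind of data exchange that the Simple Script Wrapper enables between, say, a groundwater model and an economic model can be caricatured as two functions passing values each season. The coupling variables and coefficients below are invented, and real OpenMI/SSW components exchange data through the wrapper interface rather than through direct function calls.

```python
# Caricature of coupled models exchanging data each time step
# (groundwater level -> pumping decision -> groundwater drawdown).

def economic_model(water_level_m):
    """Hypothetical: farmers pump less as the water table drops."""
    return max(0.0, 50.0 * (water_level_m / 100.0))   # pumping, arbitrary volume units

def groundwater_model(water_level_m, pumping):
    """Hypothetical: recharge minus drawdown proportional to pumping."""
    return water_level_m + 0.8 - 0.05 * pumping

level = 100.0
for year in range(5):
    pumping = economic_model(level)             # data passed from groundwater to economics
    level = groundwater_model(level, pumping)   # and back again for the next step
    print(f"year {year}: pumping={pumping:.1f}, water level={level:.1f} m")
```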
The Integration of CloudStack and OCCI/OpenNebula with DIRAC
NASA Astrophysics Data System (ADS)
Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan
2012-12-01
The increasing availability of Cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing is an easy way to obtain efficient access to resources from both systems. This paper explains the integration of DIRAC with two open-source Cloud Managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing the creation of virtual clusters on demand, including public, private and hybrid clouds. This approach required developing an extension to the previous DIRAC Virtual Machine engine, which was developed for Amazon EC2, allowing the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, such as the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution. License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.
Medical Data Architecture Project Status
NASA Technical Reports Server (NTRS)
Krihak, M.; Middour, C.; Lindsey, A.; Marker, N.; Wolfe, S.; Winther, S.; Ronzano, K.; Bolles, D.; Toscano, W.; Shaw, T.
2017-01-01
The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) effort to minimize or reduce the risk of adverse health outcomes and decrements in performance due to limitations of in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: We do not have the capability to comprehensively process medically-relevant information to support medical operations during exploration missions. This gap identifies that the current International Space Station (ISS) medical data management includes a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems. Furthermore, there are a variety of data sources and methods of data collection. For an exploration mission, the seamless management of such data will enable a more autonomous crew than under the current ISS paradigm. The MDA will develop capabilities that support automated data collection and will address the functionality needed, and the challenges involved, in executing a self-contained medical system that approaches crew health care delivery without assistance from ground support. To attain this goal, the first year of the MDA project focused on reducing technical risk, developing documentation and instituting iterative development processes that established the basis for the first version of MDA software (or Test Bed 1). Test Bed 1 is based on a nominal operations scenario authored by the ExMC Element Scientist. This narrative was decomposed into a Concept of Operations that formed the basis for Test Bed 1 requirements. These requirements were successfully vetted through the MDA Test Bed 1 System Requirements Review, which permitted the MDA project to begin software code development and component integration. This paper highlights the MDA objectives, development processes, and accomplishments, and identifies the fiscal year 2017 milestones and deliverables for the upcoming year.
ERIC Educational Resources Information Center
Verbruggen, Frederick; Logan, Gordon D.
2008-01-01
In 5 experiments, the authors examined the development of automatic response inhibition in the go/no-go paradigm and a modified version of the stop-signal paradigm. They hypothesized that automatic response inhibition may develop over practice when stimuli are consistently associated with stopping. All 5 experiments consisted of a training phase…
A Dedicated Computational Platform for Cellular Monte Carlo T-CAD Software Tools
2015-07-14
computer that establishes an encrypted Virtual Private Network (OpenVPN [44]) based on the Secure Socket Layer (SSL) paradigm. Each user is given a security certificate for each device used to connect to the computing nodes. Stable OpenVPN clients are available for Linux, Microsoft Windows, and Apple OS X. Access to the platform is granted by an encrypted connection based on the Secure Socket Layer (SSL) protocol, implemented in the OpenVPN Virtual Private Network.
Cutting edge technology to enhance nursing classroom instruction at Coppin State University.
Black, Crystal Day; Watties-Daniels, A Denyce
2006-01-01
Educational technologies have changed the paradigm of the teacher-student relationship in nursing education. Nursing students expect to use and to learn from cutting edge technology during their academic careers. Varied technology, from specified software programs (Tegrity and Blackboard) to the use of the Internet as a research medium, can enhance student learning. The authors provide an overview of current cutting edge technologies in nursing classroom instruction and their impact on future nursing practice.
NASA Technical Reports Server (NTRS)
Shearrow, Charles A.
1999-01-01
One of the identified goals of EM3 is to implement virtual manufacturing by the end of the year 2000. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. Common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include materials thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, a virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position of becoming a clearing house for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.
Multiphysics Simulations: Challenges and Opportunities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keyes, David; McInnes, Lois C.; Woodward, Carol
2013-02-12
We consider multiphysics applications from algorithmic and architectural perspectives, where ‘‘algorithmic’’ includes both mathematical analysis and computational complexity, and ‘‘architectural’’ includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities.
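The "common algebraic coupling paradigm" can be illustrated with a toy block system: two physics components coupled through off-diagonal terms, solved either monolithically or by a partitioned block Gauss-Seidel iteration in which each solver sees only its own block and exchanges interface data once per outer iteration. This sketch is illustrative and is not taken from the paper.

```python
# Toy illustration of an algebraic coupling paradigm: two "physics" blocks
# coupled through off-diagonal terms, solved monolithically and then by a
# partitioned block Gauss-Seidel iteration.
import numpy as np

# Coupled linear system  [A11 A12; A21 A22] [x1; x2] = [b1; b2]
A11 = np.array([[4.0, 1.0], [1.0, 3.0]])   # physics 1 (e.g., fluid)
A22 = np.array([[5.0, 2.0], [2.0, 6.0]])   # physics 2 (e.g., structure)
A12 = 0.3 * np.ones((2, 2))                # coupling terms
A21 = 0.2 * np.ones((2, 2))
b1, b2 = np.array([1.0, 2.0]), np.array([3.0, 4.0])

# Monolithic reference solution
A = np.block([[A11, A12], [A21, A22]])
x_ref = np.linalg.solve(A, np.concatenate([b1, b2]))

# Partitioned block Gauss-Seidel: each solver only handles its own block,
# exchanging interface data once per outer iteration.
x1 = np.zeros(2)
x2 = np.zeros(2)
for it in range(50):
    x1 = np.linalg.solve(A11, b1 - A12 @ x2)
    x2 = np.linalg.solve(A22, b2 - A21 @ x1)
    residual = np.linalg.norm(np.concatenate([x1, x2]) - x_ref)
    if residual < 1e-10:
        break
print(f"converged in {it + 1} outer iterations, error = {residual:.2e}")
```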
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S; Sadlier, Ronald J
Quantum communication systems harness modern physics through state-of-the-art optical engineering to provide revolutionary capabilities. An important concern for quantum communication engineering is designing and prototyping these systems to evaluate proposed capabilities. We apply the paradigm of software-defined communication for engineering quantum communication systems to facilitate rapid prototyping and prototype comparisons. We detail how to decompose quantum communication terminals into functional layers defining hardware, software, and middleware concerns, and we describe how each layer behaves. Using the super-dense coding protocol as a test case, we describe implementations of both the transmitter and receiver, and we present results from numerical simulations of the behavior. We find that while the theoretical benefits of super-dense coding are maintained, there is a classical overhead associated with the full implementation.
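For readers unfamiliar with the test case, the following minimal state-vector sketch reproduces ideal super-dense coding: Alice encodes two classical bits with a local Pauli operation on her half of a shared Bell pair, and Bob recovers both bits after receiving that single qubit. It ignores channel noise and the classical overhead the paper quantifies, and it is not the authors' software-defined implementation.

```python
# Minimal state-vector sketch of ideal super-dense coding (no noise model).
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def encode_decode(bits):
    # Shared Bell pair |Phi+> = (|00> + |11>)/sqrt(2); Alice holds qubit 0.
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    # Alice encodes two classical bits with a local Pauli on her qubit.
    pauli = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}[bits]
    state = np.kron(pauli, I) @ bell
    # Bob decodes: CNOT (Alice's qubit controls) then Hadamard on Alice's qubit.
    state = np.kron(H, I) @ (CNOT @ state)
    probs = np.abs(state) ** 2
    outcome = int(np.argmax(probs))          # deterministic in the ideal case
    return (outcome >> 1) & 1, outcome & 1

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert encode_decode(bits) == bits
print("all four two-bit messages recovered from a single transmitted qubit")
```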
A Custom Approach for a Flexible, Real-Time and Reliable Software Defined Utility.
Zaballos, Agustín; Navarro, Joan; Martín De Pozuelo, Ramon
2018-02-28
Information and communication technologies (ICTs) have enabled the evolution of traditional electric power distribution networks towards a new paradigm referred to as the smart grid. However, the different elements that compose the ICT plane of a smart grid are usually conceived as isolated systems that typically result in rigid hardware architectures, which are hard to interoperate, manage and adapt to new situations. In recent years, software-defined systems that take advantage of software and high-speed data network infrastructures have emerged as a promising alternative to classic ad hoc approaches in terms of integration, automation, real-time reconfiguration and resource reusability. The purpose of this paper is to propose the usage of software-defined utilities (SDUs) to address the latent deployment and management limitations of smart grids. More specifically, the implementation of a smart grid's data storage and management system prototype by means of SDUs is introduced, which exhibits the feasibility of this alternative approach. This system features a hybrid cloud architecture able to meet the data storage requirements of electric utilities and adapt itself to their ever-evolving needs. The conducted experiments endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.
Cloud@Home: A New Enhanced Computing Paradigm
NASA Astrophysics Data System (ADS)
Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco
Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)) and Green computing (a new frontier of ethical computing starting from the assumption that, in the near future, energy costs will be related to environmental pollution).
NASA Astrophysics Data System (ADS)
González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco
2013-12-01
This contribution focuses on the optimization of matching-based motion estimation algorithms widely used for video coding standards, using an Altera custom instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization, which locates code leaks, and afterward a custom instruction set is created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughput of the complete designs is shown. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and shows the best combination between on-chip memory and SDRAM for the Nios II processor.
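As a point of reference for the class of algorithms being accelerated, the sketch below is a plain-software full-search block-matching routine using the sum of absolute differences (SAD) criterion, the kind of inner kernel that custom instructions typically target. It is illustrative only and is not the authors' Nios II design.

```python
# Plain-software baseline of full-search block matching with the SAD criterion.
import numpy as np

def full_search(ref, cur, block=8, radius=4):
    """Return one motion vector (dy, dx) per block of the current frame."""
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), np.inf
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(int)
                    sad = np.abs(target - cand).sum()   # sum of absolute differences
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[by // block, bx // block] = best
    return vectors

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))   # content shifted down 2, left 1
print(full_search(ref, cur)[1, 1])               # expect [-2  1] for this interior block
```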
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeure, I.M.
The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM (Parallel, Distributed computation Graph Model), a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels, the DCPG (Distributed Computing Precedence Graph) model and the PAM (Process Architecture Model). DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and the implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA (VISual Assistant), a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.
A Nursing Informatics Research Agenda for 2008–18: Contextual Influences and Key Components
Bakken, Suzanne; Stone, Patricia W.; Larson, Elaine L.
2008-01-01
The context for nursing informatics research has changed significantly since the National Institute of Nursing Research-funded Nursing Informatics Research Agenda was published in 1993 and the Delphi study of nursing informatics research priorities reported a decade ago. The authors focus on three specific aspects of context - genomic health care, shifting research paradigms, and social (Web 2.0) technologies - that must be considered in formulating a nursing informatics research agenda. These influences are illustrated using the significant issue of healthcare associated infections (HAI). A nursing informatics research agenda for 2008–18 must expand users of interest to include interdisciplinary researchers; build upon the knowledge gained in nursing concept representation to address genomic and environmental data; guide the reengineering of nursing practice; harness new technologies to empower patients and their caregivers for collaborative knowledge development; develop user-configurable software approaches that support complex data visualization, analysis, and predictive modeling; facilitate the development of middle-range nursing informatics theories; and encourage innovative evaluation methodologies that attend to human-computer interface factors and organizational context. PMID:18922269
NASA Technical Reports Server (NTRS)
Valley, Lois
1989-01-01
The SPS product, Classic-Ada, is a software tool that supports object-oriented Ada programming with powerful inheritance and dynamic binding. Object Oriented Design (OOD) is an easy, natural development paradigm, but it is not supported by Ada. Following the DOD Ada mandate, SPS developed Classic-Ada to provide a tool which supports OOD and implements code in Ada. It consists of a design language, a code generator and a toolset. As a design language, Classic-Ada supports the object-oriented principles of information hiding, data abstraction, dynamic binding, and inheritance. It also supports natural reuse and incremental development through inheritance and code factoring, and allows Ada and Classic-Ada, dynamic binding and static binding, to be mixed in the same program. Only nine new constructs were added to Ada to provide object-oriented design capabilities. The Classic-Ada code generator translates user application code into fully compliant, ready-to-run, standard Ada. The Classic-Ada toolset is fully supported by SPS and consists of an object generator, a builder, a dictionary manager, and a reporter. Demonstrations of Classic-Ada and the Classic-Ada Browser were given at the workshop.
Integrating medical devices in the operating room using service-oriented architectures.
Ibach, Bastian; Benzko, Julia; Schlichting, Stefan; Zimolong, Andreas; Radermacher, Klaus
2012-08-01
With the increasing documentation requirements and communication capabilities of medical devices in the operating room, the integration and modular networking of these devices have become more and more important. Commercial integrated operating room systems are mainly proprietary developments, usually using proprietary communication standards and interfaces, which reduce the possibility of integrating devices from different vendors. To overcome these limitations, there is a need for an open standardized architecture that is based on standard protocols and interfaces, enabling the integration of devices from different vendors based on heterogeneous software and hardware components. Starting with an analysis of the requirements for device integration in the operating room and the techniques used for integrating devices in other industrial domains, a new concept for an integration architecture for the operating room based on the paradigm of a service-oriented architecture is developed. Standardized communication protocols and interface descriptions are used. As risk management is an important factor in the field of medical engineering, a risk analysis of the developed concept has been carried out and the first prototypes have been implemented.
Animal to human translational paradigms relevant for approach avoidance conflict decision making.
Kirlic, Namik; Young, Jared; Aupperle, Robin L
2017-09-01
Avoidance behavior in clinical anxiety disorders is often a decision made in response to approach-avoidance conflict, resulting in a sacrifice of potential rewards to avoid potential negative affective consequences. Animal research has a long history of relying on paradigms related to approach-avoidance conflict to model anxiety-relevant behavior. This approach includes punishment-based conflict, exploratory, and social interaction tasks. There has been a recent surge of interest in the translation of paradigms from animal to human, in efforts to increase generalization of findings and support the development of more effective mental health treatments. This article briefly reviews animal tests related to approach-avoidance conflict and results from lesion and pharmacologic studies utilizing these tests. We then provide a description of translational human paradigms that have been developed to tap into related constructs, summarizing behavioral and neuroimaging findings. Similarities and differences in findings from analogous animal and human paradigms are discussed. Lastly, we highlight opportunities for future research and paradigm development that will support the clinical utility of this translational work. Copyright © 2017 Elsevier Ltd. All rights reserved.
LabVIEW: a software system for data acquisition, data analysis, and instrument control.
Kalkman, C J
1995-01-01
Computer-based data acquisition systems play an important role in clinical monitoring and in the development of new monitoring tools. LabVIEW (National Instruments, Austin, TX) is a data acquisition and programming environment that allows flexible acquisition and processing of analog and digital data. The main feature that distinguishes LabVIEW from other data acquisition programs is its highly modular graphical programming language, "G," and a large library of mathematical and statistical functions. The advantage of graphical programming is that the code is flexible, reusable, and self-documenting. Subroutines can be saved in a library and reused without modification in other programs. This dramatically reduces development time and enables researchers to develop or modify their own programs. LabVIEW uses a large amount of processing power and computer memory, thus requiring a powerful computer. A large-screen monitor is desirable when developing larger applications. LabVIEW is excellently suited for testing new monitoring paradigms, analysis algorithms, or user interfaces. The typical LabVIEW user is the researcher who wants to develop a new monitoring technique, a set of new (derived) variables by integrating signals from several existing patient monitors, closed-loop control of a physiological variable, or a physiological simulator.
Corredor, Iván; Bernardos, Ana M.; Iglesias, Josué; Casar, José R.
2012-01-01
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have a solid understanding of specific platforms and web technologies. To ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544
The experiences of parents who report youth bullying victimization to school officials.
Brown, James R; Aalsma, Matthew C; Ott, Mary A
2013-02-01
Current research offers a limited understanding of parental experiences when reporting bullying to school officials. This research examines the experiences of middle-school parents as they took steps to protect their bullied youth. The qualitative tradition of interpretive phenomenology was used to provide in-depth analysis of the phenomena. A criterion-based, purposeful sample of 11 parents was interviewed face-to-face with subsequent phone call follow-ups. Interviews were taped, transcribed, and coded. MAXQDA software was used for data coding. In analyzing the interviews, paradigm cases, themes, and patterns were identified. Three parent stages were found: discovering, reporting, and living with the aftermath. In the discovery stage, parents reported using advice-giving in hopes of protecting their youth. As parents noticed negative psychosocial symptoms in their youth escalate, they shifted their focus to reporting the bullying to school officials. All but one parent experienced ongoing resistance from school officials in fully engaging the bullying problem. In the aftermath, 10 of the 11 parents were left with two choices: remove their youth from the school or let the victimization continue. One paradigm case illustrates how a school official met parental expectations of protection. This study highlights a parental sense of ambiguity about school officials' roles and procedures related to school reporting and intervention. The results of this study have implications for the development and use of school-wide bullying protocols and parental advocacy.
Design and implementation of a general main axis controller for the ESO telescopes
NASA Astrophysics Data System (ADS)
Sandrock, Stefan; Di Lieto, Nicola; Pettazzi, Lorenzo; Erm, Toomas
2012-09-01
Most of the real-time control systems at the existing ESO telescopes were developed with "traditional" methods, using general purpose VMEbus electronics, and running applications that were coded by hand, mostly using the C programming language under VxWorks. As we are moving towards more modern design methods, we have explored a model-based design approach for real-time applications in the telescope area, and used the control algorithm of a standard telescope main axis as a first example. We wanted to have a clear work-flow that follows the "correct-by-construction" paradigm, where the implementation is testable in simulation on the development host, and where the testing time spent by debugging on target is minimized. It should respect the domains of control, electronics, and software engineers in the choice of tools. It should be a target-independent approach so that the result could be deployed on various platforms. We have selected the Mathworks tools Simulink, Stateflow, and Embedded Coder for design and implementation, and LabVIEW with NI hardware for hardware-in-the-loop testing, all of which are widely used in industry. We describe how these tools have been used in order to model, simulate, and test the application. We also evaluate the benefits of this approach compared to the traditional method with respect to testing effort and maintainability. For a specific axis controller application we have successfully integrated the result into the legacy platform of the existing VLT software, as well as demonstrated how to use the same design for a new development with a completely different environment.
OnEarth: An Open Source Solution for Efficiently Serving High-Resolution Mapped Image Products
NASA Astrophysics Data System (ADS)
Thompson, C. K.; Plesea, L.; Hall, J. R.; Roberts, J. T.; Cechini, M. F.; Schmaltz, J. E.; Alarcon, C.; Huang, T.; McGann, J. M.; Chang, G.; Boller, R. A.; Ilavajhala, S.; Murphy, K. J.; Bingham, A. W.
2013-12-01
This presentation introduces OnEarth, a server side software package originally developed at the Jet Propulsion Laboratory (JPL), that facilitates network-based, minimum-latency geolocated image access independent of image size or spatial resolution. The key component in this package is the Meta Raster Format (MRF), a specialized raster file extension to the Geospatial Data Abstraction Library (GDAL) consisting of an internal indexed pyramid of image tiles. Imagery to be served is converted to the MRF format and made accessible online via an expandable set of server modules handling requests in several common protocols, including the Open Geospatial Consortium (OGC) compliant Web Map Tile Service (WMTS) as well as Tiled WMS and Keyhole Markup Language (KML). OnEarth has recently transitioned to open source status and is maintained and actively developed as part of GIBS (Global Imagery Browse Services), a collaborative project between JPL and Goddard Space Flight Center (GSFC). The primary function of GIBS is to enhance and streamline the data discovery process and to support near real-time (NRT) applications via the expeditious ingestion and serving of full-resolution imagery representing science products from across the NASA Earth Science spectrum. Open source software solutions are leveraged where possible in order to utilize existing available technologies, reduce development time, and enlist wider community participation. We will discuss some of the factors and decision points in transitioning OnEarth to a suitable open source paradigm, including repository and licensing agreement decision points, institutional hurdles, and perceived benefits. We will also provide examples illustrating how OnEarth is integrated within GIBS and other applications.
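A client retrieves imagery from such a server through standard tile requests; the sketch below assembles an OGC WMTS KVP GetTile URL of the kind a tiled image server like OnEarth answers. The endpoint URL, layer name, and tile-matrix-set identifier are placeholders, not actual OnEarth or GIBS values.

```python
# Generic sketch of an OGC WMTS KVP GetTile request; endpoint, layer, and
# tile-matrix-set names below are placeholders, not real GIBS identifiers.
from urllib.parse import urlencode

def wmts_gettile_url(endpoint, layer, tms, zoom, row, col,
                     fmt="image/jpeg", time=None):
    params = {
        "SERVICE": "WMTS",
        "REQUEST": "GetTile",
        "VERSION": "1.0.0",
        "LAYER": layer,
        "STYLE": "default",
        "TILEMATRIXSET": tms,
        "TILEMATRIX": str(zoom),
        "TILEROW": str(row),
        "TILECOL": str(col),
        "FORMAT": fmt,
    }
    if time is not None:           # many NRT layers carry a time dimension
        params["TIME"] = time
    return endpoint + "?" + urlencode(params)

print(wmts_gettile_url("https://example.org/wmts/epsg4326/best/wmts.cgi",
                       "SomeReflectanceLayer", "EPSG4326_250m",
                       zoom=2, row=1, col=3, time="2013-12-01"))
```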
Geographic Object-Based Image Analysis - Towards a new paradigm.
Blaschke, Thomas; Hay, Geoffrey J; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk
2014-01-01
The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm.
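The contrast between per-pixel analysis and GEOBIA can be made concrete with a small sketch: segment an image into objects and attach features to each object rather than to individual pixels. This toy example uses scikit-image's SLIC superpixels (assuming a reasonably recent scikit-image release) and is far simpler than an operational GEOBIA workflow.

```python
# Toy contrast between per-pixel and object-based analysis: segment an image
# into objects (superpixels) and compute per-object statistics.
import numpy as np
from skimage import data
from skimage.segmentation import slic

image = data.astronaut()                          # stand-in for a remote-sensing scene
labels = slic(image, n_segments=200, compactness=10, start_label=1)

# Per-pixel "classification" would operate on image.reshape(-1, 3) directly.
# Object-based analysis instead attaches features to each segment:
features = []
for obj_id in np.unique(labels):
    mask = labels == obj_id
    mean_rgb = image[mask].mean(axis=0)           # simple spectral feature
    size_px = int(mask.sum())                     # simple geometric feature
    features.append((obj_id, size_px, mean_rgb.round(1)))

print(f"{labels.max()} image objects; first three feature tuples:")
for row in features[:3]:
    print(row)
```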
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C., E-mail: terwilliger@lanl.gov; Bricogne, Gerard; Los Alamos National Laboratory, Mail Stop M888, Los Alamos, NM 87507
Macromolecular structures deposited in the PDB can and should be continually reinterpreted and improved on the basis of their accompanying experimental X-ray data, exploiting the steady progress in methods and software that the deposition of such data into the PDB on a massive scale has made possible. Accurate crystal structures of macromolecules are of high importance in the biological and biomedical fields. Models of crystal structures in the Protein Data Bank (PDB) are in general of very high quality as deposited. However, methods for obtaining the best model of a macromolecular structure from a given set of experimental X-ray data continue to progress at a rapid pace, making it possible to improve most PDB entries after their deposition by re-analyzing the original deposited data with more recent software. This possibility represents a very significant departure from the situation that prevailed when the PDB was created, when it was envisioned as a cumulative repository of static contents. A radical paradigm shift for the PDB is therefore proposed, away from the static archive model towards a much more dynamic body of continuously improving results in symbiosis with continuously improving methods and software. These simultaneous improvements in methods and final results are made possible by the current deposition of processed crystallographic data (structure-factor amplitudes) and will be supported further by the deposition of raw data (diffraction images). It is argued that it is both desirable and feasible to carry out small-scale and large-scale efforts to make this paradigm shift a reality. Small-scale efforts would focus on optimizing structures that are of interest to specific investigators. Large-scale efforts would undertake a systematic re-optimization of all of the structures in the PDB, or alternatively the redetermination of groups of structures that are either related to or focused on specific questions. All of the resulting structures should be made generally available, along with the precursor entries, with various views of the structures being made available depending on the types of questions that users are interested in answering.
LAMOST CCD camera-control system based on RTS2
NASA Astrophysics Data System (ADS)
Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng
2018-05-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on the implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and it provides a reference solution for fully introducing RTS2 into the LAMOST observatory control system.
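The master-slave arrangement described here can be pictured with a schematic sketch: a single "virtual camera" fans an exposure command out to per-CCD workers and gathers their status. The classes and method names below are invented for illustration and are not RTS2 code.

```python
# Schematic sketch of a master-slave camera layer: one virtual camera
# dispatches commands to many per-CCD workers and collects their results.
from concurrent.futures import ThreadPoolExecutor
import random
import time

class SlaveCamera:
    def __init__(self, ccd_id):
        self.ccd_id = ccd_id

    def expose(self, seconds):
        time.sleep(0.01 * random.random())        # stand-in for readout latency
        return {"ccd": self.ccd_id, "exptime": seconds, "status": "OK"}

class VirtualCamera:
    """Master: presents 32 CCDs as a single camera device."""
    def __init__(self, n_ccds=32):
        self.slaves = [SlaveCamera(i) for i in range(n_ccds)]

    def expose_all(self, seconds):
        with ThreadPoolExecutor(max_workers=len(self.slaves)) as pool:
            return list(pool.map(lambda cam: cam.expose(seconds), self.slaves))

if __name__ == "__main__":
    results = VirtualCamera().expose_all(seconds=30)
    failed = [r for r in results if r["status"] != "OK"]
    print(f"{len(results)} CCD exposures commanded, {len(failed)} failures")
```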
Hydrography for the non-Hydrographer: A Paradigm shift in Data Processing
NASA Astrophysics Data System (ADS)
Malzone, C.; Bruce, S.
2017-12-01
Advancements in technology have led to overall systematic improvements, including hardware design, software architecture, and data transmission/telepresence. Historically, utilization of this technology has required a high level of knowledge obtained through many years of experience, training and/or education. High training costs are incurred to achieve and maintain an acceptable level of proficiency within an organization. Recently, engineers have developed off-the-shelf software technology called Qimera that has simplified the processing of hydrographic data. The core technology is centered around the isolation of tasks within the workflow, capitalizing on advances in computing technology to automate the mundane, error-prone tasks and to bring more value to the stages in which human judgment adds value. Key design features include: guided workflow, transcription automation, processing state management, real-time QA, dynamic workflow for validation, collaborative cleaning and production line processing. Since Qimera is designed to guide the user, it allows expedition leaders to focus on science while providing an educational opportunity for students to quickly learn the hydrographic processing workflow, including ancillary data analysis, trouble-shooting, calibration and cleaning. This paper provides case studies on how Qimera is currently implemented in scientific expeditions, the benefits of implementation, and how it is directing the future of on-board research for the non-hydrographer.
A Versatile and Reproducible Multi-Frequency Electrical Impedance Tomography System
Avery, James; Dowrick, Thomas; Faulkner, Mayo; Goren, Nir; Holder, David
2017-01-01
A highly versatile Electrical Impedance Tomography (EIT) system, nicknamed the ScouseTom, has been developed. The system allows control over current amplitude, frequency, number of electrodes, injection protocol and data processing. Current is injected using a Keithley 6221 current source, and voltages are recorded with a 24-bit EEG system with a minimum bandwidth of 3.2 kHz. Custom PCBs interface with a PC to control the measurement process, electrode addressing and triggering of external stimuli. The performance of the system was characterised using resistor phantoms to represent human scalp recordings, with an SNR of 77.5 dB, stable across a four-hour recording and from 20 Hz to 20 kHz. In studies of both haemorrhage using scalp electrodes, and evoked activity using epicortical electrode mats in rats, it was possible to reconstruct images matching established literature at known areas of onset. Data collected using scalp electrodes in humans matched known tissue impedance spectra and was stable over frequency. The experimental procedure is software controlled and is readily adaptable to new paradigms. Where possible, commercial or open-source components were used, to minimise the complexity in reproduction. The hardware designs and software for the system have been released under an open source licence, encouraging contributions and allowing for rapid replication. PMID:28146122
Entering medical practice for the very first time: emotional talk, meaning and identity development.
Helmich, Esther; Bolhuis, Sanneke; Dornan, Tim; Laan, Roland; Koopmans, Raymond
2012-11-01
During early clinical exposure, medical students have many emotive experiences. Through participation in social practice, they learn to give personal meaning to their emotional states. This meaningful social act of participation may lead to a sense of belonging and identity construction. The aim of this study was to broaden and deepen our understanding of the interplay between those experiences and students' identity development. Our research questions asked how medical students give meaning to early clinical experiences and how that affects their professional identity development. Our method was phenomenology. Within that framework we used a narrative interviewing technique. Interviews with 17 medical students on Year 1 attachments to nurses in hospitals and nursing homes were analysed by listening to audio-recordings and reading transcripts. Nine transcripts, which best exemplified the students' range of experiences, were purposively sampled for deeper analysis. Two researchers carried out a systematic analysis using qualitative research software. Finally, cases representing four paradigms were chosen to exemplify the study findings. Students experienced their relationships with the people they met during early clinical experiences in very different ways, particularly in terms of feeling and displaying emotions, adjusting, role finding and participation. The interplay among emotions, meaning and identity was complex, and four different 'paradigms' of lived experience were apparent: feeling insecure, complying, developing, and participating. We found large differences in the way students related to other people and gave meaning to their first experiences as doctors-to-be. They differed in their ability to engage in ward practices, the way they experienced their roles as medical students and future doctors, and how they experienced and expressed their emotions. Medical educators should help students to be sensitive to their emotions, offer space to explore different meanings, and be ready to suggest alternative interpretations that foster the development of desired professional identities. © Blackwell Publishing Ltd 2012.
NASA Astrophysics Data System (ADS)
Portnoy, David; Fisher, Brian; Phifer, Daniel
2015-06-01
The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: limited access to data, and the lack of common performance metrics and baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal spectral data at 1 s time intervals, which represents data collected by a mobile system operating in a dynamic radiation background environment; and one that represents static measurements with a foreground spectrum (background plus source) and a background spectrum. These data include controlled variations in both Source Related Factors (nuclide, nuclide combinations, activities, distances, collection times, shielding configurations, and background spectra) and Detector Related Factors (currently only gain shifts, but resolution changes and non-linear energy calibration errors will be added soon). The software tools will allow the developer to evaluate the performance impact of each of these factors. Although this first implementation is somewhat limited in scope, considering only NaI-based detection systems and two application domains, it is hoped that (with community feedback) a wider range of detector types and applications will be included in the future. This article describes the methods used for dataset creation, the software validation/performance measurement tools, the performance metrics used, and examples of baseline performance.
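The kind of shared baseline and metric the datasets are meant to enable can be illustrated with a toy evaluation: synthetic Poisson spectra, a naive gross-count detection statistic, and a receiver operating characteristic (ROC) curve. The numbers and region-of-interest choice below are arbitrary assumptions, not the project's data, metrics, or tools.

```python
# Toy performance baseline: synthetic Poisson spectra, a gross-count statistic,
# and a ROC curve summarizing detection versus false-alarm probability.
import numpy as np

rng = np.random.default_rng(1)
channels, n_trials = 1024, 2000
background_rate = np.full(channels, 5.0)          # mean counts per channel
source = np.zeros(channels)
source[300:310] = 3.0                             # weak photopeak-like feature

bkg_only = rng.poisson(background_rate, size=(n_trials, channels))
with_src = rng.poisson(background_rate + source, size=(n_trials, channels))

def score(spectra):
    # Naive detection statistic: total counts in a fixed region of interest.
    return spectra[:, 295:315].sum(axis=1)

s0, s1 = score(bkg_only), score(with_src)
thresholds = np.linspace(min(s0.min(), s1.min()), max(s0.max(), s1.max()), 200)
pfa = [(s0 >= t).mean() for t in thresholds]      # probability of false alarm
pd = [(s1 >= t).mean() for t in thresholds]       # probability of detection
auc = abs(np.trapz(pd, pfa))                      # area under the ROC curve
print(f"ROC AUC for the gross-count baseline: {auc:.3f}")
```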
Database Search Engines: Paradigms, Challenges and Solutions.
Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc
2016-01-01
The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
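The core idea behind such engines can be sketched in a few lines: generate theoretical b/y fragment masses for candidate peptides and count how many fall within a tolerance of the observed peaks. Real engines, including those bundled with SearchGUI, use far richer scoring; the residue masses below are standard monoisotopic values and the peptides are invented.

```python
# Toy peptide-spectrum matching: theoretical b/y fragment masses versus peaks.
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}
PROTON, WATER = 1.00728, 18.01056

def fragment_masses(peptide):
    masses = [RESIDUE[aa] for aa in peptide]
    b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]          # b ions
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]  # y ions
    return b + y

def count_matches(peptide, peaks, tol=0.02):
    return sum(any(abs(m - p) <= tol for p in peaks) for m in fragment_masses(peptide))

# Pretend the spectrum came from "PASSKVLR"; score two candidates from a "database".
observed = fragment_masses("PASSKVLR")
for candidate in ["PASSKVLR", "GAVLKSPR"]:
    print(candidate, count_matches(candidate, observed), "matched fragments")
```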
1993-03-01
possible over an RF link when surfaced and over acoustic telemetry when submerged. Lockheed Missiles and Space Company has been awarded the contract to... ACL), is purely hierarchical and consists of three major components: the Data Manager, the ACL Controller, and the Model-Based Reasoner (MBR). The... Data Manager receives, processes, and analyzes sensor and status data for use by the MBR and ACL Controller. The ACL Controller communicates commands
Real-time Electrophysiology: Using Closed-loop Protocols to Probe Neuronal Dynamics and Beyond
Linaro, Daniele; Couto, João; Giugliano, Michele
2015-01-01
Experimental neuroscience is witnessing an increased interest in the development and application of novel, and often complex, closed-loop protocols, where the stimulus applied depends in real-time on the response of the system. Recent applications range from the implementation of virtual reality systems for studying motor responses both in mice1 and in zebrafish2, to the control of seizures following cortical stroke using optogenetics3. A key advantage of closed-loop techniques resides in the capability of probing higher dimensional properties that are not directly accessible or that depend on multiple variables, such as neuronal excitability4 and reliability, while at the same time maximizing the experimental throughput. In this contribution and in the context of cellular electrophysiology, we describe how to apply a variety of closed-loop protocols to the study of the response properties of pyramidal cortical neurons, recorded intracellularly with the patch clamp technique in acute brain slices from the somatosensory cortex of juvenile rats. As no commercially available or open source software provides all the features required for efficiently performing the experiments described here, a new software toolbox called LCG5 was developed, whose modular structure maximizes reuse of computer code and facilitates the implementation of novel experimental paradigms. Stimulation waveforms are specified using a compact meta-description and full experimental protocols are described in text-based configuration files. Additionally, LCG has a command-line interface that is suited for repetition of trials and automation of experimental protocols. PMID:26132434
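The defining property of these protocols, that the stimulus delivered at each step depends on the response just measured, can be reduced to a minimal feedback loop. In the sketch below a passive membrane model stands in for the patched neuron and an integral controller adjusts the injected current to hold a target potential; this conveys only the closed-loop idea and is not the LCG toolbox.

```python
# Minimal closed-loop sketch: at every step the stimulus (injected current) is
# updated from the just-measured response (membrane potential).
dt, tau, R, v_rest = 1e-4, 0.02, 100e6, -70e-3   # s, s, ohm, volt
v_target, k_int = -55e-3, 1e-7                   # target potential, integral gain (A/(V*s))

v, current = v_rest, 0.0
trace = []
for step in range(int(1.0 / dt)):                # 1 s of simulated recording
    # "Acquire" the response, then update the stimulus within the same cycle.
    current += k_int * (v_target - v) * dt       # integral feedback on the error
    v += dt / tau * (v_rest - v + R * current)   # passive membrane update
    trace.append(v)

print(f"final potential {trace[-1]*1e3:.1f} mV, "
      f"holding current {current*1e12:.0f} pA (expected about 150 pA)")
```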
Gunnar, Megan R; Talge, Nicole M; Herrera, Adriana
2009-08-01
The stress response system is comprised of an intricate interconnected network that includes the hypothalamic-pituitary-adrenocortical (HPA) axis. The HPA axis maintains the organism's capacity to respond to acute and prolonged stressors and is a focus of research on the sequelae of stress. Human studies of the HPA system have been facilitated enormously by the development of salivary assays which measure cortisol, the steroid end-product of the HPA axis. The use of salivary cortisol is prevalent in child development stress research. However, in order to measure children's acute cortisol reactivity to circumscribed stressors, researchers must put children in stressful situations which produce elevated levels of cortisol. Unfortunately, many studies on the cortisol stress response in children use paradigms that fail to produce mean elevations in cortisol. This paper reviews stressor paradigms used with infants, children, and adolescents to guide researchers in selecting effective stressor tasks. A number of different types of stressor paradigms were examined, including: public speaking, negative emotion, relationship disruption/threatening, novelty, handling, and mild pain paradigms. With development, marked changes are evident in the effectiveness of the same stressor paradigm to provoke elevations in cortisol. Several factors appear to be critical in determining whether a stressor paradigm is successful, including the availability of coping resources and the extent to which, in older children, the task threatens the social self. A consideration of these issues is needed to promote the implementation of more effective stressor paradigms in human developmental psychoendocrine research.
NASA Astrophysics Data System (ADS)
Criado, Javier; Padilla, Nicolás; Iribarne, Luis; Asensio, Jose-Andrés
Due to the globalization of the information and knowledge society on the Internet, modern Web-based Information Systems (WIS) must be flexible and prepared to be easily accessible and manageable in real-time. In recent times, special interest has been paid to the globalization of information through a common vocabulary (i.e., ontologies), and to the standardized way in which information is retrieved on the Web (i.e., powerful search engines and intelligent software agents). These same principles of globalization and standardization should also be valid for the user interfaces of the WIS, but they are still built on traditional development paradigms. In this paper we present an approach to reduce this globalization/standardization gap in the generation of WIS user interfaces by using a real-time "bottom-up" composition perspective with COTS-interface components (of the interface widget type) and trading services.
Application Modernization at LLNL and the Sierra Center of Excellence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neely, J. Robert; de Supinski, Bronis R.
We report that in 2014, Lawrence Livermore National Laboratory began acquisition of Sierra, a pre-exascale system from IBM and Nvidia. It marks a significant shift in direction for LLNL by introducing the concept of heterogeneous computing via GPUs. LLNL’s mission requires application teams to prepare for this paradigm shift. Thus, the Sierra procurement required a proposed Center of Excellence that would align the expertise of the chosen vendors with laboratory personnel that represent the application developers, system software, and tool providers in a concentrated effort to prepare the laboratory’s codes in advance of the system transitioning to production in 2018. Finally, this article presents LLNL’s overall application strategy, with a focus on how LLNL is collaborating with IBM and Nvidia to ensure a successful transition of its mission-oriented applications into the exascale era.
A framework for building real-time expert systems
NASA Technical Reports Server (NTRS)
Lee, S. Daniel
1991-01-01
The Space Station Freedom is an example of a complex system that requires both traditional and artificial intelligence (AI) real-time methodologies. It was mandated that Ada should be used for all new software development projects. The station also requires distributed processing. Catastrophic failures on the station can cause the transmission system to malfunction for a long period of time, during which ground-based expert systems cannot provide any assistance to the crisis situation on the station. This is even more critical for other NASA projects that would have longer transmission delays (e.g., the lunar base, Mars missions, etc.). To address these issues, a distributed agent architecture (DAA) is proposed that can support a variety of paradigms based on both traditional real-time computing and AI. The proposed testbed for DAA is an autonomous power expert (APEX), which is a real-time monitoring and diagnosis expert system for the electrical power distribution system of the space station.
C-Language Integrated Production System, Version 6.0
NASA Technical Reports Server (NTRS)
Riley, Gary; Donnell, Brian; Ly, Huyen-Anh Bebe; Ortiz, Chris
1995-01-01
C Language Integrated Production System (CLIPS) computer programs are specifically intended to model human expertise or other knowledge. CLIPS is designed to enable research on, and development and delivery of, artificial intelligence on conventional computers. CLIPS 6.0 provides a cohesive software tool for handling a wide variety of knowledge, with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming: representation of knowledge as heuristics - essentially, rules of thumb that specify a set of actions to be performed in a given situation. Object-oriented programming: modeling of complex systems comprised of modular components that are easily reused to model other systems or to create new components. Procedural programming: representation of knowledge in ways similar to those of such languages as C, Pascal, Ada, and LISP. The version of CLIPS 6.0 for IBM PC-compatible computers requires DOS v3.3 or later and/or Windows 3.1 or later.
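CLIPS rules are written in CLIPS's own syntax and matched with a Rete network; as a language-neutral illustration of the rule-based paradigm itself, the toy forward-chainer below fires rules whose conditions match the fact base until no more rules apply. It mirrors only the idea of heuristics acting on facts, not CLIPS.

```python
# Toy forward-chaining rule engine: rules fire when their conditions match the
# fact base, asserting new facts, until quiescence.
def run_rules(facts, rules):
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for condition, action in rules:
            if condition <= facts and not action <= facts:
                facts |= action                      # assert the rule's conclusions
                fired = True
    return facts

rules = [
    ({"temperature high", "pressure rising"}, {"open relief valve"}),
    ({"open relief valve"}, {"log maintenance event"}),
]
initial = {"temperature high", "pressure rising"}
print(sorted(run_rules(initial, rules)))
```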
A Calculus for Boxes and Traits in a Java-Like Setting
NASA Astrophysics Data System (ADS)
Bettini, Lorenzo; Damiani, Ferruccio; de Luca, Marco; Geilmann, Kathrin; Schäfer, Jan
The box model is a component model for the object-oriented paradigm that defines components (the boxes) with clear encapsulation boundaries. Having well-defined boundaries is crucial in component-based software development, because it makes it possible to reason about the interference and interaction between a component and its context. In general, boxes contain several objects and inner boxes, of which some are local to the box and cannot be accessed from other boxes and some can be accessed by other boxes. A trait is a set of methods divorced from any class hierarchy. Traits can be composed together to form classes or other traits. We present a calculus for boxes and traits. Traits are units of fine-grained reuse, whereas boxes can be seen as units of coarse-grained reuse. The calculus is equipped with an ownership type system and allows us to combine coarse- and fine-grained reuse of code while maintaining the encapsulation of components.
Faith Development in Older Adults.
ERIC Educational Resources Information Center
Shulik, Richard N.
1988-01-01
Introduces the faith development paradigm of James Fowler, describing six stages of faith development: intuitive-projective faith, mythic-literal faith, synthetic-conventional faith, individuating-reflective faith, conjunctive faith, and universalizing faith. Reviews one research project in which Fowler's paradigm was applied to older adults.…
The Automated Instrumentation and Monitoring System (AIMS) reference manual
NASA Technical Reports Server (NTRS)
Yan, Jerry; Hontalas, Philip; Listgarten, Sherry
1993-01-01
Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor, which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit, which reconstructs program execution from the trace file; and a trace post-processor, which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g. Sun Sparc and SGI) supporting X-Windows (in particular, X11R5, Motif 1.1.3).
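The "active event recorder" idea can be mimicked in a few lines: wrap routines so that entry and exit events with timestamps are appended to a trace file that a separate tool can later reconstruct. AIMS instruments Fortran and C sources before compilation; the Python decorator below only mirrors the concept.

```python
# Toy event recorder: entry/exit events with timestamps appended to a trace
# file that a separate tool could later reconstruct and visualize.
import functools
import time

TRACE_FILE = "trace.log"

def instrument(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with open(TRACE_FILE, "a") as trace:
            trace.write(f"{time.perf_counter():.6f} ENTER {func.__name__}\n")
        try:
            return func(*args, **kwargs)
        finally:
            with open(TRACE_FILE, "a") as trace:
                trace.write(f"{time.perf_counter():.6f} EXIT  {func.__name__}\n")
    return wrapper

@instrument
def exchange_boundaries(step):
    time.sleep(0.001)          # stand-in for communication work

@instrument
def compute_interior(step):
    time.sleep(0.002)          # stand-in for computation

for step in range(3):          # the trace file now holds a replayable event log
    exchange_boundaries(step)
    compute_interior(step)
```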
Introducing high performance distributed logging service for ACS
NASA Astrophysics Data System (ADS)
Avarias, Jorge A.; López, Joao S.; Maureira, Cristián; Sommer, Heiko; Chiozzi, Gianluca
2010-07-01
The ALMA Common Software (ACS) is a software framework that provides the infrastructure for the Atacama Large Millimeter Array and other projects. ACS, based on CORBA, offers basic services and common design patterns for distributed software. Every properly built system needs to be able to log status and error information. Logging in a single computer scenario can be as easy as using fprintf statements. However, in a distributed system, it must provide a way to centralize all logging data in a single place without overloading the network or complicating the applications. ACS provides a complete logging service infrastructure in which every log has an associated priority and timestamp, allowing filtering at different levels of the system (application, service and clients). Currently the ACS logging service uses an implementation of the CORBA Telecom Log Service in a customized way, using only a minimal subset of the features provided by the standard. The most relevant feature used by ACS is the ability to treat the logs as event data that gets distributed over the network in a publisher-subscriber paradigm. For this purpose the CORBA Notification Service, which is resource intensive, is used. On the other hand, the Data Distribution Service (DDS) provides an alternative standard for publisher-subscriber communication for real-time systems, offering better performance and featuring decentralized message processing. This document describes how the new high performance logging service of ACS has been modeled and developed using DDS, replacing the Telecom Log Service. Benefits and drawbacks are analyzed. A benchmark is presented comparing the differences between the implementations.
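The core idea being replaced and reimplemented is publisher-subscriber distribution of log records. The sketch below is an in-process, hypothetical stand-in for a DDS or Notification Service channel, not ACS code; it only illustrates priority-tagged, timestamped records fanned out to subscribers that filter what they keep.

```python
# Sketch of the publisher-subscriber idea behind centralized logging: producers
# publish records to a channel and any number of subscribers receive them.
import datetime

class LogChannel:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, priority, message):
        record = {
            "timestamp": datetime.datetime.utcnow().isoformat(),
            "priority": priority,
            "message": message,
        }
        for deliver in self._subscribers:
            deliver(record)

channel = LogChannel()
# A subscriber that filters by priority, as logging clients might.
channel.subscribe(lambda r: r["priority"] <= 3 and print("CENTRAL:", r))
channel.publish(priority=2, message="antenna 12: drive temperature nominal")
channel.publish(priority=7, message="debug trace, filtered out by subscriber")
```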
The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)
1997-01-01
Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects capture, and therefore exploit, behavioral variations among and within parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library which collects performance data; and a visualization tool-set which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.
The Failure of Progressive Paradigm Reversal
ERIC Educational Resources Information Center
Guthrie, Gerard
2017-01-01
The student-centred, progressive paradigm has not had sustained success in changing teacher-centred, formalistic practices in "developing" country classrooms. Does "Gestalt-switch" and paradigm reversal demonstrate that progressive theory has realigned with formalistic reality, or has it remained axiomatic in the research and…
Crossing the chasm: how to develop weather and climate models for next generation computers?
NASA Astrophysics Data System (ADS)
Lawrence, Bryan N.; Rezny, Michael; Budich, Reinhard; Bauer, Peter; Behrens, Jörg; Carter, Mick; Deconinck, Willem; Ford, Rupert; Maynard, Christopher; Mullerworth, Steven; Osuna, Carlos; Porter, Andrew; Serradell, Kim; Valcke, Sophie; Wedi, Nils; Wilson, Simon
2018-05-01
Weather and climate models are complex pieces of software which include many individual components, each of which is evolving under pressure to exploit advances in computing to enhance some combination of a range of possible improvements (higher spatio-temporal resolution, increased fidelity in terms of resolved processes, more quantification of uncertainty, etc.). However, after many years of a relatively stable computing environment with little choice in processing architecture or programming paradigm (basically X86 processors using MPI for parallelism), the existing menu of processor choices includes significant diversity, and more is on the horizon. This computational diversity, coupled with ever increasing software complexity, leads to the very real possibility that weather and climate modelling will arrive at a chasm which will separate scientific aspiration from our ability to develop and/or rapidly adapt codes to the available hardware. In this paper we review the hardware and software trends which are leading us towards this chasm, before describing current progress in addressing some of the tools which we may be able to use to bridge the chasm. This brief introduction to current tools and plans is followed by a discussion outlining the scientific requirements for quality model codes which have satisfactory performance and portability, while simultaneously supporting productive scientific evolution. We assert that the existing method of incremental model improvements employing small steps which adjust to the changing hardware environment is likely to be inadequate for crossing the chasm between aspiration and hardware at a satisfactory pace, in part because institutions cannot have all the relevant expertise in house. Instead, we outline a methodology based on large community efforts in engineering and standardisation, which will depend on identifying a taxonomy of key activities - perhaps based on existing efforts to develop domain-specific languages, identify common patterns in weather and climate codes, and develop community approaches to commonly needed tools and libraries - and then collaboratively building up those key components. Such a collaborative approach will depend on institutions, projects, and individuals adopting new interdependencies and ways of working.
Zhu, A-Xing; Chen, La-Jiao; Qin, Cheng-Zhi; Wang, Ping; Liu, Jun-Zhi; Li, Run-Kui; Cai, Qiang-Guo
2012-07-01
With the increase in severe soil erosion problems, soil and water conservation has become an urgent concern for sustainable development. Small-watershed experimental observation is the traditional paradigm for soil and water control. However, the establishment of an experimental watershed usually takes a long time and has the limitations of poor repeatability and high cost. Moreover, the transferability of results from an experimental watershed to other areas is limited by differences in watershed conditions. Therefore, it is not sufficient to rely entirely on this old paradigm for soil and water loss control. Recently, scenario analysis based on watershed modeling has been introduced into watershed management; it can provide information about the effectiveness of different management practices based on the quantitative simulation of watershed processes. Because of its merits such as low cost, short duration, and high repeatability, scenario analysis shows great potential in aiding the development of watershed management strategies. This paper elaborates a new paradigm using watershed modeling and scenario analysis for soil and water conservation, illustrates this new paradigm through two cases of practical watershed management, and explores the future development of this new soil and water conservation paradigm.
C++, object-oriented programming, and astronomical data models
NASA Technical Reports Server (NTRS)
Farris, A.
1992-01-01
Contemporary astronomy is characterized by increasingly complex instruments and observational techniques, higher data collection rates, and large data archives, placing severe stress on software analysis systems. The object-oriented paradigm represents a significant new approach to software design and implementation that holds great promise for dealing with this increased complexity. The basic concepts of this approach will be characterized in contrast to more traditional procedure-oriented approaches. The fundamental features of object-oriented programming will be discussed from a C++ programming language perspective, using examples familiar to astronomers. This discussion will focus on objects, classes and their relevance to the data type system; the principle of information hiding; and the use of inheritance to implement generalization/specialization relationships. Drawing on the object-oriented approach, features of a new database model to support astronomical data analysis will be presented.
Achieving behavioral control with millisecond resolution in a high-level programming environment.
Asaad, Wael F; Eskandar, Emad N
2008-08-30
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
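A practical step in this line of work is quantifying the timing error a high-level environment actually delivers. The sketch below is not the authors' code; it simply measures the overshoot of requested delays and uses a coarse sleep followed by a short busy-wait to tighten precision. The thresholds are assumptions chosen for illustration.

```python
# Sketch of measuring timing error in a high-level environment.
import time

def wait_until(deadline, spin_margin=0.002):
    """Sleep coarsely, then busy-wait for the last couple of milliseconds."""
    while True:
        remaining = deadline - time.perf_counter()
        if remaining <= 0:
            return
        if remaining > spin_margin:
            time.sleep(remaining - spin_margin)
        # else: spin until the deadline passes

errors_ms = []
for _ in range(100):
    target = time.perf_counter() + 0.010   # request a 10 ms interval
    wait_until(target)
    errors_ms.append((time.perf_counter() - target) * 1000.0)

print(f"max overshoot: {max(errors_ms):.3f} ms, "
      f"mean: {sum(errors_ms) / len(errors_ms):.3f} ms")
```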
The development and modeling of devices and paradigms for transcranial magnetic stimulation
Goetz, Stefan M.; Deng, Zhi-De
2017-01-01
Magnetic stimulation is a noninvasive neurostimulation technique that can evoke action potentials and modulate neural circuits through induced electric fields. Biophysical models of magnetic stimulation have become a major driver for technological developments and the understanding of the mechanisms of magnetic neurostimulation and neuromodulation. Major technological developments involve stimulation coils with different spatial characteristics and pulse sources to control the pulse waveform. While early technological developments were the result of manual design and invention processes, there is a trend in both stimulation coil and pulse source design to mathematically optimize parameters with the help of computational models. To date, macroscopically highly realistic spatial models of the brain as well as peripheral targets, and user-friendly software packages enable researchers and practitioners to simulate the treatment-specific and induced electric field distribution in the brains of individual subjects and patients. Neuron models further introduce the microscopic level of neural activation to understand the influence of activation dynamics in response to different pulse shapes. A number of models that were designed for online calibration to extract otherwise covert information and biomarkers from the neural system recently form a third branch of modeling. PMID:28443696
ERIC Educational Resources Information Center
Hammack, Phillip L.
2005-01-01
Through the application of life course theory to the study of sexual orientation, this paper specifies a new paradigm for research on human sexual orientation that seeks to reconcile divisions among biological, social science, and humanistic paradigms. Recognizing the historical, social, and cultural relativity of human development, this paradigm…
Paradigms, Citations, and Maps of Science: A Personal History.
ERIC Educational Resources Information Center
Small, Henry
2003-01-01
Discusses mapping science and Kuhn's theories of paradigms and scientific development. Highlights include cocitation clustering; bibliometric definition of a paradigm; specialty dynamics; pathways through science; a new Web tool called Essential Science Indicators (ESI) for studying the structure of science; and microrevolutions. (Author/LRW)
Rodríguez-Domínguez, Carlos; Benghazi, Kawtar; Noguera, Manuel; Garrido, José Luis; Rodríguez, María Luisa; Ruiz-López, Tomás
2012-01-01
The Request-Response (RR) paradigm is widely used in ubiquitous systems to exchange information in a secure, reliable and timely manner. Nonetheless, there is also an emerging need for adopting the Publish-Subscribe (PubSub) paradigm in this kind of system, due to the advantages that this paradigm offers in supporting mobility by means of asynchronous, non-blocking and one-to-many message distribution semantics for event notification. This paper analyzes the strengths and weaknesses of both the RR and PubSub paradigms to support communications in ubiquitous systems and proposes an abstract communication model in order to enable their seamless integration. Thus, developers will be focused on communication semantics and the required quality properties, rather than being concerned about specific communication mechanisms. The aim is to provide developers with abstractions intended to decrease the complexity of integrating different communication paradigms commonly needed in ubiquitous systems. The proposal has been applied to implement a middleware and a real home automation system to show its applicability and benefits.
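The paper's abstract model and middleware are not reproduced here; as a rough sketch of the integration idea under invented names, one object can expose both request/response and publish/subscribe semantics so that application code states what it needs rather than how the message travels.

```python
# Sketch only (not the paper's middleware): a unified communicator exposing both
# request/response (blocking, one-to-one) and publish/subscribe (asynchronous,
# one-to-many) semantics behind one abstraction.
import queue

class Communicator:
    def __init__(self):
        self._handlers = {}      # request/response endpoints
        self._topics = {}        # pub/sub topics

    # Request-Response
    def register(self, name, handler):
        self._handlers[name] = handler

    def request(self, name, payload):
        return self._handlers[name](payload)

    # Publish-Subscribe
    def subscribe(self, topic):
        q = queue.Queue()
        self._topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, event):
        for q in self._topics.get(topic, []):
            q.put(event)

comm = Communicator()
comm.register("light/status", lambda _: {"kitchen": "on"})
inbox = comm.subscribe("presence")
comm.publish("presence", {"room": "kitchen", "occupied": True})
print(comm.request("light/status", {}), inbox.get_nowait())
```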
Integration, Telecommunication, and Development: Power in the Paradigms.
ERIC Educational Resources Information Center
Samarajiva, Rohan; Shields, Peter
1990-01-01
Notes that much research in telecommunication and development continues to treat technology as neutral and neglects its impact on the powerless. Argues that the view of development as desirable often goes uncontested. Calls for a paradigm which places power in the center of the analysis. (SG)
Novel Cognitive Paradigms for the Detection of Memory Impairment in Preclinical Alzheimer’s Disease
Loewenstein, David A.; Curiel, Rosie E.; Duara, Ranjan; Buschke, Herman
2017-01-01
In spite of advances in neuroimaging and other brain biomarkers to assess preclinical Alzheimer’s disease (AD), cognitive assessment has relied on traditional memory paradigms developed well over six decades ago. This has led to a growing concern about their effectiveness in the early diagnosis of AD which is essential to develop preventive and early targeted interventions before the occurrence of multisystem brain degeneration. We describe the development of novel tests that are more cognitively challenging, minimize variability in learning strategies, enhance initial acquisition and retrieval using cues, and exploit vulnerabilities in persons with incipient AD such as the susceptibility to proactive semantic interference, and failure to recover from proactive semantic interference. The advantages of various novel memory assessment paradigms are examined as well as how they compare with traditional neuropsychological assessments of memory. Finally, future directions for the development of more effective assessment paradigms are suggested. PMID:29214859
Space Flight Software Development Software for Intelligent System Health Management
NASA Technical Reports Server (NTRS)
Trevino, Luis C.; Crumbley, Tim
2004-01-01
The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.
A novel N-input voting algorithm for X-by-wire fault-tolerant systems.
Karimi, Abbas; Zarafshan, Faraneh; Al-Haddad, S A R; Ramli, Abdul Rahman
2014-01-01
Voting is an important operation in the multichannel computation paradigm and in the realization of ultrareliable real-time control systems; it arbitrates among the results of N redundant variants. These systems include N-modular redundant (NMR) hardware systems and diversely designed software systems based on N-version programming (NVP). Depending on the characteristics of the application and the type of selected voter, the voting algorithms can be implemented for either hardware or software systems. In this paper, a novel voting algorithm is introduced for real-time fault-tolerant control systems, appropriate for applications in which N is large. Its behavior was then implemented in software and evaluated in different scenarios of error injection on the system inputs. The results, analyzed through plots and statistical computations, demonstrate that this novel algorithm does not have the limitations of some popular voting algorithms such as median and weighted; moreover, it is able to significantly increase the reliability and availability of the system in the best case to 2489.7% and 626.74%, respectively, and in the worst case to 3.84% and 1.55%, respectively.
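The paper's novel algorithm is not reproduced here; for context, the sketch below shows a baseline inexact median voter of the kind the authors compare against, arbitrating among N redundant channel outputs and producing no output rather than a wrong one when agreement is insufficient. The threshold and data are invented.

```python
# Baseline illustration only: a simple inexact median voter over N redundant
# module outputs (not the paper's algorithm).
import statistics

def median_voter(outputs, agreement_threshold):
    """Return the median if a majority agrees with it, else signal no result."""
    med = statistics.median(outputs)
    agreeing = [x for x in outputs if abs(x - med) <= agreement_threshold]
    if len(agreeing) >= (len(outputs) // 2) + 1:   # simple majority around median
        return med
    return None  # benign no-output rather than a potentially wrong value

# Five redundant channels, one of them faulty.
readings = [10.02, 9.98, 10.01, 37.5, 10.00]
print(median_voter(readings, agreement_threshold=0.1))   # -> 10.01
```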
Distributed controller clustering in software defined networks.
Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a realistic SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism significantly reduces the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to distributed controllers without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a step towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.
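The clustering mechanism itself is not detailed in the abstract; as a hedged sketch of the general idea under invented names, switches can be mapped across a controller cluster and reassigned to the least-loaded survivor when a controller fails, so operation continues.

```python
# Illustrative sketch (not the paper's mechanism): round-robin switch assignment
# across an SDN controller cluster, with failover to the least-loaded survivor.
from collections import defaultdict

def assign_switches(switches, controllers):
    load = defaultdict(list)
    for i, sw in enumerate(switches):
        load[controllers[i % len(controllers)]].append(sw)   # round-robin
    return load

def handle_failure(assignment, failed):
    orphans = assignment.pop(failed, [])
    for sw in orphans:
        target = min(assignment, key=lambda c: len(assignment[c]))
        assignment[target].append(sw)
    return assignment

cluster = ["ctrl-1", "ctrl-2", "ctrl-3"]                     # hypothetical names
mapping = assign_switches([f"s{i}" for i in range(1, 10)], cluster)
print(dict(handle_failure(mapping, "ctrl-2")))
```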
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keyes, D.; McInnes, L. C.; Woodward, C.
This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.
Transforming Psychological Practice and Society: Policies that Reflect the New Paradigm.
ERIC Educational Resources Information Center
Gill, Carol J.; Kewman, Donald G.; Brannon, Ruth W.
2003-01-01
Understanding disability through a social paradigm offers opportunities to reframe the way psychologists define problems related to disability, develop more collaborative relationships between psychologists and people with disabilities, and adopt new professional responsibilities. Addresses the impact of the social paradigm on policies within…
Towards a New Paradigm of Moral Personhood
ERIC Educational Resources Information Center
Frimer, Jeremy A.; Walker, Lawrence J.
2008-01-01
Moral psychology is between paradigms. Kohlberg's model of moral rationality has proved inadequate in explaining action; yet its augmentation--moral personality--awaits empirical embodiment. This article addresses some critical issues in developing a comprehensive empirical paradigm of moral personhood. Is a first-person or a third-person…
Gunnar, Megan R.; Talge, Nicole M.; Herrera, Adriana
2009-01-01
Summary The stress response system is comprised of an intricate interconnected network that includes the hypothalamic–pituitary–adrenocortical (HPA) axis. The HPA axis maintains the organism’s capacity to respond to acute and prolonged stressors and is a focus of research on the sequelae of stress. Human studies of the HPA system have been facilitated enormously by the development of salivary assays which measure cortisol, the steroid end-product of the HPA axis. The use of salivary cortisol is prevalent in child development stress research. However, in order to measure children’s acute cortisol reactivity to circumscribed stressors, researchers must put children in stressful situations which produce elevated levels of cortisol. Unfortunately, many studies on the cortisol stress response in children use paradigms that fail to produce mean elevations in cortisol. This paper reviews stressor paradigms used with infants, children, and adolescents to guide researchers in selecting effective stressor tasks. A number of different types of stressor paradigms were examined, including: public speaking, negative emotion, relationship disruption/threatening, novelty, handling, and mild pain paradigms. With development, marked changes are evident in the effectiveness of the same stressor paradigm to provoke elevations in cortisol. Several factors appear to be critical in determining whether a stressor paradigm is successful, including the availability of coping resources and the extent to which, in older children, the task threatens the social self. A consideration of these issues is needed to promote the implementation of more effective stressor paradigms in human developmental psychoendocrine research. PMID:19321267
Information processing as a paradigm for decision making.
Oppenheimer, Daniel M; Kelso, Evan
2015-01-03
For decades, the dominant paradigm for studying decision making--the expected utility framework--has been burdened by an increasing number of empirical findings that question its validity as a model of human cognition and behavior. However, as Kuhn (1962) argued in his seminal discussion of paradigm shifts, an old paradigm cannot be abandoned until a new paradigm emerges to replace it. In this article, we argue that the recent shift in researcher attention toward basic cognitive processes that give rise to decision phenomena constitutes the beginning of that replacement paradigm. Models grounded in basic perceptual, attentional, memory, and aggregation processes have begun to proliferate. The development of this new approach closely aligns with Kuhn's notion of paradigm shift, suggesting that this is a particularly generative and revolutionary time to be studying decision science.
Co-"Lab"oration: A New Paradigm for Building a Management Information Systems Course
ERIC Educational Resources Information Center
Breimer, Eric; Cotler, Jami; Yoder, Robert
2010-01-01
We propose a new paradigm for building a Management Information Systems course that focuses on laboratory activities developed collaboratively using Computer-Mediated Communication and Collaboration tools. A highlight of our paradigm is the "practice what you preach" concept where the computer communication tools and collaboration…
Toward a Quality-of-Life Paradigm for Sustainable Communities.
ERIC Educational Resources Information Center
Hyman, Drew
This paper suggests that current paradigms and world views guiding research for social action are inadequate for directing rural community change in a high-tech, global community. For several generations, the agrarian and industrial paradigms have been accepted as appropriate for guiding social change and development. However, there are problems…
The Use of Narrative Paradigm Theory in Assessing Audience Value Conflict in Image Advertising.
ERIC Educational Resources Information Center
Stutts, Nancy B.; Barker, Randolph T.
1999-01-01
Presents an analysis of image advertisement developed from Narrative Paradigm Theory. Suggests that the nature of postmodern culture makes image advertising an appropriate external communication strategy for generating stake holder loyalty. Suggests that Narrative Paradigm Theory can identify potential sources of audience conflict by illuminating…
Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis
2017-02-01
Working Paper: Illustrating a Model-Game-Model Paradigm for Using Human Wargames in Analysis. Paul K. Davis, RAND National Security Research... The paper proposes and illustrates an analysis-centric paradigm (model-game-model, or what might be better called model-exercise-model in some cases) for... to involve stakeholders in model development from the outset. The model-game-model paradigm was illustrated in an application to crisis planning
Software Support during a Control Room Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michele Joyce; Michael Spata; Thomas Oren
2005-09-21
In 2004, after 14 years of accelerator operations and commissioning, Jefferson Lab renovated its main control room. Changes in technology and lessons learned during those 14 years drove the control room redesign in a new direction, one that optimizes workflow and makes critical information and controls available to everyone in the control room. Fundamental changes in a variety of software applications were required to facilitate the new operating paradigm. A critical component of the new control room design is a large-format video wall that is used to make a variety of operating information available to everyone in the room. Analog devices such as oscilloscopes and function generators are now displayed on the video wall through two crosspoint switchers: one for analog signals and another for video signals. A new software GUI replaces manual configuration of the oscilloscopes and function generators and helps automate setup. Monitoring screens, customized for the video wall, now make important operating information visible to everyone, not just a single operator. New alarm handler software gives any operator, on any workstation, access to all alarm handler functionality, and multiple users can now contribute to a single electronic logbook entry. To further support the shift to distributed access and control, many applications have been redesigned to run on servers instead of on individual workstations.
ERIC Educational Resources Information Center
Khurshid, Ayesha
2016-01-01
The contemporary paradigm of international development invests in individuals and communities as the main agents of development. In this paradigm, education is presented as the central avenue for individuals and communities to generate resources and networks to empower themselves. Some development and feminist scholars have critiqued this intense…
Developing sustainability: a new metaphor for progress.
Bensimon, Cécile M; Benatar, Solomon R
2006-01-01
In this paper, we propose a new model for development, one that transcends the North-South dichotomy and goes beyond a narrow conception of development as an economic process. This model requires a paradigm shift toward a new metaphor that develops sustainability, rather than sustains development. We conclude by defending a 'report card on development' as a means for evaluating how countries perform within this new paradigm.
Using ontologies for structuring organizational knowledge in Home Care assistance.
Valls, Aida; Gibert, Karina; Sánchez, David; Batet, Montserrat
2010-05-01
Information Technologies and Knowledge-based Systems can significantly improve the management of complex distributed health systems, where supporting multidisciplinarity is crucial and communication and synchronization between the different professionals and tasks become essential. This work proposes the use of the ontological paradigm to describe the organizational knowledge of such complex healthcare institutions as a basis to support their management. The ontology engineering process is detailed, as well as the way to keep the ontology updated in the face of changes. The paper also analyzes how such an ontology can be exploited in a real healthcare application and the role of the ontology in the customization of the system. The particular case of senior Home Care assistance is addressed, as this is a highly distributed field as well as a strategic goal in an ageing Europe. The proposed ontology design is based on a Home Care medical model defined by a European consortium of Home Care professionals, framed in the scope of the K4Care European project (FP6). Due to the complexity of the model and the knowledge gap between the textual medical model and the strict formalization of an ontology, an ontology engineering methodology (On-To-Knowledge) has been followed. After applying the On-To-Knowledge steps, the following results were obtained: the feasibility study concluded that the ontological paradigm and the expressiveness of modern ontology languages were enough to describe the required medical knowledge; after the kick-off and refinement stages, a complete and non-ambiguous definition of the Home Care model, including its main components and interrelations, was obtained; the formalization stage expressed HC medical entities in the form of ontological classes, which are interrelated by means of hierarchies, properties and semantically rich class restrictions; the evaluation, carried out by exploiting the ontology in a knowledge-driven e-health application running in a real scenario, showed that the ontology design and its exploitation brought several benefits with regard to flexibility, adaptability and work efficiency from the end-user point of view; for the maintenance stage, two software tools are presented, aimed at addressing the incorporation and modification of healthcare units and the personalization of ontological profiles. The paper shows that the ontological paradigm and the expressiveness of modern ontology languages can be exploited not only to represent terminology in a non-ambiguous way, but also to formalize the interrelations and organizational structures involved in a real and distributed healthcare environment. This kind of ontology facilitates adaptation to changes in the healthcare organization or Care Units, supports the creation of profile-based interaction models in a transparent and seamless way, and increases the reusability and generality of the developed software components. As a conclusion of the exploitation of the developed ontology in a real medical scenario, we can say that an ontology formalizing organizational interrelations is a key component for building effective distributed knowledge-driven e-health systems. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
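The K4Care ontology itself is expressed in an OWL-style language and is not reproduced here; as a toy rendering of the underlying idea (classes, is-a hierarchies, and typed relations between actors and care actions), the plain-Python sketch below uses invented, illustrative names.

```python
# Toy rendering of the ontological idea: concepts with an is-a hierarchy and
# named relations. Names are illustrative assumptions, not the K4Care model.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    parent: "Concept | None" = None
    relations: dict = field(default_factory=dict)

    def is_a(self, other):
        node = self
        while node is not None:
            if node is other:
                return True
            node = node.parent
        return False

actor = Concept("Actor")
physician = Concept("FamilyDoctor", parent=actor)
nurse = Concept("HomeCareNurse", parent=actor)
action = Concept("ComprehensiveAssessment")
physician.relations["performs"] = [action]
nurse.relations["assists"] = [action]

print(physician.is_a(actor), nurse.relations["assists"][0].name)
```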
Negative pressure darwinism: survival of the fittest paradigm.
Miller, Michael; Bybordi, Farhad
2009-07-01
The use of negative pressure for wound healing has been based on a set of parameters and devices that until recently were combined into a single paradigm. Despite historical and more recent evidence providing viable alternative considerations, it is only recently that this paradigm and its tenets have come into question. As the understanding of the limits of the current paradigm and specific instances of its benefits and drawbacks are identified, shifts in the paradigm must take place if the therapy is to evolve, develop, and continue to be efficacious. The pertinence of the concept of survival of the fittest is used to explore the need for a paradigm shift in negative pressure wound therapy.
Ancient Ethical Practices of Dualism and Ethical Implications for Future Paradigms in Nursing.
Milton, Constance L
2016-07-01
Paradigms contain theoretical structures to guide scientific disciplines. Since ancient times, Cartesian dualism has been a prominent philosophy incorporated in the practice of medicine. The discipline of nursing has continued the body-mind emphasis with similar paradigmatic thinking and theories of nursing that separate body and mind. Future trends for paradigm and nursing theory development are harkening to former ways of thinking. In this article the author discusses the origins of Cartesian dualism and implications for its current usage. The author shall illuminate what it potentially means to engage in dualism in nursing and discuss possible ethical implications for future paradigm and theory development in nursing. © The Author(s) 2016.
Terwilliger, Thomas C; Bricogne, Gerard
2014-10-01
Accurate crystal structures of macromolecules are of high importance in the biological and biomedical fields. Models of crystal structures in the Protein Data Bank (PDB) are in general of very high quality as deposited. However, methods for obtaining the best model of a macromolecular structure from a given set of experimental X-ray data continue to progress at a rapid pace, making it possible to improve most PDB entries after their deposition by re-analyzing the original deposited data with more recent software. This possibility represents a very significant departure from the situation that prevailed when the PDB was created, when it was envisioned as a cumulative repository of static contents. A radical paradigm shift for the PDB is therefore proposed, away from the static archive model towards a much more dynamic body of continuously improving results in symbiosis with continuously improving methods and software. These simultaneous improvements in methods and final results are made possible by the current deposition of processed crystallographic data (structure-factor amplitudes) and will be supported further by the deposition of raw data (diffraction images). It is argued that it is both desirable and feasible to carry out small-scale and large-scale efforts to make this paradigm shift a reality. Small-scale efforts would focus on optimizing structures that are of interest to specific investigators. Large-scale efforts would undertake a systematic re-optimization of all of the structures in the PDB, or alternatively the redetermination of groups of structures that are either related to or focused on specific questions. All of the resulting structures should be made generally available, along with the precursor entries, with various views of the structures being made available depending on the types of questions that users are interested in answering.
Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.
2018-01-01
Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370
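One routine step in such a multi-instrument paradigm is aligning data streams recorded at different rates. The sketch below is not the authors' pipeline; it shows, with invented sampling rates and jitter, how each gaze timestamp can be matched to the nearest motion-capture frame.

```python
# Sketch: align mobile eye-tracking samples (60 Hz, jittered) with motion-capture
# frames (120 Hz) by nearest timestamp. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(0)
mocap_t = np.arange(0.0, 5.0, 1 / 120)                           # 120 Hz mocap
gaze_t = np.arange(0.0, 5.0, 1 / 60) + rng.normal(0, 1e-3, 300)  # 60 Hz gaze

def align(gaze_times, mocap_times):
    idx = np.searchsorted(mocap_times, gaze_times)
    idx = np.clip(idx, 1, len(mocap_times) - 1)
    left, right = mocap_times[idx - 1], mocap_times[idx]
    return np.where(gaze_times - left < right - gaze_times, idx - 1, idx)

pairs = align(gaze_t, mocap_t)
print("first five gaze samples map to mocap frames:", pairs[:5])
```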
Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks
Mahjoub, Reem K.; Elleithy, Khaled
2017-01-01
The actor nodes are the spine of wireless sensor and actor networks (WSANs), collaborating to perform a specific task in an unverified and uneven environment. Thus, there is a possibility of a high failure rate in such unfriendly scenarios due to several factors, such as power consumption of devices, electronic circuit failure, software errors in nodes, physical impairment of the actor nodes, and inter-actor connectivity problems. Therefore, it is extremely important to discover the failure of a cut-vertex actor and the resulting network disjoint in order to improve the Quality-of-Service (QoS). In this paper, we propose an Efficient Actor Recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical node. In addition, it replaces the critical node with a backup node prior to complete node failure, which helps balance the network performance. Packets are handled using a Network Integration and Message Forwarding (NIMF) algorithm that determines the source of the forwarded packets, either actor or sensor. This decision-making capability of the algorithm controls the packet forwarding rate to maintain the network for a longer time. Furthermore, to handle the routing strategy, a Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm is deployed to decide the priority of the packets to be forwarded based on the significance of the information available in the packet. To validate the effectiveness of the proposed EAR paradigm, the proposed algorithms were tested using OMNeT++ simulation. PMID:28420102
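The NMCND algorithm itself is not given in the abstract; the sketch below only illustrates the cut-vertex idea it builds on: find actors whose failure would disconnect the actor graph and designate a backup neighbour for each. The topology and names are invented.

```python
# Illustrative sketch (not the paper's NMCND algorithm): detect cut-vertex
# actors via articulation points and pair each with a backup neighbour.
import networkx as nx

actors = nx.Graph()
actors.add_edges_from([("A1", "A2"), ("A2", "A3"), ("A3", "A4"),
                       ("A2", "A5"), ("A5", "A6")])

critical = set(nx.articulation_points(actors))       # cut-vertex actors
backups = {node: next(iter(actors.neighbors(node))) for node in critical}

print("critical actors:", critical)
print("designated backups:", backups)
```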
Reinventing User Applications for Mission Control
NASA Technical Reports Server (NTRS)
Trimble, Jay Phillip; Crocker, Alan R.
2010-01-01
In 2006, NASA Ames Research Center's (ARC) Intelligent Systems Division and NASA Johnson Space Center's (JSC) Mission Operations Directorate (MOD) began a collaboration to move user applications for JSC's mission control center to a new software architecture, intended to replace the existing user applications being used for the Space Shuttle and the International Space Station. It must also carry NASA/JSC mission operations forward to the future, meeting the needs of NASA's exploration programs beyond low Earth orbit. Key requirements for the new architecture, called Mission Control Technologies (MCT), are that end users must be able to compose and build their own software displays without the need for programming, or direct support and approval from a platform services organization. Developers must be able to build MCT components using industry standard languages and tools. Each component of MCT must be interoperable with other components, regardless of what organization develops them. For platform service providers and MOD management, MCT must be cost effective, maintainable and evolvable. MCT software is built from components that are presented to users as composable user objects. A user object is an entity that represents a domain object such as a telemetry point, a command, a timeline, an activity, or a step in a procedure. User objects may be composed and reused; for example, a telemetry point may be used in a traditional monitoring display, and that same telemetry user object may be composed into a procedure step. In either display, that same telemetry point may be shown in different views, such as a plot, an alphanumeric, or a metadata view, and those views may be changed live and in place. MCT presents users with a single unified user environment that contains all the objects required to perform applicable flight controller tasks; thus users do not have to use multiple applications, the traditional boundaries that exist between multiple heterogeneous applications disappear, and new operations concepts become possible that are not constrained by the traditional applications paradigm.
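The composable user-object idea can be sketched in a few lines. The code below is not the MCT API; the class and view names are invented to show how the same telemetry object might be composed into a procedure step and rendered through interchangeable views.

```python
# Sketch of composable user objects and swappable views (illustrative names).
class TelemetryPoint:
    def __init__(self, mnemonic, value):
        self.mnemonic, self.value = mnemonic, value

class AlphaNumericView:
    def render(self, obj):
        return f"{obj.mnemonic} = {obj.value}"

class MetaDataView:
    def render(self, obj):
        return f"{obj.mnemonic} (type: {type(obj).__name__})"

class ProcedureStep:
    """A user object that composes another user object."""
    def __init__(self, text, telemetry):
        self.text, self.telemetry = text, telemetry

    def render(self, view):
        return f"{self.text}: {view.render(self.telemetry)}"

cabin_temp = TelemetryPoint("CABIN_TEMP", 21.7)
step = ProcedureStep("Verify cabin temperature", cabin_temp)
print(step.render(AlphaNumericView()))
print(step.render(MetaDataView()))
```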
Rowe, Steven P; Siddiqui, Adeel; Bonekamp, David
2014-07-01
The aim was to create novel radiology key image software that is easy to use for novice users, incorporates elements adapted from social networking Web sites, facilitates resident and fellow education, and can serve as the engine for departmental sharing of interesting cases and follow-up studies. Using open-source programming languages and software, radiology key image software (the key image and case log application, KICLA) was developed. This system uses a lightweight interface with the institutional picture archiving and communications systems and enables the storage of key images, image series, and cine clips. It was designed to operate with minimal disruption to the radiologists' daily workflow. Many features of the user interface have been inspired by social networking Web sites, including image organization into private or public folders, flexible sharing with other users, and integration of departmental teaching files into the system. We also review the performance, usage, and acceptance of this novel system. KICLA was implemented at our institution and achieved widespread popularity among radiologists. A large number of key images have been transmitted to the system since it became available. After this early experience period, the most commonly encountered radiologic modalities are represented. A survey distributed to users revealed that most of the respondents found the system easy to use (89%) and fast at allowing them to record interesting cases (100%). One hundred percent of respondents also stated that they would recommend a system such as KICLA to their colleagues. The system described herein represents a significant upgrade to the Digital Imaging and Communications in Medicine teaching file paradigm, with efforts made to maximize its ease of use and to include characteristics inspired by social networking Web sites that give the system additional functionality, such as individual case logging. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Geographic Object-Based Image Analysis – Towards a new paradigm
Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk
2014-01-01
The amount of scientific literature on (Geographic) Object-based Image Analysis – GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this constitutes a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies, and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the ‘per-pixel paradigm’ and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm. PMID:24623958
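The per-pixel versus object-based contrast can be shown numerically. The sketch below is not a GEOBIA workflow (which involves multi-scale segmentation, ontologies, and rich feature extraction); it only illustrates the basic shift from classifying pixels to grouping connected pixels into image objects and computing per-object statistics, on synthetic data.

```python
# Minimal illustration of per-pixel classification vs. object-based analysis.
import numpy as np
from scipy import ndimage

image = np.zeros((10, 10))
image[1:4, 1:4] = 0.9      # bright patch 1
image[6:9, 5:9] = 0.7      # bright patch 2

mask = image > 0.5                      # per-pixel classification
objects, n = ndimage.label(mask)        # group pixels into image objects
for obj_id in range(1, n + 1):
    pixels = image[objects == obj_id]
    print(f"object {obj_id}: {pixels.size} px, mean value {pixels.mean():.2f}")
```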
Bazrafkan, Leila; Hemmati, Mehdi
2018-04-01
One of the important tasks of nurses in the intensive care unit is the interpretation of ECGs. The use of a training simulator is a new paradigm in the age of computers. This study was performed to evaluate the impact of cardiac arrhythmia simulator software on nurses' learning in the subspecialty Vali-Asr Hospital in 2016. The study used a quasi-experimental randomized Solomon four-group design with the participation of 120 nurses in the subspecialty Vali-Asr Hospital in Tehran, Iran in 2016, who were selected purposefully and allocated to 4 groups. This design controlled for confounding factors such as prior information, maturation, and the role of sex and age. Valid and reliable multiple-choice tests were used to gather information; the validity of the test was approved by experts and its reliability was established with a Cronbach's alpha coefficient of 0.89. At first, the knowledge and skills of the participants were assessed by a pre-test; following the educational intervention with the cardiac arrhythmia simulator software over 14 days in the ICUs, the same factors were measured again by a post-test in the four groups. Data were analyzed using two-way ANOVA. The significance level was set at p<0.05. Based on the randomized Solomon four-group design and our test results, using cardiac arrhythmia simulator software as an intervention was effective for the nurses' learning, since a significant difference was found between pre-test and post-test in the first group (p<0.05). Also, other comparisons by ANOVA showed that there was no interaction between pre-test and intervention in any of the three knowledge areas of cardiac arrhythmias, their treatment, and their diagnosis (p>0.05). The use of a software-based simulator for cardiac arrhythmias was effective for nurses' learning in light of its attractive components and interactive method. The intervention increased the nurses' knowledge in the cognitive domain of cardiac arrhythmias as well as their diagnosis and treatment. The package can also be used for training in other areas such as continuing medical education.
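A Solomon four-group analysis tests whether the pre-test itself interacts with the intervention. The sketch below uses invented scores, not the study's data, to show the standard two-way ANOVA of post-test scores with pretesting and intervention as factors; a non-significant interaction suggests the pre-test did not bias the measured effect.

```python
# Illustrative two-way ANOVA for a Solomon four-group design (invented data).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "posttest":     [14, 15, 13, 16, 18, 19, 17, 20,
                     13, 14, 12, 15, 18, 19, 18, 20],
    "pretested":    [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
    "intervention": [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1],
})

model = smf.ols("posttest ~ C(pretested) * C(intervention)", data=df).fit()
print(anova_lm(model, typ=2))   # interaction row tests pre-test sensitization
```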
Unidata's Vision for Transforming Geoscience by Moving Data Services and Software to the Cloud
NASA Astrophysics Data System (ADS)
Ramamurthy, Mohan; Fisher, Ward; Yoksas, Tom
2015-04-01
Universities are facing many challenges: shrinking budgets, rapidly evolving information technologies, exploding data volumes, multidisciplinary science requirements, and high expectations from students who have grown up with smartphones and tablets. These changes are upending traditional approaches to accessing and using data and software. Unidata recognizes that its products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. Specifically, Unidata is working toward establishing a community-based development environment that supports the creation and use of software services to build end-to-end data workflows. The design encourages the creation of services that can be broken into small, independent chunks that provide simple capabilities. Chunks could be used individually to perform a task, or chained into simple or elaborate workflows. The services will also be portable in the form of downloadable Unidata-in-a-box virtual images, allowing their use in researchers' own cloud-based computing environments. In this talk, we present a vision for Unidata's future built on cloud-enabled data services and discuss our ongoing efforts to deploy a suite of Unidata data services and tools in the Amazon EC2 and Microsoft Azure cloud environments, including the transfer of real-time meteorological data into its cloud instances, product generation using those data, and the deployment of TDS, McIDAS ADDE and AWIPS II data servers and the Integrated Data Viewer (IDV) visualization tool.
Implementation of an OAIS Repository Using Free, Open Source Software
NASA Astrophysics Data System (ADS)
Flathers, E.; Gessler, P. E.; Seamon, E.
2015-12-01
The Northwest Knowledge Network (NKN) is a regional data repository located at the University of Idaho that focuses on the collection, curation, and distribution of research data. To support our home institution and others in the region, we offer services to researchers at all stages of the data lifecycle—from grant application and data management planning to data distribution and archive. In this role, we recognize the need to work closely with other data management efforts at partner institutions and agencies, as well as with larger aggregation efforts such as our state geospatial data clearinghouses, data.gov, DataONE, and others. In the past, one of our challenges with monolithic, prepackaged data management solutions has been that customization can be difficult to implement and maintain, especially as new versions of the software are released that are incompatible with our local codebase. Our solution is to break the monolith up into its constituent parts, which offers us several advantages. First, any customizations that we make are likely to fall into areas that can be accessed through Application Program Interfaces (APIs) that are likely to remain stable over time, so our code stays compatible. Second, as components become obsolete or insufficient to meet new demands that arise, we can replace the individual components with minimal effect on the rest of the infrastructure, causing less disruption to operations. Other advantages include increased system reliability, staggered rollout of new features, enhanced compatibility with legacy systems, reduced dependence on a single software company as a point of failure, and the separation of development into manageable tasks. In this presentation, we describe our application of the Service Oriented Architecture (SOA) design paradigm to assemble a data repository that conforms to the Open Archival Information System (OAIS) Reference Model primarily using a collection of free and open-source software. We detail the design of the repository, based upon open standards to support interoperability with other institutions' systems and with future versions of our own software components. We also describe the implementation process, including our use of GitHub as a collaboration tool and code repository.
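The component-swapping argument can be sketched in a few lines: each repository function sits behind a small, stable interface, so one backend can replace another without touching its callers. The interface and class names below are hypothetical illustrations, not NKN's actual code.

# Sketch of the swappable-component idea behind the SOA design: callers
# depend only on a small, stable interface, so backends can be replaced
# with minimal disruption. All names here are invented for illustration.
from abc import ABC, abstractmethod

class MetadataStore(ABC):
    @abstractmethod
    def save(self, record_id: str, metadata: dict) -> None: ...
    @abstractmethod
    def load(self, record_id: str) -> dict: ...

class InMemoryMetadataStore(MetadataStore):
    def __init__(self):
        self._records = {}
    def save(self, record_id, metadata):
        self._records[record_id] = dict(metadata)
    def load(self, record_id):
        return self._records[record_id]

class Repository:
    """Depends only on the MetadataStore interface, not a concrete backend."""
    def __init__(self, store: MetadataStore):
        self.store = store
    def ingest(self, record_id, metadata):
        self.store.save(record_id, metadata)

repo = Repository(InMemoryMetadataStore())  # swap in a database-backed store later
repo.ingest("dataset-001", {"title": "Example dataset", "creator": "NKN"})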
fMRI Validation of fNIRS Measurements During a Naturalistic Task
Noah, J. Adam; Ono, Yumie; Nomoto, Yasunori; Shimada, Sotaro; Tachibana, Atsumichi; Zhang, Xian; Bronner, Shaw; Hirsch, Joy
2015-01-01
We present a method to compare brain activity recorded with functional near-infrared spectroscopy (fNIRS) in a dance video game task to that recorded in a reduced version of the task using fMRI (functional magnetic resonance imaging). Recently, it has been shown that fNIRS can accurately record functional brain activities equivalent to those concurrently recorded with functional magnetic resonance imaging for classic psychophysical tasks and simple finger tapping paradigms. However, an often-quoted benefit of fNIRS is that the technique allows for studying neural mechanisms of complex, naturalistic behaviors that are not possible using the constrained environment of fMRI. Our goal was to extend the findings of previous studies that have shown high correlation between concurrently recorded fNIRS and fMRI signals to compare neural recordings obtained in fMRI procedures to those separately obtained in naturalistic fNIRS experiments. Specifically, we developed a modified version of the dance video game Dance Dance Revolution (DDR) to be compatible with both fMRI and fNIRS imaging procedures. In this methodology we explain the modifications to the software and hardware for compatibility with each technique as well as the scanning and calibration procedures used to obtain representative results. The results of the study show a task-related increase in oxyhemoglobin in both modalities and demonstrate that it is possible to replicate the findings of fMRI using fNIRS in a naturalistic task. This technique represents a methodology to compare fMRI imaging paradigms, which utilize a reduced-world environment, to fNIRS in closer approximation to naturalistic, full-body activities and behaviors. Further development of this technique may apply to neurodegenerative diseases, such as Parkinson’s disease, late stages of dementia, or patients with magnetic susceptibility concerns for whom fMRI scanning is contraindicated. PMID:26132365
Rapid prototyping 3D virtual world interfaces within a virtual factory environment
NASA Technical Reports Server (NTRS)
Kosta, Charles Paul; Krolak, Patrick D.
1993-01-01
On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.
A client/server system for Internet access to biomedical text/image databanks.
Thoma, G R; Long, L R; Berman, L E
1996-01-01
Internet access to mixed text/image databanks is finding application in the medical world. An example is a database of medical X-rays and associated data consisting of demographic, socioeconomic, physician's exam, medical laboratory and other information collected as part of a nationwide health survey conducted by the government. Another example is a collection of digitized cryosection images, CT and MR taken of cadavers as part of the National Library of Medicine's Visible Human Project. In both cases, the challenge is to provide access to both the image and the associated text for a wide end user community to create atlases, conduct epidemiological studies, to develop image-specific algorithms for compression, enhancement and other types of image processing, among many other applications. The databanks mentioned above are being created in prototype form. This paper describes the prototype system developed for the archiving of the data and the client software to enable a broad range of end users to access the archive, retrieve text and image data, display the data and manipulate the images. System design considerations include: data organization in a relational database management system with object-oriented extensions; a hierarchical organization of the image data by different resolution levels for different user classes; client design based on common hardware and software platforms incorporating SQL search capability, X Window, Motif and TAE (a development environment supporting rapid prototyping and management of graphic-oriented user interfaces); potential to include ultra high resolution display monitors as a user option; intuitive user interface paradigm for building complex queries; and contrast enhancement, magnification and mensuration tools for better viewing by the user.
myBrain: a novel EEG embedded system for epilepsy monitoring.
Pinho, Francisco; Cerqueira, João; Correia, José; Sousa, Nuno; Dias, Nuno
2017-10-01
The World Health Organisation has pointed out that successful health care delivery requires effective medical devices as tools for prevention, diagnosis, treatment and rehabilitation. Several studies have concluded that longer monitoring periods and outpatient settings might increase diagnosis accuracy and the success rate of treatment selection. The long-term monitoring of epileptic patients through electroencephalography (EEG) has been considered a powerful tool to improve the diagnosis, disease classification, and treatment of patients with this condition. This work presents the development of a wireless and wearable EEG acquisition platform suitable for both long-term and short-term monitoring in inpatient and outpatient settings. The developed platform features 32 passive dry electrodes, analogue-to-digital signal conversion with 24-bit resolution and a variable sampling frequency from 250 Hz to 1000 Hz per channel, embedded in a stand-alone module. A computer-on-module embedded system runs a Linux® operating system that rules the interface between two software frameworks, which interact to satisfy the real-time constraints of signal acquisition as well as parallel recording, processing and wireless data transmission. A textile structure was developed to accommodate all components. Platform performance was evaluated in terms of hardware, software and signal quality. The electrodes were characterised through electrochemical impedance spectroscopy and the operating system performance running an epileptic discrimination algorithm was evaluated. Signal quality was thoroughly assessed in two different approaches: playback of EEG reference signals and benchmarking with a clinical-grade EEG system in alpha-wave replacement and steady-state visual evoked potential paradigms. The proposed platform seems to efficiently monitor epileptic patients in both inpatient and outpatient settings and paves the way to new ambulatory clinical regimens as well as non-clinical EEG applications.
Technology and Tool Development to Support Safety and Mission Assurance
NASA Technical Reports Server (NTRS)
Denney, Ewen; Pai, Ganesh
2017-01-01
The Assurance Case approach is being adopted in a number of safety-mission-critical application domains in the U.S., e.g., medical devices, defense aviation, automotive systems, and, lately, civil aviation. This paradigm refocuses traditional, process-based approaches to assurance on demonstrating explicitly stated assurance goals, emphasizing the use of structured rationale, and concrete product-based evidence as the means for providing justified confidence that systems and software are fit for purpose in safely achieving mission objectives. NASA has also been embracing assurance cases through the concepts of Risk Informed Safety Cases (RISCs), as documented in the NASA System Safety Handbook, and Objective Hierarchies (OHs) as put forth by the Agency's Office of Safety and Mission Assurance (OSMA). This talk will give an overview of the work being performed by the SGT team located at NASA Ames Research Center, in developing technologies and tools to engineer and apply assurance cases in customer projects pertaining to aviation safety. We elaborate how our Assurance Case Automation Toolset (AdvoCATE) has not only extended the state-of-the-art in assurance case research, but also demonstrated its practical utility. We have successfully developed safety assurance cases for a number of Unmanned Aircraft Systems (UAS) operations, which underwent, and passed, scrutiny both by the aviation regulator, i.e., the FAA, as well as the applicable NASA boards for airworthiness and flight safety, flight readiness, and mission readiness. We discuss our efforts in expanding AdvoCATE capabilities to support RISCs and OHs under a project recently funded by OSMA under its Software Assurance Research Program. Finally, we speculate on the applicability of our innovations beyond aviation safety to such endeavors as robotic, and human spaceflight.
Neural network pattern recognition of thermal-signature spectra for chemical defense
NASA Astrophysics Data System (ADS)
Carrieri, Arthur H.; Lim, Pascal I.
1995-05-01
We treat infrared patterns of absorption or emission by nerve and blister agent compounds (and simulants of this chemical group) as features for the training of neural networks to detect the compounds' liquid layers on the ground or their vapor plumes during evaporation by external heating. Training of a four-layer network architecture is composed of a backward-error-propagation algorithm and a gradient-descent paradigm. We conduct testing by feed-forwarding preprocessed spectra through the network in a scaled format consistent with the structure of the training-data-set representation. The best-performance weight matrix (spectral filter) evolved from final network training and testing with software simulation trials is electronically transferred to a set of eight artificial intelligence integrated circuits (ICs) in a specific modular form (splitting of weight matrices). This form makes full use of all input-output IC nodes. This neural network computer serves an important real-time detection function when it is integrated into pre- and postprocessing data-handling units of a tactical prototype thermoluminescence sensor now under development at the Edgewood Research, Development, and Engineering Center.
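A minimal sketch of the training scheme described above, a four-layer feed-forward network trained with backward error propagation and gradient descent, is given below; the layer sizes, learning rate and the random stand-in "spectra" are illustrative assumptions, not the sensor data or architecture used in the study.

# Minimal sketch of a four-layer network trained with backward error
# propagation and gradient descent. Sizes and data are invented.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((200, 40))  # stand-in for preprocessed, scaled spectra
y = (X[:, :20].sum(axis=1) > X[:, 20:].sum(axis=1)).astype(float).reshape(-1, 1)

sizes = [40, 16, 8, 1]     # input layer, two hidden layers, output layer
W = [rng.normal(0, 0.3, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

lr = 0.5
for epoch in range(2000):
    # forward pass through all layers
    activations = [X]
    for Wi, bi in zip(W, b):
        activations.append(sigmoid(activations[-1] @ Wi + bi))
    # backward pass: propagate the output error layer by layer
    delta = (activations[-1] - y) * activations[-1] * (1 - activations[-1])
    for i in reversed(range(len(W))):
        grad_W = activations[i].T @ delta / len(X)
        grad_b = delta.mean(axis=0, keepdims=True)
        if i > 0:
            delta = (delta @ W[i].T) * activations[i] * (1 - activations[i])
        W[i] -= lr * grad_W   # gradient-descent update
        b[i] -= lr * grad_b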
McKenzie, Judith; Braswell, Bob; Jelsma, Jennifer; Naidoo, Nirmala
2011-01-01
Q-methodology was developed to analyse subjective responses to a range of items dealing with specific topics. This article describes the use of Q-methodology and presents the results of a Q-study on perspectives on disability carried out in a training workshop as evidence for its usefulness in disability research. A Q-sort was administered in the context of a training workshop on Q-method. The Q-sort consisted of statements related to the topic of disability. The responses were analysed using specifically developed software to identify factors that represent patterns of responses. Twenty-two of the 23 respondents loaded on four factors. These factors appeared to represent different paradigms relating to the social, medical and disability rights models of disability. The fourth factor appeared to be that of a family perspective. These are all models evident in the disability research literature and provide evidence for the validity of Q-method in disability research. Based on this opportunistic study, it would appear that Q-methodology is a useful tool for identifying different viewpoints related to disability.
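The core Q-methodology computation can be sketched as follows, under the usual convention of correlating respondents with one another and factoring that person-by-person matrix; the simulated sorts, the use of PCA instead of centroid extraction, and the loading threshold are illustrative assumptions rather than the workshop's actual software.

# Sketch of the core Q-methodology computation: correlate respondents'
# Q-sorts with one another and extract factors from that person-by-person
# correlation matrix, so that people (not items) load on factors.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_statements = 23, 40
sorts = rng.integers(-4, 5, size=(n_respondents, n_statements)).astype(float)

corr = np.corrcoef(sorts)                # correlations between persons
eigvals, eigvecs = np.linalg.eigh(corr)  # principal components of that matrix
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:4]] * np.sqrt(eigvals[order[:4]])

# A respondent is conventionally said to "load" on a factor when the loading
# exceeds a significance threshold such as 2.58 / sqrt(number of statements).
threshold = 2.58 / np.sqrt(n_statements)
print(np.abs(loadings) > threshold)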
Structural impact detection with vibro-haptic interfaces
NASA Astrophysics Data System (ADS)
Jung, Hwee-Kwon; Park, Gyuhae; Todd, Michael D.
2016-07-01
This paper presents a new sensing paradigm for structural impact detection using vibro-haptic interfaces. The goal of this study is to allow humans to ‘feel’ structural responses (impact, shape changes, and damage) and eventually determine the health conditions of a structure. The target applications for this study are aerospace structures, in particular, airplane wings. Both hardware and software components are developed to realize the vibro-haptic-based impact detection system. First, L-shape piezoelectric sensor arrays are deployed to measure the acoustic emission data generated by impacts on a wing. Unique haptic signals are then generated by processing the measured acoustic emission data. These haptic signals are wirelessly transmitted to human arms, and with the vibro-haptic interface, human pilots could identify impact location, intensity and the possibility of subsequent damage initiation. With the haptic interface, the experimental results demonstrate that humans could correctly identify such events, while reducing false indications of structural conditions by capitalizing on humans' classification capability. Several important aspects of this study, including the development of haptic interfaces, the design of optimal human training strategies, and the extension of the haptic capability to structural impact detection, are summarized in this paper.
Aristeia Leadership: A Catalyst for the i[superscript 2]Flex Methodology
ERIC Educational Resources Information Center
Gialamas, Stefanos; Avgerinou, Maria D.
2015-01-01
In response to the global educational reform we have developed a new education paradigm, the Global Morfosis paradigm which has been implemented at the American Community Schools of Athens (ACS Athens) Greece for the past decade. This dynamic paradigm consists of three inseparable, interconnected, and interrelated components: the Educational…
ERIC Educational Resources Information Center
Dmitrenko, Tamara ?.; Lavryk, Tatjana V.; Yaresko, Ekaterina V.
2015-01-01
Changes in various fields of knowledge have influenced pedagogical science. The article explains the structure of the foundations of modern pedagogy through its paradigmatic and methodological aspects. The bases of modern pedagogy include a complex of paradigms, the object and subject of the science, general and specific principles, and methods and technologies.…
The Question of Work: Adolescent Literature and the Eriksonian Paradigm.
ERIC Educational Resources Information Center
Burgan, Mary
1988-01-01
Suggests that focusing on paradigms of work--the way it is described, together with the thematic implications it embodies--can be useful in teaching literature to young adults. Examines how examples from literature illustrate Erik H. Erikson's paradigm of the psychosocial stages of development in late childhood and adolescence. (MM)
Treatment of Children with Speech Oral Placement Disorders (OPDs): A Paradigm Emerges
ERIC Educational Resources Information Center
Bahr, Diane; Rosenfeld-Johnson, Sara
2010-01-01
Epidemiological research was used to develop the Speech Disorders Classification System (SDCS). The SDCS is an important speech diagnostic paradigm in the field of speech-language pathology. This paradigm could be expanded and refined to also address treatment while meeting the standards of evidence-based practice. The article assists that process…
Rodríguez-Domínguez, Carlos; Benghazi, Kawtar; Noguera, Manuel; Garrido, José Luis; Rodríguez, María Luisa; Ruiz-López, Tomás
2012-01-01
The Request-Response (RR) paradigm is widely used in ubiquitous systems to exchange information in a secure, reliable and timely manner. Nonetheless, there is also an emerging need for adopting the Publish-Subscribe (PubSub) paradigm in this kind of system, due to the advantages that this paradigm offers in supporting mobility by means of asynchronous, non-blocking and one-to-many message distribution semantics for event notification. This paper analyzes the strengths and weaknesses of both the RR and PubSub paradigms to support communications in ubiquitous systems and proposes an abstract communication model in order to enable their seamless integration. Thus, developers will be focused on communication semantics and the required quality properties, rather than be concerned about specific communication mechanisms. The aim is to provide developers with abstractions intended to decrease the complexity of integrating different communication paradigms commonly needed in ubiquitous systems. The proposal has been applied to implement a middleware and a real home automation system to show its applicability and benefits. PMID:22969366
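A minimal sketch of the kind of abstraction argued for above is shown below: application code targets one small messaging interface while request-response and publish-subscribe semantics are interchangeable implementations behind it. The class names, topics and messages are hypothetical, not the authors' middleware API.

# Sketch of abstracting over RR and PubSub communication: the same event
# can be served synchronously (request/reply) or asynchronously (publish/
# subscribe). Names and payloads are invented for illustration.
from collections import defaultdict
from typing import Callable

class RequestResponseChannel:
    """Synchronous, one-to-one: the caller blocks until a reply is produced."""
    def __init__(self, handler: Callable[[dict], dict]):
        self.handler = handler
    def request(self, message: dict) -> dict:
        return self.handler(message)

class PubSubChannel:
    """Asynchronous, one-to-many: publishers and subscribers are decoupled."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic: str, callback: Callable[[dict], None]):
        self.subscribers[topic].append(callback)
    def publish(self, topic: str, message: dict):
        for callback in self.subscribers[topic]:
            callback(message)

rr = RequestResponseChannel(lambda msg: {"temperature": 21.5})
print(rr.request({"query": "living-room temperature"}))

ps = PubSubChannel()
ps.subscribe("temperature", lambda msg: print("notified:", msg))
ps.publish("temperature", {"temperature": 21.5})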
NASA Astrophysics Data System (ADS)
Bernardet, Ulysses; Bermúdez I Badia, Sergi; Duff, Armin; Inderbitzin, Martin; Le Groux, Sylvain; Manzolli, Jônatas; Mathews, Zenon; Mura, Anna; Väljamäe, Aleksander; Verschure, Paul F. M. J.
The eXperience Induction Machine (XIM) is one of the most advanced mixed-reality spaces available today. XIM is an immersive space that consists of physical sensors and effectors and which is conceptualized as a general-purpose infrastructure for research in the field of psychology and human-artifact interaction. In this chapter, we set out the epistemological rationale behind XIM by putting the installation in the context of psychological research. The design and implementation of XIM are based on principles and technologies of neuromorphic control. We give a detailed description of the hardware infrastructure and software architecture, including the logic of the overall behavioral control. To illustrate the approach toward psychological experimentation, we discuss a number of practical applications of XIM. These include the so-called persistent virtual community, the application in the research of the relationship between human experience and multi-modal stimulation, and an investigation of a mixed-reality social interaction paradigm.
A secure data outsourcing scheme based on Asmuth-Bloom secret sharing
NASA Astrophysics Data System (ADS)
Idris Muhammad, Yusuf; Kaiiali, Mustafa; Habbal, Adib; Wazan, A. S.; Sani Ilyasu, Auwal
2016-11-01
Data outsourcing is an emerging paradigm for data management in which a database is provided as a service by third-party service providers. One of the major benefits of offering database as a service is to provide organisations, which are unable to purchase expensive hardware and software to host their databases, with efficient data storage accessible online at a cheap rate. Despite that, several issues of data confidentiality, integrity, availability and efficient indexing of users' queries at the server side have to be addressed in the data outsourcing paradigm. Service providers have to guarantee that their clients' data are secured against internal (insider) and external attacks. This paper briefly analyses the existing indexing schemes in data outsourcing and highlights their advantages and disadvantages. Then, this paper proposes a secure data outsourcing scheme based on Asmuth-Bloom secret sharing which tries to address the issues in data outsourcing such as data confidentiality, availability and order preservation for efficient indexing.
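For concreteness, here is a small sketch of (t, n) Asmuth-Bloom secret sharing, the primitive the proposed scheme builds on; the tiny moduli and helper names are illustrative only, and a real deployment would use large pairwise-coprime moduli chosen to satisfy the same threshold condition.

# Sketch of (t, n) Asmuth-Bloom secret sharing with toy parameters.
import random

def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise coprime moduli."""
    x, prod = 0, 1
    for r, m in zip(residues, moduli):
        inv = pow(prod, -1, m)          # modular inverse (Python 3.8+)
        x += prod * ((r - x) * inv % m)
        prod *= m
    return x % prod

def share(secret, m0, moduli, t):
    assert secret < m0
    M = 1
    for m in moduli[:t]:
        M *= m
    largest = 1
    for m in sorted(moduli)[-(t - 1):]:
        largest *= m
    assert M > m0 * largest             # Asmuth-Bloom threshold condition
    A = random.randrange((M - secret) // m0)
    y = secret + A * m0                 # y < M, so any t shares recover y
    return [(m, y % m) for m in moduli]

def reconstruct(shares, m0):
    moduli = [m for m, _ in shares]
    residues = [r for _, r in shares]
    return crt(residues, moduli) % m0

m0, moduli, t = 7, [11, 13, 17, 19], 3  # secret space and share moduli (toy sizes)
shares = share(5, m0, moduli, t)
print(reconstruct(shares[:3], m0))      # any 3 of the 4 shares give back 5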
A Cloud-Based Car Parking Middleware for IoT-Based Smart Cities: Design and Implementation
Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhao, Li; Zhang, Xueji
2014-01-01
This paper presents the generic concept of using cloud-based intelligent car parking services in smart cities as an important application of the Internet of Things (IoT) paradigm. This type of services will become an integral part of a generic IoT operational platform for smart cities due to its pure business-oriented features. A high-level view of the proposed middleware is outlined and the corresponding operational platform is illustrated. To demonstrate the provision of car parking services, based on the proposed middleware, a cloud-based intelligent car parking system for use within a university campus is described along with details of its design, implementation, and operation. A number of software solutions, including Kafka/Storm/Hbase clusters, OSGi web applications with distributed NoSQL, a rule engine, and mobile applications, are proposed to provide ‘best’ car parking service experience to mobile users, following the Always Best Connected and best Served (ABC&S) paradigm. PMID:25429416
A cloud-based car parking middleware for IoT-based smart cities: design and implementation.
Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhao, Li; Zhang, Xueji
2014-11-25
This paper presents the generic concept of using cloud-based intelligent car parking services in smart cities as an important application of the Internet of Things (IoT) paradigm. This type of services will become an integral part of a generic IoT operational platform for smart cities due to its pure business-oriented features. A high-level view of the proposed middleware is outlined and the corresponding operational platform is illustrated. To demonstrate the provision of car parking services, based on the proposed middleware, a cloud-based intelligent car parking system for use within a university campus is described along with details of its design, implementation, and operation. A number of software solutions, including Kafka/Storm/Hbase clusters, OSGi web applications with distributed NoSQL, a rule engine, and mobile applications, are proposed to provide 'best' car parking service experience to mobile users, following the Always Best Connected and best Served (ABC&S) paradigm.
Ephus: Multipurpose Data Acquisition Software for Neuroscience Experiments
Suter, Benjamin A.; O'Connor, Timothy; Iyer, Vijay; Petreanu, Leopoldo T.; Hooks, Bryan M.; Kiritani, Taro; Svoboda, Karel; Shepherd, Gordon M. G.
2010-01-01
Physiological measurements in neuroscience experiments often involve complex stimulus paradigms and multiple data channels. Ephus (http://www.ephus.org) is an open-source software package designed for general-purpose data acquisition and instrument control. Ephus operates as a collection of modular programs, including an ephys program for standard whole-cell recording with single or multiple electrodes in typical electrophysiological experiments, and a mapper program for synaptic circuit mapping experiments involving laser scanning photostimulation based on glutamate uncaging or channelrhodopsin-2 excitation. Custom user functions allow user-extensibility at multiple levels, including on-line analysis and closed-loop experiments, where experimental parameters can be changed based on recently acquired data, such as during in vivo behavioral experiments. Ephus is compatible with a variety of data acquisition and imaging hardware. This paper describes the main features and modules of Ephus and their use in representative experimental applications. PMID:21960959
Generating target system specifications from a domain model using CLIPS
NASA Technical Reports Server (NTRS)
Sugumaran, Vijayan; Gomaa, Hassan; Kerschberg, Larry
1991-01-01
The quest for reuse in software engineering is still being pursued and researchers are actively investigating the domain modeling approach to software construction. There are several domain modeling efforts reported in the literature and they all agree that the components that are generated from domain modeling are more conducive to reuse. Once a domain model is created, several target systems can be generated by tailoring the domain model or by evolving the domain model and then tailoring it according to the specified requirements. This paper presents the Evolutionary Domain Life Cycle (EDLC) paradigm in which a domain model is created using multiple views, namely, aggregation hierarchy, generalization/specialization hierarchies, object communication diagrams and state transition diagrams. The architecture of the Knowledge Based Requirements Elicitation Tool (KBRET) which is used to generate target system specifications is also presented. The preliminary version of KBRET is implemented in the C Language Integrated Production System (CLIPS).
Achieving behavioral control with millisecond resolution in a high-level programming environment
Asaad, Wael F.; Eskandar, Emad N.
2008-01-01
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the one millisecond time-scale that is relevant for the alignment of behavioral and neural events. PMID:18606188
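The kind of timing measurement this argument rests on can be sketched simply: run a nominally fixed-interval loop in a high-level environment and record how far each interval deviates from the target. The interval, loop count and busy-wait strategy below are illustrative choices, not the authors' implementation.

# Sketch of measuring timing error in a high-level environment on a
# non-real-time operating system: how much does a nominally fixed-interval
# loop drift and jitter?
import time

target_ms = 10.0                 # intended frame interval
intervals = []
previous = time.perf_counter()
for _ in range(500):
    # busy-wait rather than sleep(): sleep granularity is itself a source
    # of error on general-purpose operating systems
    while (time.perf_counter() - previous) * 1000.0 < target_ms:
        pass
    now = time.perf_counter()
    intervals.append((now - previous) * 1000.0)
    previous = now

errors = [iv - target_ms for iv in intervals]
print(f"mean error {sum(errors)/len(errors):.3f} ms, "
      f"worst case {max(abs(e) for e in errors):.3f} ms")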
Computational knowledge integration in biopharmaceutical research.
Ficenec, David; Osborne, Mark; Pradines, Joel; Richards, Dan; Felciano, Ramon; Cho, Raymond J; Chen, Richard O; Liefeld, Ted; Owen, James; Ruttenberg, Alan; Reich, Christian; Horvath, Joseph; Clark, Tim
2003-09-01
An initiative to increase biopharmaceutical research productivity by capturing, sharing and computationally integrating proprietary scientific discoveries with public knowledge is described. This initiative involves both organisational process change and multiple interoperating software systems. The software components rely on mutually supporting integration techniques. These include a richly structured ontology, statistical analysis of experimental data against stored conclusions, natural language processing of public literature, secure document repositories with lightweight metadata, web services integration, enterprise web portals and relational databases. This approach has already begun to increase scientific productivity in our enterprise by creating an organisational memory (OM) of internal research findings, accessible on the web. Through bringing together these components it has also been possible to construct a very large and expanding repository of biological pathway information linked to this repository of findings which is extremely useful in analysis of DNA microarray data. This repository, in turn, enables our research paradigm to be shifted towards more comprehensive systems-based understandings of drug action.
1988-01-01
...that basic terms such as physical object, position, etc., are used over and over again. We have built a library of such terms and have provided mechanisms... Goals of a Performance Estimator Assistant: As defined in [2], the long-term goal of a Performance Estimator Assistant (PEA) is to aid in the ... characterization. (Figure 1: Current Paradigm.) Mid-term goals are: domain models for analysis; algorithm design analysis and advice; and real-time...
Fuzzy-Neural Controller in Service Requests Distribution Broker for SOA-Based Systems
NASA Astrophysics Data System (ADS)
Fras, Mariusz; Zatwarnicka, Anna; Zatwarnicki, Krzysztof
The evolution of software architectures led to the rising importance of the Service Oriented Architecture (SOA) concept. This architectural paradigm supports building flexible distributed service systems. In the paper, the architecture of a service request distribution broker designed for use in SOA-based systems is proposed. The broker is built on the idea of fuzzy control. The functional and non-functional request requirements, in conjunction with monitoring of execution and communication links, are used to distribute requests. Decisions are made with the use of a fuzzy-neural network.
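A minimal sketch of the fuzzy-control side of such a broker is given below: triangular membership functions turn measured server load and link delay into a preference score, and the request goes to the most preferred server. The membership shapes, the single rule and the server data are invented; the paper's broker additionally adapts its rules with a neural network.

# Sketch of fuzzy evaluation in a request-distribution setting.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def preference(load, delay_ms):
    low_load = triangular(load, -0.5, 0.0, 0.6)
    fast_link = triangular(delay_ms, -50.0, 0.0, 120.0)
    # single illustrative rule: prefer servers that are lightly loaded AND close
    return min(low_load, fast_link)

servers = {"s1": (0.2, 15.0), "s2": (0.7, 5.0), "s3": (0.4, 80.0)}
best = max(servers, key=lambda s: preference(*servers[s]))
print(best)  # the request would be routed to the most preferred server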
Designing small universal k-mer hitting sets for improved analysis of high-throughput sequencing
Kingsford, Carl
2017-01-01
With the rapidly increasing volume of deep sequencing data, more efficient algorithms and data structures are needed. Minimizers are a central recent paradigm that has improved various sequence analysis tasks, including hashing for faster read overlap detection, sparse suffix arrays for creating smaller indexes, and Bloom filters for speeding up sequence search. Here, we propose an alternative paradigm that can lead to substantial further improvement in these and other tasks. For integers k and L > k, we say that a set of k-mers is a universal hitting set (UHS) if every possible L-long sequence must contain a k-mer from the set. We develop a heuristic called DOCKS to find a compact UHS, which works in two phases: The first phase is solved optimally, and for the second we propose several efficient heuristics, trading set size for speed and memory. The use of heuristics is motivated by showing the NP-hardness of a closely related problem. We show that DOCKS works well in practice and produces UHSs that are very close to a theoretical lower bound. We present results for various values of k and L and by applying them to real genomes show that UHSs indeed improve over minimizers. In particular, DOCKS uses less than 30% of the 10-mers needed to span the human genome compared to minimizers. The software and computed UHSs are freely available at github.com/Shamir-Lab/DOCKS/ and acgt.cs.tau.ac.il/docks/, respectively. PMID:28968408
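The two notions compared above can be made concrete with a short sketch: computing the minimizer of each L-long window, and checking the universal-hitting-set property that every L-long window contains some k-mer from a given set. The toy sequence and k-mer set are illustrative; DOCKS itself constructs small sets with far more sophisticated heuristics.

# Sketch of minimizers and of the universal-hitting-set property.
def window_minimizer(window, k):
    """Smallest k-mer (here lexicographically) inside one L-long window."""
    return min(window[i:i + k] for i in range(len(window) - k + 1))

def hits_every_window(sequence, kmer_set, k, L):
    """True if every L-long window of `sequence` contains a k-mer from the set."""
    for start in range(len(sequence) - L + 1):
        window = sequence[start:start + L]
        if not any(window[i:i + k] in kmer_set for i in range(L - k + 1)):
            return False
    return True

seq = "ACGTACGTTGCAACGTGGCA"
k, L = 3, 8
print([window_minimizer(seq[i:i + L], k) for i in range(len(seq) - L + 1)])
print(hits_every_window(seq, {"ACG", "TGC", "GGC"}, k, L))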
Mapping of Technological Opportunities-Labyrinth Seal Example
NASA Technical Reports Server (NTRS)
Clarke, Dana W., Sr.
2006-01-01
All technological systems evolve based on evolutionary sequences that have repeated throughout history and can be abstracted from the history of technology and patents. These evolutionary sequences represent objective patterns and provide considerable insights that can be used to proactively model future seal concepts. This presentation provides an overview of how to map seal technology into the future using a labyrinth seal example. The mapping process delivers functional descriptions of sequential changes in market/consumer demand, from today's current paradigm to the next major paradigm shift. The future paradigm is developed according to a simple formula: the future paradigm is free of all flaws associated with the current paradigm; it is as far into the future as we can see. Although revolutionary, the vision of the future paradigm is typically not immediately or completely realizable, nor is it normally seen as practical. There are several reasons that prevent immediate and complete practical application, such as: 1) some of the required technological or business resources and knowledge are not available; 2) the availability of other technological or business resources is limited; and/or 3) some necessary knowledge has not been completely developed. These factors tend to drive the Total Cost of Ownership or Utilization out of an acceptable range; revealing the reasons for the high Total Cost of Ownership or Utilization provides a clear understanding of research opportunities essential for future developments and defines the current limits of the immediately achievable improvements. The typical roots of high Total Cost of Ownership or Utilization lie in the limited availability or even the absence of essential resources and knowledge necessary for its realization. In order to overcome this obstacle, step-by-step modification of the current paradigm is pursued to evolve from the current situation toward the ideal future, i.e., evolution rather than revolution. A key point is that evolutionary stages are mapped to show step-by-step evolution from the current paradigm to the next major paradigm.
Toward the First Data Acquisition Standard in Synthetic Biology.
Sainz de Murieta, Iñaki; Bultelle, Matthieu; Kitney, Richard I
2016-08-19
This paper describes the development of a new data acquisition standard for synthetic biology. This comprises the creation of a methodology that is designed to capture all the data, metadata, and protocol information associated with biopart characterization experiments. The new standard, called DICOM-SB, is based on the highly successful Digital Imaging and Communications in Medicine (DICOM) standard in medicine. A data model is described which has been specifically developed for synthetic biology. The model is a modular, extensible data model for the experimental process, which can optimize data storage for large amounts of data. DICOM-SB also includes services orientated toward the automatic exchange of data and information between modalities and repositories. DICOM-SB has been developed in the context of systematic design in synthetic biology, which is based on the engineering principles of modularity, standardization, and characterization. The systematic design approach utilizes the design, build, test, and learn design cycle paradigm. DICOM-SB has been designed to be compatible with and complementary to other standards in synthetic biology, including SBOL. In this regard, the software provides effective interoperability. The new standard has been tested by experiments and data exchange between Nanyang Technological University in Singapore and Imperial College London.
Fuzzy Hybrid Deliberative/Reactive Paradigm (FHDRP)
NASA Technical Reports Server (NTRS)
Sarmadi, Hengameth
2004-01-01
This work aims to introduce a new concept for incorporating fuzzy sets into the hybrid deliberative/reactive paradigm. After a brief review of the basic issues of the hybrid paradigm, the definition of the agent-based fuzzy hybrid paradigm, which enables agents to derive their behavior from quantitative numerical and qualitative knowledge and to carry out their decision-making procedure via a fuzzy rule bank, is discussed. Next, an example provides a more applied platform for the developed approach, and finally an overview of the corresponding agent architecture enhances the agents' logical framework.
Information processing psychology: A promising paradigm for research in science teaching
NASA Astrophysics Data System (ADS)
Stewart, James H.; Atkin, Julia A.
Three research paradigms, those of Ausubel, Gagné and Piaget, have received a great deal of attention in the literature of science education. In this article a fourth paradigm is presented - an information processing psychology paradigm. The article is composed of two sections. The first section describes a model of memory developed by information processing psychologists. The second section describes how such a model could be used to guide science education research on learning and problem solving. (Received: 19 October 1981)
Developing critical practice: a South African's perspective.
Pillay, M
1998-01-01
The manner in which speech and language therapy (SLT) considers communicating evidence of practice with a multicultural clientele is considered in the context of cultural imperialism. A conceptual framework (i.e., the curriculum of practice) developed from a South African study (Pillay 1997) is highlighted for use in understanding, evaluating and communicating evidence of practice with the clientele in focus. The lens (or paradigm) used by SLT to view its curriculum of practice may reveal different stories about the same subject. Given this, the critical paradigm is proffered over the empirical-analytical (or 'scientific') and hermeneutic-interpretive paradigms. Finally, suggestions regarding the development of critical SLT are discussed.
EpiBasket: how e-commerce tools can improve epidemiological preparedness.
Xing, Weijia; Hejblum, Gilles; Valleron, Alain-Jacques
2013-10-31
Should an emerging infectious disease outbreak or an environmental disaster occur, the collection of epidemiological data must start as soon as possible after the event's onset. Questionnaires are usually built de novo for each event, resulting in substantially delayed epidemiological responses that are detrimental to the understanding and control of the event considered. Moreover, the public health and/or academic institution databases constructed with responses to different questionnaires are usually difficult to merge, impairing necessary collaborations. We aimed to show that e-commerce concepts and software tools can be readily adapted to enable rapid collection of data after an infectious disease outbreak or environmental disaster. Here, the 'customers' are the epidemiologists, who fill their shopping 'baskets' with standardised questions. For each epidemiological field, a catalogue of questions is constituted by identifying the relevant variables based on a review of the published literature on similar circumstances. Each question is tagged with information on its source papers. Epidemiologists can then tailor their own questionnaires by choosing appropriate questions from this catalogue. The software immediately provides them with ready-to-use forms and online questionnaires. All databases constituted by the different EpiBasket users are interoperable, because the corresponding questionnaires are derived from the same corpus of questions. A proof-of-concept prototype was developed for Knowledge, Attitudes and Practice (KAP) surveys, which is one of the fields of the epidemiological investigation frequently explored during, or after, an outbreak or environmental disaster. The catalogue of questions was initiated from a review of the KAP studies conducted during or after the 2003 severe acute respiratory syndrome epidemic. Rapid collection of standardised data after an outbreak or environmental disaster can be facilitated by transposing the e-commerce paradigm to epidemiology, taking advantage of the powerful software tools already available.
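The "shopping basket" idea translates directly into a small data structure: standardized questions carry field tags and source references, and a questionnaire is assembled by selecting from the catalogue. The catalogue entries and field names below are invented examples, not EpiBasket's actual content.

# Sketch of the question-catalogue / basket idea: standardized, tagged
# questions are selected into a questionnaire, and shared question ids keep
# the resulting databases interoperable. Entries are invented examples.
catalogue = [
    {"id": "KAP-01", "field": "knowledge", "text": "How is the disease transmitted?",
     "source": "SARS 2003 KAP survey"},
    {"id": "KAP-07", "field": "practice", "text": "Do you wear a mask in public?",
     "source": "SARS 2003 KAP survey"},
    {"id": "EXP-02", "field": "exposure", "text": "Have you visited the affected area?",
     "source": "outbreak investigation protocol"},
]

def build_questionnaire(fields):
    """Select catalogue questions for the requested fields, preserving ids."""
    return [q for q in catalogue if q["field"] in fields]

for q in build_questionnaire({"knowledge", "practice"}):
    print(q["id"], "-", q["text"])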
Ramot, Daniel; Johnson, Brandon E.; Berry, Tommie L.; Carnell, Lucinda; Goodman, Miriam B.
2008-01-01
Background: Caenorhabditis elegans locomotion is a simple behavior that has been widely used to dissect genetic components of behavior, synaptic transmission, and muscle function. Many of the paradigms that have been created to study C. elegans locomotion rely on qualitative experimenter observation. Here we report the implementation of an automated tracking system developed to quantify the locomotion of multiple individual worms in parallel. Methodology/Principal Findings: Our tracking system generates a consistent measurement of locomotion that allows direct comparison of results across experiments and experimenters and provides a standard method to share data between laboratories. The tracker utilizes a video camera attached to a zoom lens and a software package implemented in MATLAB®. We demonstrate several proof-of-principle applications for the tracker including measuring speed in the absence and presence of food and in the presence of serotonin. We further use the tracker to automatically quantify the time course of paralysis of worms exposed to aldicarb and levamisole and show that tracker performance compares favorably to data generated using a hand-scored metric. Conclusions/Significance: Although this is not the first automated tracking system developed to measure C. elegans locomotion, our tracking software package is freely available and provides a simple interface that includes tools for rapid data collection and analysis. By contrast with other tools, it is not dependent on a specific set of hardware. We propose that the tracker may be used for a broad range of additional worm locomotion applications including genetic and chemical screening. PMID:18493300
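The central speed measurement such a tracker performs can be sketched in a few lines: centroid positions per frame are converted into instantaneous and mean speed using the frame rate and spatial calibration. The trajectory and calibration constants below are invented for illustration, and the published tracker is a MATLAB package rather than this Python fragment.

# Sketch of converting tracked centroid positions into worm speed.
import numpy as np

frame_rate_hz = 2.0              # frames per second (hypothetical)
microns_per_pixel = 12.5         # spatial calibration (hypothetical)

centroids_px = np.array([[100, 100], [103, 101], [107, 103], [110, 106]])
steps_px = np.linalg.norm(np.diff(centroids_px, axis=0), axis=1)
speed_um_per_s = steps_px * microns_per_pixel * frame_rate_hz

print("instantaneous speeds (um/s):", np.round(speed_um_per_s, 1))
print("mean speed (um/s):", round(float(speed_um_per_s.mean()), 1))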
Atlas2 Cloud: a framework for personal genome analysis in the cloud
2012-01-01
Background: Until recently, sequencing has primarily been carried out in large genome centers which have invested heavily in developing the computational infrastructure that enables genomic sequence analysis. The recent advancements in next generation sequencing (NGS) have led to a wide dissemination of sequencing technologies and data, to highly diverse research groups. It is expected that clinical sequencing will become part of diagnostic routines shortly. However, limited accessibility to computational infrastructure and high quality bioinformatic tools, and the demand for personnel skilled in data analysis and interpretation remains a serious bottleneck. To this end, the cloud computing and Software-as-a-Service (SaaS) technologies can help address these issues. Results: We successfully enabled the Atlas2 Cloud pipeline for personal genome analysis on two different cloud service platforms: a community cloud via the Genboree Workbench, and a commercial cloud via the Amazon Web Services using Software-as-a-Service model. We report a case study of personal genome analysis using our Atlas2 Genboree pipeline. We also outline a detailed cost structure for running Atlas2 Amazon on whole exome capture data, providing cost projections in terms of storage, compute and I/O when running Atlas2 Amazon on a large data set. Conclusions: We find that providing a web interface and an optimized pipeline clearly facilitates usage of cloud computing for personal genome analysis, but for it to be routinely used for large scale projects there needs to be a paradigm shift in the way we develop tools, in standard operating procedures, and in funding mechanisms. PMID:23134663
Atlas2 Cloud: a framework for personal genome analysis in the cloud.
Evani, Uday S; Challis, Danny; Yu, Jin; Jackson, Andrew R; Paithankar, Sameer; Bainbridge, Matthew N; Jakkamsetti, Adinarayana; Pham, Peter; Coarfa, Cristian; Milosavljevic, Aleksandar; Yu, Fuli
2012-01-01
Until recently, sequencing has primarily been carried out in large genome centers which have invested heavily in developing the computational infrastructure that enables genomic sequence analysis. The recent advancements in next generation sequencing (NGS) have led to a wide dissemination of sequencing technologies and data, to highly diverse research groups. It is expected that clinical sequencing will become part of diagnostic routines shortly. However, limited accessibility to computational infrastructure and high quality bioinformatic tools, and the demand for personnel skilled in data analysis and interpretation remains a serious bottleneck. To this end, the cloud computing and Software-as-a-Service (SaaS) technologies can help address these issues. We successfully enabled the Atlas2 Cloud pipeline for personal genome analysis on two different cloud service platforms: a community cloud via the Genboree Workbench, and a commercial cloud via the Amazon Web Services using Software-as-a-Service model. We report a case study of personal genome analysis using our Atlas2 Genboree pipeline. We also outline a detailed cost structure for running Atlas2 Amazon on whole exome capture data, providing cost projections in terms of storage, compute and I/O when running Atlas2 Amazon on a large data set. We find that providing a web interface and an optimized pipeline clearly facilitates usage of cloud computing for personal genome analysis, but for it to be routinely used for large scale projects there needs to be a paradigm shift in the way we develop tools, in standard operating procedures, and in funding mechanisms.
UML as a cell and biochemistry modeling language.
Webb, Ken; White, Tony
2005-06-01
The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and is beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the unified modeling language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.
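The compartment-and-multiplicity style of model described above can be sketched in plain object-oriented code: compartment classes, a variable number of mitochondria, and enzymes carrying simple reaction rules. The species names and rates are invented, and the sketch stands in for the UML/ROOM models rather than reproducing CellAK.

# Sketch of an object-oriented, compartment-based cell model with a
# variable number of mitochondria and simple enzyme reaction rules.
class Enzyme:
    def __init__(self, name, substrate, product, rate):
        self.name, self.substrate, self.product, self.rate = name, substrate, product, rate
    def step(self, pool, dt):
        converted = min(pool.get(self.substrate, 0.0), self.rate * dt)
        pool[self.substrate] = pool.get(self.substrate, 0.0) - converted
        pool[self.product] = pool.get(self.product, 0.0) + converted

class Compartment:
    def __init__(self, name, metabolites=None, enzymes=None):
        self.name = name
        self.metabolites = dict(metabolites or {})
        self.enzymes = list(enzymes or [])
    def step(self, dt):
        for enzyme in self.enzymes:
            enzyme.step(self.metabolites, dt)

class Cell(Compartment):
    def __init__(self, n_mitochondria=3):
        super().__init__("cytosol", {"glucose": 100.0},
                         [Enzyme("hexokinase", "glucose", "g6p", rate=2.0)])
        self.mitochondria = [Compartment(f"mito-{i}", {"pyruvate": 10.0})
                             for i in range(n_mitochondria)]   # multiplicity
    def step(self, dt):
        super().step(dt)
        for mito in self.mitochondria:
            mito.step(dt)

cell = Cell()
for _ in range(10):
    cell.step(dt=1.0)
print(cell.metabolites)   # glucose consumed, g6p produced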
A new learning paradigm: learning using privileged information.
Vapnik, Vladimir; Vashist, Akshay
2009-01-01
In the Afterword to the second edition of the book "Estimation of Dependences Based on Empirical Data" by V. Vapnik, an advanced learning paradigm called Learning Using Hidden Information (LUHI) was introduced. This Afterword also suggested an extension of the SVM method (the so called SVM(gamma)+ method) to implement algorithms which address the LUHI paradigm (Vapnik, 1982-2006, Sections 2.4.2 and 2.5.3 of the Afterword). See also (Vapnik, Vashist, & Pavlovitch, 2008, 2009) for further development of the algorithms. In contrast to the existing machine learning paradigm where a teacher does not play an important role, the advanced learning paradigm considers some elements of human teaching. In the new paradigm along with examples, a teacher can provide students with hidden information that exists in explanations, comments, comparisons, and so on. This paper discusses details of the new paradigm and corresponding algorithms, introduces some new algorithms, considers several specific forms of privileged information, demonstrates superiority of the new learning paradigm over the classical learning paradigm when solving practical problems, and discusses general questions related to the new ideas.
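A deliberately simplified illustration of the privileged-information setting, not the SVM(gamma)+ algorithm itself, is sketched below: a privileged feature available only at training time is used to weight training examples (via a teacher model's confidence) before fitting an ordinary SVM on the regular features. The data, the weighting rule and the use of scikit-learn are assumptions made for illustration.

# Simplified illustration of learning with privileged information: x* is
# seen only at training time and informs per-example weights; the deployed
# model uses ordinary features only. This is NOT the SVM+ algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 400
x_star = rng.normal(size=(n, 1))                 # privileged feature (training only)
y = (x_star[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)
X = x_star + rng.normal(scale=1.0, size=(n, 2))  # ordinary, noisier features

# Teacher sees the privileged feature; its confidence becomes a sample weight.
teacher = LogisticRegression().fit(x_star, y)
confidence = teacher.predict_proba(x_star)[np.arange(n), y]

student = SVC(kernel="rbf")                      # deployed model: ordinary features only
student.fit(X, y, sample_weight=confidence)
print(student.score(X, y))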
Horizon Mission Methodology - A tool for the study of technology innovation and new paradigms
NASA Technical Reports Server (NTRS)
Anderson, John L.
1993-01-01
The Horizon Mission (HM) methodology was developed to provide a means of identifying and evaluating highly innovative, breakthrough technology concepts (BTCs) and for assessing their potential impact on advanced space missions. The methodology is based on identifying new capabilities needed by hypothetical 'horizon' space missions having performance requirements that cannot be met even by extrapolating known space technologies. Normal human evaluation of new ideas such as BTCs appears to be governed (and limited) by 'inner models of reality' defined as paradigms. Thus, new ideas are evaluated by old models. This paper describes the use of the HM Methodology to define possible future paradigms that would provide alternatives to evaluation by current paradigms. The approach is to represent a future paradigm by a set of new BTC-based capabilities - called a paradigm abstract. The paper describes methods of constructing and using the abstracts for evaluating BTCs for space applications and for exploring the concept of paradigms and paradigm shifts as a representation of technology innovation.
Assessing Selective Sustained Attention in 3- to 5-Year-Old Children: Evidence from a New Paradigm
ERIC Educational Resources Information Center
Fisher, Anna; Thiessen, Erik; Godwin, Karrie; Kloos, Heidi; Dickerson, John
2013-01-01
Selective sustained attention (SSA) is crucial for higher order cognition. Factors promoting SSA are described as exogenous or endogenous. However, there is little research specifying how these factors interact during development, due largely to the paucity of developmentally appropriate paradigms. We report findings from a novel paradigm designed…
The Global Imperatives for an Education Paradigm Shift.
ERIC Educational Resources Information Center
Bright, Larry K.; And Others
The future role of education is covered in a discussion concerning the shifting of the dominant social paradigm of the United States. It is noted that the paradigm is changing from one that requires social institutions to seek and develop human resources to maintain a position of competitive dominance, to an emerging view of world interdependence.…
Communities of Practice: A Research Paradigm for the Mixed Methods Approach
ERIC Educational Resources Information Center
Denscombe, Martyn
2008-01-01
The mixed methods approach has emerged as a "third paradigm" for social research. It has developed a platform of ideas and practices that are credible and distinctive and that mark the approach out as a viable alternative to quantitative and qualitative paradigms. However, there are also a number of variations and inconsistencies within the mixed…
What's Past Is Prologue: The Evolving Paradigms of Student Affairs
ERIC Educational Resources Information Center
Taylor, Simone Himbeault
2008-01-01
The purpose of this article is to frame--and reframe--the work of student affairs. Evolving paradigms have defined and advanced this work, which is dedicated to total student development and the betterment of society. The article promotes integrative learning as a new framework for student affairs. This paradigm, grounded in theory, research, and…
Zdravkovski, Zoran
2014-01-01
The development and availability of personal computers and software as well as printing techniques in the last twenty years have made a profound change in the publication of scientific journals. Additionally, the Internet in the last decade has revolutionized the publication process to the point of changing the basic paradigm of printed journals. The Macedonian Journal of Chemistry and Chemical Engineering in its 40-year history has adopted and adapted to all these transformations. In order to keep up with the inevitable changes, as editor-in-chief I felt my responsibility was to introduce electronic editorial management of the journal. The choice was between commercial and open source platforms, and because of the limited funding of the journal we chose the latter. We decided on Open Journal Systems, which provided online submission and management of all content, had flexible configuration--requirements, sections, review process, etc., had options for comprehensive indexing, offered various reading tools, had email notification and commenting ability for readers, had an option for thesis abstracts and was installed locally. However, since there is limited support, it requires moderate computer knowledge/skills and effort in order to set up. Overall, it is an excellent editorial platform and a convenient solution for journals with a low budget or journals that do not want to spend their resources on commercial platforms or simply support the idea of open source software.
The Experiment Factory: Standardizing Behavioral Experiments
Sochat, Vanessa V.; Eisenberg, Ian W.; Enkavi, A. Zeynep; Li, Jamie; Bissett, Patrick G.; Poldrack, Russell A.
2016-01-01
The administration of behavioral and experimental paradigms for psychology research is hindered by lack of a coordinated effort to develop and deploy standardized paradigms. While several frameworks (Mason and Suri, 2011; McDonnell et al., 2012; de Leeuw, 2015; Lange et al., 2015) have provided infrastructure and methods for individual research groups to develop paradigms, missing is a coordinated effort to develop paradigms linked with a system to easily deploy them. This disorganization leads to redundancy in development, divergent implementations of conceptually identical tasks, disorganized and error-prone code lacking documentation, and difficulty in replication. The ongoing reproducibility crisis in psychology and neuroscience research (Baker, 2015; Open Science Collaboration, 2015) highlights the urgency of this challenge: reproducible research in behavioral psychology is conditional on deployment of equivalent experiments. A large, accessible repository of experiments for researchers to develop collaboratively is most efficiently accomplished through an open source framework. Here we present the Experiment Factory, an open source framework for the development and deployment of web-based experiments. The modular infrastructure includes experiments, virtual machines for local or cloud deployment, and an application to drive these components and provide developers with functions and tools for further extension. We release this infrastructure with a deployment (http://www.expfactory.org) that researchers are currently using to run a set of over 80 standardized web-based experiments on Amazon Mechanical Turk. By providing open source tools for both deployment and development, this novel infrastructure holds promise to bring reproducibility to the administration of experiments, and accelerate scientific progress by providing a shared community resource of psychological paradigms. PMID:27199843
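As an illustration of the kind of standardization the abstract argues for, the sketch below validates a battery of web-based experiments from per-experiment manifests. The folder layout, the config.json filename, and the required fields are assumptions made for this example, not details taken from the Experiment Factory codebase.

    # Hypothetical sketch: validate a battery of web-based experiments, each described
    # by a per-folder manifest (assumed here to be config.json with a few required fields).
    import json
    from pathlib import Path

    REQUIRED_FIELDS = {"name", "run", "time"}  # assumed manifest keys, for illustration only

    def load_battery(battery_dir: str) -> list[dict]:
        """Collect and sanity-check experiment manifests under battery_dir."""
        experiments = []
        for manifest in sorted(Path(battery_dir).glob("*/config.json")):
            spec = json.loads(manifest.read_text())
            missing = REQUIRED_FIELDS - spec.keys()
            if missing:
                raise ValueError(f"{manifest.parent.name}: missing fields {sorted(missing)}")
            experiments.append(spec)
        return experiments

    if __name__ == "__main__":
        for spec in load_battery("experiments"):
            print(f"{spec['name']}: entry point {spec['run']}, ~{spec['time']} min")

Keeping every task to a single manifest schema is what lets one deployment tool enumerate, serve, and log conceptually identical experiments in the same way, which is the reproducibility argument the abstract makes.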