NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations
NASA Astrophysics Data System (ADS)
Frisbie, T. E.; Hall, C. M.
2006-12-01
Enterprise architecture describes the design of the components of an enterprise, their relationships, and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relationships among the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information, and map them to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to the decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.
Development of the Lymphoma Enterprise Architecture Database: a caBIG Silver level compliant system.
Huang, Taoying; Shenoy, Pareen J; Sinha, Rajni; Graiser, Michael; Bumpers, Kevin W; Flowers, Christopher R
2009-04-03
Lymphomas are the fifth most common cancer in United States with numerous histological subtypes. Integrating existing clinical information on lymphoma patients provides a platform for understanding biological variability in presentation and treatment response and aids development of novel therapies. We developed a cancer Biomedical Informatics Grid (caBIG) Silver level compliant lymphoma database, called the Lymphoma Enterprise Architecture Data-system (LEAD), which integrates the pathology, pharmacy, laboratory, cancer registry, clinical trials, and clinical data from institutional databases. We utilized the Cancer Common Ontological Representation Environment Software Development Kit (caCORE SDK) provided by National Cancer Institute's Center for Bioinformatics to establish the LEAD platform for data management. The caCORE SDK generated system utilizes an n-tier architecture with open Application Programming Interfaces, controlled vocabularies, and registered metadata to achieve semantic integration across multiple cancer databases. We demonstrated that the data elements and structures within LEAD could be used to manage clinical research data from phase 1 clinical trials, cohort studies, and registry data from the Surveillance Epidemiology and End Results database. This work provides a clear example of how semantic technologies from caBIG can be applied to support a wide range of clinical and research tasks, and integrate data from disparate systems into a single architecture. This illustrates the central importance of caBIG to the management of clinical and biological data.
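As a rough illustration of the n-tier, typed-API idea the abstract describes, the following Java sketch models a minimal application-service tier over lymphoma case records. It is not the caCORE SDK or LEAD API; the class names, fields, and the query-by-example style call are assumptions made for illustration only.

```java
// Illustrative sketch only: not the actual caCORE SDK or LEAD API.
// It mimics the idea of an n-tier system exposing a typed query API
// over domain objects tagged with controlled-vocabulary codes.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class LeadStyleQuerySketch {

    // Hypothetical domain object; field names are assumptions for illustration.
    static class LymphomaCase {
        final String patientId;
        final String histologyCode;   // e.g., an ICD-O-3 morphology code
        LymphomaCase(String patientId, String histologyCode) {
            this.patientId = patientId;
            this.histologyCode = histologyCode;
        }
    }

    // Application-service tier: hides the persistence tier behind a typed API.
    static class ApplicationService {
        private final List<LymphomaCase> store = new ArrayList<>();
        void register(LymphomaCase c) { store.add(c); }
        List<LymphomaCase> search(Predicate<LymphomaCase> example) {
            List<LymphomaCase> result = new ArrayList<>();
            for (LymphomaCase c : store) if (example.test(c)) result.add(c);
            return result;
        }
    }

    public static void main(String[] args) {
        ApplicationService svc = new ApplicationService();
        svc.register(new LymphomaCase("P001", "9690/3")); // follicular lymphoma (ICD-O-3)
        svc.register(new LymphomaCase("P002", "9680/3")); // DLBCL (ICD-O-3)
        // Query-by-example style call, loosely analogous to the generated API tier.
        List<LymphomaCase> hits = svc.search(c -> c.histologyCode.equals("9680/3"));
        System.out.println("Matching cases: " + hits.size());
    }
}
```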
ERIC Educational Resources Information Center
Bae, Kyoung-Il; Kim, Jung-Hyun; Huh, Soon-Young
2003-01-01
Discusses process information sharing among participating organizations in a virtual enterprise and proposes a federated process framework and system architecture that provide a conceptual design for effective implementation of process information sharing supporting the autonomy and agility of the organizations. Develops the framework using an…
E-health and healthcare enterprise information system leveraging service-oriented architecture.
Hsieh, Sung-Huai; Hsieh, Sheau-Ling; Cheng, Po-Hsun; Lai, Feipei
2012-04-01
To present the successful experiences of an integrated, collaborative, distributed, large-scale enterprise healthcare information system over a wired and wireless infrastructure in National Taiwan University Hospital (NTUH). In order to transfer smoothly and sequentially from the complex relations among the old (legacy) systems to the new-generation enterprise healthcare information system, we adopted a multitier framework based on service-oriented architecture to integrate the heterogeneous systems as well as to interoperate among many other components and multiple databases. We also present mechanisms of a logical layer reusability approach and data (message) exchange flow via Health Level 7 (HL7) middleware, the DICOM standard, and the Integrating the Healthcare Enterprise workflow. The architecture and protocols of the NTUH enterprise healthcare information system, especially the Inpatient Information System (IIS), are discussed in detail. The NTUH Inpatient Healthcare Information System is designed and deployed on service-oriented architecture middleware frameworks. The mechanisms of integration as well as interoperability among the components and the multiple databases apply the HL7 standards for data exchanges, which are embedded in XML formats, and Microsoft .NET Web services to integrate heterogeneous platforms. The preliminary performance of the currently operating IIS is evaluated and analyzed to verify the efficiency and effectiveness of the designed architecture; it shows reliability and robustness in the highly demanding traffic environment of NTUH. The newly developed NTUH IIS provides an open and flexible environment not only to share medical information easily among other branch hospitals, but also to reduce the cost of maintenance. The HL7 message standard is widely adopted to cover all data exchanges in the system. All services are independent modules that enable the system to be deployed and configured with the highest degree of flexibility. Furthermore, we can conclude that the multitier Inpatient Healthcare Information System has been designed successfully and in a collaborative manner, based on performance evaluations of central processing unit and memory utilization.
2005-04-12
Hardware, database, and operating system independence using Java; enterprise-class architecture using Java 2 Enterprise Edition 1.4; standards-based portal applications; authentication and authorization; portal standards using the Java Specification Request for Portlet APIs (JSR-168) (Portlet API) and Web Services for Remote Portlets.
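A minimal sketch of what a JSR-168 (Portlet API 1.0) portlet looks like, since the fragment above highlights portlet standards compliance; the class name and rendered text are invented, and the portlet.xml / web.xml descriptors required for deployment are omitted.

```java
// A minimal JSR-168 (Portlet API 1.0) portlet sketch; deployment descriptors
// (portlet.xml, web.xml) are omitted and would be required in a real portal.
import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

public class HelloPortlet extends GenericPortlet {
    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        // The rendered fragment is aggregated by the portal page, not a full HTML document.
        out.println("<p>Hello from a standards-based (JSR-168) portlet.</p>");
    }
}
```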
Implementing GermWatcher, an enterprise infection control application.
Doherty, Joshua; Noirot, Laura A; Mayfield, Jennie; Ramiah, Sridhar; Huang, Christine; Dunagan, Wm Claiborne; Bailey, Thomas C
2006-01-01
Automated surveillance tools can provide significant advantages to infection control practitioners. When stored in a relational database, the data collected can also be used to support numerous research and quality improvement opportunities. A previously described electronic infection control surveillance system was remodeled to provide multi-hospital support, an XML-based rule set, and interoperability with an enterprise terminology server. This paper describes the new architecture being used at hospitals across BJC HealthCare.
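The abstract mentions an XML-based rule set; the sketch below shows, under assumptions, how such a rule might be represented and evaluated in Java. The rule schema, element names, and organism/resistance values are invented and are not the published GermWatcher format.

```java
// Illustrative sketch of an XML-driven surveillance rule, assuming a made-up
// rule format; the actual GermWatcher rule schema is not reproduced here.
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class SurveillanceRuleSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical rule: flag cultures positive for a watch-list organism/resistance pair.
        String ruleXml =
            "<rule id='mrsa-screen'>" +
            "  <organism>Staphylococcus aureus</organism>" +
            "  <resistance>oxacillin</resistance>" +
            "</rule>";
        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Element rule = db.parse(new InputSource(new StringReader(ruleXml))).getDocumentElement();

        String organism = rule.getElementsByTagName("organism").item(0).getTextContent().trim();
        String resistance = rule.getElementsByTagName("resistance").item(0).getTextContent().trim();

        // A toy culture result to evaluate against the rule.
        String cultureOrganism = "Staphylococcus aureus";
        String cultureResistance = "oxacillin";
        boolean flagged = organism.equals(cultureOrganism) && resistance.equals(cultureResistance);
        System.out.println("Rule " + rule.getAttribute("id") + " fired: " + flagged);
    }
}
```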
Suggestions for Documenting SOA-Based Systems
2010-09-01
This report was prepared under Contract Number FA8721-05-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. It discusses documentation that supports understandability even across an enterprise, technical reference models (e.g., Oracle database management systems), architectural patterns, and key aspects of the architecture that communicate something important to the stakeholders maintaining the system.
Streamlining the Process of Acquiring Secure Open Architecture Software Systems
2013-10-08
Microsoft .NET, Enterprise JavaBeans, GNU Lesser General Public License (LGPL) libraries, and data communication protocols like the Hypertext Transfer Protocol; development environments (e.g., NetBeans), customer relationship management (SugarCRM), database management systems (PostgreSQL, MySQL), and operating systems.
The Perception of Human Resources Enterprise Architecture within the Department of Defense
ERIC Educational Resources Information Center
Delaquis, Richard Serge
2012-01-01
The Clinger Cohen Act of 1996 requires that all major Federal Government Information Technology (IT) systems prepare an Enterprise Architecture prior to IT acquisitions. Enterprise Architecture, like house blueprints, represents the system build, capabilities, processes, and data across the enterprise of IT systems. Enterprise Architecture is used…
Federated querying architecture with clinical & translational health IT application.
Livne, Oren E; Schultz, N Dustin; Narus, Scott P
2011-10-01
We present a software architecture that federates data from multiple heterogeneous health informatics data sources owned by multiple organizations. The architecture builds upon state-of-the-art open-source Java and XML frameworks in innovative ways. It consists of (a) a federated query engine, which manages federated queries and result set aggregation via a patient identification service; and (b) data source facades, which translate the physical data models into a common model on the fly and handle large result set streaming. System modules are connected via reusable Apache Camel integration routes and deployed to an OSGi enterprise service bus. We present an application of our architecture that allows users to construct queries via the i2b2 web front-end, and federates patient data from the University of Utah Enterprise Data Warehouse and the Utah Population Database. Our system can be easily adopted, extended, and integrated with existing SOA Healthcare and HL7 frameworks such as i2b2 and caGrid.
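To make the reusable Apache Camel integration routes concrete, here is a minimal Camel route in Java that fans a query out to two data-source facades. The endpoint names, query string, and in-memory setup are assumptions; the real system runs inside an OSGi enterprise service bus and aggregates results via a patient identification service, which is not shown.

```java
// A sketch of a reusable Camel integration route that fans a federated query
// out to two data-source facades; endpoint names and the query are invented.
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FederatedQueryRouteSketch {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Fan the incoming query out to two facades; each facade would
                // translate its physical data model to the common model on the fly.
                from("direct:federatedQuery")
                    .multicast()
                    .to("direct:warehouseFacade", "direct:populationDbFacade");

                from("direct:warehouseFacade")
                    .log("Warehouse facade received: ${body}");
                from("direct:populationDbFacade")
                    .log("Population DB facade received: ${body}");
            }
        });
        context.start();
        ProducerTemplate template = context.createProducerTemplate();
        template.sendBody("direct:federatedQuery", "age >= 65 AND diagnosis = 'diabetes'");
        context.stop();
    }
}
```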
ERIC Educational Resources Information Center
Harrell, J. Michael
2011-01-01
Enterprise architecture is a relatively new concept that arose in the latter half of the twentieth century as a means of managing the information technology resources within the enterprise. Borrowing from the disciplines of brick and mortar architecture, software engineering, software architecture, and systems engineering, the enterprise…
A Workshop on Analysis and Evaluation of Enterprise Architectures
2010-11-01
Contents include: Bounding Enterprise Architecture in Practice; Enterprise Architecture Design and Documentation Practices; Federation and Acquisition; Summary (Workshop Findings, Future Work); and Appendix A, Survey of Enterprise Architecture.
Hybridization of Architectural Styles for Integrated Enterprise Information Systems
NASA Astrophysics Data System (ADS)
Bagusyte, Lina; Lupeikiene, Audrone
Current enterprise systems engineering theory does not yet provide adequate support for the development of information systems on demand; more precisely, it is still taking shape. This chapter proposes the main architectural decisions that underlie the design of integrated enterprise information systems. It argues for extending service-oriented architecture by merging it with the component-based paradigm at the design stage and using connectors of different architectural styles. The suitability of the general-purpose language SysML for modelling integrated enterprise information system architectures is described and arguments in its favour are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Begoli, Edmon; Bates, Jack; Kistler, Derek E
The Polystore architecture revisits the federated approach to accessing and querying standalone, independent databases in a uniform and optimized fashion, but this time in the context of heterogeneous data and specialized analyses. In the light of this architectural philosophy, and of the major data architecture development efforts at the US Department of Veterans Affairs (VA), we discuss the need for a heterogeneous data store consisting of a large relational data warehouse, an image and text datastore, and a peta-scale genomic repository. The VA's heterogeneous datastore would, to a larger or smaller degree, follow the architectural blueprint proposed by the polystore architecture. To this end, we discuss the current state of the data architecture at VA, architectural alternatives for development of the heterogeneous datastore, the anticipated challenges, and the drawbacks and benefits of adopting the polystore architecture.
Software architecture and engineering for patient records: current and future.
Weng, Chunhua; Levine, Betty A; Mun, Seong K
2009-05-01
During the "The National Forum on the Future of the Defense Health Information System," a track focusing on "Systems Architecture and Software Engineering" included eight presenters. These presenters identified three key areas of interest in this field, which include the need for open enterprise architecture and a federated database design, net centrality based on service-oriented architecture, and the need for focus on software usability and reusability. The eight panelists provided recommendations related to the suitability of service-oriented architecture and the enabling technologies of grid computing and Web 2.0 for building health services research centers and federated data warehouses to facilitate large-scale collaborative health care and research. Finally, they discussed the need to leverage industry best practices for software engineering to facilitate rapid software development, testing, and deployment.
Using CORBA to integrate manufacturing cells to a virtual enterprise
NASA Astrophysics Data System (ADS)
Pancerella, Carmen M.; Whiteside, Robert A.
1997-01-01
It is critical in today's enterprises that manufacturing facilities are not isolated from design, planning, and other business activities and that information flows easily and bidirectionally between these activities. It is also important and cost-effective that COTS software, databases, and corporate legacy codes are well integrated in the information architecture. Further, much of the information generated during manufacturing must be dynamically accessible to engineering and business operations, both in a restricted corporate intranet and on the internet. The software integration strategy in the Sandia Agile Manufacturing Testbed supports these enterprise requirements. We are developing a CORBA-based distributed object software system for manufacturing. Each physical machining device is a CORBA object and exports a common IDL interface to allow for rapid and dynamic insertion, deletion, and upgrading within the manufacturing cell. Cell management CORBA components access manufacturing devices without knowledge of any device-specific implementation. To support information flow from design to planning, data is accessible to machinists on the shop floor. CORBA allows manufacturing components to be easily accessible to the enterprise. Dynamic clients can be created using web browsers and portable Java GUIs. A CORBA-OLE adapter allows integration with PC desktop applications. Other commercial software can access CORBA network objects in the information architecture through vendor APIs.
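A hedged sketch of the client side of the pattern described above: a cell-management component resolving a machining-device object from the CORBA naming service. The registered name is invented, and the device-specific MachiningDevice interface and its Helper class would come from IDL-generated stubs that are not shown.

```java
// Sketch of a CORBA client locating a machining-device object by name; the
// MachiningDevice interface and its Helper class would be generated from a
// (hypothetical) common IDL definition, so only the lookup is shown here.
import java.util.Properties;
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

public class CellManagerClientSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();   // ORB endpoint settings would go here
        ORB orb = ORB.init(args, props);

        // Locate the naming service and look the device up by its registered name.
        NamingContextExt naming =
            NamingContextExtHelper.narrow(orb.resolve_initial_references("NameService"));
        org.omg.CORBA.Object deviceRef = naming.resolve_str("MillingCell/Lathe01");

        // With IDL-generated stubs this reference would be narrowed to the
        // device-specific type, e.g.:
        //   MachiningDevice device = MachiningDeviceHelper.narrow(deviceRef);
        //   device.submitJob(plan);
        System.out.println("Resolved device reference: " + deviceRef);
    }
}
```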
ERIC Educational Resources Information Center
Tambouris, Efthimios; Zotou, Maria; Kalampokis, Evangelos; Tarabanis, Konstantinos
2012-01-01
Enterprise architecture (EA) implementation refers to a set of activities ultimately aiming to align business objectives with information technology infrastructure in an organization. EA implementation is a multidisciplinary, complicated and endless process, hence, calls for adequate education and training programs that will build highly skilled…
Space Internet Architectures and Technologies for NASA Enterprises
NASA Technical Reports Server (NTRS)
Bhasin, Kul; Hayden, Jeffrey L.
2001-01-01
NASA's future communications services will be supplied through a space communications network that mirrors the terrestrial Internet in its capabilities and flexibility. The notional requirements for future data gathering and distribution by this Space Internet have been gathered from NASA's Earth Science Enterprise (ESE), the Human Exploration and Development of Space (HEDS), and the Space Science Enterprise (SSE). This paper describes a communications infrastructure for the Space Internet, the architectures within the infrastructure, and the elements that make up the architectures. The architectures meet the requirements of the enterprises beyond 2010 with Internet-compatible technologies and functionality. The elements of an architecture include the backbone, access, inter-spacecraft, and proximity communication parts. From the architectures, technologies have been identified which have the most impact and are critical for the implementation of the architectures.
Research of Manufacture Time Management System Based on PLM
NASA Astrophysics Data System (ADS)
Jing, Ni; Juan, Zhu; Liangwei, Zhong
This system targets enterprise machine shops: it analyzes their business needs and builds a plant management information system for manufacture time and manufacture-time information management in the manufacturing process. Combined with web technology and based on Excel VBA development methods, it constructs a hybrid, PLM-based framework for a workshop manufacture time management information system and discusses the functionality of the system architecture and the database structure.
Hripcsak, George
1997-01-01
An information system architecture defines the components of a system and the interfaces among the components. A good architecture is essential for creating an Integrated Advanced Information Management System (IAIMS) that works as an integrated whole yet is flexible enough to accommodate many users and roles, multiple applications, changing vendors, evolving user needs, and advancing technology. Modularity and layering promote flexibility by reducing the complexity of a system and by restricting the ways in which components may interact. Enterprise-wide mediation promotes integration by providing message routing, support for standards, dictionary-based code translation, a centralized conceptual data schema, business rule implementation, and consistent access to databases. Several IAIMS sites have adopted a client-server architecture, and some have adopted a three-tiered approach, separating user interface functions, application logic, and repositories. PMID:9067884
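A minimal sketch, under assumptions, of the dictionary-based code translation an enterprise-wide mediator might perform. The local codes, the mapping table, and the error-handling policy are invented; a real IAIMS mediator would also route messages and enforce business rules.

```java
// A minimal sketch of the dictionary-based code translation an enterprise-wide
// mediator might perform while routing messages; the mappings are invented.
import java.util.HashMap;
import java.util.Map;

public class CodeTranslationMediatorSketch {
    // Dictionary from a local lab code to an assumed enterprise-standard code.
    private static final Map<String, String> LOCAL_TO_ENTERPRISE = new HashMap<>();
    static {
        LOCAL_TO_ENTERPRISE.put("GLU-SER", "2345-7");   // e.g., a LOINC-style code
        LOCAL_TO_ENTERPRISE.put("NA-SER", "2951-2");
    }

    static String translate(String localCode) {
        String mapped = LOCAL_TO_ENTERPRISE.get(localCode);
        if (mapped == null) {
            // Consistent handling of unmapped codes is part of the mediation contract.
            throw new IllegalArgumentException("No enterprise code for " + localCode);
        }
        return mapped;
    }

    public static void main(String[] args) {
        System.out.println("GLU-SER -> " + translate("GLU-SER"));
    }
}
```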
2004-06-01
The security concern, when applied to the different viewpoints, addresses both stakeholders (the business stakeholder and the IT architect) and is described as a business security model or component view within the architecture description of the enterprise or infostructure.
NASA Technical Reports Server (NTRS)
Kolar, Mike; Estefan, Jeff; Giovannoni, Brian; Barkley, Erik
2011-01-01
Topics covered: (1) Why Governance and Why Now? (2) Characteristics of Architecture Governance (3) Strategic Elements (3a) Architectural Principles (3b) Architecture Board (3c) Architecture Compliance (4) Architecture Governance Infusion Process. Governance is concerned with decision making (i.e., setting directions, establishing standards and principles, and prioritizing investments). Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level.
NASA Astrophysics Data System (ADS)
Bolton, Richard W.; Dewey, Allen; Horstmann, Paul W.; Laurentiev, John
1997-01-01
This paper examines the role virtual enterprises will have in supporting future business engagements and the resulting technology requirements. Two representative end-user scenarios are proposed that define the requirements for 'plug-and-play' information infrastructure frameworks and architectures necessary to enable 'virtual enterprises' in US manufacturing industries. The scenarios provide a high-level 'needs analysis' for identifying key technologies, defining a reference architecture, and developing compliant reference implementations. Virtual enterprises are short-term consortia or alliances of companies formed to address fast-changing opportunities. Members of a virtual enterprise carry out their tasks as if they all worked for a single organization under 'one roof', using 'plug-and-play' information infrastructure frameworks and architectures to access and manage all information needed to support the product cycle. 'Plug-and-play' information infrastructure frameworks and architectures are required to enhance collaboration between companies working together on different aspects of a manufacturing process. This new form of collaborative computing will decrease cycle time and increase responsiveness to change.
Weighted Components of i-Government Enterprise Architecture
NASA Astrophysics Data System (ADS)
Budiardjo, E. K.; Firmansyah, G.; Hasibuan, Z. A.
2017-01-01
Poor government performance is due, among other things, to a lack of coordination and communication among government agencies. Enterprise Architecture (EA) in government can be used as a strategic planning tool to improve productivity, efficiency, and effectiveness. However, the existing components of Government Enterprise Architecture (GEA) do not indicate their level of importance, which makes it difficult to implement good e-government for good governance. This study explores the weights of GEA components using Principal Component Analysis (PCA) in order to discover an inherent structure of e-government. The results show that the IT governance component of GEA plays a major role in the GEA; the remaining components, consisting of the e-government system, e-government regulation, e-government management, and key operational applications, contribute more or less equally. In addition, GEAs from other countries are analyzed comparatively on the basis of common enterprise architecture components. These weighted components are used to construct the i-Government enterprise architecture and show the relative importance of each component in order to establish priorities in developing e-government.
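One plausible way to turn PCA results into component weights is sketched below; this is an assumption for illustration, not necessarily the exact weighting scheme used in the study.

```latex
% One common weighting scheme (an assumption, not necessarily the paper's exact method):
% each GEA component j receives a weight proportional to its PCA loadings,
% with each principal component k weighted by its explained-variance ratio.
\[
  w_j \;=\; \frac{\sum_{k=1}^{K} \lambda_k \, |a_{jk}|}
                 {\sum_{j'=1}^{p} \sum_{k=1}^{K} \lambda_k \, |a_{j'k}|},
  \qquad
  \lambda_k = \frac{\operatorname{Var}(\mathrm{PC}_k)}{\sum_{m=1}^{p}\operatorname{Var}(\mathrm{PC}_m)},
\]
% where $a_{jk}$ is the loading of component $j$ on principal component $\mathrm{PC}_k$,
% $K$ is the number of retained components, and the weights $w_j$ sum to one.
```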
Teaching Case: Enterprise Architecture Specification Case Study
ERIC Educational Resources Information Center
Steenkamp, Annette Lerine; Alawdah, Amal; Almasri, Osama; Gai, Keke; Khattab, Nidal; Swaby, Carval; Abaas, Ramy
2013-01-01
A graduate course in enterprise architecture had a team project component in which a real-world business case, provided by an industry sponsor, formed the basis of the project charter and the architecture statement of work. The paper aims to share the team project experience on developing the architecture specifications based on the business case…
Enterprise application architecture development based on DoDAF and TOGAF
NASA Astrophysics Data System (ADS)
Tao, Zhi-Gang; Luo, Yun-Feng; Chen, Chang-Xin; Wang, Ming-Zhe; Ni, Feng
2017-05-01
For the purpose of supporting the design and analysis of enterprise application architecture, we report here a tailored enterprise application architecture description framework and its corresponding design method. The presented framework can effectively support service-oriented architecting and cloud computing by creating the metadata model based on the architecture content framework (ACF), the DoDAF metamodel (DM2) and the Cloud Computing Modelling Notation (CCMN). The framework also makes an effort to extend and improve the mapping between The Open Group Architecture Framework (TOGAF) application architectural inputs/outputs, deliverables and Department of Defense Architecture Framework (DoDAF)-described models. The roadmap of 52 DoDAF-described models is constructed by creating the metamodels of these described models and analysing the constraint relationship among metamodels. By combining the tailored framework and the roadmap, this article proposes a service-oriented enterprise application architecture development process. Finally, a case study is presented to illustrate the results of implementing the tailored framework in the Southern Base Management Support and Information Platform construction project using the development process proposed by the paper.
2002-09-01
The federal government has 55 databases that deal with security threats, but inter-agency access depends on establishing agreements through which that information can be shared. True cooperation also will require government-wide commitment to enterprise architecture, integrated...
Predefined three tier business intelligence architecture in healthcare enterprise.
Wang, Meimei
2013-04-01
Business Intelligence (BI) has attracted extensive attention and widespread use in gathering, processing, and analyzing data and in providing enterprise users with a methodology for making decisions. Different from traditional BI architecture, this paper proposes a new BI architecture, a Top-Down Scalable BI architecture with a defining mechanism for enterprise decision-making solutions, and aims at establishing a rapid, consistent, and scalable BI mechanism for multiple applications on multiple platforms. The two opposite information flows in our BI architecture offer the merits of maintaining a high-level organizational perspective and making full use of existing resources. We also introduce the avg-bed-waiting-time factor to evaluate hospital care capacity.
1998-01-24
the Apparel Manufacturing Architecture (AMA), a generic architecture for an apparel enterprise. ARN-AIMS consists of three modules: Order Processing, Order Tracking, and Shipping & Invoicing. The Order Processing Module is designed to facilitate the entry of customer orders for stock and special
Analysis of central enterprise architecture elements in models of six eHealth projects.
Virkanen, Hannu; Mykkänen, Juha
2014-01-01
Large-scale initiatives for eHealth services have been established in many countries on regional or national level. The use of Enterprise Architecture has been suggested as a methodology to govern and support the initiation, specification and implementation of large-scale initiatives including the governance of business changes as well as information technology. This study reports an analysis of six health IT projects in relation to Enterprise Architecture elements, focusing on central EA elements and viewpoints in different projects.
The architecture of enterprise hospital information system.
Lu, Xudong; Duan, Huilong; Li, Haomin; Zhao, Chenhui; An, Jiye
2005-01-01
Because of the complexity of the hospital environment, there exist many medical information systems from different vendors with incompatible structures. In order to establish an enterprise hospital information system, the integration among these heterogeneous systems must be considered. Complete integration should cover three aspects: data integration, function integration, and workflow integration. However, most previous architecture designs did not accomplish such complete integration. This article offers an architecture design for the enterprise hospital information system based on the concept of a digital neural network system in the hospital. It covers all three aspects of integration and eventually achieves the target of one virtual data center with an Enterprise Viewer for users of different roles. The initial implementation of the architecture in the 5-year Digital Hospital Project at Huzhou Central Hospital of Zhejiang Province is also described.
Impact of Enterprise Architecture on Architecture Agility and Coherence
ERIC Educational Resources Information Center
Abaas, Kanari
2009-01-01
IT has permeated to the very roots of organizations and has an ever increasingly important role in the achievement of overall corporate objectives and business strategies. This paper presents an approach for evaluating the impact of existing Enterprise Architecture (EA) implementations. The paper answers questions such as: What are the challenges…
Using Solid State Drives as a Mid-Tier Cache in Enterprise Database OLTP Applications
NASA Astrophysics Data System (ADS)
Khessib, Badriddine M.; Vaid, Kushagra; Sankar, Sriram; Zhang, Chengliang
When originally introduced, flash-based solid state drives (SSD) exhibited very high random read throughput with low sub-millisecond latencies. However, in addition to their steep prices, SSDs suffered from slow write rates and reliability concerns related to cell wear. For these reasons, they were relegated to a niche status in the consumer and personal computer market. Since then, several architectural enhancements have been introduced that led to a substantial increase in random write operations as well as a reasonable improvement in reliability. From a purely performance point of view, these high I/O rates and improved reliability make SSDs an ideal choice for enterprise On-Line Transaction Processing (OLTP) applications. However, from a price/performance point of view, the case for SSDs may not be clear. Enterprise-class SSD price per GB continues to be at least 10x higher than that of conventional magnetic hard disk drives (HDD), despite a considerable drop in flash chip prices.
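To illustrate the mid-tier caching idea in code, the Java sketch below keeps hot database pages in a fixed-capacity, LRU-evicted tier standing in for the SSD, with misses falling through to the HDD store. Capacities, page sizes, and identifiers are invented for illustration.

```java
// Sketch of the cache-management idea behind an SSD mid-tier: hot pages are
// promoted into a fixed-capacity, LRU-evicted tier in front of the HDD store.
// Capacities and page identifiers here are invented for illustration.
import java.util.LinkedHashMap;
import java.util.Map;

public class MidTierCacheSketch {
    static class SsdTier extends LinkedHashMap<Long, byte[]> {
        private final int capacityPages;
        SsdTier(int capacityPages) {
            super(16, 0.75f, true);            // access-order = LRU behaviour
            this.capacityPages = capacityPages;
        }
        @Override
        protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
            return size() > capacityPages;     // evict cold pages back to HDD-only
        }
    }

    public static void main(String[] args) {
        SsdTier ssd = new SsdTier(3);
        for (long page = 1; page <= 5; page++) {
            ssd.put(page, new byte[8192]);     // 8 KiB database pages, say
        }
        // Pages 1 and 2 were evicted; a read for them would fall through to HDD.
        System.out.println("Pages resident on SSD tier: " + ssd.keySet());
    }
}
```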
Privacy Protection Standards for the Information Sharing Environment
2009-09-01
enable ISE participants to share information and data (see ISE Implementation Plan, p. 51, and ISE Enterprise Architecture Framework, pp. 67, 73-74). Other references include the Privacy Act, 5 U.S.C. § 552a, as amended, and Program Manager-Information Sharing Environment (2008), Information Sharing Enterprise Architecture Framework.
An Enterprise Architecture Perspective to Electronic Health Record Based Care Governance.
Motoc, Bogdan
2017-01-01
This paper proposes an Enterprise Architecture viewpoint of Electronic Health Record (EHR) based care governance. The improvements expected are derived from the collaboration framework and the clinical health model proposed as foundation for the concept of EHR.
DOT National Transportation Integrated Search
2016-08-01
The purpose of this research report is to develop a Strategic Enterprise Architecture (EA) Design and Implementation Plan for the Montana Department of Transportation (MDT). Information management systems are vital to maintaining the State's transp...
An agile enterprise regulation architecture for health information security management.
Chen, Ying-Pei; Hsieh, Sung-Huai; Cheng, Po-Hsun; Chien, Tsan-Nan; Chen, Heng-Shuen; Luh, Jer-Junn; Lai, Jin-Shin; Lai, Feipei; Chen, Sao-Jie
2010-09-01
Information security management for healthcare enterprises is complex as well as mission critical. Information technology requests from clinical users are of such urgency that the information office should do its best to achieve as many user requests as possible at a high service level using swift security policies. This research proposes the Agile Enterprise Regulation Architecture (AERA) of information security management for healthcare enterprises to implement as part of the electronic health record process. Survey outcomes and evidential experiences from a sample of medical center users proved that AERA encourages the information officials and enterprise administrators to overcome the challenges faced within an electronically equipped hospital.
Managing the Evolution of an Enterprise Architecture using a MAS-Product-Line Approach
NASA Technical Reports Server (NTRS)
Pena, Joaquin; Hinchey, Michael G.; Resinas, manuel; Sterritt, Roy; Rash, James L.
2006-01-01
We view an evolutionary system as being a software product line. The core architecture is the unchanging part of the system, and each version of the system may be viewed as a product from the product line. Each "product" may be described as the core architecture with some agent-based additions. The result is a multiagent system software product line. We describe an approach to such a Software Product Line-based approach using the MaCMAS Agent-Oriented methodology. The approach scales to enterprise architectures, as a multiagent system is an appropriate means of representing a changing enterprise architecture and the interaction between components in it.
Chen, Elizabeth S.; Maloney, Francine L.; Shilmayster, Eugene; Goldberg, Howard S.
2009-01-01
A systematic and standard process for capturing information within free-text clinical documents could facilitate opportunities for improving quality and safety of patient care, enhancing decision support, and advancing data warehousing across an enterprise setting. At Partners HealthCare System, the Medical Language Processing (MLP) services project was initiated to establish a component-based architectural model and processes to facilitate putting MLP functionality into production for enterprise consumption, promote sharing of components, and encourage reuse. Key objectives included exploring the use of an open-source framework called the Unstructured Information Management Architecture (UIMA) and leveraging existing MLP-related efforts, terminology, and document standards. This paper describes early experiences in defining the infrastructure and standards for extracting, encoding, and structuring clinical observations from a variety of clinical documents to serve enterprise-wide needs. PMID:20351830
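A minimal UIMA analysis-engine sketch in Java, in the spirit of the reusable MLP components described above. The blood-pressure pattern, the use of the generic Annotation type, and the class name are assumptions, and the XML descriptor needed to run the component in a pipeline is omitted.

```java
// A minimal UIMA analysis-engine sketch in the spirit of a reusable MLP
// component; the regular expression and the use of the generic Annotation
// type are assumptions, and the XML descriptor needed to run it is omitted.
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.uima.analysis_component.JCasAnnotator_ImplBase;
import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
import org.apache.uima.jcas.JCas;
import org.apache.uima.jcas.tcas.Annotation;

public class BloodPressureAnnotator extends JCasAnnotator_ImplBase {
    // Toy pattern for systolic/diastolic readings such as "120/80".
    private static final Pattern BP = Pattern.compile("\\b\\d{2,3}/\\d{2,3}\\b");

    @Override
    public void process(JCas jcas) throws AnalysisEngineProcessException {
        Matcher m = BP.matcher(jcas.getDocumentText());
        while (m.find()) {
            // A production component would create a typed clinical annotation
            // mapped to a standard terminology rather than the generic type.
            Annotation a = new Annotation(jcas, m.start(), m.end());
            a.addToIndexes();
        }
    }
}
```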
Linking Humans to Data: Designing an Enterprise Architecture for EarthCube
NASA Astrophysics Data System (ADS)
Xu, C.; Yang, C.; Meyer, C. B.
2013-12-01
National Science Foundation (NSF)'s EarthCube is a strategic initiative towards a grand enterprise that holistically incorporates different geoscience research domains. The EarthCube as envisioned by NSF is a community-guided cyberinfrastructure (NSF 2011). The design of an EarthCube enterprise architecture (EA) offers a vision to harmonize processes between the operations of EarthCube and its information technology foundation, the geospatial cyberinfrastructure (Yang et al. 2010). We envision these processes as linking humans to data. We report here on fundamental ideas that would ultimately materialize as a conceptual design of the EarthCube EA. EarthCube can be viewed as a meta-science that seeks to advance knowledge of the Earth through cross-disciplinary connections made using conventional domain-based earth science research. In order to build capacity that enables crossing disciplinary chasms, a key step would be to identify the cornerstones of the envisioned enterprise architecture. Human and data inputs are the two key factors in the success of EarthCube (NSF 2011), based upon which three hypotheses have been made: 1) cross-disciplinary collaboration has to be achieved through data sharing; 2) disciplinary differences need to be articulated and captured in both computer- and human-understandable formats; 3) human intervention is crucial for crossing the disciplinary chasms. We have selected the Federal Enterprise Architecture Framework (FEAF, CIO Council 2013) as the baseline for the envisioned EarthCube EA, noting that the FEAF's deficiencies can be improved upon with inputs from three other popular EA frameworks. This presentation reports the latest on the conceptual design of an enterprise architecture in support of EarthCube.
NASA Technical Reports Server (NTRS)
Ticker, Ronald L.; Azzolini, John D.
2000-01-01
The study investigates NASA's Earth Science Enterprise needs for Distributed Spacecraft Technologies in the 2010-2025 timeframe. In particular, the study focused on the Earth Science Vision Initiative and extrapolation of the measurement architecture from the 2002-2010 time period. Earth Science Enterprise documents were reviewed. Interviews were conducted with a number of Earth scientists and technologists. Fundamental principles of formation flying were also explored. The results led to the development of four notional distributed spacecraft architectures. These four notional architectures (global constellations, virtual platforms, precision formation flying, and sensorwebs) are presented. They broadly and generically cover the distributed spacecraft architectures needed by Earth Science in the post-2010 era. These notional architectures are used to identify technology needs and drivers. Technology needs are subsequently grouped into five categories: systems and architecture development tools; miniaturization, production, manufacture, test and calibration; data networks and information management; orbit control, planning and operations; and launch and deployment. The current state of the art and expected developments are explored. High-value technology areas are identified for possible future funding emphasis.
Evaluating the Effectiveness of Reference Models in Federating Enterprise Architectures
ERIC Educational Resources Information Center
Wilson, Jeffery A.
2012-01-01
Agencies need to collaborate with each other to perform missions, improve mission performance, and find efficiencies. The ability of individual government agencies to collaborate with each other for mission and business success and efficiency is complicated by the different techniques used to describe their Enterprise Architectures (EAs).…
Generating a Corpus of Mobile Forensic Images for Masquerading user Experimentation.
Guido, Mark; Brooks, Marc; Grover, Justin; Katz, Eric; Ondricek, Jared; Rogers, Marcus; Sharpe, Lauren
2016-11-01
The Periodic Mobile Forensics (PMF) system investigates user behavior on mobile devices. It applies forensic techniques to an enterprise mobile infrastructure, utilizing an on-device agent named TractorBeam. The agent collects changed storage locations for later acquisition, reconstruction, and analysis. TractorBeam provides its data to an enterprise infrastructure that consists of a cloud-based queuing service, relational database, and analytical framework for running forensic processes. During a 3-month experiment with Purdue University, TractorBeam was utilized in a simulated operational setting across 34 users to evaluate techniques to identify masquerading users (i.e., users other than the intended device user). The research team surmises that all masqueraders are undesirable to an enterprise, even when a masquerader lacks malicious intent. The PMF system reconstructed 821 forensic images, extracted one million audit events, and accurately detected masqueraders. Evaluation revealed that developed methods reduced storage requirements 50-fold. This paper describes the PMF architecture, performance of TractorBeam throughout the protocol, and results of the masquerading user analysis. © 2016 American Academy of Forensic Sciences.
Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O
2008-11-06
In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse.
Extending enterprise architecture modelling with business goals and requirements
NASA Astrophysics Data System (ADS)
Engelsman, Wilco; Quartel, Dick; Jonkers, Henk; van Sinderen, Marten
2011-02-01
The methods for enterprise architecture (EA), such as The Open Group Architecture Framework, acknowledge the importance of requirements modelling in the development of EAs. Modelling support is needed to specify, document, communicate and reason about goals and requirements. The current modelling techniques for EA focus on the products, services, processes and applications of an enterprise. In addition, techniques may be provided to describe structured requirements lists and use cases. Little support is available however for modelling the underlying motivation of EAs in terms of stakeholder concerns and the high-level goals that address these concerns. This article describes a language that supports the modelling of this motivation. The definition of the language is based on existing work on high-level goal and requirements modelling and is aligned with an existing standard for enterprise modelling: the ArchiMate language. Furthermore, the article illustrates how EA can benefit from analysis techniques from the requirements engineering domain.
Understanding the Value of Enterprise Architecture for Organizations: A Grounded Theory Approach
ERIC Educational Resources Information Center
Nassiff, Edwin
2012-01-01
There is a high rate of information system implementation failures attributed to the lack of alignment between business and information technology strategy. Although enterprise architecture (EA) is a means to correct alignment problems and executives highly rate the importance of EA, it is still not used in most organizations today. Current…
NASA Astrophysics Data System (ADS)
Nogueira, Juan Manuel; Romero, David; Espadas, Javier; Molina, Arturo
2013-02-01
With the emergence of new enterprise models, such as technology-based enterprises, and the large quantity of information generated through technological advances, the Zachman framework continues to represent a modelling tool of great utility and value for constructing an enterprise architecture (EA) that can integrate and align the IT infrastructure and business goals. Nevertheless, implementing an EA requires an important effort within an enterprise. Small technology-based enterprises and start-ups can take advantage of EAs and frameworks but, because these enterprises have limited resources to allocate for this task, an enterprise framework implementation is not feasible in most cases. This article proposes a new methodology based on action-research for the implementation of the business, system and technology models of the Zachman framework to assist and facilitate its implementation. Following the explanation of the cycles of the proposed methodology, a case study is presented to illustrate the results of implementing the Zachman framework in a technology-based enterprise, PyME CREATIVA, using an action-research approach.
NASA Astrophysics Data System (ADS)
Leuchter, S.; Reinert, F.; Müller, W.
2014-06-01
Procurement and design of system architectures capable of network centric operations demand an assessment scheme in order to compare different alternative realizations. In this contribution an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software system perspective, focusing on communication, interfaces and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. This method uses approaches from software architecture quality assessment and applies them at the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighted against each other and aggregated using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. That means it is applicable to system-of-systems specifications based on enterprise architectural frameworks relying on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs regarding their future integration potential. It is a contribution to the system-of-systems engineering methodology.
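The weighting-and-aggregation step can be illustrated with a simple weighted-sum sketch in Java; the dimension names, weights, and candidate scores are invented, and a real application of the method would derive the weights with normative decision theory rather than hard-coding them.

```java
// Sketch of the weighted aggregation step: each integration-quality dimension
// gets a weight reflecting the enterprise's intent, and alternatives are
// compared by their weighted totals. Dimensions and scores are invented.
import java.util.LinkedHashMap;
import java.util.Map;

public class IntegrationAssessmentSketch {
    static double weightedScore(Map<String, Double> weights, Map<String, Double> scores) {
        double total = 0.0;
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            total += e.getValue() * scores.getOrDefault(e.getKey(), 0.0);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("communication", 0.5);   // weights must sum to 1
        weights.put("interfaces", 0.3);
        weights.put("software", 0.2);

        Map<String, Double> candidateA = Map.of(
                "communication", 0.8, "interfaces", 0.6, "software", 0.7);
        Map<String, Double> candidateB = Map.of(
                "communication", 0.6, "interfaces", 0.9, "software", 0.8);

        System.out.printf("Candidate A: %.2f%n", weightedScore(weights, candidateA));
        System.out.printf("Candidate B: %.2f%n", weightedScore(weights, candidateB));
    }
}
```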
Hospital enterprise Architecture Framework (Study of Iranian University Hospital Organization).
Haghighathoseini, Atefehsadat; Bobarshad, Hossein; Saghafi, Fatehmeh; Rezaei, Mohammad Sadegh; Bagherzadeh, Nader
2018-06-01
Nowadays, developing smart and fast services for patients and transforming hospitals into modern hospitals is considered a necessity. Living in a world inundated with information systems, designing services based on information technology entails a suitable architecture framework. This paper aims to present a localized enterprise architecture framework for the Iranian university hospital. Using the two dimensions of implementability and appropriate characteristics, the 17 best enterprise architecture frameworks were chosen. As part of this effort, five criteria were selected according to experts' inputs, and according to these criteria, the five frameworks with the highest rank were chosen. Then, 44 general characteristics were extracted from the existing 17 frameworks after careful study, and a questionnaire was written to determine the necessity of those characteristics using experts' opinions and the Delphi method. The result showed eight important criteria. In the next step, using the AHP method, TOGAF was chosen among the reference frameworks for having appropriate characteristics and the ability to be implemented. An enterprise architecture framework was then designed with TOGAF as a conceptual model and its layers. For determining the architecture framework parts, a questionnaire with 145 questions was written based on a literature review and experts' opinions. The results showed that, in localizing TOGAF for Iran, 111 of the 145 parts were chosen and certified for use in the hospital, indicating that TOGAF could be suitable for use in the hospital. Thus, a localized Hospital Enterprise Architecture Model is developed by customizing TOGAF for an Iranian hospital at eight levels and 11 parts. This new model could be applied in other Iranian hospitals. Copyright © 2018 Elsevier B.V. All rights reserved.
Interoperable Architecture for Command and Control
2014-06-01
defined objective. Elements can include other systems, people, processes, technology and other support elements (adapted from [9]). An enterprise is an intentionally created entity of human endeavour with a certain purpose, and could be considered a type of system [7]. In this case the enterprise is a Defence Enterprise System required by government as a tool to maintain national sovereignty.
Mykkänen, Juha; Virkanen, Hannu; Tuomainen, Mika
2013-01-01
The governance of large eHealth initiatives requires traceability of many requirements and design decisions. We provide a model which we use to conceptually analyze variability of several enterprise architecture (EA) elements throughout the extended lifecycle of development goals using interrelated projects related to the national ePrescription in Finland.
NASA Astrophysics Data System (ADS)
Serrano, Rafael; González, Luis Carlos; Martín, Francisco Jesús
2009-11-01
Under the SENSOR-IA project, which received financial funding from the Order of Incentives to the Regional Technology Centers of the Council of Innovation, Science and Enterprise of Andalusia, an architecture for the optimization of a machining process in real time through a rule-based expert system has been developed. The architecture consists of a sensor data acquisition and processing engine (SATD) and a rule-based expert system (SE) which communicates with the SATD. The SE has been designed as an inference engine with an algorithm for effective action, using a modus ponens rule model of goal-oriented rules. The pilot test demonstrated that it is possible to govern the machining process in real time based on rules contained in an SE. The tests have been done with approximate rules. Future work includes an exhaustive collection of data with different tool materials and geometries in a database to extract more precise rules.
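A small Java sketch of forward chaining by modus ponens over goal-oriented rules, in the spirit of (but not identical to) the expert system described above; the facts, rules, and recommended action are invented examples.

```java
// A small forward-chaining sketch of modus ponens over goal-oriented rules;
// the facts, rules and the adjustment action are invented examples.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MachiningRuleEngineSketch {
    record Rule(Set<String> premises, String conclusion) {}

    static Set<String> forwardChain(Set<String> facts, List<Rule> rules) {
        Set<String> derived = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : rules) {
                // Modus ponens: if every premise holds, assert the conclusion.
                if (derived.containsAll(r.premises()) && derived.add(r.conclusion())) {
                    changed = true;
                }
            }
        }
        return derived;
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule(Set.of("vibration=high", "tool=carbide"), "wear_risk=high"),
            new Rule(Set.of("wear_risk=high"), "action=reduce_feed_rate"));
        Set<String> facts = Set.of("vibration=high", "tool=carbide");
        System.out.println(forwardChain(facts, rules));
    }
}
```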
A Systems Engineering Approach to Architecture Development
NASA Technical Reports Server (NTRS)
Di Pietro, David A.
2014-01-01
Architecture development is conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this presentation characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.
A Systems Engineering Approach to Architecture Development
NASA Technical Reports Server (NTRS)
Di Pietro, David A.
2015-01-01
Architecture development is often conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this paper characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.
ERIC Educational Resources Information Center
Ngeru, James
2012-01-01
In the past few decades, adoption of Enterprise Integration (EI) through initiatives such as Service Oriented Architecture (SOA), Enterprise Application Integration (EAI) and Enterprise Resource Planning (ERP) has consistently dominated most of organizations' top strategic priorities. Additionally, the field of EI has generated a vast amount…
A new architecture for enterprise information systems.
Covvey, H D; Stumpf, J J
1999-01-01
Irresistible economic and technical forces are forcing healthcare institutions to develop regionalized services such as consolidated or virtual laboratories. Technical realities, such as the lack of an enabling enterprise-level information technology (IT) integration infrastructure, the existence of legacy systems, and non-existent or embryonic enterprise-level IT services organizations, are delaying or frustrating the achievement of the desired configuration of shared services. On attempting to address this matter, we discover that the state-of-the-art in integration technology is not wholly adequate, and itself becomes a barrier to the full realization of shared healthcare services. In this paper we report new work from the field of Co-operative Information Systems that proposes a new architecture of systems that are intrinsically cooperation-enabled, and we extend this architecture to both the regional and national scales.
Mapping SOA Artefacts onto an Enterprise Reference Architecture Framework
NASA Astrophysics Data System (ADS)
Noran, Ovidiu
Currently, there is still no common agreement on the service-oriented architecture (SOA) definition, or the types and meaning of the artefacts involved in the creation and maintenance of an SOA. Furthermore, the SOA image shift from an infrastructure solution to a business-wide change project may have promoted a perception that SOA is a parallel initiative, a competitor and perhaps a successor of enterprise architecture (EA). This chapter attempts to map several typical SOA artefacts onto an enterprise reference framework commonly used in EA. This is done in order to show that the EA framework can express and structure most of the SOA artefacts and therefore, a framework for SOA could in fact be derived from an EA framework with the ensuing SOA-EA integration benefits.
Structural Models that Manage IT Portfolio Affecting Business Value of Enterprise Architecture
NASA Astrophysics Data System (ADS)
Kamogawa, Takaaki
This paper examines the structural relationships between Information Technology (IT) governance and Enterprise Architecture (EA), with the objective of enhancing business value in the enterprise society. Structural models consisting of four related hypotheses reveal the relationship between IT governance and EA in the improvement of business values. We statistically examined the hypotheses by analyzing validated questionnaire items from respondents within firms listed on the Japanese stock exchange who were qualified to answer them. We concluded that firms which have organizational ability controlled by IT governance are more likely to deliver business value based on IT portfolio management.
ERIC Educational Resources Information Center
Makiya, George K.
2012-01-01
This dissertation reports on a multi-dimensional longitudinal investigation of the factors that influence Enterprise Architecture (EA) diffusion and assimilation within the U.S. federal government. The study uses publicly available datasets of 123 U.S. federal departments and agencies, as well as interview data among CIOs and EA managers within…
ERIC Educational Resources Information Center
Stamas, Paul J.
2013-01-01
This case study research followed the two-year transition of a medium-sized manufacturing firm towards a service-oriented enterprise. A service-oriented enterprise is an emerging architecture of the firm that leverages the paradigm of services computing to integrate the capabilities of the firm with the complementary competencies of business…
NASA Astrophysics Data System (ADS)
Qianyi, Zhang; Xiaoshun, Li; Ping, Hu; Lu, Ning
2018-03-01
With the adoption of the “3+1” undergraduate training mode at Beijing University of Agriculture, the mode and direction of training applied and interdisciplinary talent need to be made more concrete. At the same time, to compensate for the shortage of dual-qualified (“Double”) teachers in the school and the lack of teaching cases covering advanced industry technology, the university actively encourages cooperation between its teaching units and enterprises, connecting enterprise resources closely with the school's teaching system and using the “1” year of the “3+1” mode to carry out innovative training for students. This approach helps college students integrate theory with practice and apply the knowledge gained in higher education. In practice, however, such cooperation involves three parties and their personnel, which makes unified management difficult; poor communication can also lead to unsatisfactory training results, and the absence of a sound training supervision mechanism leaves the training work poorly controlled. To address these problems, this paper designs a management system for student innovation and entrepreneurship training based on school-enterprise cooperation; the system effectively manages the work related to students' training. The subject is the innovation and entrepreneurship training carried out in the School of Computer and Information Engineering of Beijing University of Agriculture. The software is designed using a B/S (browser/server) architecture and is divided into three layers. The application logic layer implements the business related to student training management and the users' basic operations: through the system, users can manage the basic information of enterprises, colleges and students, as well as the information operations of student training management [1]. The data layer of the system builds the database with MySQL technology and provides data storage for the whole system.
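As a rough illustration of the layered design this abstract describes, the sketch below separates an application logic layer from a data layer; sqlite3 stands in for the MySQL storage mentioned in the paper, and every table, field and function name is hypothetical rather than taken from the actual system:

import sqlite3

# Data layer: a stand-in store for enterprises, students and training records.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE enterprise (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE student    (id INTEGER PRIMARY KEY, name TEXT, college TEXT);
CREATE TABLE training   (id INTEGER PRIMARY KEY, student_id INTEGER,
                         enterprise_id INTEGER, topic TEXT, status TEXT);
""")

# Application logic layer: basic operations for student training management.
def register_training(student_id, enterprise_id, topic):
    conn.execute("INSERT INTO training (student_id, enterprise_id, topic, status) "
                 "VALUES (?, ?, ?, 'in progress')", (student_id, enterprise_id, topic))

def trainings_for_enterprise(enterprise_id):
    cur = conn.execute("SELECT s.name, t.topic, t.status FROM training t "
                       "JOIN student s ON s.id = t.student_id WHERE t.enterprise_id = ?",
                       (enterprise_id,))
    return cur.fetchall()

conn.execute("INSERT INTO enterprise VALUES (1, 'Example Co.')")
conn.execute("INSERT INTO student VALUES (1, 'Student A', 'Computer and Information Engineering')")
register_training(1, 1, "entrepreneurship project")
print(trainings_for_enterprise(1))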
DAsHER CD: Developing a Data-Oriented Human-Centric Enterprise Architecture for EarthCube
NASA Astrophysics Data System (ADS)
Yang, C. P.; Yu, M.; Sun, M.; Qin, H.; Robinson, E.
2015-12-01
One of the biggest challenges facing Earth scientists is discovering, accessing, and sharing resources in a desired fashion. EarthCube aims to enable geoscientists to address these challenges by fostering community-governed efforts that develop a common cyberinfrastructure for the purpose of collecting, accessing, analyzing, sharing and visualizing all forms of data and related resources, through the use of advanced technological and computational capabilities. Here we design an Enterprise Architecture (EA) for EarthCube to facilitate knowledge management, communication and human collaboration in pursuit of unprecedented data sharing across the geosciences. The design results will provide EarthCube with a reference framework for developing a geoscience cyberinfrastructure built collaboratively by different stakeholders, and for identifying topics that should invoke high interest in the community. The development of this EarthCube EA framework leverages popular frameworks such as Zachman, Gartner, DoDAF, and FEAF. The science driver of this design is the needs of the EarthCube community, including user requirements analyzed from EarthCube End User Workshop reports and EarthCube working group roadmaps, and feedback from scientists obtained through organized workshops. The final product of this Enterprise Architecture is a four-volume reference document: 1) Volume one is this document and comprises an executive summary of the EarthCube architecture, serving as an overview in the initial phases of architecture development; 2) Volume two is the major body of the design product. It outlines all the architectural design components or viewpoints; 3) Volume three provides a taxonomy of the EarthCube enterprise augmented with semantic relations; 4) Volume four describes an example of utilizing this architecture for a geoscience project.
Integrating manufacturing softwares for intelligent planning execution: a CIIMPLEX perspective
NASA Astrophysics Data System (ADS)
Chu, Bei Tseng B.; Tolone, William J.; Wilhelm, Robert G.; Hegedus, M.; Fesko, J.; Finin, T.; Peng, Yun; Jones, Chris H.; Long, Junshen; Matthews, Mike; Mayfield, J.; Shimp, J.; Su, S.
1997-01-01
Recent developments have made it possible to interoperate complex business applications at much lower costs. Application interoperation, along with business process re-engineering, can result in significant savings by eliminating work created by disconnected business processes due to isolated business applications. However, we believe much greater productivity benefits can be achieved by facilitating timely decision-making, utilizing information from multiple enterprise perspectives. The CIIMPLEX enterprise integration architecture is designed to enable such productivity gains by helping people to carry out integrated enterprise scenarios. An enterprise scenario is triggered typically by some external event. The goal of an enterprise scenario is to make the right decisions considering the full context of the problem. Enterprise scenarios are difficult for people to carry out because of the interdependencies among various actions. One can easily be overwhelmed by the large amount of information. We propose the use of software agents to help gather relevant information and present it in the appropriate context of an enterprise scenario. The CIIMPLEX enterprise integration architecture is based on the FAIME methodology for application interoperation and plug-and-play. It also explores the use of software agents in application plug-and-play.
2014-10-28
change. Enterprise Business System. In August 2000, DLA began developing its Enterprise Resource Planning (ERP) system by initiating the Business...the EBS core system. EBS became the ERP system solution supporting DLA nonenergy commodity activities. DLA subsequently enhanced its EBS...capabilities by adding SAP software that supported DLA Enterprise Operational Accounting, real property, and inventory management functions. As part of the
Enterprise Information Architecture for Mission Development
NASA Technical Reports Server (NTRS)
Dutra, Jayne
2007-01-01
This slide presentation reviews the concept of an information architecture to assist in mission development. The integrated information architecture will create a unified view of the information using metadata and values (i.e., a taxonomy).
A new architecture for enterprise information systems.
Covvey, H. D.; Stumpf, J. J.
1999-01-01
Irresistible economic and technical forces are forcing healthcare institutions to develop regionalized services such as consolidated or virtual laboratories. Technical realities, such as the lack of an enabling enterprise-level information technology (IT) integration infrastructure, the existence of legacy systems, and non-existent or embryonic enterprise-level IT services organizations, are delaying or frustrating the achievement of the desired configuration of shared services. On attempting to address this matter, we discover that the state-of-the-art in integration technology is not wholly adequate, and itself becomes a barrier to the full realization of shared healthcare services. In this paper we report new work from the field of Co-operative Information Systems that proposes a new architecture of systems that are intrinsically cooperation-enabled, and we extend this architecture to both the regional and national scales. PMID:10566455
NASA Astrophysics Data System (ADS)
Martin, Andreas; Emmenegger, Sandro; Hinkelmann, Knut; Thönssen, Barbara
2017-04-01
The accessibility of project knowledge obtained from experiences is an important and crucial issue in enterprises. This information need about project knowledge can be different from one person to another depending on the different roles he or she has. Therefore, a new ontology-based case-based reasoning (OBCBR) approach that utilises an enterprise ontology is introduced in this article to improve the accessibility of this project knowledge. Utilising an enterprise ontology improves the case-based reasoning (CBR) system through the systematic inclusion of enterprise-specific knowledge. This enterprise-specific knowledge is captured using the overall structure given by the enterprise ontology named ArchiMEO, which is a partial ontological realisation of the enterprise architecture framework (EAF) ArchiMate. This ontological representation, containing historical cases and specific enterprise domain knowledge, is applied in a new OBCBR approach. To support the different information needs of different stakeholders, this OBCBR approach has been built in such a way that different views, viewpoints, concerns and stakeholders can be considered. This is realised using a case viewpoint model derived from the ISO/IEC/IEEE 42010 standard. The introduced approach was implemented as a demonstrator and evaluated using an application case that has been elicited from a business partner in the Swiss research project.
NASA Astrophysics Data System (ADS)
Dabiru, L.; O'Hara, C. G.; Shaw, D.; Katragadda, S.; Anderson, D.; Kim, S.; Shrestha, B.; Aanstoos, J.; Frisbie, T.; Policelli, F.; Keblawi, N.
2006-12-01
The Research Project Knowledge Base (RPKB) is currently being designed and will be implemented in a manner that is fully compatible and interoperable with enterprise architecture tools developed to support NASA's Applied Sciences Program. Through user needs assessment and collaboration with Stennis Space Center, Goddard Space Flight Center, and NASA's DEVELOP staff, insight into information needs for the RPKB was gathered from across NASA scientific communities of practice. To enable efficient, consistent, standard, structured, and managed data entry and research results compilation, a prototype RPKB has been designed and fully integrated with the existing NASA Earth Science Systems Components database. The RPKB will compile research project and keyword information of relevance to the six major science focus areas, 12 national applications, and the Global Change Master Directory (GCMD). The RPKB will include information about projects awarded from NASA research solicitations, project investigator information, research publications, NASA data products employed, and model or decision support tools used or developed, as well as new data product information. The RPKB will be developed in a multi-tier architecture that will include a SQL Server relational database backend, middleware, and front-end client interfaces for data entry. The purpose of this project is to intelligently harvest the results of research sponsored by the NASA Applied Sciences Program and related research program results. We present various approaches for a wide spectrum of knowledge discovery of research results, publications, projects, etc. from the NASA Systems Components database and global information systems, and show how this is implemented in a SQL Server database. The application of knowledge discovery is useful for intelligent query answering and multiple-layered database construction. Using advanced EA tools such as the Earth Science Architecture Tool (ESAT), RPKB will enable NASA and partner agencies to efficiently identify significant results for new experiment directions and principal investigators to formulate experiment directions for new proposals.
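A minimal sketch of the kind of harvesting query the RPKB backend might support; the in-memory records below stand in for the SQL Server relational backend described above, and all project titles, keywords, publications and focus-area labels are invented for illustration:

# Hypothetical in-memory records standing in for the RPKB's relational backend.
projects = [
    {"id": 1, "title": "Flood mapping pilot", "focus_area": "Water & Energy Cycle",
     "keywords": ["flooding", "MODIS"], "publications": [10], "data_products": ["MOD09"]},
    {"id": 2, "title": "Air quality decision support", "focus_area": "Atmospheric Composition",
     "keywords": ["aerosols"], "publications": [11], "data_products": ["OMI L2"]},
]
publications = {10: "Jones et al. 2006", 11: "Smith et al. 2006"}

def harvest(focus_area=None, keyword=None):
    """Middleware-style query: return projects with their publications and data products."""
    hits = []
    for p in projects:
        if focus_area and p["focus_area"] != focus_area:
            continue
        if keyword and keyword not in p["keywords"]:
            continue
        hits.append({"project": p["title"],
                     "publications": [publications[i] for i in p["publications"]],
                     "data_products": p["data_products"]})
    return hits

print(harvest(focus_area="Water & Energy Cycle"))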
Motion/imagery secure cloud enterprise architecture analysis
NASA Astrophysics Data System (ADS)
DeLay, John L.
2012-06-01
Cloud computing with storage virtualization and new service-oriented architectures brings a new perspective to the aspect of a distributed motion imagery and persistent surveillance enterprise. Our existing research is focused mainly on content management, distributed analytics, and WAN distributed cloud networking performance issues of cloud-based technologies. The potential of leveraging cloud-based technologies for hosting motion imagery, imagery and analytics workflows for DOD and security applications is relatively unexplored. This paper will examine technologies for managing, storing, processing and disseminating motion imagery and imagery within a distributed network environment. Finally, we propose areas for future research in the area of distributed cloud content management enterprises.
Managing Large Scale Project Analysis Teams through a Web Accessible Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.
2008-01-01
Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground-rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.
NASA Astrophysics Data System (ADS)
Dolotovskii, I. V.; Dolotovskaya, N. V.; Larin, E. A.
2018-05-01
The article presents the architecture and content of a specialized analytical system for monitoring operational conditions, planning of consumption and generation of energy resources, long-term planning of production activities, and development of a strategy for improving the energy complex of gas processing enterprises. A compositional model of structured data on the equipment of the main systems of the power complex is proposed. The correctness of the software modules and the database of the analytical system is confirmed by comparing the results of measurements on the equipment of the electric power system with simulation at an operating gas processing plant. High accuracy in planning the consumption of fuel and energy resources has been achieved (the error does not exceed 1%). The information and program modules of the analytical system make it possible to develop a strategy for improving the energy complex in the face of changing technological topology and partial uncertainty of economic factors.
Department of Defense Enterprise Architecture Transition Strategy, Version 2.0
2008-02-29
DoD CIO Enterprise Architecture Congruence Community of Practice.
GEARS: An Enterprise Architecture Based On Common Ground Services
NASA Astrophysics Data System (ADS)
Petersen, S.
2014-12-01
Earth observation satellites collect a broad variety of data used in applications that range from weather forecasting to climate monitoring. Within NOAA the National Environmental Satellite Data and Information Service (NESDIS) supports these applications by operating satellites in both geosynchronous and polar orbits. Traditionally NESDIS has acquired and operated its satellites as stand-alone systems with their own command and control, mission management, processing, and distribution systems. As the volume, velocity, veracity, and variety of sensor data and products produced by these systems continues to increase, NESDIS is migrating to a new concept of operation in which it will operate and sustain the ground infrastructure as an integrated Enterprise. Based on a series of common ground services, the Ground Enterprise Architecture System (GEARS) approach promises greater agility, flexibility, and efficiency at reduced cost. This talk describes the new architecture and associated development activities, and presents the results of initial efforts to improve product processing and distribution.
Enterprise Architecture as a Tool of Navy METOC Transformation
2006-09-01
Slide/diagram content: Enterprise Service Integration Layer (MESIL); METOC Enterprise Service Bus (ESB) with local ESB implementations and infrastructure; Production Center and METOC Edge nodes; NCOW tenets; SOA tenets; top-down analysis.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... FEDERAL HOUSING FINANCE AGENCY [No. 2011-N-13] Notice of Order: Revisions to Enterprise Public Use Database Incorporating High-Cost Single-Family Securitized Loan Data Fields and Technical Data Field..., regarding FHFA's adoption of an Order revising FHFA's Public Use Database matrices to include certain data...
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, theories and practices of distributed spatial databases need further improvement because of contradictions between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail, and then the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented; the system contains a GIS client application, a web server, a GIS application server and a spatial data server. Moreover, the design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS enterprise JavaBeans (containing session beans and entity beans) are explained. In addition, experiments on the relation between spatial data volume and response time under different conditions were conducted, which show that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database on the Internet is presented.
2007-04-01
Services and System Capabilities; Enterprise Rules and Standards for Interoperability; Navy, AF, Army, TRANSCOM, DFAS, DLA Enterprise Shared Services and System...Where commonality among components exists, there are also opportunities for identifying and leveraging shared services. A service-oriented architecture...and (3) shared services. The BMA federation strategy, according to these officials, is the first mission area federation strategy, and it is their
A step-by-step methodology for enterprise interoperability projects
NASA Astrophysics Data System (ADS)
Chalmeta, Ricardo; Pazos, Verónica
2015-05-01
Enterprise interoperability is one of the key factors for enhancing enterprise competitiveness. Achieving enterprise interoperability is an extremely complex process which involves different technological, human and organisational elements. In this paper we present a framework to help achieve enterprise interoperability. The framework has been developed taking into account the three domains of interoperability: Enterprise Modelling, Architecture and Platform, and Ontologies. The main novelty of the framework in comparison to existing ones is that it includes a step-by-step methodology that explains how to carry out an enterprise interoperability project taking into account different interoperability views, like business, process, human resources, technology, knowledge and semantics.
75 FR 68806 - Statement of Organization, Functions and Delegations of Authority
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
... Agency business applications architectures, the engineering of business processes, the building and... architecture, engineers technology for business processes, builds, deploys, maintains and manages enterprise systems and data collections efforts; (5) applies business applications architecture to process specific...
Marshall Application Realignment System (MARS) Architecture
NASA Technical Reports Server (NTRS)
Belshe, Andrea; Sutton, Mandy
2010-01-01
The Marshall Application Realignment System (MARS) Architecture project was established to meet the certification requirements of the Department of Defense Architecture Framework (DoDAF) V2.0 Federal Enterprise Architecture Certification (FEAC) Institute program and to provide added value to the Marshall Space Flight Center (MSFC) Application Portfolio Management process. The MARS Architecture aims to: (1) address the NASA MSFC Chief Information Officer (CIO) strategic initiative to improve Application Portfolio Management (APM) by optimizing investments and improving portfolio performance, and (2) develop a decision-aiding capability by which applications registered within the MSFC application portfolio can be analyzed and considered for retirement or decommission. The MARS Architecture describes a to-be target capability that supports application portfolio analysis against scoring measures (based on value) and overall portfolio performance objectives (based on enterprise needs and policies). This scoring and decision-aiding capability supports the process by which MSFC application investments are realigned or retired from the application portfolio. The MARS Architecture is a multi-phase effort to: (1) conduct strategic architecture planning and knowledge development based on the DoDAF V2.0 six-step methodology, (2) describe one architecture through multiple viewpoints, (3) conduct portfolio analyses based on a defined operational concept, and (4) enable a new capability to support the MSFC enterprise IT management mission, vision, and goals. This report documents Phase 1 (Strategy and Design), which includes discovery, planning, and development of initial architecture viewpoints. Phase 2 will move forward the process of building the architecture, widening the scope to include application realignment (in addition to application retirement), and validating the underlying architecture logic before moving into Phase 3. The MARS Architecture key stakeholders are most interested in Phase 3 because this is where the data analysis, scoring, and recommendation capability is realized. Stakeholders want to see the benefits derived from reducing the steady-state application base and identify opportunities for portfolio performance improvement and application realignment.
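For illustration only, the sketch below shows how the portfolio-scoring and retirement-flagging step described above might look in its simplest form; the measures, weights and threshold are hypothetical and are not MSFC's actual MARS scoring model:

# Illustrative portfolio scoring: the measures, weights and threshold are hypothetical.
WEIGHTS = {"mission_alignment": 0.4, "usage": 0.3, "cost_efficiency": 0.2, "technical_health": 0.1}

applications = [
    {"name": "App A", "mission_alignment": 9, "usage": 8, "cost_efficiency": 6, "technical_health": 7},
    {"name": "App B", "mission_alignment": 3, "usage": 2, "cost_efficiency": 4, "technical_health": 5},
]

def portfolio_score(app):
    # Weighted value score across all measures defined in the dictionary above.
    return sum(WEIGHTS[m] * app[m] for m in WEIGHTS)

RETIRE_THRESHOLD = 4.0
for app in applications:
    score = portfolio_score(app)
    action = "candidate for retirement/realignment" if score < RETIRE_THRESHOLD else "retain"
    print(f"{app['name']}: score {score:.1f} -> {action}")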
Horvath, Monica M.; Rusincovitch, Shelley A.; Brinson, Stephanie; Shang, Howard C.; Evans, Steve; Ferranti, Jeffrey M.
2015-01-01
Purpose: Data generated in the care of patients are widely used to support clinical research and quality improvement, which has hastened the development of self-service query tools. User interface design for such tools, execution of query activity, and underlying application architecture have not been widely reported, and existing tools reflect a wide heterogeneity of methods and technical frameworks. We describe the design, application architecture, and use of a self-service model for enterprise data delivery within Duke Medicine. Methods: Our query platform, the Duke Enterprise Data Unified Content Explorer (DEDUCE), supports enhanced data exploration, cohort identification, and data extraction from our enterprise data warehouse (EDW) using a series of modular environments that interact with a central keystone module, Cohort Manager (CM). A data-driven application architecture is implemented through three components: an application data dictionary, the concept of “smart dimensions”, and dynamically-generated user interfaces. Results: DEDUCE CM allows flexible hierarchies of EDW queries within a grid-like workspace. A cohort “join” functionality allows switching between filters based on criteria occurring within or across patient encounters. To date, 674 users have been trained and activated in DEDUCE, and logon activity shows a steady increase, with variability between months. A comparison of filter conditions and export criteria shows that these activities have different patterns of usage across subject areas. Conclusions: Organizations with sophisticated EDWs may find that users benefit from development of advanced query functionality, complementary to the user interfaces and infrastructure used in other well-published models. Driven by its EDW context, the DEDUCE application architecture was also designed to be responsive to source data and to allow modification through alterations in metadata rather than programming, allowing an agile response to source system changes. PMID:25051403
Horvath, Monica M; Rusincovitch, Shelley A; Brinson, Stephanie; Shang, Howard C; Evans, Steve; Ferranti, Jeffrey M
2014-12-01
Data generated in the care of patients are widely used to support clinical research and quality improvement, which has hastened the development of self-service query tools. User interface design for such tools, execution of query activity, and underlying application architecture have not been widely reported, and existing tools reflect a wide heterogeneity of methods and technical frameworks. We describe the design, application architecture, and use of a self-service model for enterprise data delivery within Duke Medicine. Our query platform, the Duke Enterprise Data Unified Content Explorer (DEDUCE), supports enhanced data exploration, cohort identification, and data extraction from our enterprise data warehouse (EDW) using a series of modular environments that interact with a central keystone module, Cohort Manager (CM). A data-driven application architecture is implemented through three components: an application data dictionary, the concept of "smart dimensions", and dynamically-generated user interfaces. DEDUCE CM allows flexible hierarchies of EDW queries within a grid-like workspace. A cohort "join" functionality allows switching between filters based on criteria occurring within or across patient encounters. To date, 674 users have been trained and activated in DEDUCE, and logon activity shows a steady increase, with variability between months. A comparison of filter conditions and export criteria shows that these activities have different patterns of usage across subject areas. Organizations with sophisticated EDWs may find that users benefit from development of advanced query functionality, complementary to the user interfaces and infrastructure used in other well-published models. Driven by its EDW context, the DEDUCE application architecture was also designed to be responsive to source data and to allow modification through alterations in metadata rather than programming, allowing an agile response to source system changes. Copyright © 2014 Elsevier Inc. All rights reserved.
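A toy sketch of the data-driven idea described in both records above, in which an application data dictionary drives both the generated filter interface and the resulting query; the dictionary entries, column names and operators here are invented and do not reflect the actual DEDUCE metadata:

# Hypothetical application data dictionary: each "smart dimension" describes how a
# filter should be rendered and how it translates to SQL; names are illustrative only.
data_dictionary = {
    "diagnosis_code": {"label": "Diagnosis (ICD)", "widget": "text",  "column": "dx.icd_code",
                       "op": "="},
    "encounter_date": {"label": "Encounter date", "widget": "range", "column": "enc.enc_date",
                       "op": "BETWEEN"},
}

def build_filter_ui(dictionary):
    """Dynamically generate a (very simplified) filter-form description from metadata."""
    return [{"field": f, "label": m["label"], "widget": m["widget"]} for f, m in dictionary.items()]

def build_where_clause(criteria):
    """Translate user criteria into a parameterized WHERE clause using the dictionary."""
    clauses, params = [], []
    for field, value in criteria.items():
        meta = data_dictionary[field]
        if meta["op"] == "BETWEEN":
            clauses.append(f"{meta['column']} BETWEEN ? AND ?")
            params.extend(value)
        else:
            clauses.append(f"{meta['column']} = ?")
            params.append(value)
    return " AND ".join(clauses), params

print(build_filter_ui(data_dictionary))
print(build_where_clause({"diagnosis_code": "C83.3",
                          "encounter_date": ("2013-01-01", "2013-12-31")}))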
ERIC Educational Resources Information Center
Allesch, Jurgen; Preiss-Allesch, Dagmar
This report describes a study that identified major databases in operation in the 12 European Community countries that provide small- and medium-sized enterprises with information on opportunities for obtaining training and continuing education. Thirty-five databases were identified through information obtained from telephone interviews or…
Enterprise architecture availability analysis using fault trees and stakeholder interviews
NASA Astrophysics Data System (ADS)
Närman, Per; Franke, Ulrik; König, Johan; Buschle, Markus; Ekstedt, Mathias
2014-01-01
The availability of enterprise information systems is a key concern for many organisations. This article describes a method for availability analysis based on Fault Tree Analysis and constructs from the ArchiMate enterprise architecture (EA) language. To test the quality of the method, several case studies within the banking and electrical utility industries were performed. Input data were collected through stakeholder interviews. The results from the case studies were compared with availability log data to determine the accuracy of the method's predictions. In the five cases where accurate log data were available, the yearly downtime estimates were within eight hours of the actual downtimes. The cost of performing the analysis was low; no case study required more than 20 man-hours of work, making the method ideal for practitioners with an interest in obtaining rapid availability estimates of their enterprise information systems.
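The core calculation behind such a fault-tree availability analysis can be illustrated with a short sketch; the gate structure and component availabilities below are invented, not values from the case studies:

# Minimal fault-tree availability sketch; component availabilities are invented.
HOURS_PER_YEAR = 8760

def series(*avail):        # system fails if ANY component fails (OR gate on failure)
    a = 1.0
    for x in avail:
        a *= x
    return a

def parallel(*avail):      # system fails only if ALL redundant components fail (AND gate)
    u = 1.0
    for x in avail:
        u *= (1.0 - x)
    return 1.0 - u

# e.g. a redundant application-server pair in series with a database and a network link
app_tier = parallel(0.995, 0.995)
system_availability = series(app_tier, 0.999, 0.9995)
downtime_hours = (1.0 - system_availability) * HOURS_PER_YEAR
print(f"availability {system_availability:.5f}, expected downtime {downtime_hours:.1f} h/year")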
A Proposed Pattern of Enterprise Architecture
2013-02-01
consistent architecture descriptions. UPDM comprises extensions to both OMG’s Unified Modelling Language (UML) and Systems Modelling Language (SysML)...those who use UML and SysML. These represent significant advancements that enable architecture trade-off analyses, architecture model execution...Language (SysML), and thus provides for architectural descriptions that contain a rich set of (formally) connected DoDAF/MoDAF viewpoints expressed
Advanced Ground Systems Maintenance Enterprise Architecture Project
NASA Technical Reports Server (NTRS)
Harp, Janicce Leshay
2014-01-01
The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. Capabilities include anomaly detection, fault isolation, prognostics and physics-based diagnostics.
Knowledge Innovation System: The Common Language.
ERIC Educational Resources Information Center
Rogers, Debra M. Amidon
1993-01-01
The Knowledge Innovation System is a management technique in which a networked enterprise uses knowledge flow as a collaborative advantage. Enterprise Management System-Architecture, which can be applied to collaborative activities, has five domains: economic, sociological, psychological, managerial, and technological. (SK)
Information Technology Architectures. New Opportunities for Partnering, CAUSE94. Track VI.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Eight papers are presented from the 1994 CAUSE conference track on information technology architectures as applied to higher education institutions. The papers include: (1) "Reshaping the Enterprise: Building the Next Generation of Information Systems Through Information Architecture and Processing Reengineering," which notes…
Enterprise-wide PACS: beyond radiology, an architecture to manage all medical images.
Bandon, David; Lovis, Christian; Geissbühler, Antoine; Vallée, Jean-Paul
2005-08-01
Picture archiving and communication systems (PACS) are intended to manage all medical images acquired within the hospital. To address the various situations encountered in the imaging specialties, the traditional architecture used for the radiology department has to evolve. We present our preliminary results toward an enterprise-wide PACS intended to support all kinds of image production in medicine, from biomolecular images to whole-body pictures. Our solution is based on an existing radiologic PACS system from which images are distributed through an electronic patient record to all care facilities. This platform is enriched with a flexible integration framework supporting digital image communication in medicine (DICOM) and DICOM-XML formats. In addition, a highly customizable generic workflow engine is used to drive work processes. Echocardiology; hematology; ear, nose, and throat; and dermatology, including wound follow-up, are the first extensions implemented outside of radiology. We also propose a global strategy for further developments based on three possible architectures for an enterprise-wide PACS.
Advanced Ground Systems Maintenance Enterprise Architecture Project
NASA Technical Reports Server (NTRS)
Perotti, Jose M. (Compiler)
2015-01-01
The project implements an architecture for delivery of integrated health management capabilities for the 21st Century launch complex. The delivered capabilities include anomaly detection, fault isolation, prognostics and physics based diagnostics.
Integrating hospital information systems in healthcare institutions: a mediation architecture.
El Azami, Ikram; Cherkaoui Malki, Mohammed Ouçamah; Tahon, Christian
2012-10-01
Many studies have examined the integration of information systems into healthcare institutions, leading to several standards in the healthcare domain (CORBAmed: Common Object Request Broker Architecture in Medicine; HL7: Health Level Seven International; DICOM: Digital Imaging and Communications in Medicine; and IHE: Integrating the Healthcare Enterprise). Due to the existence of a wide diversity of heterogeneous systems, three essential factors are necessary to fully integrate a system: data, functions and workflow. However, most of the previous studies have dealt with only one or two of these factors and this makes the system integration unsatisfactory. In this paper, we propose a flexible, scalable architecture for Hospital Information Systems (HIS). Our main purpose is to provide a practical solution to ensure HIS interoperability so that healthcare institutions can communicate without being obliged to change their local information systems and without altering the tasks of the healthcare professionals. Our architecture is a mediation architecture with 3 levels: 1) a database level, 2) a middleware level and 3) a user interface level. The mediation is based on two central components: the Mediator and the Adapter. Using the XML format allows us to establish a structured, secured exchange of healthcare data. The notion of medical ontology is introduced to solve semantic conflicts and to unify the language used for the exchange. Our mediation architecture provides an effective, promising model that promotes the integration of hospital information systems that are autonomous, heterogeneous, semantically interoperable and platform-independent.
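A highly simplified sketch of the Mediator/Adapter idea described above, using XML as the common exchange format; the systems, tag names and record fields are illustrative assumptions, not the paper's actual design:

import xml.etree.ElementTree as ET

# Adapters translate a local system's records to/from a common XML exchange format;
# the tag names and the systems involved are illustrative only.
class LabAdapter:
    def to_common(self, record):
        msg = ET.Element("patientData", source="lab")
        ET.SubElement(msg, "patientId").text = record["pid"]
        ET.SubElement(msg, "result").text = record["value"]
        return msg

class EhrAdapter:
    def from_common(self, msg):
        return {"patient": msg.findtext("patientId"), "lab_result": msg.findtext("result")}

class Mediator:
    """Routes common-format messages from a source adapter to a target adapter."""
    def __init__(self, source, target):
        self.source, self.target = source, target

    def forward(self, record):
        common = self.source.to_common(record)
        return self.target.from_common(common)

mediator = Mediator(LabAdapter(), EhrAdapter())
print(mediator.forward({"pid": "12345", "value": "HbA1c 6.1%"}))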
Integrated radiologist's workstation enabling the radiologist as an effective clinical consultant
NASA Astrophysics Data System (ADS)
McEnery, Kevin W.; Suitor, Charles T.; Hildebrand, Stan; Downs, Rebecca; Thompson, Stephen K.; Shepard, S. Jeff
2002-05-01
Since February 2000, radiologists at the M. D. Anderson Cancer Center have accessed clinical information through an internally developed radiologist's clinical interpretation workstation called RadStation. This project provides a fully integrated digital dictation workstation with clinical data review. RadStation enables the radiologist as an effective clinical consultant with access to pertinent sources of clinical information at the time of dictation. Data sources not only include prior radiology reports from the radiology information system (RIS) but access to pathology data, laboratory data, history and physicals, clinic notes, and operative reports. With integrated clinical information access, a radiologist's interpretation not only comments on morphologic findings but also can enable evaluation of study findings in the context of pertinent clinical presentation and history. Image access is enabled through the integration of an enterprise image archive (Stentor, San Francisco). Database integration is achieved by a combination of real-time HL7 messaging and queries to SQL-based legacy databases. A three-tier system architecture accommodates expanding access to additional databases including real-time patient schedule as well as patient medications and allergies.
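To illustrate the kind of real-time HL7 feed integration mentioned above, the following sketch parses a simplified HL7 v2-style result message into a structure a workstation could merge with other patient data; the message content and the fields selected are illustrative only:

# Simplified HL7 v2-style parsing; message content and field positions are illustrative.
raw = ("MSH|^~\\&|LIS|HOSP|RIS|HOSP|200205011200||ORU^R01|123|P|2.3\r"
       "PID|1||PAT00042||DOE^JANE\r"
       "OBX|1|NM|HGB^Hemoglobin||11.2|g/dL\r")

def parse_oru(message):
    patient_id, observations = None, []
    for segment in filter(None, message.split("\r")):
        fields = segment.split("|")
        if fields[0] == "PID":
            patient_id = fields[3]                                  # PID-3: patient identifier
        elif fields[0] == "OBX":
            observations.append((fields[3], fields[5], fields[6]))  # code, value, units
    return {"patient_id": patient_id, "observations": observations}

print(parse_oru(raw))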
de Merich, D; Forte, Giulia
2011-01-01
Risk assessment is the fundamental process of an enterprise's prevention system and is the principal mandatory provision contained in the Health and Safety Law (Legislative Decree 81/2008) amended by Legislative Decree 106/2009. In order to properly comply with this obligation in small-sized enterprises as well, the appropriate regulatory bodies should provide the enterprises with standardized tools and methods for identifying, assessing and managing risks. The aim is to assist in particular small and micro-enterprises (SMEs) with risk assessment by providing a flexible tool, standardized in the form of a datasheet, that can be updated with more detailed information on the various work contexts in Italy. Official efforts to provide Italian SMEs with information may initially make use of the findings of research conducted by ISPESL over the past 20 years, thanks in part to cooperation with other institutions (Regions, INAIL-National Insurance Institute for Occupational Accidents and Diseases), which have led to the creation of an information system on prevention consisting of numerous databases, both statistical and documental ("National System of Surveillance on fatal and serious accidents", "National System of Surveillance on work-related diseases", "Sector hazard profiles" database, "Solutions and Best Practices" database, "Technical Guidelines" database, "Training packages for prevention professionals in enterprises" database). With regard to evaluation criteria applicable within the enterprise, combining traditional and uniform areas of assessment (by sector or by risk factor) with assessments by job/occupation has become possible thanks to the cooperation agreement made in 2009 by ISPESL, the ILO (International Labour Organisation) of Geneva and IIOSH (Israel Institute for Occupational Health and Hygiene) regarding the creation of an international database (HDODB) based on risk datasheets per occupation. The project sets out to assist in particular small and micro-enterprises with risk assessment, providing a flexible and standardized tool in the form of a datasheet that can be updated with more detailed information on the various work contexts in Italy. The model proposed by ISPESL selected the ILO's "Hazard Datasheet on Occupation" as an initial information tool to steer efforts to assess and manage hazards in small and micro-enterprises. In addition to being an internationally validated tool, the occupation datasheet has a very simple structure that is very effective in communicating and updating information in relation to the local context. Following the logic of providing support to enterprises by means of a collaborative network among institutions, local supervisory services and social partners, standardised hazard assessment procedures should be, irrespective of any legal obligations, the preferred tools of an "updatable information system" capable of supporting the improvement of the process of assessing and managing hazards in enterprises.
Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Stocker, John C.; Golomb, Andrew M.
2011-01-01
Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
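A minimal sketch of the two-part idea, stochastic demand against a fixed resource constraint, is shown below; the arrival rate, service times and server counts are invented, and the model is far simpler than the one described:

import random

# Minimal demand-vs-constraint sketch; the distributions and capacities are invented.
def simulate(n_requests, n_servers, arrival_rate, mean_service):
    random.seed(42)
    t = 0.0
    server_free = [0.0] * n_servers           # time each server next becomes free
    waits = []
    for _ in range(n_requests):
        t += random.expovariate(arrival_rate)                 # next request arrives
        i = min(range(n_servers), key=lambda k: server_free[k])
        start = max(t, server_free[i])                        # wait if all servers are busy
        waits.append(start - t)
        server_free[i] = start + random.expovariate(1.0 / mean_service)
    return sum(waits) / len(waits)

print("traditional (dedicated, 4 servers):", round(simulate(10000, 4, 3.0, 1.0), 3))
print("cloud (elastic, 8 servers):        ", round(simulate(10000, 8, 3.0, 1.0), 3))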
The web-based information system for small and medium enterprises of Tomsk region
NASA Astrophysics Data System (ADS)
Senchenko, P. V.; Zhukovskiy, O. I.; Gritsenko, Yu B.; Senchenko, A. P.; Gritsenko, L. M.; Kovaleva, E. V.
2017-01-01
This paper presents a web-enabled automated information and data support system for small and medium-sized enterprises of Tomsk region. We define the purpose and application field of the system. In addition, we present its generic architecture and system functions.
A system framework of inter-enterprise machining quality control based on fractal theory
NASA Astrophysics Data System (ADS)
Zhao, Liping; Qin, Yongtao; Yao, Yiyong; Yan, Peng
2014-03-01
In order to meet the quality control requirements of dynamic and complicated product machining processes among enterprises, a system framework of inter-enterprise machining quality control based on fractal theory was proposed. In this system framework, the fractal-specific characteristics of the inter-enterprise machining quality control function were analysed, and the model of inter-enterprise machining quality control was constructed from the nature of fractal structures. Furthermore, the goal-driven strategy of inter-enterprise quality control and the dynamic organisation strategy of inter-enterprise quality improvement were constructed based on a characteristic analysis of this model. In addition, the architecture of inter-enterprise machining quality control based on fractal theory was established by means of Web services. Finally, a case study for application was presented. The results showed that the proposed method is feasible and can provide guidance for quality control and support for product reliability in inter-enterprise machining processes.
NASA Astrophysics Data System (ADS)
Kapulin, D. V.; Chemidov, I. V.; Kazantsev, M. A.
2017-01-01
In the paper, the aspects of design, development and implementation of the automated control system for warehousing under the manufacturing process of the radio-electronic enterprise JSC «Radiosvyaz» are discussed. The architecture of the automated control system for warehousing proposed in the paper consists of a server connected to two physically separated information networks: the network with a database server, which stores information about the orders for picking, and the network with the automated storage and retrieval system. This principle allows implementing the requirements for differentiation of access, ensuring the information safety and security requirements. Also, the efficiency of the developed automated solutions in terms of optimizing the warehouse’s logistic characteristics is investigated.
Addressing BI Transactional Flows in the Real-Time Enterprise Using GoldenGate TDM
NASA Astrophysics Data System (ADS)
Pareek, Alok
It's time to visit low-latency and reliable real-time (RT) infrastructures to support next generation BI applications instead of continually debating the need and notion of real-time. The last few years have illuminated some key paradigms affecting data management. The arguments put forth to move away from traditional DBMS architectures have proven persuasive - and specialized architectural data stores are being adopted in the industry [1]. The change from traditional database pull methods towards intelligent routing/push models is underway, causing applications to be redesigned, redeployed, and re-architected. One direct result of this is that despite original warnings about replication [2] - enterprises continue to deploy multiple replicas to support both performance and high availability of RT applications, with an added complexity around manageability of heterogeneous computing systems. The enterprise is overflowing with data streams that require instantaneous processing and integration, to deliver faster visibility and invoke conjoined actions for RT decision making, resulting in deployment of advanced BI applications as can be seen by stream processing over RT feeds from operational systems for CEP [3]. Given these various paradigms, a multitude of new challenges and requirements have emerged, thereby necessitating different approaches to management of RT applications for BI. The purpose of this paper is to offer a viewpoint on how RT affects critical operational applications, evolves the weight of non-critical applications, and pressurizes availability/data-movement requirements in the underlying infrastructure. I will discuss how the GoldenGate TDM platform is being deployed within the RTE to manage some of these challenges particularly around RT dissemination of transactional data to reduce latency in data integration flows, to enable real-time reporting/DW, and to increase availability of underlying operational systems. Real world case studies will be used to support the various discussion points. The paper is an argument to augment traditional DI flows with a real-time technology (referred to as transactional data management) to support operational BI requirements.
Transforming LandWarNet: Implementing the Enterprise Strategy
2010-08-01
Over the past decade, the United States’ global defense posture has...when they need it, in any environment. A Soldier’s Story...LandWarNet is the Army’s solution to this enterprise network requirement...Architecture...LandWarNet...To form a truly unified enterprise network, demarcated only by classification enclaves, the Army must change its
2015-04-30
mobile devices used within academic, business, or government enterprises. Acquisition personnel in such enterprises will increasingly be called on to...Graduate School of Business & Public Policy at the Naval Postgraduate School. To request defense acquisition research, to become a research sponsor, or to...address challenges in the acquisition of software systems for Web-based or mobile devices used within academic, business, or government enterprises
A Technical Infrastructure to Integrate Dynamics AX ERP and CRM into University Curriculum
ERIC Educational Resources Information Center
Wimmer, Hayden; Hall, Kenneth
2016-01-01
Enterprise Resource Planning and Customer Relationship Management are becoming important topics at the university level, and are increasingly receiving course-level attention in the curriculum. In fact, the Information Systems Body of Knowledge specifically identifies Enterprise Architecture as an Information Systems-specific knowledge area. The…
Achieving Better Buying Power through Acquisition of Open Architecture Software Systems: Volume 1
2016-01-06
supporting “Bring Your Own Devices” (BYOD)? New business models for OA software components: franchising, enterprise licensing, metered usage...paths. IP and cybersecurity requirements will need continuous attention!
Enterprise and system of systems capability development life-cycle processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beck, David Franklin
2014-08-01
This report and set of appendices are a collection of memoranda originally drafted circa 2007-2009 for the purpose of describing and detailing a models-based systems engineering approach for satisfying enterprise and system-of-systems life cycle process requirements. At the time there was interest and support to move from Capability Maturity Model Integration (CMMI) Level One (ad hoc processes) to Level Three. The main thrust of the material presents a rational exposé of a structured enterprise development life cycle that uses the scientific method as a framework, with further rigor added from adapting relevant portions of standard systems engineering processes. While the approach described invokes application of the Department of Defense Architectural Framework (DoDAF), it is suitable for use with other architectural description frameworks.
Architecture of next-generation information management systems for digital radiology enterprises
NASA Astrophysics Data System (ADS)
Wong, Stephen T. C.; Wang, Huili; Shen, Weimin; Schmidt, Joachim; Chen, George; Dolan, Tom
2000-05-01
Few information systems today offer a clear and flexible means to define and manage the automated part of radiology processes. None of them provide a coherent and scalable architecture that can easily cope with heterogeneity and inevitable local adaptation of applications. Most importantly, they often lack a model that can integrate clinical and administrative information to aid better decisions in managing resources, optimizing operations, and improving productivity. Digital radiology enterprises require cost-effective solutions to deliver information to the right person in the right place and at the right time. We propose a new architecture of image information management systems for digital radiology enterprises. Such a system is based on the emerging technologies in workflow management, distributed object computing, and Java and Web techniques, as well as Philips' domain knowledge in radiology operations. Our design adopts the '4+1' architectural view approach. In this new architecture, PACS and RIS will become one while user interaction can be automated by customized workflow processes. Clinical service applications are implemented as active components. They can be reasonably substituted by locally adapted applications and can be multiplied for fault tolerance and load balancing. Furthermore, it will provide powerful query and statistical functions for managing resources and improving productivity in real time. This work will lead to a new direction of image information management in the next millennium. We will illustrate the innovative design with implemented examples of a working prototype.
ERIC Educational Resources Information Center
Cooper, Richard P.
2007-01-01
It has been suggested that the enterprise of developing mechanistic theories of the human cognitive architecture is flawed because the theories produced are not directly falsifiable. Newell attempted to sidestep this criticism by arguing for a Lakatosian model of scientific progress in which cognitive architectures should be understood as theories…
NASA Astrophysics Data System (ADS)
Li, Qing; Wang, Ze-yuan; Cao, Zhi-chao; Du, Rui-yang; Luo, Hao
2015-08-01
With the process of globalisation and the development of management models and information technology, enterprise cooperation and collaboration have developed from intra-enterprise integration, outsourcing and inter-enterprise integration, and supply chain management, to virtual enterprises and enterprise networks. Some midfielder enterprises begin to serve different supply chains. Therefore, they combine related supply chains into a complex enterprise network. The main challenges for an enterprise network's integration and collaboration are business process and data fragmentation beyond organisational boundaries. This paper reviews the requirements of enterprise network integration and collaboration, as well as the development of new information technologies. Based on service-oriented architecture (SOA), collaboration modelling and collaboration agents are introduced to solve problems of collaborative management for service convergence under the condition of process and data fragmentation. A model-driven methodology is developed to design and deploy the integrating framework. An industrial experiment is designed and implemented to illustrate the usage of the technologies developed in this paper.
Development and Implementation of Kumamoto Technopolis Regional Database T-KIND
NASA Astrophysics Data System (ADS)
Onoue, Noriaki
T-KIND (Techno-Kumamoto Information Network for Data-Base) is a system for effectively searching information on technology, human resources and industries which are necessary to realize Kumamoto Technopolis. It is composed of a coded database, an image database and a LAN inside the techno-research park which is the center of R & D in the Technopolis. It constructs an on-line system by networking general-purpose computers, minicomputers, optical disk file systems and so on, and provides the service through public telephone lines. Two databases are now available, on enterprise information and human resource information. The former covers about 4,000 enterprises, and the latter about 2,000 persons.
Totally Integrated Munitions Enterprise ''Affordable Munitions Production for the 21st Century''
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burleson, R.R.; Poggio, M.E.; Rosenberg, S.J.
2000-09-13
The U.S. Army faces several munitions manufacturing issues: downsizing of the organic production base, timely fielding of affordable smart munitions, and munitions replenishment during national emergencies. Totally Integrated Munitions Enterprise (TIME) is addressing these complex issues via the development and demonstration of an integrated enterprise. The enterprise will include the tools, network, and open modular architecture controllers to enable accelerated acquisition, shortened concept to volume production, lower life cycle costs, capture of critical manufacturing processes, and communication of process parameters between remote sites to rapidly spin-off production for replenishment by commercial sources. TIME addresses the enterprise as a system, integrating design, engineering, manufacturing, administration, and logistics.
Totally Integrated Munitions Enterprise ''Affordable Munitions Production for the 21st Century''
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burleson, R.R.; Poggio, M.E.; Rosenberg, S.J.
2000-08-18
The U.S. Army faces several munitions manufacturing issues: downsizing of the organic production base, timely fielding of affordable smart munitions, and munitions replenishment during national emergencies. Totally Integrated Munitions Enterprise (TIME) is addressing these complex issues via the development and demonstration of an integrated enterprise. The enterprise will include the tools, network, and open modular architecture controllers to enable accelerated acquisition, shortened concept to volume production, lower life cycle costs, capture of critical manufacturing processes, and communication of process parameters between remote sites to rapidly spin-off production for replenishment by commercial sources. TIME addresses the enterprise as a system, integrating design, engineering, manufacturing, administration, and logistics.
Smart Systems for Logistics Command and Control (SSLC2)
2004-06-01
design options ... AFRL Risk Abatement (continued): awareness of key development projects (AF Portal, GCSS-AF, TBMCS-UL, Enterprise Data Warehouse, ... Logistics Enterprise Architecture); early identification of Transition Agents. Collaboration Partners: AF-ILMM, AMC/A-4, AFC2ISRC, AFMC LSO
Concept of operations for knowledge discovery from Big Data across enterprise data warehouses
NASA Astrophysics Data System (ADS)
Sukumar, Sreenivas R.; Olama, Mohammed M.; McNair, Allen W.; Nutaro, James J.
2013-05-01
The success of data-driven business in government, science, and private industry is driving the need for seamless integration of intra and inter-enterprise data sources to extract knowledge nuggets in the form of correlations, trends, patterns and behaviors previously not discovered due to physical and logical separation of datasets. Today, as volume, velocity, variety and complexity of enterprise data keeps increasing, the next generation analysts are facing several challenges in the knowledge extraction process. Towards addressing these challenges, data-driven organizations that rely on the success of their analysts have to make investment decisions for sustainable data/information systems and knowledge discovery. Options that organizations are considering are newer storage/analysis architectures, better analysis machines, redesigned analysis algorithms, collaborative knowledge management tools, and query builders amongst many others. In this paper, we present a concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions. We base our recommendations on the experience gained from integrating multi-agency enterprise data warehouses at the Oak Ridge National Laboratory to design the foundation of future knowledge nurturing data-system architectures.
A Review of Enterprise Architecture Use in Defence
2014-09-01
dictionary of terms; architecture description language; architectural information (pertaining both to specific projects and higher level...) ... Z39.19 2005 Monolingual Controlled Vocabularies, National Information Standards Organisation, Bethesda: NISO Press, 2005. BABOK 2009...togaf/ Z39.19 2005 ANSI/NISO Z39.19, Guidelines for the Construction, Format, and Management of Monolingual Controlled Vocabularies, Bethesda: NISO
A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.
Law, V.; Goldberg, H. S.; Jones, P.; Safran, C.
1998-01-01
One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system. PMID:9929252
A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.
Law, V; Goldberg, H S; Jones, P; Safran, C
1998-01-01
One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system.
Systems budgets architecture and development for the Maunakea Spectroscopic Explorer
NASA Astrophysics Data System (ADS)
Mignot, Shan; Flagey, Nicolas; Szeto, Kei; Murowinski, Rick; McConnachie, Alan
2016-08-01
The Maunakea Spectroscopic Explorer (MSE) project is an enterprise to upgrade the existing Canada-France-Hawaii observatory into a spectroscopic facility based on a 10 meter-class telescope. As such, the project relies on engineering requirements not limited to its instruments (the low, medium and high resolution spectrographs) but covering the whole observatory. The science requirements, the operations concept, the project management and the applicable regulations are the basis from which these requirements are initially derived, yet they do not form hierarchies as each may serve several purposes, that is, pertain to several budgets. Completeness and consistency are hence the main systems engineering challenges for such a large project as MSE. Special attention is devoted to ensuring the traceability of requirements via parametric models, derivation documents, simulations, and finally maintaining KAOS diagrams and a database under IBM Rational DOORS linking them together. This paper will present the architecture of the main budgets under development and the associated processes, and expand to highlight those that are interrelated and how the system, as a whole, is then optimized by modelling and analysis of the pertinent system parameters.
Using enterprise architecture to analyse how organisational structure impact motivation and learning
NASA Astrophysics Data System (ADS)
Närman, Pia; Johnson, Pontus; Gingnell, Liv
2016-06-01
When technology, environment, or strategies change, organisations need to adjust their structures accordingly. These structural changes do not always enhance the organisational performance as intended, partly because organisational developers do not understand the consequences of structural changes on performance. This article presents a model-based analysis framework for quantitative analysis of the effect of organisational structure on organisation performance in terms of employee motivation and learning. The model is based on Mintzberg's work on organisational structure. The quantitative analysis is formalised using the Object Constraint Language (OCL) and the Unified Modelling Language (UML) and implemented in an enterprise architecture tool.
Managing changes in the enterprise architecture modelling context
NASA Astrophysics Data System (ADS)
Khanh Dam, Hoa; Lê, Lam-Son; Ghose, Aditya
2016-07-01
Enterprise architecture (EA) models the whole enterprise in various aspects regarding both business processes and information technology resources. As the organisation grows, the architecture of its systems and processes must also evolve to meet the demands of the business environment. Evolving an EA model may involve making changes to various components across different levels of the EA. As a result, an important issue before making a change to an EA model is assessing the ripple effect of the change, i.e. change impact analysis. Another critical issue is change propagation: given a set of primary changes that have been made to the EA model, what additional secondary changes are needed to maintain consistency across multiple levels of the EA. There has, however, been limited work on supporting the maintenance and evolution of EA models. This article proposes an EA description language, namely ChangeAwareHierarchicalEA, integrated with an evolution framework to support both change impact analysis and change propagation within an EA model. The core part of our framework is a technique for computing the impact of a change and a new method for generating interactive repair plans from Alloy consistency rules that constrain the EA model.
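To make the change-impact idea concrete, the following is a minimal sketch, not the paper's ChangeAwareHierarchicalEA language or its Alloy-based repair generation: it estimates the ripple effect of a primary change by traversing a hypothetical dependency graph between EA elements across layers.

```python
# Illustrative only: the element names and dependency edges are assumptions.
from collections import deque

# Hypothetical EA model: element -> elements that depend on it.
DEPENDENTS = {
    "business_process:claims_handling": ["application:claims_app"],
    "application:claims_app": ["service:claims_api", "database:claims_db"],
    "service:claims_api": ["application:portal"],
    "database:claims_db": [],
    "application:portal": [],
}

def change_impact(primary_change):
    """Return all elements transitively affected by a primary change."""
    affected, queue = set(), deque([primary_change])
    while queue:
        element = queue.popleft()
        for dependent in DEPENDENTS.get(element, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return sorted(affected)

if __name__ == "__main__":
    # Candidate secondary changes needed to keep the model consistent.
    print(change_impact("business_process:claims_handling"))
```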
A framework for investigation into extended enterprise resilience
NASA Astrophysics Data System (ADS)
Erol, Ozgur; Sauser, Brian J.; Mansouri, Mo
2010-05-01
This article proposes a framework for investigation into 'extended enterprise resilience' based on the key attributes of enterprise resilience in the context of extended enterprises. Such attributes, namely agility, flexibility, adaptability and connectivity, are frequently defined as supporting attributes of enterprise resilience, but the issue is how they can be more effectively applied to extended enterprises. The role of information technology in assisting connectivity and collaboration is frequently recognised as contributing to resilience on all levels, and will likewise be employed on the level of extended enterprise systems. The proposed framework is based on the expanded application of two primary enablers of enterprise resilience: (i) the capability of an enterprise to connect systems, people, processes and information in a way that allows enterprise to become more connected and responsive to the dynamics of its environment, stakeholders and competitors; (ii) the alignment of information technology with business goals. The former requires inter- and intra-level interoperability and integration within the extended enterprises, and the latter requires modelling of the underlying technology infrastructure and creation of a consolidated view of, and access to, all available resources in the extended enterprises that can be attained by well-defined enterprise architecture.
Web Monitoring of EOS Front-End Ground Operations, Science Downlinks and Level 0 Processing
NASA Technical Reports Server (NTRS)
Cordier, Guy R.; Wilkinson, Chris; McLemore, Bruce
2008-01-01
This paper addresses the efforts undertaken and the technology deployed to aggregate and distribute the metadata characterizing the real-time operations associated with NASA Earth Observing Systems (EOS) high-rate front-end systems and the science data collected at multiple ground stations and forwarded to the Goddard Space Flight Center for level 0 processing. Station operators, mission project management personnel, spacecraft flight operations personnel and data end-users for various EOS missions can retrieve the information at any time from any location having access to the internet. The users are distributed and the EOS systems are distributed but the centralized metadata accessed via an external web server provide an effective global and detailed view of the enterprise-wide events as they are happening. The data-driven architecture and the implementation of applied middleware technology, open source database, open source monitoring tools, and external web server converge nicely to fulfill the various needs of the enterprise. The timeliness and content of the information provided are key to making timely and correct decisions which reduce project risk and enhance overall customer satisfaction. The authors discuss security measures employed to limit access of data to authorized users only.
47 CFR 52.25 - Database architecture and administration.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...
47 CFR 52.25 - Database architecture and administration.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...
47 CFR 52.25 - Database architecture and administration.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...
47 CFR 52.25 - Database architecture and administration.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...
47 CFR 52.25 - Database architecture and administration.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...
Enterprise-class Digital Imaging and Communications in Medicine (DICOM) image infrastructure.
York, G; Wortmann, J; Atanasiu, R
2001-06-01
Most current picture archiving and communication systems (PACS) are designed for a single department or a single modality. Few PACS installations have been deployed that support the needs of the hospital or the entire Integrated Delivery Network (IDN). The authors propose a new image management architecture that can support a large, distributed enterprise.
Trusted computation through biologically inspired processes
NASA Astrophysics Data System (ADS)
Anderson, Gustave W.
2013-05-01
Due to supply chain threats it is no longer a reasonable assumption that traditional protections alone will provide sufficient security for enterprise systems. The proposed cognitive trust model architecture extends the state-of-the-art in enterprise anti-exploitation technologies by providing collective immunity through backup and cross-checking, proactive health monitoring and adaptive/autonomic threat response, and network resource diversity.
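One possible reading of "collective immunity through backup and cross-checking" is redundant execution with majority voting; the sketch below illustrates that idea only, with invented replica functions and threshold, and is not the cognitive trust model described above.

```python
# Minimal cross-checking sketch: run the same computation on several replicas
# and accept the majority answer; replicas and inputs are illustrative.
from collections import Counter

def cross_check(replicas, *args):
    """Return the majority result; raise if no majority agreement exists."""
    results = [replica(*args) for replica in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= len(results) // 2:
        raise RuntimeError("no majority agreement; possible compromise or fault")
    return value

# Three hypothetical implementations of the same function (one faulty).
replicas = [lambda x: x * x, lambda x: x * x, lambda x: x * x + 1]
print(cross_check(replicas, 4))  # -> 16, the majority result
```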
Notes on a Vision for the Global Space Weather Enterprise
NASA Astrophysics Data System (ADS)
Head, James N.
2015-07-01
Space weather phenomena impact human civilization on a global scale and hence call for a global approach to research, monitoring, and operational forecasting. The Global Space Weather Enterprise (GSWE) could be arranged along lines well established in existing international frameworks related to space exploration or to the use of space to benefit humanity. The Enterprise need not establish a new organization, but could evolve from existing international organizations. A GSWE employing open architectural concepts could be arranged to promote participation by all interested States regardless of current differences in science and technical capacity. Such an Enterprise would engender capacity building and burden sharing opportunities.
Meaningful Cost-Benefit Analysis for Service-Oriented Architecture Projects
2010-05-01
SOA to identify these activities and shows how those costs come to be commingled with other development and maintenance activities. The paper argues...affected by SOA. To be consistent with the separation suggested above, this paper suggests the following new activities: Enterprise architecture... This paper argues that proper cost-benefit analysis of service-oriented architecture projects is not
2009-05-27
technology network architecture to connect various DHS elements and promote information sharing. Establish a DHS State, Local, and Regional...A Strategic Plan; training, and the implementation of a comprehensive information systems architecture. As part of its integration...information technology network architecture was submitted to Congress last year. See DHS I&A, Homeland Security Information Technology Network
IHE cross-enterprise document sharing for imaging: design challenges
NASA Astrophysics Data System (ADS)
Noumeir, Rita
2006-03-01
Integrating the Healthcare Enterprise (IHE) has recently published a new integration profile for sharing documents between multiple enterprises. The Cross-Enterprise Document Sharing Integration Profile (XDS) lays the basic framework for deploying regional and national Electronic Health Record (EHR). This profile proposes an architecture based on a central Registry that holds metadata information describing published Documents residing in one or multiple Documents Repositories. As medical images constitute important information of the patient health record, it is logical to extend the XDS Integration Profile to include images. However, including images in the EHR presents many challenges. The complete image set is very large; it is useful for radiologists and other specialists such as surgeons and orthopedists. The imaging report, on the other hand, is widely needed and its broad accessibility is vital for achieving optimal patient care. Moreover, a subset of relevant images may also be of wide interest along with the report. Therefore, IHE recently published a new integration profile for sharing images and imaging reports between multiple enterprises. This new profile, the Cross-Enterprise Document Sharing for Imaging (XDS-I), is based on the XDS architecture. The XDS-I integration solution that is published as part of the IHE Technical Framework is the result of an extensive investigation effort of several design solutions. This paper presents and discusses the design challenges and the rationales behind the design decisions of the IHE XDS-I Integration Profile, for a better understanding and appreciation of the final published solution.
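To illustrate the registry/repository split that XDS and XDS-I rest on, here is a toy in-memory analogue only: repositories hold the content (documents or image manifests) while the registry holds metadata that points to them. The real profile uses ebXML registry transactions; the class names and metadata fields below are invented for illustration.

```python
# Toy analogue of the XDS registry/repository separation; not the IHE transactions.
from dataclasses import dataclass, field

@dataclass
class Repository:
    repository_id: str
    documents: dict = field(default_factory=dict)  # document_id -> content bytes

    def store(self, document_id, content):
        self.documents[document_id] = content

@dataclass
class Registry:
    entries: list = field(default_factory=list)  # metadata only, no content

    def register(self, patient_id, document_id, repository_id, doc_type):
        self.entries.append({"patient_id": patient_id, "document_id": document_id,
                             "repository_id": repository_id, "type": doc_type})

    def query(self, patient_id):
        return [e for e in self.entries if e["patient_id"] == patient_id]

repo = Repository("imaging-repo-1")
registry = Registry()
repo.store("doc-001", b"<imaging report>")
registry.register("patient-42", "doc-001", "imaging-repo-1", "imaging report")
for entry in registry.query("patient-42"):          # consumer queries metadata first,
    print(entry, repo.documents[entry["document_id"]])  # then retrieves the content
```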
A Framework for a Decision Support System in a Hierarchical Extended Enterprise Decision Context
NASA Astrophysics Data System (ADS)
Boza, Andrés; Ortiz, Angel; Vicens, Eduardo; Poler, Raul
Decision Support System (DSS) tools provide useful information to decision makers. In an Extended Enterprise, a new goal, changes in the current objectives, or small changes in the extended enterprise configuration require a corresponding adjustment in its decision system. A DSS in this context must be flexible and agile enough to allow easy and quick adaptation to the new context. This paper proposes to extend the Hierarchical Production Planning (HPP) structure to an Extended Enterprise decision-making context. In this way, a framework for DSS in the Extended Enterprise context is defined using components of HPP. Interoperability details have been reviewed to identify their impact on this framework. The proposed framework helps overcome some interoperability barriers, identifies and organizes components for a DSS in an Extended Enterprise context, and supports the definition of an architecture to be used in the design of a flexible DSS that can reuse components for future Extended Enterprise configurations.
Security Aspects of an Enterprise-Wide Network Architecture.
ERIC Educational Resources Information Center
Loew, Robert; Stengel, Ingo; Bleimann, Udo; McDonald, Aidan
1999-01-01
Presents an overview of two projects that concern local area networks and the common point between networks as they relate to network security. Discusses security architectures based on firewall components, packet filters, application gateways, security-management components, an intranet solution, user registration by Web form, and requests for…
Proposing an Optimal Learning Architecture for the Digital Enterprise.
ERIC Educational Resources Information Center
O'Driscoll, Tony
2003-01-01
Discusses the strategic role of learning in information age organizations; analyzes parallels between the application of technology to business and the application of technology to learning; and proposes a learning architecture that aligns with the knowledge-based view of the firm and optimizes the application of technology to achieve proficiency…
Reshaping the Enterprise through an Information Architecture and Process Reengineering.
ERIC Educational Resources Information Center
Laudato, Nicholas C.; DeSantis, Dennis J.
1995-01-01
The approach used by the University of Pittsburgh (Pennsylvania) in designing a campus-wide information architecture and a framework for reengineering the business process included building consensus on a general philosophy for information systems, using pattern-based abstraction techniques, applying data modeling and application prototyping, and…
An end-to-end communications architecture for condition-based maintenance applications
NASA Astrophysics Data System (ADS)
Kroculick, Joseph
2014-06-01
This paper explores challenges in implementing an end-to-end communications architecture for Condition-Based Maintenance Plus (CBM+) data transmission which aligns with the Army's Network Modernization Strategy. The Army's Network Modernization strategy is based on rolling out network capabilities which connect the smallest unit and Soldier level to enterprise systems. CBM+ is a continuous improvement initiative over the life cycle of a weapon system or equipment to improve the reliability and maintenance effectiveness of Department of Defense (DoD) systems. CBM+ depends on the collection, processing and transport of large volumes of data. An important capability that enables CBM+ is an end-to-end network architecture that enables data to be uploaded from the platform at the tactical level to enterprise data analysis tools. To connect end-to-end maintenance processes in the Army's supply chain, a CBM+ network capability can be developed from available network capabilities.
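As a purely illustrative sketch of the kind of edge-to-enterprise upload the abstract describes, the snippet below batches platform condition readings and posts them to an enterprise endpoint. The endpoint URL, platform identifier, message fields, and transport are assumptions, not part of the CBM+ specification or the Army network design.

```python
# Illustrative only: package condition-monitoring readings on the platform and
# upload them to a hypothetical enterprise analysis endpoint over HTTP.
import json
import urllib.request

def upload_batch(readings, endpoint="https://example.mil/cbm/upload"):
    payload = json.dumps({"platform_id": "vehicle-123", "readings": readings}).encode()
    request = urllib.request.Request(endpoint, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:  # blocks until the
        return response.status                         # enterprise side replies

readings = [{"sensor": "oil_temp", "value": 92.5, "t": "2014-06-01T12:00:00Z"}]
# upload_batch(readings)  # left commented: requires network access to the endpoint
```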
Integrating Environmental and Information Systems Management: An Enterprise Architecture Approach
NASA Astrophysics Data System (ADS)
Noran, Ovidiu
Environmental responsibility is fast becoming an important aspect of strategic management as the reality of climate change settles in and relevant regulations are expected to tighten significantly in the near future. Many businesses react to this challenge by implementing environmental reporting and management systems. However, the environmental initiative is often not properly integrated in the overall business strategy and its information system (IS) and as a result the management does not have timely access to (appropriately aggregated) environmental information. This chapter argues for the benefit of integrating the environmental management (EM) project into the ongoing enterprise architecture (EA) initiative present in all successful companies. This is done by demonstrating how a reference architecture framework and a meta-methodology using EA artefacts can be used to co-design the EM system, the organisation and its IS in order to achieve a much needed synergy.
The NASA Integrated Information Technology Architecture
NASA Technical Reports Server (NTRS)
Baldridge, Tim
1997-01-01
This document defines an Information Technology Architecture for the National Aeronautics and Space Administration (NASA), where Information Technology (IT) refers to the hardware, software, standards, protocols and processes that enable the creation, manipulation, storage, organization and sharing of information. An architecture provides an itemization and definition of these IT structures, a view of the relationship of the structures to each other and, most importantly, an accessible view of the whole. It is a fundamental assumption of this document that a useful, interoperable and affordable IT environment is key to the execution of the core NASA scientific and project competencies and business practices. This Architecture represents the highest level system design and guideline for NASA IT related activities and has been created on the authority of the NASA Chief Information Officer (CIO) and will be maintained under the auspices of that office. It addresses all aspects of general purpose, research, administrative and scientific computing and networking throughout the NASA Agency and is applicable to all NASA administrative offices, projects, field centers and remote sites. Through the establishment of five Objectives and six Principles this Architecture provides a blueprint for all NASA IT service providers: civil service, contractor and outsourcer. The most significant of the Objectives and Principles are the commitment to customer-driven IT implementations and the commitment to a simpler, cost-efficient, standards-based, modular IT infrastructure. In order to ensure that the Architecture is presented and defined in the context of the mission, project and business goals of NASA, this Architecture consists of four layers in which each subsequent layer builds on the previous layer. They are: 1) the Business Architecture: the operational functions of the business, or Enterprise, 2) the Systems Architecture: the specific Enterprise activities within the context of IT systems, 3) the Technical Architecture: a common, vendor-independent framework for design, integration and implementation of IT systems and 4) the Product Architecture: vendor-specific IT solutions. The Systems Architecture is effectively a description of the end-user "requirements". Generalized end-user requirements are discussed and subsequently organized into specific mission and project functions. The Technical Architecture depicts the framework, and relationship, of the specific IT components that enable the end-user functionality as described in the Systems Architecture. The primary components as described in the Technical Architecture are: 1) Applications: Basic Client Component, Object Creation Applications, Collaborative Applications, Object Analysis Applications, 2) Services: Messaging, Information Broker, Collaboration, Distributed Processing, and 3) Infrastructure: Network, Security, Directory, Certificate Management, Enterprise Management and File System. This Architecture also provides specific Implementation Recommendations, the most significant of which is the recognition of IT as core to NASA activities and defines a plan, which is aligned with the NASA strategic planning processes, for keeping the Architecture alive and useful.
Modeling Adaptable Business Service for Enterprise Collaboration
NASA Astrophysics Data System (ADS)
Boukadi, Khouloud; Vincent, Lucien; Burlat, Patrick
Nowadays, a Service Oriented Architecture (SOA) seems to be one of the most promising paradigms for leveraging enterprise information systems. SOA creates opportunities for enterprises to provide value-added services tailored for on-demand enterprise collaboration. With the emergence and rapid development of Web services technologies, SOA is being paid increasing attention and has become widespread. In spite of the popularity of SOA, a standardized framework for modeling and implementing business services is still in progress. For the purpose of supporting these service-oriented solutions, we adopt a model-driven development approach. This paper outlines the Contextual Service Oriented Modeling and Analysis (CSOMA) methodology and presents UML profiles for PIM-level service-oriented architectural modeling, as well as its corresponding meta-models. The proposed PIM (Platform Independent Model) describes the business SOA at a high level of abstraction, regardless of the techniques involved in implementing the application. In addition, all essential service-specific concerns required for delivering quality and context-aware services are covered. Some of the advantages of this approach are that it is generic, and thus not closely tied to Web service technology, and that it specifically treats service adaptability during the design stage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Nutaro, James J; Sukumar, Sreenivas R
2013-01-01
The success of data-driven business in government, science, and private industry is driving the need for seamless integration of intra and inter-enterprise data sources to extract knowledge nuggets in the form of correlations, trends, patterns and behaviors previously not discovered due to physical and logical separation of datasets. Today, as volume, velocity, variety and complexity of enterprise data keeps increasing, the next generation analysts are facing several challenges in the knowledge extraction process. Towards addressing these challenges, data-driven organizations that rely on the success of their analysts have to make investment decisions for sustainable data/information systems and knowledge discovery. Options that organizations are considering are newer storage/analysis architectures, better analysis machines, redesigned analysis algorithms, collaborative knowledge management tools, and query builders amongst many others. In this paper, we present a concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions. We base our recommendations on the experience gained from integrating multi-agency enterprise data warehouses at the Oak Ridge National Laboratory to design the foundation of future knowledge nurturing data-system architectures.
Pohjonen, Hanna; Ross, Peeter; Blickman, Johan G; Kamman, Richard
2007-01-01
Emerging technologies are transforming the workflows in healthcare enterprises. Computing grids and handheld mobile/wireless devices are providing clinicians with enterprise-wide access to all patient data and analysis tools on a pervasive basis. In this paper, emerging technologies are presented that provide computing grids and streaming-based access to image and data management functions, and system architectures that enable pervasive computing on a cost-effective basis. Finally, the implications of such technologies are investigated regarding the positive impacts on clinical workflows.
Reliability Engineering for Service Oriented Architectures
2013-02-01
Common Object Request Broker Architecture. Ecosystem: In software, an ecosystem is a set of applications and/or services that gradually build up over time...Enterprise Service Bus. Foreign: In an SOA context, any SOA, service or software which the owners of the calling software do not have control of, either...SOA: Service Oriented Architecture. SRE: Software Reliability Engineering. System Mode: Many systems exhibit different modes of operation, e.g. the cockpit
2010-03-19
network architecture to connect various DHS elements and promote information sharing. Establish a DHS State, Local, and Regional Fusion Center...of reports; the I&A Strategic Plan; training, and the implementation of a comprehensive information systems architecture. As part of its...comprehensive information technology network architecture was submitted to Congress last year. See DHS I&A, Homeland Security Information Technology Network
2016-02-22
Acquisition Research Program Sponsored Report Series: Achieving Better Buying Power through Acquisition of Open Architecture Software Systems for Web and Mobile Devices...Naval Postgraduate School Executive Summary: Many people within large enterprises rely on up to four Web-based or mobile devices for their
Enterprise systems security management: a framework for breakthrough protection
NASA Astrophysics Data System (ADS)
Farroha, Bassam S.; Farroha, Deborah L.
2010-04-01
Securing the DoD information network is a tremendous task due to its size, access locations and the number of network intrusion attempts it faces on a daily basis. This analysis investigates methods and architecture options to deliver capabilities for a secure information sharing environment. Crypto-binding and intelligent access controls are basic requirements for secure information sharing in a net-centric environment. We introduce many of the new technology components to secure the enterprise. The cooperative mission requirements lead to developing automatic data discovery and to data stewards granting access to Cross Domain (CD) data repositories or live streaming data. Multiple architecture models are investigated to determine best-of-breed approaches, including SOA and Private/Public Clouds.
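As a minimal sketch of the "intelligent access controls" idea above, the snippet below evaluates an attribute-based policy for cross-domain sharing. The clearance levels, community labels, and the single policy rule are illustrative assumptions, not the paper's framework.

```python
# Minimal attribute-based access-control check; labels and rule are invented.
CLEARANCE_ORDER = {"unclassified": 0, "secret": 1, "top_secret": 2}

def may_access(subject, resource):
    """Grant access only if clearance dominates the label and communities overlap."""
    clearance_ok = CLEARANCE_ORDER[subject["clearance"]] >= CLEARANCE_ORDER[resource["label"]]
    community_ok = bool(set(subject["communities"]) & set(resource["communities"]))
    return clearance_ok and community_ok

analyst = {"clearance": "secret", "communities": ["coalition-x"]}
report = {"label": "secret", "communities": ["coalition-x", "coalition-y"]}
print(may_access(analyst, report))  # True: clearance sufficient, shared community
```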
Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach
NASA Astrophysics Data System (ADS)
Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.
Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
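The following is a toy projection, not the authors' model-driven transformation rules: it derives one enterprise's interface process from a global collaborative process by keeping only the interactions in which that enterprise participates. The roles and messages are invented.

```python
# Toy derivation of an interface process (one enterprise's view) from a
# collaborative process model; process content is illustrative only.
collaborative_process = [
    {"from": "buyer", "to": "supplier", "message": "purchase_order"},
    {"from": "supplier", "to": "buyer", "message": "order_confirmation"},
    {"from": "supplier", "to": "carrier", "message": "shipping_request"},
    {"from": "carrier", "to": "buyer", "message": "delivery_notice"},
]

def interface_process(role):
    """Keep only the send/receive activities visible to the given role."""
    view = []
    for step in collaborative_process:
        if step["from"] == role:
            view.append(("send", step["message"], step["to"]))
        elif step["to"] == role:
            view.append(("receive", step["message"], step["from"]))
    return view

print(interface_process("supplier"))
# [('receive', 'purchase_order', 'buyer'), ('send', 'order_confirmation', 'buyer'),
#  ('send', 'shipping_request', 'carrier')]
```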
Totally Integrated Munitions Enterprise ''Affordable Munitions Production for the 21st Century''
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burleson, R.R.; Poggio, M.E.; Rosenberg, S.J.
2000-07-14
The U.S. Army faces several munitions manufacturing issues: downsizing of the organic production base, timely fielding of affordable smart munitions, and munitions replenishment during national emergencies. TIME is addressing these complex issues via the development and demonstration of an integrated enterprise. The enterprise will include the tools, network, and open modular architecture controller to enable accelerated acquisition, shortened concept to volume production, lower life cycle costs, capture of critical manufacturing processes, and communication of process parameters between remote sites to rapidly spin-off production for replenishment by commercial sources. TIME addresses the enterprise as a system, integrating design, engineering, manufacturing, administration, and logistics.
pLog enterprise-enterprise GIS-based geotechnical data management system enhancements.
DOT National Transportation Integrated Search
2015-12-01
Recent efforts by the Louisiana Department of Transportation and Development (DOTD) and the Louisiana Transportation Research Center (LTRC) have developed a Geotechnical Information Database, with a Geographic Information System (GIS) interface....
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salguero, Laura Marie; Huff, Johnathon; Matta, Anthony R.
Sandia National Laboratories is an organization with a wide range of research and development activities that include nuclear, explosives, and chemical hazards. In addition, Sandia has over 2000 labs and over 40 major test facilities, such as the Thermal Test Complex, the Lightning Test Facility, and the Rocket Sled Track. In order to support safe operations, Sandia has a diverse Environment, Safety, and Health (ES&H) organization that provides expertise to support engineers and scientists in performing work safely. With such a diverse organization to support, the ES&H program continuously seeks opportunities to improve the services provided for Sandia by using various methods as part of their risk management strategy. One of the methods being investigated is using enterprise architecture analysis to mitigate risk inducing characteristics such as normalization of deviance, organizational drift, and problems in information flow. This paper is a case study for how a Department of Defense Architecture Framework (DoDAF) model of the ES&H enterprise, including information technology applications, can be analyzed to understand the level of risk associated with the risk inducing characteristics discussed above. While the analysis is not complete, we provide proposed analysis methods that will be used for future research as the project progresses.
Peer-to-peer architecture for multi-departmental distributed PACS
NASA Astrophysics Data System (ADS)
Rosset, Antoine; Heuberger, Joris; Pysher, Lance; Ratib, Osman
2006-03-01
We have elected to explore peer-to-peer technology as an alternative to a centralized PACS architecture in order to meet the increasing requirements for wide access to images inside and outside a radiology department. The goal is to allow users across the enterprise to access any study at any time without the need for prefetching or routing of images from a central archive. Images can be accessed between different workstations and local storage nodes. We implemented "Bonjour", a remote file access technology developed by Apple that allows applications to share data and files remotely with optimized data access and transfer. Our open-source image display platform, OsiriX, was adapted to share local DICOM images by making each workstation's local SQL database directly accessible from any other OsiriX workstation over the network. A server version of the OsiriX Core Data database also allows access to distributed archive servers in the same way. The implemented infrastructure allows fast and efficient access to any image, anywhere, at any time, independently of the actual physical location of the data. It also benefits from the performance of distributed low-cost, high-capacity storage servers that can provide efficient caching of PACS data, which was found to be 10 to 20 times faster than accessing the same data from the central PACS archive. It is particularly suitable for large hospitals and academic environments where clinical conferences, interdisciplinary discussions and successive sessions of image processing are often part of complex workflow or of patient management and decision making.
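Bonjour is Apple's multicast-DNS-based service discovery; as a rough analogue of how a peer image store could advertise itself for discovery (not OsiriX's actual implementation), here is a sketch using the third-party Python `zeroconf` package. The service type, address, port, and properties are illustrative assumptions.

```python
# Sketch of Bonjour-style (mDNS) advertisement of a hypothetical local image store.
import socket
import time
from zeroconf import ServiceInfo, Zeroconf

info = ServiceInfo(
    "_imgstore._tcp.local.",                      # invented service type
    "workstation-7._imgstore._tcp.local.",        # this peer's service name
    addresses=[socket.inet_aton("192.168.1.20")],
    port=8042,
    properties={"modality": "CT", "studies": "1520"},
)

zc = Zeroconf()
zc.register_service(info)        # peers browsing _imgstore._tcp can now find us
try:
    time.sleep(60)               # keep the advertisement alive for a minute
finally:
    zc.unregister_service(info)
    zc.close()
```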
Educational JavaBeans: a Requirements Driven Architecture.
ERIC Educational Resources Information Center
Hall, Jon; Rapanotti, Lucia
This paper investigates, through a case study, the development of a software architecture that is compatible with a system's high-level requirements. The case study is an example of an extended customer/supplier relationship (post-point of sale support) involved in e-universities and is representative of a class of enterprise without current…
Web Service Architecture Framework for Embedded Devices
ERIC Educational Resources Information Center
Yanzick, Paul David
2009-01-01
The use of Service Oriented Architectures, namely web services, has become a widely adopted method for transfer of data between systems across the Internet as well as the Enterprise. Adopting a similar approach to embedded devices is also starting to emerge as personal devices and sensor networks are becoming more common in the industry. This…
ERIC Educational Resources Information Center
Tadesse, Yohannes
2012-01-01
The importance of information security has led many organizations to invest in and utilize effective information security controls within the information systems (IS) architecture. An organization's strategic decisions to secure enterprise-wide services are often associated with the overall competitive advantages that are attained through the process of…
77 FR 74226 - Excepted Service
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-13
..., network and systems engineering, enterprise architecture, intelligence analysis, investigation... Affairs. Bureau of Economic Staff Assistant.... DS120122 10/11/2012 and Business Affairs. Bureau of...
NASA Technical Reports Server (NTRS)
Shearrow, Charles A.
1999-01-01
One of the identified goals of EM3 is to implement virtual manufacturing by the end of the year 2000. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the infrastructure must be completed. This will consist of the containment of the existing EM-NET problems and the development of machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. Common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include materials thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, a virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position to become a clearing house for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.
The ESID Online Database network.
Guzman, D; Veit, D; Knerr, V; Kindle, G; Gathmann, B; Eades-Perner, A M; Grimbacher, B
2007-03-01
Primary immunodeficiencies (PIDs) belong to the group of rare diseases. The European Society for Immunodeficiencies (ESID) is establishing an innovative European patient and research database network for continuous long-term documentation of patients, in order to improve the diagnosis, classification, prognosis and therapy of PIDs. The ESID Online Database is a web-based system aimed at data storage, data entry, reporting and the import of pre-existing data sources in an enterprise business-to-business integration (B2B). The online database is based on the Java 2 Enterprise Edition (J2EE) platform with high-standard security features, which comply with data protection laws and the demands of a modern research platform. The ESID Online Database is accessible via the official website (http://www.esid.org/). Supplementary data are available at Bioinformatics online.
Business intelligence modeling in launch operations
NASA Astrophysics Data System (ADS)
Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.
2005-05-01
The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns of set of processes rather than organizational units leading to end-to-end automation is becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems. This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long term sustainability. A planning and analysis test bed is needed for evaluation of enterprise level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integrating operations, process models, systems and environment models, and cost models as a comprehensive disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root cause from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete event process and systems simulations, and large-scale simulation integration. The enterprise architecture is required for coherent integration of systems models. It will also require a plan for evolution over the life of the program. The proposed technology will produce long-term benefits in support of the NASA objectives for simulation based acquisition, will improve the ability to assess architectural options verses safety/risk for future exploration systems, and will facilitate incorporation of operability as a systems design consideration, reducing overall life cycle cost for future systems.
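The abstract above mentions integrating discrete event process simulations into the enterprise analysis environment. The following is a minimal discrete-event sketch using the SimPy library, only in that spirit: the activities, durations, and single shared launch pad are illustrative assumptions, not a validated launch-operations model.

```python
# Minimal discrete-event sketch with SimPy; all parameters are invented.
import simpy

def vehicle_flow(env, name, pad):
    print(f"{env.now:5.1f}  {name}: start processing")
    yield env.timeout(20)                 # assembly and checkout
    with pad.request() as req:            # wait for the shared launch pad
        yield req
        print(f"{env.now:5.1f}  {name}: on pad")
        yield env.timeout(5)              # pad operations and launch
    print(f"{env.now:5.1f}  {name}: launched")

env = simpy.Environment()
pad = simpy.Resource(env, capacity=1)     # one pad creates a queue for vehicles
for i in range(3):
    env.process(vehicle_flow(env, f"vehicle-{i}", pad))
env.run()
```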
Business Intelligence Modeling in Launch Operations
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.
2005-01-01
This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long term sustainability. A planning and analysis test bed is needed for evaluation of enterprise level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integrating operations, process models, systems and environment models, and cost models as a comprehensive disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root cause from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete event process and systems simulations, and large-scale simulation integration. The enterprise architecture is required for coherent integration of systems models. It will also require a plan for evolution over the life of the program. The proposed technology will produce long-term benefits in support of the NASA objectives for simulation based acquisition, will improve the ability to assess architectural options verses safety/risk for future exploration systems, and will facilitate incorporation of operability as a systems design consideration, reducing overall life cycle cost for future systems. The future of business intelligence of space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns of set of processes rather than organizational units leading to end-to-end automation is becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems.
A Molecular Framework for Understanding DCIS
2016-10-01
well. Pathologic and Clinical Annotation Database A clinical annotation database titled the Breast Oncology Database has been established to...complement the procured SPORE sample characteristics and annotated pathology data. This Breast Oncology Database is an offsite clinical annotation...database adheres to CSMC Enterprise Information Services (EIS) research database security standards. The Breast Oncology Database consists of: 9 Baseline
A Framework for Enterprise Operating Systems Based on Zachman Framework
NASA Astrophysics Data System (ADS)
Ostadzadeh, S. Shervin; Rahmani, Amir Masoud
Nowadays, the Operating System (OS) isn't only the software that runs your computer. In the typical information-driven organization, the operating system is part of a much larger platform for applications and data that extends across the LAN, WAN and Internet. An OS cannot be an island unto itself; it must work with the rest of the enterprise. Enterprise-wide applications require an Enterprise Operating System (EOS). Enterprise operating systems have brought about an inevitable tendency for organizations to organize their information activities in a comprehensive way. In this respect, Enterprise Architecture (EA) has proven to be the leading option for development and maintenance of enterprise operating systems. EA clearly provides a thorough outline of the whole information system comprising an enterprise. To establish such an outline, a logical framework needs to be laid upon the entire information system. The Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing descriptive representations that have prominent roles in enterprise-wide system development. In this paper, we propose a framework based on ZF for enterprise operating systems. The presented framework helps developers to design and justify completely integrated business, IT systems, and operating systems, which results in an improved project success rate.
Using enterprise architecture artefacts in an organisation
NASA Astrophysics Data System (ADS)
Niemi, Eetu; Pekkola, Samuli
2017-03-01
As a tool for management and planning, Enterprise Architecture (EA) can potentially align organisations' business processes, information, information systems and technology towards a common goal, and supply the information required within this journey. However, an explicit view on why, how, when and by whom EA artefacts are used in order to realise this potential has not been defined. Utilising the features of information systems use studies and data from a case study with 14 EA stakeholder interviews, we identify and describe 15 EA artefact use situations that are then reflected in the related literature. Their analysis enriches understanding of what EA artefacts are, how and why they are used, and when they are used, and results in a theoretical framework for understanding their use in general.
Design and Acquisition of Software for Defense Systems
2018-02-14
enterprise business systems and related information technology (IT) services, the role software plays in enabling and enhancing weapons systems often...understanding to make an informed choice of final architecture. The Task Force found commercial practice starts with several competing architectures and
Research on high availability architecture of SQL and NoSQL
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao
2017-03-01
With the advent of the era of big data, the amount and importance of data have increased dramatically. SQL databases continue to improve in performance and scalability, but more and more companies tend to adopt NoSQL databases, because NoSQL databases offer a simpler data model and greater extensibility than SQL databases. Almost all database designers, whether of SQL or NoSQL databases, aim to improve performance and ensure availability through a reasonable architecture that reduces the effects of software and hardware failures, so that they can provide a better experience for their customers. In this paper, we mainly discuss the architectures of MySQL, MongoDB, and Redis, which are highly available and have been deployed in practical application environments, and design a hybrid architecture.
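As one concrete example of the high-availability patterns discussed above, the sketch below uses the redis-py client with Sentinel-managed failover: writes go to the current master and reads can be served by a replica. The sentinel hosts, ports, and the "mymaster" service name are deployment-specific assumptions, not values from the paper.

```python
# Redis Sentinel access pattern with redis-py; connection details are assumed.
from redis.sentinel import Sentinel

sentinel = Sentinel([("sentinel-1", 26379), ("sentinel-2", 26379)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)   # writes -> master
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # reads -> a replica

master.set("order:42:status", "shipped")
print(replica.get("order:42:status"))
# If the master fails, Sentinel promotes a replica and the same client calls
# transparently reconnect to the newly elected master.
```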
Data accuracy assessment using enterprise architecture
NASA Astrophysics Data System (ADS)
Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias
2011-02-01
Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.
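To make the link between process errors and data accuracy tangible, here is a toy estimate only, not the ArchiMate/Probabilistic Relational Model formalism of the article: it combines per-process error rates into an accuracy estimate for one data object, assuming independence. The processes and rates are invented.

```python
# Toy propagation of process error rates into a data accuracy estimate.
processes_writing_customer_record = {
    "order_entry": 0.02,      # assumed probability a run introduces an error
    "address_update": 0.01,
    "billing_sync": 0.005,
}

def estimated_accuracy(error_rates):
    """Probability that no contributing process corrupted the record (independence assumed)."""
    accuracy = 1.0
    for p_error in error_rates.values():
        accuracy *= (1.0 - p_error)
    return accuracy

print(f"Estimated data accuracy: {estimated_accuracy(processes_writing_customer_record):.3f}")
```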
Scarselli, A; Leva, A; Campo, G; Marconi, M; Nesti, M; Erba, P
2005-01-01
The Italian Institute for Occupational Prevention and Safety (ISPESL) compiled a register of enterprises operating in the industry, services and agriculture sectors to provide information on their location, economic activity and occupational data. This database was built by merging administrative files from the National Institute of Social Security (INPS) and the Computer Science Society of the Italian Chambers of Commerce (InfoCamere). Enterprises were classified by economic sector, in accordance with the ISTAT (National Statistics Institute) "Ateco91" classification, and by the accuracy level of the record linkage. In detail, three different subsystems were set up: (A) enterprises satisfying the linkage; (B) enterprises in the InfoCamere file not linked with the INPS file; (C) enterprises in the INPS file not linked with the InfoCamere file. In total, 6,026,676 enterprises were recorded, of which 1,188,784 are in group A, 4,543,091 in group B and 294,801 in group C. Establishing a database of information on industries may be useful for improving preventive programs and for planning health care surveillance systems.
Power Analysis of an Enterprise Wireless Communication Architecture
2017-09-01
easily plug a satellite-based communication module into the enterprise processor when needed. Once plugged-in, it automatically runs the corresponding...reduce the SWaP by using a singular processing/computing module to run user applications and to implement waveform algorithms. This approach would...GPP) technology improved enough to allow a wide variety of waveforms to run in the GPP; thus giving rise to the SDR (Brannon 2004). Today’s
NASA Astrophysics Data System (ADS)
Arnaoudova, Kristina; Stanchev, Peter
2015-11-01
Business processes are the key asset of every organization. The design of business process models is a foremost concern and target among an organization's functions. Business processes and their proper management depend heavily on the performance of software applications and technology solutions. This paper attempts to define a new conceptual model of an IT service provider; it can be viewed as an IT-focused enterprise model, part of the Enterprise Architecture (EA) school.
Wong, Stephen T C; Tjandra, Donny; Wang, Huili; Shen, Weimin
2003-09-01
Few information systems today offer a flexible means to define and manage the automated part of radiology processes, which provide clinical imaging services for the entire healthcare organization. Even fewer of them provide a coherent architecture that can easily cope with heterogeneity and inevitable local adaptation of applications and can integrate clinical and administrative information to aid better clinical, operational, and business decisions. We describe an innovative enterprise architecture of image information management systems to fill these needs. Such a system is based on the interplay of production workflow management, distributed object computing, Java and Web techniques, and in-depth domain knowledge of radiology operations. Our design adopts the "4+1" architectural view approach. In this new architecture, PACS and RIS become one, while user interaction can be automated by customized workflow processes. Clinical service applications are implemented as active components. They can reasonably be substituted by locally adapted applications and can be replicated for fault tolerance and load balancing. Furthermore, the workflow-enabled digital radiology system provides powerful query and statistical functions for managing resources and improving productivity. This work can potentially lead to a new direction in image information management. We illustrate the innovative design with examples taken from an implemented system.
Saranummi, Niilo
2005-01-01
The PICNIC architecture aims at supporting inter-enterprise integration and facilitating collaboration between healthcare organisations. The concept of a Regional Health Economy (RHE) is introduced to illustrate the varying nature of inter-enterprise collaboration between healthcare organisations providing health services to citizens and patients in a regional setting. The PICNIC architecture comprises a number of PICNIC IT Services and the interfaces between them, and presents a way to assemble these into a functioning Regional Health Care Network that meets the needs and concerns of its stakeholders. The PICNIC architecture is presented through a number of views relevant to different stakeholder groups. The stakeholders of the first view are national and regional health authorities and policy makers. The view describes how the architecture enables the implementation of national and regional health policies, strategies and organisational structures. The stakeholders of the second view, the service viewpoint, are the care providers, health professionals, patients and citizens. The view describes how the architecture supports and enables regional care delivery and process management, including continuity of care (shared care) and citizen-centred health services. The stakeholders of the third view, the engineering view, are those who design, build and implement the RHCN. The view comprises four sub-views: software engineering, IT services engineering, security and data. The proposed architecture is grounded in the mainstream of how distributed computing environments are evolving. The architecture is realised using the web services approach. A number of well-established technology platforms and generic standards exist that can be used to implement the software components. The software components specified in PICNIC are implemented as Open Source.
Architectural Framework for Addressing Legacy Waste from the Cold War - 13611
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, Gregory A.; Glazner, Christopher G.; Steckley, Sam
We present an architectural framework for the use of a hybrid simulation model of enterprise-wide operations used to develop system-level insight into the U.S. Department of Energy's (DOE) environmental cleanup of legacy nuclear waste at the Savannah River Site. We use this framework for quickly exploring policy and architectural options, analyzing plans, addressing management challenges and developing mitigation strategies for the DOE Office of Environmental Management (EM). The socio-technical complexity of EM's mission compels the use of a qualitative approach to complement a more quantitative discrete event modeling effort. We use this model-based analysis to pinpoint pressure and leverage points and to develop a shared conceptual understanding of the problem space and a platform for communication among stakeholders across the enterprise in a timely manner. This approach affords the opportunity to discuss problems using a unified conceptual perspective and is also general enough to apply to a broad range of capital investment/production operations problems. (authors)
The architecture of a virtual grid GIS server
NASA Astrophysics Data System (ADS)
Wu, Pengfei; Fang, Yu; Chen, Bin; Wu, Xi; Tian, Xiaoting
2008-10-01
Grid computing technology provides a service-oriented architecture for distributed applications. The virtual grid GIS server is a distributed and interoperable enterprise GIS application architecture running in the grid environment, which integrates heterogeneous GIS platforms. All sorts of legacy GIS platforms join the grid as members of a GIS virtual organization. Based on a microkernel, we design the ESB and portal GIS service layers, which compose the Microkernel GIS. Through web portals, portal GIS services and the mediation of the service bus, following the principle of separation of concerns (SoC), we separate business logic from implementation logic. Microkernel GIS greatly reduces the coupling between applications and GIS platforms. The enterprise applications become independent of particular GIS platforms, allowing application developers to concentrate on the business logic. Via configuration and orchestration of a set of fine-grained services, the system creates a GIS Business, which acts as a whole WebGIS request when activated. In this way, the system satisfies a business workflow directly and simply, with little or no new code.
New approaches to digital transformation of petrochemical production
NASA Astrophysics Data System (ADS)
Andieva, E. Y.; Kapelyuhovskaya, A. A.
2017-08-01
The newest concepts of the reference architecture of digital industrial transformation are considered, the problems of their application for the enterprises having in their life cycle oil products processing and marketing are revealed. The concept of the reference architecture, providing a systematic representation of the fundamental changes in the approaches to production management based on the automation of production process control is proposed.
ERIC Educational Resources Information Center
Laidlaw, Gregory
2013-01-01
The purpose of this study is to evaluate the use of Lean/Agile principles, using action research to develop and deploy new technology for Small and Medium sized enterprises. The research case was conducted at the Lapeer County Sheriff's Department and involves the initial deployment of a Service Oriented Architecture to alleviate the data…
NASA Astrophysics Data System (ADS)
Zhang, Wenyu; Zhang, Shuai; Cai, Ming; Jian, Wu
2015-04-01
With the development of the virtual enterprise (VE) paradigm, the usage of service-oriented architecture (SOA) is increasingly being considered for facilitating the integration and utilisation of distributed manufacturing resources. However, due to the heterogeneous nature among VEs, the dynamic nature of a VE and the autonomous nature of each VE member, the lack of both a sophisticated coordination mechanism in the popular centralised infrastructure and semantic expressivity in the existing SOA standards makes the current centralised, syntactic service discovery method undesirable. This motivates the proposed agent-based peer-to-peer (P2P) architecture for semantic discovery of manufacturing services across VEs. Multi-agent technology provides autonomous and flexible problem-solving capabilities in dynamic and adaptive VE environments. The peer-to-peer overlay provides highly scalable coupling across decentralised VEs, each of which acts as a peer composed of multiple agents dealing with manufacturing services. The proposed architecture utilises a novel, efficient, two-stage search strategy - semantic peer discovery and semantic service discovery - to handle the complex searches of manufacturing services across VEs through fast peer filtering. The operation and experimental evaluation of the prototype system are presented to validate the implementation of the proposed approach.
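As a rough illustration of the two-stage strategy described above, the sketch below first filters peers (virtual enterprises) by their advertised capabilities and only then matches individual services inside the surviving peers. A toy keyword-overlap score stands in for the paper's agent-based semantic matching; all peer and service names are invented.

```python
# Toy illustration of the two-stage search: first filter peers (VEs) by
# how well their advertised capabilities overlap the request, then match
# individual services only inside the surviving peers.

def overlap(request: set[str], offered: set[str]) -> float:
    return len(request & offered) / len(request) if request else 0.0


peers = {
    "VE-alpha": {"capabilities": {"milling", "drilling"},
                 "services": {"3-axis milling": {"milling", "aluminium"},
                              "deep drilling": {"drilling", "steel"}}},
    "VE-beta":  {"capabilities": {"casting"},
                 "services": {"sand casting": {"casting", "iron"}}},
}

request = {"milling", "aluminium"}

# Stage 1: semantic peer discovery (cheap filter over peer summaries).
candidate_peers = [p for p, info in peers.items()
                   if overlap(request, info["capabilities"]) > 0]

# Stage 2: semantic service discovery inside the candidate peers only.
matches = [(p, s) for p in candidate_peers
           for s, terms in peers[p]["services"].items()
           if overlap(request, terms) >= 0.5]

print(matches)   # -> [('VE-alpha', '3-axis milling')]
```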
The CEOS Global Observation Strategy for Disaster Risk Management: An Enterprise Architect's View
NASA Astrophysics Data System (ADS)
Moe, K.; Evans, J. D.; Frye, S.
2013-12-01
The Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS), on behalf of the Global Earth Observation System of Systems (GEOSS), is defining an enterprise architecture (known as GA.4.D) for the use of satellite observations in international disaster management. This architecture defines the scope and structure of the disaster management enterprise (based on disaster types and phases); its processes (expressed via use cases / system functions); and its core values (in particular, free and open data sharing via standard interfaces). The architecture also details how a disaster management enterprise describes, obtains, and handles earth observations and data products for decision-support; and how it draws on distributed computational services for streamlined operational capability. We have begun to apply this architecture to a new CEOS initiative, the Global Observation Strategy for Disaster Risk Management (DRM). CEOS is defining this Strategy based on the outcomes of three pilot projects focused on seismic hazards, volcanoes, and floods. These pilots offer a unique opportunity to characterize and assess the impacts (benefits / costs) of the GA.4.D architecture in practice. In particular, the DRM Floods Pilot is applying satellite-based optical and radar data to flood mitigation, warning, and response, including monitoring and modeling at regional to global scales. It is focused on serving user needs and building local institutional / technical capacity in the Caribbean, Southern Africa, and Southeast Asia. In the context of these CEOS DRM Pilots, we are characterizing where and how the GA.4.D architecture helps participants to:
- Understand the scope and nature of hazard events quickly and accurately
- Assure timely delivery of observations into analysis, modeling, and decision-making
- Streamline user access to products
- Lower barriers to entry for users or suppliers
- Streamline or focus field operations in disaster reduction
- Reduce redundancies and gaps in inter-organizational systems
- Assist in planning / managing / prioritizing information and computing resources
- Adapt computational resources to new technologies or evolving user needs
- Sustain capability for the long term
Insights from this exercise are helping us to abstract best practices applicable to other contexts, disaster types, and disaster phases, whereby local communities can improve their use of satellite data for greater preparedness. This effort is also helping to assess the likely impacts and roles of emerging technologies (such as cloud computing, "Big Data" analysis, location-based services, crowdsourcing, semantic services, small satellites, drones, direct broadcast, or model webs) in future disaster management activities.
Implementation of an Enterprise Information Portal (EIP) in the Loyola University Health System
Price, Ronald N.; Hernandez, Kim
2001-01-01
Loyola University Chicago Stritch School of Medicine and Loyola University Medical Center have long histories in the development of applications to support the institutions' missions of education, research and clinical care. In late 1998, the institutions' application development group undertook an ambitious program to re-architect more than 10 years of legacy application development (30+ core applications) into a unified World Wide Web (WWW) environment. The primary project objectives were to construct an environment that would support the rapid development of n-tier, web-based applications while providing standard methods for user authentication/validation, security/access control and definition of a user's organizational context. The project's efforts resulted in Loyola's Enterprise Information Portal (EIP), which meets the aforementioned objectives. This environment: 1) allows access to other vertical Intranet portals (e.g., electronic medical record, patient satisfaction information and faculty effort); 2) supports end-user desktop customization; and 3) provides a means for a standardized application "look and feel." The portal was constructed utilizing readily available hardware and software. Server hardware consists of multiprocessor (Intel Pentium 500 MHz) Compaq 6500 servers with one gigabyte of random access memory and 75 gigabytes of hard disk storage. Microsoft SQL Server was selected to house the portal's internal or security data structures. Netscape Enterprise Server was selected for the web server component of the environment, and Allaire's ColdFusion was chosen for the access and application tiers. Total costs for the portal environment were less than $40,000. User data storage is accomplished through two Microsoft SQL Servers and an existing SUN Microsystems enterprise server with eight processors and 750 gigabytes of disk storage running the Sybase relational database manager. Total storage capacity for all systems exceeds one terabyte. In the past 12 months, the EIP has supported the development of more than 88 applications and is utilized by more than 2,200 users.
Industrial Cloud: Toward Inter-enterprise Integration
NASA Astrophysics Data System (ADS)
Wlodarczyk, Tomasz Wiktor; Rong, Chunming; Thorsen, Kari Anne Haaland
Industrial cloud is introduced as a new inter-enterprise integration concept in cloud computing. The characteristics of an industrial cloud are given by its definition and architecture and compared with other general cloud concepts. The concept is then demonstrated by a practical use case, based on Integrated Operations (IO) in the Norwegian Continental Shelf (NCS), showing how industrial digital information integration platform gives competitive advantage to the companies involved. Further research and development challenges are also discussed.
NASA Astrophysics Data System (ADS)
Shani, Uri; Kol, Tomer; Shachor, Gal
2004-04-01
Managing medical digital information objects, and in particular medical images, is an enterprise-grade problem. Firstly, there is the sheer amount of digital data generated by the proliferation of digital (and film-free) medical imaging. Secondly, the managing software ought to offer the high availability, recoverability and manageability that are found only in the most business-critical systems. Indeed, such requirements are borrowed from the business enterprise world. Moreover, the solution to the medical information management problem should also employ the same software tools, middleware and architectures. It is safe to say that all first-line medical PACS products strive to provide a solution for all these challenging requirements. The DICOM standard has been a prime enabler of such solutions. DICOM created the interconnectivity which made it possible for a PACS service to manage millions of exams consisting of trillions of images. With the more comprehensive IHE architecture, the enterprise expands into a multi-facility regional conglomerate, which places extreme demands on the data management system. HIPAA legislation adds considerable challenges regarding security, privacy and other legal issues, which aggravate the situation. In this paper, we first present what in our view should be the general requirements for a first-line medical PACS, taken from an enterprise medical imaging storage and management solution perspective. While these requirements can be met by homegrown implementations, we suggest looking at the existing technologies that have emerged in recent years to meet exactly these challenges in the business world. We present an evolutionary process which led to the design and implementation of a medical object management subsystem. This is indeed an enterprise medical imaging solution built upon the respective technological components. The system answers all these challenges simply by not reinventing wheels, but rather reusing the best "wheels" for the job. Relying on such middleware components allowed us to concentrate on added value for this specific problem domain.
An integrated healthcare enterprise information portal and healthcare information system framework.
Hsieh, S L; Lai, Feipei; Cheng, P H; Chen, J L; Lee, H H; Tsai, W N; Weng, Y C; Hsieh, S H; Hsu, K P; Ko, L F; Yang, T H; Chen, C H
2006-01-01
The paper presents an integrated, distributed Healthcare Enterprise Information Portal (HEIP) and Hospital Information Systems (HIS) framework over wireless/wired infrastructure at National Taiwan University Hospital (NTUH). A single sign-on solution for the hospital customer relationship management (CRM) in HEIP has been established. The outcomes of the newly developed Outpatient Information Systems (OIS) in HIS are discussed. The future HEIP blueprints with CRM oriented features: e-Learning, Remote Consultation and Diagnosis (RCD), as well as on-Line Vaccination Services are addressed. Finally, the integrated HEIP and HIS architectures based on the middleware technologies are proposed along with the feasible approaches. The preliminary performance of multi-media, time-based data exchanges over the wireless HEIP side is collected to evaluate the efficiency of the architecture.
Stead, William W.; Miller, Randolph A.; Musen, Mark A.; Hersh, William R.
2000-01-01
The vision of integrating information—from a variety of sources, into the way people work, to improve decisions and process—is one of the cornerstones of biomedical informatics. Thoughts on how this vision might be realized have evolved as improvements in information and communication technologies, together with discoveries in biomedical informatics, have changed the art of the possible. This review identified three distinct generations of "integration" projects. First-generation projects create a database and use it for multiple purposes. Second-generation projects integrate by bringing information from various sources together through enterprise information architecture. Third-generation projects inter-relate disparate but accessible information sources to provide the appearance of integration. The review suggests that the ideas developed in the earlier generations have not been supplanted by ideas from subsequent generations. Instead, the ideas represent a continuum of progress along the three dimensions of workflow, structure, and extraction. PMID:10730596
The research and implementation of PDM systems based on the .NET platform
NASA Astrophysics Data System (ADS)
Gao, Hong-li; Jia, Ying-lian; Yang, Ji-long; Jiang, Wei
2005-12-01
A new kind of PDM system scheme based on the .NET platform is described for solving application problems of current PDM systems applied in an enterprise. The key technologies of this system, such as .NET, data access, information processing and the Web, are discussed. The 3-tier architecture of a PDM system based on the C/S and B/S mixed mode is presented. In this system, all users share the same database server in order to ensure the coherence and safety of client data. ADO.NET leverages the power of XML to provide disconnected access to data, which frees the connection to be used by other clients. Using this approach, the system performance was improved. Moreover, the important function modules of a PDM system, such as project management, product structure management and document management, were developed and realized.
Pape-Haugaard, Louise; Frank, Lars
2011-01-01
A major obstacle in ensuring ubiquitous information is the utilization of heterogeneous systems in eHealth. The objective in this paper is to illustrate how an architecture for distributed eHealth databases can be designed without lacking the characteristic features of traditional sustainable databases. The approach is firstly to explain traditional architecture in central and homogeneous distributed database computing, followed by a possible approach to use an architectural framework to obtain sustainability across disparate systems i.e. heterogeneous databases, concluded with a discussion. It is seen that through a method of using relaxed ACID properties on a service-oriented architecture it is possible to achieve data consistency which is essential when ensuring sustainable interoperability.
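To make the relaxed-ACID idea above concrete, here is a minimal sketch of consistency via compensation across two heterogeneous systems: each local update has a compensating action that is executed if a later step fails, restoring atomicity without a distributed transaction. Both "databases" and the record fields are invented placeholders, not the paper's design.

```python
# A minimal sketch of "relaxed ACID" consistency via compensation:
# instead of one distributed transaction across two heterogeneous
# eHealth systems, each local update gets a compensating action that
# is run if a later step fails.
hospital_db = {}   # stands in for one local system
regional_db = {}   # stands in for a second, heterogeneous system


def update_allergy(patient_id: str, allergy: str) -> bool:
    previous = hospital_db.get(patient_id)
    hospital_db[patient_id] = allergy            # step 1: local update
    try:
        if allergy == "simulate-failure":        # step 2: remote update
            raise ConnectionError("regional service unavailable")
        regional_db[patient_id] = allergy
        return True
    except ConnectionError:
        # Compensating action: roll the local system back by hand,
        # restoring consistency without a distributed transaction.
        if previous is None:
            hospital_db.pop(patient_id, None)
        else:
            hospital_db[patient_id] = previous
        return False


update_allergy("p-001", "penicillin")        # both systems updated
update_allergy("p-001", "simulate-failure")  # compensated, stays consistent
print(hospital_db, regional_db)              # {'p-001': 'penicillin'} twice
```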
Functional and Database Architecture Design.
1983-09-26
Report A001: Functional and Database Architecture Design. Alpha/Omega Group Inc., Harvard MA, 26 September 1983; contract N00014-83-C-0525. Submitted to: Office of Naval Research, Department of the Navy, 800 N. Quincy Street.
10 Steps to Building an Architecture for Space Surveillance Projects
NASA Astrophysics Data System (ADS)
Gyorko, E.; Barnhart, E.; Gans, H.
Space surveillance is an increasingly complex task, requiring the coordination of a multitude of organizations and systems, while dealing with competing capabilities, proprietary processes, differing standards, and compliance issues. In order to fully understand space surveillance operations, analysts and engineers need to analyze and break down their operations and systems using what are essentially enterprise architecture processes and techniques. These techniques can be daunting to the first-time architect. This paper provides a summary of simplified steps to analyze a space surveillance system at the enterprise level in order to determine capabilities, services, and systems. These steps form the core of an initial Model-Based Architecting process. For new systems, a well-defined, or well-architected, space surveillance enterprise leads to an easier transition from model-based architecture to model-based design and provides a greater likelihood that requirements are fulfilled the first time. Both new and existing systems benefit from being easier to manage, and can be sustained more easily using portfolio management techniques, based around capabilities documented in the model repository. The resulting enterprise model helps an architect avoid 1) costly, faulty portfolio decisions; 2) wasteful technology refresh efforts; 3) upgrade and transition nightmares; and 4) non-compliance with DoDAF directives. The Model-Based Architecting steps are based on a process that Harris Corporation has developed from practical experience architecting space surveillance systems and ground systems. Examples are drawn from current work on documenting space situational awareness enterprises. The process is centered on DoDAF 2 and its corresponding meta-model so that terminology is standardized and communicable across any disciplines that know DoDAF architecting, including acquisition, engineering and sustainment disciplines. Each step provides a guideline for the type of data to collect, and also the appropriate views to generate. The steps include 1) determining the context of the enterprise, including active elements and high-level capabilities or goals; 2) determining the desired effects of the capabilities and mapping capabilities against the project plan; 3) determining operational performers and their inter-relationships; 4) building information and data dictionaries; 5) defining resources associated with capabilities; 6) determining the operational behavior necessary to achieve each capability; 7) analyzing existing or planned implementations to determine systems, services and software; 8) cross-referencing system behavior to operational behavior; 9) documenting system threads and functional implementations; and 10) creating any required textual documentation from the model.
75 FR 41180 - Notice of Order: Revisions to Enterprise Public Use Database
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-15
... Database AGENCY: Federal Housing Finance Agency. ACTION: Notice of order. SUMMARY: Section 1323(a)(1) of.... This responsibility to maintain a public use database (PUDB) for such mortgage data was transferred to... purpose of loan data field in these two databases. 4. Single-family Data Field 27 and Multifamily Data...
2009-03-01
Overview. In this chapter, a literature review is conducted relevant to the research topics found to contribute to the successful creation and... priorities, and any common IT standards and tools (Armour, Kaisler, and Liu 1999b). The vision defines the business strategy of the organization... and depicts how the enterprise will use IT in support of that strategy (Armour, Kaisler, and Liu 1999a). One method of selecting this vision...
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... information. Address information for the CVE is also contained on the Web portal. Correspondence may be dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... information. Address information for the CVE is also contained on the Web portal. Correspondence may be dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... information. Address information for the CVE is also contained on the Web portal. Correspondence may be dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... information. Address information for the CVE is also contained on the Web portal. Correspondence may be dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) VETERANS SMALL BUSINESS REGULATIONS Application Guidelines § 74.10 Where must an application be... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
NASA Astrophysics Data System (ADS)
Zhang, Min; Pavlicek, William; Panda, Anshuman; Langer, Steve G.; Morin, Richard; Fetterly, Kenneth A.; Paden, Robert; Hanson, James; Wu, Lin-Wei; Wu, Teresa
2015-03-01
DICOM Index Tracker (DIT) is an integrated platform that harvests the rich information available from Digital Imaging and Communications in Medicine (DICOM) to improve quality assurance in radiology practices. It is designed to capture and maintain longitudinal patient-specific exam indices of interest for all diagnostic and procedural uses of imaging modalities. Thus, it effectively serves as a quality assurance and patient safety monitoring tool. The foundation of DIT is an intelligent database system that stores the information accepted and parsed via a DICOM receiver and parser. The database system enables basic dosimetry analysis. The success of the DIT implementation at Mayo Clinic Arizona calls for deployment at the enterprise level, which requires significant improvements. First, for a geographically distributed multi-site implementation, one bottleneck is the communication (network) delay; another is the scalability of the DICOM parser to handle the large volume of exams from different sites. To address this issue, the DICOM receiver and parser are separated and decentralized by site. Second, to facilitate enterprise-wide Quality Assurance (QA), a notable challenge is the great diversity of manufacturers, modalities and software versions; as a solution, DIT Enterprise provides standardization tools for device naming, protocol naming, and physician naming across sites. Third, advanced analytic engines are implemented online to support proactive QA in DIT Enterprise.
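The receive-parse-store step described above could look roughly like the following sketch, assuming the pydicom library and a local SQLite table. The handful of attributes extracted and the schema are illustrative only; a production dose-tracking system would also parse Radiation Dose Structured Reports, private tags, and so on.

```python
# Minimal sketch of parsing a received DICOM header and indexing a few
# exam attributes into a local SQLite table (schema is illustrative).
import sqlite3
import pydicom   # pip install pydicom

db = sqlite3.connect("dit_demo.sqlite")
db.execute("""CREATE TABLE IF NOT EXISTS exam_index (
                 patient_id TEXT, study_uid TEXT, modality TEXT,
                 station TEXT, study_date TEXT)""")


def index_dicom_file(path: str) -> None:
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only
    db.execute(
        "INSERT INTO exam_index VALUES (?, ?, ?, ?, ?)",
        (ds.get("PatientID", ""),
         ds.get("StudyInstanceUID", ""),
         ds.get("Modality", ""),
         ds.get("StationName", ""),
         ds.get("StudyDate", "")))
    db.commit()
```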
AZALIA: an A to Z Assessment of the Likelihood of Insider Attack
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Matt; Gates, Carrie; Frincke, Deborah A.
2009-05-12
Recent surveys indicate that the "financial impact and operating losses due to insider intrusions are increasing". Within the government, insider abuse by those with access to sensitive or classified material can be particularly damaging. Further, the detection of such abuse is becoming more difficult due to other influences, such as out-sourcing, social networking and mobile computing. This paper focuses on a key aspect of our enterprise-wide architecture: a risk assessment based on predictions of the likelihood that a specific user poses an increased risk of behaving in a manner that is inconsistent with the organization's stated goals and interests. We present a high-level architectural description for an enterprise-level insider threat product and we describe psychosocial factors and associated data needs to recognize possible insider threats.
Zhang, Jianguo; Zhang, Kai; Yang, Yuanyuan; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Bak, Peter
2015-01-01
Abstract. IHE XDS-I profile proposes an architecture model for cross-enterprise medical image sharing, but there are only a few clinical implementations reported. Here, we investigate three pilot studies based on the IHE XDS-I profile to see whether we can use this architecture as a foundation for image sharing solutions in a variety of health-care settings. The first pilot study was image sharing for cross-enterprise health care with federated integration, which was implemented in Huadong Hospital and Shanghai Sixth People’s Hospital within the Shanghai Shen-Kang Hospital Management Center; the second pilot study was XDS-I–based patient-controlled image sharing solution, which was implemented by the Radiological Society of North America (RSNA) team in the USA; and the third pilot study was collaborative imaging diagnosis with electronic health-care record integration in regional health care, which was implemented in two districts in Shanghai. In order to support these pilot studies, we designed and developed new image access methods, components, and data models such as RAD-69/WADO hybrid image retrieval, RSNA clearinghouse, and extension of metadata definitions in both the submission set and the cross-enterprise document sharing (XDS) registry. We identified several key issues that impact the implementation of XDS-I in practical applications, and conclude that the IHE XDS-I profile is a theoretically good architecture and a useful foundation for medical image sharing solutions across multiple regional health-care providers. PMID:26835497
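As an illustration of the WADO half of the "RAD-69/WADO hybrid image retrieval" mentioned above, the sketch below performs a plain WADO-URI GET (DICOM PS3.18) for a single object using the requests library. The endpoint and UIDs are placeholders; in a real XDS-I deployment they would be resolved from the registry and repository, and the SOAP-based RAD-69 transaction is not shown.

```python
# Hedged sketch of a WADO-URI retrieval of one DICOM object.
# Endpoint URL and all UIDs below are placeholders.
import requests

WADO_ENDPOINT = "https://imaging.example.org/wado"   # placeholder

params = {
    "requestType": "WADO",
    "studyUID":  "1.2.840.113619.2.55.3.1234",       # placeholder UIDs
    "seriesUID": "1.2.840.113619.2.55.3.1234.1",
    "objectUID": "1.2.840.113619.2.55.3.1234.1.1",
    "contentType": "application/dicom",
}

resp = requests.get(WADO_ENDPOINT, params=params, timeout=30)
resp.raise_for_status()

with open("retrieved_object.dcm", "wb") as f:
    f.write(resp.content)   # save the retrieved DICOM object
```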
Signori, Marcos R; Garcia, Renato
2010-01-01
This paper presents a model that helps Clinical Engineering deal with risk management in the healthcare technology process. The healthcare technology setting is complex and supported by three basic entities: infrastructure (IS), healthcare technology (HT), and human resources (HR). An enterprise architecture framework - MODAF (Ministry of Defence Architecture Framework) - was used to model this process for risk management. Thus, a new model was created to contribute to risk management in the HT process from the Clinical Engineering viewpoint. This architecture model can support and improve Clinical Engineering decision making for risk management in the healthcare technology process.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-28
... Database Incorporating High-Cost Single-Family Securitized Loan Data Fields and Technical Data Field... single-family matrix in FHFA's Public Use Database (PUDB) to include data fields for the high-cost single... of loan attributes in FHFA's databases that could be used, singularly or in some combination, to...
Guest editorial. Integrated healthcare information systems.
Li, Ling; Ge, Ri-Li; Zhou, Shang-Ming; Valerdi, Ricardo
2012-07-01
The use of integrated information systems for healthcare began more than a decade ago. In recent years, rapid advances in information integration methods have spurred tremendous growth in the use of integrated information systems in healthcare delivery. Various techniques have been used for building such integrated systems, including service-oriented architecture (SOA), EAI, workflow management, grid computing, and others. Many applications require a combination of these techniques, which gives rise to the emergence of enterprise systems in healthcare. The development of techniques originating from different disciplines has the potential to significantly improve the performance of enterprise systems in healthcare. This editorial briefly introduces enterprise systems from the perspective of healthcare informatics.
Towards the Architecture of an Instructional Multimedia Database.
ERIC Educational Resources Information Center
Verhagen, Plin W.; Bestebreurtje, R.
1994-01-01
Discussion of multimedia databases in education focuses on the development of an adaptable database in The Netherlands that uses optical storage media to hold the audiovisual components. Highlights include types of applications; types of users; accessibility; adaptation; an object-oriented approach; levels of the database architecture; and…
Enterprise Information System Integration Technology
NASA Astrophysics Data System (ADS)
Tanaka, Tetsuo; Yumoto, Masaki; Itsuki, Rei
In the current rapidly changing business environment, companies need to be efficient and agile to survive and thrive. That is why flexible systems integration is an urgent and crucial concern for any enterprise. Meanwhile, systems integration technology is becoming more complicated, and the boundaries between middleware types have been blurring for decades. We sort system integration into four different types: "Delayed Federation", "Real-time Federation", "Delayed Integration", and "Real-time Integration". We also outline the appropriate technology and architecture for each type.
1995-02-01
Descriptive Summary; FIP Resources and Indefinite Delivery-Quantity Contracts; Central Design Activity Summary. ...interface with financial systems should be integrated into the standard architecture of the Military Departments to ensure maximum cost... provided to support and maintain the DFAS enterprise local area network initiative to establish a standardized architecture for office automation and...
Lack of integration governance in ERP development: a case study on causes and effects
NASA Astrophysics Data System (ADS)
Kähkönen, Tommi; Smolander, Kari; Maglyas, Andrey
2017-09-01
The development of an enterprise resource planning (ERP) system starts actually after it has been implemented and taken into use. It is necessary to integrate ERP with other business information systems inside and outside the company. With the grounded theory, we aim to understand how integration challenges emerged in a large manufacturing enterprise when the long-term ERP system reached the beginning of its retirement. Structural changes, an information technology governance model, lack of organisational vision, having no architectural descriptions, lack of collaboration, cost cutting, and organisational culture made integration governance troublesome. As a consequence, the enterprise suffered from several undesired effects, such as complex integration scenarios between internal systems, and failing to provide its customers the needed information. The reduction of costs strengthened the organisational silos further and led to unrealised business process improvements. We provide practitioners with four recommendations. First, the organisational goals for integration should be exposed. Second, when evaluating the needs and impacts of integration, a documented architectural description about the system landscape needs to be utilised. Third, the role of IT should be emphasised in development decision-making, and fourth, collaboration is the core ingredient for successful integration governance.
Secure Cooperative Data Access in Multi-Cloud Environment
ERIC Educational Resources Information Center
Le, Meixing
2013-01-01
In this dissertation, we discuss the problem of enabling cooperative query execution in a multi-cloud environment where the data is owned and managed by multiple enterprises. Each enterprise maintains its own relational database using a private cloud. In order to implement desired business services, parties need to share selected portion of their…
Patterns-Based IS Change Management in SMEs
NASA Astrophysics Data System (ADS)
Makna, Janis; Kirikova, Marite
The majority of information systems change management guidelines and standards are either too abstract or too bureaucratic to be easily applicable in small enterprises. This chapter proposes the approach, the method, and the prototype that are designed especially for information systems change management in small and medium enterprises. The approach is based on proven patterns of changes in the set of information systems elements. The set of elements was obtained by theoretical analysis of information systems and business process definitions and enterprise architectures. The patterns were evolved from a number of information systems theories and tested in 48 information systems change management projects. The prototype presents and helps to handle three basic change patterns, which help to anticipate the overall scope of changes related to particular elementary changes in an enterprise information system. The use of prototype requires just basic knowledge in organizational business process and information management.
Towards a Global Names Architecture: The future of indexing scientific names.
Pyle, Richard L
2016-01-01
For more than 250 years, the taxonomic enterprise has remained almost unchanged. Certainly, the tools of the trade have improved: months-long journeys aboard sailing ships have been reduced to hours aboard jet airplanes; advanced technology allows humans to access environments that were once utterly inaccessible; GPS has replaced crude maps; digital hi-resolution imagery provides far more accurate renderings of organisms that even the best commissioned artists of a century ago; and primitive candle-lit microscopes have been replaced by an array of technologies ranging from scanning electron microscopy to DNA sequencing. But the basic paradigm remains the same. Perhaps the most revolutionary change of all - which we are still in the midst of, and which has not yet been fully realized - is the means by which taxonomists manage and communicate the information of their trade. The rapid evolution in recent decades of computer database management software, and of information dissemination via the Internet, have both dramatically improved the potential for streamlining the entire taxonomic process. Unfortunately, the potential still largely exceeds the reality. The vast majority of taxonomic information is either not yet digitized, or digitized in a form that does not allow direct and easy access. Moreover, the information that is easily accessed in digital form is not yet seamlessly interconnected. In an effort to bring reality closer to potential, a loose affiliation of major taxonomic resources, including GBIF, the Encyclopedia of Life, NBII, Catalog of Life, ITIS, IPNI, ICZN, Index Fungorum, and many others have been crafting a "Global Names Architecture" (GNA). The intention of the GNA is not to replace any of the existing taxonomic data initiatives, but rather to serve as a dynamic index to interconnect them in a way that streamlines the entire taxonomic enterprise: from gathering specimens in the field, to publication of new taxa and related data.
An Autonomic Framework for Integrating Security and Quality of Service Support in Databases
ERIC Educational Resources Information Center
Alomari, Firas
2013-01-01
The back-end databases of multi-tiered applications are a major data security concern for enterprises. The abundance of these systems and the emergence of new and different threats require multiple and overlapping security mechanisms. Therefore, providing multiple and diverse database intrusion detection and prevention systems (IDPS) is a critical…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-12
...) Not to exceed 3000 positions that require unique cyber security skills and knowledge to perform cyber..., distributed control systems security, cyber incident response, cyber exercise facilitation and management, cyber vulnerability detection and assessment, network and systems engineering, enterprise architecture...
NASA Technical Reports Server (NTRS)
Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)
1998-01-01
The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery; however, it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work, the Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
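The architectural distinction above can be illustrated with a toy in-process publish-and-subscribe bus: publishers and subscribers never reference each other, so nodes can be added without rewiring the network. This is not NDDS or BridgeVIEW, just a minimal sketch; topic names and callbacks are invented.

```python
# Toy in-process publish-and-subscribe bus, only to illustrate why the
# pattern decouples producers from consumers.
from collections import defaultdict
from typing import Callable

_subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)


def subscribe(topic: str, callback: Callable[[dict], None]) -> None:
    _subscribers[topic].append(callback)


def publish(topic: str, sample: dict) -> None:
    for callback in _subscribers[topic]:
        callback(sample)   # a real middleware would add QoS and deadlines


subscribe("incinerator/temperature",
          lambda s: print("controller sees", s))
subscribe("incinerator/temperature",
          lambda s: print("logger records", s))

publish("incinerator/temperature", {"celsius": 612.4, "t": 0.05})
```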
Wang, Zhihui; Kiryu, Tohru
2006-04-01
Since machine-based exercise still uses local facilities, it is affected by time and place. We designed a web-based system architecture based on the Java 2 Enterprise Edition that can accomplish continuously supported machine-based exercise. In this system, exercise programs and machines are loosely coupled and dynamically integrated on the site of exercise via the Internet. We then extended the conventional health promotion model, which contains three types of players (users, exercise trainers, and manufacturers), by adding a new player: exercise program creators. Moreover, we developed a self-describing strategy to accommodate a variety of exercise programs and provide ease of use to users on the web. We illustrate our novel design with examples taken from our feasibility study on a web-based cycle ergometer exercise system. A biosignal-based workload control approach was introduced to ensure that users performed appropriate exercise alone.
A technological infrastructure to sustain Internetworked Enterprises
NASA Astrophysics Data System (ADS)
La Mattina, Ernesto; Savarino, Vincenzo; Vicari, Claudia; Storelli, Davide; Bianchini, Devis
In the Web 3.0 scenario, where information and services are connected by means of their semantics, organizations can improve their competitive advantage by publishing their business and service descriptions. In this scenario, Semantic Peer to Peer (P2P) can play a key role in defining dynamic and highly reconfigurable infrastructures. Organizations can share knowledge and services, using this infrastructure to move towards value networks, an emerging organizational model characterized by fluid boundaries and complex relationships. This chapter collects and defines the technological requirements and architecture of a modular and multi-Layer Peer to Peer infrastructure for SOA-based applications. This technological infrastructure, based on the combination of Semantic Web and P2P technologies, is intended to sustain Internetworked Enterprise configurations, defining a distributed registry and enabling more expressive queries and efficient routing mechanisms. The following sections focus on the overall architecture, while describing the layers that form it.
NASA Astrophysics Data System (ADS)
van Veenstra, Anne Fleur; Zuurmond, Arre
To enhance the quality of their online service delivery, many government organizations seek to transform their organization beyond merely setting up a front office. This transformation includes elements such as the formation of service delivery chains, the adoption of a management strategy supporting process orientation and the implementation of enterprise architecture. This paper explores whether undertaking this transformation has a positive effect on the quality of online service delivery, using data gathered from seventy local governments. We found that having an externally oriented management strategy in place, adopting enterprise architecture, aligning information systems to business and sharing activities between processes and departments are positively related to the quality of online service delivery. We recommend that further research should be carried out to find out whether dimensions of organizational development too have an effect on online service delivery in the long term.
Applying a Service-Oriented Architecture to Operational Flight Program Development
2007-09-01
using two Java 2 Enterprise Edition (J2EE) Web servers. The weapon models were accessed using a SUN Microsystems Java Web Services Development Pack... and Spring/Hibernate to provide the data access... since a major coding effort was avoided. The majority of the effort was tweaking pre-existing Java source code and editing of eXtensible Markup...
Component architecture in drug discovery informatics.
Smith, Peter M
2002-05-01
This paper reviews the characteristics of a new model of computing that has been spurred on by the Internet, known as Netcentric computing. Developments in this model led to distributed component architectures, which, although not new ideas, are now realizable with modern tools such as Enterprise Java. The application of this approach to scientific computing, particularly in pharmaceutical discovery research, is discussed and highlighted by a particular case involving the management of biological assay data.
Technology architecture guidelines for a health care system.
Jones, D. T.; Duncan, R.; Langberg, M. L.; Shabot, M. M.
2000-01-01
Although the demand for use of information technology within the healthcare industry is intensifying, relatively little has been written about guidelines to optimize IT investments. A technology architecture is a set of guidelines for technology integration within an enterprise. The architecture is a critical tool in the effort to control information technology (IT) operating costs by constraining the number of technologies supported. A well-designed architecture is also an important aid to integrating disparate applications, data stores and networks. The authors led the development of a thorough, carefully designed technology architecture for a large and rapidly growing health care system. The purpose and design criteria are described, as well as the process for gaining consensus and disseminating the architecture. In addition, the processes for using, maintaining, and handling exceptions are described. The technology architecture is extremely valuable to health care organizations both in controlling costs and promoting integration. PMID:11079913
Internet-enabled collaborative agent-based supply chains
NASA Astrophysics Data System (ADS)
Shen, Weiming; Kremer, Rob; Norrie, Douglas H.
2000-12-01
This paper presents some results of our recent research work related to the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially being proposed as a general architecture for Internet based collaborative agent systems (particularly complex industrial collaborative agent systems), the proposed architecture is very suitable for managing the Internet enabled complex supply chain for a large manufacturing enterprise. The general collaborative agent system architecture with the basic communication and cooperation services, domain independent components, prototypes and mechanisms are described. Benefits of implementing Internet enabled supply chains with the proposed infrastructure are discussed. A case study on Internet enabled supply chain management is presented.
Preparing for Human Exploration
NASA Technical Reports Server (NTRS)
Drake, Bret G.; Joosten, B. Kent
1998-01-01
NASA's Human Exploration and Development of Space (HEDS) Enterprise is defining architectures and requirements for human exploration that radically reduce the costs of such missions through the use of advanced technologies, commercial partnerships and innovative systems strategies. In addition, the HEDS Enterprise is collaborating with the Space Science Enterprise to acquire needed early knowledge about Mars and to demonstrate critical technologies via robotic missions. This paper provides an overview of the technological challenges facing NASA as it prepares for human exploration. Emphasis is placed on identifying the key technologies including those which will provide the most return in terms of reducing total mission cost and/or reducing potential risk to the mission crew. Top-level requirements are provided for those critical enabling technology options currently under consideration.
Ryan, Amanda; Eklund, Peter
2008-01-01
Healthcare information is composed of many types of varying and heterogeneous data. Semantic interoperability in healthcare is especially important when all these different types of data need to interact. Presented in this paper is a solution to interoperability in healthcare based on a standards-based middleware software architecture used in enterprise solutions. This architecture has been translated into the healthcare domain using a messaging and modeling standard which upholds the ideals of the Semantic Web (HL7 V3) combined with a well-known standard terminology of clinical terms (SNOMED CT).
Rio: a dynamic self-healing services architecture using Jini networking technology
NASA Astrophysics Data System (ADS)
Clarke, James B.
2002-06-01
Current mainstream distributed Java architectures offer great capabilities embracing conventional enterprise architecture patterns and designs. These traditional systems provide robust transaction-oriented environments that are in large part focused on data and host processors. Typically, these implementations require that an entire application be deployed on every machine that will be used as a compute resource. For this to happen, the application is usually taken down, installed, and started with all systems in sync and aware of each other. Static environments such as these are extremely difficult to set up, deploy, and administer.
iSDS: a self-configurable software-defined storage system for enterprise
NASA Astrophysics Data System (ADS)
Chen, Wen-Shyen Eric; Huang, Chun-Fang; Huang, Ming-Jen
2018-01-01
Storage is one of the most important aspects of IT infrastructure for various enterprises. But enterprises are interested in more than just data storage; they are interested in such things as more reliable data protection, higher performance, and reduced resource consumption. Traditional enterprise-grade storage satisfies these requirements at high cost, because it is usually designed and constructed with customised field-programmable gate arrays to achieve high-end functionality. However, in this ever-changing environment, enterprises request storage with more flexible deployment and at lower cost. Moreover, the rise of new application fields, such as social media, big data, video streaming services, etc., makes operational tasks for administrators more complex. In this article, a new storage system called intelligent software-defined storage (iSDS), based on software-defined storage, is described. More specifically, this approach advocates using software to replace features provided by traditional customised chips. To alleviate the management burden, it also advocates applying machine learning to automatically configure storage to meet the dynamic requirements of workloads running on it. This article focuses on the analysis feature of the iSDS cluster by detailing its architecture and design.
The land management and operations database (LMOD)
USDA-ARS?s Scientific Manuscript database
This paper presents the design, implementation, deployment, and application of the Land Management and Operations Database (LMOD). LMOD is the single authoritative source for land management and operation reference data within the USDA enterprise data warehouse. LMOD supports modeling appl...
Development of strategic enterprise architecture design for the Ohio Department of Transportation.
DOT National Transportation Integrated Search
2014-01-01
In order for the Ohio Department of Transportation (ODOT) to successfully carry out its mission, it is essential to appropriately incorporate and utilize technology. Information management systems are vital to maintaining the state's transporta...
Picture archiving and computing systems: the key to enterprise digital imaging.
Krohn, Richard
2002-09-01
The utopian view of the electronic medical record includes the digital transformation of all aspects of patient information. Historically, imagery from the radiology, cardiology, ophthalmology, and pathology departments, as well as the emergency room, has been a morass of paper, film, and other media, isolated within each department's system architecture. In answer to this dilemma, picture archiving and computing systems have become the focal point of efforts to create a single platform for the collection, storage, and distribution of clinical imagery throughout the health care enterprise.
Integrating all medical records to an enterprise viewer.
Li, Haomin; Duan, Huilong; Lu, Xudong; Zhao, Chenhui; An, Jiye
2005-01-01
The idea behind hospital information systems is to make all of a patient's medical reports, lab results, and images electronically available to clinicians, instantaneously, wherever they are. But the higgledy-piggledy evolution of most hospital computer systems makes it hard to integrate all these clinical records. Although several integration standards have been proposed to meet this challenge, none of them fits Chinese hospitals well. In this paper, we introduce our work implementing a three-tiered enterprise viewer at Huzhou Central Hospital to integrate all existing medical information systems with limited resources.
Enterprise Cloud Architecture for Chinese Ministry of Railway
NASA Astrophysics Data System (ADS)
Shan, Xumei; Liu, Hefeng
Enterprises like the PRC Ministry of Railways (MOR) face various challenges, ranging from a highly distributed computing environment to low legacy system utilization, and cloud computing is increasingly regarded as a workable solution to address this. This article describes a full-scale cloud solution with Intel Tashi as the virtual machine infrastructure layer, Hadoop HDFS as the computing platform, and a self-developed SaaS interface, gluing the virtual machines and HDFS together with the Xen hypervisor. As a result, on-demand computing task application and deployment are addressed for MOR's real working scenarios at the end of the article.
Research on SaaS and Web Service Based Order Tracking
NASA Astrophysics Data System (ADS)
Jiang, Jianhua; Sheng, Buyun; Gong, Lixiong; Yang, Mingzhong
To address order tracking across enterprises in a Dynamic Virtual Enterprise (DVE), a SaaS and web service based order tracking solution was designed by analyzing the order management process in a DVE. To build the system, a SaaS-based architecture for managing the manufacturing states of order tasks was constructed, and a method of encapsulating application systems as web services was investigated. The order tracking process in the system is then described. Finally, the feasibility of this study was verified by the development of a prototype system.
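The following minimal sketch is not taken from the paper; it only illustrates the general idea of exposing an enterprise's order-task state as a web service that a SaaS order-tracking layer could poll. The endpoint, data fields, and use of Flask are assumptions for illustration.

```python
# Hypothetical sketch: one DVE member enterprise exposing order-task state
# over HTTP so a cross-enterprise tracking service can query it.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the enterprise's order/task state store.
ORDER_STATES = {
    "ORD-1001": {"task": "machining", "state": "in_progress", "progress": 0.6},
    "ORD-1002": {"task": "assembly", "state": "completed", "progress": 1.0},
}

@app.route("/orders/<order_id>/state")
def order_state(order_id):
    # Return the current manufacturing state of one order task.
    record = ORDER_STATES.get(order_id)
    if record is None:
        return jsonify({"error": "unknown order"}), 404
    return jsonify({"order_id": order_id, **record})

if __name__ == "__main__":
    app.run(port=8080)
```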
Research on time synchronization scheme of MES systems in manufacturing enterprise
NASA Astrophysics Data System (ADS)
Yuan, Yuan; Wu, Kun; Sui, Changhao; Gu, Jin
2018-04-01
With the spread of information systems and automated production in manufacturing enterprises, data interaction between business systems has become increasingly frequent, and the required time accuracy correspondingly higher. However, common NTP network time synchronization setups lack redundancy and monitoring mechanisms; when a failure occurs, it can only be remedied after the fact, which strongly affects production data and system interaction. Based on this, the paper proposes an RHCS-based NTP server architecture that automatically detects NTP status and fails over via script.
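As a rough illustration only (the paper's actual RHCS configuration and scripts are not reproduced here), a monitoring script of this kind might check whether the local NTP daemon is synchronized and step the clock from a secondary server when it is not; the server names below are hypothetical.

```python
# Hedged sketch of an NTP health check with failover; assumes the standard
# ntpstat and ntpdate utilities are installed and the script has privileges.
import subprocess

PRIMARY = "ntp1.example.local"    # hypothetical primary server
SECONDARY = "ntp2.example.local"  # hypothetical backup server

def ntp_synchronized() -> bool:
    # ntpstat exits with status 0 when the local clock is synchronized.
    result = subprocess.run(["ntpstat"], capture_output=True)
    return result.returncode == 0

def resync_from(server: str) -> None:
    # One-shot resynchronization against the given server.
    subprocess.run(["ntpdate", "-u", server], check=False)

if __name__ == "__main__":
    if not ntp_synchronized():
        print(f"NTP unsynchronized, failing over to {SECONDARY}")
        resync_from(SECONDARY)
```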
System architecture of communication infrastructures for PPDR organisations
NASA Astrophysics Data System (ADS)
Müller, Wilmuth
2017-04-01
The growing number of events affecting public safety and security (PS and S) on a regional scale, with the potential to grow into large-scale cross-border disasters, puts increased pressure on organizations responsible for PS and S. In order to respond timely and adequately to such events, Public Protection and Disaster Relief (PPDR) organizations need to cooperate, align their procedures and activities, share the needed information, and be interoperable. Existing PPDR/PMR technologies do not provide broadband capability, which is a major limitation in supporting new services and the resulting new information flows, and currently they have no successor. There is also no known standard that addresses interoperability of these technologies. The paper at hand provides an approach to tackle the above-mentioned aspects by defining an Enterprise Architecture (EA) of PPDR organizations and a System Architecture of next-generation PPDR communication networks for a variety of applications and services on broadband networks, including the ability to perform inter-system, inter-agency, and cross-border operations. The Open Safety and Security Architecture Framework (OSSAF) provides a framework and approach to coordinate the perspectives of different types of stakeholders within a PS and S organization. It aims at bridging the silos in the chain of command and at leveraging interoperability between PPDR organizations. The framework incorporates concepts of several mature enterprise architecture frameworks, including the NATO Architecture Framework (NAF). However, OSSAF does not provide details on how NAF should be used for describing the OSSAF perspectives and views. In this contribution a mapping of the NAF elements to the OSSAF views is provided. Based on this mapping, an EA of PPDR organizations with a focus on communication infrastructure related capabilities is presented. Following the capability modeling, a system architecture for secure and interoperable communication infrastructures for PPDR organizations is presented. This architecture was implemented within a project sponsored by the European Union and successfully demonstrated in a live validation exercise in June 2016.
NASA Astrophysics Data System (ADS)
Kim, Woojin; Boonn, William
2010-03-01
Data mining of existing radiology and pathology reports within an enterprise health system can be used for clinical decision support, research, education, as well as operational analyses. In our health system, the database of radiology and pathology reports exceeds 13 million entries combined. We are building a web-based tool to allow search and data analysis of these combined databases using freely available and open source tools. This presentation will compare performance of an open source full-text indexing tool to MySQL's full-text indexing and searching and describe implementation procedures to incorporate these capabilities into a radiology-pathology search engine.
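As a point of reference for the MySQL side of that comparison, the illustrative snippet below shows how a full-text index and a MATCH ... AGAINST query could be set up; the table, columns, and credentials are hypothetical and not the authors' schema.

```python
# Illustrative only: MySQL full-text indexing and search over report text,
# issued from Python via mysql-connector. Schema and credentials are invented.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="rad_path", password="secret", database="reports"
)
cur = conn.cursor()

# One-time setup: a FULLTEXT index over the free-text report body.
cur.execute("CREATE FULLTEXT INDEX idx_report_text ON report (report_text)")

# Natural-language search for reports mentioning a phrase.
cur.execute(
    "SELECT report_id, exam_date FROM report "
    "WHERE MATCH(report_text) AGAINST (%s IN NATURAL LANGUAGE MODE)",
    ("pulmonary embolism",),
)
for report_id, exam_date in cur.fetchall():
    print(report_id, exam_date)
conn.close()
```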
Knowledge management in healthcare: towards 'knowledge-driven' decision-support services.
Abidi, S S
2001-09-01
In this paper, we highlight the involvement of Knowledge Management in a healthcare enterprise. We argue that the 'knowledge quotient' of a healthcare enterprise can be enhanced by procuring diverse facets of knowledge from the seemingly placid healthcare data repositories, and subsequently operationalising the procured knowledge to derive a suite of Strategic Healthcare Decision-Support Services that can impact strategic decision-making, planning and management of the healthcare enterprise. We first present a reference Knowledge Management environment, a Healthcare Enterprise Memory, with the functionality to acquire, share and operationalise the various modalities of healthcare knowledge. Next, we present the functional and architectural specification of a Strategic Healthcare Decision-Support Services Info-structure, which effectuates a synergy between knowledge procurement (vis-à-vis Data Mining) and knowledge operationalisation (vis-à-vis Knowledge Management) techniques to generate a suite of strategic knowledge-driven decision-support services. In conclusion, we argue that the proposed Healthcare Enterprise Memory is an attempt to rethink the possible sources of leverage to improve healthcare delivery, thereby providing a valuable strategic planning and management resource to healthcare policy makers.
NASA Astrophysics Data System (ADS)
Mathews, T. J.; Little, M. M.; Huffer, E.
2013-12-01
Working from an Enterprise Architecture, the ASDC has implemented a suite of new tools to provide improved access and understanding of data products related to the Earth's radiation budget, clouds, aerosols and tropospheric chemistry. This poster describes the overall architecture and the capabilities that have been implemented within the past twelve months. Further insight is offered into the issues and constraints of those tools, as well as lessons learned in their implementation.
Data Modeling Challenges of Advanced Interoperability.
Blobel, Bernd; Oemig, Frank; Ruotsalainen, Pekka
2018-01-01
Progressive health paradigms, involving many different disciplines and combining multiple policy domains, require advanced interoperability solutions. This results in special challenges for modeling health systems. The paper discusses classification systems for data models and enterprise business architectures and compares them with the ISO Reference Architecture. On that basis, existing definitions, specifications and standards of data models for interoperability are evaluated and their limitations are discussed. Amendments to correctly use those models and to better meet the aforementioned challenges are offered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loparo, Kenneth; Kolacinski, Richard; Threeanaew, Wanchat
A central goal of the work was to enable both the extraction of all relevant information from sensor data and the application of information gained from appropriate processing and fusion at the system level to operational control and decision-making at various levels of the control hierarchy through: 1. Exploiting the deep connection between information theory and the thermodynamic formalism, 2. Deployment using distributed intelligent agents with testing and validation in a hardware-in-the-loop simulation environment. Enterprise architectures are the organizing logic for key business processes and IT infrastructure and, while the generality of current definitions provides sufficient flexibility, the current architecture frameworks do not inherently provide the appropriate structure. Of particular concern is that existing architecture frameworks often do not make a distinction between "data" and "information." This work defines an enterprise architecture for health and condition monitoring of power plant equipment and further provides the appropriate foundation for addressing shortcomings in current architecture definition frameworks through the discovery of the information connectivity between the elements of a power generation plant, that is, identifying the correlative structure between available observation streams using informational measures. The principal focus here is on the implementation and testing of an emergent, agent-based algorithm, based on the foraging behavior of ants, for eliciting this structure, and on measures for characterizing differences between communication topologies. The elicitation algorithms are applied to data streams produced by a detailed numerical simulation of Alstom's 1000 MW ultra-super-critical boiler and steam plant. The elicitation algorithm and topology characterization can be based on different informational metrics for detecting connectivity, e.g. mutual information and linear correlation.
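For context, the sketch below shows only the informational measure mentioned at the end of the abstract, a histogram-based mutual information estimate between two observation streams; it does not implement the ant-foraging elicitation algorithm, and the synthetic signals are invented for illustration.

```python
# Histogram-based mutual information between two data streams (in nats).
import numpy as np

def mutual_information(x, y, bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Synthetic streams: y depends on x, z is independent of x.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.8 * x + 0.2 * rng.normal(size=10_000)
z = rng.normal(size=10_000)
print("I(x;y) =", mutual_information(x, y))  # clearly positive: connected streams
print("I(x;z) =", mutual_information(x, z))  # near zero: no connectivity
```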
Cryptography for a High-Assurance Web-Based Enterprise
2013-10-01
Other cryptographic services: Java provides many cryptographic services through the Java Cryptography Architecture (JCA) framework. ...
Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations
Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali
2015-01-01
Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414
Development of enterprise architecture in university using TOGAF as framework
NASA Astrophysics Data System (ADS)
Amalia, Endang; Supriadi, Hari
2017-06-01
The University of XYZ is located in Bandung, West Java. It has an information technology (IT) infrastructure which is managed independently. Currently, the IT at the University of XYZ employs a complex conventional management pattern that does not result in a fully integrated IT infrastructure. This is not adaptive in addressing solutions to changing business needs and applications. In addition, it impedes the innovative development of sustainable IT services and also contributes to an unnecessarily high workload for managers. This research aims to establish the concept of IS/IT strategic planning, which is used in the development of IS/IT and in designing the information technology infrastructure based on The Open Group Architecture Framework (TOGAF) and its Architecture Development Method (ADM). A case study is done at the University of XYZ using the concept of qualitative research through a review of literature and interviews. This study generates the following stages: (1) forming a design using TOGAF and the ADM around nine functional areas of business and proposing 12 application candidates to be developed at XYZ University; (2) generating 11 principles for the development of the information technology architecture; (3) creating a portfolio of future applications (McFarlan Grid), yielding 6 applications in the strategic quadrant (SIAKAD-T, E-LIBRARY, SIPADU-T, DSS, SIPPM-T, KMS), 2 applications in the operations quadrant (PMS-T, CRM), and 4 applications in the support quadrant (MNC-T, NOPEC-T, EMAIL-SYSTEM, SSO); and (4) modelling the enterprise architecture of this study, which could be a reference in making a blueprint for the development of information systems and information technology at the University of XYZ.
SiC: An Agent Based Architecture for Preventing and Detecting Attacks to Ubiquitous Databases
NASA Astrophysics Data System (ADS)
Pinzón, Cristian; de Paz, Yanira; Bajo, Javier; Abraham, Ajith; Corchado, Juan M.
One of the main attacks on ubiquitous databases is the Structured Query Language (SQL) injection attack, which causes severe damage both in the commercial aspect and in the user's confidence. This chapter proposes the SiC architecture as a solution to the SQL injection attack problem. This is a hierarchical distributed multiagent architecture, which involves an entirely new approach with respect to existing architectures for the prevention and detection of SQL injections. SiC incorporates a kind of intelligent agent, which integrates a case-based reasoning system. This agent, which is the core of the architecture, allows the application of detection techniques based on anomalies as well as those based on patterns, providing a great degree of autonomy, flexibility, robustness and dynamic scalability. The characteristics of the multiagent system allow the architecture to detect attacks from different types of devices, regardless of their physical location. The architecture has been tested on a medical database, guaranteeing safe access from various devices such as PDAs and notebook computers.
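Purely as an illustration of the pattern-based half of that combination (the SiC agents' actual rule base and case-based reasoning are not shown), a signature check might look like the following; the patterns are invented examples.

```python
# Hypothetical signature-based screening of user input for common SQL
# injection shapes; anomaly-based (case-based reasoning) detection is omitted.
import re

SUSPICIOUS_PATTERNS = [
    r"(?i)\bunion\b.+\bselect\b",   # UNION-based injection
    r"(?i)\bor\b\s+1\s*=\s*1",      # tautology such as OR 1=1
    r"--\s*$",                      # trailing comment cutting off the query
    r"(?i);\s*drop\s+table",        # piggy-backed statement
]

def looks_like_injection(user_input: str) -> bool:
    return any(re.search(p, user_input) for p in SUSPICIOUS_PATTERNS)

for candidate in ["Smith", "x' OR 1=1 --", "1; DROP TABLE patients"]:
    print(candidate, "->", looks_like_injection(candidate))
```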
Code of Federal Regulations, 2012 CFR
2012-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Using Virtual Servers to Teach the Implementation of Enterprise-Level DBMSs: A Teaching Note
ERIC Educational Resources Information Center
Wagner, William P.; Pant, Vik
2010-01-01
One of the areas where demand has remained strong for MIS students is in the area of database management. Since the early days, this topic has been a mainstay in the MIS curriculum. Students of database management today typically learn about relational databases, SQL, normalization, and how to design and implement various kinds of database…
Fiacco, P. A.; Rice, W. H.
1991-01-01
Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732
Organizational Leadership Process for University Education
ERIC Educational Resources Information Center
Llamosa-Villalba, Ricardo; Delgado, Dario J.; Camacho, Heidi P.; Paéz, Ana M.; Valdivieso, Raúl F.
2014-01-01
This paper relates the "Agile School", an emerging archetype of the enterprise architecture: "Processes of Organizational Leadership" for leading and managing strategies, tactics and operations of forming in Higher Education Institutions. Agile School is a system for innovation and deep transformation of University Institutions…
Network-Centric Operations Support: Lessons Learned, Status, and Way-Ahead
2014-06-01
2006-06-01
Table of contents excerpt: The Enterprise Architecture Management Maturity Framework; What Are Information Technology Frameworks and Why Should They Be Implemented?; The Information Technology Infrastructure Library Framework.
Governing for Enterprise Security (Briefing Charts)
2005-01-01
Briefing excerpt (Carnegie Mellon University, 2005) on adequate security and operational risk: "Appropriate business security is that which..." Citing Sherwood, John; Clark, Andrew; Lynas, David. "Systems and Business Security Architecture." SABSA Limited, 17 September 2003.
75 FR 77753 - Pilot Program for the Temporary Exchange of Information Technology Personnel
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-14
... to enhance its position and expertise in the IT field, particularly in cybersecurity. The... IT professional skills, particularly in the area of cybersecurity. Several Components including... systems, software application, cybersecurity, enterprise architecture, policy and planning, internet/web...
Department of Defense Information Enterprise Architecture Version 1.2
2010-05-07
mission. Principles express an organization’s intentions so that design and investment decisions can be made from a common basis of understanding ... Business rules are definitive statements that constrain operations to implement the principle and associated policies. The vision, principles, and
caGrid 1.0: a Grid enterprise architecture for cancer research.
Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel
2007-10-11
caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5.
Health care applications based on mobile phone centric smart sensor network.
Quero, J M; Tarrida, C L; Santana, J J; Ermolov, V; Jantunen, I; Laine, H; Eichholz, J
2007-01-01
This paper presents the MIMOSA architecture and development platform for creating Ambient Intelligence applications. MIMOSA achieves this objective by developing a personal mobile-device-centric architecture and open technology platform in which microsystem technology is the key enabling technology, due to its low cost, low power consumption, and small size. This paper focuses on the demonstration activities carried out in the field of health care. The MIMOSA project is a European-level initiative involving 15 enterprises, research institutions, and universities.
2008-06-01
executes the avionics test) can run on the new ATS, thus creating the common ATS framework. The system will also enable numerous new functional... Enterprise-level architecture that reflects corporate DoD priorities and requirements for business systems, and provides a common framework to ensure that... entire Business Mission Area (BMA) of the DoD. The BEA also contains a set of integrated Department of Defense Architecture Framework (DoDAF
Research on Customer Value Based on Extension Data Mining
NASA Astrophysics Data System (ADS)
Chun-Yan, Yang; Wei-Hua, Li
Extenics is a new discipline for dealing with contradiction problems using formalized models. Extension data mining (EDM) is a product combining Extenics with data mining. It explores how to acquire knowledge based on extension transformations, called extension knowledge (EK), taking advantage of extension methods and data mining technology. EK includes extensible classification knowledge, conductive knowledge and so on. Extension data mining technology (EDMT) is a new data mining technology that mines EK in databases or data warehouses. Customer value (CV) measures how essential a customer relationship is to an enterprise, with the enterprise as the subject assessing value and the customers as the objects being assessed. CV varies continually. Mining the changing knowledge of CV in databases using EDMT, including quantitative change knowledge and qualitative change knowledge, can provide a foundation for an enterprise deciding its customer relationship management (CRM) strategy. It can also provide a new idea for studying CV.
NASA Astrophysics Data System (ADS)
Gilman, Charles R.; Aparicio, Manuel; Barry, J.; Durniak, Timothy; Lam, Herman; Ramnath, Rajiv
1997-12-01
An enterprise's ability to deliver new products quickly and efficiently to market is critical for competitive success. While manufacturers recognize the need for speed and flexibility to compete in this marketplace, companies do not have the time or capital to move to new automation technologies. The National Industrial Information Infrastructure Protocols Consortium's Solutions for MES Adaptable Replicable Technology (NIIIP SMART) subgroup is developing an information infrastructure to enable integration and interoperation among Manufacturing Execution Systems (MES) and Enterprise Information Systems within an enterprise or among enterprises. The goal of these developments is an adaptable, affordable, reconfigurable, integratable manufacturing system. Key innovative aspects of NIIIP SMART are: (1) design of an industry-standard object model that represents the diverse aspects of MES; (2) design of a distributed object network to support real-time information sharing; (3) product data exchange based on STEP and EXPRESS (ISO 10303); (4) application of workflow and knowledge management technologies to enact manufacturing and business procedures and policy; (5) application of intelligent agents to support emergent factories. This paper illustrates how these technologies have been incorporated into the NIIIP SMART system architecture to enable the integration and interoperation of existing tools and future MES applications in a 'plug and play' environment.
Technical Architecture of ONC-Approved Plans For Statewide Health Information Exchange
Barrows, Randolph C.; Ezzard, John
2011-01-01
ONC-approved state plans for HIE were reviewed for descriptions and depictions of statewide HIE technical architecture. Review was complicated by non-standard organizational elements and technical terminology across state plans. Findings were mapped to industry standard, referenced, and defined HIE architecture descriptions and characteristics. Results are preliminary due to the initial subset of ONC-approved plans available, the rapid pace of new ONC-plan approvals, and continuing advancements in standards and technology of HIE, etc. Review of 28 state plans shows virtually all include a direct messaging component, but for participating entities at state-specific levels of granularity (RHIO, enterprise, organization/provider). About ½ of reviewed plans describe a federated architecture, and ¼ of plans utilize a single-vendor “hybrid-federated” architecture. About 1/3 of states plan to leverage new federal and open exchange technologies (DIRECT, CONNECT, etc.). Only one plan describes a centralized architecture for statewide HIE, but others combine central and federated architectural approaches. PMID:22195059
Horban', A Ie
2013-09-01
The implementation of state policy on technology transfer in the medical branch, under the law of Ukraine of 02.10.2012 No 5407-VI "On Amendments to the Law of Ukraine 'On State Regulation of Activity in the Field of Technology Transfer'", is considered, namely ensuring the formation of a branch database on technologies and intellectual property rights owned by scientific institutions, organizations, higher medical education institutions, and budget-funded enterprises of the healthcare sphere of Ukraine. An analysis of international and domestic experience in processing information about intellectual property rights and of systems supporting the transfer of new technologies is made. The main conceptual principles for creating this branch database of technology transfer and a branch technology transfer network are defined.
Brandner, Antje; Schreiweis, Björn; Aguduri, Lakshmi S; Bronsch, Tobias; Kunz, Aline; Pensold, Peter; Stein, Katharina E; Weiss, Nicolas; Yüksekogul, Nilay; Bergh, Björn; Heinze, Oliver
2016-01-01
Over the last years we have stepwise implemented our vision of a personal cross-enterprise electronic health record (PEHR) in the Rhine-Neckar region in Germany. The patient portal is one part of the PEHR architecture with IHE connectivity. The patient is enabled to access and manage his medical record through the patient portal. Moreover, he can give his consent regarding which healthcare providers are allowed to send data into or read data from his medical record. Forthcoming studies will provide evidence of improvements and identify further requirements for development.
PathCase-SB architecture and database design
2011-01-01
Background: Integration of metabolic pathway resources and regulatory metabolic network models, and deploying new tools on the integrated platform, can help perform more effective and more efficient systems biology research on understanding regulation in metabolic networks. Therefore, the tasks of (a) integrating regulatory metabolic networks and existing models under a single database environment, and (b) building tools to help with modeling and analysis, are desirable and intellectually challenging computational tasks. Description: PathCase Systems Biology (PathCase-SB) is built and released. The PathCase-SB database provides data and an API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, the BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions: The PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889
Study on Enterprise Education System for Undergraduates in Universities
ERIC Educational Resources Information Center
Zhang, Min
2014-01-01
This paper studies the higher school undergraduate entrepreneurship education system. Its architecture mainly includes five aspects of content: improve the students' entrepreneurial cognitive ability, adjust the teacher's education idea, carry out various kinds of entrepreneurship and entrepreneurial training, carry out flexible forms of team…
NASA Astrophysics Data System (ADS)
Berres, A.; Karthik, R.; Nugent, P.; Sorokine, A.; Myers, A.; Pang, H.
2017-12-01
Building an integrated data infrastructure that can meet the needs of a sustainable energy-water resource management requires a robust data management and geovisual analytics platform, capable of cross-domain scientific discovery and knowledge generation. Such a platform can facilitate the investigation of diverse complex research and policy questions for emerging priorities in Energy-Water Nexus (EWN) science areas. Using advanced data analytics, machine learning techniques, multi-dimensional statistical tools, and interactive geovisualization components, such a multi-layered federated platform is being developed, the Energy-Water Nexus Knowledge Discovery Framework (EWN-KDF). This platform utilizes several enterprise-grade software design concepts and standards such as extensible service-oriented architecture, open standard protocols, event-driven programming model, enterprise service bus, and adaptive user interfaces to provide a strategic value to the integrative computational and data infrastructure. EWN-KDF is built on the Compute and Data Environment for Science (CADES) environment in Oak Ridge National Laboratory (ORNL).
Human Exploration Framework Team: Strategy and Status
NASA Technical Reports Server (NTRS)
Muirhead, Brian K.; Sherwood, Brent; Olson, John
2011-01-01
Human Exploration Framework Team (HEFT) was formulated to create a decision framework for human space exploration that drives out the knowledge, capabilities and infrastructure NASA needs to send people to explore multiple destinations in the Solar System in an efficient, sustainable way. The specific goal is to generate an initial architecture that can evolve into a long term, enterprise-wide architecture that is the basis for a robust human space flight enterprise. This paper will discuss the initial HEFT activity which focused on starting up the cross-agency team, getting it functioning, developing a comprehensive development and analysis process and conducting multiple iterations of the process. The outcome of this process will be discussed including initial analysis of capabilities and missions for at least two decades, keeping Mars as the ultimate destination. Details are provided on strategies that span a broad technical and programmatic trade space, are analyzed against design reference missions and evaluated against a broad set of figures of merit including affordability, operational complexity, and technical and programmatic risk.
ImTK: an open source multi-center information management toolkit
NASA Astrophysics Data System (ADS)
Alaoui, Adil; Ingeholm, Mary Lou; Padh, Shilpa; Dorobantu, Mihai; Desai, Mihir; Cleary, Kevin; Mun, Seong K.
2008-03-01
The Information Management Toolkit (ImTK) Consortium is an open source initiative to develop robust, freely available tools related to the information management needs of basic, clinical, and translational research. An open source framework and agile programming methodology can enable distributed software development, while an open architecture will encourage interoperability across different environments. The ISIS Center has conceptualized a prototype data sharing network that simulates a multi-center environment based on a federated data access model. This model includes the development of software tools to enable efficient exchange, sharing, management, and analysis of multimedia medical information such as clinical information, images, and bioinformatics data from multiple data sources. The envisioned ImTK data environment will include an open architecture and data model implementation that complies with existing standards such as Digital Imaging and Communications in Medicine (DICOM), Health Level 7 (HL7), and the technical framework and workflow defined by the Integrating the Healthcare Enterprise (IHE) Information Technology Infrastructure initiative, mainly the Cross Enterprise Document Sharing (XDS) specifications.
Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis
2007-01-01
Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
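To make the contrast concrete, the sketch below sets up a tiny entity-attribute-value (EAV) table of the kind the paper discusses and shows the pivot-style self-join that such vertical schemas require; the schema is invented, and the authors' sparse column-store engine itself is not reproduced.

```python
# Invented example of sparse clinical data stored as EAV rows, and the
# self-join needed to reconstruct a row-oriented view of two attributes.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE eav (patient_id INTEGER, attribute TEXT, value TEXT)")
cur.executemany(
    "INSERT INTO eav VALUES (?, ?, ?)",
    [(1, "hemoglobin", "13.2"), (1, "smoker", "no"), (2, "creatinine", "1.1")],
)

# Only observed facts are stored, but recombining attributes needs joins,
# which is where conventional EAV queries become awkward and slow.
cur.execute(
    "SELECT a.patient_id, a.value AS hemoglobin, b.value AS smoker "
    "FROM eav a LEFT JOIN eav b "
    "ON a.patient_id = b.patient_id AND b.attribute = 'smoker' "
    "WHERE a.attribute = 'hemoglobin'"
)
print(cur.fetchall())  # [(1, '13.2', 'no')]
conn.close()
```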
Yin, Zhujia; Liu, Lijuan; Wang, Haidong
2018-01-01
Based on the database data of Chinese industrial enterprises from 2000 to 2007 and the LP method, this paper measures the total factor productivity of enterprises and investigates the effect of different mixed ownership forms on enterprises’ efficiency and the effect of heterogeneous ownership balance on the mixed ownership enterprises’ efficiency. The state-owned enterprise and mixed ownership enterprise are identified by the enterprise’s paid-up capital. The results show that, on the whole, for the mixed ownership enterprise, the higher the diversification degree of the shareholders is, the higher the efficiency becomes, and in different types of industries, the mixed forms of shareholders have different effects on the efficiency of enterprises. The heterogeneous ownership balance and the enterprise efficiency show nonlinear U-type relationships. Both the higher and lower heterogeneous ownership balance degrees will promote the enterprise’s efficiency. However, when the ownership balance degree is in the range of [0.2, 0.5], the increase in ownership balance will lead to the decline of enterprise efficiency. Therefore, when introducing non-state-owned capital, state-owned enterprises should take full account of their own characteristics by rationally controlling the shareholding ratio of non-state-owned capital and play the positive role of a mixed ownership structure in corporate governance with appropriate ownership balances. PMID:29614126
NASA Astrophysics Data System (ADS)
Kemp, C.; Car, N. J.
2016-12-01
Geoscience Australia (GA) is a government agency that provides advice on the geology and geography of Australia. It is the custodian of many digital and physical datasets of national significance. For several years GA has been implementing an enterprise approach to provenance management. The goal is transparency and reproducibility for all of GA's information products, an objective supported at the highest levels and explicitly listed in its Science Principles. Currently GA is finalising a set of enterprise tools to assist with provenance management and rolling out provenance reporting to different science areas. GA has adopted or developed: provenance storage systems; provenance collection code libraries (for use within automated systems); reporting interfaces (for manual use); and provenance representation capability within legacy catalogues. Using these tools within GA's science areas involves modelling the scenario first and then assessing whether the area has its data managed in such a way that links to data within provenance remain resolvable in perpetuity. We don't just want to represent provenance (demonstrating transparency); we want to access data via provenance (allowing for reproducibility). A subtask of GA's current work is to link physical samples to information products (datasets, reports, papers) by uniquely and persistently identifying samples using International GeoSample Numbers and then modelling automated and manual laboratory workflows and associated tasks, such as data delivery to corporate databases, using the W3C's PROV Data Model. We use PROV-DM throughout our modelling and systems. We are also moving to deliver all sample and digital dataset metadata across the agency in the Web Ontology Language (OWL) and to expose it via Linked Data methods in order to allow Semantic Web querying of multiple systems, allowing provenance to be leveraged through a single method and query point. Through the Science First Transformation Program GA is undergoing a significant rethinking of its data architecture, curation, and access to support the Digital Science capability, for which provenance management is an output.
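As a small illustration of the PROV Data Model and Linked Data querying described above (the identifiers below are made up and are not GA's catalogue entries), a derivation from a physical sample to a dataset can be recorded and queried with rdflib:

```python
# Hypothetical PROV-DM triples linking a sample to a derived dataset, queried
# with SPARQL via rdflib. Namespaces: W3C PROV-O; example.org is a placeholder.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/ga/")

g = Graph()
g.bind("prov", PROV)

sample = EX["sample/igsn-placeholder"]     # physical sample (hypothetical ID)
dataset = EX["dataset/geochem-2016"]       # derived information product
analysis = EX["activity/lab-analysis-42"]  # laboratory workflow task

g.add((sample, RDF.type, PROV.Entity))
g.add((dataset, RDF.type, PROV.Entity))
g.add((analysis, RDF.type, PROV.Activity))
g.add((dataset, PROV.wasDerivedFrom, sample))
g.add((dataset, PROV.wasGeneratedBy, analysis))

# "Which products were derived from this sample?"
query = """
PREFIX prov: <http://www.w3.org/ns/prov#>
SELECT ?product WHERE { ?product prov:wasDerivedFrom ?sample . }
"""
for row in g.query(query, initBindings={"sample": sample}):
    print(row.product)
```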
Research of B2B e-Business Application and Development Technology Based on SOA
NASA Astrophysics Data System (ADS)
Xian, Li Liang
Today, the B2B e-business environment in most enterprises comprises multiple heterogeneous and independent systems that are based on different platforms and operate in different functional departments. To deal with increased services in the future, an enterprise needs to expand its systems continuously; this, however, causes great inconvenience for future system maintenance. To implement e-business successfully, a unified internal e-business integration environment must be established to integrate the internal systems and thus realize a unified internal mechanism within the enterprise e-business system. The SOA (service-oriented architecture) can meet these requirements well. The integration of SOA-based applications can reduce the dependency between different types of IT systems, reduce the cost of system maintenance and the complexity of IT system operation, increase the flexibility of system deployment, and at the same time remove barriers to service innovation. Research on and application of SOA-based enterprise application systems has become an important research topic. Based on SOA, this paper designs an enterprise e-business application model and realizes a flexible and expandable e-business platform.
Systemic Approach of a Virtual Enterprise that Constructs Wireless Payment Mechanisms
NASA Astrophysics Data System (ADS)
Assimakopoulos, Nikitas A.; Riggas, Anastasis N.; Kotsimpos, George K.
2004-08-01
Enterprises and Organizations are realizing that there are many win-win scenarios, for their customers and business partners, using the latest technology to enact convenient and secure purchases 'over the air'. Wireless Payment (W/P) is the key element of Wireless Commerce. Businesses around the world are attempting to position themselves to operate in a highly competitive global economy. A single organization is often not able to develop sufficient internal design or production capabilities to respond effectively within a short period of time. The focus of this paper will be on the development and analysis of a Virtual Enterprise Architecture for the construction of W/P Mechanisms using Systemic Methodologies. A framework for the rapid and efficient integration of the business processes of the participating companies that construct W/P Mechanisms is provided.
Expanding the spectrum: 20 years of advances in MMW imagery
NASA Astrophysics Data System (ADS)
Martin, Christopher A.; Lovberg, John A.; Kolinko, Valdimir G.
2017-05-01
Millimeter-wave imaging has expanded from the single-pixel swept imagers developed in the 1960s to large field-of-view real-time systems in use today. Trex Enterprises has been developing millimeter-wave imagers since 1991 for aviation and security applications, as well as millimeter-wave communications devices. As MMIC device development was stretching into the MMW band in the 1990s, Trex developed novel imaging architectures to create 2-D staring systems with large pixel counts and no moving parts while using a minimal number of devices. Trex also contributed to device development in amplifiers, switches, and detectors to enable the next generation of passive MMW imaging systems. The architectures and devices developed continue to be employed in security imagers, radar, and radios produced by Trex. This paper reviews the development of the initial real-time MMW imagers and associated devices by Trex Enterprises from the 1990s through the 2000s. The devices include W-band MMIC amplifiers, switches, and detector diodes, as well as MMW circuit boards and optical processors. The imaging systems discussed include two different real-time passive MMW imagers flown on helicopters and a MMW radar system, as well as implementation of the devices and architectures in simpler stand-off and gateway security imagers.
Enterprise Command and Control Requirements and Common Architecture on US Navy Surface Combatants
2009-06-01
... training covered the relevant C2 functions. Cost savings in the form of man-hours can then be identified in areas of potential training redundancy.
Helix Project Testbed - Towards the Self-Regenerative Incorruptible Enterprise
2011-09-14
... secure architectural skeleton. This skeleton couples a critical slice of the low-level hardware implementation with a microkernel in a way that allows information flow properties of the entire construction to be statically verified all the way...
Database of Mechanical Properties of Textile Composites
NASA Technical Reports Server (NTRS)
Delbrey, Jerry
1996-01-01
This report describes the approach followed to develop a database for mechanical properties of textile composites. The data in this database is assembled from NASA Advanced Composites Technology (ACT) programs and from data in the public domain. This database meets the data documentation requirements of MIL-HDBK-17, Section 8.1.2, which describes in detail the type and amount of information needed to completely document composite material properties. The database focuses on mechanical properties of textile composites. Properties are available for a range of parameters such as direction, fiber architecture, materials, environmental condition, and failure mode. The composite materials in the database contain innovative textile architectures such as the braided, woven, and knitted materials evaluated under the NASA ACT programs. In summary, the database contains results for approximately 3500 coupon level tests, for ten different fiber/resin combinations, and seven different textile architectures. It also includes a limited amount of prepreg tape composites data from ACT programs where side-by-side comparisons were made.
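As a toy illustration of how such coupon-level records might be filtered by fiber architecture and environmental condition (the column names and values below are invented, not the database's actual schema):

```python
# Invented miniature of a coupon-test table, filtered the way a user of the
# textile composites database might select a property subset.
import pandas as pd

tests = pd.DataFrame([
    {"architecture": "2D braided", "environment": "RTD", "test": "tension",     "strength_MPa": 612},
    {"architecture": "3D woven",   "environment": "ETW", "test": "tension",     "strength_MPa": 540},
    {"architecture": "2D braided", "environment": "ETW", "test": "compression", "strength_MPa": 387},
])

braided_hot_wet = tests[(tests.architecture == "2D braided") & (tests.environment == "ETW")]
print(braided_hot_wet[["test", "strength_MPa"]])
```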
NASA Astrophysics Data System (ADS)
Tysowski, Piotr K.; Ling, Xinhua; Lütkenhaus, Norbert; Mosca, Michele
2018-04-01
Quantum key distribution (QKD) is a means of generating keys between a pair of computing hosts that is theoretically secure against cryptanalysis, even by a quantum computer. Although there is much active research into improving the QKD technology itself, there is still significant work to be done to apply engineering methodology and determine how it can be practically built to scale within an enterprise IT environment. Significant challenges exist in building a practical key management service (KMS) for use in a metropolitan network. QKD is generally a point-to-point technique only and is subject to steep performance constraints. The integration of QKD into enterprise-level computing has been researched, to enable quantum-safe communication. A novel method for constructing a KMS is presented that allows arbitrary computing hosts on one site to establish multiple secure communication sessions with the hosts of another site. A key exchange protocol is proposed where symmetric private keys are granted to hosts while satisfying the scalability needs of an enterprise population of users. The KMS operates within a layered architectural style that is able to interoperate with various underlying QKD implementations. Variable levels of security for the host population are enforced through a policy engine. A network layer provides key generation across a network of nodes connected by quantum links. Scheduling and routing functionality allows quantum key material to be relayed across trusted nodes. Optimizations are performed to match the real-time host demand for key material with the capacity afforded by the infrastructure. The result is a flexible and scalable architecture that is suitable for enterprise use and independent of any specific QKD technology.
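The schematic sketch below conveys only the general idea of such a key management service, granting a symmetric session key to a pair of hosts and releasing it to an authorized peer; the class, method names, and use of a local RNG as a stand-in for QKD-derived key material are all assumptions, not the paper's protocol.

```python
# Schematic KMS sketch: symmetric session keys issued per host pair.
import os
import secrets

class KeyManagementService:
    def __init__(self):
        # In the described architecture the key material comes from the QKD
        # network layer; os.urandom is only a stand-in here.
        self._issued = {}

    def request_session_key(self, host_a: str, host_b: str, length: int = 32):
        # Grant a fresh symmetric key for one communication session.
        key_id = secrets.token_hex(8)
        key = os.urandom(length)
        self._issued[key_id] = {"peers": {host_a, host_b}, "key": key}
        return key_id, key

    def fetch_key(self, key_id: str, requesting_host: str) -> bytes:
        # The peer host retrieves the same key by identifier, subject to the
        # policy engine's checks (omitted in this sketch).
        record = self._issued[key_id]
        if requesting_host not in record["peers"]:
            raise PermissionError("host not authorized for this key")
        return record["key"]

kms = KeyManagementService()
key_id, key = kms.request_session_key("host-a.site1", "host-b.site2")
assert kms.fetch_key(key_id, "host-b.site2") == key
```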
Intranet technology in hospital information systems.
Cimino, J J
1997-01-01
The clinical information system architecture at the Columbia-Presbyterian Medical Center in New York is being incorporated into an intranet using Internet and World Wide Web protocols. The result is an Enterprise-Wide Web which provides more flexibility for access to specific patient information and general medical knowledge. Critical aspects of the architecture include a central data repository and a vocabulary server. The new architecture provides ways of displaying patient information in summary, graphical, and multimedia forms. Using customized links called Infobuttons, we provide access to on-line information resources available on the World Wide Web. Our experience to date has raised a number of interesting issues about the use of this technology for health care systems.
Salary Management System for Small and Medium-sized Enterprises
NASA Astrophysics Data System (ADS)
Hao, Zhang; Guangli, Xu; Yuhuan, Zhang; Yilong, Lei
In small and medium-sized enterprises (SMEs), wage entry, calculation, and totalling used to be done manually; the data volume is large, processing is slow, and errors are easy to make, resulting in low efficiency. The main purpose of this paper is to present the basis of a salary management system: establish a scientific database and a computerized payroll system, using the computer to replace much of the past manual work in order to reduce duplicated staff labor and improve working efficiency. The system addresses the actual needs of SMEs through in-depth study and practice of the C/S model, the PowerBuilder 10.0 development tool, databases, and the SQL language, and completes the requirements analysis, database design, and application design and development of a payroll system. The system includes database files for wages, departments, units, and personnel, and provides data management, department management, personnel management, and other functions; query, add, delete, and modify operations are realized through control and management of the database. The system has a reasonable design and relatively complete functions, and testing shows stable operation that meets the basic needs of the work.
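An illustrative fragment of the kind of wage table and salary-total query such a system automates is shown below; the columns and figures are invented, and the paper's actual PowerBuilder/SQL design is not reproduced.

```python
# Invented payroll example: net pay aggregated per department with SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE wage (emp_id INTEGER, dept TEXT, base REAL, bonus REAL, deduction REAL)"
)
cur.executemany(
    "INSERT INTO wage VALUES (?, ?, ?, ?, ?)",
    [(1, "sales", 4500, 300, 120), (2, "sales", 4200, 150, 90), (3, "it", 6000, 0, 200)],
)

# Total payroll per department, net of deductions.
cur.execute(
    "SELECT dept, SUM(base + bonus - deduction) AS dept_total "
    "FROM wage GROUP BY dept ORDER BY dept"
)
print(cur.fetchall())  # [('it', 5800.0), ('sales', 8940.0)]
conn.close()
```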
Privacy-Aware Location Database Service for Granular Queries
NASA Astrophysics Data System (ADS)
Kiyomoto, Shinsaku; Martin, Keith M.; Fukushima, Kazuhide
Future mobile markets are expected to increasingly embrace location-based services. This paper presents a new system architecture for location-based services, which consists of a location database and distributed location anonymizers. The service is privacy-aware in the sense that the location database always maintains a degree of anonymity. The location database service permits three different levels of query and can thus be used to implement a wide range of location-based services. Furthermore, the architecture is scalable and employs simple functions that are similar to those found in general database systems.
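The conceptual sketch below illustrates the idea of answering location queries at different granularities; the three levels shown are invented for illustration and may not match the paper's exact query definitions.

```python
# Conceptual granular-query sketch: coarser levels reveal less location detail.
from dataclasses import dataclass

@dataclass
class Location:
    lat: float
    lon: float
    city: str

# The database indexes users by pseudonyms supplied by the location anonymizers.
LOCATIONS = {"pseudonym-7f3a": Location(35.6812, 139.7671, "Tokyo")}

def query(pseudonym: str, level: int):
    loc = LOCATIONS[pseudonym]
    if level == 1:
        return {"city": loc.city}                                    # coarsest: region only
    if level == 2:
        return {"lat": round(loc.lat, 2), "lon": round(loc.lon, 2)}  # ~1 km granularity
    if level == 3:
        return {"lat": loc.lat, "lon": loc.lon}                      # finest: full coordinates
    raise ValueError("unknown query level")

print(query("pseudonym-7f3a", 1))
print(query("pseudonym-7f3a", 3))
```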
Experience in running relational databases on clustered storage
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, Ruben; Potocky, Miroslav
2015-12-01
For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services such as Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines possible directions of evolution that storage for databases could follow.
caGrid 1.0: A Grid Enterprise Architecture for Cancer Research
Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel
2007-01-01
caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG™. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5. PMID:18693901
Design and Implementation of an Enterprise Internet of Things
NASA Astrophysics Data System (ADS)
Sun, Jing; Zhao, Huiqun; Wang, Ka; Zhang, Houyong; Hu, Gongzhu
Since the notion of the "Internet of Things" (IoT) was introduced about 10 years ago, most IoT research has focused on higher-level issues, such as strategies, architectures, standardization, and enabling technologies, but studies of real cases of IoT are still lacking. In this paper, a real case of the Internet of Things called ZB IoT is introduced. It combines the Service Oriented Architecture (SOA) with EPCglobal standards in the system design, and focuses on the security and extensibility of IoT in its implementation.
2011-08-15
The system must, at a minimum, include a design and configuration framework supporting: Part 1, Net Ready. The system must support net-centric operations... Analyze, evaluate and incorporate the relevant DoD Architecture Framework. 5) Document standards for each task/condition combination. 6) Prepare final FAA... Analyze, evaluate and incorporate the relevant Army Architecture Framework; document standards for each task/condition combination.
An Informatics Approach to Establishing a Sustainable Public Health Community
ERIC Educational Resources Information Center
Kriseman, Jeffrey Michael
2012-01-01
This work involved the analysis of a public health system, and the design, development and deployment of enterprise informatics architecture, and sustainable community methods to address problems with the current public health system. Specifically, assessment of the Nationally Notifiable Disease Surveillance System (NNDSS) was instrumental in…
The New DOD Instruction 5000.02: An Analysis of the Efficiencies to be Gained
2015-06-01
IT, and Information Systems • Post Implementation Review (PIR) • DOD Enterprise Architecture requirement • Cybersecurity Strategy for all IT... dam/rand/pubs/occasional_papers/2010/RAND_OP308.pdf President's Blue Ribbon Commission on Defense Management. (1986). A quest for excellence, final
NASA Astrophysics Data System (ADS)
Duff, Francis; McGarry, Donald; Zasada, David; Foote, Scott
2009-05-01
The MITRE Sensor Layer Prototype is an initial design effort to enable every sensor to help create new capabilities through collaborative data sharing. By making both upstream (raw) and downstream (processed) sensor data visible, users can access the specific level, type, and quantities of data needed to create new data products that were never anticipated by the original designers of the individual sensors. The major characteristic that sets sensor data services apart from typical enterprise services is the volume (on the order of multiple terabytes) of raw data that can be generated by most sensors. Traditional tightly coupled processing approaches extract pre-determined information from the incoming raw sensor data, format it, and send it to predetermined users. The community is rapidly reaching the conclusion that tightly coupled sensor processing loses too much potentially critical information [1]. Hence upstream (raw and partially processed) data must be extracted, rapidly archived, and advertised to the enterprise for unanticipated uses. The authors believe layered sensing net-centric integration can be achieved through a standardize-encapsulate-syndicate-aggregate-manipulate-process paradigm. The Sensor Layer Prototype's technical approach focuses on implementing this proof of concept framework to make sensor data visible, accessible and useful to the enterprise. To achieve this, first, a "raw" data tap between physical transducers associated with sensor arrays and the embedded sensor signal processing hardware and software has been exploited. Second, we encapsulate and expose both raw and partially processed data to the enterprise within the context of a service-oriented architecture. Third, we advertise the presence of multiple types, and multiple layers of data through geographic-enabled Really Simple Syndication (GeoRSS) services. These GeoRSS feeds are aggregated, manipulated, and filtered by a feed aggregator. After filtering these feeds to bring just the type and location of data sought by multiple processes to the attention of each processing station, just that specifically sought data is downloaded to each process application. The Sensor Layer Prototype participated in a proof-of-concept demonstration in April 2008. This event allowed multiple MITRE innovation programs to interact among themselves to demonstrate the ability to couple value-adding but previously unanticipated users to the enterprise. For this event, the Sensor Layer Prototype was used to show data entering the environment in real time. Multiple data types were encapsulated and added to the database via the Sensor Layer Prototype, specifically National Imagery Transmission Format 2.1 (NITF), NATO Standardization Agreement 4607 (STANAG 4607), Cursor-on-Target (CoT), Joint Photographic Experts Group (JPEG), Hierarchical Data Format (HDF5) and several additional sensor file formats describing multiple sensors addressing a common scenario.
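The filtering step described above, bringing only the sought data type and location to each processing station, can be pictured with a small GeoRSS example; the feed layout and category terms below are assumptions, not the prototype's actual format.

```python
import xml.etree.ElementTree as ET

# Sketch of filtering a GeoRSS-Simple feed by data type and bounding box, in
# the spirit of the aggregation/filtering step described above. The feed
# structure and category names are invented for illustration.

NS = {"atom": "http://www.w3.org/2005/Atom",
      "georss": "http://www.georss.org/georss"}

SAMPLE_FEED = """<feed xmlns="http://www.w3.org/2005/Atom"
                       xmlns:georss="http://www.georss.org/georss">
  <entry><title>NITF 2.1 frame</title><category term="NITF"/>
         <georss:point>35.1 -106.6</georss:point></entry>
  <entry><title>GMTI dwell</title><category term="STANAG4607"/>
         <georss:point>48.2 11.5</georss:point></entry>
</feed>"""

def filter_feed(xml_text, wanted_type, bbox):
    """Yield entry titles whose category matches and whose point lies inside bbox."""
    south, west, north, east = bbox
    root = ET.fromstring(xml_text)
    for entry in root.findall("atom:entry", NS):
        term = entry.find("atom:category", NS).get("term")
        lat, lon = map(float, entry.find("georss:point", NS).text.split())
        if term == wanted_type and south <= lat <= north and west <= lon <= east:
            yield entry.find("atom:title", NS).text

print(list(filter_feed(SAMPLE_FEED, "NITF", (30.0, -110.0, 40.0, -100.0))))
```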
NASA Astrophysics Data System (ADS)
Budiman, Kholiq; Prahasto, Toni; Kusumawardhani, Amie
2018-02-01
This research applied an integrated design and development of a planning information system, designed using Enterprise Architecture Planning (EAP). Frequent discrepancies between planning and the realization of the budget, resulting in ineffective planning, were one of the reasons for doing this research. The design using EAP aims to keep development aligned with the strategic direction of the organization. In practice, EAP is carried out in several stages: planning initiation, identification and definition of business functions, followed by architectural design and an implementation plan for the EA that has been built. In addition to the design of the Enterprise Architecture, this research carried out the implementation, which was tested with black-box and white-box methods. Black-box testing was used to test the fundamental aspects of the software, in two forms: User Acceptance Testing and software functionality testing. White-box testing was used to test the effectiveness of the code, using unit testing. The tests conducted on the integrated planning information system with both methods were declared successful. Success in software testing alone, however, cannot establish whether the system makes a difference compared with the situation before its development. To assess the success of the implementation, the authors therefore tested the consistency between planning data and realization data from before use of the information system until after its introduction. This consistency test was done by computing the difference between the planned and the realized times. From the tabulated data, the planning information system that was built reduces the difference between planning time and realization time, which indicates that it can motivate the planning units to realize the budget that has been designed. It also shows that the value chain of the planning information system has implications for budget realization.
Loya, Salvador Rodriguez; Kawamoto, Kensaku; Chatwin, Chris; Huser, Vojtech
2014-12-01
The use of a service-oriented architecture (SOA) has been identified as a promising approach for improving health care by facilitating reliable clinical decision support (CDS). A review of the literature through October 2013 identified 44 articles on this topic. The review suggests that SOA related technologies such as Business Process Model and Notation (BPMN) and Service Component Architecture (SCA) have not been generally adopted to impact health IT systems' performance for better care solutions. Additionally, technologies such as Enterprise Service Bus (ESB) and architectural approaches like Service Choreography have not been generally exploited among researchers and developers. Based on the experience of other industries and our observation of the evolution of SOA, we found that the greater use of these approaches has the potential to significantly impact SOA implementations for CDS.
Loya, Salvador Rodriguez; Kawamoto, Kensaku; Chatwin, Chris; Huser, Vojtech
2017-01-01
The use of a service-oriented architecture (SOA) has been identified as a promising approach for improving health care by facilitating reliable clinical decision support (CDS). A review of the literature through October 2013 identified 44 articles on this topic. The review suggests that SOA related technologies such as Business Process Model and Notation (BPMN) and Service Component Architecture (SCA) have not been generally adopted to impact health IT systems' performance for better care solutions. Additionally, technologies such as Enterprise Service Bus (ESB) and architectural approaches like Service Choreography have not been generally exploited among researchers and developers. Based on the experience of other industries and our observation of the evolution of SOA, we found that the greater use of these approaches has the potential to significantly impact SOA implementations for CDS. PMID:25325996
Integration of hybrid wireless networks in cloud services oriented enterprise information systems
NASA Astrophysics Data System (ADS)
Li, Shancang; Xu, Lida; Wang, Xinheng; Wang, Jue
2012-05-01
This article presents a hybrid wireless network integration scheme for cloud services-based enterprise information systems (EISs). With the emergence of hybrid wireless networks and cloud computing technologies, it is necessary to develop a scheme that can seamlessly integrate these new technologies into existing EISs. By combining hybrid wireless networks and cloud computing in EIS, a new framework is proposed, which includes a frontend layer, a middle layer and backend layers connected to IP EISs. Based on a collaborative architecture, a cloud services management framework and process diagram are presented. As a key feature, the proposed approach integrates access control functionalities within the hybrid framework that provide users with filtered views on available cloud services based on cloud service access requirements and user security credentials. In future work, we will implement the proposed framework over the SwanMesh platform by integrating the UPnP standard into an enterprise information system.
Development of Geospatial Map Based Election Portal
NASA Astrophysics Data System (ADS)
Gupta, A. Kumar Chandra; Kumar, P.; Vasanth Kumar, N.
2014-11-01
Geospatial Delhi Limited (GSDL) is a Govt. of NCT of Delhi company formed in order to provide geospatial information on the National Capital Territory of Delhi (NCTD) to the Government of National Capital Territory of Delhi (GNCTD) and its organs such as DDA, MCD, DJB, the State Election Department, DMRC, etc., for the benefit of all citizens of the GNCTD. This paper describes the development of the Geospatial Map based Election Portal (GMEP) of the NCT of Delhi. The portal has been developed as a map-based spatial decision support system (SDSS) for planning and management by the Department of the Chief Electoral Officer, and as an election-related information search tool (polling station, assembly and parliamentary constituency, etc.) for the citizens of NCTD. The GMEP is based on a client-server architecture model. It has been developed using ArcGIS Server 10.0 with a J2EE front-end in a Microsoft Windows environment. The GMEP is scalable to an enterprise SDSS with an enterprise geodatabase and Virtual Private Network (VPN) connectivity. Spatial data in GMEP include delimited precinct area boundaries of voter areas of polling stations, assembly constituencies, parliamentary constituencies, election districts, and landmark locations of polling stations and basic amenities (police stations, hospitals, schools, fire stations, etc.). GMEP could help achieve not only the desired transparency and ease in the planning process but also facilitate the management of elections through efficient and effective tools. It enables a faster response to changing ground realities in development planning, owing to its built-in scientific approach and open-ended design.
2010-09-01
Front-matter fragments only: SCIL architecture; assertions; list of figures (Figure 1, SCIL architecture); acronyms: LAN (Local Area Network), ODBC (Open Database Connectivity), SCIL (Social-Cultural Content in Language), UMD.
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.
1991-01-01
Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.
Evaluation of relational and NoSQL database architectures to manage genomic annotations.
Schulz, Wade L; Nelson, Brent G; Felker, Donn K; Durant, Thomas J S; Torres, Richard
2016-12-01
While the adoption of next generation sequencing has rapidly expanded, the informatics infrastructure used to manage the data generated by this technology has not kept pace. Historically, relational databases have provided much of the framework for data storage and retrieval. Newer technologies based on NoSQL architectures may provide significant advantages in storage and query efficiency, thereby reducing the cost of data management. But their relative advantage when applied to biomedical data sets, such as genetic data, has not been characterized. To this end, we compared the storage, indexing, and query efficiency of a common relational database (MySQL), a document-oriented NoSQL database (MongoDB), and a relational database with NoSQL support (PostgreSQL). When used to store genomic annotations from the dbSNP database, we found the NoSQL architectures to outperform traditional, relational models for speed of data storage, indexing, and query retrieval in nearly every operation. These findings strongly support the use of novel database technologies to improve the efficiency of data management within the biological sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
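A minimal harness of the kind used for such comparisons might look like the sketch below; it assumes locally running MongoDB and PostgreSQL servers, the pymongo and psycopg2 client libraries, and a simplified annotation record rather than the study's dbSNP schema or its MySQL configuration.

```python
import time
# Requires running MongoDB and PostgreSQL servers plus the pymongo and
# psycopg2 packages; the annotation fields below are simplified assumptions.
import pymongo
import psycopg2

DOCS = [{"rsid": f"rs{i}", "chrom": "1", "pos": 10_000 + i, "gene": f"GENE{i % 50}"}
        for i in range(10_000)]

def bench_mongo():
    coll = pymongo.MongoClient()["bench"]["annotations"]
    coll.drop()
    t0 = time.perf_counter(); coll.insert_many([dict(d) for d in DOCS])
    t1 = time.perf_counter(); coll.create_index("rsid")
    t2 = time.perf_counter(); coll.find_one({"rsid": "rs5000"})
    t3 = time.perf_counter()
    return t1 - t0, t2 - t1, t3 - t2          # store, index, query times

def bench_postgres():
    conn = psycopg2.connect(dbname="bench"); cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS annotations;"
                "CREATE TABLE annotations (rsid text, chrom text, pos int, gene text)")
    t0 = time.perf_counter()
    cur.executemany("INSERT INTO annotations VALUES (%(rsid)s, %(chrom)s, %(pos)s, %(gene)s)", DOCS)
    t1 = time.perf_counter(); cur.execute("CREATE INDEX ON annotations (rsid)")
    t2 = time.perf_counter(); cur.execute("SELECT * FROM annotations WHERE rsid = 'rs5000'"); cur.fetchone()
    t3 = time.perf_counter(); conn.commit(); conn.close()
    return t1 - t0, t2 - t1, t3 - t2

print("mongo (store, index, query):", bench_mongo())
print("postgres (store, index, query):", bench_postgres())
```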
Ukai, Hirohiko; Ohashi, Fumiko; Samoto, Hajime; Fukui, Yoshinari; Okamoto, Satoru; Moriguchi, Jiro; Ezaki, Takafumi; Takada, Shiro; Ikeda, Masayuki
2006-04-01
The present study was initiated to examine the relationship between workplace concentrations and the estimated highest concentrations in solvent workplaces (SWPs), with special reference to enterprise size and types of solvent work. Results of a survey conducted in 1010 SWPs in 156 enterprises were taken as a database. Workplace air was sampled at > or = 5 crosses in each SWP following a grid sampling strategy. An additional air sample was grab-sampled at the site where the worker's exposure was estimated to be highest (estimated highest concentration, or EHC). The samples were analyzed for 47 solvents designated by regulation, and solvent concentrations in each sample were summed up by use of the additiveness formula. From the workplace concentrations at > or = 5 points, the geometric mean and geometric standard deviation were calculated as the representative workplace concentration (RWC) and the indicator of variation in workplace concentration (VWC), respectively. Comparison between RWC and EHC in the total of 1010 SWPs showed that EHC was 1.2 (in large enterprises with >300 employees) to 1.7 times [in small to medium (SM) enterprises with < or = 300 employees] greater than RWC. When SWPs were classified into SM enterprises and large enterprises, both RWC and EHC were significantly higher in SM enterprises than in large enterprises. Further comparison by type of solvent work showed that the difference was more marked in printing, surface coating and degreasing/cleaning/wiping SWPs, whereas it was less remarkable in painting SWPs and essentially nil in testing/research laboratories. In conclusion, the present observations, discussed in reference to previous publications, suggest that RWC, EHC and the ratio of EHC/RWC vary substantially among different types of solvent work as well as with enterprise size, and are typically higher in printing SWPs in SM enterprises.
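The RWC and VWC statistics described above are straightforward to reproduce; the short worked example below uses invented sample values, not data from the study.

```python
import math

# Illustrative calculation of the representative workplace concentration (RWC,
# geometric mean) and the variation indicator (VWC, geometric standard
# deviation) from >= 5 grid samples; the numbers are made up, not study data.

samples_ppm = [12.0, 18.0, 9.5, 22.0, 15.0]   # grid samples in one SWP
ehc_ppm = 30.0                                # grab sample at the presumed worst spot

logs = [math.log(c) for c in samples_ppm]
mean_log = sum(logs) / len(logs)
rwc = math.exp(mean_log)                                              # geometric mean
gsd = math.exp(math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (len(logs) - 1)))

print(f"RWC = {rwc:.1f} ppm, VWC (GSD) = {gsd:.2f}, EHC/RWC = {ehc_ppm / rwc:.2f}")
```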
TEAM (Technologies Enabling Agile Manufacturing) shop floor control requirements guide: Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-03-28
TEAM will create a shop floor control system (SFC) to link the pre-production planning to shop floor execution. SFC must meet the requirements of a multi-facility corporation, where control must be maintained between co-located facilities down to individual workstations within each facility. SFC must also meet the requirements of a small corporation, where there may only be one small facility. A hierarchical architecture is required to meet these diverse needs. The hierarchy contains the following levels: Enterprise, Factory, Cell, Station, and Equipment. SFC is focused on the top three levels. Each level of the hierarchy is divided into three basic functions: Scheduler, Dispatcher, and Monitor. The requirements of each function depend on the hierarchical level in which it is to be used. For example, the scheduler at the Enterprise level must allocate production to individual factories and assign due-dates; the scheduler at the Cell level must provide detailed start and stop times of individual operations. Finally the system shall have the following features: distributed and open-architecture. Open architecture software is required in order that the appropriate technology be used at each level of the SFC hierarchy, and even at different instances within the same hierarchical level (for example, Factory A uses discrete-event simulation scheduling software, and Factory B uses an optimization-based scheduler). A distributed implementation is required to reduce the computational burden of the overall system, and allow for localized control. A distributed, open-architecture implementation will also require standards for communication between hierarchical levels.
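One way to picture the level-dependent Scheduler/Dispatcher/Monitor split and the pluggable, open-architecture scheduling described above is the small sketch below; class names, fields, and the scheduling logic are assumptions, not part of the TEAM specification.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of a hierarchical, open-architecture shop floor control
# layout; all names and logic are invented for illustration.

class Scheduler(ABC):
    @abstractmethod
    def schedule(self, work_orders):
        """Return (work order id, assignment) pairs appropriate to this level."""

class EnterpriseScheduler(Scheduler):
    def schedule(self, work_orders):
        # Enterprise level: allocate production to factories and assign due dates.
        return [(wo["id"], {"factory": "Factory-A", "due": wo["due"]}) for wo in work_orders]

class CellScheduler(Scheduler):
    def schedule(self, work_orders):
        # Cell level: detailed start and stop times for individual operations.
        return [(wo["id"], {"start_min": 0, "stop_min": wo["run_minutes"]}) for wo in work_orders]

class ControlLevel:
    """One level of the Enterprise/Factory/Cell hierarchy, with a pluggable scheduler."""
    def __init__(self, name, scheduler: Scheduler, children=()):
        self.name, self.scheduler, self.children = name, scheduler, list(children)

    def dispatch(self, work_orders):
        plan = {self.name: self.scheduler.schedule(work_orders)}
        for child in self.children:            # pass the work down the hierarchy
            plan.update(child.dispatch(work_orders))
        return plan

    def monitor(self):
        return {"level": self.name, "children": [c.monitor() for c in self.children]}

cell = ControlLevel("Cell-1", CellScheduler())
enterprise = ControlLevel("Enterprise", EnterpriseScheduler(), [cell])
print(enterprise.dispatch([{"id": 1, "due": "2025-06-01", "run_minutes": 45}]))
```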
Bønes, Erlend; Hasvold, Per; Henriksen, Eva; Strandenaes, Thomas
2007-09-01
Instant messaging (IM) is suited for immediate communication because messages are delivered almost in real time. Results from studies of IM use in enterprise work settings make us believe that IM based services may prove useful also within the healthcare sector. However, today's public instant messaging services do not have the level of information security required for adoption of IM in healthcare. We proposed MedIMob, our own architecture for a secure enterprise IM service for use in healthcare. MedIMob supports IM clients on mobile devices in addition to desktop based clients. Security threats were identified in a risk analysis of the MedIMob architecture. The risk analysis process consists of context identification, threat identification, analysis of consequences and likelihood, risk evaluation, and proposals for risk treatment. The risk analysis revealed a number of potential threats to the information security of a service like this. Many of the identified threats are general when dealing with mobile devices and sensitive data; others are threats which are more specific to our service and architecture. Individual threats identified in the risks analysis are discussed and possible counter measures presented. The risk analysis showed that most of the proposed risk treatment measures must be implemented to obtain an acceptable risk level; among others blocking much of the additional functionality of the smartphone. To conclude on the usefulness of this IM service, it will be evaluated in a trial study of the human-computer interaction. Further work also includes an improved design of the proposed MedIMob architecture. 2006 Elsevier Ireland Ltd
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Langer, Steven G.; Martin, Kelly P.
1999-07-01
The purpose of this paper is to integrate multiple DICOM image webservers into the currently existing enterprise-wide web-browsable electronic medical record. Over the last six years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A character cell-based view of this data, called the Mini Medical Record (MMR), has been available for four years. MINDscape, unlike the text-based MMR, provides a platform-independent, dynamic, web browser view of the MIND database that can be easily linked with medical knowledge resources on the network, like PubMed and the Federated Drug Reference. There are over 10,000 MINDscape user accounts at the University of Washington Academic Medical Centers. The weekday average number of hits to MINDscape is 35,302 and the weekday average number of individual users is 1252. DICOM images from multiple webservers are now being viewed through the MINDscape electronic medical record.
eHealth integration and interoperability issues: towards a solution through enterprise architecture.
Adenuga, Olugbenga A; Kekwaletswe, Ray M; Coleman, Alfred
2015-01-01
Investments in healthcare information and communication technology (ICT) and health information systems (HIS) continue to increase. This is creating immense pressure on healthcare ICT and HIS to deliver and show the significance of such investments in technology. This study finds that integration and interoperability contribute largely to the failure of ICT and HIS investments in healthcare, resulting in the need for a healthcare architecture for eHealth. The study proposes an eHealth architectural model that accommodates requirements based on healthcare need, system, implementer, and hardware requirements. The model is adaptable and examines the developers' and users' views, as such systems hold high hopes for their potential to change traditional organizational design, intelligence, and decision-making.
Risk-Significant Adverse Condition Awareness Strengthens Assurance of Fault Management Systems
NASA Technical Reports Server (NTRS)
Fitz, Rhonda
2017-01-01
As spaceflight systems increase in complexity, Fault Management (FM) systems are ranked high in risk-based assessment of software criticality, emphasizing the importance of establishing highly competent domain expertise to provide assurance. Adverse conditions (ACs) and specific vulnerabilities encountered by safety- and mission-critical software systems have been identified through efforts to reduce the risk posture of software-intensive NASA missions. Acknowledgement of potential off-nominal conditions and analysis to determine software system resiliency are important aspects of hazard analysis and FM. A key component of assuring FM is an assessment of how well software addresses susceptibility to failure through consideration of ACs. Focus on significant risk predicted through experienced analysis conducted at the NASA Independent Verification & Validation (IV&V) Program enables the scoping of effective assurance strategies with regard to overall asset protection of complex spaceflight as well as ground systems. Research efforts sponsored by NASA's Office of Safety and Mission Assurance (OSMA) defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain/component, causal fault, and other key characteristics. Vulnerability in off-nominal situations, architectural design weaknesses, and unexpected or undesirable system behaviors in reaction to faults are curtailed with the awareness of ACs and risk-significant scenarios modeled for analysts through this database. Integration within the Enterprise Architecture at NASA IV&V enables interfacing with other tools and datasets, technical support, and accessibility across the Agency. This paper discusses the development of an improved workflow process utilizing this database for adaptive, risk-informed FM assurance that critical software systems will safely and securely protect against faults and respond to ACs in order to achieve successful missions.
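The repository's query capability by project, mission type, domain/component, and causal fault can be illustrated with a small relational sketch; the table layout and column names are assumptions for illustration, not the actual prototype schema.

```python
import sqlite3

# Sketch of the kind of query an adverse-condition (AC) repository supports;
# the schema, records, and values are invented for illustration only.

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE adverse_conditions (
                    ac_id INTEGER PRIMARY KEY,
                    project TEXT, mission_type TEXT,
                    domain_component TEXT, causal_fault TEXT,
                    description TEXT)""")
conn.executemany(
    "INSERT INTO adverse_conditions VALUES (?, ?, ?, ?, ?, ?)",
    [(1, "MissionX", "deep space", "FM/attitude control", "sensor dropout",
      "Loss of star tracker data during safe-mode entry"),
     (2, "MissionY", "earth orbiting", "FM/power", "latched relay",
      "Unexpected battery isolation after fault response")])

# Example: all ACs for deep-space missions involving the fault-management domain.
rows = conn.execute("""SELECT project, causal_fault, description
                       FROM adverse_conditions
                       WHERE mission_type = ? AND domain_component LIKE ?""",
                    ("deep space", "FM/%")).fetchall()
print(rows)
```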
77 FR 75361 - 2012-2014 Enterprise Housing Goals
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-20
... 20, 2012. FOR FURTHER INFORMATION CONTACT: Paul Manchester, Principal Economist, (202) 649-3115; Ian..., Senior Economist, (202) 649-3117, Office of National Mortgage Database; Kevin Sheehan, Assistant General...
Database of Industrial Technological Information in Kanagawa : Networks for Technology Activities
NASA Astrophysics Data System (ADS)
Saito, Akira; Shindo, Tadashi
This system is a database that requires participation by its members and whose premise is that all the data in it are open. Aiming at free technological cooperation and exchange among industries, it was constructed by Kanagawa Prefecture in collaboration with enterprises located in it. The input data consist of 36 items, such as major product, special and advantageous technology, technology sought for cooperation, and facility and equipment, which technologically characterize each enterprise. They are expressed in up to 2,000 characters and written in natural language including Kanji, except for some coded items. The 24 search items are accessed by natural language, so that in addition to interactive searching procedures, including menu-type ones, extensive searching is possible. The information service started in October 1986, covering data from 2,000 enterprises.
76 FR 26235 - EPAAR Clause for Compliance with EPA Policies for Information Resources Management
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-06
... Architecture) and (4)(Earned Value Management) is deleted. 3. Paragraph (b)(2), Groundwater Program Information... substantial number of small entities. Small entities include small businesses, small organizations, and small... less than 50,000; and (3) a small organization that is any not-for-profit enterprise which is...
NASA's Use of Commercial Satellite Systems: Concepts and Challenges
NASA Technical Reports Server (NTRS)
Budinger, James M.
1998-01-01
Lewis Research Center's Space Communications Program has a responsibility to investigate, plan for, and demonstrate how NASA Enterprises can use advanced commercial communications services and technologies to satisfy their missions' space communications needs. This presentation looks at the features and challenges of alternative hardware system architecture concepts for providing specific categories of communications services.
ERIC Educational Resources Information Center
Parker, Marilyn M.
1993-01-01
Discusses what Office Information Systems and other Information Technology organizations, in concert with the business organizations they support, must do to exploit the opportunities and support the transition to the next generation enterprise: its business processes, its organizations and architectures, and its strategies. (Author/JOW)
Consolidated Afloat Networks and Enterprise Services (CANES)
2016-03-01
... Executive; DoD - Department of Defense; DoDAF - DoD Architecture Framework; FD - Full Deployment; FDD - Full Deployment Decision; FY - Fiscal Year; IA... Full Deployment Decision: Jun 2015 / Oct 2015; Full Deployment (note 1): Sep 2024 / Sep 2023. 1/ Per the FDD ADM approved by USD(AT&L) on October 13, 2015, the FD date was
Integrated Air Surveillance Concept of Operations
2011-11-01
information, intelligence, weather data, and other situational awareness-related information. 4.2.4 Shared Services. Automated processing of sensor and... other surveillance information will occur through shared services, accessible through an enterprise network infrastructure, that provide for collecting... also be provided, such as information discovery and translation. The IS architecture effort will identify specific shared services. Shared
XDS-I outsourcing proxy: ensuring confidentiality while preserving interoperability.
Ribeiro, Luís S; Viana-Ferreira, Carlos; Oliveira, José Luís; Costa, Carlos
2014-07-01
The interoperability of services and the sharing of health data have been a continuous goal for health professionals, patients, institutions, and policy makers. However, several issues have been hindering this goal, such as incompatible implementations of standards (e.g., HL7, DICOM), multiple ontologies, and security constraints. Cross-enterprise document sharing (XDS) workflows were proposed by Integrating the Healthcare Enterprise (IHE) to address current limitations in exchanging clinical data among organizations. To ensure data protection, XDS actors must be placed in trustworthy domains, which are normally inside such institutions. However, due to rapidly growing IT requirements, the outsourcing of resources in the Cloud is becoming very appealing. This paper presents a software proxy that enables the outsourcing of XDS architectural parts while preserving the interoperability, confidentiality, and searchability of clinical information. A key component in our architecture is a new searchable encryption (SE) scheme-Posterior Playfair Searchable Encryption (PPSE)-which, besides keeping the same confidentiality levels of the stored data, hides the search patterns to the adversary, bringing improvements when compared to the remaining practical state-of-the-art SE schemes.
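PPSE itself cannot be reproduced from this abstract; as a loose, simplified illustration of the searchable-encryption idea behind the proxy, the sketch below encrypts documents and indexes them under deterministic HMAC keyword tokens (it requires the third-party cryptography package). Unlike PPSE, this toy scheme does not hide search patterns; it only shows how an untrusted registry can answer keyword queries over encrypted metadata.

```python
import hmac, hashlib
from cryptography.fernet import Fernet

# Simplified searchable-encryption illustration for an outsourced document
# registry. NOT the paper's PPSE scheme; names and keys are assumptions.

class OutsourcingProxy:
    def __init__(self):
        self.fernet = Fernet(Fernet.generate_key())
        self.token_key = b"institution-secret-token-key"     # assumption

    def _token(self, keyword: str) -> str:
        return hmac.new(self.token_key, keyword.lower().encode(),
                        hashlib.sha256).hexdigest()

    def protect(self, document: bytes, keywords):
        """Encrypt the document and derive opaque search tokens for the cloud index."""
        return self.fernet.encrypt(document), [self._token(k) for k in keywords]

    def query_token(self, keyword: str) -> str:
        return self._token(keyword)

class CloudRegistry:
    """Untrusted store: sees only ciphertexts and opaque tokens."""
    def __init__(self):
        self.index = []
    def register(self, ciphertext, tokens):
        self.index.append((ciphertext, set(tokens)))
    def search(self, token):
        return [c for c, t in self.index if token in t]

proxy, cloud = OutsourcingProxy(), CloudRegistry()
cloud.register(*proxy.protect(b"<ImagingStudy .../>", ["CT", "thorax"]))
hits = cloud.search(proxy.query_token("thorax"))
print(proxy.fernet.decrypt(hits[0]))
```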
NASA Astrophysics Data System (ADS)
Evans, J. D.; Hao, W.; Chettri, S. R.
2014-12-01
Disaster risk management has grown to rely on earth observations, multi-source data analysis, numerical modeling, and interagency information sharing. The practice and outcomes of disaster risk management will likely undergo further change as several emerging earth science technologies come of age: mobile devices; location-based services; ubiquitous sensors; drones; small satellites; satellite direct readout; Big Data analytics; cloud computing; Web services for predictive modeling, semantic reconciliation, and collaboration; and many others. Integrating these new technologies well requires developing and adapting them to meet current needs; but also rethinking current practice to draw on new capabilities to reach additional objectives. This requires a holistic view of the disaster risk management enterprise and of the analytical or operational capabilities afforded by these technologies. One helpful tool for this assessment, the GEOSS Architecture for the Use of Remote Sensing Products in Disaster Management and Risk Assessment (Evans & Moe, 2013), considers all phases of the disaster risk management lifecycle for a comprehensive set of natural hazard types, and outlines common clusters of activities and their use of information and computation resources. We are using these architectural views, together with insights from current practice, to highlight effective, interrelated roles for emerging earth science technologies in disaster risk management. These roles may be helpful in creating roadmaps for research and development investment at national and international levels.
Definition of architectural ideotypes for good yield capacity in Coffea canephora.
Cilas, Christian; Bar-Hen, Avner; Montagnon, Christophe; Godin, Christophe
2006-03-01
Yield capacity is a target trait for selection of agronomically desirable lines; it is preferred to simple yields recorded over different harvests. Yield capacity is derived using certain architectural parameters used to measure the components of yield capacity. Observation protocols for describing architecture and yield capacity were applied to six clones of coffee trees (Coffea canephora) in a comparative trial. The observations were used to establish architectural databases, which were explored using AMAPmod, a software dedicated to the analyses of plant architecture data. The traits extracted from the database were used to identify architectural parameters for predicting the yield of the plant material studied. Architectural traits are highly heritable and some display strong genetic correlations with cumulated yield. In particular, the proportion of fruiting nodes at plagiotropic level 15 counting from the top of the tree proved to be a good predictor of yield over two fruiting cycles.
Designing and application of SAN extension interface based on CWDM
NASA Astrophysics Data System (ADS)
Qin, Leihua; Yu, Shengsheng; Zhou, Jingli
2005-11-01
As Fibre Channel (FC) becomes the protocol of choice within corporate data centers, enterprises are increasingly deploying SANs in their data centers. In order to mitigate the risk of losing data and improve the availability of data, more and more enterprises are adopting storage extension technologies to replicate their business-critical data to a secondary site. Transmitting this information over distance requires a carrier-grade environment with zero data loss, scalable throughput, low jitter, high security and the ability to travel long distances. To address these business requirements, there are three basic architectures for storage extension: Storage over Internet Protocol, Storage over Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) and Storage over Dense Wavelength Division Multiplexing (DWDM). Each approach varies in functionality, complexity, cost, scalability, security, availability, predictable behavior (bandwidth, jitter, latency) and multiple-carrier limitations. Compared with these connectivity technologies, Coarse Wavelength Division Multiplexing (CWDM) is a simplified, low-cost and high-performance connectivity solution for enterprises deploying storage extension. In this paper, we design a storage extension connection over CWDM and test its electrical characteristics and the random read and write performance of a disk array through the CWDM connection; the test results show that the performance of the connection over CWDM is acceptable. Furthermore, we propose three kinds of network architecture for SAN extension based on the CWDM interface. Finally, the credit-based flow control mechanism of FC and the relationship between credits and extension distance are analyzed.
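The relationship between FC buffer-to-buffer credits and extension distance can be illustrated with a rule-of-thumb estimate; the line rate, frame size, and propagation delay used below are common assumptions, not figures taken from the paper.

```python
# Rule-of-thumb estimate of buffer-to-buffer (BB) credits needed to keep an
# extended FC link full; the parameter values are assumptions for illustration.

line_rate_bps   = 2.125e9          # 2 Gb/s Fibre Channel line rate
frame_bits      = 2148 * 8         # roughly a maximum-size FC frame
prop_delay_s_km = 5e-6             # ~5 microseconds per km in optical fibre
distance_km     = 100              # CWDM extension distance under consideration

round_trip_s   = 2 * distance_km * prop_delay_s_km
frame_time_s   = frame_bits / line_rate_bps
credits_needed = round_trip_s / frame_time_s

print(f"frames in flight over {distance_km} km: about {credits_needed:.0f} BB credits")
# Roughly one credit per km at 2 Gb/s with full-size frames, close to the usual rule of thumb.
```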
Architectural Implications for Spatial Object Association Algorithms*
Kumar, Vijay S.; Kurc, Tahsin; Saltz, Joel; Abdulla, Ghaleb; Kohn, Scott R.; Matarazzo, Celeste
2013-01-01
Spatial object association, also referred to as crossmatch of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST). PMID:25692244
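The core operation being evaluated, matching objects within a small angular radius, can be sketched in a few lines; the in-memory KD-tree version below only illustrates the crossmatch itself and is not one of the database-resident implementations (Netezza, MySQL Cluster, replicated instances) studied in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

# Minimal positional crossmatch: objects from two catalogues are matched if
# they lie within a small angular radius. Catalogue values are invented.

def to_unit_vectors(ra_deg, dec_deg):
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.column_stack((np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)))

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec=1.0):
    """Return index pairs (i, j) of objects separated by less than radius_arcsec."""
    xyz1, xyz2 = to_unit_vectors(ra1, dec1), to_unit_vectors(ra2, dec2)
    # Chord length on the unit sphere corresponding to the angular radius.
    chord = 2.0 * np.sin(np.radians(radius_arcsec / 3600.0) / 2.0)
    tree = cKDTree(xyz2)
    return [(i, j) for i, js in enumerate(tree.query_ball_point(xyz1, chord))
            for j in js]

ra1, dec1 = np.array([10.684, 56.75]), np.array([41.269, 24.12])
ra2, dec2 = np.array([10.6841, 180.0]), np.array([41.2690, -60.0])
print(crossmatch(ra1, dec1, ra2, dec2))      # -> [(0, 0)]
```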
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.
2012-12-01
At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently the open community version of MySQL and a single-instance Oracle database server are offered. This article describes the technology approach adopted to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.
Lee, Jasper; Zhang, Jianguo; Park, Ryan; Dagliyan, Grant; Liu, Brent; Huang, H K
2012-07-01
A Molecular Imaging Data Grid (MIDG) was developed to address current informatics challenges in archival, sharing, search, and distribution of preclinical imaging studies between animal imaging facilities and investigator sites. This manuscript presents a 2nd generation MIDG replacing the Globus Toolkit with a new system architecture that implements the IHE XDS-i integration profile. Implementation and evaluation were conducted using a 3-site interdisciplinary test-bed at the University of Southern California. The 2nd generation MIDG design architecture replaces the initial design's Globus Toolkit with dedicated web services and XML-based messaging for dedicated management and delivery of multi-modality DICOM imaging datasets. The Cross-enterprise Document Sharing for Imaging (XDS-i) integration profile from the field of enterprise radiology informatics was adopted into the MIDG design because streamlined image registration, management, and distribution dataflow are likewise needed in preclinical imaging informatics systems as in enterprise PACS application. Implementation of the MIDG is demonstrated at the University of Southern California Molecular Imaging Center (MIC) and two other sites with specified hardware, software, and network bandwidth. Evaluation of the MIDG involves data upload, download, and fault-tolerance testing scenarios using multi-modality animal imaging datasets collected at the USC Molecular Imaging Center. The upload, download, and fault-tolerance tests of the MIDG were performed multiple times using 12 collected animal study datasets. Upload and download times demonstrated reproducibility and improved real-world performance. Fault-tolerance tests showed that automated failover between Grid Node Servers has minimal impact on normal download times. Building upon the 1st generation concepts and experiences, the 2nd generation MIDG system improves accessibility of disparate animal-model molecular imaging datasets to users outside a molecular imaging facility's LAN using a new architecture, dataflow, and dedicated DICOM-based management web services. Productivity and efficiency of preclinical research for translational sciences investigators has been further streamlined for multi-center study data registration, management, and distribution.
HYDRA: A Middleware-Oriented Integrated Architecture for e-Procurement in Supply Chains
NASA Astrophysics Data System (ADS)
Alor-Hernandez, Giner; Aguilar-Lasserre, Alberto; Juarez-Martinez, Ulises; Posada-Gomez, Ruben; Cortes-Robles, Guillermo; Garcia-Martinez, Mario Alberto; Gomez-Berbis, Juan Miguel; Rodriguez-Gonzalez, Alejandro
The Service-Oriented Architecture (SOA) development paradigm has emerged to improve the critical issues of creating, modifying and extending solutions for business processes integration, incorporating process automation and automated exchange of information between organizations. Web services technology follows the SOA's principles for developing and deploying applications. Besides, Web services are considered as the platform for SOA, for both intra- and inter-enterprise communication. However, an SOA does not incorporate information about occurring events into business processes, which are the main features of supply chain management. These events and information delivery are addressed in an Event-Driven Architecture (EDA). Taking this into account, we propose a middleware-oriented integrated architecture that offers a brokering service for the procurement of products in a Supply Chain Management (SCM) scenario. As salient contributions, our system provides a hybrid architecture combining features of both SOA and EDA and a set of mechanisms for business processes pattern management, monitoring based on UML sequence diagrams, Web services-based management, event publish/subscription and reliable messaging service.
NASA Astrophysics Data System (ADS)
Valtonen, Katariina; Leppänen, Mauri
Governments worldwide are concerned for efficient production of services to customers. To improve quality of services and to make service production more efficient, information and communication technology (ICT) is largely exploited in public administration (PA). Succeeding in this exploitation calls for large-scale planning which embraces issues from strategic to technological level. In this planning the notion of enterprise architecture (EA) is commonly applied. One of the sub-architectures of EA is business architecture (BA). BA planning is challenging in PA due to a large number of stakeholders, a wide set of customers, and solid and hierarchical structures of organizations. To support EA planning in Finland, a project to engineer a government EA (GEA) method was launched. In this chapter, we analyze the discussions and outputs of the project workshops and reflect emerged issues on current e-government literature. We bring forth insights into and suggestions for government BA and its development.
A Messaging Infrastructure for WLCG
NASA Astrophysics Data System (ADS)
Casey, James; Cons, Lionel; Lapka, Wojciech; Paladin, Massimo; Skaburskas, Konstantin
2011-12-01
During the EGEE-III project operational tools such as SAM, Nagios, Gridview, the regional Dashboard and GGUS moved to a communication architecture based on ActiveMQ, an open-source enterprise messaging solution. LHC experiments, in particular ATLAS, developed prototypes of systems using the same messaging infrastructure, validating the system for their use-cases. In this paper we describe the WLCG messaging use cases and outline an improved messaging architecture based on the experience gained during the EGEE-III period. We show how this provides a solid basis for many applications, including the grid middleware, to improve their resilience and reliability.
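The publish/subscribe pattern underlying this messaging infrastructure can be sketched with a STOMP client talking to an ActiveMQ broker; the broker host, credentials, destination name, and message payload below are assumptions, not the actual WLCG configuration.

```python
import time
import stomp   # stomp.py client; an ActiveMQ broker is assumed to be reachable

# Sketch of publish/subscribe over an ActiveMQ topic. The listener signature
# (on_message taking a frame) follows recent stomp.py versions.

class ResultListener(stomp.ConnectionListener):
    def on_message(self, frame):
        print("monitoring result received:", frame.body)

conn = stomp.Connection([("msg-broker.example.org", 61613)])
conn.set_listener("", ResultListener())
conn.connect("probe_user", "probe_pass", wait=True)

# A site probe (e.g. a Nagios check) publishes its result to a topic; any
# subscribed consumer (dashboard, experiment prototype) receives a copy.
conn.subscribe(destination="/topic/grid.probe.results", id=1, ack="auto")
conn.send(destination="/topic/grid.probe.results",
          body='{"site": "CERN-PROD", "metric": "srm-put", "status": "OK"}')

time.sleep(2)      # give the broker time to deliver before disconnecting
conn.disconnect()
```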
ERIC Educational Resources Information Center
Kreie, Jennifer; Hashemi, Shohreh
2012-01-01
Data is a vital resource for businesses; therefore, it is important for businesses to manage and use their data effectively. Because of this, businesses value college graduates with an understanding of and hands-on experience working with databases, data warehouses and data analysis theories and tools. Faculty in many business disciplines try to…
[Workers exposed to lung cancer risk: an estimate using the ISPESL database of enterprises].
Scarselli, Alberto; Marinaccio, Alessandro; Nesti, Massimo
2007-01-01
Lung cancer is the leading cause of cancer death among males in industrialized countries and is increasing among females. In 2001 a uniform and standardised list of occupations or jobs known or suspected to be associated with lung cancer was prepared. The aim of this study is to set up a database of Italian enterprises corresponding to activities related to this list and to assess the number of potentially exposed workers. A detailed and unique list of codes, referring to the Ateco91 ISTAT classification and excluding the State Railways and the public administration sectors, was developed. The list is divided into two categories: occupations or jobs definitely entailing carcinogenic risk and those which probably/possibly entail a risk. Firms were selected from the ISPESL database of enterprises and the number of workers was estimated on the basis of this list. Setting: Italy. Main outcome measures: assessment of the number of workers potentially exposed to lung cancer risk and creation of a register of the firms involved. The number of potentially exposed blue-collar workers in the industrial and services sectors related to lung cancer risk is 650,886, and the number of firms censused in Italy is 117,006. The corresponding figures in the agriculture sector are 163,340 and 84,839. This type of evaluation, being based on administrative sources rather than on direct measures of exposure, certainly includes an overestimation of exposed workers. The lists, based on a standard classification, allow for the creation of databases which can be used to monitor occupational exposure to carcinogens and to increase comparability between epidemiologic studies based on job-exposure matrices.
NASA Technical Reports Server (NTRS)
Stensrud, Kjell C.; Hamm, Dustin
2007-01-01
NASA's Johnson Space Center (JSC) / Flight Design and Dynamics Division (DM) has prototyped the use of Open Source middleware technology for building its next generation spacecraft mission support system. This is part of a larger initiative to use open standards and open source software as building blocks for future mission and safety critical systems. JSC is hoping to leverage standardized enterprise architectures, such as Java EE, so that its internal software development efforts can be focused on the core aspects of their problem domain. This presentation will outline the design and implementation of the Trajectory system and the lessons learned during the exercise.
Nursing informatics, outcomes, and quality improvement.
Charters, Kathleen G
2003-08-01
Nursing informatics actively supports nursing by providing standard language systems, databases, decision support, readily accessible research results, and technology assessments. Through normalized datasets spanning an entire enterprise or other large demographic, nursing informatics tools support improvement of healthcare by answering questions about patient outcomes and quality improvement on an enterprise scale, and by providing documentation for business process definition, business process engineering, and strategic planning. Nursing informatics tools provide a way for advanced practice nurses to examine their practice and the effect of their actions on patient outcomes. Analysis of patient outcomes may lead to initiatives for quality improvement. Supported by nursing informatics tools, successful advance practice nurses leverage their quality improvement initiatives against the enterprise strategic plan to gain leadership support and resources.
A mobile information management system used in textile enterprises
NASA Astrophysics Data System (ADS)
Huang, C.-R.; Yu, W.-D.
2008-02-01
The mobile information management system (MIMS) for textile enterprises is based on Microsoft Visual Studio .NET 2003 Server, Microsoft SQL Server 2000, the C++ language, and wireless application protocol (WAP) and wireless markup language (WML) technology. The portable MIMS is composed of a three-layer structure, i.e. a presentation (showing) layer, an operating layer, and a data access (data visiting) layer, corresponding to the port-link module, the processing module, and the database module. By using the MIMS, not only does information exchange become more convenient and easier, but compatibility between a large information capacity and a micro cell phone, as well as functional expandability in operation and design, can be realized by means of built-in units. The developed MIMS is suitable for use in textile enterprises.
Incorporating Spatial Data into Enterprise Applications
NASA Astrophysics Data System (ADS)
Akiki, Pierre; Maalouf, Hoda
The main goal of this chapter is to discuss the usage of spatial data within enterprise as well as smaller line-of-business applications. In particular, this chapter proposes new methodologies for storing and manipulating vague spatial data and provides methods for visualizing both crisp and vague spatial data. It also provides a comparison between different types of spatial data, mainly 2D crisp and vague spatial data, and their respective fields of application. Additionally, it compares existing commercial relational database management systems, which are the most widely used with enterprise applications, and discusses their deficiencies in terms of spatial data support. A new spatial extension package called Spatial Extensions (SPEX) is provided in this chapter and is tested on a software prototype.
ERIC Educational Resources Information Center
Engelbrecht, Jeffrey C.
2003-01-01
Delivering content to distant users located in dispersed networks, separated by firewalls and different web domains requires extensive customization and integration. This article outlines some of the problems of implementing the Sharable Content Object Reference Model (SCORM) in the Marine Corps' Distance Learning System (MarineNet) and extends…
Uniforming information management in Finnish Social Welfare.
Laaksonen, Maarit; Kärki, Jarmo; Ailio, Erja
2012-01-01
This paper describes the phases and methods used in the National project for IT in Social Services in Finland (Tikesos). The main goals of Tikesos were to unify the client information systems in social services, to develop electronic documentation and to produce specifications for nationally organized electronic archive. The method of Enterprise Architecture was largely used in the project.
Helix: A Self-Regenerative Architecture for the Incorruptible Enterprise
2012-11-13
set of available applications. 15 DNADroid: Detection of plagiarized Android applications. DNADroid... Technical News, 2/26/07 and UNM Today, 2/27/07. "Professor goes to war," front page lead article
ERIC Educational Resources Information Center
Chaudhry, Hina
2013-01-01
This study is a part of the smart grid initiative providing electric vehicle charging infrastructure. It is a refueling structure, an energy generating photovoltaic system and charge point electric vehicle charging station. The system will utilize advanced design and technology allowing electricity to flow from the site's normal electric service…
The Exchange Data Communication System based on Centralized Database for the Meat Industry
NASA Astrophysics Data System (ADS)
Kobayashi, Yuichi; Taniguchi, Yoji; Terada, Shuji; Komoda, Norihisa
We propose applying an EDI system that is based on a centralized database and supports conversion of code data to the meat industry. This system makes it possible to share exchange data on beef between enterprises, from producers to retailers, by using Web EDI technology. In order to convert codes efficiently, direct conversion of a sender's code to a receiver's code using a code map is used. The system implementing this function went into operation in September 2004. Twelve enterprises, including retailers, processing traders, and wholesalers, were using the system as of June 2005. In this system, the number of code maps, which determines the introductory cost of the code conversion function, was lower than the theoretical value and close to the case in which a standard code is mediated.
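The direct sender-to-receiver code conversion via a code map can be pictured with a small sketch; the enterprise names, item codes, and record fields below are invented examples, not codes from the actual system.

```python
# Sketch of direct sender-to-receiver code conversion via a code map; all
# codes and fields are invented for illustration.

CODE_MAPS = {
    # (sender, receiver) -> {sender's item code: receiver's item code}
    ("wholesaler_A", "retailer_B"): {"WAGYU-001": "RB-BEEF-0457",
                                     "PORK-LOIN-12": "RB-PORK-0031"},
}

def convert(record: dict, sender: str, receiver: str) -> dict:
    """Translate the item code in an exchange record for the receiving party."""
    code_map = CODE_MAPS[(sender, receiver)]
    converted = dict(record)
    converted["item_code"] = code_map[record["item_code"]]
    return converted

shipment = {"item_code": "WAGYU-001", "lot": "2004-09-117", "weight_kg": 12.4}
print(convert(shipment, "wholesaler_A", "retailer_B"))
```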
Architectural Implications for Spatial Object Association Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, V S; Kurc, T; Saltz, J
2009-01-29
Spatial object association, also referred to as cross-match of spatial datasets, is the problem of identifying and comparing objects in two or more datasets based on their positions in a common spatial coordinate system. In this work, we evaluate two crossmatch algorithms that are used for astronomical sky surveys, on the following database system architecture configurations: (1) Netezza Performance Server®, a parallel database system with active disk style processing capabilities, (2) MySQL Cluster, a high-throughput network database system, and (3) a hybrid configuration consisting of a collection of independent database system instances with data replication support. Our evaluation provides insights about how architectural characteristics of these systems affect the performance of the spatial crossmatch algorithms. We conducted our study using real use-case scenarios borrowed from a large-scale astronomy application known as the Large Synoptic Survey Telescope (LSST).
Database on Demand: insight how to build your own DBaaS
NASA Astrophysics Data System (ADS)
Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio
2015-12-01
At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. The Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operation, give some insight into our redesigned software engineering, and outline its near-future evolution.
Drozda, Joseph P; Roach, James; Forsyth, Thomas; Helmering, Paul; Dummitt, Benjamin; Tcheng, James E
2018-02-01
The US Food and Drug Administration (FDA) has recognized the need to improve the tracking of medical device safety and performance, with implementation of Unique Device Identifiers (UDIs) in electronic health information as a key strategy. The FDA funded a demonstration by Mercy Health wherein prototype UDIs were incorporated into its electronic information systems. This report describes the demonstration's informatics architecture. Prototype UDIs for coronary stents were created and implemented across a series of information systems, resulting in UDI-associated data flow from manufacture through point of use to long-term follow-up, with barcode scanning linking clinical data with UDI-associated device attributes. A reference database containing device attributes and the UDI Research and Surveillance Database (UDIR) containing the linked clinical and device information were created, enabling longitudinal assessment of device performance. The demonstration included many stakeholders: multiple Mercy departments, manufacturers, health system partners, the FDA, professional societies, the National Cardiovascular Data Registry, and information system vendors. The resulting system of systems is described in detail, including entities, functions, linkage between the UDIR and proprietary systems using UDIs as the index key, data flow, roles and responsibilities of actors, and the UDIR data model. The demonstration provided proof of concept that UDIs can be incorporated into provider and enterprise electronic information systems and used as the index key to combine device and clinical data in a database useful for device evaluation. Keys to success and challenges to achieving this goal were identified. Fundamental informatics principles were central to accomplishing the system of systems model. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
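A minimal sketch of the UDI-as-index-key linkage described above, using SQLite and invented table, field, and UDI values rather than the Mercy Health schema: clinical events captured by barcode scanning join to a device-attribute reference table on the UDI.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE device_reference (udi TEXT PRIMARY KEY, model TEXT, stent_diameter_mm REAL);
CREATE TABLE clinical_events (event_id INTEGER PRIMARY KEY, patient_id TEXT,
                              udi TEXT, procedure_date TEXT);
INSERT INTO device_reference VALUES ('(01)00812345678901(10)LOT42', 'StentX 3.0', 3.0);
INSERT INTO clinical_events VALUES (1, 'P-100', '(01)00812345678901(10)LOT42', '2016-05-01');
""")

-- """ is closed above; the join below is the longitudinal device-evaluation query.
for row in con.execute("""
        SELECT e.patient_id, e.procedure_date, d.model, d.stent_diameter_mm
        FROM clinical_events e JOIN device_reference d USING (udi)"""):
    print(row)
```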
EarthCube as an information resource marketplace; the GEAR Project conceptual design
NASA Astrophysics Data System (ADS)
Richard, S. M.; Zaslavsky, I.; Gupta, A.; Valentine, D.
2015-12-01
Geoscience Architecture for Research (GEAR) is approaching EarthCube design as a complex and evolving socio-technical federation of systems. EarthCube is intended to support the science research enterprise, for which there is no centralized command and control, requirements are a moving target, the function and behavior of the system must evolve and adapt as new scientific paradigms emerge, and system participants are conducting research that inherently implies seeking new ways of doing things. EarthCube must address evolving user requirements and enable domain and project systems developed under different management and for different purposes to work together. The EC architecture must focus on creating a technical environment that enables new capabilities by combining existing and newly developed resources in various ways, and encourages development of new resource designs intended for re-use and interoperability. In a sense, instead of a single architecture design, GEAR provides a way to accommodate multiple designs tuned to different tasks. This agile, adaptive, evolutionary software development style is based on a continuously updated portfolio of compatible components that enable new sub-system architectures. System users make decisions about which components to use in this marketplace based on performance, satisfaction, and impact metrics collected continuously to evaluate components, determine priorities, and guide resource allocation decisions by the system governance agency. EC is designed as a federation of independent systems, and although the coordinator of the EC system may be named an enterprise architect, the focus of the role needs to be organizing resources, assessing their readiness for interoperability with the existing EC component inventory, managing dependencies between transient subsystems, maintaining mechanisms for stakeholder engagement and inclusion, and negotiating standard interfaces, rather than actually specifying components. Compositions of components will be developed by projects that involve both domain scientists and CI experts for specific research problems. We believe an agile, marketplace-type approach is an essential architectural strategy for EarthCube.
FRED, a Front End for Databases.
ERIC Educational Resources Information Center
Crystal, Maurice I.; Jakobson, Gabriel E.
1982-01-01
FRED (a Front End for Databases) was conceived to alleviate data access difficulties posed by the heterogeneous nature of online databases. A hardware/software layer interposed between users and databases, it consists of three subsystems: user-interface, database-interface, and knowledge base. Architectural alternatives for this database machine…
Cost Considerations in Cloud Computing
2014-01-01
investments. 2. Database Options The potential promise that "big data" analytics holds for many enterprise mission areas makes relevant the question of the...development of a range of new distributed file systems and databases that have better scalability properties than traditional SQL databases. Hadoop ... data. Many systems exist that extend or supplement Hadoop, such as Apache Accumulo, which provides a highly granular mechanism for managing security
Machine learning algorithms for the creation of clinical healthcare enterprise systems
NASA Astrophysics Data System (ADS)
Mandal, Indrajit
2017-10-01
Clinical recommender systems are increasingly becoming popular for improving modern healthcare systems. Enterprise systems are persuasively used for creating effective nurse care plans to provide nurse training, clinical recommendations and clinical quality control. A novel design of a reliable clinical recommender system based on a multiple classifier system (MCS) is implemented. A hybrid machine learning (ML) ensemble based on the random subspace method and random forest is presented. The performance accuracy and robustness of the proposed enterprise architecture are quantitatively estimated to be above 99% and 97%, respectively (above the 95% confidence interval). The study then extends to experimental analysis of the clinical recommender system with respect to noisy data environments. The ranking of items in the nurse care plan is demonstrated using machine learning algorithms (MLAs) to overcome the drawback of the traditional association rule method. The promising experimental results are compared against state-of-the-art approaches to highlight the advancement in recommendation technology. The proposed recommender system is experimentally validated using five benchmark clinical datasets to reinforce the research findings.
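The abstract does not give implementation details, but a hybrid of the random subspace method with random forests can be sketched as below; the dataset, ensemble size, and subspace size are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_members, subspace = 7, 12          # ensemble size and features per random subspace
members = []
for _ in range(n_members):
    feats = rng.choice(X.shape[1], size=subspace, replace=False)   # random subspace of features
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr[:, feats], y_tr)
    members.append((feats, clf))

# Majority vote across the subspace-specific forests.
votes = np.stack([clf.predict(X_te[:, feats]) for feats, clf in members])
pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", (pred == y_te).mean())
```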
Code of Federal Regulations, 2012 CFR
2012-07-01
... enterprise information infrastructure requirements. (c) The academic disciplines, with concentrations in IA..., computer systems analysis, cyber operations, cybersecurity, database administration, data management... infrastructure development and academic research to support the DoD IA/IT critical areas of interest. ...
Code of Federal Regulations, 2013 CFR
2013-07-01
... enterprise information infrastructure requirements. (c) The academic disciplines, with concentrations in IA..., computer systems analysis, cyber operations, cybersecurity, database administration, data management... infrastructure development and academic research to support the DoD IA/IT critical areas of interest. ...
Code of Federal Regulations, 2014 CFR
2014-07-01
... enterprise information infrastructure requirements. (c) The academic disciplines, with concentrations in IA..., computer systems analysis, cyber operations, cybersecurity, database administration, data management... infrastructure development and academic research to support the DoD IA/IT critical areas of interest. ...
Amin, Waqas; Parwani, Anil V; Schmandt, Linda; Mohanty, Sambit K; Farhat, Ghada; Pople, Andrew K; Winters, Sharon B; Whelan, Nancy B; Schneider, Althea M; Milnes, John T; Valdivieso, Federico A; Feldman, Michael; Pass, Harvey I; Dhir, Rajiv; Melamed, Jonathan; Becich, Michael J
2008-08-13
Advances in translational research have led to the need for well-characterized biospecimens for research. The National Mesothelioma Virtual Bank is an initiative which collects annotated datasets relevant to human mesothelioma to develop an enterprising biospecimen resource to fulfill researchers' needs. The National Mesothelioma Virtual Bank architecture is based on three major components: (a) common data elements (based on the College of American Pathologists protocol and North American Association of Central Cancer Registries standards), (b) clinical and epidemiologic data annotation, and (c) data query tools. These tools work interoperably to standardize the entire process of annotation. The National Mesothelioma Virtual Bank tool is based upon the caTISSUE Clinical Annotation Engine, developed by the University of Pittsburgh in cooperation with the Cancer Biomedical Informatics Grid (caBIG, see http://cabig.nci.nih.gov). This application provides a web-based system for annotating, importing and searching mesothelioma cases. The underlying information model is constructed utilizing Unified Modeling Language class diagrams, hierarchical relationships and Enterprise Architect software. The database provides researchers real-time access to richly annotated specimens and integral information related to mesothelioma. The data disclosed are tightly regulated depending upon users' authorization and on the participating institution, subject to local Institutional Review Board and regulatory committee review. The National Mesothelioma Virtual Bank currently has over 600 annotated cases available for researchers that include paraffin-embedded tissues, tissue microarrays, serum and genomic DNA. The National Mesothelioma Virtual Bank is a virtual biospecimen registry with robust translational biomedical informatics support to facilitate basic science, clinical, and translational research. Furthermore, it protects patient privacy by disclosing only de-identified datasets to assure that biospecimens can be made accessible to researchers.
Development of Geospatial Map Based Portal for New Delhi Municipal Council
NASA Astrophysics Data System (ADS)
Gupta, A. Kumar Chandra; Kumar, P.; Sharma, P. Kumar
2017-09-01
Geospatial Delhi Limited (GSDL) is a Govt. of NCT of Delhi company formed to provide the geospatial information of the National Capital Territory of Delhi (NCTD) to the Government of National Capital Territory of Delhi (GNCTD) and its organs, such as DDA, MCD, DJB, the State Election Department, DMRC etc., for the benefit of all citizens of the GNCTD. This paper describes the development of the Geospatial Map based Portal (GMP) for the New Delhi Municipal Council (NDMC) of NCT of Delhi. The GMP has been developed as a map-based spatial decision support system (SDSS) for planning and development of the NDMC area by the NDMC department, and it has inbuilt information-searching tools (identification of locations, nearest utility locations, distance measurement, etc.) for the citizens of NCTD. The GMP is based on a client-server architecture model. It has been developed using ArcGIS Server 10.0 with .NET (pronounced dot net) technology. The GMP is scalable to an enterprise SDSS with an enterprise geodatabase and Virtual Private Network (VPN) connectivity. Spatial data in the GMP include Circle, Division and Sub-division boundaries of the department pertaining to the New Delhi Municipal Council; parcels of residential, commercial, and government buildings; basic amenities (police stations, hospitals, schools, banks, ATMs, fire stations, etc.); over-ground and underground utility network lines; roads; and railway features. The GMP could help achieve not only the desired transparency and ease in the planning process but also facilitate, through efficient and effective tools, the development and management of the NDMC area. It enables a faster response to the changing ground realities in development planning, owing to its in-built scientific approach and open-ended design.
Development of Geospatial Map Based Portal for Delimitation of Mcd Wards
NASA Astrophysics Data System (ADS)
Gupta, A. Kumar Chandra; Kumar, P.; Sharma, P. Kumar
2017-09-01
Geospatial Delhi Limited (GSDL) is a Govt. of NCT of Delhi company formed to provide the geospatial information of the National Capital Territory of Delhi (NCTD) to the Government of National Capital Territory of Delhi (GNCTD) and its organs, such as DDA, MCD, DJB, the State Election Department, DMRC etc., for the benefit of all citizens of the GNCTD. This paper describes the development of the Geospatial Map based Portal for Delimitation of MCD Wards (GMPDW) and the election of the 3 Municipal Corporations of NCT of Delhi. The portal has been developed as a map-based spatial decision support system (SDSS) for delimitation of MCD wards and drawing of peripheral ward boundaries, for planning and management of the MCD election process of the State Election Commission, and as an MCD election related information-searching tool (polling stations, MCD wards, assembly constituencies, etc.) for the citizens of NCTD. The GMPDW is based on a client-server architecture model. It has been developed using ArcGIS Server 10.0 with .NET (pronounced dot net) technology. The GMPDW is scalable to an enterprise SDSS with an enterprise geodatabase and Virtual Private Network (VPN) connectivity. Spatial data in the GMPDW include Enumeration Block (EB) and Enumeration Block Group (EBG) boundaries of the citizens of Delhi, assembly constituencies, parliamentary constituencies, election districts, landmark locations of polling stations, and basic amenities (police stations, hospitals, schools, fire stations, etc.). The GMPDW could help achieve not only the desired transparency and ease in the planning process but also facilitate, through efficient and effective tools, the management of the MCD election. It enables a faster response to the changing ground realities in development planning, owing to its in-built scientific approach and open-ended design.
OLYMPUS DISS - A Readily Implemented Geographic Data and Information Sharing System
NASA Astrophysics Data System (ADS)
Necsoiu, D. M.; Winfrey, B.; Murphy, K.; McKague, H. L.
2002-12-01
Electronic information technology has become a crucial component of business, government, and scientific organizations. In this technology era, many enterprises are moving away from the perception that information repositories are only a tool for decision-making. Instead, many organizations are learning that information systems, which are capable of organizing and following the interrelations between information and both the short-term and strategic organizational goals, are assets themselves, with inherent value. Olympus Data and Information Sharing System (DISS) is a system developed at the Center for Nuclear Waste Regulatory Analyses (CNWRA) to solve several difficult tasks associated with the management of geographical, geological and geophysical data. Three of the tasks were to (1) gather the large amount of heterogeneous information that has accumulated over the operational lifespan of CNWRA, (2) store the data in a central, knowledge-based, searchable database and (3) create quick, easy, convenient, and reliable access to that information. Faced with these difficult tasks CNWRA identified the requirements for designing such a system. Key design criteria were: (a) ability to ingest different data formats (i.e., raster, vector, and tabular data); (b) minimal expense using open-source and commercial off-the-shelf software; (c) seamless management of geospatial data, freeing up time for researchers to focus on analyses or algorithm development, rather than on time consuming format conversions; (d) controlled access; and (e) scalable architecture to meet new and continuing demands. Olympus DISS is a solution that can be easily adapted to small and mid-size enterprises dealing with heterogeneous geographic data. It uses established data standards, provides a flexible mechanism to build applications upon and output geographic data in multiple and clear ways. This abstract is an independent product of the CNWRA and does not necessarily reflect the views or regulatory position of the Nuclear Regulatory Commission.
NASA Astrophysics Data System (ADS)
Hsu, Charles; Viazanko, Michael; O'Looney, Jimmy; Szu, Harold
2009-04-01
The Modularity Biometric System (MBS) is an approach to support aided target recognition (AiTR) of cooperative and/or non-cooperative standoff biometrics in area persistent surveillance. Advanced active and passive EO/IR and RF sensor suites are not considered here, nor do we consider the ROC (PD vs. FAR) versus standoff POT in this paper. Our goal is to identify the roughly two dozen "most wanted (MW)" individuals, further separating an ad hoc female MW class from a male MW class, given a sparse archival database of frontal face images, by means of new instantaneous inputs called probing faces. We present an advanced algorithm: a mini-max classifier, a sparse-sample realization of the Cramer-Rao Fisher bound of the maximum likelihood classifier, that minimizes the dispersion within the same female classes and maximizes the separation among different male and female classes, based on the simple feature space of MIT Pentland eigenfaces. The original aspect consists of a modular, structured design approach at the system level with multi-level architectures, multiple computing paradigms, and adaptable/evolvable techniques to allow for achieving a scalable structure in terms of biometric algorithms, identification quality, sensors, database complexity, database integration, and component heterogeneity. MBS consists of a number of biometric technologies including fingerprints, vein maps, voice and face recognition with innovative DSP algorithms, and their hardware implementations such as Field Programmable Gate Arrays (FPGAs). Biometric technologies and the composed modularity biometric system are significant for governmental agencies, enterprises, banks and all other organizations to protect people or control access to critical resources.
Murphy, SN; Barnett, GO; Chueh, HC
2000-01-01
The patient base of the Partners HealthCare System in Boston exceeds 1.8 million. Many of these patients are desirable for participation in research studies. To facilitate their discovery, we developed a data warehouse to contain clinical characteristics of these patients. The data warehouse contains diagnoses and procedures from administrative databases. The patients are indexed across institutions and their demographics provided by an Enterprise Master Patient Indexing service. Characteristics of the diagnoses and procedures such as associated providers, dates of service, inpatient/outpatient status, and other visit-related characteristics are also fed from the administrative systems. The targeted users of this system are research clinicians interested in finding patient cohorts for research studies. Their data requirements were analyzed and have been reported elsewhere. We did not expect the clinicians to become expert users of the system. Tools for querying healthcare data have traditionally been text based, although graphical interfaces have been pursued. In order to support the simple drag and drop visual model, as well as the identification and distribution of the patient data, a three-tier software architecture was developed. The user interface was developed in Visual Basic and distributed as an ActiveX object embedded in an HTML page. The middle layer was developed in Java and Microsoft COM. The queries are represented throughout their lifetime as XML objects, and the Microsoft SQL7 database is queried and managed in standard SQL. PMID:11080028
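One way to picture the query lifecycle in such a three-tier design is sketched below: a cohort query held as an XML object is rendered into SQL only at the data tier. The element names and schema are hypothetical and not those of the Partners system.

```python
import xml.etree.ElementTree as ET

query_xml = """
<query>
  <criterion table="diagnoses" column="icd_code" op="=" value="250.00"/>
  <criterion table="visits" column="setting" op="=" value="outpatient"/>
</query>"""

def xml_to_sql(doc: str) -> str:
    """Render an XML cohort query into SQL at the data tier."""
    root = ET.fromstring(doc)
    clauses = [f"{c.get('table')}.{c.get('column')} {c.get('op')} '{c.get('value')}'"
               for c in root.findall("criterion")]
    return ("SELECT DISTINCT p.patient_id FROM patients p "
            "JOIN diagnoses ON diagnoses.patient_id = p.patient_id "
            "JOIN visits ON visits.patient_id = p.patient_id "
            "WHERE " + " AND ".join(clauses))

print(xml_to_sql(query_xml))
```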
Migration strategies for service-enabling ground control stations for unmanned systems
NASA Astrophysics Data System (ADS)
Kroculick, Joseph B.
2011-06-01
Future unmanned systems will be integrated into the Global Information Grid (GIG) and support net-centric data sharing, where information in a domain is exposed to a wide variety of GIG stakeholders that can make use of the information provided. Adopting a Service-Oriented Architecture (SOA) approach to package reusable UAV control station functionality into common control services provides a number of benefits including enabling dynamic plug and play of components depending on changing mission requirements, supporting information sharing to the enterprise, and integrating information from authoritative sources such as mission planners with the UAV control stations data model. It also allows the wider enterprise community to use the services provided by unmanned systems and improve data quality to support more effective decision-making. We explore current challenges in migrating UAV control systems that manage multiple types of vehicles to a Service-Oriented Architecture (SOA). Service-oriented analysis involves reviewing legacy systems and determining which components can be made into a service. Existing UAV control stations provide audio/visual, navigation, and vehicle health and status information that are useful to C4I systems. However, many were designed to be closed systems with proprietary software and hardware implementations, message formats, and specific mission requirements. An architecture analysis can be performed that reviews legacy systems and determines which components can be made into a service. A phased SOA adoption approach can then be developed that improves system interoperability.
Knowledge Framework Implementation with Multiple Architectures - 13090
DOE Office of Scientific and Technical Information (OSTI.GOV)
Upadhyay, H.; Lagos, L.; Quintero, W.
2013-07-01
Multiple kinds of knowledge management systems are operational in public and private enterprises, in large and small organizations with a variety of business models, which makes the design, implementation and operation of integrated knowledge systems very difficult. In recent years, there has been sweeping advancement in the information technology area, leading to the development of sophisticated frameworks and architectures. These platforms need to be used for the development of integrated knowledge management systems that provide a common platform for sharing knowledge across the enterprise, thereby reducing operational inefficiencies and delivering cost savings. This paper discusses the knowledge framework and architecture that can be used for system development and its application to a real-life need of the nuclear industry. A case study of deactivation and decommissioning (D and D) is discussed with the Knowledge Management Information Tool platform and framework. D and D work is a high priority activity across the Department of Energy (DOE) complex. Subject matter specialists (SMS) associated with DOE sites, the Energy Facility Contractors Group (EFCOG) and the D and D community have gained extensive knowledge and experience over the years in the cleanup of the legacy waste from the Manhattan Project. To prevent the D and D knowledge and expertise from being lost over time from the evolving and aging workforce, DOE and the Applied Research Center (ARC) at Florida International University (FIU) proposed to capture and maintain this valuable information in a universally available and easily usable system. (authors)
2008-06-01
numbers—into inventory, sales, purchasing, marketing, and similar database systems distributed throughout an enterprise. (Sweeney, 2005) It can be seen as...the following: • Data sharing, both inside and outside of an enterprise. • Efficient management of massive data produced by an RFID system...matrix can be read omni-directionally and can be scaled down so that it can be affixed to small items. The DoD brokered an agreement with EAN/UCC, the
The Health Service Bus: an architecture and case study in achieving interoperability in healthcare.
Ryan, Amanda; Eklund, Peter
2010-01-01
Interoperability in healthcare is a requirement for effective communication between entities, to ensure timely access to up-to-date patient information and medical knowledge, and thus facilitate consistent patient care. An interoperability framework called the Health Service Bus (HSB), based on the Enterprise Service Bus (ESB) middleware software architecture, is presented here as a solution to all three levels of interoperability as defined by the HL7 EHR Interoperability Work Group in their definitive white paper "Coming to Terms". A prototype HSB system was implemented based on the Mule open-source ESB and is outlined and discussed, followed by a clinically-based example.
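A minimal sketch of the publish/route pattern underlying an ESB-style bus such as the HSB, with invented message types and handlers; a real deployment (e.g., on Mule) would add transports, transformation and reliability, which are omitted here.

```python
from collections import defaultdict

class ServiceBus:
    """Toy message bus: services register endpoints per message type, the bus routes messages."""
    def __init__(self):
        self.routes = defaultdict(list)

    def register(self, message_type, handler):
        self.routes[message_type].append(handler)

    def publish(self, message_type, payload):
        for handler in self.routes[message_type]:
            handler(payload)

bus = ServiceBus()
bus.register("patient.admitted", lambda m: print("EHR updated for", m["patient_id"]))
bus.register("patient.admitted", lambda m: print("Billing notified for", m["patient_id"]))
bus.publish("patient.admitted", {"patient_id": "P-42"})
```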
Deployment of a Testbed in a Brazilian Research Network using IPv6 and Optical Access Technologies
NASA Astrophysics Data System (ADS)
Martins, Luciano; Ferramola Pozzuto, João; Olimpio Tognolli, João; Chaves, Niudomar Siqueira De A.; Reggiani, Atilio Eduardo; Hortêncio, Claudio Antonio
2012-04-01
This article presents the implementation of a testbed and the experimental results obtained with it on the Brazilian Experimental Network of the government-sponsored "GIGA Project." The use of IPv6 integrated with current and emerging optical architectures and technologies, such as dense wavelength division multiplexing and 10-gigabit Ethernet in the core and gigabit-capable passive optical network and optical distribution network in the access, was tested. These protocols, architectures, and optical technologies are promising and part of a brand new worldwide technological scenario that is being adopted in the networks of enterprises and providers around the world.
Medicaid information technology architecture: an overview.
Friedman, Richard H
2006-01-01
The Medicaid Information Technology Architecture (MITA) is a roadmap and tool-kit for States to transform their Medicaid Management Information System (MMIS) into an enterprise-wide, beneficiary-centric system. MITA will enable State Medicaid agencies to align their information technology (IT) opportunities with their evolving business needs. It also addresses long-standing issues of interoperability, adaptability, and data sharing, including clinical data, across organizational boundaries by creating models based on nationally accepted technical standards. Perhaps most significantly, MITA allows State Medicaid Programs to actively participate in the DHHS Secretary's vision of a transparent health care market that utilizes electronic health records (EHRs), ePrescribing and personal health records (PHRs).
Military Cyberspace: From Evolution to Revolution
2012-02-08
support the GCCs and enable USCYBERCOM to accomplish its mission? 15. SUBJECT TERMS Network Operations, Global Information Grid (GIG), Network...DATE: 08 February 2012 WORD COUNT: 5,405 PAGES: 30 KEY TERMS: Network Operations, Global Information Grid (GIG), Network Architecture...defense of the DOD global information grid (GIG). The DOD must pursue an enterprise approach to network management in the cyberspace domain to
SPECIAL PURPOSE IT DERAILED: UNINTENDED CONSEQUENCES OF UNIVERSAL IT LAWS AND POLICIES
2017-10-26
Figure 2: iNET Instrumentation Telemetry Ground Station...consolidate local Information Technology (IT) networks into an enterprise architecture to reduce costs and to increase security. Leadership coined this...IT network was established to link Air Force and contractor sites to seamlessly share program information. So when Air Force IT leadership tried to
Defense Security Enterprise Architecture (DSEA) Product Reference Guide. Revision 1.0
2016-06-01
research and development efforts and functional requirements to provide an information sharing capability across all defense security domains. The...Office of the Secretary of Defense (OSD) Research and Development (RDT&E) initiative addressing vertical and horizontal information sharing across the...legal responsibilities to ensure data received by analysts meets user-specified criteria. This advancement in information sharing is made
Engineering Software for Interoperability through Use of Enterprise Architecture Techniques
2003-03-01
Response Home/Business Security. To detect flood conditions (i.e., excess water levels) within the monitored area and alert authorities, as necessary...Response; Fire Detection & Response; and Flood Detection & Response. Functional Area Description: Intruder Detection & Response Home/Business Security. To monitor and detect unauthorized entry into the secured area and sound alarms/alert authorities, as necessary. Fire Detection
Enterprise Architecture Tradespace Analysis
2014-02-21
EXECUTIVE SUMMARY The Department of Defense (DoD)’s Science & Technology (S&T) priority for Engineered Resilient Systems (ERS) calls for...Science & Technology (S&T) priority for Engineered Resilient Systems (ERS) calls for adaptable designs with diverse systems models that can easily be...Department of Defense [Holland, 2012]. Some explicit goals are: • Establish baseline resiliency of current capabilities • More complete and robust
A hybrid method for evaluating enterprise architecture implementation.
Nikpay, Fatemeh; Ahmad, Rodina; Yin Kia, Chiam
2017-02-01
Enterprise Architecture (EA) implementation evaluation provides a set of methods and practices for evaluating the EA implementation artefacts within an EA implementation project. There are insufficient practices in existing EA evaluation models in terms of considering all EA functions and processes, using structured methods in developing EA implementation, employing matured practices, and using appropriate metrics to achieve proper evaluation. The aim of this research is to develop a hybrid evaluation method that supports achieving the objectives of EA implementation. To attain this aim, the first step is to identify EA implementation evaluation practices. To this end, a Systematic Literature Review (SLR) was conducted. Second, the proposed hybrid method was developed based on the foundation and information extracted from the SLR, semi-structured interviews with EA practitioners, program theory evaluation and Information Systems (ISs) evaluation. Finally, the proposed method was validated by means of a case study and expert reviews. This research provides a suitable foundation for researchers who wish to extend and continue this research topic with further analysis and exploration, and for practitioners who would like to employ an effective and lightweight evaluation method for EA projects. Copyright © 2016 Elsevier Ltd. All rights reserved.
A ROle-Oriented Filtering (ROOF) approach for collaborative recommendation
NASA Astrophysics Data System (ADS)
Ghani, Imran; Jeong, Seung Ryul
2016-09-01
In collaborative filtering (CF) recommender systems, existing techniques frequently focus on determining similarities among users' historical interests. This generally refers to situations in which each user normally plays a single role and his/her taste remains consistent over the long term. However, we note that existing techniques have not been significantly employed in a role-oriented context. This is especially so in situations where users may change their roles over time or play multiple roles simultaneously, while still expecting to access relevant information resources accordingly. Such systems include enterprise architecture management systems, e-commerce sites or journal management systems. In scenarios involving existing techniques, each user needs to build up very different profiles (preferences and interests) based on multiple roles which change over time. Should this not occur to a satisfactory degree, their previous information will either be lost or not utilised at all. To limit the occurrence of such issues, we propose a ROle-Oriented Filtering (ROOF) approach focusing on the manner in which multiple user profiles are obtained and maintained over time. We conducted a number of experiments using an enterprise architecture management scenario. In so doing, we observed that the ROOF approach performs better in comparison with other existing collaborative filtering-based techniques.
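The following sketch, with invented users, roles and ratings, illustrates the role-oriented idea: each user keeps a separate profile per role, and similarity is computed against other users acting in the same role rather than against a single merged history. It is a conceptual illustration, not the ROOF algorithm itself.

```python
import math

# user -> role -> {item: rating}; one profile per role instead of one merged profile
profiles = {
    "alice": {"architect": {"togaf_guide": 5, "archimate_tool": 4},
              "reviewer":  {"journal_ms_12": 5, "journal_ms_17": 3}},
    "bob":   {"architect": {"togaf_guide": 4, "soa_patterns": 5}},
}

def cosine(a, b):
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den

def recommend(user, role, k=3):
    """Rank unseen items rated by similar users acting in the same role."""
    target = profiles[user].get(role, {})
    scores = {}
    for other, roles in profiles.items():
        if other == user or role not in roles:
            continue
        sim = cosine(target, roles[role])
        for item, rating in roles[role].items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice", "architect"))   # e.g. ['soa_patterns']
```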
Buildings, Barriers, and Breakthroughs: Bridging Gaps in the Health Care Enterprise.
Kaelin, Karla; Okland, Kathy
Health care architecture and design are critical resources that are often underestimated and overlooked. As we seek to extract every available resource at our disposal to serve patients and sustain the bottom line, it is vital that we consider the influence the building imposes on the patient and caregiver experiences. Buildings impact both caregiver behaviors and the economic enterprise and are, therefore, the business of health care executives. This understanding is not only an executive obligation, it is an executive opportunity. Furthermore, the built environment can be a source for innovation in an industry whose future depends on nurse leaders to champion ingenuity with simplicity and relevance. Nurse leaders are ideally positioned to bridge health care building design and best practice.
JNDMS Task Authorization 2 Report
2013-10-01
uses Barnyard to store alarms from all DREnet Snort sensors in a MySQL database. Barnyard is an open source tool designed to work with Snort to take...Technology ITI Information Technology Infrastructure J2EE Java 2 Enterprise Edition JAR Java Archive. This is an archive file format defined by Java ...standards. JDBC Java Database Connectivity JDW JNDMS Data Warehouse JNDMS Joint Network and Defence Management System JNDMS Joint Network Defence and
The Battle Command Sustainment Support System: Initial Analysis Report
2016-09-01
diagnostic monitoring, asynchronous commits, and others. The other components of the NEDP include a main forwarding gateway/web server and one or more...NATIONAL ENTERPRISE DATA PORTAL ANALYSIS The NEDP is comprised of an Oracle Database 10g, referred to as the National Data Server, and several other...data forwarding gateways (DFG). Together with the Oracle Database 10g, these components provide a heterogeneous data source that aligns various data
Review of Spatial-Database System Usability: Recommendations for the ADDNS Project
2007-12-01
basic GIS background information, with a closer look at spatial databases. A GIS is also a computer-based system designed to capture, manage...foundation for deploying enterprise-wide spatial information systems. According to Oracle® [18], it enables accurate delivery of location-based services...Toronto TR 2007-141 Lanter, D.P. (1991). Design of a lineage-based meta-data base for GIS. Cartography and Geographic Information Systems, 18
[Occupational exposure to biological agents intentionally used in Polish enterprises].
Kozajda, Anna; Szadkowska-Stańczyk, Irena
2015-01-01
The paper presents the intentional use of biological agents for industrial, diagnostic and research purposes in Polish enterprises. The National Register of Biological Agents (Krajowy Rejestr Czynników Biologicznych - KRCB) is an online database that collects data on the intentional use of biological agents at work in Poland. As of December 2013, there were 533 notifications in KRCB, mainly for diagnostic (73%), research (20%) and industrial purposes (7%). These were mostly hospital diagnostic laboratories (37%) and other laboratories (35%), as well as higher education and research institutions (11%). In total, 4015 workers (91.7% women, 8.3% men) were exposed to biological agents. Agents classified in risk group 2 were used in 518 enterprises, and agents in risk group 3 in 107 enterprises. Of those agents, the following bacteria were the most frequently used: Escherichia coli except for non-pathogenic strains (455 enterprises and 3314 exposed workers); Staphylococcus aureus (445 and 3270); and Pseudomonas aeruginosa (406 and 2969, respectively). In 66 enterprises, biological agents recognized by the International Agency for Research on Cancer (IARC) as carcinogens were used. They are viruses: Epstein-Barr virus (7 enterprises, 181 exposed workers); hepatitis B virus (16 and 257); hepatitis C virus (15 and 243); human immunodeficiency virus (8 and 107); human papillomaviruses (2 and 4); parasites: Clonorchis viverrini (1 and 2); Clonorchis sinensis (1 and 2); Schistosoma haematobium (1 and 2); and the bacterium Helicobacter pylori (15 and 230, respectively). The National Register of Biological Agents at Work makes it possible to evaluate occupational exposure to biological agents used intentionally in enterprises in Poland.
Demand Activated Manufacturing Architecture (DAMA) model for supply chain collaboration
DOE Office of Scientific and Technical Information (OSTI.GOV)
CHAPMAN,LEON D.; PETERSEN,MARJORIE B.
The Demand Activated Manufacturing Architecture (DAMA) project, during the last five years of work with the U.S. Integrated Textile Complex (retail, apparel, textile, and fiber sectors), has developed an inter-enterprise architecture and collaborative model for supply chains. This model will enable improved collaborative business across any supply chain. The DAMA Model for Supply Chain Collaboration is a high-level model for collaboration to achieve Demand Activated Manufacturing. The five major elements of the architecture to support collaboration are (1) activity or process, (2) information, (3) application, (4) data, and (5) infrastructure. These five elements are tied to the application of the DAMA architecture to three phases of collaboration: prepare, pilot, and scale. There are six collaborative activities that may be employed in this model: (1) Develop Business Planning Agreements, (2) Define Products, (3) Forecast and Plan Capacity Commitments, (4) Schedule Product and Product Delivery, (5) Expedite Production and Delivery Exceptions, and (6) Populate Supply Chain Utility. The Supply Chain Utility is a set of applications implemented to support collaborative product definition, forecast visibility, planning, scheduling, and execution. The DAMA architecture and model will be presented along with the process for implementing this DAMA model.
Software Architecture Evolution
2013-12-01
system’s major components occurring via a Java Message Service message bus [69]. This architecture was designed to promote loose coupling of software...play reconfiguration of the system. The components were Java-based and platform-independent; the interfaces by which they communicated were based on...The MPCS database, a MySQL database used for storing telemetry as well as some other information, such as logs and commanding data [68]. This
The D3 Middleware Architecture
NASA Technical Reports Server (NTRS)
Walton, Joan; Filman, Robert E.; Korsmeyer, David J.; Lee, Diana D.; Mak, Ron; Patel, Tarang
2002-01-01
DARWIN is a NASA-developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid-dynamics) model executions. DARWIN captures, stores and indexes data; manages derived knowledge (such as visualizations across multiple datasets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing users to share not only visualizations of data but also commentary about and views of the data. Here we provide an overview of the architecture of D3, the third generation of DARWIN. Earlier versions of DARWIN were characterized by browser-based interfaces and a hodge-podge of server technologies: CGI scripts, applets, PERL, and so forth. But browsers proved difficult to control, and a proliferation of computational mechanisms proved inefficient and difficult to maintain. D3 substitutes a pure-Java approach for that medley: a Java client communicates (through RMI over HTTPS) with a Java-based application server. Code on the server accesses information from JDBC databases, distributed LDAP security services, and a collaborative information system. D3 is a three-tier architecture, but unlike 'E-commerce' applications, the data usage pattern suggests different strategies than traditional Enterprise Java Beans: we need to move volumes of related data together, considerable processing happens on the client, and the 'business logic' on the server side is primarily data integration and collaboration. With D3, we are extending DARWIN to handle other data domains and to be a distributed system, where a single login allows a user transparent access to test results from multiple servers and authority domains.
Tailoring PKI for the battlespace
NASA Astrophysics Data System (ADS)
Covey, Carlin R.
2003-07-01
A Public Key Infrastructure (PKI) can provide useful communication protections for friendly forces in the battlespace. The PKI would be used in conjunction with communication facilities that are accorded physical and Type-1 cryptographic protections. The latter protections would safeguard the confidentiality and (optionally) the integrity of communications between enclaves of users, whereas the PKI protections would furnish identification, authentication, authorization and privacy services for individual users. However, Commercial-Off-the-Shelf (COTS) and most Government-Off-the-Shelf (GOTS) PKI solutions are not ideally tailored for the battlespace environment. Most PKI solutions assume a relatively static, high-bandwidth communication network, whereas communication links in the battlespace will be dynamically reconfigured and bandwidth-limited. Most enterprise-wide PKI systems assume that users will enroll and disenroll at an orderly pace, whereas the battlespace PKI "enterprise" will grow and shrink abruptly as units are deployed or withdrawn from the battlespace. COTS and GOTS PKIs are seldom required to incorporate temporary "enterprise mergers", whereas the battlespace "enterprise" will need to incorporate temporary coalitions of forces drawn from various nations. This paper addresses both well-known and novel techniques for tailoring PKI for the battlespace environment. These techniques include the design of the security architecture, the selection of appropriate options within PKI standards, and some new PKI protocols that offer significant advantages in the battlespace.
An ontology-based semantic configuration approach to constructing Data as a Service for enterprises
NASA Astrophysics Data System (ADS)
Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi
2016-03-01
To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations, to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution to facilitate data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The results of the case study demonstrate that the proposed approach provides a flexible method of implementing data-centric applications.
An alternative database approach for management of SNOMED CT and improved patient data queries.
Campbell, W Scott; Pedersen, Jay; McClay, James C; Rao, Praveen; Bastola, Dhundy; Campbell, James R
2015-10-01
SNOMED CT is the international lingua franca of terminologies for human health. Based in Description Logics (DL), the terminology enables data queries that incorporate inferences between data elements as well as those relationships that are explicitly stated. However, the ontologic and polyhierarchical nature of the SNOMED CT concept model makes it difficult to implement in its entirety within electronic health record systems that largely employ object-oriented or relational database architectures. The result is a reduction of data richness, limitations of query capability and increased systems overhead. The hypothesis of this research was that a graph database (graph DB) architecture using SNOMED CT as the basis for the data model, and subsequently modeling patient data upon the semantic core of SNOMED CT, could exploit the full value of the terminology to enrich and support advanced data querying capability over patient data sets. The hypothesis was tested by instantiating a graph DB with the fully classified SNOMED CT concept model. The graph DB instance was tested for integrity by calculating the transitive closure table for the SNOMED CT hierarchy and comparing the results with transitive closure tables created using current, validated methods. The graph DB was then populated with 461,171 anonymized patient record fragments and over 2.1 million associated SNOMED CT clinical findings. Queries, including concept negation and disjunction, were then run against the graph database and an enterprise Oracle relational database (RDBMS) containing the same patient data sets. The graph DB was then populated with laboratory data encoded using LOINC, as well as medication data encoded with RxNorm, and complex queries were performed using LOINC, RxNorm and SNOMED CT to identify uniquely described patient populations. A graph database instance was successfully created for two international releases of SNOMED CT and two US SNOMED CT editions. Transitive closure tables and descriptive statistics generated using the graph database were identical to those produced using validated methods. Patient queries produced patient counts identical to those from the Oracle RDBMS with comparable times. Database queries involving defining attributes of SNOMED CT concepts were possible with the graph DB. The same queries could not be directly performed with the Oracle RDBMS representation of the patient data and required the creation and use of external terminology services. Further, queries of undefined depth were successful in identifying unknown relationships between patient cohorts. The results of this study supported the hypothesis that a patient database built upon and around the semantic model of SNOMED CT was possible. The model supported queries that leveraged all aspects of the SNOMED CT logical model to produce clinically relevant query results. Logical disjunction and negation queries were possible using the data model, as well as queries that extended beyond the structural IS_A hierarchy of SNOMED CT to include queries that employed defining attribute-values of SNOMED CT concepts as search parameters. As medical terminologies, such as SNOMED CT, continue to expand, they will become more complex and model consistency will be more difficult to assure. Simultaneously, consumers of data will increasingly demand improvements to query functionality to accommodate additional granularity of clinical concepts without sacrificing speed.
This new line of research provides an alternative approach to instantiating and querying patient data represented using advanced computable clinical terminologies. Copyright © 2015 Elsevier Inc. All rights reserved.
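A toy sketch of the subsumption-style query the study runs against the graph database: patient findings are matched to a query concept by walking IS_A edges to arbitrary depth. The concept identifiers and patient data are invented and are not real SNOMED CT codes.

```python
is_a = {   # child concept -> parent concepts (toy identifiers, not real SNOMED CT codes)
    "C_BACT_PNEUMONIA": ["C_PNEUMONIA"],
    "C_PNEUMONIA": ["C_LUNG_DISEASE"],
    "C_LUNG_DISEASE": [],
}
patient_findings = {"P1": ["C_BACT_PNEUMONIA"], "P2": ["C_LUNG_DISEASE"], "P3": ["C_FRACTURE"]}

def ancestors(concept):
    """Transitive closure over IS_A for one concept (a query of undefined depth)."""
    seen, stack = set(), list(is_a.get(concept, []))
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(is_a.get(c, []))
    return seen

def patients_with(query_concept):
    """Patients whose findings equal or are subsumed by the query concept."""
    return [p for p, findings in patient_findings.items()
            if any(f == query_concept or query_concept in ancestors(f) for f in findings)]

print(patients_with("C_LUNG_DISEASE"))   # ['P1', 'P2']
```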
Using Organizational Behavior To Increase the Efficiency of The Total Force Enterprise
2013-06-01
database and through personal interviews, the reader will be shown the common struggles faced by units undergoing change without following the...active, ARC, or hybrid may all play a role. Through this database and personal interviews, we can determine how the unit change was dealt with, if...organizational structures, as well as strategy. If we overlay the speed of technological change and the power of social media, military organizations failing
2008-09-01
Abbreviations ATM automated teller machine BEA business enterprise architecture DOD...Limitations Automated Teller Machines (ATMs)-At-Sea 1988 Localized, shipboard ATMs that received and accounted for a portion of sailors' and...use smart card technology for electronic retail transactions and (2) economically justified on the basis of reliable analyses of estimated costs and
Interagency and Multinational Information Sharing Architecture and Solutions (IMISAS) Project
2012-02-01
Defense (DOD) Enterprise Unclassified Information Sharing Service, August 10, 2010 12 Lindenmayer, Martin J. Civil Information and Intelligence Fusion...Organizations (1:2), 44-65. Lindenmayer, Martin J. Civil Information and Intelligence Fusion: Making “Non-Traditional” into “New Traditional” for...perceived as a good start which needs more development. References [Badke-Schaub et al. 2008] Badke-Schaub, Petra; Hofinger, Gesine; Lauche
Emergency Management Operations Process Mapping: Public Safety Technical Program Study
2011-02-01
Enterprise Architectures in industry, and have been successfully applied to assist companies to optimise interdependencies and relationships between...model for more in-depth analysis of EM processes, and for use in tandem with other studies that apply modeling and simulation to assess EM...for use in tandem with other studies that apply modeling and simulation to assess EM operational effectiveness before and after changing elements
Effectively Managing the Air Force Enterprise Architecture
2005-01-18
infrastructure, systems development, and strategic data planning. Denzin and Lincoln suggest that a content analysis is an acceptable research...methodology for this type of data (Denzin and Lincoln, 2000). Leedy and Ormrod agree that a content analysis is the systematic examination of written...2003). Advances in Mixed Method Design. Thousand Oaks, CA, Sage. Denzin, N. K. and Y. S. Lincoln (2000). Handbook of Qualitative Research
Defense Against National Vulnerabilities in Public Data
2017-02-28
ingestion of subscription-based precision data sources (Business Intelligence Databases, Monster, others). Flexible data architecture that allows for... Architecture Objective: Develop a data acquisition architecture that can successfully ingest 1,000,000 records per hour from up to 100 different open...data sources. Developed and operate a data acquisition architecture comprised of the four following major components: Robust website
Database architectures for Space Telescope Science Institute
NASA Astrophysics Data System (ADS)
Lubow, Stephen
1993-08-01
At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
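A minimal sketch of the extended client-server pattern described above, with invented class and table names: the application client issues generic query requests, and an intermediate server (standing in for the STDB/NET server) translates them for the DBMS server and relays the results.

```python
import sqlite3

class StdbNetServer:
    """Stands in for the intermediate server that sits next to the DBMS server."""
    def __init__(self, dbms_connection):
        self.dbms = dbms_connection

    def execute(self, generic_query):
        # Translate the generic request into vendor-specific SQL (trivial here),
        # forward it to the DBMS server, and pass the rows back to the client.
        sql = f"SELECT {', '.join(generic_query['columns'])} FROM {generic_query['table']}"
        return self.dbms.execute(sql).fetchall()

# Application client side: only generic calls, no knowledge of the vendor DBMS.
dbms = sqlite3.connect(":memory:")
dbms.execute("CREATE TABLE observations (target TEXT, exposure_s REAL)")
dbms.execute("INSERT INTO observations VALUES ('NGC 1275', 1200.0)")
server = StdbNetServer(dbms)
print(server.execute({"table": "observations", "columns": ["target", "exposure_s"]}))
```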
Applications of Ontologies in Knowledge Management Systems
NASA Astrophysics Data System (ADS)
Rehman, Zobia; Kifor, Claudiu V.
2014-12-01
Enterprises are realizing that their core asset in the 21st century is knowledge. In an organization, knowledge resides in databases, knowledge bases, filing cabinets and people's heads. Organizational knowledge is distributed in nature, and its poor management causes repetition of activities across the enterprise. To get true benefits from this asset, it is important for an organization to "know what it knows". That is why many organizations are investing heavily in managing their knowledge. Artificial intelligence techniques have made a huge contribution to organizational knowledge management. In this article we review the applications of ontologies in the knowledge management realm.
Maritime domain awareness community of interest net centric information sharing
NASA Astrophysics Data System (ADS)
Andress, Mark; Freeman, Brian; Rhiddlehover, Trey; Shea, John
2007-04-01
This paper highlights the approach taken by the Maritime Domain Awareness (MDA) Community of Interest (COI) in establishing an approach to data sharing that seeks to overcome many of the obstacles to sharing both within the federal government and with international and private sector partners. The approach uses the DOD Net-Centric Data Strategy, employed through the Net-Centric Enterprise Services (NCES) Service-Oriented Architecture (SOA) foundation provided by the Defense Information Systems Agency (DISA), but is unique in that the community is made up of more than just Defense agencies. For the first pilot project, the MDA COI demonstrated how four agencies from DOD, the Intelligence Community, the Department of Homeland Security (DHS), and the Department of Transportation (DOT) could share Automatic Identification System (AIS) data in a common format using shared enterprise service components.
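As a rough illustration of sharing AIS data "in a common format", the sketch below normalises position reports from two hypothetical partner feeds into one record layout before publication; all field names and values are invented.

```python
def normalize_dod(msg):
    # One agency feed uses upper-case AIS field names.
    return {"mmsi": msg["MMSI"], "lat": msg["LAT"], "lon": msg["LON"], "source": "DOD"}

def normalize_dhs(msg):
    # Another feed nests the position in a tuple under a different key.
    return {"mmsi": msg["vessel_id"], "lat": msg["position"][0],
            "lon": msg["position"][1], "source": "DHS"}

feeds = [
    (normalize_dod, {"MMSI": 366999712, "LAT": 29.31, "LON": -94.79}),
    (normalize_dhs, {"vessel_id": 366999712, "position": (29.32, -94.80)}),
]
common = [fn(msg) for fn, msg in feeds]   # records now share one layout for enterprise services
print(common)
```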
NASA Astrophysics Data System (ADS)
Garcia-Gonzalez, Juan P.; Gacitua-Decar, Veronica; Pahl, Claus
Providing mobility to participants of business processes is an increasing trend in the banking sector. Independence from a physical place to interact with clients, while being able to use the information managed in the banking applications, is one of the benefits of mobile business processes. Challenges arising from this approach include dealing with occasionally connected communication; security issues regarding the exposure of internal information on devices that could be lost; and restrictions on the capacity of mobile devices. This paper presents our experience in implementing a service-based architecture solution to extend centralised resources from a financial institution to a mobile platform.
NASA Astrophysics Data System (ADS)
Wattawa, Scott
1995-11-01
Offering interactive services and data in a hybrid fiber/coax cable system requires the coordination of a host of operations and business support systems. New service offerings and network growth and evolution create never-ending changes in the network infrastructure. Agent-based enterprise models provide a flexible mechanism for systems integration of service and support systems. Agent models also provide a mechanism to decouple interactive services from network architecture. By using the Java programming language, agents may be made safe, portable, and intelligent. This paper investigates the application of the Object Management Group's Common Object Request Broker Architecture to the integration of a multiple services metropolitan area network.
Design and implementation of workflow engine for service-oriented architecture
NASA Astrophysics Data System (ADS)
Peng, Shuqing; Duan, Huining; Chen, Deyun
2009-04-01
As computer networks develop rapidly and enterprise applications become increasingly distributed, traditional workflow engines show deficiencies such as complex structure, poor stability, poor portability, limited reusability and difficult maintenance. In this paper, in order to improve the stability, scalability and flexibility of workflow management systems, a four-layer architecture for a workflow engine based on SOA is put forward in accordance with the XPDL standard of the Workflow Management Coalition, the route control mechanism of the control model is accomplished, the scheduling strategy for cyclic and acyclic routing is designed, and a workflow engine that adopts technologies such as XML, JSP and EJB is implemented.
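The scheduling strategy mentioned above distinguishes acyclic from cyclic routing. Purely as an illustration (not the paper's implementation, and with invented activity names), the sketch below dispatches activities in dependency order and falls back to bounded loop execution for any activities left inside a cycle.

```python
# Illustrative route-control step: topological order for the acyclic portion,
# remaining (cyclic) activities treated as a loop body. Names are invented.
from collections import defaultdict, deque

def schedule(activities: dict[str, set[str]]) -> list[str]:
    """activities maps each activity to the set of activities it depends on."""
    indegree = {a: len(deps) for a, deps in activities.items()}
    dependents = defaultdict(list)
    for a, deps in activities.items():
        for d in deps:
            dependents[d].append(a)

    ready = deque(a for a, n in indegree.items() if n == 0)
    order = []
    while ready:                                  # acyclic routing: topological order
        a = ready.popleft()
        order.append(a)
        for nxt in dependents[a]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    cyclic = [a for a in activities if a not in order]
    order.extend(cyclic)                          # cyclic routing: execute as a bounded loop
    return order

print(schedule({"start": set(), "review": {"start", "rework"}, "rework": {"review"}}))
```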
NASA Astrophysics Data System (ADS)
Ferrari, F.; Medici, M.
2017-02-01
Since 2005, the DIAPReM Centre of the Department of Architecture of the University of Ferrara, in collaboration with the "Centro Studi Leon Battista Alberti" Foundation and the Consorzio Futuro in Ricerca, has been carrying out a research project for the creation of 3D databases that could allow the development of a critical interpretation of Alberti's architectural work. The project is primarily based on a common three-dimensional integrated survey methodology for the creation of a navigable multilayered database. The research allows reiterative metrical analysis, thanks to the use of coherent data, enabling researchers, art historians and scholars to check and validate hypotheses on Alberti's architectural work. Coherently with this methodological framework, two case studies are presented in this paper: the church of San Sebastiano in Mantua and the church of the Santissima Annunziata in Florence. Furthermore, thanks to a brief introduction of further developments of the project, a short graphical analysis of preliminary results on the Tempio Malatestiano in Rimini opens new perspectives of research.
An architecture for a brain-image database
NASA Technical Reports Server (NTRS)
Herskovits, E. H.
2000-01-01
The widespread availability of methods for noninvasive assessment of brain structure has enabled researchers to investigate neuroimaging correlates of normal aging, cerebrovascular disease, and other processes; we designate such studies as image-based clinical trials (IBCTs). We propose an architecture for a brain-image database, which integrates image processing and statistical operators, and thus supports the implementation and analysis of IBCTs. The implementation of this architecture is described and results from the analysis of image and clinical data from two IBCTs are presented. We expect that systems such as this will play a central role in the management and analysis of complex research data sets.
The WLCG Messaging Service and its Future
NASA Astrophysics Data System (ADS)
Cons, Lionel; Paladin, Massimo
2012-12-01
Enterprise messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low level monitoring to experiments dashboards. The production messaging service currently used by WLCG includes four tightly coupled brokers operated by EGI (running Apache ActiveMQ and designed to host the Grid operational tools such as SAM) as well as two dedicated services for ATLAS-DDM and experiments dashboards (currently also running Apache ActiveMQ). In the future, this service is expected to grow in numbers of applications supported, brokers and technologies. The WLCG Messaging Roadmap identified three areas with room for improvement (security, scalability and availability/reliability) as well as ten practical recommendations to address them. This paper describes a messaging service architecture that is in line with these recommendations as well as a software architecture based on reusable components that ease interactions with the messaging service. These two architectures will support the growth of the WLCG messaging service.
End-to-end network models encompassing terrestrial, wireless, and satellite components
NASA Astrophysics Data System (ADS)
Boyarko, Chandler L.; Britton, John S.; Flores, Phil E.; Lambert, Charles B.; Pendzick, John M.; Ryan, Christopher M.; Shankman, Gordon L.; Williams, Ramon P.
2004-08-01
Network models that reflect true end-to-end architectures such as the Transformational Communications Architecture need to encompass terrestrial, wireless and satellite components to truly represent all of the complexities in a worldwide communications network. Use of best-in-class tools including OPNET, Satellite Tool Kit (STK) and Popkin System Architect, together with their well-known XML-friendly definitions, such as OPNET Modeler's Data Type Description (DTD), or socket-based data transfer modules, such as STK/Connect, enables the sharing of data between applications for more rapid development of end-to-end system architectures and a more complete system design. By sharing the results of and integrating best-in-class tools we are able to (1) promote sharing of data, (2) enhance the fidelity of our results and (3) allow network and application performance to be viewed in the context of the entire enterprise and its processes.
Orthographic Software Modelling: A Novel Approach to View-Based Software Engineering
NASA Astrophysics Data System (ADS)
Atkinson, Colin
The need to support multiple views of complex software architectures, each capturing a different aspect of the system under development, has been recognized for a long time. Even the very first object-oriented analysis/design methods such as the Booch method and OMT supported a number of different diagram types (e.g. structural, behavioral, operational) and subsequent methods such as Fusion, Kruchten's 4+1 views and the Rational Unified Process (RUP) have added many more views over time. Today's leading modeling languages, such as the UML and SysML, are also oriented towards supporting different views (i.e. diagram types), each able to portray a different facet of a system's architecture. More recently, so-called enterprise architecture frameworks such as the Zachman Framework, TOGAF and RM-ODP have become popular. These add a whole set of new non-functional views to the views typically emphasized in traditional software engineering environments.
Development of a data warehouse at an academic health system: knowing a place for the first time.
Dewitt, Jocelyn G; Hampton, Philip M
2005-11-01
In 1998, the University of Michigan Health System embarked upon the design, development, and implementation of an enterprise-wide data warehouse, intending to use prioritized business questions to drive its design and implementation. Because of the decentralized nature of the academic health system and the development team's inability to identify and prioritize those institutional business questions, however, a bottom-up approach was used to develop the enterprise-wide data warehouse. Specific important data sets were identified for inclusion, and the technical team designed the system with an enterprise view and architecture rather than as a series of data marts. Using this incremental approach of adding data sets, institutional leaders were able to experience and then further define successful use of the integrated data made available to them. Even as requests for the use and expansion of the data warehouse outstrip the resources assigned for support, the data warehouse has become an integral component of the institution's information management strategy. The authors discuss the approach, process, current status, and successes and failures of the data warehouse.
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Striped Data Server for Scalable Parallel Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Jin; Gutsche, Oliver; Mandrichenko, Igor
A columnar data representation is known to be an efficient way for data storage, specifically in cases when the analysis is often done based only on a small fragment of the available data structures. A data representation like Apache Parquet is a step forward from a columnar representation, which splits data horizontally to allow for easy parallelization of data analysis. Based on the general idea of columnar data storage, working on the [LDRD Project], we have developed a striped data representation, which, we believe, is better suited to the needs of High Energy Physics data analysis. A traditional columnar approach allows for efficient data analysis of complex structures. While keeping all the benefits of columnar data representations, the striped mechanism goes further by enabling easy parallelization of computations without requiring special hardware. We will present an implementation and some performance characteristics of such a data representation mechanism using a distributed no-SQL database or a local file system, unified under the same API and data representation model. The representation is efficient and at the same time simple, so that it allows for a common data model and APIs for a wide range of underlying storage mechanisms such as distributed no-SQL databases and local file systems. Striped storage adopts Numpy arrays as its basic data representation format, which makes it easy and efficient to use in Python applications. The Striped Data Server is a web service which hides the server implementation details from the end user, easily exposes data to WAN users, and allows well-known, mature data caching solutions to be used to further increase data access efficiency. We are considering the Striped Data Server as the core of an enterprise-scale data analysis platform for High Energy Physics and similar areas of data processing. We have been testing this architecture with a 2 TB dataset from a CMS dark matter search and plan to expand it to the multi-100 TB or even PB scale. We will present the striped format, the Striped Data Server architecture and performance test results.
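A toy illustration of the striped idea, assuming an invented stripe size, column names and analysis step (this is not the Striped Data Server API): each column is held as a NumPy array split horizontally into stripes so that a per-stripe analysis step can run in parallel without special hardware.

```python
# Each column is a NumPy array split horizontally into fixed-size stripes;
# per-stripe work is then farmed out in parallel. All names are illustrative.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

STRIPE = 4  # rows per stripe (tiny, for demonstration only)

def stripe_column(values: np.ndarray) -> list[np.ndarray]:
    return [values[i:i + STRIPE] for i in range(0, len(values), STRIPE)]

# Two columns of an event table, stored column-wise and striped horizontally.
pt  = stripe_column(np.array([12.1, 45.0, 7.3, 88.2, 15.6, 31.9, 64.4, 9.8, 22.5]))
eta = stripe_column(np.array([0.1, -1.2, 2.0, 0.4, -0.7, 1.1, -2.3, 0.9, 1.8]))

def partial_sum(stripe_pair):
    """Per-stripe analysis step: sum pt for events passing an |eta| cut."""
    pt_s, eta_s = stripe_pair
    return float(pt_s[np.abs(eta_s) < 1.5].sum())

with ThreadPoolExecutor() as pool:
    total = sum(pool.map(partial_sum, zip(pt, eta)))
print(total)
```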
High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan
2010-10-04
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture, and present the results of our deploying that for the analysis of the Billion Triple dataset with respect to its semantic factors.
Federal Register 2010, 2011, 2012, 2013, 2014
2014-12-24
... for the semantic content of orthoimagery databases for public agencies and private enterprises. It... to the public on the FGDC Web site, www.fgdc.gov . DATES: Comments on the draft Part 2 (revision...
Employing the Intelligence Cycle Process Model Within the Homeland Security Enterprise
2013-12-01
the Iraq anti-war movement, a former U.S. Congresswoman, the U.S. Treasury Department and hip hop bands to spread Sharia law in the U.S. A Virginia...challenges remain with threat notification, access to information, and database management of information that may have contributed the 2013 Boston...The FBI said it took a number of investigative steps to check on the request, including looking at his travel history, checking databases for
Searching Across the International Space Station Databases
NASA Technical Reports Server (NTRS)
Maluf, David A.; McDermott, William J.; Smith, Ernest E.; Bell, David G.; Gurram, Mohana
2007-01-01
Data access in the enterprise generally requires us to combine data from different sources and different formats. It is thus advantageous to focus on the intersection of the knowledge across sources and domains; keeping irrelevant knowledge around only serves to make the integration more unwieldy and more complicated than necessary. A context search over multiple domains is proposed in this paper, using context-sensitive queries to support disciplined manipulation of domain knowledge resources. The objective of a context search is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The search formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper demonstrates a new paradigm in composition of information for enterprise applications. In particular, it discusses an approach to achieving data integration across multiple sources in a manner that does not require heavy investment in database and middleware maintenance. This lean approach to integration leads to cost-effectiveness and scalability of data integration with an underlying schemaless object-relational database management system. This highly scalable, information-on-demand framework, called NX-Search, is an implementation of an information system built on NETMARK. NETMARK is a flexible, high-throughput open database integration framework for managing, storing, and searching unstructured or semi-structured arbitrary XML and HTML used widely at the National Aeronautics and Space Administration (NASA) and in industry.
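As a loose illustration of a context-sensitive query over semantically disjoint domains (the names and storage layout are invented and do not reflect the NX-Search or NETMARK APIs), the sketch below evaluates a search only over the intersection of the contexts the caller names, keeping out-of-scope domain knowledge out of the result.

```python
# Schemaless records tagged with a domain; the query is scoped to the contexts
# supplied by the caller. All names and records here are illustrative only.
documents = [
    {"domain": "missions", "text": "EO-1 hyperspectral imaging mission"},
    {"domain": "sensors",  "text": "Hyperion imaging spectrometer"},
    {"domain": "finance",  "text": "FY07 procurement ledger"},
]

def context_search(term: str, contexts: set[str]) -> list[dict]:
    """Return documents whose domain is in scope and whose text matches."""
    return [d for d in documents
            if d["domain"] in contexts and term.lower() in d["text"].lower()]

print(context_search("imaging", {"missions", "sensors"}))
```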
CaveMan Enterprise version 1.0 Software Validation and Verification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, David
The U.S. Department of Energy Strategic Petroleum Reserve stores crude oil in caverns solution-mined in salt domes along the Gulf Coast of Louisiana and Texas. The CaveMan software program has been used since the late 1990s as one tool to analyze pressure measurements monitored at each cavern. The purpose of this monitoring is to catch potential cavern integrity issues as soon as possible. The CaveMan software was written in Microsoft Visual Basic, and embedded in a Microsoft Excel workbook; this method of running the CaveMan software is no longer sustainable. As such, a new version called CaveMan Enterprise has been developed. CaveMan Enterprise version 1.0 does not have any changes to the CaveMan numerical models. CaveMan Enterprise represents, instead, a change from desktop-managed workbooks to an enterprise framework, moving data management into coordinated databases and porting the numerical modeling codes into the Python programming language. This document provides a report of the code validation and verification testing.
Villotti, Patrizia; Zaniboni, Sara; Corbière, Marc; Guay, Stéphane; Fraccaroli, Franco
2018-06-01
People with mental illnesses face stigma that hinders their full integration into society. Work is a major determinant of social inclusion, however, people with mental disorders have fewer opportunities to work. Emerging evidence suggests that social enterprises help disadvantaged people with their work integration process. The purpose of this study is to enhance our understanding about how perceptions of stigma can be decreased for people with mental disorders throughout their work experience in a social enterprise. Using a longitudinal study design, 310 individuals with mental disorders employed in Italian social enterprises completed a battery of questionnaires on individual (e.g., severity of symptoms; occupational self-efficacy) and environmental (e.g., social support; organizational constraints) variables. Of the 223 individuals potentially eligible at the 12-month follow up, 139 completed a battery of questionnaires on social and working skills, perceived work productivity and perceived stigma. Path analyses were used to test a model delineating how people with mental disorders working in social enterprises improve social and work outcomes (i.e., motivation, skills and productivity), and reduce the perception of being stigmatized. Working in a social enterprise enhances working social skills, which leads to a perception of higher productivity and, consequently, the perception of being discriminated against and stigmatized is reduced. Social enterprise provides a context in which people with mental disorders reach a sense of work-related and social competence. This sense of competence helps them to reduce perceived stigma, which is a crucial step toward social inclusion. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Gamification Can It Increase the Quantity and Quality of Software
2012-04-25
Gamification? • iCollege – Visits to Second Life, debates in Second Life. • Michelin – teaches “enterprise architecture*” • Nike+ – http...gamification to train interns, restructure proteins, and help solve medical mysteries – Foldit – www.wired.com/medtech/genetics/magazine/17-05...education. • IT Services – Help desks and network administrators are areas where gamification techniques could prove useful. • Businesses and Marketing
Fiscal Year 2008 Agency Financial Report
2008-11-17
searching existing data sources , gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments...core financial system or in the mixed systems that provide source transactional information. The Business Enterprise Architecture is the Department’s...924,834.2) $ (1,206,769.4) $ 302,387.7 $ 0 $ (904,381.7) Budgetary Financing Sources : Appropriations used 3.2 662,422.0 0 662,425.2 3.3
Exploring a Net Centric Architecture Using the Net Warrior Airborne Early Warning and Control Node
2007-12-01
implemented in different languages. Customisation Interfaces for customising components. User-friendly customisation tools will use these interfaces...Sun Enterprise Java Beans. Customisation Customisation in the context of components is defined in [Heineman & Councill 2001, p. 42] as ‘…the ability...of a consumer to adapt a component prior to its installation or use’. Customisation can be facilitated through the use of specialised interfaces
An Analysis of the Navy Manpower, Personnel, Training and Education Architecture
2017-03-01
the courses offered on the Navy 11 Education and Training Command (NETC) Learning Management System (LMS), better known as “E-Learning,” are... Training ,” offers a trimmed down version of the Defense Manpower Course offered at the Naval Postgraduate School (NPS). None fully satisfy training ...Environments (ROC/POE). • “ Training requirements are generated by customer organizations (COCOM’s, Type Commanders, Enterprises, Agencies, and other
Creating a National Framework for Cybersecurity: An Analysis of Issues and Options
2005-02-22
of those measures; and the associated field of professional endeavor. Virtually any element of cyberspace can be at risk , and the degree of...weaknesses in U.S. cybersecurity is an area of some controversy. However, some components appear to be sources of potentially significant risk because either...security into enterprise architecture, using risk management, and using metrics. These different approaches all have different strengths and weaknesses
Strategic Mobility 21 Transition Plan: From Research Federation to Business Enterprise
2010-12-31
Transportation Management System (GTMS), Service Oriented Architecture (SOA), Software-as-a-Service (SaaS), Joint Capability Technology Demonstration...the Software-as-a-Service (SaaS) format, whereby users access the application with the appropriate Internet authorizations. Security is provided by...integrating best-of-breed dual-use systems deployed in the software-as-a-service (SaaS) environment. It includes single sign-on capabilities and was
Working Group 11F Opening Comments NASA Planning for NASA's Future Ground Systems
NASA Technical Reports Server (NTRS)
Smith, Danford S.
2016-01-01
These are simple charts for the introductory comments to be made at the start of a panel session at the Ground System Architecture Workshop (GSAW2016). It is not meant as a formal paper, but rather contains information to prompt further discussion of the panel members and audience. The panel topic is: Embracing Change via the Use of Service-Based Frameworks and Products in an Enterprise.
caGrid 1.0: an enterprise Grid infrastructure for biomedical research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oster, S.; Langella, S.; Hastings, S.
Objective: To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design: An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including (1) discovery, (2) integrated and large-scale data analysis, and (3) coordinated study. Measurements: The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results: The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL:
Integrating MPI and deduplication engines: a software architecture roadmap.
Baksi, Dibyendu
2009-03-01
The objective of this paper is to clarify the major concepts related to architecture and design of patient identity management software systems so that an implementor looking to solve a specific integration problem in the context of a Master Patient Index (MPI) and a deduplication engine can address the relevant issues. The ideas presented are illustrated in the context of a reference use case from the Integrating the Healthcare Enterprise Patient Identifier Cross-referencing (IHE PIX) profile. Sound software engineering principles using the latest design paradigm of model driven architecture (MDA) are applied to define different views of the architecture. The main contribution of the paper is a clear software architecture roadmap for implementors of patient identity management systems. Conceptual design in terms of static and dynamic views of the interfaces is provided as an example of a platform independent model. This makes the roadmap applicable to any specific solutions of MPI, deduplication library or software platform. Stakeholders in need of integration of MPIs and deduplication engines can evaluate vendor specific solutions and software platform technologies in terms of fundamental concepts and can make informed decisions that preserve investment. This also allows freedom from vendor lock-in and the ability to kick-start integration efforts based on a solid architecture.
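In the spirit of the platform-independent model discussed above, and purely as a hedged sketch with hypothetical names (not the IHE PIX interfaces or any vendor API), the MPI and the deduplication engine can be kept behind separate interfaces so that either side can later be replaced by a specific product.

```python
# Platform-independent sketch: the MPI cross-references identifiers, the
# deduplication engine only scores candidate matches. Names are hypothetical.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class PatientRecord:
    local_id: str
    domain: str          # assigning authority, e.g. a hospital system
    name: str
    birth_date: str

class DeduplicationEngine(ABC):
    @abstractmethod
    def match(self, a: PatientRecord, b: PatientRecord) -> float:
        """Return a match score in [0, 1]."""

class MasterPatientIndex:
    """Cross-references local identifiers the engine judges to be the same person."""
    def __init__(self, engine: DeduplicationEngine, threshold: float = 0.9):
        self.engine, self.threshold = engine, threshold
        self.records: list[PatientRecord] = []

    def register(self, rec: PatientRecord) -> list[str]:
        links = [r.local_id for r in self.records
                 if self.engine.match(rec, r) >= self.threshold]
        self.records.append(rec)
        return links                     # identifiers cross-referenced to rec

class ExactNameEngine(DeduplicationEngine):
    def match(self, a, b):
        return 1.0 if (a.name, a.birth_date) == (b.name, b.birth_date) else 0.0

mpi = MasterPatientIndex(ExactNameEngine())
mpi.register(PatientRecord("42", "hospital-A", "Jane Doe", "1980-02-01"))
print(mpi.register(PatientRecord("A-7", "clinic-B", "Jane Doe", "1980-02-01")))
```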
An economic analysis of disaggregation of space assets: Application to GPS
NASA Astrophysics Data System (ADS)
Hastings, Daniel E.; La Tour, Paul A.
2017-05-01
New ideas, technologies and architectural concepts are emerging with the potential to reshape the space enterprise. One of those new architectural concepts is the idea that rather than aggregating payloads onto large very high performance buses, space architectures should be disaggregated with smaller numbers of payloads (as small as one) per bus and the space capabilities spread across a correspondingly larger number of systems. The primary rationale is increased survivability and resilience. The concept of disaggregation is examined from an acquisition cost perspective. A mixed system dynamics and trade space exploration model is developed to look at long-term trends in the space acquisition business. The model is used to examine the question of how different disaggregated GPS architectures compare in cost to the well-known current GPS architecture. A generation-over-generation examination of policy choices is made possible through the application of soft systems modeling of experience and learning effects. The assumptions that are allowed to vary are: design lives, production quantities, non-recurring engineering and time between generations. The model shows that there is always a premium in the first generation to be paid to disaggregate the GPS payloads. However, it is possible to construct survivable architectures where the premium after two generations is relatively low.
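The abstract refers to soft systems modeling of experience and learning effects on generation-over-generation cost. One conventional, purely illustrative way to express such learning effects (not necessarily the cost model used in the paper) is Wright's learning curve, under which the trade between a few large buses and many smaller disaggregated buses per generation can be examined:

```latex
% Illustrative only: a standard learning-curve form, not the paper's own model.
\[
  C_n = C_1\, n^{\log_2 s},
  \qquad
  \text{total cost of a generation of } N \text{ units} = \sum_{n=1}^{N} C_1\, n^{\log_2 s}
\]
% C_1 : first-unit cost of a bus or payload
% n   : cumulative unit number within the generation
% s   : learning-curve slope; s = 0.9 means each doubling of the quantity
%       produced lowers unit cost by about 10%.
% Disaggregation raises N (and thus the first-generation premium) while moving
% production further down the curve in later generations.
```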
Mexican Art and Architecture Databases: Needs, Achievements, Problems.
ERIC Educational Resources Information Center
Barberena, Elsa
At the international level, a lack of diffusion of Mexican art and architecture in indexes and abstracts has been detected. Reasons for this could be lack of continuity in publications, the use of the Spanish language, lack of interest in Mexican art and architecture, and sporadic financial resources. Nevertheless, even though conditions are not…
77 FR 187 - Federal Acquisition Regulation; Transition to the System for Award Management (SAM)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-03
... architecture. Deletes reference to ``business partner network'' at 4.1100, Scope, which is no longer necessary...) architecture has begun. This effort will transition the Central Contractor Registration (CCR) database, the...) to the new architecture. This case provides the first step in updating the FAR for these changes, and...
75 FR 5579 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-03
... with re-entry controlled by passwords. The DLA Enterprise Hotline Program Database is also password...: * * * * * System location: Delete entry and replace with ``Director, DLA Accountability Office (DA), Headquarters....'' * * * * * Retention and disposal: Delete entry and replace with ``Records are destroyed/deleted 10 years after...
Advanced Technology Lifecycle Analysis System (ATLAS) Technology Tool Box (TTB)
NASA Technical Reports Server (NTRS)
Doyle, Monica; ONeil, Daniel A.; Christensen, Carissa B.
2005-01-01
The Advanced Technology Lifecycle Analysis System (ATLAS) is a decision support tool designed to aid program managers and strategic planners in determining how to invest technology research and development dollars. It is an Excel-based modeling package that allows a user to build complex space architectures and evaluate the impact of various technology choices. ATLAS contains system models, cost and operations models, a campaign timeline and a centralized technology database. Technology data for all system models is drawn from a common database, the ATLAS Technology Tool Box (TTB). The TTB provides a comprehensive, architecture-independent technology database that is keyed to current and future timeframes.
1981-05-01
factors that cause damage are discussed below. a. Architectural elements. Damage to architectural elements can result in both significant dollar losses...hazard priority-ranking procedure are: 1. To produce meaningful results which are as simple as possible, considering the existing databases. 2. To...minimize the amount of data required for meaningful results, i.e., the database should contain only the most fundamental building characteristics. 3. To
A GH-Based Ontology to Support Applications for Automating Decision Support
2005-03-01
architecture for a decision support system. For this reason, it obtains data from, and updates, a database. IDA also wanted the prototype’s architecture...Chief Information Officer; CoABS Control of Agent Based Systems; DBMS Database Management System; DoD Department of Defense; DTD Document Type...Generic Hub, the Moyeu Générique, and the Generische Nabe, specifying each as a separate service description with property names and values of the GH
NASA Technical Reports Server (NTRS)
Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)
2004-01-01
Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.
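A compact sketch of the three-tier arrangement described above, with hypothetical names rather than the patent's actual interfaces: domain-specific extension modules register with a universal middle tier, and the database engine interacts only with that middle tier.

```python
# Illustrative only: extension-module registration and dispatch across tiers.
from typing import Protocol

class DomainExtension(Protocol):
    def search(self, query: str) -> list[str]: ...

class TextExtension:
    def search(self, query: str) -> list[str]:
        return [f"text hit for '{query}'"]

class ImageExtension:
    def search(self, query: str) -> list[str]:
        return [f"image hit for '{query}'"]

class UniversalExtension:
    """Middle tier: routes engine requests to registered domain modules."""
    def __init__(self):
        self._modules: dict[str, DomainExtension] = {}

    def register(self, domain: str, module: DomainExtension) -> None:
        self._modules[domain] = module

    def search(self, domain: str, query: str) -> list[str]:
        return self._modules[domain].search(query)

class DatabaseEngine:
    """Top tier: only ever calls the universal middle tier."""
    def __init__(self, middle: UniversalExtension):
        self.middle = middle

    def query(self, domain: str, text: str) -> list[str]:
        return self.middle.search(domain, text)

middle = UniversalExtension()
middle.register("text", TextExtension())
middle.register("image", ImageExtension())
print(DatabaseEngine(middle).query("image", "coastline"))
```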
NASA Astrophysics Data System (ADS)
Hodijah, A.; Sundari, S.; Nugraha, A. C.
2018-05-01
As a local government agency that performs public services, the General Government Office has already utilized the Reporting Information System of Local Government Implementation (E-LPPD). However, E-LPPD has upgrade limitations: its integration processes cannot accommodate the General Government Office's needs for achieving Good Government Governance (GGG), while success stories show that the ultimate goal of e-government implementation requires good governance practices. Currently, citizens demand public services of the standard the private sector provides, which calls for service innovation that utilizes the legacy system in a service-based e-government implementation, with Service Oriented Architecture (SOA) used to redefine business processes as a set of IT-enabled services and Enterprise Architecture from The Open Group Architecture Framework (TOGAF) as a comprehensive approach to redefining business processes as service innovation towards GGG. This paper takes as a case study the Performance Evaluation of Local Government Implementation (EKPPD) system in the General Government Office. The results show that TOGAF will guide the development of integrated business processes for the EKPPD system that fit good governance practices to attain GGG, with SOA methodology as the technical approach.
[The role of Integrating the Healthcare Enterprise (IHE) in telemedicine].
Bergh, B; Brandner, A; Heiß, J; Kutscha, U; Merzweiler, A; Pahontu, R; Schreiweis, B; Yüksekogul, N; Bronsch, T; Heinze, O
2015-10-01
Telemedicine systems are already used today in a variety of areas to improve patient care. The lack of standardization in those solutions creates a lack of interoperability between systems. Internationally accepted standards can help to solve this lack of interoperability. With Integrating the Healthcare Enterprise (IHE), a worldwide initiative of users and vendors is working on the use of defined standards for specific use cases by describing those use cases in so-called IHE Profiles. The aim of this work is to determine how telemedicine applications can be implemented using IHE Profiles. Based on a literature review, exemplary telemedicine applications are described and the technical capabilities of IHE Profiles are evaluated. These IHE Profiles are examined for their usability and are then evaluated in exemplary telemedicine application architectures. There are IHE Profiles which are useful for intersectoral patient records (e.g. the PEHR at Heidelberg) as well as for point-to-point communication where no patient record is involved. In the area of patient records, the IHE Profile "Cross-Enterprise Document Sharing (XDS)" is often used. Point-to-point communication can be supported using the IHE Profile "Cross-Enterprise Document Media Interchange (XDM)". IHE-based telemedicine applications offer caregivers the possibility of being informed about their patients using data from intersectoral patient records, and there are also possible savings from reusing the standardized interfaces in other scenarios.
High performance semantic factoring of giga-scale semantic graph databases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Adolf, Bob; Haglin, David
2010-10-01
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture, and present the results of our deploying that for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.
Efficiently Distributing Component-based Applications Across Wide-Area Environments
2002-01-01
a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange...product database Customer Serves as a façade to Order and Account Stateful Session Beans ShoppingCart Maintains list of items to be bought by customer...Pet Store tests; and JBoss 3.0.3 with Jetty 4.1.0, for the RUBiS tests) and a single database server (Oracle 8.1.7 Enterprise Edition), each running
NASA Astrophysics Data System (ADS)
Anderson, D.; Lewis, D.; O'Hara, C.; Katragadda, S.
2006-12-01
The Partnership Network Knowledge Base (PNKB) is being developed to provide connectivity and deliver content for the research information needs of NASA's Applied Science Program and related scientific communities of practice. Data has been collected which will permit users to identify and analyze the current network of interactions between organizations within the community of practice, harvest research results fixed to those interactions, and identify potential collaborative opportunities to further research streams. The PNKB is being developed in parallel with the Research Projects Knowledge Base (RPKB) and will be deployed in a manner that is fully compatible and interoperable with the NASA enterprise architecture (EA). Information needs have been assessed through a survey of potential users, evaluations of existing NASA resource users, and collaboration between Stennis Space Center and The Mississippi Research Consortium (MRC). The PNKB will assemble information on funded research institutions and categorize the research emphasis of each as it relates to NASA's six major science focus areas and 12 national applications. The PNKB will include information about organizations that conduct NASA Earth Science research, such as principal investigators' affiliations, contact information, relationship type with NASA and other NASA partners, funding arrangements, and formal agreements like memoranda of understanding. To further the utility of the PNKB, relational links have been integrated into the RPKB - which will contain data about projects awarded from NASA research solicitations, project investigator information, research publications, NASA data products employed, and model or decision support tools used or developed as well as new data product information. The combined PNKB and RPKB will be developed in a multi-tier architecture that will include a SQL Server relational database backend, middleware, and front end client interfaces for data entry.
The personal receiving document management and the realization of email function in OAS
NASA Astrophysics Data System (ADS)
Li, Biqing; Li, Zhao
2017-05-01
This software is an independent system suitable for small and medium enterprises; it contains personal office, scientific research project management and system management functions, runs independently in the relevant environment, and addresses practical needs. It is developed with the currently popular B/S (browser/server) structure and ASP.NET technology, using the Windows 7 operating system, with Microsoft SQL Server 2005 as the database and Visual Studio 2008 as the development platform.
A New Approach To Secure Federated Information Bases Using Agent Technology.
ERIC Educational Resources Information Center
Weippi, Edgar; Klug, Ludwig; Essmayr, Wolfgang
2003-01-01
Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…
Software database creation for investment property measurement according to international standards
NASA Astrophysics Data System (ADS)
Ponomareva, S. V.; Merzliakova, N. A.
2018-05-01
The article deals with investment property measurement and accounting problems at the international, national and enterprise levels. The need to create software for investment property measurement according to International Accounting Standards was substantiated, and the necessary software functions and processes were described.
Application developer's tutorial for the CSM testbed architecture
NASA Technical Reports Server (NTRS)
Underwood, Phillip; Felippa, Carlos A.
1988-01-01
This tutorial serves as an illustration of the use of the programmer interface on the CSM Testbed Architecture (NICE). It presents a complete, but simple, introduction to using both the GAL-DBM (Global Access Library-Database Manager) and CLIP (Command Language Interface Program) to write a NICE processor. Familiarity with the CSM Testbed architecture is required.
Real-time traffic sign detection and recognition
NASA Astrophysics Data System (ADS)
Herbschleb, Ernst; de With, Peter H. N.
2009-01-01
The continuous growth of imaging databases increasingly requires analysis tools for feature extraction. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800x2,400 pixels. Because of the size of the database, high reliability as well as high throughput is required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining color and specific spatial information. The first stage contains an area-limitation step which is performance critical for both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput is 35 Hz for line-of-sight images of 800×600 pixels and 4 Hz for panorama images. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.
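A schematic rendering of the three-stage organization described above, using trivial placeholder predicates rather than the authors' detectors (the thresholds, the red-colour cue and the region-of-interest rule are assumptions for illustration only):

```python
# Stage 1 limits the search area, stage 2 finds colour-based candidates,
# stage 3 validates them. The predicates are placeholders, not the paper's.
import numpy as np

def stage1_area_limitation(img: np.ndarray) -> np.ndarray:
    """Keep only rows likely to contain signs (here: the upper two thirds)."""
    return img[: 2 * img.shape[0] // 3]

def stage2_candidates(roi: np.ndarray) -> list[tuple[int, int]]:
    """Colour cue: pixels whose red channel strongly dominates."""
    mask = (roi[..., 0] > 180) & (roi[..., 1] < 80) & (roi[..., 2] < 80)
    ys, xs = np.nonzero(mask)
    return list(zip(ys.tolist(), xs.tolist()))

def stage3_validate(candidates: list[tuple[int, int]]) -> bool:
    """Accept only if enough supporting pixels are found."""
    return len(candidates) >= 20

def detect(img: np.ndarray) -> bool:
    roi = stage1_area_limitation(img)
    return stage3_validate(stage2_candidates(roi))

frame = np.zeros((600, 800, 3), dtype=np.uint8)
frame[100:110, 200:210] = (200, 30, 30)   # a synthetic red patch
print(detect(frame))                      # True: candidate survives all stages
```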
US NDC Modernization: Service Oriented Architecture Proof of Concept
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamlet, Benjamin R.; Encarnacao, Andre Villanova; Jackson, Keilan R.
2014-12-01
This report is a progress update on the US NDC Modernization Service Oriented Architecture (SOA) study describing results from a proof of concept project completed from May through September 2013. Goals for this proof of concept are to 1) gain experience configuring, using, and running an Enterprise Service Bus (ESB), 2) understand the implications of wrapping existing software in standardized interfaces for use as web services, and 3) gather performance metrics for a notional seismic event monitoring pipeline implemented using services with various data access and communication patterns. The proof of concept is a follow-on to a previous SOA performance study. Work was performed by four undergraduate summer student interns under the guidance of Sandia staff.
Automated Planning and Scheduling for Space Mission Operations
NASA Technical Reports Server (NTRS)
Chien, Steve; Jonsson, Ari; Knight, Russell
2005-01-01
Research Trends: a) Finite-capacity scheduling under more complex constraints and increased problem dimensionality (subcontracting, overtime, lot splitting, inventory, etc.). b) Integrated planning and scheduling. c) Mixed-initiative frameworks. d) Management of uncertainty (proactive and reactive). e) Autonomous agent architectures and distributed production management. f) Integration of machine learning capabilities. g) Wider scope of applications: 1) analysis of supplier/buyer protocols & tradeoffs; 2) integration of strategic & tactical decision-making; and 3) enterprise integration.
Addressing Challenges in the Acquisition of Secure Software Systems With Open Architectures
2012-04-30
as a “broker” to market specific research topics identified by our sponsors to NPS graduate students. This three-pronged approach provides for a...breaks, and the day-ending socials. Many of our researchers use these occasions to establish new teaming arrangements for future research work. In the...software (CSS) and open source software (OSS). Federal government acquisition policy, as well as many leading enterprise IT centers, now encourage the use
Building a Foundation for the Implementation of an Enterprise Architecture for the Argentinian Army
2016-06-01
foundation for execution, information technology, chief information officer, public administration...effectively implement IT standardization in the Argentinian Army, the role of Chief Information Officer (CIO) has to be created. The term was introduced...organizations, this is the role of the Chief Information Officer (CIO). The Army should appoint this position and assign responsibility and resources to it
2008-03-01
Machine [29]. OC4J applications support Java Servlets, Web services, and the following J2EE specific standards: Extensible Markup Language (XML)...IMAP Internet Message Access Protocol; IP Internet Protocol; IT Information Technology; J2EE Java Enterprise Environment; JSR 168 Java...LDAP), World Wide Web Distributed Authoring and Versioning (WebDav), Java Specification Request 168 (JSR 168), and Web Services for Remote
A Pattern for Increased Monitoring for Intellectual Property Theft by Departing Insiders
2012-04-01
2012 Technical Report CMU/SEI-2012-TR-008, ESC-TR-2012-008, CERT® Program, http://www.sei.cmu.edu...Programs Conference (PLoP) 2011 (http://www.hillside.net/plop/2011/). This material is based upon work funded and supported by the United States...research project at the CERT® Program is identifying enterprise architectural patterns to protect against the insider threat to organizations. This
Meeting Capability Goals through Effective Modelling and Experimentation of C4ISTAR Options
2011-06-01
Key Facts: 12 industry partners drawn from the major defence providers; ~80 associate members made up of small and medium sized...in the emergence of a number of effective monopolies. The UK Defence marketplace has become too small and the major equipment ‘replacement’ cycles too...(Figure 3: Environment for Capability Trading.) The environment is aligned with the MOD’s strategy for Enterprise Architecture
U.S. Army Workshop on Exploring Enterprise, System of Systems, System, and Software Architectures
2009-03-01
state of a net-centric intelligence/surveillance/reconnaissance (ISR) capability featuring DCGS by the middle of the next decade. In some situations...boundaries. The DoDAF has a relatively long history. It started as a Command, Control, Communications, Computers, Surveillance and Intelligence...Army have needed to perform tasks such as: collect and analyze intelligence information; maneuver the force; target and provide fire support; conduct
Dietzel, Matthias; Baltzer, Pascal A T; Dietzel, Andreas; Zoubi, Ramy; Gröschel, Tobias; Burmeister, Hartmut P; Bogdan, Martin; Kaiser, Werner A
2012-07-01
Differential diagnosis of lesions in MR-Mammography (MRM) remains a complex task. The aim of this MRM study was to design and to test the robustness of Artificial Neural Network architectures to predict malignancy using a large clinical database. For this IRB-approved investigation, standardized protocols and study design were applied (T1w-FLASH; 0.1 mmol/kgBW Gd-DTPA; T2w-TSE; histological verification after MRM). All lesions were evaluated by two experienced (>500 MRM) radiologists in consensus. In every lesion, 18 previously published descriptors were assessed and documented in the database. An Artificial Neural Network (ANN) was developed to process this database (The MathWorks, Inc.; feed-forward architecture, resilient back-propagation algorithm). All 18 descriptors were set as input variables, whereas the histological result (malignant vs. benign) was defined as the classification variable. Initially, the ANN was optimized in terms of "Training Epochs" (TE), "Hidden Layers" (HL), "Learning Rate" (LR) and "Neurons" (N). Robustness of the ANN was addressed by repeated evaluation cycles (n: 9) with receiver operating characteristics (ROC) analysis of the results applying 4-fold Cross Validation. The best network architecture was identified by comparing the corresponding area under the ROC curve (AUC). Histopathology revealed 436 benign and 648 malignant lesions. Enhancing the level of complexity could not increase the diagnostic accuracy of the network (P: n.s.). The optimized ANN architecture (TE: 20, HL: 1, N: 5, LR: 1.2) was accurate (mean-AUC 0.888; P: <0.001) and robust (CI: 0.885-0.892; range: 0.880-0.898). The optimized neural network showed robust performance and high diagnostic accuracy for prediction of malignancy on unknown data. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
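To make the evaluation setup concrete, the sketch below reproduces its shape on synthetic data: 18 input descriptors, one hidden layer of 5 neurons, 4-fold cross-validation and ROC AUC. scikit-learn's MLPClassifier is used here as a stand-in; it trains with gradient-based optimizers rather than the resilient back-propagation used in the study, and the generated data carry no clinical meaning.

```python
# Synthetic re-creation of the evaluation shape only; not the study's data or code.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1084                                          # 436 benign + 648 malignant, as reported
y = np.array([0] * 436 + [1] * 648)
X = rng.normal(size=(n, 18)) + y[:, None] * 0.6   # 18 descriptors, synthetic signal

aucs = []
for train, test in StratifiedKFold(n_splits=4, shuffle=True, random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=500, random_state=0)
    clf.fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))

print(f"mean AUC over 4 folds: {np.mean(aucs):.3f}")
```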
An Adaptive Database Intrusion Detection System
ERIC Educational Resources Information Center
Barrios, Rita M.
2011-01-01
Intrusion detection is difficult to accomplish when attempting to employ current methodologies when considering the database and the authorized entity. It is a common understanding that current methodologies focus on the network architecture rather than the database, which is not an adequate solution when considering the insider threat. Recent…
An Information Architect's View of Earth Observations for Disaster Risk Management
NASA Astrophysics Data System (ADS)
Moe, K.; Evans, J. D.; Cappelaere, P. G.; Frye, S. W.; Mandl, D.; Dobbs, K. E.
2014-12-01
Satellite observations play a significant role in supporting disaster response and risk management; however, data complexity is a barrier to broader use, especially by the public. In December 2013, the Committee on Earth Observation Satellites Working Group on Information Systems and Services documented a high-level reference model for the use of Earth observation satellites and associated products to support disaster risk management within the Global Earth Observation System of Systems context. The enterprise architecture identified the important role of user access to all key functions supporting situational awareness and decision-making. This paper focuses on the need to develop actionable information products from these Earth observations to simplify the discovery, access and use of tailored products. To this end, our team has developed an Open GeoSocial API proof-of-concept for GEOSS. We envision public access through mobile apps available on smart phones using common browsers, where users can set up a profile and specify a region of interest for monitoring events such as floods and landslides. Information about susceptibility and weather forecasts about flood risks can be accessed. Users can generate geo-located information and photos of local events, and these can be shared on social media. The information architecture can address usability challenges to transform sensor data into actionable information, based on the terminology of the emergency management community responsible for informing the public. This paper describes the approach to collecting relevant material from the disasters and risk management community to address end user needs for information. The resulting information architecture addresses the structural design of the shared information in the disasters and risk management enterprise. Key challenges are organizing and labeling information to support both online user communities and machine-to-machine processing for automated product generation.
Menominee Tribe Links Gaming and Education.
ERIC Educational Resources Information Center
Simonelli, Richard
1995-01-01
The Menominee Gaming and Hospitality Institute (College of the Menominee Nation, WI) assists Indian people in mastering skills needed to operate their own gaming enterprises and to manage hotels or resorts. In addition to certificate and degree coursework, the institute is developing a computerized industry database and a product development…
Administering a Web-Based Course on Database Technology
ERIC Educational Resources Information Center
de Oliveira, Leonardo Rocha; Cortimiglia, Marcelo; Marques, Luis Fernando Moraes
2003-01-01
This article presents a managerial experience with a web-based course on database technology for enterprise management. The course has been developed and managed by a Department of Industrial Engineering at a public university in Brazil. The project's managerial experiences are described, covering its conception stage where the Virtual Learning…
40 CFR 33.405 - How does a recipient determine its fair share objectives?
Code of Federal Regulations, 2010 CFR
2010-07-01
... AND OTHER FEDERAL ASSISTANCE PARTICIPATION BY DISADVANTAGED BUSINESS ENTERPRISES IN UNITED STATES... Business Pattern (CBP) database, determine the number of all qualified businesses available in the market... the number of all businesses to derive a base figure for the relative availability of MBEs and WBEs in...
Verification and Trust: Background Investigations Preceding Faculty Appointment
ERIC Educational Resources Information Center
Finkin, Matthew W.; Post, Robert C.; Thomson, Judith J.
2004-01-01
Many employers in the United States have responded to the terrorist attacks of September 11, 2001, by initiating or expanding policies requiring background checks of prospective employees. Their ability to perform such checks has been abetted by the growth of computerized databases and of commercial enterprises that facilitate access to personal…
Verification and Trust: Background Investigations Preceding Faculty Appointment
ERIC Educational Resources Information Center
Academe, 2004
2004-01-01
Many employers in the United States have been initiating or expanding policies requiring background checks of prospective employees. The ability to perform such checks has been abetted by the growth of computerized databases and of commercial enterprises that facilitate access to personal information. Employers now have ready access to public…
An Emerging Role for Polystores in Precision Medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Begoli, Edmon; Christian, J. Blair; Gadepally, Vijay
Medical data is organically heterogeneous, and it usually varies significantly in both size and composition. Yet this data is also key for the recent and promising field of precision medicine, which focuses on identifying and tailoring appropriate medical treatments to the needs of individual patients, based on their specific conditions, medical history, lifestyle, genetics, and other individual factors. As we, and the database community at large, recognize that “one size does not fit all” when working with such data, we present in this paper our observations based on our experiences and applications in the field of precision medicine. Finally, we make the case for the use of a polystore architecture and how it applies to precision medicine; we discuss the reference architecture, describe some of its critical components (the array database), and discuss the specific types of analysis that directly benefit from this database architecture and the ways it serves the data.
Active in-database processing to support ambient assisted living systems.
de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas
2014-08-12
As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
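A self-contained illustration of the two mechanisms described above, using SQLite from Python purely for the sake of a runnable sketch (the paper targets a full DBMS with stored procedures; the table, trigger and function names are invented): a trigger reacts to newly inserted sensor events (the active-database part), and a user-defined function is evaluated inside the SQL engine (the in-database-processing part).

```python
# Illustrative only: active database behaviour via a trigger, in-database
# processing via a user-defined SQL function. SQLite keeps the sketch runnable.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE sensor_event (sensor TEXT, state TEXT, ts REAL);
    CREATE TABLE alert (message TEXT, ts REAL);
    -- Active part: react inside the database when a bed-exit pattern appears.
    CREATE TRIGGER bed_exit AFTER INSERT ON sensor_event
    WHEN NEW.sensor = 'bed' AND NEW.state = 'released'
    BEGIN
        INSERT INTO alert VALUES ('possible bed exit', NEW.ts);
    END;
""")

# In-database processing: register a function evaluated by the SQL engine.
def night_time(ts: float) -> int:
    hour = (ts / 3600.0) % 24
    return int(hour < 6 or hour >= 22)

db.create_function("night_time", 1, night_time)

db.executemany("INSERT INTO sensor_event VALUES (?, ?, ?)",
               [("bed", "pressed", 82800.0), ("bed", "released", 84600.0)])

print(db.execute("SELECT message, night_time(ts) FROM alert").fetchall())
# [('possible bed exit', 1)]  -> alert raised by the trigger, flagged as night-time
```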
Active In-Database Processing to Support Ambient Assisted Living Systems
de Morais, Wagner O.; Lundström, Jens; Wickström, Nicholas
2014-01-01
As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare. PMID:25120164
Functions and requirements document for interim store solidified high-level and transuranic waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith-Fewell, M.A., Westinghouse Hanford
1996-05-17
The functions, requirements, interfaces, and architectures contained within the Functions and Requirements (F&R) Document are based on the information currently contained within the TWRS Functions and Requirements database. The database also documents the set of technically defensible functions and requirements associated with the solidified waste interim storage mission. The F&R Document provides a snapshot in time of the technical baseline for the project. The F&R Document is the product of functional analysis, requirements allocation, and architectural structure definition. The technical baseline described in this document is traceable to TWRS function 4.2.4.1, Interim Store Solidified Waste, and its related requirements, architecture, and interfaces.
Environmental modeling and recognition for an autonomous land vehicle
NASA Technical Reports Server (NTRS)
Lawton, D. T.; Levitt, T. S.; Mcconnell, C. C.; Nelson, P. C.
1987-01-01
An architecture for object modeling and recognition for an autonomous land vehicle is presented. Examples of objects of interest include terrain features, fields, roads, horizon features, trees, etc. The architecture is organized around a set of data bases for generic object models and perceptual structures, temporary memory for the instantiation of object and relational hypotheses, and a long term memory for storing stable hypotheses that are affixed to the terrain representation. Multiple inference processes operate over these databases. Researchers describe these particular components: the perceptual structure database, the grouping processes that operate over this, schemas, and the long term terrain database. A processing example that matches predictions from the long term terrain model to imagery, extracts significant perceptual structures for consideration as potential landmarks, and extracts a relational structure to update the long term terrain database is given.
Electronic Reference Library: Silverplatter's Database Networking Solution.
ERIC Educational Resources Information Center
Millea, Megan
Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…
Towards Introducing a Geocoding Information System for Greenland
NASA Astrophysics Data System (ADS)
Siksnans, J.; Pirupshvarre, Hans R.; Lind, M.; Mioc, D.; Anton, F.
2011-08-01
Currently, addressing practices in Greenland do not support geocoding. Addressing points on a map by geographic coordinates is vital for emergency services such as police and ambulance to avoid ambiguities in finding incident locations (Government of Greenland, 2010). Therefore, it is necessary to investigate the current addressing practices in Greenland. Asiaq (Asiaq, 2011) is a public enterprise of the Government of Greenland which holds three separate databases regarding addressing and place references: a list of locality names (towns, villages, farms); technical base maps (including road center lines, not connected with names, and buildings); and the NIN registry (the Land Use Register of Greenland, which holds information on land allotments and buildings in Greenland). The main problem is that these data sets are not interconnected, thus making it impossible to address a point on a map with geographic coordinates in a standardized way. The possible solutions suffer from the fact that Greenland has a scattered habitation pattern, and generalizing the address assignment schema is a difficult task. A schema would be developed according to the characteristics of the settlement pattern, e.g., cities, remote locations, and place names. The aim is to propose an ontology for a common postal address system for Greenland. The main part of the research is dedicated to the current system and user requirements engineering. This allowed us to design a conceptual database model which corresponds to the user requirements, and to implement a small-scale prototype. Furthermore, our research includes findings on the resemblance between Danish and Greenlandic addressing practices, a data dictionary for establishing the logical model of the Greenland addressing system, and an enhanced entity-relationship diagram. This initial prototype of the Greenland addressing system could be used to evaluate and build the full architecture of the addressing information system for Greenland. Using software engineering methods, the implementation can be done according to the developed data model and initial database prototype. Development of the Greenland addressing system using modern GIS and database technology would ease the work and improve the quality of public services such as postal delivery, emergency response, customer/business relationship management, land administration, utility planning and maintenance, and public statistical data analysis.
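One way to picture the interconnection the prototype aims at is a small linked schema; the table and column names below are hypothetical and are not Asiaq's actual database structures.

```python
import sqlite3

# Hypothetical sketch of the linkage a geocoding prototype needs: locality
# names, building locations from the technical base maps, and land-register
# records keyed so that an address reference resolves to a coordinate.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE locality (locality_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE building (building_id INTEGER PRIMARY KEY,
                       locality_id INTEGER REFERENCES locality(locality_id),
                       lat REAL, lon REAL);
CREATE TABLE nin_allotment (nin_id TEXT PRIMARY KEY,
                            building_id INTEGER REFERENCES building(building_id));
""")
conn.execute("INSERT INTO locality VALUES (1, 'Nuuk')")
conn.execute("INSERT INTO building VALUES (10, 1, 64.1748, -51.7381)")
conn.execute("INSERT INTO nin_allotment VALUES ('B-0042', 10)")

# Geocode an address reference (here a register id) via the linked tables.
row = conn.execute("""
    SELECT l.name, b.lat, b.lon
    FROM nin_allotment n
    JOIN building b ON b.building_id = n.building_id
    JOIN locality l ON l.locality_id = b.locality_id
    WHERE n.nin_id = ?""", ("B-0042",)).fetchone()
print(row)   # ('Nuuk', 64.1748, -51.7381)
```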
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to address the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of both the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to address the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
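The core idea of shredding an arbitrary XML hierarchy into rows so that keywords can be matched against both context (element names) and content (text) can be sketched with a generic relational store; NETMARK itself sits on Oracle's object-relational engine, so the snippet below is an illustrative analogy only, with invented table and function names.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Illustrative sketch: decompose an XML document into (document, path, text)
# rows so a single keyword query can hit element names and element text alike.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (doc TEXT, path TEXT, text TEXT)")

def load(doc_name: str, xml_text: str) -> None:
    """Shred an XML document into rows, one per element."""
    def walk(elem, path):
        full = f"{path}/{elem.tag}"
        conn.execute("INSERT INTO node VALUES (?, ?, ?)",
                     (doc_name, full, (elem.text or "").strip()))
        for child in elem:
            walk(child, full)
    walk(ET.fromstring(xml_text), "")

load("report-1", "<report><title>Wind tunnel test</title>"
                 "<body>Results of the unstructured-data survey</body></report>")

# Keyword search across context (element path) and content (text) alike.
hits = conn.execute(
    "SELECT doc, path, text FROM node WHERE path LIKE ? OR text LIKE ?",
    ("%title%", "%survey%")).fetchall()
for h in hits:
    print(h)
```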
Scaling an expert system data mart: more facilities in real-time.
McNamee, L A; Launsby, B D; Frisse, M E; Lehmann, R; Ebker, K
1998-01-01
Clinical Data Repositories are being rapidly adopted by large healthcare organizations as a method of centralizing and unifying clinical data currently stored in diverse and isolated information systems. Once stored in a clinical data repository, healthcare organizations seek to use this centralized data to store, analyze, interpret, and influence clinical care, quality and outcomes. A recent trend in the repository field has been the adoption of data marts--specialized subsets of enterprise-wide data taken from a larger repository designed specifically to answer highly focused questions. A data mart exploits the data stored in the repository, but can use unique structures or summary statistics generated specifically for an area of study. Thus, data marts benefit from the existence of a repository, are less general than a repository, but provide more effective and efficient support for an enterprise-wide data analysis task. In previous work, we described the use of batch processing for populating data marts directly from legacy systems. In this paper, we describe an architecture that uses both primary data sources and an evolving enterprise-wide clinical data repository to create real-time data sources for a clinical data mart to support highly specialized clinical expert systems.
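A compressed sketch of the repository-feeds-mart idea follows: detailed events land in the repository while a focused summary table is refreshed in the same step, approximating the "real-time data source" the paper describes. All table and field names are invented for illustration.

```python
import sqlite3

# Hedged sketch: the repository keeps detailed clinical events, while the mart
# keeps pre-summarized rows tuned to one focused question (counts per
# facility per day). Requires a SQLite build with UPSERT support (>= 3.24).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE repository_event (facility TEXT, day TEXT, event_type TEXT);
CREATE TABLE mart_daily_counts (facility TEXT, day TEXT, n_events INTEGER,
                                PRIMARY KEY (facility, day));
""")

def feed(facility, day, event_type):
    """Write to the repository and refresh the mart row in the same step."""
    conn.execute("INSERT INTO repository_event VALUES (?, ?, ?)",
                 (facility, day, event_type))
    conn.execute("""
        INSERT INTO mart_daily_counts VALUES (?, ?, 1)
        ON CONFLICT(facility, day) DO UPDATE SET n_events = n_events + 1
    """, (facility, day))

for e in [("A", "1998-01-01", "admit"), ("A", "1998-01-01", "lab"),
          ("B", "1998-01-01", "admit")]:
    feed(*e)

print(conn.execute("SELECT * FROM mart_daily_counts ORDER BY facility").fetchall())
# [('A', '1998-01-01', 2), ('B', '1998-01-01', 1)]
```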
AN-CASE NET-CENTRIC modeling and simulation
NASA Astrophysics Data System (ADS)
Baskinger, Patricia J.; Chruscicki, Mary Carol; Turck, Kurt
2009-05-01
The objective of mission training exercises is to immerse trainees in an environment that enables them to train as they would fight. The integration of modeling and simulation environments that can seamlessly leverage Live systems and Virtual or Constructive models (LVC) as they become available offers a flexible and cost-effective solution for extending the "war-gaming" environment to a realistic mission experience while evolving the development of the net-centric enterprise. From concept to full production, the impact of new capabilities on the infrastructure and concept of operations can be assessed in the context of the enterprise, while also exposing them to the warfighter. Training is extended to tomorrow's tools, processes, and Tactics, Techniques and Procedures (TTPs). This paper addresses the challenges of a net-centric modeling and simulation environment that is capable of representing a net-centric enterprise. An overview of the Air Force Research Laboratory's (AFRL) Airborne Networking Component Architecture Simulation Environment (AN-CASE) is provided, as well as a discussion of how it is being used to assess technologies and to experiment with new infrastructure mechanisms that enhance the scalability and reliability of the distributed mission operations environment.
From Modelling to Execution of Enterprise Integration Scenarios: The GENIUS Tool
NASA Astrophysics Data System (ADS)
Scheibler, Thorsten; Leymann, Frank
One of the predominant problems IT companies are facing today is Enterprise Application Integration (EAI). Most of the infrastructures built to tackle integration issues are proprietary because no standards exist for how to model, develop, and actually execute integration scenarios. EAI patterns are gaining importance for non-technical business users as a way to ease and harmonize the development of EAI scenarios. These patterns describe recurring EAI challenges and propose possible solutions in an abstract way, so they can be used to describe enterprise architectures in a technology-neutral manner. However, patterns are documentation only, used by developers and systems architects to decide how to implement an integration scenario manually; they are not intended to stand for artefacts that will be executed directly. This paper presents a tool supporting a method by which EAI patterns can be used to generate executable artefacts for various target platforms automatically, using a model-driven development approach, hence turning patterns into something executable. To this end, we introduce a continuous tool chain that begins at the design phase and ends with executing an integration solution in a completely automatic manner. For evaluation purposes we introduce a scenario demonstrating how the tool is utilized for modelling and actually executing an integration scenario.
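The pattern-to-artefact idea can be illustrated in miniature, independent of the GENIUS tool itself: an abstract instance of the content-based-router EAI pattern is turned into a runnable artefact. Everything below (the pattern fields, the endpoint names) is a hypothetical example, not the paper's notation.

```python
from typing import Callable

def generate_router(pattern: dict) -> Callable[[dict], str]:
    """'Generate' an executable artefact from an abstract EAI pattern model:
    a content-based router described by the field it routes on and a mapping
    of field values to target endpoints."""
    field = pattern["route_on"]
    routes = pattern["routes"]
    default = pattern.get("default", "dead-letter")

    def route(message: dict) -> str:
        # The generated artefact: inspect the message content, pick an endpoint.
        return routes.get(message.get(field), default)

    return route

# Abstract pattern instance, roughly as a non-technical user might model it.
order_router_model = {
    "route_on": "order_type",
    "routes": {"retail": "retail-queue", "wholesale": "wholesale-queue"},
}
route = generate_router(order_router_model)
print(route({"order_type": "retail"}))     # retail-queue
print(route({"order_type": "internal"}))   # dead-letter
```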
Issues in implementing services for a wireless web-enabled digital camera
NASA Astrophysics Data System (ADS)
Venkataraman, Shyam; Sampat, Nitin; Fisher, Yoram; Canosa, John; Noel, Nicholas
2001-05-01
The competition in the exploding digital photography market has caused vendors to explore new ways to increase their return on investment. A common view among industry analysts is that increasingly it will be services provided by these cameras, and not the cameras themselves, that will provide the revenue stream. These services will be coupled to e-Appliance-based Communities. In addition, the rapidly increasing need to upload images to the Internet for photo-finishing services as well as the need to download software upgrades to the camera is driving many camera OEMs to evaluate the benefits of using the wireless web to extend their enterprise systems. Currently, creating a viable e-appliance such as a digital camera coupled with a wireless web service requires more than just a competency in product development. This paper will evaluate the system implications in the deployment of recurring revenue services and enterprise connectivity of a wireless, web-enabled digital camera. These include, among other things, an architectural design approach for services such as device management, synchronization, billing, connectivity, security, etc. Such an evaluation will assist, we hope, anyone designing or connecting a digital camera to the enterprise systems.
Architecture Knowledge for Evaluating Scalable Databases
2015-01-16
problems, arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. Architects must ... longer are relational databases the de facto standard for building data repositories. Highly distributed, scalable “NoSQL” databases [11] have emerged ... This is especially challenging at the data storage layer. The multitude of competing NoSQL database technologies creates a complex and rapidly
The potential of social enterprise to enhance health and well-being: a model and systematic review.
Roy, Michael J; Donaldson, Cam; Baker, Rachel; Kerr, Susan
2014-12-01
In recent years civil society organisations, associations, institutions and groups have become increasingly involved at various levels in the governance of healthcare systems around the world. In the UK, particularly in the context of recent reform of the National Health Service in England, social enterprise - that part of the third sector engaged in trading - has come to the fore as a potential model of state-sponsored healthcare delivery. However, to date, there has been no review of evidence on the outcomes of social enterprise involvement in healthcare, nor in the ability of social enterprise to address health inequalities more widely through action on the social determinants of health. Following the development of an initial conceptual model, this systematic review identifies and synthesises evidence from published empirical research on the impact of social enterprise activity on health outcomes and their social determinants. Ten health and social science databases were searched with no date delimiters set. Inclusion and exclusion criteria were applied prior to data extraction and quality appraisal. Heterogeneity in the outcomes assessed precluded meta-analysis/meta-synthesis and so the results are therefore presented in narrative form. Five studies met the inclusion criteria. The included studies provide limited evidence that social enterprise activity can impact positively on mental health, self-reliance/esteem and health behaviours, reduce stigmatization and build social capital, all of which can contribute to overall health and well-being. No empirical research was identified that examined social enterprise as an alternative mode of healthcare delivery. Due to the limited evidence available, we discuss the relationship between the evidence found and other literature not included in the review. There is a clear need for research to better understand and evidence causal mechanisms and to explore the impact of social enterprise activity, and wider civil society actors, upon a range of intermediate and long-term public health outcomes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Staccini, Pascal M.; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius
2002-01-01
Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate their conceptual applicability and practical understandability by clinical staff and members of user teams. Nevertheless, there still remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary, keeping the meaning of the clinical context while providing elements for analysis. One solution for filling this gap is to consider the process model itself as a hub, a centralized means of facilitating communication between team members. Starting with a robust and descriptive technique for process modeling called IDEF0/SADT, we refined the basic data model by extracting concepts from ISO 9000 process analysis and from enterprise ontology. We defined a web-based architecture to serve as a collaborative tool and implemented it using an object-oriented database. The prospects of such a tool are discussed, notably regarding its ability to generate data dictionaries and to serve as a navigation tool through hospital-wide documentation. PMID:12463921
Staccini, Pascal M; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius
2002-01-01
Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate their conceptual applicability and practical understandability by clinical staff and members of user teams. Nevertheless, there still remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary, keeping the meaning of the clinical context while providing elements for analysis. One solution for filling this gap is to consider the process model itself as a hub, a centralized means of facilitating communication between team members. Starting with a robust and descriptive technique for process modeling called IDEF0/SADT, we refined the basic data model by extracting concepts from ISO 9000 process analysis and from enterprise ontology. We defined a web-based architecture to serve as a collaborative tool and implemented it using an object-oriented database. The prospects of such a tool are discussed, notably regarding its ability to generate data dictionaries and to serve as a navigation tool through hospital-wide documentation.
A Role for Semantic Web Technologies in Patient Record Data Collection
NASA Astrophysics Data System (ADS)
Ogbuji, Chimezie
Business Process Management Systems (BPMS) are a component of the stack of Web standards that comprise Service Oriented Architecture (SOA). Such systems are representative of the architectural framework of modern information systems built in an enterprise intranet and are in contrast to systems built for deployment on the larger World Wide Web. The REST architectural style is an emerging style for building loosely coupled systems based purely on the native HTTP protocol. It is a coordinated set of architectural constraints with a goal to minimize latency, maximize the independence and scalability of distributed components, and facilitate the use of intermediary processors. Within the development community for distributed, Web-based systems, there has been a debate regarding the merits of both approaches. In some cases, there are legitimate concerns about the differences in both architectural styles. In other cases, the contention seems to be based on concerns that are marginal at best. In this chapter, we will attempt to contribute to this debate by focusing on a specific, deployed use case that emphasizes the role of the Semantic Web, a simple Web application architecture that leverages the use of declarative XML processing, and the needs of a workflow system. The use case involves orchestrating a work process associated with the data entry of structured patient record content into a research registry at the Cleveland Clinic's Clinical Investigation department in the Heart and Vascular Institute.
Another HISA--the new standard: health informatics--service architecture.
Klein, Gunnar O; Sottile, Pier Angelo; Endsleff, Frederik
2007-01-01
In addition to the meaning as Health Informatics Society of Australia, HISA is the acronym used for the new European Standard: Health Informatics - Service Architecture. This EN 12967 standard has been developed by CEN - the federation of 29 national standards bodies in Europe. This standard defines the essential elements of a Service Oriented Architecture and a methodology for localization that is particularly useful for large healthcare organizations. It is based on the Open Distributed Processing (ODP) framework from ISO 10746 and contains the following parts: Part 1: Enterprise viewpoint. Part 2: Information viewpoint. Part 3: Computational viewpoint. This standard is now also the starting point for consideration as an international standard in ISO/TC 215. The basic principles, with a set of health-specific middleware services as a common platform for various applications in regional health information systems or large integrated hospital information systems, are well established following a previous prestandard. Examples of large-scale deployments in Sweden, Denmark and Italy are described.
Gichoya, Judy; Pearce, Chris; Wickramasinghe, Nilmini
2013-01-01
Kenya ranks among the twenty-two countries that collectively contribute about 80% of the world's Tuberculosis cases; with a 50-200 fold increased risk of tuberculosis in HIV infected persons versus non-HIV hosts. Contemporaneously, there is an increase in mobile penetration and its use to support healthcare throughout Africa. Many are skeptical that such m-health solutions are unsustainable and not scalable. We seek to design a scalable, pervasive m-health solution for Tuberculosis care to become a use case for sustainable and scalable health IT in limited resource settings. We combine agile design principles and user-centered design to develop the architecture needed for this initiative. Furthermore, the architecture runs on multiple devices integrated to deliver functionality critical for successful Health IT implementation in limited resource settings. It is anticipated that once fully implemented, the proposed m-health solution will facilitate superior monitoring and management of Tuberculosis and thereby reduce the alarming statistic regarding this disease in this region.
Utilizing IHE-based Electronic Health Record systems for secondary use.
Holzer, K; Gall, W
2011-01-01
Due to the increasing adoption of Electronic Health Records (EHRs) for primary use, the number of electronic documents stored in such systems will soar in the near future. In order to benefit from this development in secondary fields such as medical research, it is important to define requirements for the secondary use of EHR data. Furthermore, analyses of the extent to which an IHE (Integrating the Healthcare Enterprise)-based architecture would fulfill these requirements could provide further information on upcoming obstacles for the secondary use of EHRs. A catalog of eight core requirements for secondary use of EHR data was deduced from the published literature, the risk analysis of the IHE profile MPQ (Multi-Patient Queries) and the analysis of relevant questions. The IHE-based architecture for cross-domain, patient-centered document sharing was extended to a cross-patient architecture. We propose an IHE-based architecture for cross-patient and cross-domain secondary use of EHR data. Evaluation of this architecture concerning the eight core requirements revealed positive fulfillment of six and the partial fulfillment of two requirements. Although not regarded as a primary goal in modern electronic healthcare, the re-use of existing electronic medical documents in EHRs for research and other fields of secondary application holds enormous potential for the future. Further research in this respect is necessary.
Towards a Global Names Architecture: The future of indexing scientific names
Pyle, Richard L.
2016-01-01
Abstract For more than 250 years, the taxonomic enterprise has remained almost unchanged. Certainly, the tools of the trade have improved: months-long journeys aboard sailing ships have been reduced to hours aboard jet airplanes; advanced technology allows humans to access environments that were once utterly inaccessible; GPS has replaced crude maps; digital hi-resolution imagery provides far more accurate renderings of organisms that even the best commissioned artists of a century ago; and primitive candle-lit microscopes have been replaced by an array of technologies ranging from scanning electron microscopy to DNA sequencing. But the basic paradigm remains the same. Perhaps the most revolutionary change of all – which we are still in the midst of, and which has not yet been fully realized – is the means by which taxonomists manage and communicate the information of their trade. The rapid evolution in recent decades of computer database management software, and of information dissemination via the Internet, have both dramatically improved the potential for streamlining the entire taxonomic process. Unfortunately, the potential still largely exceeds the reality. The vast majority of taxonomic information is either not yet digitized, or digitized in a form that does not allow direct and easy access. Moreover, the information that is easily accessed in digital form is not yet seamlessly interconnected. In an effort to bring reality closer to potential, a loose affiliation of major taxonomic resources, including GBIF, the Encyclopedia of Life, NBII, Catalog of Life, ITIS, IPNI, ICZN, Index Fungorum, and many others have been crafting a “Global Names Architecture” (GNA). The intention of the GNA is not to replace any of the existing taxonomic data initiatives, but rather to serve as a dynamic index to interconnect them in a way that streamlines the entire taxonomic enterprise: from gathering specimens in the field, to publication of new taxa and related data. PMID:26877664
2003-09-01
... processes uses people and systems (hardware, software, machinery, etc.) and that these people and systems contain the “corporate” knowledge of the ... server architecture was also a high-maintenance item. Data was no longer contained on one mainframe but was distributed throughout the enterprise
2012-04-18
enterprise architecture and transition plan, and improved investment control processes. This statement is primarily based on GAO's prior work ... business system investments, they had not yet performed the key step of validating assessment results. GAO has made prior recommendations to address ... be prepared no later than March 1, 1997. See 31 U.S.C. § 3515. An agency's general fund accounts are those accounts in the U.S. Treasury holding
Enterprise Management Network Architecture Distributed Knowledge Base Support
1990-11-01
Advantages: Potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways. First, it can be more reliable ... does not completely apply [35]. The grain size of the processors measures the individual problem-solving power of the agents. In this definition ... problem-solving power amounts to the conceptual size of a single action taken by an agent visible to the other agents in the system. If the grain is coarse
Executable Architectures for Modeling Command and Control Processes
2006-06-01
of introducing new NCES capabilities (such as the Federated Search) to the 'To Be' model. ... Conventional Method for SME Discovery; ToBe.JCAS.3.2 Send Alert and/or Request; ToBe.JCAS.3.4 Employ Federated Search for CAS-related Info; JCAS.1.3.6.13 ... instant messaging, web browser, etc. Federated Search: this capability provides a way to search enterprise contents across various search-enabled
Department of Defense Joint Technical Architecture, Version 6.0. Volume 2
2003-10-03
information needs of our warfighters and the DoD enterprise. Information assurance will be integral to the GIG, and data management strategy initiatives will ... IT investment strategy, and integration oversight. DoDD 8100.1 establishes the GIG as ... Strategy (May 9, 2003). In addition, numerous standards have been marked sunset, indicating deletion from the JTA on a future date to be determined by a
Assessment of DoD Enterprise Resource Planning Business Systems
2011-02-01
activities, and processes to the organizational units that execute them; architecture standards, such as application of the BPMN; recommended ... Recommendation: create and use style guides for the development of BPMN-based process models. The style guide would probably include specifications such as: all processes must have 'entry points' or 'triggers' in the form of BPMN Events; all processes must have 'outcomes', also in the form of BPMN
2008-06-01
p, is the value that satisfies the following equation: (1) $1 - \gamma = \sum_{x=0}^{c} \binom{n}{x} p^{x} q^{n-x}$, where γ = confidence level ... zero failures is therefore, using the above equation, where γ = 0.5, c = 0: (2) $1 - 0.5 = \sum_{x=0}^{0} \binom{n}{x} p^{x} q^{n-x} = q^{n}$, which reduces
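Reading the fragment as a standard zero-failure (or c-failure) reliability demonstration, with q the per-trial success probability demonstrated at confidence level γ after n trials and p = 1 − q, the reconstructed equations (1) and (2) can be checked numerically; that reading is an assumption made here, since the surrounding report text is truncated.

```python
# Numerical check of the reconstructed equations, under the stated assumption
# about what p, q, n, c and gamma denote; not code from the report itself.
from math import comb

def confidence(n: int, c: int, q: float) -> float:
    """Return gamma from 1 - gamma = sum_{x=0..c} C(n,x) p^x q^(n-x), p = 1 - q."""
    p = 1.0 - q
    return 1.0 - sum(comb(n, x) * p**x * q**(n - x) for x in range(c + 1))

# Equation (2): gamma = 0.5 and c = 0 reduce the sum to q**n, i.e. 0.5 = q**n.
# With n = 10 failure-free trials the demonstrated reliability is:
q = 0.5 ** (1 / 10)
print(round(q, 3))                     # 0.933
print(round(confidence(10, 0, q), 3))  # 0.5  (consistency check)
```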
Evidence and diagnostic reporting in the IHE context.
Loef, Cor; Truyen, Roel
2005-05-01
Capturing clinical observations and findings during the diagnostic imaging process is increasingly becoming a critical step in diagnostic reporting. Standards developers, notably HL7 and DICOM, are making significant progress toward standards that enable exchanging clinical observations and findings among the various information systems of the healthcare enterprise. DICOM, like the HL7 Clinical Document Architecture (CDA), uses templates and constrained, coded vocabulary (SNOMED, LOINC, etc.). Such a representation facilitates automated software recognition of findings and observations, intrapatient comparison, correlation to norms, and outcomes research. The scope of DICOM Structured Reporting (SR) includes many findings that products routinely create in digital form (measurements, computed estimates, etc.). In the Integrating the Healthcare Enterprise (IHE) framework, two Integration Profiles are defined for clinical data capture and diagnostic reporting: Evidence Document, and Simple Image and Numeric Report. This report describes these two DICOM SR-based integration profiles in the diagnostic reporting process.
A novel governance system for enterprise information services.
Shabot, M M; Polaschek, J X; Duncan, R G; Langberg, M L; Jones, D T
1999-01-01
The authors created a novel system for governing the enterprise information services (IS) of a large health care system. The governance organization is comprised of key members of the attending medical staff, hospital and health system administration, and the IS department. A method for defining the requirements and business case for proposed new systems was developed for use by departments requesting new or expanded information services. A Technology Architecture Guideline document was developed and approved to provide a framework for supported hardware and software technologies. IS policies are approved by the main governance council. All project proposals are reviewed by specialized governance committees and, if approved, are launched for further development. Fully developed proposals are reviewed, approved and prioritized for funding by the governance council. This novel organization provides the methodology and structure for enlightened peer review and funding for well developed IS project proposals.
Verification of Security Policy Enforcement in Enterprise Systems
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Stoller, Scott D.
Many security requirements for enterprise systems can be expressed in a natural way as high-level access control policies. A high-level policy may refer to abstract information resources, independent of where the information is stored; it controls both direct and indirect accesses to the information; it may refer to the context of a request, i.e., the request’s path through the system; and its enforcement point and enforcement mechanism may be unspecified. Enforcement of a high-level policy may depend on the system architecture and the configurations of a variety of security mechanisms, such as firewalls, host login permissions, file permissions, DBMS access control, and application-specific security mechanisms. This paper presents a framework in which all of these can be conveniently and formally expressed, a method to verify that a high-level policy is enforced, and an algorithm to determine a trusted computing base for each resource.
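A highly simplified sketch of the kind of check such a framework performs (not the authors' algorithm): model components as a graph and verify that every access path from an external principal to a protected resource crosses at least one mechanism that enforces the high-level policy. The topology and names below are invented.

```python
# Toy verification: the high-level policy on 'patient-records' is considered
# enforced only if every reachable path from 'internet' to the resource
# passes through at least one enforcing mechanism.
edges = {                       # who can reach whom, directly
    "internet": ["firewall"],
    "firewall": ["app-server"],
    "app-server": ["dbms"],
    "dbms": ["patient-records"],
}
enforcing = {"firewall": False, "dbms": True}   # which nodes enforce the policy

def paths(src, dst, path=()):
    """Enumerate simple paths from src to dst in the component graph."""
    path = path + (src,)
    if src == dst:
        yield path
        return
    for nxt in edges.get(src, []):
        if nxt not in path:
            yield from paths(nxt, dst, path)

def policy_enforced(src, resource):
    """True iff every path from src to the resource crosses an enforcing node."""
    all_paths = list(paths(src, resource))
    return bool(all_paths) and all(
        any(enforcing.get(node, False) for node in p) for p in all_paths)

print(policy_enforced("internet", "patient-records"))  # True (the DBMS enforces)
```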
A methodology aimed at fostering and sustaining the development processes of an IE-based industry
NASA Astrophysics Data System (ADS)
Corallo, Angelo; Errico, Fabrizio; de Maggio, Marco; Giangreco, Enza
In the current competitive scenario, where business relationships are fundamental in building successful business models and inter/intra organizational business processes are progressively digitalized, an end-to-end methodology is required that is capable of guiding business networks through the Internetworked Enterprise (IE) paradigm: a new and innovative organizational model able to leverage Internet technologies to perform real-time coordination of intra and inter-firm activities, to create value by offering innovative and personalized products/services and reduce transaction costs. This chapter presents the TEKNE project Methodology of change that guides business networks, by means of a modular and flexible approach, towards the IE techno-organizational paradigm, taking into account the competitive environment of the network and how this environment influences its strategic, organizational and technological levels. Contingency, the business model, enterprise architecture and performance metrics are the key concepts that form the cornerstone of this methodological framework.
CIMOSA process classification for business process mapping in non-manufacturing firms: A case study
NASA Astrophysics Data System (ADS)
Latiffianti, Effi; Siswanto, Nurhadi; Wiratno, Stefanus Eko; Saputra, Yudha Andrian
2017-11-01
Business process mapping is an important means of enabling an enterprise to effectively manage the value chain. One widely used approach to classifying business processes for mapping purposes is the Computer Integrated Manufacturing System Open Architecture (CIMOSA). CIMOSA was initially designed for enterprises based on Computer Integrated Manufacturing (CIM) systems. This paper analyzes the use of the CIMOSA process classification for business process mapping in firms that do not fall within the area of CIM. Three firms from different business areas that have used the CIMOSA process classification were observed: an airline, a marketing and trading firm for oil and gas products, and an industrial estate management firm. The results show that CIMOSA can be used in non-manufacturing firms with some adjustment. The adjustments include the addition, reduction, or modification of some processes suggested by the CIMOSA process classification, as evidenced by the case studies.
Climbing the ladder: capability maturity model integration level 3
NASA Astrophysics Data System (ADS)
Day, Bryce; Lutteroth, Christof
2011-02-01
This article details the attempt to form a complete workflow model for an information and communication technologies (ICT) company in order to achieve a capability maturity model integration (CMMI) maturity rating of 3. During this project, business processes across the company's core and auxiliary sectors were documented and extended using modern enterprise modelling tools and a methodology based on The Open Group Architecture Framework (TOGAF). Different challenges were encountered with regard to process customisation and tool support for enterprise modelling. In particular, there were problems with the reuse of process models, the integration of different project management methodologies, and the integration of the Rational Unified Process development process framework that had to be solved. We report on these challenges and the perceived effects of the project on the company. Finally, we point out research directions that could help to improve the situation in the future.
A novel governance system for enterprise information services.
Shabot, M. M.; Polaschek, J. X.; Duncan, R. G.; Langberg, M. L.; Jones, D. T.
1999-01-01
The authors created a novel system for governing the enterprise information services (IS) of a large health care system. The governance organization is comprised of key members of the attending medical staff, hospital and health system administration, and the IS department. A method for defining the requirements and business case for proposed new systems was developed for use by departments requesting new or expanded information services. A Technology Architecture Guideline document was developed and approved to provide a framework for supported hardware and software technologies. IS policies are approved by the main governance council. All project proposals are reviewed by specialized governance committees and, if approved, are launched for further development. Fully developed proposals are reviewed, approved and prioritized for funding by the governance council. This novel organization provides the methodology and structure for enlightened peer review and funding for well developed IS project proposals. PMID:10566433
NASA Astrophysics Data System (ADS)
Kuzmak, Peter M.; Dayhoff, Ruth E.
1998-07-01
The U.S. Department of Veterans Affairs is integrating imaging into the healthcare enterprise using the Digital Imaging and Communication in Medicine (DICOM) standard protocols. Image management is directly integrated into the VistA Hospital Information System (HIS) software and clinical database. Radiology images are acquired via DICOM and are stored directly in the HIS database. Images can be displayed on low-cost clinician workstations throughout the medical center. High-resolution, diagnostic-quality, multi-monitor VistA workstations with specialized viewing software can be used for reading radiology images. DICOM has played critical roles in the ability to integrate imaging functionality into the healthcare enterprise. Because of its openness, it allows the integration of system components from commercial and non-commercial sources to work together to provide functional, cost-effective solutions (see Figure 1). Two approaches are used to acquire and handle images within the radiology department. At some VA Medical Centers, DICOM is used to interface a commercial Picture Archiving and Communications System (PACS) to the VistA HIS. At other medical centers, DICOM is used to interface the image-producing modalities directly to the image acquisition and display capabilities of VistA itself. Both of these approaches use a small set of DICOM services that has been implemented by VistA to allow patient and study text data to be transmitted to image-producing modalities and the commercial PACS, and to enable images and study data to be transferred back.
Motiva Enterprises, LLC Low Sulfur Gasoline (LSG) Project - Related Emission Increase Methodology
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
NASA Astrophysics Data System (ADS)
Benachenhou, D.
2009-04-01
Information-technology departments in large enterprises spend 40% of their budgets on information integration, combining information from different data sources into a coherent form. IDC, a market-intelligence firm, estimates that the market for data integration and access software (which includes the key enabling technology for information integration) was about $2.5 billion in 2007 and is expected to grow to $3.8 billion by 2012. This is only the cost estimate for structured, traditional database information integration. Just imagine the market for transforming text into structured information and its subsequent fusion with traditional databases.
Implementing a Dynamic Database-Driven Course Using LAMP
ERIC Educational Resources Information Center
Laverty, Joseph Packy; Wood, David; Turchek, John
2011-01-01
This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…
NASA Technical Reports Server (NTRS)
Campbell, William J.; Roelofs, Larry H.; Short, Nicholas M., Jr.
1987-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components the development of an Intelligent User Interface (IUI). The intent of the latter is to develop a friendly and intelligent user interface service that is based on expert systems and natural language processing technologies. The purpose is to support the large number of potential scientific and engineering users who need space- and land-related research and technical data but who have little or no experience with query languages or understanding of the information content or architecture of the databases involved. This technical memorandum presents a prototype Intelligent User Interface Subsystem (IUIS) using the Crustal Dynamics Project Database as a test bed for the implementation of CRUDDES (the Crustal Dynamics Expert System). The knowledge base has more than 200 rules and represents a single application view and the architectural view. Operational performance using CRUDDES has allowed non-database users to obtain useful information from the database previously accessible only to an expert database user or the database designer.
Medical Data GRIDs as approach towards secure cross enterprise document sharing (based on IHE XDS).
Wozak, Florian; Ammenwerth, Elske; Breu, Micheal; Penz, Robert; Schabetsberger, Thomas; Vogl, Raimund; Wurz, Manfred
2006-01-01
The quality and efficiency of health care services are expected to be improved by the electronic processing and trans-institutional availability of medical data. A prototype architecture based on the IHE-XDS profile is currently being developed. Due to legal and organizational requirements, specific adaptations to the IHE-XDS profile have been made. In this work the services of the health@net reference architecture are described in detail; they have been developed with a focus on compliance with both the IHE-XDS profile and the legal situation in Austria. We expect to gain knowledge about the development of a shared electronic health record using Medical Data Grids as an open source reference implementation, and about how proprietary hospital information systems can be integrated into this environment.
A security architecture for health information networks.
Kailar, Rajashekar; Muralidhar, Vinod
2007-10-11
Health information network security needs to balance exacting security controls with practicality, and ease of implementation in today's healthcare enterprise. Recent work on 'nationwide health information network' architectures has sought to share highly confidential data over insecure networks such as the Internet. Using basic patterns of health network data flow and trust models to support secure communication between network nodes, we abstract network security requirements to a core set to enable secure inter-network data sharing. We propose a minimum set of security controls that can be implemented without needing major new technologies, but yet realize network security and privacy goals of confidentiality, integrity and availability. This framework combines a set of technology mechanisms with environmental controls, and is shown to be sufficient to counter commonly encountered network security threats adequately.
A Security Architecture for Health Information Networks
Kailar, Rajashekar
2007-01-01
Health information network security needs to balance exacting security controls with practicality, and ease of implementation in today’s healthcare enterprise. Recent work on ‘nationwide health information network’ architectures has sought to share highly confidential data over insecure networks such as the Internet. Using basic patterns of health network data flow and trust models to support secure communication between network nodes, we abstract network security requirements to a core set to enable secure inter-network data sharing. We propose a minimum set of security controls that can be implemented without needing major new technologies, but yet realize network security and privacy goals of confidentiality, integrity and availability. This framework combines a set of technology mechanisms with environmental controls, and is shown to be sufficient to counter commonly encountered network security threats adequately. PMID:18693862
Improving TOGAF ADM 9.1 Migration Planning Phase by ITIL V3 Service Transition
NASA Astrophysics Data System (ADS)
Hanum Harani, Nisa; Akhmad Arman, Arry; Maulana Awangga, Rolly
2018-04-01
Planning a business transformation that involves the use of technology requires a transition and migration planning process, and planning the system migration activity is the most important part. The migration process includes complex elements such as business re-engineering, transition scheme mapping, data transformation, application development, individual involvement by computer, and trial interaction. TOGAF ADM is a framework and method for enterprise architecture implementation, and it provides a manual for architecture and migration planning. That planning covers an implementation solution, in this case an IT solution, but once the solution moves into IT operational planning, TOGAF does not cover it. This paper presents a new model framework detailing the transition process by integrating TOGAF and ITIL. We evaluated our model in a field study at a private university.
ERIC Educational Resources Information Center
Johnson, Catherine A.
2015-01-01
Introduction: This paper presents a review of research framed within the concept of social capital and published by library and information science researchers. Method: Ninety-nine papers fitting the criteria of having a specific library and information science orientation were identified from two periodical databases: "Library and…
Assurance of Fault Management: Risk-Significant Adverse Condition Awareness
NASA Technical Reports Server (NTRS)
Fitz, Rhonda
2016-01-01
Fault Management (FM) systems are ranked high in risk-based assessment of criticality within flight software, emphasizing the importance of establishing highly competent domain expertise to provide assurance for NASA projects, especially as spaceflight systems continue to increase in complexity. Insight into specific characteristics of FM architectures seen embedded within safety- and mission-critical software systems analyzed by the NASA Independent Verification and Validation (IV&V) Program has been enhanced with an FM Technical Reference (TR) suite. Benefits are aimed beyond the IV&V community to those that seek ways to efficiently and effectively provide software assurance to reduce the FM risk posture of NASA and other space missions. The identification of particular FM architectures, visibility, and associated IV&V techniques provides a TR suite that enables greater assurance that critical software systems will adequately protect against faults and respond to adverse conditions. The role FM has with regard to overall asset protection of flight software systems is being addressed with the development of an adverse condition (AC) database encompassing flight software vulnerabilities. Identification of potential off-nominal conditions and analysis to determine how a system responds to these conditions are important aspects of hazard analysis and fault management. Understanding what ACs the mission may face, and ensuring they are prevented or addressed, is the responsibility of the assurance team, which necessarily should have insight into ACs beyond those defined by the project itself. Research efforts sponsored by NASA's Office of Safety and Mission Assurance defined terminology, categorized data fields, and designed a baseline repository that centralizes and compiles a comprehensive listing of ACs and correlated data relevant across many NASA missions. This prototype tool helps projects improve analysis by tracking ACs and allowing queries based on project, mission type, domain component, causal fault, and other key characteristics. The repository has a firm structure, an initial collection of data, and an interface established for informational queries, with plans for integration within the Enterprise Architecture at NASA IV&V, enabling support and accessibility across the Agency. The development of an improved workflow process for adaptive, risk-informed FM assurance is currently underway.
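The query capability described for the adverse-condition repository can be pictured with a small sketch; the schema and sample rows below are hypothetical, mirroring only the characteristics listed in the abstract (project, mission type, domain component, causal fault).

```python
import sqlite3

# Hypothetical sketch of the kind of query the AC repository is described as
# supporting; field names and rows are invented, not the actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE adverse_condition (
    ac_id INTEGER PRIMARY KEY, project TEXT, mission_type TEXT,
    domain_component TEXT, causal_fault TEXT, description TEXT)""")
conn.executemany(
    "INSERT INTO adverse_condition VALUES (?, ?, ?, ?, ?, ?)",
    [(1, "ProjA", "orbiter", "attitude control", "sensor dropout",
      "Loss of star tracker data during slew"),
     (2, "ProjB", "lander", "power", "battery undervoltage",
      "Brownout during descent")])

# Query by mission type and domain component, as the abstract describes.
rows = conn.execute(
    "SELECT ac_id, description FROM adverse_condition "
    "WHERE mission_type = ? AND domain_component = ?",
    ("orbiter", "attitude control")).fetchall()
print(rows)   # [(1, 'Loss of star tracker data during slew')]
```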
A Roadmap for caGrid, an Enterprise Grid Architecture for Biomedical Research
Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Hong, Neil Chue
2012-01-01
caGrid is a middleware system which combines the Grid computing, the service oriented architecture, and the model driven architecture paradigms to support development of interoperable data and analytical resources and federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG™) program. This program is established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from biomedical research, Grid computing, and high performance computing communities. PMID:18560123
Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A
2008-02-01
One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).
Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.
2008-01-01
One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259
A roadmap for caGrid, an enterprise Grid architecture for biomedical research.
Saltz, Joel; Hastings, Shannon; Langella, Stephen; Oster, Scott; Kurc, Tahsin; Payne, Philip; Ferreira, Renato; Plale, Beth; Goble, Carole; Ervin, David; Sharma, Ashish; Pan, Tony; Permar, Justin; Brezany, Peter; Siebenlist, Frank; Madduri, Ravi; Foster, Ian; Shanbhag, Krishnakant; Mead, Charlie; Chue Hong, Neil
2008-01-01
caGrid is a middleware system that combines the Grid computing, service-oriented architecture, and model-driven architecture paradigms to support the development of interoperable data and analytical resources and the federation of such resources in a Grid environment. The functionality provided by caGrid is an essential and integral component of the cancer Biomedical Informatics Grid (caBIG) program. This program was established by the National Cancer Institute as a nationwide effort to develop enabling informatics technologies for collaborative, multi-institutional biomedical research, with the overarching goal of accelerating translational cancer research. Although the main application domain for caGrid is cancer research, the infrastructure provides a generic framework that can be employed in other biomedical research and healthcare domains. The development of caGrid is an ongoing effort, adding new functionality and improvements based on feedback and use cases from the community. This paper provides an overview of potential future architecture and tooling directions and areas of improvement for caGrid and caGrid-like systems. This summary is based on discussions at a roadmap workshop held in February with participants from the biomedical research, Grid computing, and high performance computing communities.
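The combination of service orientation and model-driven data resources can be pictured with a small, hypothetical discovery-and-federation sketch in Python: data services advertise the domain model type they expose, and clients discover and query them by type rather than by host. The registry, type names, and query interface below are assumptions made for illustration, not the actual caGrid APIs.

```python
# Hypothetical registry in which data services advertise the domain type they
# serve; clients discover services by type and pool results across all of them.
from typing import Callable, Dict, List

class GridRegistry:
    def __init__(self) -> None:
        self._services: Dict[str, List[Callable[[str], List[dict]]]] = {}

    def advertise(self, exposed_type: str, query_fn: Callable[[str], List[dict]]) -> None:
        # A data service registers the model-derived type it serves, e.g. "Gene".
        self._services.setdefault(exposed_type, []).append(query_fn)

    def discover(self, exposed_type: str) -> List[Callable[[str], List[dict]]]:
        return self._services.get(exposed_type, [])

def federated_query(registry: GridRegistry, exposed_type: str, predicate: str) -> List[dict]:
    # Query every service that exposes the requested type and merge the results.
    results: List[dict] = []
    for query_fn in registry.discover(exposed_type):
        results.extend(query_fn(predicate))
    return results

if __name__ == "__main__":
    registry = GridRegistry()
    registry.advertise("Gene", lambda pred: [{"symbol": "TP53", "source": "site_1"}])
    registry.advertise("Gene", lambda pred: [{"symbol": "TP53", "source": "site_2"}])
    print(federated_query(registry, "Gene", "symbol = 'TP53'"))
```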
Data Architecture in an Open Systems Environment.
ERIC Educational Resources Information Center
Bernbom, Gerald; Cromwell, Dennis
1993-01-01
The conceptual basis for structured data architecture, and its integration with open systems technology at Indiana University, are described. Key strategic goals guiding these efforts are discussed: commitment to improved data access; migration to relational database technology, and deployment of a high-speed, multiprotocol network; and…
CDD/SPARCLE: functional classification of proteins via subfamily domain architectures.
Marchler-Bauer, Aron; Bo, Yu; Han, Lianyi; He, Jane; Lanczycki, Christopher J; Lu, Shennan; Chitsaz, Farideh; Derbyshire, Myra K; Geer, Renata C; Gonzales, Noreen R; Gwadz, Marc; Hurwitz, David I; Lu, Fu; Marchler, Gabriele H; Song, James S; Thanki, Narmada; Wang, Zhouxi; Yamashita, Roxanne A; Zhang, Dachuan; Zheng, Chanjuan; Geer, Lewis Y; Bryant, Stephen H
2017-01-04
NCBI's Conserved Domain Database (CDD) aims at annotating biomolecular sequences with the location of evolutionarily conserved protein domain footprints, and functional sites inferred from such footprints. An archive of pre-computed domain annotation is maintained for proteins tracked by NCBI's Entrez database, and live search services are offered as well. CDD curation staff supplements a comprehensive collection of protein domain and protein family models, which have been imported from external providers, with representations of selected domain families that are curated in-house and organized into hierarchical classifications of functionally distinct families and sub-families. CDD also supports comparative analyses of protein families via conserved domain architectures, and a recent curation effort focuses on providing functional characterizations of distinct subfamily architectures using SPARCLE: Subfamily Protein Architecture Labeling Engine. CDD can be accessed at https://www.ncbi.nlm.nih.gov/Structure/cdd/cdd.shtml. Published by Oxford University Press on behalf of Nucleic Acids Research 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
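To make classification by domain architecture concrete, the following Python sketch orders a protein's domain footprints along the sequence, joins them into an architecture string, and looks that string up in a label table, loosely in the spirit of SPARCLE. The domain accessions, positions, and labels are invented examples rather than real CDD or SPARCLE records.

```python
# Toy functional classification of a protein by its ordered domain architecture.
from typing import Dict, List, Tuple

DomainHit = Tuple[str, int, int]  # (domain accession, start, end) on the protein

# Invented mapping from an ordered domain architecture to a curated functional label.
ARCHITECTURE_LABELS: Dict[str, str] = {
    "cd00001+cd00002": "putative two-domain kinase",
    "cd00002": "single-domain kinase-like protein",
}

def architecture_string(hits: List[DomainHit]) -> str:
    """Order domain footprints along the sequence and join their accessions."""
    ordered = sorted(hits, key=lambda h: h[1])
    return "+".join(acc for acc, _, _ in ordered)

def classify(hits: List[DomainHit]) -> str:
    arch = architecture_string(hits)
    return ARCHITECTURE_LABELS.get(arch, f"unlabeled architecture: {arch}")

if __name__ == "__main__":
    hits = [("cd00002", 210, 340), ("cd00001", 15, 160)]
    print(classify(hits))  # putative two-domain kinase
```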
An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids
NASA Technical Reports Server (NTRS)
Nugent, Richard O.; Tucker, Richard W.
1988-01-01
MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently-developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to a testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at dealing with issues resulting from integration efforts, such as dealing with potential mismatches of each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.
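A minimal Python sketch of the two integration mechanisms described above, request routing by service name and update notification on monitored relations, is given below. The broker class, service names, relations, and payloads are hypothetical stand-ins, not the testbed's actual interfaces.

```python
# Toy broker: systems declare named services, requesters address services rather
# than servers, and systems can watch relations for update notifications.
from collections import defaultdict
from typing import Any, Callable, Dict, List

class TestbedBroker:
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[dict], Any]] = {}
        self._watchers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def register_service(self, name: str, handler: Callable[[dict], Any]) -> None:
        # Each decision aid declares the services it can provide to others.
        self._services[name] = handler

    def request(self, service: str, payload: dict) -> Any:
        # Requesters name a service, not a server; the broker routes the request.
        if service not in self._services:
            raise LookupError(f"no system provides service {service!r}")
        return self._services[service](payload)

    def watch_relation(self, relation: str, callback: Callable[[dict], None]) -> None:
        # Systems monitor data relations of interest to them.
        self._watchers[relation].append(callback)

    def notify_update(self, relation: str, row: dict) -> None:
        for callback in self._watchers[relation]:
            callback(row)

if __name__ == "__main__":
    broker = TestbedBroker()
    broker.register_service("threat_assessment", lambda req: {"threat": "low", "target": req["target"]})
    broker.watch_relation("tracks", lambda row: print("track updated:", row))
    print(broker.request("threat_assessment", {"target": "T-17"}))
    broker.notify_update("tracks", {"id": "T-17", "lat": 38.9, "lon": -77.0})
```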
Integrating a local database into the StarView distributed user interface
NASA Technical Reports Server (NTRS)
Silberberg, D. P.
1992-01-01
A distributed user interface to the Space Telescope Data Archive and Distribution Service (DADS) known as StarView is being developed. The DADS architecture consists of the data archive as well as a relational database catalog describing the archive. StarView is a client/server system in which the user interface is the front-end client to the DADS catalog and archive servers. Users query the DADS catalog from the StarView interface. Query commands are transmitted via a network and evaluated by the database. The results are returned via the network and are displayed on StarView forms. Based on the results, users decide which data sets to retrieve from the DADS archive. Archive requests are packaged by StarView and sent to DADS, which returns the requested data sets to the users. The advantages of distributed client/server user interfaces over traditional one-machine systems are well known. Since users run software on machines separate from the database, the overall client response time is much faster. Also, since the server is free to process only database requests, the database response time is much faster. Disadvantages inherent in this architecture are slow overall database access time due to network delays, the lack of a 'get previous row' command, and the fact that refinements of a previously issued query must be submitted to the database server even though the domain of values has already been returned by the previous query. This architecture also does not allow users to cross correlate DADS catalog data with other catalogs. Clearly, a distributed user interface would be more powerful if it overcame these disadvantages. A local database is being integrated into StarView to overcome them. When a query is made through a StarView form, which is often composed of fields from multiple tables, it is translated to an SQL query and issued to the DADS catalog. At the same time, a local database table is created to contain the resulting rows of the query. The returned rows are displayed on the form as well as inserted into the local database table. Identical results are produced by reissuing the query to either the DADS catalog or the local table. Relational databases do not provide a 'get previous row' function because of the inherent complexity of retrieving previous rows of multiple-table joins. However, since this function is easily implemented on a single table, StarView uses the local table to retrieve the previous row. Also, StarView issues subsequent query refinements to the local table instead of the DADS catalog, eliminating the network transmission overhead. Finally, other catalogs can be imported into the local database for cross correlation with local tables. Overall, it is believed that this is a more powerful architecture for distributed database user interfaces.
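The local-table technique can be sketched in a few lines of Python with sqlite3: a stubbed remote query fills an in-memory table, after which previous-row navigation and query refinement are served locally. The table layout, column names, and stub data are assumptions made for the example and are not taken from StarView or DADS.

```python
# Mirror remote query results into a local SQLite table so that "previous row"
# and query refinement do not require another network round trip.
import sqlite3
from typing import List, Tuple

def query_remote_catalog(sql: str) -> List[Tuple]:
    # Stand-in for sending SQL to the remote catalog over the network.
    return [(1, "obs-001", 12.4), (2, "obs-002", 9.8), (3, "obs-003", 15.1)]

class LocalResultCache:
    def __init__(self) -> None:
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE results (row_id INTEGER, dataset TEXT, exposure REAL)")
        self.cursor_pos = -1

    def run_query(self, sql: str) -> None:
        rows = query_remote_catalog(sql)                  # one round trip to the server
        self.db.executemany("INSERT INTO results VALUES (?, ?, ?)", rows)
        self.cursor_pos = -1

    def next_row(self) -> Tuple:
        self.cursor_pos += 1
        return self._row_at(self.cursor_pos)

    def previous_row(self) -> Tuple:
        self.cursor_pos = max(0, self.cursor_pos - 1)     # trivial on a single local table
        return self._row_at(self.cursor_pos)

    def refine(self, where: str) -> List[Tuple]:
        # Refinements run against the local copy instead of the remote catalog.
        return list(self.db.execute(f"SELECT * FROM results WHERE {where}"))

    def _row_at(self, pos: int) -> Tuple:
        return list(self.db.execute("SELECT * FROM results ORDER BY row_id LIMIT 1 OFFSET ?", (pos,)))[0]

if __name__ == "__main__":
    cache = LocalResultCache()
    cache.run_query("SELECT * FROM catalog WHERE exposure > 5")
    print(cache.next_row(), cache.next_row(), cache.previous_row())
    print(cache.refine("exposure > 10"))
```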
Iavindrasana, Jimison; Depeursinge, Adrien; Ruch, Patrick; Spahni, Stéphane; Geissbuhler, Antoine; Müller, Henning
2007-01-01
The diagnostic and therapeutic processes, as well as the development of new treatments, are hindered by the fragmentation of the information which underlies them. In a multi-institutional research study database, the clinical information system (CIS) contains the primary data input. A substantial part of the budget of large-scale clinical studies is often spent on data creation and maintenance. The objective of this work is to design a decentralized, scalable, reusable database architecture with lower maintenance costs for managing and integrating the distributed, heterogeneous data required as the basis for a large-scale research project. Technical and legal aspects are taken into account based on various use case scenarios. The architecture contains four layers: data storage and access are decentralized at their production source; a connector acts as a proxy between the CIS and the external world; an information mediator serves as a data access point; and the client side. The proposed design will be implemented inside six clinical centers participating in the @neurIST project as part of a larger system on data integration and reuse for aneurysm treatment.
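The four-layer design can be illustrated with a short Python sketch in which each site keeps its rows behind a connector and a mediator offers a single query entry point that fans out to all registered connectors. Site names, record schemas, and the query format are illustrative assumptions rather than details of the @neurIST implementation.

```python
# Data stay at their production source behind a connector; a mediator provides
# a single access point that queries all connectors and merges the results.
from typing import Dict, List

class SiteConnector:
    """Proxy between a site's clinical information system and the outside world."""
    def __init__(self, site: str, local_rows: List[dict]) -> None:
        self.site = site
        self._rows = local_rows  # data remain local to the site

    def query(self, criteria: Dict[str, object]) -> List[dict]:
        matches = [r for r in self._rows if all(r.get(k) == v for k, v in criteria.items())]
        # Site-specific access control or anonymization rules could be enforced here.
        return [dict(r, site=self.site) for r in matches]

class Mediator:
    """Single data-access point that fans a query out to all registered connectors."""
    def __init__(self) -> None:
        self._connectors: List[SiteConnector] = []

    def register(self, connector: SiteConnector) -> None:
        self._connectors.append(connector)

    def query(self, criteria: Dict[str, object]) -> List[dict]:
        results: List[dict] = []
        for connector in self._connectors:
            results.extend(connector.query(criteria))
        return results

if __name__ == "__main__":
    mediator = Mediator()
    mediator.register(SiteConnector("site_a", [{"diagnosis": "aneurysm", "age": 54}]))
    mediator.register(SiteConnector("site_b", [{"diagnosis": "aneurysm", "age": 61}]))
    print(mediator.query({"diagnosis": "aneurysm"}))
```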
Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel
2016-10-01
The Smith-Waterman algorithm has great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, there are implementations in the literature that exploit the different hardware architectures available in a standard PC, such as the GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementation of the Smith-Waterman algorithm on a different hardware architecture, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution and yields up to a 2.58-fold performance gain compared with any other algorithm for searching sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: an Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.
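The split-dispatch-merge strategy can be sketched as follows in Python: the sequence database is partitioned, each partition is searched concurrently by a different backend (stubbed here as plain functions standing in for the GPU, CPU, and coprocessor implementations), and the partial results are unified and ranked. The scoring function is a toy placeholder, not a real Smith-Waterman implementation.

```python
# Partition the database, search each partition on a different backend in
# parallel, then merge and rank the hits.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict, List, Tuple

def toy_score(query: str, target: str) -> int:
    # Placeholder for a Smith-Waterman score: counts matching positions only.
    return sum(1 for a, b in zip(query, target) if a == b)

# Hypothetical backends; in the real application these would be hardware-specific
# Smith-Waterman implementations (GPU, CPU, coprocessor).
BACKENDS: Dict[str, Callable[[str, str], int]] = {
    "gpu": toy_score,
    "cpu": toy_score,
    "phi": toy_score,
}

def search(query: str, database: List[Tuple[str, str]]) -> List[Tuple[str, int]]:
    # One chunk of the database per backend.
    names = list(BACKENDS)
    chunks = {name: database[i::len(names)] for i, name in enumerate(names)}

    def run_chunk(name: str) -> List[Tuple[str, int]]:
        scorer = BACKENDS[name]
        return [(seq_id, scorer(query, seq)) for seq_id, seq in chunks[name]]

    # Non-overlapping hardware allows the chunks to be searched simultaneously.
    with ThreadPoolExecutor(max_workers=len(names)) as pool:
        partial_results = pool.map(run_chunk, names)

    # Unify the per-backend results and rank them.
    merged = [hit for part in partial_results for hit in part]
    return sorted(merged, key=lambda h: h[1], reverse=True)

if __name__ == "__main__":
    db = [("s1", "ACGTACGT"), ("s2", "ACGTTTTT"), ("s3", "TTTTTTTT")]
    print(search("ACGTACGT", db))
```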
2005-05-01
Earned Value, Enterprise Architecture, Entropy, Markov Models, Perron-Frobenius Theorem. Decreasing estimate variability over time, dE(t)/dt < 0, is contrasted with constant or increasing estimate variability for less capable organizations, dE(t)/dt > 0.