Sample records for disa computing services

  1. Paradigm Shift: Can TQM Save DoD’s Procurement Process?

    DTIC Science & Technology

    1992-11-25

From DISA to Services Apocalypse Now by B Brewin. November 2, 1992 Federal Computer Week, p 1 (1) Plan (2) Transfer resources and organize (3...Schuster. Argyris, C. (1965). Organization and innovation. Homewood, IL: Irwin. Brewin, B. (1992, November 2). DISA to services: apocalypse now. Federal

  2. Alleviating Bandwidth Constraints by Implementing Quality of Service on Teleport Site Connections

    DTIC Science & Technology

    2008-02-19

adequate 1 SkillSoft, "Implementing Quality of Service," DISA eLearning Portal, <https://hr.disa.mil...training/elearning/index.html> (19 February 2008), QoS overview. Cited hereafter as Skillsoft. 4 bandwidth, the router’s QoS mechanism is passive...nation’s warfighter. Hence, he tasked DISA to create Net-Centric Implementation Documents (NCID) that relate Global

  3. Defense Information Systems Agency (DISA) GIG Convergence Master Plan 2012 (GCMP 2012). Volume 1

    DTIC Science & Technology

    2012-08-02

mapping, the complete DISA technical baseline, and the GIG Technical Guidance (GTG). The Department of Defense (DoD), as part of its IT Effectiveness...Guidance (GTG) to guide their development of service offerings, which are added to the technical baseline when approved by the CEP. 1.2. Background...corresponding hyperlinks; the complete set of GTPs is called the GIG Technical Guidance (GTG). These appendices also contain linkages between the GCMP

  4. Disa vaccines for Bluetongue: A novel vaccine approach for insect-borne diseases

    USDA-ARS?s Scientific Manuscript database

    Bluetongue virus (BTV) lacking functional NS3/NS3a protein is named Disabled Infectious Single Animal (DISA) vaccine. The BT DISA vaccine platform is broadly applied by exchange of serotype specific proteins. BT DISA vaccines are produced in standard cell lines in established production facilities, ...

  5. Environmental Mission Impact Assessment

    DTIC Science & Technology

    2008-01-01

    System Agency’s (DISA) Federated Search service. The mission impacts can be generated for a general rectangular area, or generated for routes, route...that respond to queries (format- ted according to DISA’s Federated Search specifi- FIGURE 2 EVIS service-oriented architecture design, illustrating the

  6. Human Computer Interface Design Criteria. Volume 1. User Interface Requirements

    DTIC Science & Technology

    2010-03-19

    Television tuners, including tuner cards for use in computers, shall be equipped with secondary audio program playback circuitry. (c) All training...Shelf CSS Cascading Style Sheets DII Defense Information Infrastructure DISA Defense Information Systems Agency DoD Department of Defense

  7. 32 CFR 287.4 - Duties of the FOIA officer.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Directorate for Freedom of Information and Security Review (DFOISR) Washington Headquarters Services, reference FOIA issues. (e) Ensure the cooperation of DISA with DFOISR in fulfilling the responsibilities of...

  8. Access Control of Web- and Java-Based Applications

    NASA Technical Reports Server (NTRS)

    Tso, Kam S.; Pajevski, Michael J.

    2013-01-01

Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control is a critical component in the overall security solution, which also includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.
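The authorization-checking role described in this abstract can be illustrated with a minimal sketch. The class and method names below (`PolicyStore`, `check_access`) are hypothetical and are not the actual DISA-SS or OpenAM API; the sketch only shows the general pattern of a policy store consulted by applications before serving a request.

```python
# Hypothetical sketch of application-layer authorization checking.
# PolicyStore and check_access are illustrative names, NOT DISA-SS/OpenAM APIs.

class PolicyStore:
    """Maps (subject, resource) pairs to the set of actions allowed."""

    def __init__(self):
        self._rules = {}  # (subject, resource) -> set of allowed actions

    def grant(self, subject, resource, action):
        self._rules.setdefault((subject, resource), set()).add(action)

    def check_access(self, subject, resource, action):
        """Return True only if the subject holds this action on the resource."""
        return action in self._rules.get((subject, resource), set())


policies = PolicyStore()
policies.grant("analyst", "telemetry-db", "read")

print(policies.check_access("analyst", "telemetry-db", "read"))   # True
print(policies.check_access("analyst", "telemetry-db", "write"))  # False
```

In a real deployment the store would live behind a network-accessible service, so thick clients and standalone servers can make the same check as Web applications.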

  9. Communications Security: A Timeless Requirement While Conducting Warfare

    DTIC Science & Technology

    2012-04-10

services remotely to connect to the Global Information Grid (GIG). The GIG is the essential gateway to the Internet that DISA uses to allow service...nations that thrive off free market and global economies. These hostile actors, such as Al Qaida or Hezbollah, do not possess the

  10. DisA and c-di-AMP act at the intersection between DNA-damage response and stress homeostasis in exponentially growing Bacillus subtilis cells.

    PubMed

    Gándara, Carolina; Alonso, Juan C

    2015-03-01

Bacillus subtilis contains two vegetative diadenylate cyclases, DisA and CdaA, which produce cyclic di-AMP (c-di-AMP), and one phosphodiesterase, GdpP, that degrades it into a linear di-AMP. We report here that DisA and CdaA contribute to elicit repair of DNA damage generated by alkyl groups and H2O2, respectively, during vegetative growth. disA forms an operon with radA (also termed sms) that encodes a protein distantly related to RecA. Among the different DNA damage agents tested, only methyl methane sulfonate (MMS) affected disA null strain viability, while radA showed sensitivity to all of them. A strain lacking both disA and radA was as sensitive to MMS as the most sensitive single parent (epistasis). Low c-di-AMP levels (e.g. by over-expressing GdpP) decreased the ability of cells to repair DNA damage caused by MMS and, to a lesser extent, by H2O2, while high levels of c-di-AMP (absence of GdpP or expression of the sporulation-specific diadenylate cyclase CdaS) increased cell survival. Taken together, our results support the idea that c-di-AMP is a crucial signalling molecule involved in DNA repair, with DisA and CdaA contributing to modulate different DNA damage responses during exponential growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Bluetongue Disabled Infectious Single Animal (DISA) vaccine: Studies on the optimal route and dose in sheep.

    PubMed

    van Rijn, Piet A; Daus, Franz J; Maris-Veldhuis, Mieke A; Feenstra, Femke; van Gennip, René G P

    2017-01-05

Bluetongue (BT) is a disease of ruminants caused by bluetongue virus (BTV) transmitted by biting midges of the Culicoides genus. Outbreaks have been controlled successfully by vaccination; however, currently available BT vaccines have several shortcomings. Recently, we have developed BT Disabled Infectious Single Animal (DISA) vaccines based on live-attenuated BTV without expression of the dispensable non-structural NS3/NS3a protein. DISA vaccines are non-pathogenic replicating vaccines, do not cause viremia, enable DIVA and are highly protective. NS3/NS3a protein is involved in virus release, cytopathogenic effect and suppression of Interferon-I induction, suggesting that the vaccination route can be of importance. A standardized dose of DISA vaccine for serotype 8 has successfully been tested by subcutaneous vaccination. We show that 10- and 100-times dilutions of this previously tested dose did not reduce the VP7 humoral response. Further, the vaccination route of DISA vaccine strongly determined the induction of VP7-directed antibodies (Abs). Intravenous vaccination induced a high and prolonged humoral response but is not practical in field situations. VP7 seroconversion was stronger by intramuscular vaccination than by subcutaneous vaccination. For both vaccination routes and for two different DISA vaccine backbones, IgM Abs were rapidly induced but declined after 14 days post vaccination (dpv), whereas the IgG response was slower. Interestingly, intramuscular vaccination resulted in an initial peak followed by a decline up to 21 dpv, after which IgG Abs increased again in a steady and continuous manner. These results indicate that intramuscular vaccination is the optimal route. The protective dose of DISA vaccine has not been determined yet, but it is expected to be significantly lower than that of currently used BT vaccines.
Therefore, in addition to the advantages of improved safety and DIVA compatibility, the novel DISA vaccines will be cost-competitive with commercially available live-attenuated and inactivated vaccines for Bluetongue. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. An Assessment, Survey, and Systems Engineering Design of Information Sharing and Discovery Systems in a Network-Centric Environment

    DTIC Science & Technology

    2009-12-01

type of information available through DISA search tools: Centralized Search, Federated Search, and Enterprise Search (Defense Information Systems...Federated Search, and Enterprise Search services. Likewise, EFD and GCDS support COIs in discovering information by making information

  13. 32 CFR 316.2 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... DEFENSE INFORMATION SYSTEMS AGENCY PRIVACY PROGRAM § 316.2 Applicability. This part applies to Headquarters, Defense Information Systems Agency (DISA) and DISA field activities. [40 FR 55535, Nov. 28, 1975...

  14. Strategic Rebalance of the Three Component Air Force

    DTIC Science & Technology

    2013-03-01

protect its ability to regenerate capabilities that might be needed to meet future, unforeseen demands, maintaining intellectual capital and rank structure...93 Air Reserve Personnel Center, “Continuum of Service,” myPers, https://gum-crm.csd.disa.mil/app/answers/detail/a_id/19204/kw/continuum/p/16

  15. Pretrained Individual Manpower Study.

    DTIC Science & Technology

    1981-02-01

Air Force program are described below. Mobilization Positions For Retirees. Mobilization...of training and present to certifying official. I certify that I performed the training described in Section II and all statements are true and cor...DOCUMENTATION AND INDICATE PERCENTAGE OF DISABILITY.) b. DO YOU HAVE ANY OTHER PHYSICAL DISABILITY, SERVICE OR NON-SERVICE CONNECTED, THAT WOULD PREVENT YOU

  16. End-to-End Service Oriented Architectures (SOA) Security Project

    DTIC Science & Technology

    2012-02-01

ActivClientforCommonAccessCards/ [CAC3] http://koji.fedoraproject.org/koji/packageinfo?packageID=5 [CAC4] https://help.ubuntu.com/community...Ubuntu10.4-LTS-32.html [CAC8] http://dodpki.c3pki.chamb.disa.mil/rootca.html [CAC9] http://koji.fedoraproject.org/koji/packageinfo?packageID=5 [CAC10

  17. Application of bluetongue Disabled Infectious Single Animal (DISA) vaccine for different serotypes by VP2 exchange or incorporation of chimeric VP2.

    PubMed

    Feenstra, Femke; Pap, Janny S; van Rijn, Piet A

    2015-02-04

    Bluetongue is a disease of ruminants caused by the bluetongue virus (BTV). Bluetongue outbreaks can be controlled by vaccination, however, currently available vaccines have several drawbacks. Further, there are at least 26 BTV serotypes, with low cross protection. A next-generation vaccine based on live-attenuated BTV without expression of non-structural proteins NS3/NS3a, named Disabled Infectious Single Animal (DISA) vaccine, was recently developed for serotype 8 by exchange of the serotype determining outer capsid protein VP2. DISA vaccines are replicating vaccines but do not cause detectable viremia, and induce serotype specific protection. Here, we exchanged VP2 of laboratory strain BTV1 for VP2 of European serotypes 2, 4, 8 and 9 using reverse genetics, without observing large effects on virus growth. Exchange of VP2 from serotype 16 and 25 was however not possible. Therefore, chimeric VP2 proteins of BTV1 containing possible immunogenic regions of these serotypes were studied. BTV1, expressing 1/16 chimeric VP2 proteins was functional in virus replication in vitro and contained neutralizing epitopes of both serotype 1 and 16. For serotype 25 this approach failed. We combined VP2 exchange with the NS3/NS3a negative phenotype in BTV1 as previously described for serotype 8 DISA vaccine. DISA vaccine with 1/16 chimeric VP2 containing amino acid region 249-398 of serotype 16 raised antibodies in sheep neutralizing both BTV1 and BTV16. This suggests that DISA vaccine could be protective for both parental serotypes present in chimeric VP2. We here demonstrate the application of the BT DISA vaccine platform for several serotypes and further extend the application for serotypes that are unsuccessful in single VP2 exchange. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. DoD Component Privacy Impact Assessments

    Science.gov Websites

of Defense Chief Information Officer Home About DoD CIO Bios Organization DCIO C4&IIC DCIO IE Resources Activity (DHRA) Defense Manpower Data Center Defense Information Systems Agency (DISA)...Washington Headquarters Services (WHS) DoD Chief Information Officer DoD Use of Third-Party Websites and

  19. Environmental Assessment of Installation Development at Scott Air Force Base, Illinois

    DTIC Science & Technology

    2012-08-01

3-27 3-3. Unemployment Percentages, 2001 to 2011...bank system to service the proposed DISA facility and for future development at the former Cardinal Creek MFH neighborhood. Due to the sensitivity...of this information, the location of the communication duct banks is not shown on Figures 2-1 and 2-2.

  20. 32 CFR 316.4 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... DEFENSE INFORMATION SYSTEMS AGENCY PRIVACY PROGRAM § 316.4 Definitions. Add to the definitions contained in 32 CFR 310.6 the following: System Manager: The DISA official who is responsible for policies and procedures governing a DISA System of Record. His title and duty address will be found in the paragraph...

  1. Stonix, Version 0.x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-05-13

    STONIX is a program for configuring UNIX and Linux computer operating systems. It applies configurations based on the guidance from publicly accessible resources such as: NSA Guides, DISA STIGs, the Center for Internet Security (CIS), USGCB and vendor security documentation. STONIX is written in the Python programming language using the QT4 and PyQT4 libraries to provide a GUI. The code is designed to be easily extensible and customizable.
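The rule-driven pattern this record describes (read a system setting, compare it against published hardening guidance, report non-compliance) can be sketched briefly. The function below is an illustrative assumption, not STONIX's actual code or rule API, and the required settings shown are examples of the kind of directive a DISA STIG might specify.

```python
# Illustrative sketch of a STIG-style configuration audit; audit_sshd_config
# and the sample settings are hypothetical, not taken from STONIX itself.

def audit_sshd_config(text, required=(("PermitRootLogin", "no"),
                                      ("Protocol", "2"))):
    """Parse simple 'Key Value' config lines and return the required
    hardening keys whose current value does not match the benchmark."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if line:
            key, _, value = line.partition(" ")
            settings[key] = value.strip()
    return [key for key, value in required if settings.get(key) != value]


sample = "Protocol 2\nPermitRootLogin yes\n# comment line\n"
print(audit_sshd_config(sample))  # ['PermitRootLogin']
```

A real tool would pair each such check with a remediation step that rewrites the offending directive, which is the apply-configurations behavior the abstract describes.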

  2. Within-Host Competition between Two Entomopathogenic Fungi and a Granulovirus in Diatraea saccharalis (Lepidoptera: Crambidae).

    PubMed

    Pauli, Giuliano; Moura Mascarin, Gabriel; Eilenberg, Jørgen; Delalibera Júnior, Italo

    2018-06-13

We provide insights into how the interactions of two entomopathogenic fungi and a virus play a role in virulence, disease development, and pathogen reproduction for an economically important insect crop pest, the sugarcane borer Diatraea saccharalis (Fabricius) (Lepidoptera: Crambidae). In our model system, we highlight the antagonistic effects of the co-inoculation of Beauveria bassiana and granulovirus (DisaGV) on virulence, compared to their single counterparts. By contrast, combinations of Metarhizium anisopliae and B. bassiana, or M. anisopliae and DisaGV, have resulted in additive effects against the insect. Intriguingly, most cadavers that were derived from dual or triple infections produced signs/symptoms of only one species after the death of the infected host. In the combination of fungi and DisaGV, there was a trend where a higher proportion of viral infection bearing conspicuous symptoms occurred, except when the larvae were inoculated with M. anisopliae and DisaGV at the two highest inoculum rates. Co-infections with B. bassiana and M. anisopliae did not affect pathogen reproduction, since the sporulation from co-inoculated larvae did not differ from their single counterparts.

  3. Dynamic Cross Domain Information Sharing - A Concept Paper on Flexible Adaptive Policy Management

    DTIC Science & Technology

    2010-10-01

“no read-up, no write-down” rule of the classical Bell-La Padula [1] model is becoming untenable because of the increasing need to seamlessly handle...Elliott Bell, “Looking Back at the Bell-La Padula Model,” Washington, DC, USA, 2005. [2] (2009, Jan.) DISA NCES Website. [Online]. http://www.disa.mil

  4. Defense Information Systems Agency and Defense Logistics Agency Information Technology Contracts Awarded Without Competition Were Generally Justified

    DTIC Science & Technology

    2015-07-29

9 DISA Contracting Personnel Need to Improve When Synopsizing Noncompetitive IT...Conducted. Appendix F. Synopses Needed Improvements...exception under FAR 5.202 applies. We discuss this in the “DISA Contracting Personnel Need to Improve When Synopsizing Noncompetitive IT Contracts

  5. DOD Information Technology Standard Guidance (ITSG) Version 3.1

    DTIC Science & Technology

    1997-04-07

from NGSBs later (e.g., OSF's Motif specification became the basis for IEEE 1295.1). Most consortia specifications are available now, do not overlap...Illumination) CIM Center for Information Management (DISA) CINC Commander in Chief CIS CASE Integration Services CJCS Chairman of the Joint Chiefs of...Compound Text Encoding CUA Common User Access DAC Discretionary Access Controls DAD Draft Addendum (ISO) DAM Draft Amendment (ISO) DAP Document

  6. Federated Access to Cyber Observables for Detection of Targeted Attacks

    DTIC Science & Technology

    2014-10-01

each manages. The DQNs also utilize an intelligent information extraction capability for automatically suggesting mappings from text found in audit...Harmelen, and others, “OWL web ontology language overview,” W3C Recomm., vol. 10, no. 2004–03, p. 10, 2004. [4] D. Miller and B. Pearson, Security...Online]. Available: http://www.disa.mil/Services/Information-Assurance/HBS/HBSS. [21] S. Zanikolas and R. Sakellariou, “A taxonomy of grid

  7. The evolution of floral nectaries in Disa (Orchidaceae: Disinae): recapitulation or diversifying innovation?

    PubMed

    Hobbhahn, Nina; Johnson, Steven D; Bytebier, Benny; Yeung, Edward C; Harder, Lawrence D

    2013-11-01

    The Orchidaceae have a history of recurring convergent evolution in floral function as nectar production has evolved repeatedly from an ancestral nectarless state. However, orchids exhibit considerable diversity in nectary type, position and morphology, indicating that this convergence arose from alternative adaptive solutions. Using the genus Disa, this study asks whether repeated evolution of floral nectaries involved recapitulation of the same nectary type or diversifying innovation. Epidermis morphology of closely related nectar-producing and nectarless species is also compared in order to identify histological changes that accompanied the gain or loss of nectar production. The micromorphology of nectaries and positionally equivalent tissues in nectarless species was examined with light and scanning electron microscopy. This information was subjected to phylogenetic analyses to reconstruct nectary evolution and compare characteristics of nectar-producing and nectarless species. Two nectary types evolved in Disa. Nectar exudation by modified stomata in floral spurs evolved twice, whereas exudation by a secretory epidermis evolved six times in different perianth segments. The spur epidermis of nectarless species exhibited considerable micromorphological variation, including strongly textured surfaces and non-secreting stomata in some species. Epidermis morphology of nectar-producing species did not differ consistently from that of rewardless species at the magnifications used in this study, suggesting that transitions from rewardlessness to nectar production are not necessarily accompanied by visible morphological changes but only require sub-cellular modification. Independent nectary evolution in Disa involved both repeated recapitulation of secretory epidermis, which is present in the sister genus Brownleea, and innovation of stomatal nectaries. 
These contrasting nectary types and positional diversity within types imply weak genetic, developmental or physiological constraints in ancestral, nectarless Disa. Such functional convergence generated by morphologically diverse solutions probably also underlies the extensive diversity of nectary types and positions in the Orchidaceae.

  8. Network Application Server Using Extensible Mark-Up Language (XML) to Support Distributed Databases and 3D Environments

    DTIC Science & Technology

    2001-12-01

diides.ncr.disa.mil/xmlreg/user/index.cfm] [Deitel] Deitel, H., Deitel, P., Java How to Program, 3rd Edition, Prentice Hall, 1999. [DL99...presentation, and data) of information and the programming functionality. The Web framework addressed the ability to provide a framework for the distribution...BLANK v ABSTRACT Advances in computer communication technology and an increased awareness of how enhanced information access can lead to improved

  9. 2009 Defense Supply Center Columbus Land and Maritime Supply Chains: Business Conference and Exhibition

    DTIC Science & Technology

    2009-08-19

DSN: 388-7453 CSCASSIG@CSD.DISA.MIL DFAS eCommerce web site http://www.dfas.mil/contractorpay/electroniccommerce.html DFAS Customer Service...M-ATV is a separate category within the MRAP family of vehicles. ►Mission: Small-unit combat operations in highly restricted rural, mountainous...vehicles. ►Mission: Small-unit combat operations in highly restricted rural, mountainous and urban environments. ►Troop Transport: Carry up to five

  10. 2009 Defense Supply Center Columbus Land and Maritime Supply Chains Business Conference and Exhibition

    DTIC Science & Technology

    2009-08-19

DSN: 388-7453 CSCASSIG@CSD.DISA.MIL DFAS eCommerce web site http://www.dfas.mil/contractorpay/electroniccommerce.html DFAS Customer Service...M-ATV is a separate category within the MRAP family of vehicles. ►Mission: Small-unit combat operations in highly restricted rural, mountainous...vehicles. ►Mission: Small-unit combat operations in highly restricted rural, mountainous and urban environments. ►Troop Transport: Carry up to five

  11. The Audit Opinion of the DISA FY 2011 Working Capital Fund Financial Statements Was Not Adequately Supported

    DTIC Science & Technology

    2013-04-26

President’s Council on Integrity and Efficiency MD&A Management Discussion and Analysis MFR Memorandum for Record NoF Notification of...memorandums for record (MFRs) would have a material impact on the financial statements and ultimately Acuity’s opinion, • perform adequate completeness...the deficiencies identified by DISA in its FBWT MFRs would impact the reliability of the financial statements and ultimately Acuity’s opinion

  12. [Caesarean section among seven public hospitals at Lima: trend analysis during 2001-2008 period].

    PubMed

    Quispe, Antonio M; Santivañez-Pimentel, Alvaro; Leyton-Valencia, Imelda; Pomasunco, Denis

    2010-03-01

To analyze the trend of the monthly caesarean section rate (CSR) at the DISA V Lima-Ciudad hospitals during the period 2001-2008. This ecological study analyzed the monthly reports of all DISA V Lima-Ciudad hospitals attending childbirths, examining the trend of their monthly caesarean section ratio (TCM = total caesarean births in a month * 100 / total number of newborns in the same month) to determine characteristic patterns. Across the 7 hospitals studied, the average TCM between 2001 and 2008 was 36.9% ± 9.1% (range: 16.5%-71.4%). From 2001 (33.5% ± 6.9%) to 2008 (39.7% ± 8.3%) the TCM increased by 6.9% ± 7.0% on average, having registered an increase of 7.7% ± 6.4% by 2007 (43.5% ± 9.8%). Analysis of the TCM trend found that most hospitals show a significant increase between 2004 and 2005. The TCM also tends to increase in April (37.9% ± 9.7%) and September (40.2% ± 8.9%), a cycle that characterizes most DISA V Lima-Ciudad hospitals. The TCM of the DISA V Lima-Ciudad hospitals largely exceeds the limit recommended by WHO and, during the period 2001-2008, showed a significant increasing trend.
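The TCM formula quoted in this abstract is simple arithmetic, and a one-line computation makes the definition concrete. The function name and the example counts below are illustrative, not values from the study.

```python
# TCM as defined in the abstract:
# total caesarean births in a month * 100 / total newborns in the same month.
def monthly_csr(caesarean_births, total_newborns):
    """Return the monthly caesarean section ratio as a percentage."""
    if total_newborns == 0:
        raise ValueError("no births recorded for the month")
    return caesarean_births * 100.0 / total_newborns


# Hypothetical month: 148 caesarean deliveries out of 400 newborns.
print(monthly_csr(148, 400))  # 37.0
```

Averaging these monthly ratios over a year gives the per-hospital figures the study compares against the WHO-recommended limit.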

  13. Defense Information Systems Agency Technical Integration Support (DISA- TIS). MUMPS Study.

    DTIC Science & Technology

    1993-01-01

usable in DoD, MUMPS must continue to improve in its support of DoD and OSE standards such as SQL, X-Windows, POSIX, PHIGS, etc. MUMPS and large AISs...Language (SQL), X-Windows, and Graphical Kernel Services (GKS)) 2.2.2.3 FIPS Adoption by NIST The National Institute of Standards and Technology (NIST...many of the performance tuning mechanisms that must be performed explicitly with other systems. The VA looks forward to the SQL binding (1993 ANS) that

  14. Development of a depth-integrated sample arm (DISA) to reduce solids stratification bias in stormwater sampling

    USGS Publications Warehouse

    Selbig, William R.; ,; Roger T. Bannerman,

    2011-01-01

    A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogenously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.

  15. Collecting a better water-quality sample: Reducing vertical stratification bias in open and closed channels

    USGS Publications Warehouse

    Selbig, William R.

    2017-01-01

Collection of water-quality samples that accurately characterize average particle concentrations and distributions in channels can be complicated by large sources of variability. The U.S. Geological Survey (USGS) developed a fully automated Depth-Integrated Sample Arm (DISA) as a way to reduce bias and improve accuracy in water-quality concentration data. The DISA was designed to integrate with existing autosampler configurations commonly used for the collection of water-quality samples in vertical profile, thereby providing a better representation of average suspended sediment and sediment-associated pollutant concentrations and distributions than traditional fixed-point samplers. In controlled laboratory experiments, known concentrations of suspended sediment ranging from 596 to 1,189 mg/L were injected into a 3-foot-diameter closed channel (circular pipe) with regulated flows ranging from 1.4 to 27.8 ft3/s. Median suspended sediment concentrations in water-quality samples collected using the DISA were within 7 percent of the known, injected value compared to 96 percent for traditional fixed-point samplers. Field evaluation of this technology in open channel fluvial systems showed median differences between paired DISA and fixed-point samples to be within 3 percent. The range of particle size measured in the open channel was generally that of clay and silt. Differences between the concentration and distribution measured between the two sampler configurations could potentially be much larger in open channels that transport larger particles, such as sand.

  16. FY96 Support to the Defense Information Systems Agency (DISA), Center for Standards (CFS) for continuing improvement of the DoD HCI Style Guide. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avery, L.W.; Donohoo, D.T.; Sanchez, J.A.

    1996-09-30

PNNL successfully completed the three tasks: Task 1 - This task provided DISA with an updated set of design checklists that can be used to measure compliance with the Style Guide. These checklists are in Microsoft® Word 6.0 format. Task 2 - This task provided a discussion of two basic models for using the Style Guide and the Design Checklist, as a compliance tool and as a design tool.

  17. DisAp-dependent striated fiber elongation is required to organize ciliary arrays

    PubMed Central

    Galati, Domenico F.; Bonney, Stephanie; Kronenberg, Zev; Clarissa, Christina; Yandell, Mark; Elde, Nels C.; Jerka-Dziadosz, Maria; Giddings, Thomas H.; Frankel, Joseph

    2014-01-01

    Cilia-organizing basal bodies (BBs) are microtubule scaffolds that are visibly asymmetrical because they have attached auxiliary structures, such as striated fibers. In multiciliated cells, BB orientation aligns to ensure coherent ciliary beating, but the mechanisms that maintain BB orientation are unclear. For the first time in Tetrahymena thermophila, we use comparative whole-genome sequencing to identify the mutation in the BB disorientation mutant disA-1. disA-1 abolishes the localization of the novel protein DisAp to T. thermophila striated fibers (kinetodesmal fibers; KFs), which is consistent with DisAp’s similarity to the striated fiber protein SF-assemblin. We demonstrate that DisAp is required for KFs to elongate and to resist BB disorientation in response to ciliary forces. Newly formed BBs move along KFs as they approach their cortical attachment sites. However, because they contain short KFs that are rotated, BBs in disA-1 cells display aberrant spacing and disorientation. Therefore, DisAp is a novel KF component that is essential for force-dependent KF elongation and BB orientation in multiciliary arrays. PMID:25533842

  18. Maritime domain awareness community of interest net centric information sharing

    NASA Astrophysics Data System (ADS)

    Andress, Mark; Freeman, Brian; Rhiddlehover, Trey; Shea, John

    2007-04-01

    This paper highlights the approach taken by the Maritime Domain Awareness (MDA) Community of Interest (COI) in establishing an approach to data sharing that seeks to overcome many of the obstacles to sharing both within the federal government and with international and private sector partners. The approach uses the DOD Net Centric Data Strategy employed through Net Centric Enterprise Services (NCES) Service Oriented Architecture (SOA) foundation provided by Defense Information Systems Agency (DISA), but is unique in that the community is made up of more than just Defense agencies. For the first pilot project, the MDA COI demonstrated how four agencies from DOD, the Intelligence Community, Department of Homeland Security (DHS), and Department of Transportation (DOT) could share Automatic Identification System (AIS) data in a common format using shared enterprise service components.

  19. Cyber Security and Reliability in a Digital Cloud

    DTIC Science & Technology

    2013-01-01

a higher utilization of servers, lower professional support staff needs, economies of scale for the physical facility, and the flexibility to locate...as a system, the DoD can achieve the economies of scale typically associated with large data centers. Recommendation 3: The DoD CIO and DISA...providers will help set standards for secure cloud computing across the economy. Recommendation 7: The DoD CIO and DISA should participate in the

  20. 77 FR 56630 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-13

    ... public is to make these submissions available for public viewing on the Internet at http://www.... Decentralized locations: DISA Field Activities World-wide. Official mailing addresses are published as an...

  1. 76 FR 49455 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-10

    ... System name: Sensitive Compartmented Info (SCI) Posn/Pers Accountability System (February 22, 1993, 58 FR 10562). Reason: DISA does not upload or input PII into the Sensitive Compartmented Info (SCI) Posn/Pers...

  2. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

    USGS Publications Warehouse

    Fienen, Michael N.; Selbig, William R.

    2012-01-01

    A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.
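The regression technique described in this record, predicting a particle-size metric from surrogate storm parameters such as precipitation depth and intensity, can be sketched as ordinary least squares. All predictor names, coefficients, and numbers below are fabricated for illustration (the fabricated responses satisfy an exact linear relation so the fit can be checked); none of this is study data.

```python
# Hypothetical sketch: fit median particle diameter (um) to storm surrogates
# (precipitation depth, intensity) by ordinary least squares via the normal
# equations (X'X) b = X'y, solved with Gaussian elimination. Fabricated data.

def ols_fit(X, y):
    """Fit y ~ b0 + b1*x1 + b2*x2 + ... and return the coefficient list."""
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(xtx[r][c]))
        xtx[c], xtx[piv] = xtx[piv], xtx[c]
        xty[c], xty[piv] = xty[piv], xty[c]
        for r in range(c + 1, p):
            f = xtx[r][c] / xtx[c][c]
            for j in range(c, p):
                xtx[r][j] -= f * xtx[c][j]
            xty[r] -= f * xty[c]
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j] for j in range(i + 1, p))) / xtx[i][i]
    return beta

# (depth mm, intensity mm/h) -> median diameter, generated from 100 - 2*d - 3*i
storms = [(5, 2), (12, 8), (20, 4), (8, 10), (30, 6)]
d50 = [84.0, 52.0, 48.0, 54.0, 22.0]
b0, b1, b2 = ols_fit(storms, d50)
predict = lambda depth, intensity: b0 + b1 * depth + b2 * intensity  # predict(10, 5) ≈ 65.0
```

In practice a study like this would also report goodness of fit and prediction intervals; the sketch only shows the core surrogate-regression mechanics.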

  3. 32 CFR 287.1 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) FREEDOM OF INFORMATION ACT PROGRAM DEFENSE INFORMATION SYSTEMS AGENCY FREEDOM OF INFORMATION ACT PROGRAM § 287.1 Purpose. This part assigns responsibilities for the Freedom of Information Act (FOIA) Program for DISA. ...

  4. 32 CFR 287.2 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Applicability. This part applies to DISA and the Office of the Manager, National Communications System (OMNCS). ... 32 National Defense 2 2012-07-01 2012-07-01 false Applicability. 287.2 Section 287.2 National Defense Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) FREEDOM OF...

  5. 32 CFR 287.2 - Applicability.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Applicability. This part applies to DISA and the Office of the Manager, National Communications System (OMNCS). ... 32 National Defense 2 2013-07-01 2013-07-01 false Applicability. 287.2 Section 287.2 National Defense Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) FREEDOM OF...

  6. 32 CFR 287.2 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Applicability. This part applies to DISA and the Office of the Manager, National Communications System (OMNCS). ... 32 National Defense 2 2010-07-01 2010-07-01 false Applicability. 287.2 Section 287.2 National Defense Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) FREEDOM OF...

  7. 32 CFR 287.2 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Applicability. This part applies to DISA and the Office of the Manager, National Communications System (OMNCS). ... 32 National Defense 2 2011-07-01 2011-07-01 false Applicability. 287.2 Section 287.2 National Defense Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) FREEDOM OF...

  8. 32 CFR 287.2 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Applicability. This part applies to DISA and the Office of the Manager, National Communications System (OMNCS). ... 32 National Defense 2 2014-07-01 2014-07-01 false Applicability. 287.2 Section 287.2 National Defense Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) FREEDOM OF...

  9. Development of a depth-integrated sample arm to reduce solids stratification bias in stormwater sampling.

    PubMed

    Selbig, William R; Bannerman, Roger T

    2011-04-01

A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogeneously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water.
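The stratification bias this record describes can be illustrated with a small sketch: a fixed intake near the pipe invert samples only the sediment-rich bottom layer, while a depth-integrated sample averages concentrations across the column. The layer thicknesses and concentrations below are invented for illustration, chosen so the fixed-point value is double the integrated one, mirroring the roughly two-fold bias the abstract reports.

```python
# Illustrative only: compare a fixed-point sample at the invert with a
# depth-weighted mean over the water column. Numbers are fabricated.

def depth_integrated_conc(points):
    """Depth-weighted mean concentration from (layer_thickness_m, conc_mg_per_L) pairs."""
    total_depth = sum(t for t, _ in points)
    return sum(t * c for t, c in points) / total_depth

# Stratified column, bottom layer first: concentration drops away from the invert.
column = [(0.05, 400.0), (0.05, 220.0), (0.05, 120.0), (0.05, 60.0)]
fixed_point = column[0][1]                  # intake at the invert: 400 mg/L
integrated = depth_integrated_conc(column)  # 200 mg/L for these layers
```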

  10. Development of a depth-integrated sample arm to reduce solids stratification bias in stormwater sampling

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.T.

    2011-01-01

A new depth-integrated sample arm (DISA) was developed to improve the representation of solids in stormwater, both organic and inorganic, by collecting a water quality sample from multiple points in the water column. Data from this study demonstrate the idea of vertical stratification of solids in storm sewer runoff. Concentrations of suspended sediment in runoff were statistically greater using a fixed rather than multipoint collection system. Median suspended sediment concentrations measured at the fixed location (near the pipe invert) were approximately double those collected using the DISA. In general, concentrations and size distributions of suspended sediment decreased with increasing vertical distance from the storm sewer invert. Coarser particles tended to dominate the distribution of solids near the storm sewer invert as discharge increased. In contrast to concentration and particle size, organic material, to some extent, was distributed homogeneously throughout the water column, likely the result of its low specific density, which allows for thorough mixing in less turbulent water. © 2010 Publishing Technology.

  11. EPIDEMIOLOGY STUDY OF BIRTH DEFECTS AND DISINFECTION BYPRODUCTS

    EPA Science Inventory

Birth defects are the leading cause of infant mortality in the US, accounting for more than 20% of all infant deaths. In addition, birth defects are the fifth leading cause of years of potential life lost and contribute substantially to childhood morbidity and long-term disa...

  12. Department of Defense Goal Security Architecture (DGSA) Transition Plan. Version 1.0

    DTIC Science & Technology

    1995-01-30

explain the use of the policy representation methods. Responsible Organizations: DISA CFS or other Government standards organization. Inter-task...institutions, (2) DoD training contractors, (3) component and agency E&T representatives, and (4) Government and industry INFOSEC leadership. The short-term

  13. 76 FR 64902 - Membership of the Performance Review Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-19

    ...: Defense Information Systems Agency, Department of Defense. ACTION: Notice. SUMMARY: This notice announces... Systems Agency (DISA). The publication of PRB membership is required by 5 U.S.C. 4314(c)(4). The... appraisals and makes recommendations regarding performance ratings and performance scores to the Director...

  14. 32 CFR 287.8 - Appeal rights.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Appeal rights. 287.8 Section 287.8 National Defense Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) FREEDOM OF... rights. All appeals should be addressed to the Director, DISA, and be postmarked no later than 60 days...

  15. 32 CFR 316.5 - Policy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 2 2010-07-01 2010-07-01 false Policy. 316.5 Section 316.5 National Defense Department of Defense (Continued) OFFICE OF THE SECRETARY OF DEFENSE (CONTINUED) PRIVACY PROGRAM DEFENSE INFORMATION SYSTEMS AGENCY PRIVACY PROGRAM § 316.5 Policy. It is the policy of DISA: (a) To preserve the...

  16. Atomic Engineering of Superconductors by Design

    DTIC Science & Technology

    2014-10-23

Kumah, J. H. Ngai, E. D. Specht, D. A. Arena, F. J. Walker, C. H. Ahn. Phase diagram of compressively strained nickelate thin films, Applied Physics...1940 (2014) 10.1002/adma.201304256). 3. A. S. Disa, D. P. Kumah, J. H. Ngai, E. D. Specht, D. A. Arena, F. J. Walker, C. H. Ahn, Phase diagram of

  17. Is there a correlation between the change in the interscrew angle of the eight-plate and the delta joint orientation angles?

    PubMed

    Marangoz, Salih; Buyukdogan, Kadir; Karahan, Sevilay

    2017-01-01

It is known that the screws of the eight-plate hemiepiphysiodesis construct diverge as growth occurs through the physis. Our objective was to investigate whether there is a correlation between the amount of change of the joint orientation angle (JOA) and that of the interscrew angle (ISA) of the eight-plate hemiepiphysiodesis construct before and after correction. After institutional review board approval, medical charts and X-rays of all patients operated for either genu valgum or genu varum with eight-plate hemiepiphysiodesis were analyzed retrospectively. All consecutive patients at various ages with miscellaneous diagnoses were included. JOA and ISA were measured before and after correction. After review of the X-rays, statistical analyses were performed, including Pearson correlation coefficient and regression analyses. There were 53 segments of 30 patients included in the study. Eighteen were male and 12 were female. Mean age at surgery was 9.1 years (range, 3-17). Mean follow-up time was 21.5 (range, 7-46) months. The diagnoses were diverse. A strong correlation was found between the delta JOA (d-JOA) and delta ISA (d-ISA) of the eight-plate hemiepiphysiodesis construct (r = 0.759 (0.615-0.854, 95%CI), p < 0.001). This correlation was independent of the age and gender of the patient. There is a strong correlation between the d-ISA and the d-JOA. The d-ISA follows the d-JOA by a predictable amount through formulas yielded by regression analysis. This study confirms the clinical observation that the diverging angle between the screws correlates with the correction of the JOA. Level IV, Therapeutic study. Copyright © 2016 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.
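The central statistic in this record, Pearson's r between d-ISA and d-JOA, can be computed directly from paired angle changes. The paired values below are fabricated for demonstration and are not study data.

```python
# Illustrative Pearson correlation between the change in interscrew angle
# (d-ISA) and the change in joint orientation angle (d-JOA). Fabricated pairs.
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

d_isa = [4.0, 7.5, 10.0, 6.0, 12.5, 9.0]   # degrees, fabricated
d_joa = [3.0, 6.0, 8.5, 5.5, 10.0, 7.0]    # degrees, fabricated
r = pearson_r(d_isa, d_joa)                 # strong positive correlation
```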

  18. Understanding Return on Investment for Data Center Consolidation

    DTIC Science & Technology

    2013-09-01

Channel over Ethernet FDCCI Federal Data Center Consolidation Initiative GAO Government Accountability Office GDA Government Directed Actions GIG...to judge how each stakeholder group will benefit from it. Such measures as lower risk, greater control, better economies of scale, better utilization...NMS product by Kratos Networks called Neural Star to manage the Global Information Grid (GIG) (Kratos, 2013). DISA uses Neural Star as the primary

  19. Occupationally Based Disaster Medicine

    DTIC Science & Technology

    2011-01-01

and that better results may result from the assignment of a previously trained and drilled disaster treatment team that can take over in all...they will supervise those teams and are responsible, along with occupational health personnel, for the safety and health of the team

  20. Efficient Transmission of DoD PKI Certificates in Tactical Networks

    DTIC Science & Technology

    2010-01-01

on the command line. To extract individual fields from certificates, we used the X.509 subset of the C API included with OpenSSL (v1.0.0-beta2...www.disa.mil/nces. [17] NSA Suite B Cryptography. http://www.nsa.gov/ia/programs/suiteb_cryptography/index.shtml. [18] OpenSSL: The Open Source toolk

  1. Copper-Silicon Bronzes

    DTIC Science & Technology

    1933-05-11

copper alloys which have good static properties are disappointing in their endurance properties. The silicide alloys that are given high tensile strength...works satisfactorily, but the best welds have been obtained by using a flux composed of 90% fused borax and 10% sodium fluoride. The flux is...properties remain almost the same. Grain size increases with silicon. A study of hardening copper by heat treating its alloys with silicides

  2. Focused Logistics, Joint Vision 2010: A Joint Logistics Roadmap

    DTIC Science & Technology

    2010-01-01

AIS). AIT devices include bar codes for individual items, optical memory cards for multipacks and containers, radio frequency tags for containers and...Fortezza Card and Firewall technologies are being developed to prevent unauthorized access. As for infrastructure, DISA has already made significant in...radio frequency tags and optical memory cards, to continuously update the JTAV database. By September 1998, DSS will be deployed in all wholesale

  3. Disabled infectious single animal (DISA) vaccine against Bluetongue by deletion of viroporin-like NS3/NS3a expression is effective, safe, and enables differentiation of infected from vaccinated animals (DIVA)

    USDA-ARS?s Scientific Manuscript database

    The prototype virus species of the genus Orbivirus (family Reoviridae) is bluetongue virus (BTV) consisting of at least 27 serotypes. Bluetongue is a noncontagious haemorrhagic disease of ruminants spread by competent species of Culicoides biting midges in large parts of the world leading to huge ec...

  4. Technological Lessons from the Fukushima Dai-Ichi Accident

    DTIC Science & Technology

    2016-06-01

for human consumption. Fish from the area are now being assessed using a non-destructive testing regimen developed by Tohoku University. Monitoring...radioactivity limits for human consumption, even though much of the rice was grown in contaminated soil. Fish were contaminated both by the initial event...a devastating earthquake and tsunami. One of the many secondary effects of these disasters was a loss of control of the Fukushima Dai-Ichi nuclear

  5. STARS Conceptual Framework for Reuse Processes (CFRP). Volume 2: application Version 1.0

    DTIC Science & Technology

    1993-09-30

Analysis and Design DISA/CIM process [DIS93] Feature-Oriented Domain SEI process Analysis (FODA) [KCH+90] JIAWG Object-Oriented Domain JIAWG...Domain Analysis (FODA) Feasibility Study. Technical Report CMU/SEI-90-TR-21. Software Engineering Institute, Carnegie Mellon University, Pittsburgh...Electronic Systems Center Air Force Materiel Command, USAF Hanscom AFB, MA 01731-5000 Prepared by: The Boeing Company, IBM, Unisys Corporation, Defense

  6. Symbolic Execution Over Native x86

    DTIC Science & Technology

    2012-06-01

Disassembly to a Hello World Program Packed with the Ultimate Packer for eXecutables (UPX) (Taken from IDA Pro) Figure 2.3 A Simple Hello...Program Packed with the Ultimate Packer for eXecutables (UPX) (Taken from IDA Pro) operation details. However, the design of an IL language leads to...The Unofficial Guide to the World’s Most Popular Disassembler. No Starch Press, 2008. [13] Hex-Rays, “Interactive disassembler (ida) pro.” [Online

  7. The Department of Defense Net-Centric Data Strategy: Implementation Requires a Joint Community of Interest (COI) Working Group and Joint COI Oversight Council

    DTIC Science & Technology

    2007-05-17

metadata formats, metadata repositories, enterprise portals and federated search engines that make data visible, available, and usable to users...and provides the metadata formats, metadata repositories, enterprise portals and federated search engines that make data visible, available, and...develop an enterprise-wide data sharing plan, establishment of mission area governance processes for CIOs, DISA development of federated search specifications

  8. Transfer of Movement Control in Motor Skill Learning

    DTIC Science & Technology

    1986-04-01

e.g., fatigue, drugs, "moods," and so on), the effects disappearing as soon as the variable is removed, these influences should probably not be...common example is the parameter of movement size, revealed particularly well in handwriting. Consider one's signature, written on a check versus on a...extensively early in this century using tasks like handwriting, drawing and figure production, maze learning, and the like (Cook, 1934; Weig, 1932; see

  9. Component Provider’s and Tool Developer’s Handbook. Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-03-25

metrics [DISA93b]. The Software Engineering Institute (SEI) has developed a domain analysis process (Feature-Oriented Domain Analysis - FODA) and is...and expresses the range of variability of these decisions. 3.2.2.3 Feature Oriented Domain Analysis Feature Oriented Domain Analysis (FODA) is a domain...documents created in this phase. From a purely profit-oriented business point of view, a company may develop its own analysis of a government or commercial

  10. Optimization and comparison of simultaneous and separate acquisition protocols for dual isotope myocardial perfusion SPECT.

    PubMed

    Ghaly, Michael; Links, Jonathan M; Frey, Eric C

    2015-07-07

Dual-isotope simultaneous-acquisition (DISA) rest-stress myocardial perfusion SPECT (MPS) protocols offer a number of advantages over separate acquisition. However, crosstalk contamination due to scatter in the patient and interactions in the collimator degrade image quality. Compensation can reduce the effects of crosstalk, but does not entirely eliminate image degradations. Optimizing acquisition parameters could further reduce the impact of crosstalk. In this paper we investigate the optimization of the rest Tl-201 energy window width and relative injected activities using the ideal observer (IO), a realistic digital phantom population and Monte Carlo (MC) simulated Tc-99m and Tl-201 projections as a means to improve image quality. We compared performance on a perfusion defect detection task for Tl-201 acquisition energy window widths varying from 4 to 40 keV centered at 72 keV for a camera with a 9% energy resolution. We also investigated 7 different relative injected activities, defined as the ratio of Tc-99m and Tl-201 activities, while keeping the total effective dose constant at 13.5 mSv. For each energy window and relative injected activity, we computed the IO test statistics using a Markov chain Monte Carlo (MCMC) method for an ensemble of 1,620 triplets of fixed and reversible defect-present, and defect-absent noisy images modeling realistic background variations. The volume under the 3-class receiver operating characteristic (ROC) surface (VUS) was estimated and served as the figure of merit. For simultaneous acquisition, the IO suggested that relative Tc-to-Tl injected activity ratios of 2.6-5 and acquisition energy window widths of 16-22% were optimal. For separate acquisition, we observed a broad range of optimal relative injected activities from 2.6 to 12.1 and acquisition energy window widths of 16-22%. A negative correlation between Tl-201 injected activity and the width of the Tl-201 energy window was observed in these ranges. 
The results also suggested that DISA methods could potentially provide image quality as good as that obtained with separate acquisition protocols. We compared observer performance for the optimized protocols and the current clinical protocol using separate acquisition. The current clinical protocols provided better performance at a cost of injecting the patient with approximately double the injected activity of Tc-99m and Tl-201, resulting in substantially increased radiation dose.
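The figure of merit this record names, the volume under the 3-class ROC surface (VUS), has a standard nonparametric estimator: the fraction of score triplets, one from each class, that the observer orders correctly (here, defect-absent below fixed-defect below reversible-defect). The sketch below uses fabricated test statistics, not the study's MCMC outputs.

```python
# Nonparametric VUS estimate: P(s1 < s2 < s3) over all cross-class triplets.
# Chance level for three classes is 1/6; well-separated scores approach 1.
from itertools import product

def vus_estimate(class1, class2, class3):
    """Fraction of triplets with s1 < s2 < s3 (ties count as failures)."""
    correct = sum(1 for a, b, c in product(class1, class2, class3) if a < b < c)
    return correct / (len(class1) * len(class2) * len(class3))

absent     = [0.1, 0.3, 0.2, 0.4]    # hypothetical observer test statistics
fixed_def  = [0.5, 0.6, 0.45, 0.7]
reversible = [0.9, 0.8, 0.95, 0.75]
vus = vus_estimate(absent, fixed_def, reversible)  # perfectly ordered: 1.0
```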

  11. Applying Technology to Enhance Nursing Education in the Psychological Health and Traumatic Brain Injury Needs of Veterans and their Families

    DTIC Science & Technology

    2012-10-01

mhslearn.csd.disa.mil/ilearn/en/learner/mhs/portal/civilian_login.jsp Military Health System Portal 6. http://www.dcoe.health.mil/ Defense Centers of...forget. He is in rehab for a traumatic brain injury and is seen today for evaluation of hypertension. During the intake, Andrew casually says, I don't...statement: a. Did you know that hypertension can cause damage to blood vessels that keep organs healthy? b. Are you worried about side effects? There are

  12. Emergency Preparedness: Reports and Reflections of Local and County Emergency Managers

    DTIC Science & Technology

    1990-03-01

than it might be. VI. THE THREAT OF HAZARDS Questions 36 through 64, in turn, sought to ascertain the kinds of hazards local and county EMOs thought...hazard was defined (and the definition was included in the questionnaire) as one which (a) historically has affected the jurisdiction, (b) could result...Volcano 5.3 3.1 29 Tsunami 4.3 2.3 37 Prior involvements in disasters also produce high (rank order) correlations with both threat perceptions and the

  13. Optimization and comparison of simultaneous and separate acquisition protocols for dual isotope myocardial perfusion SPECT

    PubMed Central

    Ghaly, Michael; Links, Jonathan M; Frey, Eric C

    2015-01-01

Dual-isotope simultaneous-acquisition (DISA) rest-stress myocardial perfusion SPECT (MPS) protocols offer a number of advantages over separate acquisition. However, crosstalk contamination due to scatter in the patient and interactions in the collimator degrade image quality. Compensation can reduce the effects of crosstalk, but does not entirely eliminate image degradations. Optimizing acquisition parameters could further reduce the impact of crosstalk. In this paper we investigate the optimization of the rest Tl-201 energy window width and relative injected activities using the ideal observer (IO), a realistic digital phantom population and Monte Carlo (MC) simulated Tc-99m and Tl-201 projections as a means to improve image quality. We compared performance on a perfusion defect detection task for Tl-201 acquisition energy window widths varying from 4 to 40 keV centered at 72 keV for a camera with a 9% energy resolution. We also investigated 7 different relative injected activities, defined as the ratio of Tc-99m and Tl-201 activities, while keeping the total effective dose constant at 13.5 mSv. For each energy window and relative injected activity, we computed the IO test statistics using a Markov chain Monte Carlo (MCMC) method for an ensemble of 1,620 triplets of fixed and reversible defect-present, and defect-absent noisy images modeling realistic background variations. The volume under the 3-class receiver operating characteristic (ROC) surface (VUS) was estimated and served as the figure of merit. For simultaneous acquisition, the IO suggested that relative Tc-to-Tl injected activity ratios of 2.6–5 and acquisition energy window widths of 16–22% were optimal. For separate acquisition, we observed a broad range of optimal relative injected activities from 2.6 to 12.1 and acquisition energy window widths of 16–22%. A negative correlation between Tl-201 injected activity and the width of the Tl-201 energy window was observed in these ranges. 
The results also suggested that DISA methods could potentially provide image quality as good as that obtained with separate acquisition protocols. We compared observer performance for the optimized protocols and the current clinical protocol using separate acquisition. The current clinical protocols provided better performance at a cost of injecting the patient with approximately double the injected activity of Tc-99m and Tl-201, resulting in substantially increased radiation dose. PMID:26083239

  14. Investigation of Springing Responses on the Great Lakes Ore Carrier M/V STEWART J. CORT

    DTIC Science & Technology

    1980-12-01

175k tons. Using these values one can write equation (4) [illegible in the source scan]. The shifting of the...will have to write a routine to convert the floating-point numbers into the other machine's internal floating-point format. The CCI record is again...THE RESULTS AND WRITES THEM TO THE LINE PRINTER. C IT ALSO PUTS THE RESULTS IN A DISA FILE. C WRITTEN BY JCD NOVEMBER 1970

  15. EO/IR scene generation open source initiative for real-time hardware-in-the-loop and all-digital simulation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.

    2011-06-01

    The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) has formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR Scene Generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil) which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.

  16. Characterizing the distribution of particles in urban stormwater: advancements through improved sampling technology

    USGS Publications Warehouse

    Selbig, William R.

    2014-01-01

    A new sample collection system was developed to improve the representation of sediment in stormwater by integrating the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of particle size distribution from urban source areas. Collector streets had the lowest median particle diameter of 8 μm, followed by parking lots, arterial streets, feeder streets, and residential and mixed land use (32, 43, 50, 80 and 95 μm, respectively). Results from this study suggest there is no single distribution of particles that can be applied uniformly to runoff in urban environments; however, integrating more of the entire water column during the sample collection can address some of the shortcomings of a fixed-point sampler by reducing variability and bias caused by the stratification of solids in a water column.

  17. Characteristics of inhomogeneous jets in confined swirling air flows

    NASA Technical Reports Server (NTRS)

    So, R. M. C.; Ahmed, S. A.

    1984-01-01

    An experimental program to study the characteristics of inhomogeneous jets in confined swirling flows to obtain detailed and accurate data for the evaluation and improvement of turbulent transport modeling for combustor flows is discussed. The work was also motivated by the need to investigate and quantify the influence of confinement and swirl on the characteristics of inhomogeneous jets. The flow facility was constructed in a simple way which allows easy interchange of different swirlers and the freedom to vary the jet Reynolds number. The velocity measurements were taken with a one color, one component DISA Model 55L laser-Doppler anemometer employing the forward scatter mode. Standard statistical methods are used to evaluate the various moments of the signals to give the flow characteristics. The present work was directed at the understanding of the velocity field. Therefore, only velocity and turbulence data of the axial and circumferential components are reported for inhomogeneous jets in confined swirling air flows.

  18. Models of Dynamic Relations Among Service Activities, System State and Service Quality on Computer and Network Systems

    DTIC Science & Technology

    2010-01-01

    Service quality on computer and network systems has become increasingly important as many conventional service transactions are moved online. Service quality of computer and network services can be measured by the performance of the service process in throughput, delay, and so on. On a computer and network system, competing service requests of users and associated service activities change the state of limited system resources which in turn affects the achieved service ...relations of service activities, system state and service
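The record above measures service quality by throughput and delay as competing user requests consume limited system resources. One standard way to illustrate that load-to-delay relation (an assumption for illustration here, not the report's own model) is the M/M/1 queue, where the mean time in system is W = 1/(mu - lam) for arrival rate lam below service rate mu.

```python
# M/M/1 sketch: mean time in system explodes as offered load nears capacity.
# This is a textbook queueing result, not the modeling approach of the report.

def mm1_mean_delay(lam, mu):
    """Mean time in system (s) for an M/M/1 queue; requires lam < mu."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must stay below service rate")
    return 1.0 / (mu - lam)

# With service capacity mu = 100 req/s, delay grows sharply with load:
delays = [mm1_mean_delay(lam, 100.0) for lam in (50.0, 90.0, 99.0)]  # 0.02, 0.1, 1.0 s
```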

  19. Evaluation of wireless Local Area Networks

    NASA Astrophysics Data System (ADS)

    McBee, Charles L.

    1993-09-01

This thesis is an in-depth evaluation of current wireless Local Area Network (LAN) technologies. Wireless LANs rest on three technologies: infrared light, microwave, and spread spectrum. When the first wireless LANs were introduced, they were unfavorably labeled slow, expensive, and unreliable. The wireless LANs of today are competitively priced, more secure, easier to install, and provide data throughput equal to or greater than that of unshielded twisted pair cable. Wireless LANs are best suited for organizations that move office staff frequently, buildings that have historical significance, or buildings that contain asbestos. Additionally, an organization may realize a cost savings of between $300 and $1,200 each time a node is moved. Current wireless LAN technologies have a positive effect on LAN standards being developed by the Defense Information Systems Agency (DISA). DoD as a whole is beginning to focus on wireless LANs and mobile communications. If system managers want to remain successful, they need to stay abreast of this technology.

  20. Future of Department of Defense Cloud Computing Amid Cultural Confusion

    DTIC Science & Technology

    2013-03-01

    enterprise cloud-computing environment and transition to a public cloud service provider. Services have started the development of individual cloud-computing environments...endorsing cloud computing. It addresses related issues in matters of service culture changes and how strategic leaders will dictate the future of cloud...through data center consolidation and individual Service-provided cloud computing.

  1. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. 
We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute Hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter-types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
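    To make the XML/SOAP plumbing concrete, here is a minimal sketch of building a SOAP 1.1 request envelope with Python's standard library. The service namespace, operation name, and parameters are hypothetical illustrations; the actual SCEC/CME interfaces are not reproduced here:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"   # SOAP 1.1 envelope namespace
SVC_NS = "urn:example:hazard-service"                    # hypothetical service namespace

def build_soap_request(operation: str, params: dict) -> bytes:
    """Build a minimal SOAP 1.1 request: Envelope > Body > operation > params."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{SVC_NS}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="utf-8")

# A request like this would be POSTed over HTTP with a SOAPAction header;
# the operation and parameter names here are made up for illustration.
request = build_soap_request("convertCoordinates", {"lat": 34.05, "lon": -118.25})
```

    The abstract's observation about inefficiency is visible even in this toy: every numeric parameter is serialized to text inside XML elements, which is verbose for large data sets.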

  2. 78 FR 57884 - Recent Trends in U.S. Services Trade, 2014 Annual Report

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-20

    ... on electronic services (audiovisual, computer, and telecommunication services). The Commission is... (audiovisual, computer, and telecommunication services). Under Commission investigation No. 332-345, the... 2014 report will focus on trade in electronic services (audiovisual, computer, and telecommunication...

  3. The effects of integrating service learning into computer science: an inter-institutional longitudinal study

    NASA Astrophysics Data System (ADS)

    Payton, Jamie; Barnes, Tiffany; Buch, Kim; Rorrer, Audrey; Zuo, Huifang

    2015-07-01

    This study is a follow-up to one published in Computer Science Education in 2010 that reported preliminary results showing a positive impact of service learning on student attitudes associated with success and retention in computer science. That paper described how service learning was incorporated into a computer science course in the context of the Students & Technology in Academia, Research, and Service (STARS) Alliance, an NSF-supported broadening participation in computing initiative that aims to diversify the computer science pipeline through innovative pedagogy and inter-institutional partnerships. The current paper describes how the STARS Alliance has expanded to diverse institutions, all using service learning as a vehicle for broadening participation in computing and enhancing attitudes and behaviors associated with student success. Results supported the STARS model of service learning for enhancing computing efficacy and computing commitment and for providing diverse students with many personal and professional development benefits.

  4. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Computer services for customers of subsidiary... (REGULATION Y) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for.... (b) The Board understood from the facts presented that the service company owns a computer which it...

  5. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Computer services for customers of subsidiary... (REGULATION Y) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for.... (b) The Board understood from the facts presented that the service company owns a computer which it...

  6. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 3 2012-01-01 2012-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to furnish...

  7. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to furnish...

  8. 12 CFR 225.118 - Computer services for customers of subsidiary banks.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Computer services for customers of subsidiary...) Regulations Financial Holding Companies Interpretations § 225.118 Computer services for customers of... understood from the facts presented that the service company owns a computer which it utilizes to furnish...

  9. The University of South Carolina: College and University Computing Environment.

    ERIC Educational Resources Information Center

    CAUSE/EFFECT, 1987

    1987-01-01

    Both academic and administrative computing as well as network and communications services for the university are provided and supported by the Computer Services Division. Academic services, administrative services, systems engineering and database administration, communications, networking services, operations, and library technologies are…

  10. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation and has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of service (QoS) requirements and a fee agreed between a customer and an application service provider, and it plays an important role in e-business applications. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of the cluster computing resources used by an application service provider for an e-business application, which often requires parallel computation for high service performance, availability, and reliability, while satisfying the QoS and fee negotiated between the customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
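    The two QoS terms the abstract names, percentile response time and cluster utilization, can be checked mechanically. A hedged sketch follows; the nearest-rank percentile definition, thresholds, and sample data are illustrative assumptions, not the paper's provisioning algorithm:

```python
import math

def percentile(samples, p):
    """Nearest-rank p-th percentile of a list of samples."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[max(rank - 1, 0)]

def sla_met(response_times, utilization, p=95, max_response=2.0, max_util=0.8):
    """Both QoS terms must hold: the p-th percentile response time stays
    under max_response seconds and cluster utilization under max_util."""
    return percentile(response_times, p) <= max_response and utilization <= max_util

# Illustrative measurements: ten response times (seconds) and a utilization level.
times = [0.4, 0.6, 0.5, 1.9, 0.7, 0.8, 0.5, 0.9, 1.1, 0.6]
ok = sla_met(times, utilization=0.72)  # 95th percentile is 1.9 s, util 0.72
```

    A provisioning approach like the paper's would then search for the cheapest resource allocation for which a check of this kind still passes.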

  11. A Hybrid Cloud Computing Service for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Yang, C. P.

    2016-12-01

    Cloud Computing is becoming a norm for providing computing capabilities for advancing Earth sciences, including big Earth data management, processing, analytics, model simulations, and many other aspects. A hybrid spatiotemporal cloud computing service was built at the George Mason NSF spatiotemporal innovation center to meet these demands. This paper reports on several aspects of the service: 1) the hardware includes 500 computing servers and close to 2 PB of storage, as well as connections to XSEDE Jetstream and the Caltech experimental cloud computing environment for resource sharing; 2) the cloud service is geographically distributed across the east coast, west coast, and central region; 3) the cloud includes private clouds managed using OpenStack and Eucalyptus, with DC2 used to bridge these and the public AWS cloud for interoperability and for sharing computing resources when demand surges; 4) the cloud service is used to support the NSF EarthCube program through the ECITE project, and ESIP through the ESIP cloud computing cluster, the semantics testbed cluster, and other clusters; 5) the cloud service is also available to the Earth science communities for conducting geoscience research. A brief introduction on how to use the cloud service is included.

  12. 5 CFR 838.441 - Computing lengths of service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computing lengths of service. 838.441... Affecting Refunds of Employee Contributions Procedures for Computing the Amount Payable § 838.441 Computing lengths of service. (a) The smallest unit of time that OPM will calculate in computing a formula in a...

  13. 14 CFR 389.14 - Locating and copying records and documents.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Records Service (NARS) of the General Services Administration or by computer service bureaus. (1) The..., will furnish the tapes for a reasonable length of time to a computer service bureau chosen by the applicant subject to the Director's approval. The computer service bureau shall assume the liability for the...

  14. 14 CFR 389.14 - Locating and copying records and documents.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Records Service (NARS) of the General Services Administration or by computer service bureaus. (1) The..., will furnish the tapes for a reasonable length of time to a computer service bureau chosen by the applicant subject to the Director's approval. The computer service bureau shall assume the liability for the...

  15. 14 CFR 389.14 - Locating and copying records and documents.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Records Service (NARS) of the General Services Administration or by computer service bureaus. (1) The..., will furnish the tapes for a reasonable length of time to a computer service bureau chosen by the applicant subject to the Director's approval. The computer service bureau shall assume the liability for the...

  16. 76 FR 13665 - The Mega Life & Health Ins. Co., a Subsidiary of Healthmarkets, Inc., Including Workers Whose...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-14

    ... Computer Solutions and Software International, Inc., Dell Service Sales, Emdeon Business Services, KFORCE... workers from Computer Solutions and Software International, Inc., Dell Service Sales, Emdeon Business... from Computer Solutions and Software International, Inc., Dell Service Sales, Emdeon Business Services...

  17. xDSL connection monitor

    DOEpatents

    Horton, John J.

    2006-04-11

    A system and method of maintaining communication between a computer and a server, the server being in communication with the computer via xDSL service or dial-up modem service, with xDSL service being the default mode of communication, the method including sending a request to the server via xDSL service to which the server should respond and determining if a response has been received. If no response has been received, displaying on the computer a message (i) indicating that xDSL service has failed and (ii) offering to establish communication between the computer and the server via the dial-up modem, and thereafter changing the default mode of communication between the computer and the server to dial-up modem service. In a preferred embodiment, an xDSL service provider monitors dial-up modem communications and determines if the computer dialing in normally establishes communication with the server via xDSL service. The xDSL service provider can thus quickly and easily detect xDSL failures.
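    The failover logic the patent abstract describes (probe the server over xDSL; on failure, notify the user and make dial-up the new default) can be sketched in a few lines. The probe is a stubbed placeholder standing in for the real request/response exchange, not the patented implementation:

```python
def choose_link(probe_xdsl, default="xdsl"):
    """Pick the active link. probe_xdsl() should send a request to the
    server over xDSL and return True iff a response arrives in time."""
    if default == "xdsl" and not probe_xdsl():
        # Mirror the described behavior: report the failure and make
        # dial-up the new default mode of communication.
        return "dialup", "xDSL service has failed; offering dial-up modem service"
    return default, None

# Stub probes simulate the network; a real monitor would time out a request.
link, message = choose_link(lambda: False)  # simulated xDSL outage -> dial-up
```

    The provider-side monitoring the abstract mentions is the mirror image: observing which customers arrive over dial-up that normally connect via xDSL flags likely xDSL failures.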

  18. 5 CFR 838.242 - Computing lengths of service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computing lengths of service. 838.242... Affecting Employee Annuities Procedures for Computing the Amount Payable § 838.242 Computing lengths of service. (a)(1) The smallest unit of time that OPM will calculate in computing a formula in a court order...

  19. Report of the Task Force on Computer Charging.

    ERIC Educational Resources Information Center

    Computer Co-ordination Group, Ottawa (Ontario).

    The objectives of the Task Force on Computer Charging as approved by the Committee of Presidents of Universities of Ontario were: (1) to identify alternative methods of costing computing services; (2) to identify alternative methods of pricing computing services; (3) to develop guidelines for the pricing of computing services; (4) to identify…

  20. Implementation of cloud computing in higher education

    NASA Astrophysics Data System (ADS)

    Asniar; Budiawan, R.

    2016-04-01

    Cloud computing is a new trend in distributed computing, in which people develop service- and SOA (Service Oriented Architecture)-based applications. This technology is very useful to implement, especially in higher education. This research studies the need for, and the feasibility and suitability of, cloud computing in higher education, and then proposes a model of cloud computing service for higher education in Indonesia that can be implemented to support academic activities. Literature study is used as the research methodology to arrive at a proposed model of cloud computing in higher education. Finally, SaaS and IaaS are the cloud computing services proposed for implementation in higher education in Indonesia, and a hybrid cloud is the recommended service model.

  1. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems

    PubMed Central

    Wu, Jun; Su, Zhou; Li, Jianhua

    2017-01-01

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential of providing low latency for the communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. These advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking increases the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems. PMID:28758943
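    The abstract does not specify how crowd-sensed reports are aggregated into a recommendation, so the following is only a generic sketch of one common crowd-sensing pattern, a reputation-weighted mean of reported ratings, to show the flavor of such a recommender; the service names, ratings, and reputations are invented:

```python
def recommend(reports):
    """reports maps service name -> list of (rating, reporter_reputation).
    Return service names sorted by reputation-weighted mean rating,
    so reports from trusted reporters count for more."""
    scores = {}
    for service, observations in reports.items():
        total_weight = sum(w for _, w in observations)
        scores[service] = (
            sum(r * w for r, w in observations) / total_weight if total_weight else 0.0
        )
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical crowd reports about two security services.
reports = {
    "ids-a": [(4.0, 0.9), (3.0, 0.2)],  # trusted reporters rate it highly
    "ids-b": [(5.0, 0.1), (2.0, 0.9)],  # trusted reporters rate it low
}
ranking = recommend(reports)
```

    Weighting by reporter reputation is one simple defense against the risk the paper highlights: in a social setting, unreliable or malicious "friends" can otherwise skew recommendations.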

  2. Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems.

    PubMed

    Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua

    2017-07-30

    Fog computing, shifting intelligence and resources from the remote cloud to edge networks, has the potential of providing low latency for the communication from sensing data sources to users. For the objects from the Internet of Things (IoT) to the cloud, it is a new trend that the objects establish social-like relationships with each other, which efficiently brings the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. These advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which the services of fog are provisioned based on "friend" relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking increases the complexity and security risks of fog computing services, creating difficulties for security service recommendation in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems.

  3. Advanced information processing system: Local system services

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura; Alger, Linda; Whittredge, Roy; Stasiowski, Peter

    1989-01-01

    The Advanced Information Processing System (AIPS) is a multi-computer architecture composed of hardware and software building blocks that can be configured to meet a broad range of application requirements. The hardware building blocks are fault-tolerant, general-purpose computers, fault- and damage-tolerant networks (both computer and input/output), and interfaces between the networks and the computers. The software building blocks are the major software functions: local system services, input/output system services, inter-computer system services, and the system manager. The foundation of the local system services is an operating system with the functions required for a traditional real-time multi-tasking computer, such as task scheduling, inter-task communication, memory management, interrupt handling, and time maintenance. Resting on this foundation are the redundancy management functions necessary in a redundant computer and the status reporting functions required for an operator interface. The functional requirements, functional design and detailed specifications for all the local system services are documented.

  4. Applications integration in a hybrid cloud computing environment: modelling and platform

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services, and even infrastructure services, provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications, such as data storage, computing processes, document sharing and even management information system services, as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds along with their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds with intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.

  5. 5 CFR 838.623 - Computing lengths of service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computing lengths of service. 838.623 Section 838.623 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE... Employee Annuities or Refunds of Employee Contributions Computation of Benefits § 838.623 Computing lengths...

  6. An u-Service Model Based on a Smart Phone for Urban Computing Environments

    NASA Astrophysics Data System (ADS)

    Cho, Yongyun; Yoe, Hyun

    In urban computing environments, all services should be based on the interactions between humans and the environments around them, which occur frequently and ordinarily at home and in the office. This paper proposes a u-service model based on a smart phone for urban computing environments. The suggested service model includes a context-aware and personalized service scenario development environment that can instantly describe a user's u-service demand or situation information with smart devices. To this end, the architecture of the suggested service model consists of a graphical service editing environment for smart devices, a u-service platform, and an infrastructure with sensors and WSN/USN. The graphic editor expresses contexts as execution conditions of a new service through an ontology-based context model. The service platform executes the service scenario according to the contexts. With the suggested service model, a user in urban computing environments can quickly and easily compose u-services or new services using smart devices.

  7. Grids, virtualization, and clouds at Fermilab

    DOE PAGES

    Timm, S.; Chadwick, K.; Garzoglio, G.; ...

    2014-06-11

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  8. Grids, virtualization, and clouds at Fermilab

    NASA Astrophysics Data System (ADS)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  9. Service Mediation and Negotiation Bootstrapping as First Achievements Towards Self-adaptable Cloud Services

    NASA Astrophysics Data System (ADS)

    Brandic, Ivona; Music, Dejan; Dustdar, Schahram

    Nowadays, novel computing paradigms such as Cloud Computing are gaining more and more importance. In the case of Cloud Computing, users pay for the usage of computing power provided as a service; beforehand, they can negotiate specific functional and non-functional requirements relevant for the application execution. However, providing computing power as a service poses different research challenges. On the one hand, dynamic, versatile, and adaptable services are required, which can cope with system failures and environmental changes. On the other hand, human interaction with the system should be minimized. In this chapter we present the first results in establishing adaptable, versatile, and dynamic services, considering negotiation bootstrapping and service mediation, achieved in the context of the Foundations of Self-Governing ICT Infrastructures (FoSII) project. We discuss novel meta-negotiation and SLA mapping solutions for Cloud services, bridging the gap between current QoS models and Cloud middleware and representing important prerequisites for the establishment of autonomic Cloud services.

  10. 75 FR 41522 - Novell, Inc., Including On-Site Leased Workers From Affiliated Computer Services, Inc., (ACS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-16

    ... technical support for the production of computer software. The company reports that workers leased from Affiliated Computer Services, Inc., (ACS) were employed on-site at the Provo, Utah location of Novell, Inc... On-Site Leased Workers From Affiliated Computer Services, Inc., (ACS), Provo, UT; Amended...

  11. Architectural Implications of Cloud Computing

    DTIC Science & Technology

    2011-10-24

    Public Cloud Infrastructure-as-a-Service (IaaS) Software-as-a-Service (SaaS) Cloud Computing Types Platform-as-a-Service (PaaS) Based on Type of...Twitter #SEIVirtualForum © 2011 Carnegie Mellon University Software-as-a-Service (SaaS) Model of software deployment in which a third-party...and System Solutions (RTSS) Program. Her current interests and projects are in service-oriented architecture (SOA), cloud computing, and context

  12. Security Assessment Simulation Toolkit (SAST) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meitzler, Wayne D.; Ouderkirk, Steven J.; Hughes, Chad O.

    2009-11-15

    The Department of Defense Technical Support Working Group (DoD TSWG) investment in the Pacific Northwest National Laboratory (PNNL) Security Assessment Simulation Toolkit (SAST) research planted a technology seed that germinated into a suite of follow-on Research and Development (R&D) projects, culminating in software that is used by multiple DoD organizations. The DoD TSWG technology transfer goal for SAST is already in progress. The Defense Information Systems Agency (DISA), the Defense-wide Information Assurance Program (DIAP), the Marine Corps, the Office of Naval Research (ONR) National Center for Advanced Secure Systems Research (NCASSR), and the Office of the Secretary of Defense International Exercise Program (OSD NII) are currently investing to take SAST to the next level. PNNL currently distributes the software to over 6 government organizations and 30 DoD users. For the past five DoD-wide Bulwark Defender exercises, the adoption of this new technology created an expanding role for SAST. In 2009, SAST was also used in the OSD NII International Exercise and is currently scheduled for use in 2010.

  13. Utility Computing: Reality and Beyond

    NASA Astrophysics Data System (ADS)

    Ivanov, Ivan I.

    Utility Computing is not a new concept. It involves organizing and providing a wide range of computing-related services as public utilities. The idea of offering computing as a public utility, much like water, gas, electricity, and telecommunications, was announced as early as 1955, and Utility Computing remained a concept for nearly 50 years. Now some models and forms of Utility Computing are emerging, such as storage and server virtualization, grid computing, and automated provisioning. Recent trends in Utility Computing as a complex technology involve business procedures that could profoundly transform the nature of companies' IT services, organizational IT strategies, technology infrastructure, and business models. In the ultimate Utility Computing models, organizations will be able to acquire as many IT services as they need, whenever and wherever they need them. Based on networked businesses and new secure online applications, Utility Computing would facilitate the “agility-integration” of IT resources and services within and between virtual companies. With the application of Utility Computing there could be concealment of the complexity of IT, reduction of operational expenses, and conversion of IT costs to variable `on-demand' services. How far should technology, business, and society go in adopting Utility Computing forms, modes, and models?

  14. Cloud computing applications for biomedical science: A perspective.

    PubMed

    Navale, Vivek; Bourne, Philip E

    2018-06-01

    Biomedical research has become a digital data-intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research.

  15. Cloud computing applications for biomedical science: A perspective

    PubMed Central

    2018-01-01

    Biomedical research has become a digital data–intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research. PMID:29902176

  16. Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*

    PubMed Central

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.

    2015-01-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large-scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363

  17. Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.

    PubMed

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L

    2015-02-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large-scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
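    The cost-estimation tooling mentioned in the abstract above can be sketched as simple arithmetic over the job's scale. In this minimal illustration, the per-file search time, the number of concurrent search slots per instance, and the hourly instance rate are all hypothetical assumptions, not figures from the paper, and billing is assumed to round up to whole instance-hours:

```python
# Hypothetical back-of-the-envelope estimator in the spirit of the
# resource/cost estimation tools described above. All numeric inputs
# are illustrative assumptions, not figures from the paper.
import math

def estimate_cloud_cost(n_files, minutes_per_file, slots_per_instance,
                        hourly_rate_usd):
    """Return (instance_hours, cost_usd) for a batch search job."""
    total_minutes = n_files * minutes_per_file
    # Work is spread across concurrent search slots on each instance.
    instance_minutes = total_minutes / slots_per_instance
    # Assume billing rounds up to whole instance-hours.
    instance_hours = math.ceil(instance_minutes / 60)
    return instance_hours, instance_hours * hourly_rate_usd

hours, cost = estimate_cloud_cost(n_files=1100, minutes_per_file=3,
                                  slots_per_instance=8, hourly_rate_usd=0.40)
print(hours, cost)
```

    A real estimator would also account for data-transfer time and per-engine differences in search speed, but the shape of the computation is the same.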

  18. Ubiquitous Computing Services Discovery and Execution Using a Novel Intelligent Web Services Algorithm

    PubMed Central

    Choi, Okkyung; Han, SangYong

    2007-01-01

    Ubiquitous Computing makes it possible to determine in real time the location and situation of service requesters in a web service environment, as it enables access to computers at any time and in any place. Though research on various aspects of ubiquitous commerce is progressing at enterprises and research centers, both domestically and overseas, analysis of a customer's personal preferences based on the semantic web and on rule-based services using semantics is not currently being conducted. This paper proposes a Ubiquitous Computing Services System that enables rule-based as well as semantics-based search, supporting the combination of the electronic and physical spaces into one and making real-time search for web services and the construction of efficient web services possible.

  19. FORCEnet Net Centric Architecture - A Standards View

    DTIC Science & Technology

    2006-06-01

    [Figure residue: layered architecture diagram of the service platform and service framework, with layers USER-FACING SERVICES, SHARED SERVICES, NETWORKING/COMMUNICATIONS, STORAGE, COMPUTING PLATFORM, DATA INTERCHANGE/INTEGRATION, DATA MANAGEMENT, and APPLICATION]

  20. The Effects of Integrating Service Learning into Computer Science: An Inter-Institutional Longitudinal Study

    ERIC Educational Resources Information Center

    Payton, Jamie; Barnes, Tiffany; Buch, Kim; Rorrer, Audrey; Zuo, Huifang

    2015-01-01

    This study is a follow-up to one published in computer science education in 2010 that reported preliminary results showing a positive impact of service learning on student attitudes associated with success and retention in computer science. That paper described how service learning was incorporated into a computer science course in the context of…

  1. A Selective Bibliography of Building Environment and Service Systems with Particular Reference to Computer Applications. Computer Report CR20.

    ERIC Educational Resources Information Center

    Forwood, Bruce S.

    This bibliography has been produced as part of a research program attempting to develop a new approach to building environment and service systems design using computer-aided design techniques. As such it not only classifies available literature on the service systems themselves, but also contains sections on the application of computers and…

  2. Changing Pre-Service Mathematics Teachers' Beliefs about Using Computers for Teaching and Learning Mathematics: The Effect of Three Different Models

    ERIC Educational Resources Information Center

    Karatas, Ilhan

    2014-01-01

    This study examines the effect of three different computer integration models on pre-service mathematics teachers' beliefs about using computers in mathematics education. Participants included 104 pre-service mathematics teachers (36 second-year students in the Computer Oriented Model group, 35 fourth-year students in the Integrated Model (IM)…

  3. Templet Web: the use of volunteer computing approach in PaaS-style cloud

    NASA Astrophysics Data System (ADS)

    Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil

    2018-03-01

    This article presents the Templet Web cloud service. The service is designed to automate high-performance scientific computing. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, and machine learning. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives for achieving this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source-code deployment management; (c) automation of high-performance computing program development. The distinctive feature of the service is an approach mainly used in the field of volunteer computing, in which a person who has access to a computer system delegates their access rights to the requesting user. We developed an access procedure, algorithms, and software for utilizing the free computational resources of an academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and for solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.

  4. 5 CFR 831.703 - Computation of annuities for part-time service.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... during those periods of creditable service. Pre-April 7, 1986, average pay means the largest annual rate..., 1986, service is computed in accordance with 5 U.S.C. 8339 using the pre-April 7, 1986, average pay and... computed in accordance with 5 U.S.C. 8339 using the post-April 6, 1986, average pay and length of service...
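    The two-part computation described in the regulation above can be illustrated schematically: one component uses the pre-April 7, 1986 average pay and service, the other uses the post-April 6, 1986 figures, with part-time service prorated. In this sketch the flat accrual rate, the example salaries, and the simple hours-based proration are loudly hypothetical simplifications, not the statutory 5 U.S.C. 8339 formula:

```python
# Schematic sketch of a two-part annuity computation. The flat accrual
# rate and simple proration are illustrative assumptions, NOT the
# statutory 5 U.S.C. 8339 formula.

def annuity_component(average_pay, years_of_service, accrual_rate,
                      proration=1.0):
    """One portion of the annuity; proration < 1.0 models part-time service."""
    return average_pay * years_of_service * accrual_rate * proration

# Hypothetical career: 10 full-time years before April 7, 1986, then
# 8 half-time years afterward at a higher average pay.
pre_1986 = annuity_component(40000.0, 10, 0.02)
post_1986 = annuity_component(50000.0, 8, 0.02, proration=0.5)
total_annuity = pre_1986 + post_1986
print(total_annuity)
```

    The key structural point, matching the regulation, is that the two service periods are computed separately with their own average pay before being summed.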

  5. Computer Assisted Rehabilitation Service Delivery.

    ERIC Educational Resources Information Center

    West Virginia Rehabilitation Research and Training Center, Dunbar.

    This volume consisting of state of the art reviews, suggestions and guidelines for practitioners, and program descriptions deals with the current and potential applications of computers in the delivery of services for vocational rehabilitation (VR). Discussed first are current applications of computer technology in rehabilitative service delivery.…

  6. 38 CFR 3.15 - Computation of service.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Computation of service. 3.15 Section 3.15 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.15 Computation of service...

  7. 38 CFR 3.15 - Computation of service.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Computation of service. 3.15 Section 3.15 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.15 Computation of service...

  8. 38 CFR 3.15 - Computation of service.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Computation of service. 3.15 Section 3.15 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.15 Computation of service...

  9. 38 CFR 3.15 - Computation of service.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Computation of service. 3.15 Section 3.15 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.15 Computation of service...

  10. 38 CFR 3.15 - Computation of service.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Computation of service. 3.15 Section 3.15 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.15 Computation of service...

  11. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  12. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  13. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  14. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  15. 47 CFR 64.702 - Furnishing of enhanced services and customer-premises equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... separate operating, marketing, installation, and maintenance personnel, and utilize separate computer... available to the separate corporation any capacity or computer system component on its computer system or... Enhanced Services and Customer-Premises Equipment by Bell Operating Companies; Telephone Operator Services...

  16. Mobile Cloud Computing with SOAP and REST Web Services

    NASA Astrophysics Data System (ADS)

    Ali, Mushtaq; Fadli Zolkipli, Mohamad; Mohamad Zain, Jasni; Anwar, Shahid

    2018-05-01

    Mobile computing in conjunction with mobile web services offers a strong approach by which the limitations of mobile devices may be tackled. Mobile web services are based on two technologies, SOAP and REST, which work with existing protocols to develop web services. Each approach has its own distinct features, but given the resource constraints of mobile devices, the better of the two is the one that minimizes computation and transmission overhead while offloading. Transferring a mobile device's load to remote servers for execution is called computational offloading. There are numerous approaches that make computational offloading a viable solution for easing the resource constraints of mobile devices, yet a dynamic method of computational offloading is always required for the smooth and simple migration of complex tasks. The intention of this work is to present a distinctive approach that does not engage mobile resources for long periods. We use web services to delegate computationally intensive tasks for remote execution, and we tested both the SOAP and REST web service approaches to mobile computing. Two parameters were considered in our lab experiments: execution time and energy consumption. The results show that RESTful web service execution far outperforms executing the same application with SOAP web services in terms of both execution time and energy consumption. In experiments with the developed prototype matrix-multiplication app, REST execution time is about 200% better than the SOAP approach, and REST energy consumption is about 250% better.
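    The REST-style offloading the authors test can be sketched with Python's standard library alone. In this sketch a local HTTP server stands in for the remote execution host, and the client POSTs a matrix-multiplication job as JSON; the /multiply endpoint and the JSON job format are hypothetical, not taken from the paper:

```python
# Minimal sketch of REST-style computational offloading using only the
# Python standard library. A local HTTP server stands in for the remote
# execution host; endpoint name and payload format are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OffloadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON job description sent by the client.
        length = int(self.headers["Content-Length"])
        job = json.loads(self.rfile.read(length))
        a, b = job["a"], job["b"]
        # Perform the computationally intensive task server-side.
        product = [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
                   for row in a]
        body = json.dumps({"result": product}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def offload_multiply(a, b, port):
    """Client side: delegate the multiplication to the REST endpoint."""
    req = urllib.request.Request(
        "http://127.0.0.1:%d/multiply" % port,
        data=json.dumps({"a": a, "b": b}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

server = HTTPServer(("127.0.0.1", 0), OffloadHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = offload_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]],
                          server.server_address[1])
server.shutdown()
print(result)
```

    The lightweight JSON-over-HTTP exchange here is one reason REST tends to beat SOAP on constrained devices: there is no XML envelope to build, send, and parse on each call.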

  17. CAD/CAE Integration Enhanced by New CAD Services Standard

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.

    2002-01-01

    A Government-industry team led by the NASA Glenn Research Center has developed a computer interface standard for accessing data from computer-aided design (CAD) systems. The Object Management Group, an international computer standards organization, has adopted this CAD services standard. The new standard allows software (e.g., computer-aided engineering (CAE) and computer-aided manufacturing (CAM) software) to access multiple CAD systems through one programming interface. The interface is built on top of a distributed computing system called the Common Object Request Broker Architecture (CORBA). CORBA allows the CAD services software to operate in a distributed, heterogeneous computing environment.

  18. Analysis on the security of cloud computing

    NASA Astrophysics Data System (ADS)

    He, Zhonglin; He, Yuhua

    2011-02-01

    Cloud computing is a new technology arising from the fusion of computer technology and Internet development. It will lead a revolution in the IT and information fields. However, in cloud computing, data and application software are stored at large data centers, and the management of data and services is not completely trustworthy, resulting in safety problems; this is the difficult point in improving the quality of cloud services. This paper briefly introduces the concept of cloud computing. Considering the characteristics of cloud computing, it constructs a security architecture for cloud computing. At the same time, with an eye toward the security threats cloud computing faces, several corresponding strategies are provided from the perspectives of cloud computing users and service providers.

  19. A Formal Specification and Verification Method for the Prevention of Denial of Service in Ada Services

    DTIC Science & Technology

    1988-03-01

    Mechanism; Computer Security. ...denial of service. This paper assumes that the reader is a computer science or engineering professional working in the area of formal specification and...recovery from such events as deadlocks and crashes can be accounted for in the computation of the waiting time for each service in the service hierarchy

  20. Assessing Pre-Service Teachers' Computer Phobia Levels in Terms of Gender and Experience, Turkish Sample

    ERIC Educational Resources Information Center

    Ursavas, Omer Faruk; Karal, Hasan

    2009-01-01

    In this study it is aimed to determine the level of pre-service teachers' computer phobia. Whether or not computer phobia meaningfully varies statistically according to gender and computer experience has been tested in the study. The study was performed on 430 pre-service teachers at the Education Faculty in Rize/Turkey. Data in the study were…

  1. Cloud Computing: An Overview

    NASA Astrophysics Data System (ADS)

    Qian, Ling; Luo, Zhiguo; Du, Yujian; Guo, Leitao

    In order to support the maximum number of users and elastic services with minimum resources, Internet service providers invented cloud computing. Within a few years, the emerging cloud computing has become the hottest technology. From the publication of Google's core papers beginning in 2003, to the commercialization of Amazon EC2 in 2006, to the service offering of AT&T Synaptic Hosting, cloud computing has evolved from an internal IT system into a public service, from a cost-saving tool into a revenue generator, and from the ISP into the telecom domain. This paper introduces the concept, history, and pros and cons of cloud computing, as well as its value chain and standardization efforts.

  2. Cloud Computing Fundamentals

    NASA Astrophysics Data System (ADS)

    Furht, Borko

    In the introductory chapter we define the concept of cloud computing and cloud services, and we introduce layers and types of cloud computing. We discuss the differences between cloud computing and cloud services. New technologies that enabled cloud computing are presented next. We also discuss cloud computing features, standards, and security issues. We introduce the key cloud computing platforms, their vendors, and their offerings. We discuss cloud computing challenges and the future of cloud computing.

  3. Cloud Computing Value Chains: Understanding Businesses and Value Creation in the Cloud

    NASA Astrophysics Data System (ADS)

    Mohammed, Ashraf Bany; Altmann, Jörn; Hwang, Junseok

    Based on the promising developments in Cloud Computing technologies in recent years, commercial computing resource services (e.g. Amazon EC2) and software-as-a-service offerings (e.g. Salesforce.com) came into existence. However, the relatively weak business exploitation, participation, and adoption of other Cloud Computing services remain the main challenges. Vague value structures seem to be hindering business adoption and the creation of sustainable business models around the technology. Using an extensive analysis of existing Cloud business models, Cloud services, stakeholder relations, market configurations, and value structures, this chapter develops a reference model for value chains in the Cloud. Although this model is theoretically based on Porter's value chain theory, the proposed Cloud value chain model is extended to fit the diversity of business service scenarios in Cloud Computing markets. Using this model, different service scenarios are explained. Our findings suggest new services, business opportunities, and policy practices for realizing more adoption and value creation paths in the Cloud.

  4. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  5. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  6. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  7. 14 CFR 1214.801 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...

  8. 14 CFR § 1214.801 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... customer's pro rata share of Shuttle services and used to compute the Shuttle charge factor. Means of... compute the customer's pro rata share of each element's services and used to compute the element charge... element charge factor. Parameters used in computation of the customer's flight price. Means of computing...
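    The pro rata idea running through these definitions can be illustrated with a toy computation: a customer's charge factor is its share of total use of an element's services, and its flight price scales a base price by that factor. The function names and the linear pricing rule below are assumptions for illustration, not the regulation's actual formula:

```python
# Toy illustration of pro rata charge factors. Names and the linear
# pricing rule are illustrative assumptions, not the regulation's formula.

def charge_factor(customer_usage, total_usage):
    """Customer's pro rata share of an element's services."""
    return customer_usage / total_usage

def flight_price(base_price, usages, customer):
    """Price one customer's flight by its share of total usage."""
    total = sum(usages.values())
    return base_price * charge_factor(usages[customer], total)

# Three hypothetical customers sharing one element's services.
usages = {"A": 30.0, "B": 50.0, "C": 20.0}
price_a = flight_price(1000.0, usages, "A")
print(price_a)
```

    The structure mirrors the definitions: a per-element share (the charge factor) feeds into the parameters used in the computation of the customer's flight price.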

  9. Traditional vs. Innovative Uses of Computers among Mathematics Pre-Service Teachers in Serbia

    ERIC Educational Resources Information Center

    Teo, Timothy; Milutinovic, Verica; Zhou, Mingming; Bankovic, Dragic

    2017-01-01

    This study examined pre-service teachers' intentions to use computers in traditional and innovative teaching practices in primary mathematics classrooms. It extended the technology acceptance model (TAM) by adding as external variables pre-service teachers' experience with computers and their technological pedagogical content knowledge (TPCK).…

  10. Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support

    PubMed Central

    Camargo, João; Rochol, Juergen; Gerla, Mario

    2018-01-01

    A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of Cloud Computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be actuated by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support, and we evaluate such migration of video services. Finally, we present potential research challenges and trends. PMID:29364172

  11. Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support.

    PubMed

    Rosário, Denis; Schimuneck, Matias; Camargo, João; Nobre, Jéferson; Both, Cristiano; Rochol, Juergen; Gerla, Mario

    2018-01-24

    A wide range of multimedia services is expected to be offered to mobile users via various wireless access networks. Even the integration of Cloud Computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demand for multimedia content. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be actuated by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support, and we evaluate such migration of video services. Finally, we present potential research challenges and trends.

  12. The Education Value of Cloud Computing

    ERIC Educational Resources Information Center

    Katzan, Harry, Jr.

    2010-01-01

    Cloud computing is a technique for supplying computer facilities and providing access to software via the Internet. Cloud computing represents a contextual shift in how computers are provisioned and accessed. One of the defining characteristics of cloud software service is the transfer of control from the client domain to the service provider.…

  13. Cloud Computing. Technology Briefing. Number 1

    ERIC Educational Resources Information Center

    Alberta Education, 2013

    2013-01-01

    Cloud computing is Internet-based computing in which shared resources, software and information are delivered as a service that computers or mobile devices can access on demand. Cloud computing is already used extensively in education. Free or low-cost cloud-based services are used daily by learners and educators to support learning, social…

  14. 48 CFR 227.7206 - Contracts for architect-engineer services.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7206 Contracts for architect-engineer services. Follow 227.7107 when contracting for architect-engineer services. ...-engineer services. 227.7206 Section 227.7206 Federal Acquisition Regulations System DEFENSE ACQUISITION...

  15. 48 CFR 227.7206 - Contracts for architect-engineer services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7206 Contracts for architect-engineer services. Follow 227.7107 when contracting for architect-engineer services. ...-engineer services. 227.7206 Section 227.7206 Federal Acquisition Regulations System DEFENSE ACQUISITION...

  16. 48 CFR 227.7206 - Contracts for architect-engineer services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7206 Contracts for architect-engineer services. Follow 227.7107 when contracting for architect-engineer services. ...-engineer services. 227.7206 Section 227.7206 Federal Acquisition Regulations System DEFENSE ACQUISITION...

  17. 48 CFR 227.7206 - Contracts for architect-engineer services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Rights in Computer Software and Computer Software Documentation 227.7206 Contracts for architect-engineer services. Follow 227.7107 when contracting for architect-engineer services. ...-engineer services. 227.7206 Section 227.7206 Federal Acquisition Regulations System DEFENSE ACQUISITION...

  18. 48 CFR 227.7206 - Contracts for architect-engineer services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-engineer services. 227.7206 Section 227.7206 Federal Acquisition Regulations System DEFENSE ACQUISITION... Rights in Computer Software and Computer Software Documentation 227.7206 Contracts for architect-engineer services. Follow 227.7107 when contracting for architect-engineer services. ...

  19. Bootstrapping and Maintaining Trust in the Cloud

    DTIC Science & Technology

    2016-12-01

    proliferation and popularity of infrastructure-as-a- service (IaaS) cloud computing services such as Amazon Web Services and Google Compute Engine means...IaaS trusted computing system: • Secure Bootstrapping – the system should enable the tenant to securely install an initial root secret into each cloud ...elastically instantiated and terminated. Prior cloud trusted computing solutions address a subset of these features, but none achieve all. Excalibur [31] sup

  20. Operations analysis (study 2.6). Volume 4: Computer specification; logistics of orbiting vehicle servicing (LOVES)

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The logistics of orbital vehicle servicing computer specifications was developed and a number of alternatives to improve utilization of the space shuttle and the tug were investigated. Preliminary results indicate that space servicing offers a potential for reducing future operational and program costs over ground refurbishment of satellites. A computer code which could be developed to simulate space servicing is presented.

  1. Information Services at the University of Calgary.

    ERIC Educational Resources Information Center

    Norris, Douglas

    The University of Calgary was the first university in Canada to combine its library, computer center, and audiovisual services into one unit. For a period of three years the Division of Information Services administered and coordinated library services, computer services, and communications media. The organizational structure, objectives, and the…

  2. Negotiating for Computer Services. Proceedings of the 1977 Clinic on Library Applications of Data Processing.

    ERIC Educational Resources Information Center

    Divilbiss, J. L., Ed.

    To help the librarian in negotiating with vendors of automated library services, nine authors have presented methods of dealing with a specific service or situation. Paper topics include computer services, network contracts, innovative service, data processing, automated circulation, a turn-key system, data base sharing, online data base services,…

  3. Opportunities and challenges of cloud computing to improve health care services.

    PubMed

    Kuo, Alex Mu-Hsing

    2011-09-21

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed.

  4. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 39 Postal Service 1 2013-07-01 2013-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  5. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 39 Postal Service 1 2012-07-01 2012-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  6. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 39 Postal Service 1 2011-07-01 2011-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  7. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 39 Postal Service 1 2014-07-01 2014-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  8. 39 CFR Appendix A to Part 265 - Fees for Computer Searches

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Fees for Computer Searches A Appendix A to Part 265 Postal Service UNITED STATES POSTAL SERVICE ORGANIZATION AND ADMINISTRATION RELEASE OF INFORMATION Pt. 265, App. A Appendix A to Part 265—Fees for Computer Searches When requested information must be...

  9. 39 CFR 963.6 - Computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Computation of time. 963.6 Section 963.6 Postal Service UNITED STATES POSTAL SERVICE PROCEDURES RULES OF PRACTICE IN PROCEEDINGS RELATIVE TO VIOLATIONS OF THE PANDERING ADVERTISEMENTS STATUTE, 39 U.S.C. 3008 § 963.6 Computation of time. A designated period...

  10. 77 FR 35432 - Privacy Act of 1974, Computer Matching Program: United States Postal Service and the Defense...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-13

... the Defense Manpower Data Center, Department of Defense AGENCY: Postal Service™. ACTION: Notice of Computer Matching Program--United States Postal Service and the Defense Manpower Data Center, Department of... as the recipient agency in a computer matching program with the Defense Manpower Data Center (DMDC...

  11. The National Education Association's Educational Computer Service. An Assessment.

    ERIC Educational Resources Information Center

    Software Publishers Association, Washington, DC.

    The Educational Computer Service (ECS) of the National Education Association (NEA) evaluates and distributes educational software. An investigation of ECS was conducted by the Computer Education Committee of the Software Publishers Association (SPA) at the request of SPA members. The SPA found that the service, as it is presently structured, is…

  12. A study on strategic provisioning of cloud computing services.

    PubMed

    Whaiduzzaman, Md; Haque, Mohammad Nazmul; Rejaul Karim Chowdhury, Md; Gani, Abdullah

    2014-01-01

Cloud computing is currently emerging as an ever-changing, growing paradigm that models "everything-as-a-service." Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics, can be guaranteed. Hence, continuous service provisioning that satisfies user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified.

  13. A Study on Strategic Provisioning of Cloud Computing Services

    PubMed Central

    Rejaul Karim Chowdhury, Md

    2014-01-01

Cloud computing is currently emerging as an ever-changing, growing paradigm that models “everything-as-a-service.” Virtualised physical resources, infrastructure, and applications are supplied by service provisioning in the cloud. The evolution in the adoption of cloud computing is driven by clear and distinct promising features for both cloud users and cloud providers. However, the increasing number of cloud providers and the variety of service offerings have made it difficult for customers to choose the best services. By employing successful service provisioning, the essential services required by customers, such as agility and availability, pricing, security and trust, and user metrics, can be guaranteed. Hence, continuous service provisioning that satisfies user requirements is a mandatory feature for the cloud user and vitally important in cloud computing service offerings. Therefore, we aim to review the state-of-the-art service provisioning objectives, essential services, topologies, user requirements, necessary metrics, and pricing mechanisms. We synthesize and summarize different provisioning techniques, approaches, and models through a comprehensive literature review. A thematic taxonomy of cloud service provisioning is presented after the systematic review. Finally, future research directions and open research issues are identified. PMID:25032243

  14. 42 CFR 441.182 - Maintenance of effort: Computation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES Inpatient Psychiatric Services for Individuals Under Age 21 in Psychiatric Facilities or Programs § 441.182 Maintenance of effort: Computation. (a) For expenditures for inpatient psychiatric services... total State Medicaid expenditures in the current quarter for inpatient psychiatric services and...

  15. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    NASA Technical Reports Server (NTRS)

    Cui, Zhenqian

    1999-01-01

    With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of received data, but also the time that data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.
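The abstract above defines quality of service as "a set of parameters" reflecting a user's expectation of network behavior. A minimal sketch of that idea, with illustrative field names that are assumptions rather than terms from the thesis: a request states the user's expectations, and a network offer satisfies it when every parameter is at least as good.

```python
# Sketch: QoS as a parameter set. An offer satisfies a request when its
# bandwidth is at least the requested minimum and its delay, jitter, and
# loss are no worse than the requested maxima. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class QoS:
    bandwidth_mbps: float   # minimum acceptable (request) / guaranteed (offer)
    delay_ms: float         # maximum tolerable / guaranteed delay
    jitter_ms: float        # maximum tolerable / guaranteed jitter
    loss_rate: float        # fraction of packets that may be lost

def satisfies(offer: QoS, request: QoS) -> bool:
    """True when the offer meets or exceeds every requested parameter."""
    return (offer.bandwidth_mbps >= request.bandwidth_mbps
            and offer.delay_ms <= request.delay_ms
            and offer.jitter_ms <= request.jitter_ms
            and offer.loss_rate <= request.loss_rate)

# A delay-sensitive voice flow versus what a fast LAN might guarantee:
voice = QoS(bandwidth_mbps=0.1, delay_ms=150, jitter_ms=30, loss_rate=0.01)
lan_offer = QoS(bandwidth_mbps=10, delay_ms=5, jitter_ms=1, loss_rate=0.001)
print(satisfies(lan_offer, voice))   # True
```

An integrated-services network, in these terms, is one that can evaluate such parameter sets per flow class (voice, video, data) rather than offering a single best-effort service.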

  16. Cloud Computing

    DTIC Science & Technology

    2009-11-12

Service (IaaS) Software-as-a-Service (SaaS) Cloud Computing Types Platform-as-a-Service (PaaS) Based on Type of Capability Based on access Based...Mellon University Software-as-a-Service (SaaS) Application-specific capabilities, e.g., service that provides customer management Allows organizations...as a Service (SaaS) Model of software deployment in which a provider licenses an application to customers for use as a service on

  17. Cloud Based Educational Systems and Its Challenges and Opportunities and Issues

    ERIC Educational Resources Information Center

    Paul, Prantosh Kr.; Lata Dangwal, Kiran

    2014-01-01

Cloud Computing (CC) is actually a set of hardware, software, networks, storage, services, and interfaces combined to deliver aspects of computing as a service. Cloud Computing (CC) actually uses central remote servers to maintain data and applications. Practically, Cloud Computing (CC) is an extension of Grid computing with independency and…

  18. A Novel Market-Oriented Dynamic Collaborative Cloud Service Platform

    NASA Astrophysics Data System (ADS)

    Hassan, Mohammad Mehedi; Huh, Eui-Nam

In today's world the emerging Cloud computing (Weiss, 2007) offers a new computing model where resources such as computing power, storage, online applications and networking infrastructures can be shared as "services" over the Internet. Cloud providers (CPs) are incentivized by the profits to be made by charging consumers for accessing these services. Consumers, such as enterprises, are attracted by the opportunity to reduce or eliminate costs associated with "in-house" provision of these services.

  19. School Data Processing Services in Texas: A Cooperative Approach [Revised].

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin. Management Information Center.

    The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…

  20. School Data Processing Services in Texas: A Cooperative Approach.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin.

    The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…

  1. School Data Processing Services in Texas: A Cooperative Approach.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin.

The Texas plan for computer services provides services to public school districts through a statewide network of 20 regional Education Service Centers (ESC). Each of the three Multi-Regional Processing Centers (MRPCs) operates a large computer facility providing school district services within from three to eight ESC regions; each of the five…

  2. A Framework for Safe Composition of Heterogeneous SOA Services in a Pervasive Computing Environment with Resource Constraints

    ERIC Educational Resources Information Center

    Reyes Alamo, Jose M.

    2010-01-01

    The Service Oriented Computing (SOC) paradigm, defines services as software artifacts whose implementations are separated from their specifications. Application developers rely on services to simplify the design, reduce the development time and cost. Within the SOC paradigm, different Service Oriented Architectures (SOAs) have been developed.…

  3. A service-oriented data access control model

    NASA Astrophysics Data System (ADS)

    Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali

    2017-01-01

The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. For complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. By analyzing common data access control models, and on the basis of the mandatory access control model, the paper proposes a service-oriented access control model. By regarding system services as the subject and database data as the object, the model defines access levels and access identifications for subject and object, and ensures that system services access databases securely.
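The model described above (services as subjects, database data as objects, each side carrying an access level and an access identification) can be sketched as follows. This is an illustrative reading of the abstract, not the paper's implementation; the dominance rule and the tag names are assumptions.

```python
# Hypothetical sketch of a service-oriented mandatory access control check:
# a service (subject) may read a data object only if its access level
# dominates the object's level AND it holds the object's access identification.
from dataclasses import dataclass

@dataclass(frozen=True)
class Service:              # subject: a system service
    name: str
    level: int              # clearance level
    tags: frozenset         # access identifications the service holds

@dataclass(frozen=True)
class DataObject:           # object: data in a database
    name: str
    level: int              # classification level
    tag: str                # access identification required to touch it

def can_read(svc: Service, obj: DataObject) -> bool:
    """Mandatory rule: level dominance plus a matching identification."""
    return svc.level >= obj.level and obj.tag in svc.tags

billing = Service("billing-svc", level=2, tags=frozenset({"finance"}))
ledger = DataObject("ledger", level=1, tag="finance")
audit_log = DataObject("audit", level=3, tag="finance")

print(can_read(billing, ledger))     # True: level dominates, tag matches
print(can_read(billing, audit_log))  # False: object level too high
```

Because the decision is a pure function of (subject, object) attributes, it can be evaluated per request, which is what makes real-time, fine-grained control plausible in this style of model.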

  4. 75 FR 7582 - Access by EPA Contractors to Information Claimed as Confidential Business Information (CBI...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-22

    ... subcontractor, Computer Sciences Corporation (CSC), of the same address, provide IT support services related to... Vista Computer Services, under Contract Number 68-W3-0032. From October 1, 1998 until March 31, 2004, the contractor was Vista Computer Services, under Contract Number 68-W-98-230. From April 1, 2004...

  5. Greek Pre-Service Teachers' Intentions to Use Computers as In-Service Teachers

    ERIC Educational Resources Information Center

    Fokides, Emmanuel

    2017-01-01

    The study examines the factors affecting Greek pre-service teachers' intention to use computers when they become practicing teachers. Four variables (perceived usefulness, perceived ease of use, self-efficacy, and attitude toward use) as well as behavioral intention to use computers were used so as to build a research model that extended the…

  6. Pre-Service English Language Teachers' Perceptions of Computer Self-Efficacy and General Self-Efficacy

    ERIC Educational Resources Information Center

    Topkaya, Ece Zehir

    2010-01-01

    The primary aim of this study is to investigate pre-service English language teachers' perceptions of computer self-efficacy in relation to different variables. Secondarily, the study also explores the relationship between pre-service English language teachers' perceptions of computer self-efficacy and their perceptions of general self-efficacy.…

  7. 5 CFR 831.703 - Computation of annuities for part-time service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Computation of annuities for part-time... part-time service. (a) Purpose. The computational method in this section shall be used to determine the annuity for an employee who has part-time service on or after April 7, 1986. (b) Definitions. In this...

  8. 75 FR 28296 - Denso Manufacturing of Michigan Including On-Site Leased Workers From Adecco Employment Services...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-20

    ...., Anchor Staffing, Capitol Software Systems, Donohue Computer Services, Historic Northside Family Practice, Scripture and Associates, Summit Software Services DD, Tacworldwide Companies, Talent Trax, Tek Systems...., Anchor Staffing, Capitol Software Systems, Donohue Computer Services, Historic Northside Family Practice...

  9. Cloud Computing with iPlant Atmosphere.

    PubMed

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-10-15

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.

  10. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. As the paper argues, the new categories of services being introduced will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper tries to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  11. Opportunities and Challenges of Cloud Computing to Improve Health Care Services

    PubMed Central

    2011-01-01

    Cloud computing is a new way of delivering computing resources and services. Many managers and experts believe that it can improve health care services, benefit health care research, and change the face of health information technology. However, as with any innovation, cloud computing should be rigorously evaluated before its widespread adoption. This paper discusses the concept and its current place in health care, and uses 4 aspects (management, technology, security, and legal) to evaluate the opportunities and challenges of this computing model. Strategic planning that could be used by a health organization to determine its direction, strategy, and resource allocation when it has decided to migrate from traditional to cloud-based health services is also discussed. PMID:21937354

  12. Self-Organized Service Negotiation for Collaborative Decision Making

    PubMed Central

    Zhang, Bo; Zheng, Ziming

    2014-01-01

This paper proposes a self-organized service negotiation method for CDM in an intelligent and automatic manner. It comprises three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation for the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for the DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. In the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM. PMID:25243228
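The second phase above produces three trust values per provider: a subjective belief value, an objective reputation value, and a recommended trust value. One plausible way to fold them into a single degree for ranking providers is a weighted aggregation; the weights below are assumptions for illustration, not taken from the paper.

```python
# Illustrative only: combine the three trust values named in the abstract
# (subjective belief, objective reputation, recommended trust) into one
# overall degree in [0, 1], then rank candidate DMSPs by it.
def overall_trust(belief, reputation, recommended, w=(0.5, 0.3, 0.2)):
    """Weighted aggregation; weights must sum to 1, inputs lie in [0, 1]."""
    assert abs(sum(w) - 1.0) < 1e-9
    return w[0] * belief + w[1] * reputation + w[2] * recommended

providers = {
    "dmsp-a": overall_trust(0.9, 0.7, 0.6),   # strong direct experience
    "dmsp-b": overall_trust(0.6, 0.9, 0.9),   # strong reputation/referrals
}
best = max(providers, key=providers.get)
print(best, round(providers[best], 2))        # dmsp-a 0.78
```

The weighting choice encodes how much the sponsor trusts its own experience over third-party reports; a sponsor with little interaction history would plausibly shift weight toward reputation and recommendation.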

  13. Self-organized service negotiation for collaborative decision making.

    PubMed

    Zhang, Bo; Huang, Zhenhua; Zheng, Ziming

    2014-01-01

This paper proposes a self-organized service negotiation method for CDM in an intelligent and automatic manner. It comprises three phases: semantic-based capacity evaluation for the CDM sponsor, trust computation for the CDM organization, and negotiation selection of the decision-making service provider (DMSP). In the first phase, the CDM sponsor produces the formal semantic description of the complex decision task for the DMSP and computes the capacity evaluation values according to participator instructions from different DMSPs. In the second phase, a novel trust computation approach is presented to compute the subjective belief value, the objective reputation value, and the recommended trust value. In the third phase, based on the capacity evaluation and trust computation, a negotiation mechanism is given to efficiently implement the service selection. The simulation experiment results show that our self-organized service negotiation method is feasible and effective for CDM.

  14. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    USDA-ARS?s Scientific Manuscript database

    Service oriented architectures allow modelling engines to be hosted over the Internet abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on user's personal computers (PCs). Migration ...

  15. Scheduling multimedia services in cloud computing environment

    NASA Astrophysics Data System (ADS)

    Liu, Yunchang; Li, Chunlin; Luo, Youlong; Shao, Yanling; Zhang, Jing

    2018-02-01

Currently, security is a critical factor for multimedia services running in the cloud computing environment. As an effective mechanism, trust can improve security levels and mitigate attacks within cloud computing environments. Unfortunately, existing scheduling strategies for multimedia services in the cloud computing environment do not integrate a trust mechanism when making scheduling decisions. In this paper, we propose a scheduling scheme for multimedia services in multiple clouds. First, a novel scheduling architecture is presented. Then, we build a trust model including both subjective trust and objective trust to evaluate the trust degree of multimedia service providers. By employing Bayesian theory, the subjective trust degree between multimedia service providers and users is obtained. According to the attributes of QoS, the objective trust degree of multimedia service providers is calculated. Finally, a scheduling algorithm integrating the trust of entities is proposed by considering the deadline, cost, and trust requirements of multimedia services. The scheduling algorithm heuristically searches for reasonable resource allocations that satisfy the trust requirements and meet the deadlines of the multimedia services. Detailed simulation experiments demonstrate the effectiveness and feasibility of the proposed trust scheduling scheme.
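The abstract describes subjective trust obtained via Bayesian theory and objective trust derived from QoS attributes, with scheduling constrained by deadline, cost, and trust. A minimal sketch of that combination, assuming a Beta-distribution expectation for the Bayesian part and equal-weight QoS averaging (both assumptions, not the paper's exact formulas):

```python
# Sketch (not the paper's algorithm): trust-aware provider selection.
# Subjective trust uses the Beta-distribution expectation over past good/bad
# interactions; objective trust averages normalised QoS scores; candidates
# are filtered on deadline, cost, and a minimum trust threshold.
def subjective_trust(positive, negative):
    # Beta expectation: (s + 1) / (s + f + 2); 0.5 when no history exists.
    return (positive + 1) / (positive + negative + 2)

def objective_trust(qos):
    # qos values assumed already normalised into [0, 1]
    return sum(qos.values()) / len(qos)

def pick_provider(providers, deadline, budget, min_trust, alpha=0.5):
    feasible = []
    for p in providers:
        trust = (alpha * subjective_trust(p["pos"], p["neg"])
                 + (1 - alpha) * objective_trust(p["qos"]))
        if p["time"] <= deadline and p["cost"] <= budget and trust >= min_trust:
            feasible.append((trust, p["name"]))
    return max(feasible)[1] if feasible else None

providers = [
    {"name": "cloud-x", "pos": 8, "neg": 2, "time": 40, "cost": 5,
     "qos": {"availability": 0.99, "throughput": 0.8}},
    {"name": "cloud-y", "pos": 1, "neg": 5, "time": 30, "cost": 3,
     "qos": {"availability": 0.90, "throughput": 0.6}},
]
print(pick_provider(providers, deadline=50, budget=6, min_trust=0.6))  # cloud-x
```

Here cloud-y is cheaper and faster but falls below the trust threshold (subjective trust 2/8 = 0.25), which is exactly the behavior a trust-integrated scheduler adds over a pure deadline/cost heuristic.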

  16. Problematics of different technical maintenance for computers

    NASA Technical Reports Server (NTRS)

    Dostalek, Z.

    1977-01-01

Two modes of operation are used in the technical maintenance of computers: servicing provided by the equipment supplier, and servicing done by specially trained computer users. The advantages and disadvantages of both modes are discussed. Maintenance downtime is tabulated for two computers serviced by user employees over an eight-year period.

  17. 75 FR 28252 - Notice of a Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-20

    ... GENERAL SERVICES ADMINISTRATION Notice of a Computer Matching Program AGENCY: General Services... providing notice of a proposed computer match. The purpose of this match is to identify individuals who are... providing notice of a proposed computer match. The purpose of this match is to identify individuals who are...

  18. 22 CFR 19.4 - Special rules for computing creditable service for purposes of payments to former spouses.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Special rules for computing creditable service for purposes of payments to former spouses. 19.4 Section 19.4 Foreign Relations DEPARTMENT OF STATE... DISABILITY SYSTEM § 19.4 Special rules for computing creditable service for purposes of payments to former...

  19. Resetting Educational Technology Coursework for Pre-Service Teachers: A Computational Thinking Approach to the Development of Technological Pedagogical Content Knowledge (TPACK)

    ERIC Educational Resources Information Center

    Mouza, Chrystalla; Yang, Hui; Pan, Yi-Cheng; Ozden, Sule Yilmaz; Pollock, Lori

    2017-01-01

    This study presents the design of an educational technology course for pre-service teachers specific to incorporating computational thinking in K-8 classroom settings. Subsequently, it examines how participation in the course influences pre-service teachers' dispositions and knowledge of computational thinking concepts and the ways in which such…

  20. Diagnosing Pre-Service Science Teachers' Understanding of Chemistry Concepts by Using Computer-Mediated Predict-Observe-Explain Tasks

    ERIC Educational Resources Information Center

Sesen, Burcin Acar

    2013-01-01

    The purpose of this study was to investigate pre-service science teachers' understanding of surface tension, cohesion and adhesion forces by using computer-mediated predict-observe-explain tasks. 22 third-year pre-service science teachers participated in this study. Three computer-mediated predict-observe-explain tasks were developed and applied…

  1. Measuring the Effect of Gender on Computer Attitudes among Pre-Service Teachers: A Multiple Indicators, Multiple Causes (MIMIC) Modeling

    ERIC Educational Resources Information Center

    Teo, Timothy

    2010-01-01

    Purpose: The purpose of this paper is to examine the effect of gender on pre-service teachers' computer attitudes. Design/methodology/approach: A total of 157 pre-service teachers completed a survey questionnaire measuring their responses to four constructs which explain computer attitude. These were administered during the teaching term where…

  2. Establishing a Cloud Computing Success Model for Hospitals in Taiwan.

    PubMed

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services.

  3. Establishing a Cloud Computing Success Model for Hospitals in Taiwan

    PubMed Central

    Lian, Jiunn-Woei

    2017-01-01

    The purpose of this study is to understand the critical quality-related factors that affect cloud computing success of hospitals in Taiwan. In this study, private cloud computing is the major research target. The chief information officers participated in a questionnaire survey. The results indicate that the integration of trust into the information systems success model will have acceptable explanatory power to understand cloud computing success in the hospital. Moreover, information quality and system quality directly affect cloud computing satisfaction, whereas service quality indirectly affects the satisfaction through trust. In other words, trust serves as the mediator between service quality and satisfaction. This cloud computing success model will help hospitals evaluate or achieve success after adopting private cloud computing health care services. PMID:28112020

  4. NEXUS - Resilient Intelligent Middleware

    NASA Astrophysics Data System (ADS)

    Kaveh, N.; Hercock, R. Ghanea

Service-oriented computing, a composition of distributed-object computing, component-based, and Web-based concepts, is becoming the widespread choice for developing dynamic heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment, whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services, via wrapper interfaces, and allows for service composition at a more abstract level especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components make them a suitable computing model in the pervasive domain.

  5. Cloud computing basics for librarians.

    PubMed

    Hoy, Matthew B

    2012-01-01

    "Cloud computing" is the name for the recent trend of moving software and computing resources to an online, shared-service model. This article briefly defines cloud computing, discusses different models, explores the advantages and disadvantages, and describes some of the ways cloud computing can be used in libraries. Examples of cloud services are included at the end of the article. Copyright © Taylor & Francis Group, LLC

  6. 46 CFR 9.8 - Broken periods.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY PROCEDURES APPLICABLE TO THE PUBLIC EXTRA COMPENSATION FOR OVERTIME SERVICES § 9.8 Broken periods. In computing extra compensation where the services rendered are in broken... with the waiting time and computed as continuous service. ...

  7. Effects of distance from models on the fitness of floral mimics.

    PubMed

    Duffy, K J; Johnson, S D

    2017-05-01

    Rewardless plants can attract pollinators by mimicking floral traits of rewarding heterospecific plants. This should result in the pollination success of floral mimics being dependent on the relative abundance of their models, as pollinator abundance and conditioning on model signals should be higher in the vicinity of the models. However, the attraction of pollinators to signals of the models may be partially innate, such that spatial isolation of mimics from model species may not strongly affect pollination success of mimics. We tested whether pollination rates and fruit set of the rewardless orchid Disa pulchra were influenced by proximity and abundance of its rewarding model species, Watsonia lepida. Pollination success of the orchid increased with proximity to the model species, while fruit set of the orchid increased with local abundance of the model species. Orchids that were experimentally translocated outside the model population experienced reduced pollinaria removal and increased pollinator-mediated self-pollination. These results confirm predictions that the pollination success of floral mimics should be dependent on the proximity and abundance of model taxa, and thus highlight the importance of ecological facilitation among species involved in mimicry systems. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.

  8. Smart learning services based on smart cloud computing.

    PubMed

    Kim, Svetlana; Song, Su-Mi; Yoon, Yong-Ik

    2011-01-01

Context-aware technologies can make e-learning services smarter and more efficient since context-aware services are based on the user's behavior. To add those technologies into existing e-learning services, a service architecture model is needed to transform the existing e-learning environment, which is situation-aware, into an environment that understands context as well. The context-awareness in e-learning may include awareness of the user profile and terminal context. In this paper, we propose a new notion of service that provides context-awareness to smart learning content in a cloud computing environment. We suggest the elastic four smarts (E4S)--smart pull, smart prospect, smart content, and smart push--concept for cloud services so that smart learning services are possible. The E4S focuses on meeting users' needs by collecting and analyzing users' behavior, prospecting future services, building corresponding contents, and delivering the contents through the cloud computing environment. Users' behavior can be collected through mobile devices such as smart phones that have built-in sensors. As a result, the proposed smart e-learning model in a cloud computing environment provides personalized and customized learning services to its users.

  9. Smart Learning Services Based on Smart Cloud Computing

    PubMed Central

    Kim, Svetlana; Song, Su-Mi; Yoon, Yong-Ik

    2011-01-01

Context-aware technologies can make e-learning services smarter and more efficient since context-aware services are based on the user’s behavior. To add those technologies into existing e-learning services, a service architecture model is needed to transform the existing e-learning environment, which is situation-aware, into an environment that understands context as well. The context-awareness in e-learning may include awareness of the user profile and terminal context. In this paper, we propose a new notion of service that provides context-awareness to smart learning content in a cloud computing environment. We suggest the elastic four smarts (E4S)—smart pull, smart prospect, smart content, and smart push—concept for cloud services so that smart learning services are possible. The E4S focuses on meeting users’ needs by collecting and analyzing users’ behavior, prospecting future services, building corresponding contents, and delivering the contents through the cloud computing environment. Users’ behavior can be collected through mobile devices such as smart phones that have built-in sensors. As a result, the proposed smart e-learning model in a cloud computing environment provides personalized and customized learning services to its users. PMID:22164048

  10. Bioinformatics clouds for big data manipulation.

    PubMed

    Dai, Lin; Gao, Xin; Guo, Yan; Xiao, Jingfa; Zhang, Zhang

    2012-11-28

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.

  11. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    NASA Astrophysics Data System (ADS)

    Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.

    2011-12-01

Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand" as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.
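The abstract's conclusion — own the baseline, burst to the cloud for spikes — is ultimately an arithmetic comparison of per-CPU-hour rates. The sketch below makes that comparison explicit; the rates and hour counts are made-up illustrative values, not the paper's measured costs.

```python
def hybrid_cost(baseline_hours, burst_hours, dedicated_rate, cloud_rate):
    """Cost of serving a steady baseline on owned nodes (amortized
    rate per CPU-hour) while bursting demand spikes to the cloud."""
    return baseline_hours * dedicated_rate + burst_hours * cloud_rate

def cloud_only_cost(total_hours, cloud_rate):
    """Cost of serving the entire load on rented cloud nodes."""
    return total_hours * cloud_rate

# illustrative (assumed) numbers: dedicated hardware amortizes cheaper
baseline, burst = 900_000, 100_000       # CPU-hours per year
dedicated_rate, cloud_rate = 0.03, 0.10  # $/CPU-hour, hypothetical

hybrid = hybrid_cost(baseline, burst, dedicated_rate, cloud_rate)
cloud = cloud_only_cost(baseline + burst, cloud_rate)
```

Under these assumed rates the hybrid strategy costs $37,000/year versus $100,000/year cloud-only, mirroring the paper's conclusion that dedicated resources win for baseline load while the cloud wins for transient spikes.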

  12. Monitoring techniques and alarm procedures for CMS services and sites in WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.

    2012-01-01

The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, the latter measures the quality of service and warns managers when service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel is contributing to detect and react timely on any unexpected error and hence ensure that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.

  13. Distributed geospatial model sharing based on open interoperability standards

    USGS Publications Warehouse

    Feng, Min; Liu, Shuguang; Euliss, Ned H.; Fang, Yin

    2009-01-01

Numerous geospatial computational models have been developed based on sound principles and published in journals or presented in conferences. However modelers have made few advances in the development of computable modules that facilitate sharing during model development or utilization. Constraints hampering development of model sharing technology include limitations on computing, storage, and connectivity; traditional stand-alone and closed network systems cannot fully support sharing and integrating geospatial models. To address this need, we have identified methods for sharing geospatial computational models using Service Oriented Architecture (SOA) techniques and open geospatial standards. The service-oriented model sharing service is accessible using any tools or systems compliant with open geospatial standards, making it possible to utilize vast scientific resources available from around the world to solve highly sophisticated application problems. The methods also allow model services to be empowered by diverse computational devices and technologies, such as portable devices and GRID computing infrastructures. Based on the generic and abstract operations and data structures required for Web Processing Service (WPS) standards, we developed an interactive interface for model sharing to help reduce interoperability problems for model use. Geospatial computational models are shared on model services, where the computational processes provided by models can be accessed through tools and systems compliant with WPS. We developed a platform to help modelers publish individual models in a simplified and efficient way. Finally, we illustrate our technique using wetland hydrological models we developed for the prairie pothole region of North America.
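A WPS-compliant client invokes a shared model by posting an Execute request naming the process and its inputs. The sketch below builds a minimal WPS 1.0.0 Execute body with the standard library; the process identifier `WetlandHydrology` and its input name are hypothetical placeholders, not the paper's published model interface.

```python
import xml.etree.ElementTree as ET

WPS = "http://www.opengis.net/wps/1.0.0"
OWS = "http://www.opengis.net/ows/1.1"
ET.register_namespace("wps", WPS)
ET.register_namespace("ows", OWS)

def execute_request(process_id, inputs):
    """Build a minimal WPS 1.0.0 Execute request body with literal inputs."""
    root = ET.Element(f"{{{WPS}}}Execute",
                      {"service": "WPS", "version": "1.0.0"})
    ET.SubElement(root, f"{{{OWS}}}Identifier").text = process_id
    data_inputs = ET.SubElement(root, f"{{{WPS}}}DataInputs")
    for name, value in inputs.items():
        inp = ET.SubElement(data_inputs, f"{{{WPS}}}Input")
        ET.SubElement(inp, f"{{{OWS}}}Identifier").text = name
        data = ET.SubElement(inp, f"{{{WPS}}}Data")
        ET.SubElement(data, f"{{{WPS}}}LiteralData").text = str(value)
    return ET.tostring(root, encoding="unicode")

# hypothetical process and input names, for illustration only
request = execute_request("WetlandHydrology", {"precipitation_mm": 42.5})
```

The resulting XML would be POSTed to the model service's WPS endpoint; any WPS-compliant tool can issue the same request, which is what makes the shared models tool-agnostic.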

  14. Bootstrapping and Maintaining Trust in the Cloud

    DTIC Science & Technology

    2016-12-01

    simultaneous cloud nodes. 1. INTRODUCTION The proliferation and popularity of infrastructure-as-a- service (IaaS) cloud computing services such as...Amazon Web Services and Google Compute Engine means more cloud tenants are hosting sensitive, private, and business critical data and applications in the...thousands of IaaS resources as they are elastically instantiated and terminated. Prior cloud trusted computing solutions address a subset of these features

  15. Meeting the Educational and Therapeutic Needs of Short Term Hospitalized Children with the Aid of the Computer.

    ERIC Educational Resources Information Center

    Ciminero, Sandra Elser

    The acute care pediatric/adolescent unit at Saint Joseph Hospital in Chicago, Illinois offers patients computer services consisting of recreation, general education, newspaper, and patient education components. To gather information concerning patients' experience with computers and to assess the effectiveness of computer services, data were…

  16. 25 CFR 20.313 - How will the Bureau compute financial assistance payments?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false How will the Bureau compute financial assistance payments? 20.313 Section 20.313 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR HUMAN SERVICES... will the Bureau compute financial assistance payments? (a) The social services worker will compute...

  17. An Introductory Course on Service-Oriented Computing for High Schools

    ERIC Educational Resources Information Center

    Tsai, W. T.; Chen, Yinong; Cheng, Calvin; Sun, Xin; Bitter, Gary; White, Mary

    2008-01-01

    Service-Oriented Computing (SOC) is a new computing paradigm that has been adopted by major computer companies as well as government agencies such as the Department of Defense for mission-critical applications. SOC is being used for developing Web and electronic business applications, as well as robotics, gaming, and scientific applications. Yet,…

  18. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).

  19. Using computers for planning and evaluating nursing in the health care services.

    PubMed

    Emuziene, Vilma

    2009-01-01

This paper describes how nurses' attitudes toward, use of, and motivation for computers are significantly influenced by their area of nursing/health care service. Today most nurses still document patient information in a medical record using pen and paper. Most nursing administrators not currently involved with computer applications in their settings are interested in exploring whether technology could help them with the day-to-day and long-range tasks of planning and evaluating nursing services. The results of this investigation showed that the responding nurses, as specialists in nursing informatics, performed their work well: they had a "positive" attitude toward computers and "good" or "average" computer skills. The nurses' overall computer attitude was influenced by age, sex, and professional qualification. Younger nurses acquire informatics skills while in nursing school and are more accepting of computer advancements. Knowledge about computers differs significantly between nurses who have no computer training and those who have training and use a computer weekly or daily. In the health care services, computers and automated data systems are often used for statistical information (visit and patient information) and billing information. In the nursing field, automated data systems are often used for statistical information, billing information, vaccination information, patient assessment, and patient classification.

  20. Heterogeneous concurrent computing with exportable services

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy

    1995-01-01

Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
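The exportable-services idea — register named services on a host and invoke them on lightweight threads rather than heavyweight processes — can be sketched in a few lines. The `ServiceHost` class below is a toy illustration of the paradigm, loosely inspired by the abstract's description; it is not TPVM's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

class ServiceHost:
    """Toy sketch of a host that exports named services and runs
    invocations on lightweight threads (illustrative only)."""
    def __init__(self, workers=4):
        self._services = {}
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def export(self, name, fn):
        # make a callable available as a named service
        self._services[name] = fn

    def invoke(self, name, *args):
        # return a future so callers can overlap computation
        # (the data-driven style the paradigm encourages)
        return self._pool.submit(self._services[name], *args)

host = ServiceHost()
host.export("square", lambda x: x * x)
futures = [host.invoke("square", n) for n in range(5)]
results = [f.result() for f in futures]
```

Because `invoke` returns a future, a client can issue many service calls and collect results as they complete, which is the key difference from a blocking, process-oriented remote call.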

  1. 15 CFR 280.206 - Filing and service of papers other than charging letter.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Commerce and Foreign Trade NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE... delivery service, or by facsimile. (d) Certificate of service. A certificate of service signed by the party... charging letter, filed and served on parties. (e) Computing period of time. In computing any period of time...

  2. 15 CFR 280.206 - Filing and service of papers other than charging letter.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Commerce and Foreign Trade NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY, DEPARTMENT OF COMMERCE... delivery service, or by facsimile. (d) Certificate of service. A certificate of service signed by the party... charging letter, filed and served on parties. (e) Computing period of time. In computing any period of time...

  3. Computer Concepts for VTAE Food Service. Final Report.

    ERIC Educational Resources Information Center

    Wisconsin Univ. - Stout, Menomonie. Center for Vocational, Technical and Adult Education.

    A project was conducted to determine the computer application competencies needed by a graduate of Wisconsin Vocational Technical Adult Education (VTAE) food service programs. Surveys were conducted of food service graduates and their employers as well as of major companies by the food service coordinators of the VTAE districts in the state; a…

  4. Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing.

    PubMed

    Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio

    2017-03-06

    In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user's home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered.

  5. Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing

    PubMed Central

    Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio

    2017-01-01

    In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user’s home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered. PMID:28272305

  6. Identity-Based Authentication for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Li, Hongwei; Dai, Yuanshun; Tian, Ling; Yang, Haomiao

Cloud computing is a recently developed new technology for complex systems with massive-scale services sharing among numerous users. Therefore, authentication of both users and services is a significant issue for the trust and security of the cloud computing. The SSL Authentication Protocol (SAP), once applied in cloud computing, becomes so complicated that it imposes a heavy computation and communication load on users. This paper, based on the identity-based hierarchical model for cloud computing (IBHMCC) and its corresponding encryption and signature schemes, presented a new identity-based authentication protocol for cloud computing and services. Through simulation testing, it is shown that the authentication protocol is more lightweight and efficient than SAP, especially on the user side. This merit, together with the model's great scalability, makes it well suited to the massive-scale cloud.
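The hierarchical part of an identity-based scheme means a root authority can delegate key material to domains, which in turn issue keys to users, and any key can be recomputed from identity strings alone by a party holding the parent secret. The sketch below illustrates only that *shape* using a plain HMAC key hierarchy; it is emphatically not the pairing-based IBHMCC scheme from the paper, and the identity strings are invented examples.

```python
import hashlib
import hmac

def derive(parent_key: bytes, identity: str) -> bytes:
    """Derive a child key from a parent key and an identity string.
    A plain HMAC hierarchy, used only to illustrate hierarchical
    identity-based key derivation -- NOT the paper's IBHMCC scheme."""
    return hmac.new(parent_key, identity.encode(), hashlib.sha256).digest()

master = b"root-authority-secret"             # held by the top-level authority
domain_key = derive(master, "cloud.example")  # delegated to a domain authority
user_key = derive(domain_key, "alice")        # issued to an individual user

# the root can recompute alice's key from the identity path alone,
# without any certificate exchange
recomputed = derive(derive(master, "cloud.example"), "alice")
```

The absence of certificates is what makes identity-based authentication lighter on the user side: the verifier needs only the public identity string and the system parameters, not a certificate chain.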

  7. 78 FR 25482 - Notice of Revised Determination on Reconsideration

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-01

    ...-PROGRESSIVE SOFTWARE COMPUTING, QUALITY TESTING SERVICES, INC., RAILROAD CONSTRUCTION CO. OF SOUTH JERSEY, INC..., LP, PSCI- Progressive Software Computing, Quality Testing Services, Inc., Railroad Construction Co..., ANDERSON CONSTRUCTION SERVICES, BAKER PETROLITE, BAKERCORP, BELL-FAST FIRE PROTECTION INC., BOLTTECH INC...

  8. Traffic information computing platform for big data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Zongtao, E-mail: ztduan@chd.edu.cn; Li, Ying, E-mail: ztduan@chd.edu.cn; Zheng, Xibin, E-mail: ztduan@chd.edu.cn

The big data environment creates the data conditions for improving the quality of traffic information services. The goal of this article is to construct a traffic information computing platform for the big data environment. Through an in-depth analysis of the connotations and technical characteristics of big data and traffic information services, a distributed traffic atomic information computing platform architecture is proposed. Under the big data environment, this type of traffic atomic information computing architecture helps to guarantee traffic safety and efficient operation, and enables more intelligent and personalized traffic information services for traffic information users.

  9. Global Software Development with Cloud Platforms

    NASA Astrophysics Data System (ADS)

    Yara, Pavan; Ramachandran, Ramaseshan; Balasubramanian, Gayathri; Muthuswamy, Karthik; Chandrasekar, Divya

Offshore and outsourced distributed software development models and processes are facing challenges, previously unknown, with respect to computing capacity, bandwidth, storage, security, complexity, reliability, and business uncertainty. Clouds promise to address these challenges by adopting recent advances in virtualization, parallel and distributed systems, utility computing, and software services. In this paper, we envision a cloud-based platform that addresses some of these core problems. We outline a generic cloud architecture, its design and our first implementation results for three cloud forms - a compute cloud, a storage cloud and a cloud-based software service - in the context of global distributed software development (GSD). Our "compute cloud" provides computational services such as continuous code integration and a compile server farm, "storage cloud" offers storage (block or file-based) services with an on-line virtual storage service, whereas the on-line virtual labs represent a useful cloud service. We note some of the use cases for clouds in GSD, the lessons learned with our prototypes and identify challenges that must be conquered before realizing the full business benefits. We believe that in the future, software practitioners will focus more on these cloud computing platforms and see clouds as a means to supporting an ecosystem of clients, developers and other key stakeholders.

  10. Bioinformatics clouds for big data manipulation

    PubMed Central

    2012-01-01

As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. PMID:23190475

  11. Understanding Pre-Service Teachers' Computer Attitudes: Applying and Extending the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Teo, T.; Lee, C. B.; Chai, C. S.

    2008-01-01

    Computers are increasingly widespread, influencing many aspects of our social and work lives. As we move into a technology-based society, it is important that classroom experiences with computers are made available for all students. The purpose of this study is to examine pre-service teachers' attitudes towards computers. This study extends the…

  12. Applying service learning to computer science: attracting and engaging under-represented students

    NASA Astrophysics Data System (ADS)

    Dahlberg, Teresa; Barnes, Tiffany; Buch, Kim; Bean, Karen

    2010-09-01

    This article describes a computer science course that uses service learning as a vehicle to accomplish a range of pedagogical and BPC (broadening participation in computing) goals: (1) to attract a diverse group of students and engage them in outreach to younger students to help build a diverse computer science pipeline, (2) to develop leadership and team skills using experiential techniques, and (3) to develop student attitudes associated with success and retention in computer science. First, we describe the course and how it was designed to incorporate good practice in service learning. We then report preliminary results showing a positive impact of the course on all pedagogical goals and discuss the implications of the results for broadening participation in computing.

  13. Availability of software services for a hospital information system.

    PubMed

    Sakamoto, N

    1998-03-01

Hospital information systems (HISs) are becoming more important and covering more parts of daily hospital operations as order-entry systems become popular and electronic charts are introduced. Thus, HISs today need to be able to provide necessary services for hospital operations 24 h a day, 365 days a year. The provision of services discussed here does not simply mean the availability of computers, in which all that matters is that the computer is functioning. It means the provision of necessary information for hospital operations by the computer software, and we will call it the availability of software services. HISs these days are mostly client-server systems. To increase the availability of software services in these systems, it is not enough to just use system structures that are highly reliable in existing host-centred systems. Four main components support the availability of software services: network systems, client computers, server computers, and application software. In this paper, we suggest how to structure these four components to provide the minimum requested software services even if a part of the system stops functioning. The network system should be double-protected in strata, using Asynchronous Transfer Mode (ATM) as its base network. Client computers should be fat clients with as much application logic as possible, and reference information that does not require frequent updates (master files, for example) should be replicated in clients. It would be best if all server computers could be double-protected. However, if that is physically impossible, one database file should be made accessible by several server computers. Still, at least the basic patient information and the latest clinical records should be double-protected physically. Application software should be tested carefully before introduction. Different versions of the application software should always be kept and managed in case a new version has problems. If a hospital information system is designed and developed with these points in mind, its availability of software services should increase greatly.
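The "one database file accessible by several server computers" recommendation implies a client-side failover step: try each replica in turn and use the first that responds. The sketch below shows that pattern in its simplest form; the server callables and record shape are hypothetical illustrations, not the paper's system.

```python
def first_available(servers, request):
    """Try replicated servers in order and return the first
    successful response, so the software service survives the
    loss of any single server (illustrative sketch)."""
    errors = []
    for server in servers:
        try:
            return server(request)
        except Exception as exc:  # a down server raises
            errors.append(exc)
    raise RuntimeError(f"all replicas failed: {errors}")

# hypothetical replicas: one down, one healthy
def down(_request):
    raise ConnectionError("server unreachable")

def up(request):
    return {"patient_id": request, "record": "latest clinical data"}

response = first_available([down, up], "P-1001")
```

In a fat-client design this loop lives in the client, so the minimum requested service (here, fetching a patient record) keeps working even while one server is being repaired.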

  14. Investigation of the computer experiences and attitudes of pre-service mathematics teachers: new evidence from Turkey.

    PubMed

    Birgin, Osman; Catlioğlu, Hakan; Gürbüz, Ramazan; Aydin, Serhat

    2010-10-01

    This study aimed to investigate the experiences of pre-service mathematics (PSM) teachers with computers and their attitudes toward them. The Computer Attitude Scale, Computer Competency Survey, and Computer Use Information Form were administered to 180 Turkish PSM teachers. Results revealed that most PSM teachers used computers at home and at Internet cafes, and that their competency was generally intermediate and upper level. The study concludes that PSM teachers' attitudes about computers differ according to their years of study, computer ownership, level of computer competency, frequency of computer use, computer experience, and whether they had attended a computer-aided instruction course. However, computer attitudes were not affected by gender.

  15. Implementation of Service Learning and Civic Engagement for Students of Computer Information Systems through a Course Project at the Hashemite University

    ERIC Educational Resources Information Center

    Al-Khasawneh, Ahmad; Hammad, Bashar K.

    2015-01-01

    Service learning methodologies provide students of information systems with the opportunity to create and implement systems in real-world, public service-oriented social contexts. This paper presents a case study which involves integrating a service learning project into an undergraduate Computer Information Systems course entitled…

  16. Implementation of Service Learning and Civic Engagement for Computer Information Systems Students through a Course Project at the Hashemite University

    ERIC Educational Resources Information Center

    Al-Khasawneh, Ahmad; Hammad, Bashar K.

    2013-01-01

    Service learning methodologies provide information systems students with the opportunity to create and implement systems in real-world, public service-oriented social contexts. This paper presents a case study of integrating a service learning project into an undergraduate Computer Information Systems course titled "Information Systems"…

  17. Service-Oriented Architecture for NVO and TeraGrid Computing

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Miller, Craig; Williams, Roy; Steenberg, Conrad; Graham, Matthew

    2008-01-01

    The National Virtual Observatory (NVO) Extensible Secure Scalable Service Infrastructure (NESSSI) is a Web service architecture and software framework that enables Web-based astronomical data publishing and processing on grid computers such as the National Science Foundation's TeraGrid. Characteristics of this architecture include the following: (1) Services are created, managed, and upgraded by their developers, who are trusted users of computing platforms on which the services are deployed. (2) Service jobs can be initiated by means of Java or Python client programs run on a command line or with Web portals. (3) Access is granted within a graduated security scheme in which the size of a job that can be initiated depends on the level of authentication of the user.
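    The graduated security scheme in item (3) can be illustrated with a small policy table; the tier names and node limits below are hypothetical, not taken from NESSSI:

```python
# Hypothetical sketch of a graduated security scheme: the maximum job
# size a user may start depends on their authentication level. The tier
# names and limits here are illustrative only.

MAX_NODES_BY_AUTH = {
    "anonymous": 1,            # unauthenticated: tiny trial jobs only
    "portal": 16,              # web-portal login
    "grid_certificate": 256,   # full grid credential
}

def may_submit(auth_level, requested_nodes):
    """Allow a job only if it fits the caller's authentication tier."""
    return requested_nodes <= MAX_NODES_BY_AUTH.get(auth_level, 0)

print(may_submit("portal", 8))      # allowed
print(may_submit("anonymous", 8))   # rejected
```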

  18. Computer Simulation of Human Service Program Evaluations.

    ERIC Educational Resources Information Center

    Trochim, William M. K.; Davis, James E.

    1985-01-01

    Describes uses of computer simulations for the context of human service program evaluation. Presents simple mathematical models for most commonly used human service outcome evaluation designs (pretest-posttest randomized experiment, pretest-posttest nonequivalent groups design, and regression-discontinuity design). Translates models into single…

  19. Generic Divide and Conquer Internet-Based Computing

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J. (Technical Monitor); Radenski, Atanas

    2003-01-01

    The growth of Internet-based applications and the proliferation of networking technologies have been transforming traditional commercial application areas as well as computer and computational sciences and engineering. This growth stimulates the exploration of Peer to Peer (P2P) software technologies that can open new research and application opportunities not only for the commercial world, but also for the scientific and high-performance computing applications community. The general goal of this project is to achieve better understanding of the transition to Internet-based high-performance computing and to develop solutions for some of the technical challenges of this transition. In particular, we are interested in creating long-term motivation for end users to provide their idle processor time to support computationally intensive tasks. We believe that a practical P2P architecture should provide useful service to both clients with high-performance computing needs and contributors of lower-end computing resources. To achieve this, we are designing a dual-service architecture for P2P high-performance divide-and-conquer computing; we are also experimenting with a prototype implementation. Our proposed architecture incorporates a master server, utilizes dual satellite servers, and operates on the Internet in a dynamically changing large configuration of lower-end nodes provided by volunteer contributors. A dual satellite server comprises a high-performance computing engine and a lower-end contributor service engine. The computing engine provides generic support for divide-and-conquer computations. The service engine is intended to provide free, useful HTTP-based services to contributors of lower-end computing resources. Our proposed architecture is complementary to and accessible from computational grids, such as Globus, Legion, and Condor.
Grids provide remote access to existing higher-end computing resources; in contrast, our goal is to utilize idle processor time of lower-end Internet nodes. Our project is focused on a generic divide-and-conquer paradigm and on mobile applications of this paradigm that can operate on a loose and ever-changing pool of lower-end Internet nodes.
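    The generic divide-and-conquer support described above can be sketched as a higher-order function; in this illustrative Python example the recursive calls run locally, standing in for the sub-tasks that the satellite server's computing engine would farm out to contributor nodes:

```python
# A minimal sketch of the generic divide-and-conquer pattern: split a
# task, solve sub-tasks (here local calls; in the proposed architecture,
# work farmed out to volunteer nodes), and combine the results. Merge
# sort stands in for a computationally intensive task.

def divide_and_conquer(task, is_trivial, solve, split, combine):
    if is_trivial(task):
        return solve(task)
    left, right = split(task)
    return combine(
        divide_and_conquer(left, is_trivial, solve, split, combine),
        divide_and_conquer(right, is_trivial, solve, split, combine),
    )

def merge(a, b):
    """Combine two sorted lists into one sorted list."""
    out = []
    while a and b:
        out.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
    return out + a + b

data = [5, 2, 9, 1, 7]
sorted_data = divide_and_conquer(
    data,
    is_trivial=lambda t: len(t) <= 1,
    solve=lambda t: t,
    split=lambda t: (t[:len(t) // 2], t[len(t) // 2:]),
    combine=merge,
)
print(sorted_data)  # [1, 2, 5, 7, 9]
```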

  20. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package

    PubMed Central

    2012-01-01

    Background: Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. Results: In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Conclusions: Our use case scenarios and the elasticHPC package are steps towards the provision of cloud-based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org. PMID:23281941

  1. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package.

    PubMed

    El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter

    2012-01-01

    Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.

  2. University of Arizona: College and University Systems Environment.

    ERIC Educational Resources Information Center

    CAUSE/EFFECT, 1985

    1985-01-01

    The University of Arizona has begun to reorganize campus computing. Six working groups were formed to address six areas of computing: academic computing, library automation, administrative data processing and information systems, writing and graphics, video and audio services, and outreach and public service. (MLW)

  3. Where the Cloud Meets the Commons

    ERIC Educational Resources Information Center

    Ipri, Tom

    2011-01-01

    Changes presented by cloud computing--shared computing services, applications, and storage available to end users via the Internet--have the potential to seriously alter how libraries provide services, not only remotely, but also within the physical library, specifically concerning challenges facing the typical desktop computing experience.…

  4. Corporate Involvement in C AI

    ERIC Educational Resources Information Center

    Baker, Justine C.

    1978-01-01

    A historical perspective on computer manufacturers and their contributions to CAI. Corporate CAI products and services are mentioned, as is a forecast for educational involvement by computer corporations. A chart of major computer corporations shows gross sales, net earnings, products and services offered, and other corporate information. (RAO)

  5. 78 FR 15730 - Privacy Act of 1974; Computer Matching Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... 1974; Computer Matching Program AGENCY: U.S. Citizenship and Immigration Services, Department of... Matching Program between the Department of Homeland Security, U.S. Citizenship and Immigration Services and... computer matching program between the Department of Homeland Security, U.S. Citizenship and Immigration...

  6. The Fabric for Frontier Experiments Project at Fermilab

    NASA Astrophysics Data System (ADS)

    Kirby, Michael

    2014-06-01

    The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.

  7. Science-Driven Computing: NERSC's Plan for 2006-2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Horst D.; Kramer, William T.C.; Bailey, David H.

    NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems--computing, storage, networking, visualization and analysis--coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.

  8. The Internet--Flames, Firewalls and the Future. Proceedings for the 1995 Conference of the Council for Higher Education Computing Services (CHECS) (Roswell, New Mexico, November 8-10, 1995).

    ERIC Educational Resources Information Center

    Suiter, Martha, Ed.

    This set of proceedings assembles papers presented at the 1995 Council for Higher Education Computing Services (CHECS) conference, held at the New Mexico Military Institute in Roswell, New Mexico. CHECS members are higher education computing services organizations within the state of New Mexico. The main focus of the conference was the Internet…

  9. A survey of computer search service costs in the academic health sciences library.

    PubMed Central

    Shirley, S

    1978-01-01

    The Norris Medical Library, University of Southern California, has recently completed an extensive survey of costs involved in the provision of computer search services beyond vendor charges for connect time and printing. In this survey costs for such items as terminal depreciation, repair contract, personnel time, and supplies are analyzed. Implications of this cost survey are discussed in relation to planning and price setting for computer search services. PMID:708953

  10. A Simple Technique for Securing Data at Rest Stored in a Computing Cloud

    NASA Astrophysics Data System (ADS)

    Sedayao, Jeff; Su, Steven; Ma, Xiaohao; Jiang, Minghao; Miao, Kai

    "Cloud Computing" offers many potential benefits, including cost savings, the ability to deploy applications and services quickly, and the ease of scaling those application and services once they are deployed. A key barrier for enterprise adoption is the confidentiality of data stored on Cloud Computing Infrastructure. Our simple technique implemented with Open Source software solves this problem by using public key encryption to render stored data at rest unreadable by unauthorized personnel, including system administrators of the cloud computing service on which the data is stored. We validate our approach on a network measurement system implemented on PlanetLab. We then use it on a service where confidentiality is critical - a scanning application that validates external firewall implementations.

  11. Cloud Computing Security Issue: Survey

    NASA Astrophysics Data System (ADS)

    Kamal, Shailza; Kaur, Rajpreet

    2011-12-01

    Cloud computing has been a growing field in the IT industry since IBM proposed it in 2007, and companies such as Google, Amazon, and Microsoft have since brought further products to cloud computing. Cloud computing is Internet-based computing that shares resources and information on demand. It provides services such as SaaS, IaaS and PaaS, and the services and resources are shared through virtualization, which runs multiple applications on the cloud infrastructure. This discussion surveys the security challenges that arise in cloud computing and describes some standards and protocols that show how security can be managed.

  12. The Role of Computer-Aided Instruction in Science Courses and the Relevant Misconceptions of Pre-Service Teachers

    ERIC Educational Resources Information Center

    Aksakalli, Ayhan; Turgut, Umit; Salar, Riza

    2016-01-01

    This research aims to investigate the ways in which pre-service physics teachers interact with computers, which, as an indispensable means of today's technology, are of major value in education and training, and to identify any misconceptions said teachers may have about computer-aided instruction. As part of the study, computer-based physics…

  13. Spatial services grid

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Li, Qi; Cheng, Jicheng

    2005-10-01

    This paper discusses the concept, key technologies and main applications of the Spatial Services Grid. The technologies of grid computing and Web services are playing a revolutionary role in the study of spatial information services. The concept of the SSG (Spatial Services Grid) is put forward based on the SIG (Spatial Information Grid) and OGSA (Open Grid Services Architecture). Firstly, grid computing is reviewed, and the key technologies of the SIG and their main applications are surveyed. Secondly, grid computing and three kinds of SIG in the broad sense - the SDG (spatial data grid), the SIG (spatial information grid) and the SSG (spatial services grid) - and their relationships are described. Thirdly, the key technologies of the SSG are put forward. Finally, three representative applications of the SSG are discussed. The first is an urban location-based services grid, a typical spatial services grid that can be constructed on OGSA and a digital city platform. The second is a regional sustainable development grid, which is key to urban development. The third is a regional disaster and emergency management services grid.

  14. Service-oriented Software Defined Optical Networks for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Liu, Yuze; Li, Hui; Ji, Yuefeng

    2017-10-01

    With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (e.g., ubiquitous accessibility, higher bandwidth, more flexible management and greater security). This paper proposes a new service-oriented software-defined optical network architecture comprising a resource layer, a service abstraction layer, a control layer and an application layer, and then describes the corresponding service-provisioning method. A distinct service ID identifies each service a device can offer. Finally, we experimentally show that the proposed method can transmit different services based on their service IDs in the service-oriented software-defined optical network.
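    The service-ID idea can be sketched as a lookup from advertised IDs to capable devices; the IDs and device names below are invented for illustration and do not come from the paper:

```python
# Hypothetical sketch: each device advertises the services it offers
# under numeric service IDs, and the control layer routes a request to
# devices advertising the requested ID. IDs and names are made up.

DEVICE_SERVICES = {
    "optical-switch-1": {0x01, 0x02},   # e.g. best-effort, low-latency
    "optical-switch-2": {0x02, 0x03},   # e.g. low-latency, high-bandwidth
}

def route_by_service_id(service_id):
    """Return the devices that can carry a service with this ID."""
    return sorted(
        name for name, ids in DEVICE_SERVICES.items() if service_id in ids
    )

print(route_by_service_id(0x02))  # ['optical-switch-1', 'optical-switch-2']
print(route_by_service_id(0x03))  # ['optical-switch-2']
```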

  15. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  16. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so application systems can obtain computing power, storage space and software services according to demand. It concentrates all the computing resources and manages them automatically through software, without human intervention, freeing application providers from tedious details so they can concentrate on their business; this favours innovation and reduces cost. The ultimate goal of cloud computing is to provide calculation, services and applications as a public utility, so that people can use computing resources just as they use water, electricity, gas and the telephone. Currently the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS and IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM and other companies, summarizes the basic characteristics of cloud computing, and examines key technologies such as data storage, data management, virtualization and the programming model.

  17. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Method of computing coverage. 80.771 Section 80.771 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method...

  18. Computing and information services at the Jet Propulsion Laboratory - A management approach to a diversity of needs

    NASA Technical Reports Server (NTRS)

    Felberg, F. H.

    1984-01-01

    The Jet Propulsion Laboratory, a research and development organization with about 5,000 employees, presents a complicated set of requirements for an institutional system of computing and informational services. The approach taken by JPL in meeting this challenge is one of controlled flexibility. A central communications network is provided, together with selected computing facilities for common use. At the same time, staff members are given considerable discretion in choosing the mini- and microcomputers that they believe will best serve their needs. Consultation services, computer education, and other support functions are also provided.

  19. Department of Defense Use of Commercial Cloud Computing Capabilities and Services

    DTIC Science & Technology

    2015-11-01

    models (Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)), and four deployment models (Public...NIST defines three main models for cloud computing: IaaS, PaaS, and SaaS. These models help differentiate the implementation responsibilities that fall...and SaaS. 3. Public, Private, Community, and Hybrid Clouds Cloud services come in different forms, depending on the customer's specific needs

  20. Engaging Students in a Service-Learning Community through Computer-Mediated Communication

    ERIC Educational Resources Information Center

    Bair, Beth Teagarden

    2017-01-01

    In 2015, a university in rural Maryland offered an undergraduate service-learning leadership course, which collaborated with a service-learning community of practice. This interdisciplinary leadership course initiated and sustained personal and critical reflection and social interactions by integrating Computer-Medicated Communication (CMC)…

  1. iSERVO: Implementing the International Solid Earth Research Virtual Observatory by Integrating Computational Grid and Geographical Information Web Services

    NASA Astrophysics Data System (ADS)

    Aktas, Mehmet; Aydin, Galip; Donnellan, Andrea; Fox, Geoffrey; Granat, Robert; Grant, Lisa; Lyzenga, Greg; McLeod, Dennis; Pallickara, Shrideep; Parker, Jay; Pierce, Marlon; Rundle, John; Sayar, Ahmet; Tullis, Terry

    2006-12-01

    We describe the goals and initial implementation of the International Solid Earth Virtual Observatory (iSERVO). This system is built using a Web Services approach to Grid computing infrastructure and is accessed via a component-based Web portal user interface. We describe our implementations of services used by this system, including Geographical Information System (GIS)-based data grid services for accessing remote data repositories and job management services for controlling multiple execution steps. iSERVO is an example of a larger trend to build globally scalable scientific computing infrastructures using the Service Oriented Architecture approach. Adoption of this approach raises a number of research challenges in millisecond-latency message systems suitable for internet-enabled scientific applications. We review our research in these areas.

  2. Bigdata Driven Cloud Security: A Survey

    NASA Astrophysics Data System (ADS)

    Raja, K.; Hanifa, Sabibullah Mohamed

    2017-08-01

    Cloud Computing (CC) is a fast-growing technology for performing massive-scale and complex computing; it eliminates the need to maintain expensive computing hardware, dedicated space, and software. Recently, massive growth has been observed in the scale of data, or big data, generated through cloud computing. CC consists of a front-end, comprising the users' computers and the software required to access the cloud network, and a back-end, consisting of the various computers, servers and database systems that create the cloud. Cloud services are delivered through SaaS (Software as a Service, in which end users utilize outsourced software), PaaS (Platform as a Service, in which a platform is provided), IaaS (Infrastructure as a Service, in which the physical environment is outsourced), and DaaS (Database as a Service, in which data can be housed within a cloud), and this delivery ecosystem has become a powerful and popular architecture. Security threats remain the most vital barrier for the cloud computing environment; in health care in particular, the main barrier to the adoption of CC is data security. When placing and transmitting data over public networks, cyber attacks in any form are anticipated in CC, so cloud service users need to understand the risk of data breaches and the choice of service delivery model during deployment. This survey covers CC security issues in depth (including data security in health care) so that researchers can develop robust security application models using Big Data (BD) on CC that can be created and deployed easily, since BD evaluation is driven by fast-growing cloud-based applications developed using virtualized technologies. In this purview, MapReduce [12] is a good example of big data processing in a cloud environment and a model for cloud providers.
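    MapReduce, cited above as a model for big-data processing in the cloud, splits work into a map phase emitting key-value pairs and a reduce phase aggregating them; a minimal local word count illustrates the two phases:

```python
# Minimal local illustration of the MapReduce pattern: a map phase that
# emits (word, 1) pairs and a reduce phase that sums counts per key.
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["cloud security cloud", "big data security"]
pairs = [p for doc in docs for p in map_phase(doc)]
print(reduce_phase(pairs))  # {'cloud': 2, 'security': 2, 'big': 1, 'data': 1}
```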

  3. The Role of Networks in Cloud Computing

    NASA Astrophysics Data System (ADS)

    Lin, Geng; Devine, Mac

    The confluence of technology advancements and business developments in Broadband Internet, Web services, computing systems, and application software over the past decade has created a perfect storm for cloud computing. The "cloud model" of delivering and consuming IT functions as services is poised to fundamentally transform the IT industry and rebalance the inter-relationships among end users, enterprise IT, software companies, and the service providers in the IT ecosystem (Armbrust et al., 2009; Lin, Fu, Zhu, & Dasmalchi, 2009).

  5. Cloud computing for comparative genomics with windows azure platform.

    PubMed

    Kim, Insik; Jung, Jae-Yoon; Deluca, Todd F; Nelson, Tristan H; Wall, Dennis P

    2012-01-01

    Cloud computing services have emerged as a cost-effective alternative for cluster systems as the number of genomes and required computation power to analyze them increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services.

  6. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2012-10-01 2012-10-01 false Computers and data processing equipment (account...

  7. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2013-10-01 2013-10-01 false Computers and data processing equipment (account...

  8. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2011-10-01 2011-10-01 false Computers and data processing equipment (account...

  9. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2014-10-01 2014-10-01 false Computers and data processing equipment (account...

  10. 49 CFR 1242.46 - Computers and data processing equipment (account XX-27-46).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... REPORTS SEPARATION OF COMMON OPERATING EXPENSES BETWEEN FREIGHT SERVICE AND PASSENGER SERVICE FOR RAILROADS 1 Operating Expenses-Equipment § 1242.46 Computers and data processing equipment (account XX-27-46... 49 Transportation 9 2010-10-01 2010-10-01 false Computers and data processing equipment (account...

  11. 75 FR 54162 - Privacy Act of 1974

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... Program A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503), amended the... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Medicare and Medicaid Services [CMS Computer Match No. 2010-01; HHS Computer Match No. 1006] Privacy Act of 1974 AGENCY: Department of Health and...

  12. Cloud Computing for Comparative Genomics with Windows Azure Platform

    PubMed Central

    Kim, Insik; Jung, Jae-Yoon; DeLuca, Todd F.; Nelson, Tristan H.; Wall, Dennis P.

    2012-01-01

Cloud computing services have emerged as a cost-effective alternative to cluster systems as the number of genomes, and the computational power required to analyze them, have increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services. PMID:23032609

  13. Computer Aided Reference Services in the Academic Library: Experiences in Organizing and Operating an Online Reference Service.

    ERIC Educational Resources Information Center

    Hoover, Ryan E.

    1979-01-01

    Summarizes the development of the Computer-Aided Reference Services (CARS) division of the University of Utah Libraries' reference department. Development, organizational structure, site selection, equipment, management, staffing and training considerations, promotion and marketing, budget and pricing, record keeping, statistics, and evaluation…

  14. Evaluating the Usage of Cloud-Based Collaboration Services through Teamwork

    ERIC Educational Resources Information Center

    Qin, Li; Hsu, Jeffrey; Stern, Mel

    2016-01-01

    With the proliferation of cloud computing for both organizational and educational use, cloud-based collaboration services are transforming how people work in teams. The authors investigated the determinants of the usage of cloud-based collaboration services including teamwork quality, computer self-efficacy, and prior experience, as well as its…

  15. 76 FR 5833 - Amended Certification Regarding Eligibility to Apply for Worker Adjustment Assistance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-02

    ..., INSTAMATION, INC., DYNAMIC METHODS, COLLEGIATE, CORNELIUS PROFESSIONAL SERVICES, CIBER, UC4 AND ENVISIONS... the supply of computer systems design and support services for colleges and universities. New... subject firm and the supply of computer systems design and support services for the subject firm. The...

  16. 31 CFR 29.105 - Computation of time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... annuity computation purposes— (i) The service of a participant under the Police and Firefighters Plan who... pay (LWOP) that is creditable service. (1) Under the Police and Firefighters Plan, credit is allowed...'s credit under a formal leave system; and (ii) The service of a participant under the Teachers Plan...

  17. 31 CFR 29.105 - Computation of time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... annuity computation purposes— (i) The service of a participant under the Police and Firefighters Plan who... pay (LWOP) that is creditable service. (1) Under the Police and Firefighters Plan, credit is allowed...'s credit under a formal leave system; and (ii) The service of a participant under the Teachers Plan...

  18. 31 CFR 29.105 - Computation of time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... annuity computation purposes— (i) The service of a participant under the Police and Firefighters Plan who... pay (LWOP) that is creditable service. (1) Under the Police and Firefighters Plan, credit is allowed...'s credit under a formal leave system; and (ii) The service of a participant under the Teachers Plan...

  19. 31 CFR 29.105 - Computation of time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... annuity computation purposes— (i) The service of a participant under the Police and Firefighters Plan who... pay (LWOP) that is creditable service. (1) Under the Police and Firefighters Plan, credit is allowed...'s credit under a formal leave system; and (ii) The service of a participant under the Teachers Plan...

  20. In-Service Science Teachers' Attitude towards Information Communication Technology

    ERIC Educational Resources Information Center

    Kibirige, I.

    2011-01-01

    The purpose of this study is to determine the attitude of in-service science teachers towards information communication technology (ICT) in education. The study explores the relationship between in-service teachers and four independent variables: their attitudes toward computers; their cultural perception of computers; their perceived computer…

  1. 76 FR 58044 - Amended Certification Regarding Eligibility To Apply for Worker Adjustment Assistance; The Mega...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-19

    ...., including on-site leased workers from Computer Solutions and Software International, Inc., Dell Service... Insphere Insurance Solutions, Inc., Including On-Site Leased Workers From Computer Solutions and Software International, Inc., Dell Service Sales, Emdeon Business Services, KFORCE, Microsoft, Pariveda Solutions, Inc...

  2. A service brokering and recommendation mechanism for better selecting cloud services.

    PubMed

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted.
The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
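The preference-aware ranking step described in this record can be sketched as a weighted scoring of candidate configurations. The criterion names, weights, and candidate values below are illustrative assumptions, not taken from the paper; only the idea (normalize each criterion, invert cost so cheaper is better, rank by the user's preference weights) follows the abstract.

```python
# Hypothetical sketch of preference-aware solution ranking: min-max normalize
# each criterion across candidates, invert cost (lower is better), then rank
# by a weighted sum reflecting the application provider's preferences.

def score_solutions(solutions, weights):
    """Rank candidate cloud configurations by weighted, normalized criteria."""
    criteria = weights.keys()
    lo = {c: min(s[c] for s in solutions) for c in criteria}
    hi = {c: max(s[c] for s in solutions) for c in criteria}

    def norm(c, v):
        span = hi[c] - lo[c]
        x = (v - lo[c]) / span if span else 1.0
        return 1.0 - x if c == "cost" else x  # cheaper candidates score higher

    return sorted(
        solutions,
        key=lambda s: sum(weights[c] * norm(c, s[c]) for c in criteria),
        reverse=True,
    )

# Illustrative candidates and user preference weights (assumed, not real data).
candidates = [
    {"name": "A", "cost": 120.0, "compute": 8,  "sla": 0.999},
    {"name": "B", "cost": 80.0,  "compute": 4,  "sla": 0.995},
    {"name": "C", "cost": 200.0, "compute": 16, "sla": 0.9999},
]
prefs = {"cost": 0.5, "compute": 0.3, "sla": 0.2}
best = score_solutions(candidates, prefs)[0]
print(best["name"])  # A: balanced cost/compute/SLA under these weights
```

The same skeleton extends naturally to the paper's filtration step: drop candidates that violate hard requirements before scoring the rest.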

  3. 5 CFR 847.905 - How is the present value of an immediate annuity with credit for NAFI service computed?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false How is the present value of an immediate....905 How is the present value of an immediate annuity with credit for NAFI service computed? (a) OPM will determine the present value of the immediate annuity including service credit for NAFI service by...

  4. Expeditionary Oblong Mezzanine

    DTIC Science & Technology

    2016-03-01

...providing infrastructure as a service (IaaS) and software as a service (SaaS) cloud computing technologies. IaaS is a way of providing computing services...such as servers, storage, and network equipment services (Mell & Grance, 2009). SaaS is a means of providing software and applications as an on

  5. Integrating Grid Services into the Cray XT4 Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NERSC; Cholia, Shreyas; Lin, Hwa-Chun Wendy

    2009-05-01

The 38,640-core Cray XT4 "Franklin" system at the National Energy Research Scientific Computing Center (NERSC) is a massively parallel resource available to Department of Energy researchers that also provides on-demand grid computing to the Open Science Grid. The integration of grid services on Franklin presented various challenges, including fundamental differences between the interactive and compute nodes, a stripped-down compute-node operating system without dynamic library support, a shared-root environment, and idiosyncratic application launching. In our work, we describe how we resolved these challenges on a running, general-purpose production system to provide on-demand compute, storage, accounting, and monitoring services through generic grid interfaces that mask the underlying system-specific details for the end user.

  6. Network and computing infrastructure for scientific applications in Georgia

    NASA Astrophysics Data System (ADS)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association (GRENA) provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA is participating.

  7. Development process of in-service training intended for teachers to perform teaching of mathematics with computer algebra systems

    NASA Astrophysics Data System (ADS)

    Ardıç, Mehmet Alper; Işleyen, Tevfik

    2018-01-01

In this study, we deal with the development process of in-service training activities designed to enable secondary school mathematics teachers to teach mathematics using computer algebra systems. In addition, the results obtained from the research carried out during and after the in-service training are summarized. The last section focuses on suggestions that any teacher can use to carry out activities aimed at using computer algebra systems in teaching environments.

  8. A General Cross-Layer Cloud Scheduling Framework for Multiple IoT Computer Tasks.

    PubMed

    Wu, Guanlin; Bao, Weidong; Zhu, Xiaomin; Zhang, Xiongtao

    2018-05-23

    The diversity of IoT services and applications brings enormous challenges to improving the performance of multiple computer tasks' scheduling in cross-layer cloud computing systems. Unfortunately, the commonly-employed frameworks fail to adapt to the new patterns on the cross-layer cloud. To solve this issue, we design a new computer task scheduling framework for multiple IoT services in cross-layer cloud computing systems. Specifically, we first analyze the features of the cross-layer cloud and computer tasks. Then, we design the scheduling framework based on the analysis and present detailed models to illustrate the procedures of using the framework. With the proposed framework, the IoT services deployed in cross-layer cloud computing systems can dynamically select suitable algorithms and use resources more effectively to finish computer tasks with different objectives. Finally, the algorithms are given based on the framework, and extensive experiments are also given to validate its effectiveness, as well as its superiority.
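The core idea in this record, that services on a cross-layer cloud "dynamically select suitable algorithms" per task objective, can be sketched as a policy dispatcher. The policy names, node model, and fields below are assumptions made for illustration; they are not the authors' framework.

```python
# Illustrative sketch (not the paper's actual framework): route each IoT task
# to a node using the scheduling policy that matches the task's objective.

def min_latency(task, nodes):
    # Prefer the node with the shortest current queue (a proxy for latency).
    return min(nodes, key=lambda n: n["queue"])

def min_cost(task, nodes):
    # Prefer the cheapest node that still has enough capacity for the task.
    eligible = [n for n in nodes if n["capacity"] >= task["load"]]
    return min(eligible, key=lambda n: n["cost"])

POLICIES = {"latency": min_latency, "cost": min_cost}

def schedule(task, nodes):
    """Select a policy by task objective, place the task, update the node."""
    node = POLICIES[task["objective"]](task, nodes)
    node["queue"] += 1
    return node["name"]

# Toy cross-layer setup: one edge node, one cloud node (assumed values).
nodes = [
    {"name": "edge-1",  "queue": 2, "capacity": 4,  "cost": 1.0},
    {"name": "cloud-1", "queue": 5, "capacity": 64, "cost": 3.0},
]
print(schedule({"objective": "latency", "load": 1}, nodes))   # edge-1
print(schedule({"objective": "cost",    "load": 16}, nodes))  # cloud-1
```

New objectives (e.g. energy) slot in by registering another policy function, which is the kind of extensibility a scheduling framework of this sort aims for.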

  9. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), which is system software that monitors application behavior at run-time, analyzes the collected information, and optimizes resources on cloud computing. RTM monitors application software through library instrumentation, as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
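The monitor-analyze-adapt cycle this record describes can be sketched as a small loop. The sampling function, threshold, and "scale_out" decision below are placeholders for illustration; they are not RTM's actual API or instrumentation.

```python
# Minimal sketch of a run-time monitor loop: sample metrics, analyze them
# against a QoS threshold, and trigger an adaptation when it is exceeded.
import time

def monitor(sample, analyze, adapt, rounds, interval=0.0):
    """Generic monitor-analyze-adapt loop; returns the decisions taken."""
    decisions = []
    for _ in range(rounds):
        metrics = sample()           # e.g. performance counters, instrumentation
        decision = analyze(metrics)  # compare against QoS thresholds
        if decision is not None:
            adapt(decision)          # reconfigure computing resources
            decisions.append(decision)
        time.sleep(interval)
    return decisions

# Toy driver: simulated CPU load climbs each sample; scale out past 80%.
loads = iter([0.40, 0.70, 0.95])
log = []
result = monitor(
    sample=lambda: {"cpu": next(loads)},
    analyze=lambda m: "scale_out" if m["cpu"] > 0.80 else None,
    adapt=log.append,
    rounds=3,
)
print(result)  # ['scale_out']
```

In a real system the `sample` callable would read hardware counters or instrumented library hooks, and `adapt` would reprovision virtualized resources rather than append to a list.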

  10. Distance Learning and Cloud Computing: "Just Another Buzzword or a Major E-Learning Breakthrough?"

    ERIC Educational Resources Information Center

    Romiszowski, Alexander J.

    2012-01-01

    "Cloud computing is a model for the enabling of ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and other services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." This…

  11. Computing Services Planning, Downsizing, and Organization at the University of Alberta.

    ERIC Educational Resources Information Center

    Beltrametti, Monica

    1993-01-01

    In a six-month period, the University of Alberta (Canada) campus computing services department formulated a strategic plan, and downsized and reorganized to meet financial constraints and respond to changing technology, especially distributed computing. The new department is organized to react more effectively to trends in technology and user…

  12. 12 CFR 567.12 - Purchased credit card relationships, servicing assets, intangible assets (other than purchased...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and core capital. (b) Computation of core and tangible capital. (1) Purchased credit card relationships may be included (that is, not deducted) in computing core capital in accordance with the... restrictions in this section, mortgage servicing assets may be included in computing core and tangible capital...

  13. 75 FR 18251 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Internal Revenue Service (IRS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-09

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA-2009-0066] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Internal Revenue Service (IRS))--Match 1305 AGENCY: Social Security... INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988 (Public Law (Pub. L.) 100-503...

  14. 75 FR 62623 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Internal Revenue Service (IRS...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-12

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2010-0015] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Internal Revenue Service (IRS))--Match Number 1016 AGENCY: Social Security... regarding protections for such persons. The Privacy Act, as amended, regulates the use of computer matching...

  15. Pre-Service Teachers, Computers, and ICT Courses: A Troubled Relationship

    ERIC Educational Resources Information Center

    Fokides, Emmanuel

    2016-01-01

    The study presents the results of a four-year long survey among pre-service teachers, examining factors which influence their knowledge and skills on computers, as well as factors which contribute to shaping their perceived computer competency. Participants were seven hundred fifty-four senior students, at the Department of Primary School…

  16. Modelling the Influences of Beliefs on Pre-Service Teachers' Attitudes towards Computer Use

    ERIC Educational Resources Information Center

    Teo, Timothy

    2012-01-01

The purpose of this study is to examine pre-service teachers' attitudes toward computer use. The impact of five variables (perceived usefulness, perceived ease of use, subjective norm, facilitating conditions, and technological complexity) on attitude towards computer use was assessed. Data were collected from 230 preservice teachers through…

  17. Using Google Applications as Part of Cloud Computing to Improve Knowledge and Teaching Skills of Faculty Members at the University of Bisha, Bisha, Saudi Arabia

    ERIC Educational Resources Information Center

    Alshihri, Bandar A.

    2017-01-01

Cloud computing is a recent computing paradigm that has been integrated into the educational system. It provides numerous opportunities for delivering a variety of computing services in a way that has not been experienced before. Google is among the top business companies that offer their cloud services by launching a number of…

  18. The Fabric for Frontier Experiments Project at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirby, Michael

    2014-01-01

The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy-to-use job submission services for processing physics tasks on the Open Science Grid and elsewhere, 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data, 3) custom and generic database applications for calibrations, beam information, and other purposes, and 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.

  19. A cloud computing based 12-lead ECG telemedicine service

    PubMed Central

    2012-01-01

Background: Due to the great variability of 12-lead ECG instruments and medical specialists’ interpretation skills, it remains a challenge to deliver rapid and accurate 12-lead ECG reports with senior cardiologists’ decision making support in emergency telecardiology. Methods: We create a new cloud and pervasive computing based 12-lead Electrocardiography (ECG) service to realize ubiquitous 12-lead ECG tele-diagnosis. Results: This developed service enables ECG to be transmitted and interpreted via mobile phones. That is, tele-consultation can take place while the patient is on the ambulance, between the onsite clinicians and the off-site senior cardiologists, or among hospitals. Most importantly, this developed service is convenient, efficient, and inexpensive. Conclusions: This cloud computing based ECG tele-consultation service expands the traditional 12-lead ECG applications onto the collaboration of clinicians at different locations or among hospitals. In short, this service can greatly improve medical service quality and efficiency, especially for patients in rural areas. This service has been evaluated and proved to be useful by cardiologists in Taiwan. PMID:22838382

  20. A cloud computing based 12-lead ECG telemedicine service.

    PubMed

    Hsieh, Jui-Chien; Hsu, Meng-Wei

    2012-07-28

    Due to the great variability of 12-lead ECG instruments and medical specialists' interpretation skills, it remains a challenge to deliver rapid and accurate 12-lead ECG reports with senior cardiologists' decision making support in emergency telecardiology. We create a new cloud and pervasive computing based 12-lead Electrocardiography (ECG) service to realize ubiquitous 12-lead ECG tele-diagnosis. This developed service enables ECG to be transmitted and interpreted via mobile phones. That is, tele-consultation can take place while the patient is on the ambulance, between the onsite clinicians and the off-site senior cardiologists, or among hospitals. Most importantly, this developed service is convenient, efficient, and inexpensive. This cloud computing based ECG tele-consultation service expands the traditional 12-lead ECG applications onto the collaboration of clinicians at different locations or among hospitals. In short, this service can greatly improve medical service quality and efficiency, especially for patients in rural areas. This service has been evaluated and proved to be useful by cardiologists in Taiwan.

  1. Utilization of KSC Present Broadband Communications Data System for Digital Video Services

    NASA Technical Reports Server (NTRS)

    Andrawis, Alfred S.

    2002-01-01

This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video on demand to desktop personal computers via the KSC computer network.

  2. Cloud Computing E-Communication Services in the University Environment

    ERIC Educational Resources Information Center

    Babin, Ron; Halilovic, Branka

    2017-01-01

    The use of cloud computing services has grown dramatically in post-secondary institutions in the last decade. In particular, universities have been attracted to the low-cost and flexibility of acquiring cloud software services from Google, Microsoft and others, to implement e-mail, calendar and document management and other basic office software.…

  3. Effects of Educational Beliefs on Attitudes towards Using Computer Technologies

    ERIC Educational Resources Information Center

    Onen, Aysem Seda

    2012-01-01

This study, which aims to determine the relationship between pre-service teachers' beliefs about education and their attitudes towards utilizing computers and the internet, is a descriptive study based on the survey model. The sampling of the study consisted of 270 pre-service teachers. The potential relationship between the beliefs of pre-service teachers about…

  4. Measuring and Supporting Pre-Service Teachers' Self-Efficacy towards Computers, Teaching, and Technology Integration

    ERIC Educational Resources Information Center

    Killi, Carita; Kauppinen, Merja; Coiro, Julie; Utriainen, Jukka

    2016-01-01

    This paper reports on two studies designed to examine pre-service teachers' self-efficacy beliefs. Study I investigated the measurement properties of a self-efficacy beliefs questionnaire comprising scales for computer self-efficacy, teacher self-efficacy, and self-efficacy towards technology integration. In Study I, 200 pre-service teachers…

  5. 78 FR 1735 - Airworthiness Directives; Honeywell International Inc. Air Data Pressure Transducers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-09

    ... reviewed Honeywell Alert Service Bulletin ADM/ADC/ADAHRS-34-A01, dated November 6, 2012. This service...), air data computers, air data attitude heading reference systems, and digital air data computers... fails. Honeywell Service Bulletin ACM/ADC/ADAHRS-34-A01, dated November 6, 2012, specifies to refer to...

  6. A Compilation of Information on Computer Applications in Nutrition and Food Service.

    ERIC Educational Resources Information Center

    Casbergue, John P.

Compiled is information on the application of computer technology to nutrition and food service. It is designed to assist dieticians and nutritionists interested in applying electronic data processing to food service and related industries. The compilation is indexed by subject area. Included for each subject area are: (1) bibliographic references,…

  7. 47 CFR 69.113 - Non-premium charges for MTS-WATS equivalent services.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Non-premium charges for MTS-WATS equivalent... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.113 Non-premium charges for MTS-WATS equivalent services. (a) Charges that are computed in accordance with this section shall be...

  8. 47 CFR 69.113 - Non-premium charges for MTS-WATS equivalent services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Non-premium charges for MTS-WATS equivalent... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.113 Non-premium charges for MTS-WATS equivalent services. (a) Charges that are computed in accordance with this section shall be...

  9. 47 CFR 69.113 - Non-premium charges for MTS-WATS equivalent services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Non-premium charges for MTS-WATS equivalent... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.113 Non-premium charges for MTS-WATS equivalent services. (a) Charges that are computed in accordance with this section shall be...

  10. 47 CFR 69.113 - Non-premium charges for MTS-WATS equivalent services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Non-premium charges for MTS-WATS equivalent... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.113 Non-premium charges for MTS-WATS equivalent services. (a) Charges that are computed in accordance with this section shall be...

  11. 47 CFR 69.113 - Non-premium charges for MTS-WATS equivalent services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Non-premium charges for MTS-WATS equivalent... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges § 69.113 Non-premium charges for MTS-WATS equivalent services. (a) Charges that are computed in accordance with this section shall be...

  12. Utilization of KSC Present Broadband Communications Data System For Digital Video Services

    NASA Technical Reports Server (NTRS)

    Andrawis, Alfred S.

    2001-01-01

This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video on demand to desktop personal computers via the KSC computer network.

  13. 24 CFR 908.108 - Cost.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... computer hardware or software, or both, the cost of contracting for those services, or the cost of... operating budget. At the HA's option, the cost of the computer software may include service contracts to...

  14. Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen J.; Gati, Frank; Yuko, James R.; Motil, Brian J.; Lumpkin, Forrest E.

    2009-01-01

    The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module showed that thermal protection is necessary because of significant heating from the plume.

  15. A service based adaptive U-learning system using UX.

    PubMed

    Jeong, Hwa-Young; Yi, Gangman

    2014-01-01

In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems combines both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high-performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning units, using services in a ubiquitous computing environment. We also investigate functions that support users' tailored materials according to their learning style; that is, we analyzed users' data and characteristics in accordance with their user experience. We subsequently adapted the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques.

  16. A Service Based Adaptive U-Learning System Using UX

    PubMed Central

    Jeong, Hwa-Young

    2014-01-01

In recent years, traditional development techniques for e-learning systems have been changing to become more convenient and efficient. One new technology in the development of application systems combines both cloud and ubiquitous computing. Cloud computing can support learning system processes by using services, while ubiquitous computing can provide system operation and management via a high-performance technical process and network. In the cloud computing environment, a learning service application can provide a business module or process to the user via the internet. This research focuses on providing the learning material and processes of courses by learning units, using services in a ubiquitous computing environment. We also investigate functions that support users' tailored materials according to their learning style; that is, we analyzed users' data and characteristics in accordance with their user experience. We subsequently adapted the learning process to fit their learning performance and preferences. Finally, we demonstrate that the proposed system delivers better learning effects to learners than existing techniques. PMID:25147832

  17. Fog computing job scheduling optimization based on bees swarm

    NASA Astrophysics Data System (ADS)

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing large amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions, such as job scheduling, aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between the CPU execution time and the allocated memory required by fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and the genetic algorithm in terms of CPU execution time and allocated memory.
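
    The scheduling objective above can be sketched as a cost function over task-to-node assignments that trades off CPU execution time against allocated memory. In this sketch a plain random search stands in for the Bees Life Algorithm, and the node and task figures are hypothetical, not from the paper.

```python
import random

# Hypothetical fog nodes: (CPU speed in MIPS, free memory in MB).
NODES = [(1000, 512), (2000, 256), (1500, 1024)]
# Hypothetical tasks: (instructions in millions, memory needed in MB).
TASKS = [(4000, 128), (8000, 64), (2000, 256), (6000, 32)]

def cost(assignment, alpha=0.5):
    """Weighted tradeoff: makespan (busiest node's runtime) vs. the
    worst-case fraction of any node's memory that gets allocated."""
    busy = [0.0] * len(NODES)
    used = [0] * len(NODES)
    for task, node in zip(TASKS, assignment):
        busy[node] += task[0] / NODES[node][0]
        used[node] += task[1]
    mem = max(u / NODES[n][1] for n, u in enumerate(used))
    return alpha * max(busy) + (1 - alpha) * mem

def random_search(iters=2000, seed=0):
    """Toy optimizer: keep the lowest-cost random assignment seen."""
    rng = random.Random(seed)
    best = tuple(rng.randrange(len(NODES)) for _ in TASKS)
    for _ in range(iters):
        cand = tuple(rng.randrange(len(NODES)) for _ in TASKS)
        if cost(cand) < cost(best):
            best = cand
    return best

print(random_search(), round(cost(random_search()), 3))
```

    A swarm method like BLA would replace `random_search` with guided exploration around good assignments; the cost function is the part the abstract actually specifies.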

  18. A Service Brokering and Recommendation Mechanism for Better Selecting Cloud Services

    PubMed Central

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new-generation computing infrastructure, and many cloud vendors provide different types of cloud services. Choosing the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences, in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed, after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real time; generates feasible cloud configuration solutions according to user specifications and acceptable cost predictions; assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement (SLA)) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI). PMID:25170937
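
    The preference-aware evaluation step can be illustrated as a weighted-sum ranking of candidate configurations: each solution is scored on several criteria and ordered by a user-supplied preference vector. The provider names, criteria and scores below are hypothetical, not drawn from the advisory tool.

```python
# Hypothetical candidate solutions with normalized criterion scores in [0, 1].
SOLUTIONS = {
    "provider_a": {"compute": 0.9, "cost": 0.4, "sla": 0.8},
    "provider_b": {"compute": 0.6, "cost": 0.9, "sla": 0.7},
    "provider_c": {"compute": 0.7, "cost": 0.7, "sla": 0.9},
}

def recommend(prefs):
    """Rank solutions by the weighted sum of criterion scores,
    where `prefs` gives the user's relative weight per criterion."""
    total = sum(prefs.values())
    weights = {c: w / total for c, w in prefs.items()}
    def score(sol):
        return sum(weights[c] * sol[c] for c in weights)
    return sorted(SOLUTIONS, key=lambda name: score(SOLUTIONS[name]), reverse=True)

# A cost-sensitive user weights cost three times as heavily:
print(recommend({"compute": 1, "cost": 3, "sla": 1}))
```

    Shifting the weight vector reorders the recommendation, which is the essence of making the evaluation preference-aware.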

  19. The UK Human Genome Mapping Project online computing service.

    PubMed

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.

  20. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the various parameters and configuration data needed by ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. As an intermediate middleware system between clients and external information sources (such as the central BDII, GOCDB and MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services continuously evolve to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible use of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified declaration of storage protocols required for PanDA Pilot site movers. Improvements of the information model and general updates are also shown; in particular, we explain how collaborations outside ATLAS could benefit from the system as a computing-resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  1. Investigating Pre-Service Early Childhood Teachers' Attitudes towards the Computer Based Education in Science Activities

    ERIC Educational Resources Information Center

    Yilmaz, Nursel; Alici, Sule

    2011-01-01

    The purpose of this study was to investigate pre-service early childhood teachers' attitudes towards using Computer Based Education (CBE) while implementing science activities. More specifically, the present study examined the effect of different variables such as gender, year in program, experience in preschool, owning a computer, and the…

  2. Bootstrapping and Maintaining Trust in the Cloud

    DTIC Science & Technology

    2016-03-16

    of infrastructure-as-a-service (IaaS) cloud computing services such as Amazon Web Services, Google Compute Engine, Rackspace, et al. means that...Implementation We implemented keylime in ∼3.2k lines of Python in four components: registrar, node, CV, and tenant. The registrar offers a REST-based web ...bootstrap key K. It provides an unencrypted REST-based web service for these two functions. As described earlier, the protocols for exchanging data

  3. Chaos and the Marketing of Computing Services on Campus.

    ERIC Educational Resources Information Center

    May, James H.

    1989-01-01

    In an age of chaos and uncertainty in computing services delivery, the best marketing strategy that can be adopted is concern for user constituencies and the long range solutions to their problems. (MLW)

  4. 31 CFR 29.342 - Computed annuity exceeds the statutory maximum.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) In cases in which the total computed annuity exceeds the statutory maximum: (1) Federal Benefit... sufficient service as of June 30, 1997, to reach the statutory maximum benefit, but has sufficient service at...

  5. QUARTERLY TECHNICAL PROGRESS REPORT, JULY, AUGUST, SEPTEMBER 1967.

    DTIC Science & Technology

    Contents: Circuit research program; Hardware systems research; Computer system software research; Illinois pattern recognition computer: ILLIAC II... service, use, and program development; IBM 7094/1401 service, use, and program development; Problem specifications; General laboratory information.

  6. 75 FR 11917 - Chrysler LLC, Technology Center, Including On-Site Leased Workers from Aerotek, Ajilon, Altair...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-12

    ..., Cer-Cad Engineering Resources, Computer Consultants of America, Computer Engrg Services, Compuware..., Automated Analysis Corp/Belcan, Bartech Group, CAE Tech, CDI Information Services, CER-CAD Engineering...

  7. Adopting Cloud Computing in the Pakistan Navy

    DTIC Science & Technology

    2015-06-01

    administrative aspect is required to operate optimally, provide synchronized delivery of cloud services, and integrate multi-provider cloud environment...AND ABBREVIATIONS ANSI American National Standards Institute AWS Amazon Web Services CIA Confidentiality Integrity Availability CIO Chief...also adopted cloud computing as an integral component of military operations conducted either locally or remotely. With the use of cloud services

  8. Developing a computer security training program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1990-01-01

    We all know that training can empower the computer protection program. However, pushing computer security information outside the computer security organization into the rest of the company is often labeled as an easy project or a dungeon full of dragons. Used in part or whole, the strategy offered in this paper may help the developer of a computer security training program ward off dragons and create products and services. The strategy includes GOALS (what the result of training will be), POINTERS (tips to ensure survival), and STEPS (products and services as a means to accomplish the goals).

  9. Security Risks of Cloud Computing and Its Emergence as 5th Utility Service

    NASA Astrophysics Data System (ADS)

    Ahmad, Mushtaq

    Cloud Computing is being projected by major cloud service provider IT companies such as IBM, Google, Yahoo, Amazon and others as the fifth utility, where clients will have access to processing for applications and software projects that need very high processing speed for compute-intensive work and huge data capacity for scientific and engineering research problems, as well as for e-business and data content network applications. These services for different types of clients are provided under DASM (Direct Access Service Management), based on virtualization of hardware and software and very high-bandwidth Internet (Web 2.0) communication. The paper reviews these developments in Cloud Computing and the hardware/software configuration of the cloud paradigm. The paper also examines the vital aspects of security risks projected by IT industry experts and cloud clients, and highlights cloud providers' responses to cloud security risks.

  10. AceCloud: Molecular Dynamics Simulations in the Cloud.

    PubMed

    Harvey, M J; De Fabritiis, G

    2015-05-26

    We present AceCloud, an on-demand service for molecular dynamics simulations. AceCloud is designed to facilitate the secure execution of large ensembles of simulations on an external cloud computing service (currently Amazon Web Services). The AceCloud client, integrated into the ACEMD molecular dynamics package, provides an easy-to-use interface that abstracts all aspects of interaction with the cloud services. This gives the user the experience that all simulations are running on their local machine, minimizing the learning curve typically associated with the transition to using high performance computing services.

  11. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data. PMID:22163811

  12. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, as well as the infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM), which is system software that monitors application behavior at run-time, analyzes the collected information, and optimizes cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation, as well as the underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data.
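
    The monitor-analyze-adapt cycle described in this and the preceding record can be sketched as a loop that times instrumented runs and picks the best-observed configuration for the next one. The adaptation rule below is a stand-in for the RTM's actual policy, and the `workers` knob is purely illustrative (it is recorded but does not affect the toy workload).

```python
import time

class RunTimeMonitor:
    """Toy monitor: record (configuration, runtime) samples from
    instrumented calls and report the best-observed configuration."""

    def __init__(self):
        self.samples = []  # list of (workers, elapsed seconds)

    def instrument(self, fn, workers, *args):
        # Monitor step: time one run under a given configuration.
        start = time.perf_counter()
        result = fn(*args)
        self.samples.append((workers, time.perf_counter() - start))
        return result

    def best_workers(self, default=1):
        # Analyze/adapt step: choose the configuration with the
        # lowest observed runtime, or a default with no history.
        if not self.samples:
            return default
        return min(self.samples, key=lambda s: s[1])[0]

rtm = RunTimeMonitor()
for w in (1, 2, 4):
    rtm.instrument(sum, w, range(10000))
print(rtm.best_workers())
```

    A real RTM would gather these samples from library instrumentation and hardware performance counters rather than wall-clock timing alone.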

  13. Reviews on Security Issues and Challenges in Cloud Computing

    NASA Astrophysics Data System (ADS)

    An, Y. Z.; Zaaba, Z. F.; Samsudin, N. F.

    2016-11-01

    Cloud computing is an Internet-based computing service, provided by a third party, that allows the sharing of resources and data among devices. It is widely used in many organizations nowadays and is becoming more popular because it changes how the Information Technology (IT) of an organization is organized and managed. It provides many benefits, such as simplicity and lower costs, almost unlimited storage, minimal maintenance, easy utilization, backup and recovery, continuous availability, quality of service, automated software integration, scalability, flexibility and reliability, easy access to information, elasticity, quick deployment and a lower barrier to entry. While the use of cloud computing services increases in this new era, their security issues become a challenge. Cloud computing must be safe and secure enough to ensure the privacy of its users. This paper first outlines the architecture of cloud computing, then discusses the most common security issues of using the cloud and some solutions to them, since security is one of the most critical aspects of cloud computing due to the sensitivity of users' data.

  14. IAServ: an intelligent home care web services platform in a cloud for aging-in-place.

    PubMed

    Su, Chuan-Jun; Chiang, Chang-Yu

    2013-11-12

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, needs to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform), which provides personalized healthcare services ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economical, scalable, and robust healthcare services over the Internet.

  15. IAServ: An Intelligent Home Care Web Services Platform in a Cloud for Aging-in-Place

    PubMed Central

    Su, Chuan-Jun; Chiang, Chang-Yu

    2013-01-01

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, needs to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform), which provides personalized healthcare services ubiquitously in a cloud computing setting to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economical, scalable, and robust healthcare services over the Internet. PMID:24225647

  16. The Ethics of Cloud Computing.

    PubMed

    de Bruin, Boudewijn; Floridi, Luciano

    2017-02-01

    Cloud computing is rapidly gaining traction in business. It offers businesses online services on demand (such as Gmail, iCloud and Salesforce) and allows them to cut costs on hardware and IT support. This is the first paper in business ethics dealing with this new technology. It analyzes the informational duties of hosting companies that own and operate cloud computing datacentres (e.g., Amazon). It considers the cloud services providers leasing 'space in the cloud' from hosting companies (e.g., Dropbox, Salesforce). And it examines the business and private 'clouders' using these services. The first part of the paper argues that hosting companies, services providers and clouders have mutual informational (epistemic) obligations to provide and seek information about relevant issues such as consumer privacy, reliability of services, data mining and data ownership. The concept of interlucency is developed as an epistemic virtue governing ethically effective communication. The second part considers potential forms of government restrictions on or proscriptions against the development and use of cloud computing technology. Referring to the concept of technology neutrality, it argues that interference with hosting companies and cloud services providers is hardly ever necessary or justified. It is argued, too, however, that businesses using cloud services (e.g., banks, law firms, hospitals etc. storing client data in the cloud) will have to follow rather more stringent regulations.

  17. The services-oriented architecture: ecosystem services as a framework for diagnosing change in social ecological systems

    Treesearch

    Philip A. Loring; F. Stuart Chapin; S. Craig Gerlach

    2008-01-01

    Computational thinking (CT) is a way to solve problems and understand complex systems that draws on concepts fundamental to computer science and is well suited to the challenges that face researchers of complex, linked social-ecological systems. This paper explores CT's usefulness to sustainability science through the application of the services-oriented...

  18. Problem-Based Learning Environment in Basic Computer Course: Pre-Service Teachers' Achievement and Key Factors for Learning

    ERIC Educational Resources Information Center

    Efendioglu, Akin

    2015-01-01

    This experimental study aims to determine pre-service teachers' achievements and key factors that affect the learning process with regard to problem-based learning (PBL) and lecture-based computer course (LBCC) conditions. The research results showed that the pre-service teachers in the PBL group had significantly higher achievement scores than…

  19. U.S. Forest Service's Power-IT-Down Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Case study describes the U.S. Forest Service's Power-IT-Down Program, which strongly encouraged employees to shut off their computers when leaving the office. The U.S. Forest Service first piloted the program on a voluntary basis in one region then implemented it across the agency's 43,000 computers as a joint effort by the Chief Information Office and Sustainable Operations department.

  20. A Geospatial Information Grid Framework for Geological Survey.

    PubMed

    Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong

    2015-01-01

    The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper.

  1. A Geospatial Information Grid Framework for Geological Survey

    PubMed Central

    Wu, Liang; Xue, Lei; Li, Chaoling; Lv, Xia; Chen, Zhanlong; Guo, Mingqiang; Xie, Zhong

    2015-01-01

    The use of digital information in geological fields is becoming very important. Thus, informatization in geological surveys should not stagnate as a result of the level of data accumulation. The integration and sharing of distributed, multi-source, heterogeneous geological information is an open problem in geological domains. Applications and services use geological spatial data with many features, including being cross-region and cross-domain and requiring real-time updating. As a result of these features, desktop and web-based geographic information systems (GISs) experience difficulties in meeting the demand for geological spatial information. To facilitate the real-time sharing of data and services in distributed environments, a GIS platform that is open, integrative, reconfigurable, reusable and elastic would represent an indispensable tool. The purpose of this paper is to develop a geological cloud-computing platform for integrating and sharing geological information based on a cloud architecture. Thus, the geological cloud-computing platform defines geological ontology semantics; designs a standard geological information framework and a standard resource integration model; builds a peer-to-peer node management mechanism; achieves the description, organization, discovery, computing and integration of the distributed resources; and provides the distributed spatial meta service, the spatial information catalog service, the multi-mode geological data service and the spatial data interoperation service. The geological survey information cloud-computing platform has been implemented, and based on the platform, some geological data services and geological processing services were developed. Furthermore, an iron mine resource forecast and an evaluation service is introduced in this paper. PMID:26710255
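
    The resource-description and discovery idea in the two records above (services register metadata with a catalogue, and clients discover them by need) can be sketched minimally as follows; the service names and keywords are hypothetical, not taken from the platform.

```python
class SpatialCatalog:
    """Toy spatial-information catalogue: services register keyword
    metadata, and clients discover matching services by keyword."""

    def __init__(self):
        self._services = {}  # service name -> set of keywords

    def register(self, name, keywords):
        self._services[name] = set(keywords)

    def discover(self, keyword):
        # Return the names of all services tagged with the keyword.
        return sorted(n for n, kw in self._services.items() if keyword in kw)

cat = SpatialCatalog()
cat.register("iron-mine-forecast", {"mineral", "forecast"})
cat.register("geo-data-service", {"mineral", "map"})
print(cat.discover("mineral"))
```

    A production catalogue would add semantics (the ontology mentioned above) and distributed peer-to-peer lookup rather than a single in-memory dictionary.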

  2. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    NASA Astrophysics Data System (ADS)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate the physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there has been tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services at the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application based on it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics and cross-disciplinary studies.

  3. A global distributed storage architecture

    NASA Technical Reports Server (NTRS)

    Lionikis, Nemo M.; Shields, Michael F.

    1996-01-01

    NSA architects and planners have come to realize that to gain the maximum benefit from, and keep pace with, emerging technologies, we must move to a radically different computing architecture. The compute complex of the future will be a distributed heterogeneous environment, where, to a much greater extent than today, network-based services are invoked to obtain resources. Among the rewards of implementing the services-based view are that it insulates the user from much of the complexity of our multi-platform, networked, computer and storage environment and hides its diverse underlying implementation details. In this paper, we will describe one of the fundamental services being built in our envisioned infrastructure; a global, distributed archive with near-real-time access characteristics. Our approach for adapting mass storage services to this infrastructure will become clear as the service is discussed.

  4. Cloud computing in pharmaceutical R&D: business risks and mitigations.

    PubMed

    Geiger, Karl

    2010-05-01

    Cloud computing provides information processing power and business services, delivering these services over the Internet from centrally hosted locations. Major technology corporations aim to supply these services to every sector of the economy. Deploying business processes 'in the cloud' requires special attention to the regulatory and business risks assumed when running on both hardware and software that are outside the direct control of a company. The identification of risks at the correct service level allows a good mitigation strategy to be selected. The pharmaceutical industry can take advantage of existing risk management strategies that have already been tested in the finance and electronic commerce sectors. In this review, the business risks associated with the use of cloud computing are discussed, and mitigations achieved through knowledge from securing services for electronic commerce and from good IT practice are highlighted.

  5. Gidzenko in Service Module with laptop computers

    NASA Image and Video Library

    2001-03-30

    ISS-01-E-5070 (December 2000) --- Astronaut Yuri P. Gidzenko, Expedition One Soyuz commander, works with computers in the Zvezda or Service Module aboard the Earth-orbiting International Space Station (ISS). The picture was taken with a digital still camera.

  6. An Annotated Partial List of Science-Related Computer Bulletin Board Systems.

    ERIC Educational Resources Information Center

    Journal of Student Research, 1990

    1990-01-01

    A list of science-related computer bulletin board systems is presented. Entries include geographic area, phone number, and a short explanation of services. Also included are the addresses and phone numbers of selected commercial services. (KR)

  7. GeoBrain Computational Cyber-laboratory for Earth Science Studies

    NASA Astrophysics Data System (ADS)

    Deng, M.; di, L.

    2009-12-01

    Computational approaches (e.g., computer-based data visualization, analysis, and modeling) are critical for conducting increasingly data-intensive Earth science (ES) studies to understand functions and changes of the Earth system. However, Earth scientists, educators, and students currently face two major barriers that prevent them from effectively using computational approaches in their learning, research, and application activities: 1) difficulties in finding, obtaining, and using multi-source ES data; and 2) lack of analytic functions and computing resources (e.g., analysis software, computing models, and high-performance computing systems) to analyze the data. Taking advantage of recent advances in cyberinfrastructure, Web services, and geospatial interoperability technologies, GeoBrain, a project funded by NASA, has developed a prototype computational cyber-laboratory to effectively remove these two barriers. The cyber-laboratory makes ES data and computational resources at large organizations in distributed locations available to, and easily usable by, the Earth science community through 1) enabling seamless discovery, access, and retrieval of distributed data, 2) federating and enhancing data discovery with a catalogue federation service and a semantically-augmented catalogue service, 3) customizing data access and retrieval at user request with interoperable, personalized, and on-demand data access and services, 4) automating or semi-automating multi-source geospatial data integration, 5) developing a large number of analytic functions as value-added, interoperable, and dynamically chainable geospatial Web services and deploying them in high-performance computing facilities, 6) enabling online geospatial process modeling and execution, and 7) building a user-friendly, extensible web portal for users to access the cyber-laboratory resources.
Users can interactively discover the needed data and perform on-demand data analysis and modeling through the web portal. The GeoBrain cyber-laboratory provides solutions to meet common needs of ES research and education, such as distributed data access and analysis services, easy access to and use of ES data, and enhanced geoprocessing and geospatial modeling capability. It greatly facilitates ES research, education, and applications. The development of the cyber-laboratory provides insights, lessons learned, and technology readiness to build more capable computing infrastructure for ES studies, which can meet the wide-ranging needs of current and future generations of scientists, researchers, educators, and students for their formal or informal educational training, research projects, career development, and lifelong learning.

  8. Making Spatial Statistics Service Accessible On Cloud Platform

    NASA Astrophysics Data System (ADS)

    Mu, X.; Wu, J.; Li, T.; Zhong, Y.; Gao, X.

    2014-04-01

    Web services can bring together applications running on diverse platforms; users can access and share various data, information, and models more effectively and conveniently from a web service platform. Cloud computing has emerged as a paradigm of Internet computing in which dynamic, scalable, and often virtualized resources are provided as services. With the rapid growth of massive data and the constraints of network bandwidth, traditional web service platforms face prominent problems in areas such as computational efficiency, maintenance cost, and data security. In this paper, we offer a spatial statistics service based on the Microsoft cloud platform. An experiment was carried out to evaluate the availability and efficiency of this service. The results show that this spatial statistics service is conveniently accessible to the public and offers high processing efficiency.
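
    The abstract does not specify which statistics the service exposes, so as a purely illustrative sketch (my own example, not the paper's implementation), here are two basic spatial statistics that such a service might compute server-side:

```python
import math

def mean_center(points):
    """Mean center of a set of (x, y) points: the average location."""
    n = len(points)
    return (sum(x for x, _ in points) / n,
            sum(y for _, y in points) / n)

def standard_distance(points):
    """Standard distance: root-mean-square distance of the points from
    their mean center, a basic measure of spatial dispersion."""
    cx, cy = mean_center(points)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in points) / len(points))
```

    For the unit square corners [(0, 0), (2, 0), (0, 2), (2, 2)], the mean center is (1.0, 1.0) and the standard distance is sqrt(2).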

  9. Orion Service Module Reaction Control System Plume Impingement Analysis Using PLIMP/RAMP2

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Lumpkin, Forrest E., III; Gati, Frank; Yuko, James R.; Motil, Brian J.

    2009-01-01

    The Orion Crew Exploration Vehicle Service Module Reaction Control System engine plume impingement was computed using the plume impingement program (PLIMP). PLIMP uses the plume solution from RAMP2, which is the refined version of the reacting and multiphase program (RAMP) code. The heating rate and pressure (force and moment) on surfaces or components of the Service Module were computed. The RAMP2 solution of the flow field inside the engine and the plume was compared with those computed using GASP, a computational fluid dynamics code, showing reasonable agreement. The computed heating rate and pressure using PLIMP were compared with the Reaction Control System plume model (RPM) solution and the plume impingement dynamics (PIDYN) solution. RPM uses the GASP-based plume solution, whereas PIDYN uses the SCARF plume solution. Three sets of the heating rate and pressure solutions agree well. Further thermal analysis on the avionic ring of the Service Module was performed using MSC Patran/Pthermal. The obtained temperature results showed that thermal protection is necessary because of significant heating from the plume.

  10. Signal and image processing algorithm performance in a virtual and elastic computing environment

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and its associated high-performance computing needs, strains existing computing infrastructures. Purchasing computer power as a commodity using a cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may make it possible to develop and optimize algorithms without procuring additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results provide performance comparisons with existing infrastructure. A discussion on using cloud computing with government data addresses best security practices that exist within cloud services, such as AWS.

  11. Climate Analytics as a Service. Chapter 11

    NASA Technical Reports Server (NTRS)

    Schnase, John L.

    2016-01-01

    Exascale computing, big data, and cloud computing are driving the evolution of large-scale information systems toward a model of data-proximal analysis. In response, we are developing a concept of climate analytics as a service (CAaaS) that represents a convergence of data analytics and archive management. With this approach, high-performance compute-storage implemented as an analytic system is part of a dynamic archive comprising both static and computationally realized objects. It is a system whose capabilities are framed as behaviors over a static data collection, but where queries cause results to be created, not found and retrieved. Those results can be the product of a complex analysis, but, importantly, they also can be tailored responses to the simplest of requests. NASA's MERRA Analytic Service and associated Climate Data Services API provide a real-world example of climate analytics delivered as a service in this way. Our experiences reveal several advantages to this approach, not the least of which is orders-of-magnitude time reduction in the data assembly task common to many scientific workflows.
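
    The idea that "queries cause results to be created, not found and retrieved" can be illustrated with a toy sketch (class and object names are hypothetical, not the MERRA Analytic Service API): an archive that serves static objects directly and materializes derived objects on first request, caching them as new archive members.

```python
class DynamicArchive:
    """Toy 'analytics as a service' archive: static objects plus recipes
    that realize derived objects on demand, then persist them."""

    def __init__(self, static_objects):
        self.objects = dict(static_objects)   # static + realized objects
        self.recipes = {}                     # name -> compute function

    def register(self, name, fn):
        """Register a recipe for a computationally realized object."""
        self.recipes[name] = fn

    def get(self, name):
        """A query: returns a stored object, or creates, caches, and
        returns a derived one."""
        if name not in self.objects:
            self.objects[name] = self.recipes[name](self)
        return self.objects[name]

# Hypothetical example: monthly temperatures plus a derived mean.
archive = DynamicArchive({"t2m_monthly": [290.1, 291.4, 289.8]})
archive.register("t2m_mean",
                 lambda a: sum(a.get("t2m_monthly"))
                 / len(a.get("t2m_monthly")))
```

    After the first `archive.get("t2m_mean")`, the derived value is itself an archive member, mirroring the chapter's notion of a dynamic archive of static and computationally realized objects.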

  12. Outline of CS application experiments

    NASA Astrophysics Data System (ADS)

    Otsu, Y.; Kondoh, K.; Matsumoto, M.

    1985-09-01

    To promote and investigate the practical application of satellite use, CS application experiments were performed for various social activity needs, including those of public services such as the National Police Agency and the Japanese National Railway, computer network services, news material transmissions, and advanced teleconference activities. Public service satellite communications systems were developed and tested. Based on the results obtained, several public services have implemented CS-2 for practical disaster back-up uses. Practical computer network and enhanced video-conference experiments were also performed.

  13. A System for Monitoring and Management of Computational Grids

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Biegel, Bryan (Technical Monitor)

    2002-01-01

    As organizations begin to deploy large computational grids, it has become apparent that systems for observation and control of the resources, services, and applications that make up such grids are needed. Administrators must observe the operation of resources and services to ensure that they are operating correctly and they must control the resources and services to ensure that their operation meets the needs of users. Users are also interested in the operation of resources and services so that they can choose the most appropriate ones to use. In this paper we describe a prototype system to monitor and manage computational grids and describe the general software framework for control and observation in distributed environments that it is based on.

  14. 47 CFR 73.6008 - Distance computations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 4 2010-10-01 2010-10-01 false Distance computations. 73.6008 Section 73.6008 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES RADIO BROADCAST SERVICES... reference points must be calculated in accordance with § 73.208(c) of this part. ...

  15. 5 CFR 841.602 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the Individual Retirement Record under consideration. Unexpended balance means the unrefunded amount... of this chapter; (b) Amounts deposited by an employee for periods of service (including military... service. Year of the computation means the calendar year when the unexpended balance is being computed. ...

  16. 11 CFR 9003.6 - Production of computer information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... legal and accounting services, including the allocation of payroll and overhead expenditures; (4..., ground services and facilities made available to media personnel, including records relating to how costs... explaining the computer system's software capabilities, such as user guides, technical manuals, formats...

  17. Annuity-estimating program

    NASA Technical Reports Server (NTRS)

    Jillie, D. W.

    1979-01-01

    Program computes benefits and other relevant factors for Federal Civil Service employees. Computed information includes retirement annuity, survivor annuity for each retirement annuity, highest average annual consecutive 3-year salary, length of service including credit for unused sick leave, amount of deposit and redeposit plus interest.
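
    As an illustration of the kind of computation such a program performs, here is a simplified sketch of the tiered CSRS basic annuity formula (1.5% of the high-3 average salary for the first 5 years of service, 1.75% for the next 5, 2% thereafter, capped at 80%); it deliberately omits the sick-leave credit, deposit/redeposit, and survivor-annuity handling that the actual program includes:

```python
def csrs_basic_annuity(high3_salary, years_of_service):
    """Simplified CSRS basic annuity: 1.5% of high-3 for the first
    5 years, 1.75% for the next 5, 2% thereafter, capped at 80%.
    (Omits sick-leave credit, deposits, and survivor reductions.)"""
    y = years_of_service
    pct = (0.015 * min(y, 5)
           + 0.0175 * min(max(y - 5, 0), 5)
           + 0.02 * max(y - 10, 0))
    return high3_salary * min(pct, 0.80)
```

    For example, 30 years of service with a $60,000 high-3 salary yields 56.25% of high-3, i.e. $33,750 per year; at 45 years the 80% cap applies.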

  18. A Simple XML Producer-Consumer Protocol

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. 
The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML, and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section will provide some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
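
    As a rough illustration of an XML-encoded event exchange of this kind (the element and attribute names here are hypothetical, not the Grid Forum schema), a producer might serialize a monitoring event and a consumer recover it as follows:

```python
import time
import xml.etree.ElementTree as ET

def serialize_event(source, event_type, value, timestamp=None):
    """Producer side: encode one monitoring event as an XML message."""
    event = ET.Element("event", {"type": event_type, "source": source})
    ET.SubElement(event, "timestamp").text = str(timestamp or time.time())
    ET.SubElement(event, "value").text = str(value)
    return ET.tostring(event, encoding="unicode")

def parse_event(xml_text):
    """Consumer side: recover the event fields from the XML message."""
    root = ET.fromstring(xml_text)
    return {"type": root.get("type"),
            "source": root.get("source"),
            "value": root.findtext("value")}

# The message itself is transport-independent: the same string could be
# written to a TCP stream or carried in a UDP datagram.
msg = serialize_event("node42.example.gov", "cpu.load", 0.87, timestamp=1)
```

    Here `parse_event(msg)` returns `{'type': 'cpu.load', 'source': 'node42.example.gov', 'value': '0.87'}`; a real protocol would add message framing and a schema for event payloads.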

  19. 5 CFR 847.907 - How is the monthly annuity rate used to compute the present value of the deferred annuity without...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... compute the present value of the deferred annuity without credit for NAFI service determined? 847.907... the present value of the deferred annuity without credit for NAFI service determined? (a) The monthly annuity rate used to compute the present value of the deferred annuity under § 847.906 of this subpart for...

  20. A Cross-Cultural Examination of the Intention to Use Technology between Singaporean and Malaysian Pre-Service teachers: An Application of the Technology Acceptance Model (TAM)

    ERIC Educational Resources Information Center

    Teo, Timothy; Luan, Wong Su; Sing, Chai Ching

    2008-01-01

    As computers become more ubiquitous in our everyday lives, educational settings are being transformed, and educators and students are expected to teach and learn using computers (Lee, 2003). This study, therefore, explored pre-service teachers' self-reported future intentions to use computers in Singapore and Malaysia. A survey methodology was…

  1. Computing the Expected Cost of an Appointment Schedule for Statistically Identical Customers with Probabilistic Service Times

    PubMed Central

    Dietz, Dennis C.

    2014-01-01

    A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
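
    The paper presents an analytic method; purely as a point of comparison (function names and cost weights below are my own, not the paper's), the same expected cost can be estimated by brute-force Monte Carlo:

```python
import random

def simulated_schedule_cost(appt_times, sample_service, show_prob,
                            wait_cost=1.0, idle_cost=0.5,
                            n_sims=10_000, seed=0):
    """Monte Carlo estimate of the expected cost of one appointment
    schedule: customers are statistically identical, service times are
    drawn from sample_service(rng), and customer i shows up with
    probability show_prob(i) (time-dependent no-shows)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        server_free = 0.0                 # when the server is next free
        cost = 0.0
        for i, t in enumerate(appt_times):
            if rng.random() > show_prob(i):
                continue                  # no-show: slot simply skipped
            start = max(t, server_free)
            cost += wait_cost * (start - t)                # customer waits
            cost += idle_cost * max(t - server_free, 0.0)  # server idles
            server_free = start + sample_service(rng)
        total += cost
    return total / n_sims
```

    For example, with slots at 0, 10, and 20 minutes, a constant 12-minute service time, and no no-shows, the second and third customers wait 2 and 4 minutes, so the estimate is exactly 6.0 at unit waiting cost. An analytic method like the paper's avoids the simulation cost, which matters when evaluating many candidate schedules inside an optimizer.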

  2. 28 CFR 0.75 - Policy functions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., automated information services, publication services, library services and any other Department-wide central...) Provide computer and digital telecommunications services on an equitable resource-sharing basis to all...

  3. 28 CFR 0.75 - Policy functions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., automated information services, publication services, library services and any other Department-wide central...) Provide computer and digital telecommunications services on an equitable resource-sharing basis to all...

  4. Implementing Computer Technology in the Rehabilitation Process.

    ERIC Educational Resources Information Center

    McCollum, Paul S., Ed.; Chan, Fong, Ed.

    1985-01-01

    This special issue contains seven articles, addressing rehabilitation in the information age, computer-assisted rehabilitation services, computer technology in rehabilitation counseling, computer-assisted career exploration and vocational decision making, computer-assisted assessment, computer enhanced employment opportunities for persons with…

  5. Acceptance of Cloud Services in Face-to-Face Computer-Supported Collaborative Learning: A Comparison between Single-User Mode and Multi-User Mode

    ERIC Educational Resources Information Center

    Wang, Chia-Sui; Huang, Yong-Ming

    2016-01-01

    Face-to-face computer-supported collaborative learning (CSCL) was used extensively to facilitate learning in classrooms. Cloud services not only allow a single user to edit a document, but they also enable multiple users to simultaneously edit a shared document. However, few researchers have compared student acceptance of such services in…

  6. 34 CFR 365.11 - How is the allotment of Federal funds for State independent living (IL) services computed?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 2 2011-07-01 2010-07-01 true How is the allotment of Federal funds for State independent living (IL) services computed? 365.11 Section 365.11 Education Regulations of the Offices of the... EDUCATION STATE INDEPENDENT LIVING SERVICES How Does the Secretary Make a Grant to a State? § 365.11 How is...

  7. 34 CFR 365.11 - How is the allotment of Federal funds for State independent living (IL) services computed?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false How is the allotment of Federal funds for State independent living (IL) services computed? 365.11 Section 365.11 Education Regulations of the Offices of the... EDUCATION STATE INDEPENDENT LIVING SERVICES How Does the Secretary Make a Grant to a State? § 365.11 How is...

  8. Investigating Pre-Service Early Childhood Teachers' Views and Intentions about Integrating and Using Computers in Early Childhood Settings: Compilation of an Instrument

    ERIC Educational Resources Information Center

    Nikolopoulou, Kleopatra; Gialamas, Vasilis

    2009-01-01

    This paper discusses the compilation of an instrument in order to investigate pre-service early childhood teachers' views and intentions about integrating and using computers in early childhood settings. For the purpose of this study a questionnaire was compiled and administered to 258 pre-service early childhood teachers (PECTs), in Greece. A…

  9. The ECE Pre-Service Teachers' Perception on Factors Affecting the Integration of Educational Computer Games in Two Conditions: Selecting versus Redesigning

    ERIC Educational Resources Information Center

    Sancar Tokmak, Hatice; Ozgelen, Sinan

    2013-01-01

    This case study aimed to examine early childhood education (ECE) pre-service teachers' perception on the factors affecting integration of educational computer games to their instruction in two areas: selecting and redesigning. Twenty-six ECE pre-service teachers participated in the study. The data was collected through open-ended questionnaires,…

  10. Research on the application in disaster reduction for using cloud computing technology

    NASA Astrophysics Data System (ADS)

    Tao, Liang; Fan, Yida; Wang, Xingling

    Cloud computing technology has recently been applied rapidly in different domains, promoting their informatization. Based on an analysis of application requirements in disaster reduction, combined with the characteristics of cloud computing technology, we present research on the application of cloud computing technology in disaster reduction. First, we give the architecture of the disaster reduction cloud, which consists of disaster reduction infrastructure as a service (IaaS), a disaster reduction cloud application platform as a service (PaaS), and disaster reduction software as a service (SaaS). Second, we discuss the disaster reduction standards system in five aspects. Third, we describe the security system of the disaster reduction cloud. Finally, we conclude that the use of cloud computing technology will help solve problems in disaster reduction and promote its development.

  11. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE PAGES

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; ...

    2017-10-01

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are relying more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and allow new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  12. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey

    Here, the Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are relying more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and allow new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.

  13. Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

    NASA Astrophysics Data System (ADS)

    Alandes, Maria; Andreeva, Julia; Anisenkov, Alexey; Bagliesi, Giuseppe; Belforte, Stephano; Campana, Simone; Dimou, Maria; Flix, Jose; Forti, Alessandra; di Girolamo, A.; Karavakis, Edward; Lammel, Stephan; Litmaath, Maarten; Sciaba, Andrea; Valassi, Andrea

    2017-10-01

    The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and detailed description of service configuration. Currently this information is scattered over multiple generic information sources like GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges. Experiments are relying more and more on opportunistic resources, which by their nature are more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and allow new data structures to be implemented easily following the evolution of the computing models and operations of the experiments.
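
    The core validation task described above, collecting per-source service descriptions and flagging inconsistencies, can be sketched as follows (the field values and merge policy are illustrative; this is not CRIC's actual data model):

```python
def merge_service_records(records):
    """Merge per-source descriptions of one service (e.g. from GOCDB,
    OIM, BDII) and flag attributes whose values disagree across sources.

    records: {source_name: {attribute: value}}
    Returns (merged_attributes, sorted_list_of_conflicting_attributes).
    """
    values = {}                           # attribute -> {source: value}
    for source, attrs in records.items():
        for key, val in attrs.items():
            values.setdefault(key, {})[source] = val
    # Naive merge policy: first source wins; real systems would rank
    # providers or ask an operator to resolve the conflict.
    merged = {k: next(iter(v.values())) for k, v in values.items()}
    conflicts = sorted(k for k, v in values.items()
                       if len(set(v.values())) > 1)
    return merged, conflicts
```

    A catalogue built this way can present one consistent record per service while surfacing the cross-source disagreements that the abstract notes are otherwise hard to detect.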

  14. 5 CFR 846.304 - Computing FERS annuities for persons with CSRS service.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) and (c). (d)(1) Except as specified in § 846.305, the average pay for computations under paragraphs (b... basic pay in effect over any 3 consecutive years of creditable service or, in the case of an annuity...

  15. 5 CFR 846.304 - Computing FERS annuities for persons with CSRS service.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) and (c). (d)(1) Except as specified in § 846.305, the average pay for computations under paragraphs (b... basic pay in effect over any 3 consecutive years of creditable service or, in the case of an annuity...

  16. System and Method for Providing a Climate Data Persistence Service

    NASA Technical Reports Server (NTRS)

    Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)

    2018-01-01

    A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.

  17. Benefits of cloud computing for PACS and archiving.

    PubMed

    Koch, Patrick

    2012-01-01

    The goal of cloud-based services is to provide easy, scalable access to computing resources and IT services. The healthcare industry requires a private cloud that adheres to government mandates designed to ensure privacy and security of patient data while enabling access by authorized users. Cloud-based computing in the imaging market has evolved from a service that provided cost effective disaster recovery for archived data to fully featured PACS and vendor neutral archiving services that can address the needs of healthcare providers of all sizes. Healthcare providers worldwide are now using the cloud to distribute images to remote radiologists while supporting advanced reading tools, deliver radiology reports and imaging studies to referring physicians, and provide redundant data storage. Vendor managed cloud services eliminate large capital investments in equipment and maintenance, as well as staffing for the data center--creating a reduction in total cost of ownership for the healthcare provider.

  18. Development of stable Grid service at the next generation system of KEKCC

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; Iwai, G.; Matsunaga, H.; Murakami, K.; Sasaki, T.; Suzuki, S.; Takase, W.

    2017-10-01

    Many experiments in accelerator-based science are actively running at the High Energy Accelerator Research Organization (KEK) in Japan, using the SuperKEKB and J-PARC accelerators. At KEK, the computing demand from the various experiments for data processing, analysis, and MC simulation is steadily increasing. This is not limited to high-energy experiments: computing requirements from hadron and neutrino experiments and several astro-particle physics projects are also growing rapidly due to very high precision measurements. Under this situation, several projects supported by KEK, including the Belle II, T2K, ILC, and KAGRA experiments, will utilize the Grid computing infrastructure as their main computing resource. The Grid system and services at KEK, already in production, are being upgraded for more stable operation at the same time as the full-scale hardware replacement of the KEK Central Computer System (KEKCC). The next-generation KEKCC system began operation at the beginning of September 2016. The basic Grid services, e.g. BDII, VOMS, LFC, the CREAM computing element, and the StoRM storage element, are deployed on a more robust hardware configuration. Since raw data transfer is one of the most important tasks for the KEKCC, two redundant GridFTP servers are attached to the StoRM service instances with 40 Gbps network bandwidth on the LHCONE routing. These are dedicated to Belle II raw data transfer to other sites, separate from the servers used for data transfer by the other VOs. Additionally, we prepared a redundant configuration for database-oriented services such as LFC and AMGA using LifeKeeper. The LFC service comprises two read/write servers and two read-only servers for the Belle II experiment, each with an individual database for load balancing. The FTS3 service is newly deployed for Belle II data distribution.
The service of CVMFS stratum-0 is started for the Belle II software repository, and stratum-1 service is prepared for the other VOs. In this way, there are a lot of upgrade for the real production service of Grid infrastructure at KEK Computing Research Center. In this paper, we would like to introduce the detailed configuration of the hardware for Grid instance, and several mechanisms to construct the robust Grid system in the next generation system of KEKCC.

  19. OpenID connect as a security service in Cloud-based diagnostic imaging systems

    NASA Astrophysics Data System (ADS)

    Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter

    2015-03-01

    The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained by their own physical facilities. However, privacy and security concerns have consistently been regarded as the major obstacle to the adoption of cloud computing in healthcare domains. Furthermore, the traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. REST is an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards to potentially become the de facto standard for securing cloud computing and mobile applications, and has been called the "Kerberos of the Cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow this technology to be incorporated within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among a DI-r (Diagnostic Imaging Repository), heterogeneous PACS (Picture Archiving and Communication Systems), and mobile clients in the cloud ecosystem. By using OpenID Connect as an open-source identity and authentication service, deploying a DI-r and PACS to private or community clouds can attain a security level equivalent to that of the traditional computing model.
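    The client side of the OpenID Connect authorization code flow described in this record can be sketched as follows; this is a minimal illustration, and the issuer URL, `client_id`, and redirect URI are hypothetical placeholders, not values from the paper.

    ```python
    # Minimal sketch of an OpenID Connect authorization request as a DI client
    # might issue it; endpoint, client_id, and redirect_uri are hypothetical.
    import secrets
    import urllib.parse

    ISSUER = "https://idp.example.org"  # assumed identity provider base URL

    def build_auth_request(client_id, redirect_uri, scope="openid profile"):
        """Build the authorization request URL the client redirects the user to."""
        state = secrets.token_urlsafe(16)   # CSRF protection, echoed back on the callback
        nonce = secrets.token_urlsafe(16)   # binds the returned ID token to this request
        params = {
            "response_type": "code",        # authorization code flow
            "client_id": client_id,
            "redirect_uri": redirect_uri,
            "scope": scope,                 # "openid" marks this as OIDC, not plain OAuth
            "state": state,
            "nonce": nonce,
        }
        return ISSUER + "/authorize?" + urllib.parse.urlencode(params), state, nonce

    url, state, nonce = build_auth_request("pacs-client", "https://pacs.example.org/cb")
    qs = urllib.parse.parse_qs(urllib.parse.urlsplit(url).query)
    assert qs["response_type"] == ["code"] and "openid" in qs["scope"][0]
    ```

    After the identity provider redirects back with an authorization code, the client would exchange it at the token endpoint for an ID token and verify the `nonce` claim; that server interaction is omitted here.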

  20. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    PubMed

    Liao, Wen-Hwa; Qiu, Wan-Li

    2016-01-01

    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture.
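    The analytic hierarchy process used in this study reduces pairwise judgments between criteria to a priority weight vector. A minimal sketch of that computation follows; the comparison matrix below is an invented illustration on the Saaty 1-9 scale, not the paper's survey data, though it is arranged so that cost effectiveness ranks first, as the study found.

    ```python
    # Sketch of the AHP priority computation: pairwise comparison matrix ->
    # priority weights via the row geometric mean approximation of the
    # principal eigenvector. Judgments below are illustrative only.
    import math

    criteria = ["cost effectiveness", "software design", "system architecture"]
    # A[i][j] = how strongly criterion i is preferred over criterion j (1-9 scale)
    A = [
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ]

    def ahp_weights(matrix):
        """Approximate the principal eigenvector by normalized row geometric means."""
        gmeans = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
        total = sum(gmeans)
        return [g / total for g in gmeans]

    w = ahp_weights(A)
    ranked = sorted(zip(criteria, w), key=lambda p: -p[1])
    assert ranked[0][0] == "cost effectiveness"  # top factor, matching the study
    assert abs(sum(w) - 1.0) < 1e-9              # weights form a priority vector
    ```

    A full AHP application would also compute the consistency ratio of the matrix and combine criterion weights with alternative scores; this sketch shows only the core weight derivation.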

  1. Cloud Service Selection Using Multicriteria Decision Analysis

    PubMed Central

    Anuar, Nor Badrul; Shiraz, Muhammad; Haque, Israat Tanzeena

    2014-01-01

    Cloud computing (CC) has recently been receiving tremendous attention from the IT industry and academic researchers. CC leverages its unique services to cloud customers in a pay-as-you-go, anytime, anywhere manner. Cloud services provide dynamically scalable services through the Internet on demand. Therefore, service provisioning plays a key role in CC. The cloud customer must be able to select appropriate services according to his or her needs. Several approaches have been proposed to solve the service selection problem, including multicriteria decision analysis (MCDA). MCDA enables the user to choose from among a number of available choices. In this paper, we analyze the application of MCDA to service selection in CC. We identify and synthesize several MCDA techniques and provide a comprehensive analysis of this technology for general readers. In addition, we present a taxonomy derived from a survey of the current literature. Finally, we highlight several state-of-the-art practical aspects of MCDA implementation in cloud computing service selection. The contributions of this study are four-fold: (a) focusing on the state-of-the-art MCDA techniques, (b) highlighting the comparative analysis and suitability of several MCDA methods, (c) presenting a taxonomy through extensive literature review, and (d) analyzing and summarizing the cloud computing service selections in different scenarios. PMID:24696645

  2. Cloud service selection using multicriteria decision analysis.

    PubMed

    Whaiduzzaman, Md; Gani, Abdullah; Anuar, Nor Badrul; Shiraz, Muhammad; Haque, Mohammad Nazmul; Haque, Israat Tanzeena

    2014-01-01

    Cloud computing (CC) has recently been receiving tremendous attention from the IT industry and academic researchers. CC leverages its unique services to cloud customers in a pay-as-you-go, anytime, anywhere manner. Cloud services provide dynamically scalable services through the Internet on demand. Therefore, service provisioning plays a key role in CC. The cloud customer must be able to select appropriate services according to his or her needs. Several approaches have been proposed to solve the service selection problem, including multicriteria decision analysis (MCDA). MCDA enables the user to choose from among a number of available choices. In this paper, we analyze the application of MCDA to service selection in CC. We identify and synthesize several MCDA techniques and provide a comprehensive analysis of this technology for general readers. In addition, we present a taxonomy derived from a survey of the current literature. Finally, we highlight several state-of-the-art practical aspects of MCDA implementation in cloud computing service selection. The contributions of this study are four-fold: (a) focusing on the state-of-the-art MCDA techniques, (b) highlighting the comparative analysis and suitability of several MCDA methods, (c) presenting a taxonomy through extensive literature review, and (d) analyzing and summarizing the cloud computing service selections in different scenarios.
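    One of the simplest MCDA techniques surveyed in work like this is simple additive weighting (SAW): normalize each criterion, weight it by user preference, and pick the alternative with the highest aggregate score. The sketch below is an invented example; the providers, scores, and weights are illustrative, not drawn from the paper.

    ```python
    # Hedged sketch of MCDA-based cloud service selection using simple
    # additive weighting (SAW); providers and scores are made up.
    providers = {
        "cloud-a": {"cost": 0.9, "performance": 0.6, "reliability": 0.8},
        "cloud-b": {"cost": 0.5, "performance": 0.9, "reliability": 0.7},
        "cloud-c": {"cost": 0.7, "performance": 0.7, "reliability": 0.9},
    }
    # the cloud customer's preference over criteria (must sum to 1)
    weights = {"cost": 0.5, "performance": 0.2, "reliability": 0.3}

    def saw_score(scores, weights):
        # all criteria are assumed normalized to [0, 1] and benefit-oriented,
        # i.e. a higher "cost" score means a cheaper service
        return sum(weights[c] * scores[c] for c in weights)

    best = max(providers, key=lambda p: saw_score(providers[p], weights))
    assert best == "cloud-a"  # cost-heavy weighting favors the cheapest provider
    ```

    Other MCDA methods in the taxonomy (AHP, TOPSIS, outranking methods) differ mainly in how the weights are elicited and how scores are aggregated, but follow the same select-by-criteria pattern.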

  3. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, Tom; Yang, Xi

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyberinfrastructure resource modeling and computation. The goal of this work was to provide a foundation for intelligent, software-defined services that span the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves, from a topology and service availability perspective, within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The RAINS-developed MRSP includes the following key components: i) the Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) the Resource Computation Engine (RCE), and iii) a Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyberinfrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the RAINS Computation Engine (RCE). The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. 
The RAINS project developed a modular and pluggable driver system which enables a variety of resource controllers to automatically generate, maintain, and distribute MRML-based resource descriptions. Once all of the resource topologies are absorbed by the RCE, a connected graph of the full distributed system topology is constructed, which forms the basis for computation and workflow processing. The RCE includes a Modular Computation Element (MCE) framework which allows tailoring of the computation process to the specific set of resources under control and the services desired. The input and output of an MCE are both model data based on the MRS/MRML ontology and schema. RAINS project accomplishments include: development of a general and extensible multi-resource modeling framework; design of a Resource Computation Engine (RCE) system with the following key capabilities: absorbing a variety of multi-resource model types and building integrated models, a novel architecture which uses model-based communications across the full stack, flexible provision of abstract or intent-based user-facing interfaces, and workflow processing based on model descriptions; release of the RCE as open source software; deployment of the RCE in the University of Maryland/Mid-Atlantic Crossroads ScienceDMZ in prototype mode, with a plan under way to transition to production; deployment at the Argonne National Laboratory DTN Facility in prototype mode; and selection of the RCE by the DOE SENSE (SDN for End-to-end Networked Science at the Exascale) project as the basis for their orchestration service.
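    The core idea above, that modeled Resources, Services, and Relationships form a connected graph over which the computation engine searches, can be sketched in a few lines. The element names and the path search below are an illustration of the approach, not actual MRML vocabulary or RCE code.

    ```python
    # Illustrative sketch of the multi-resource graph idea: each edge is a
    # "Relationship" between two modeled elements, and a computation engine
    # searches the integrated topology. Names are hypothetical, not MRML terms.
    from collections import deque

    model = {
        "compute:cluster-a": ["net:switch-1"],
        "net:switch-1": ["net:switch-2", "compute:cluster-a"],
        "net:switch-2": ["storage:pool-x", "net:switch-1"],
        "storage:pool-x": ["net:switch-2"],
    }

    def find_path(graph, src, dst):
        """BFS over the merged topology, as an RCE-style computation might do."""
        seen, queue = {src}, deque([[src]])
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in graph.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    path = find_path(model, "compute:cluster-a", "storage:pool-x")
    assert path[0] == "compute:cluster-a" and path[-1] == "storage:pool-x"
    ```

    In the real system each driver would contribute its own sub-model, and the computation elements would also evaluate service constraints (bandwidth, availability) rather than plain connectivity.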

  4. An Execution Service for Grid Computing

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Hu, Chaumin

    2004-01-01

    This paper describes the design and implementation of the IPG Execution Service that reliably executes complex jobs on a computational grid. Our Execution Service is part of the IPG service architecture whose goal is to support location-independent computing. In such an environment, once a user ports an application to one or more hardware/software platforms, the user can describe this environment to the grid; the grid can locate instances of this platform, configure the platform as required for the application, and then execute the application. Our Execution Service runs jobs that set up such environments for applications and executes them. These jobs consist of a set of tasks for executing applications and managing data. The tasks have user-defined starting conditions that allow users to specify complex dependencies, including tasks to execute when tasks fail, a frequent occurrence in a large distributed system, or are cancelled. The execution task provided by our service also configures the application environment exactly as specified by the user and captures the exit code of the application, features that many grid execution services do not support due to difficulties interfacing to local scheduling systems.
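    The notion of user-defined starting conditions, including a task that runs only when another task fails, can be sketched as a small dependency-driven runner. This is an invented illustration of the concept, not the IPG Execution Service API, and the job structure shown is hypothetical.

    ```python
    # Sketch of tasks with user-defined starting conditions; a cleanup task
    # runs only if the task it depends on fails. Not the IPG API.
    def run_job(tasks):
        """tasks: name -> (condition, action); condition is None (run always)
        or ('success'|'failure', dep_name). Assumes dependency order."""
        results = {}
        for name, (condition, action) in tasks.items():
            if condition is not None:
                wanted, dep = condition
                if results.get(dep) != wanted:
                    results[name] = "skipped"
                    continue
            try:
                action()
                results[name] = "success"
            except Exception:
                results[name] = "failure"
        return results

    log = []
    job = {
        "stage-data": (None, lambda: log.append("staged")),
        # the application task fails, a frequent occurrence in a large system
        "run-app":    (("success", "stage-data"),
                       lambda: (_ for _ in ()).throw(RuntimeError("app error"))),
        # failure handler: runs precisely because run-app failed
        "cleanup":    (("failure", "run-app"),
                       lambda: log.append("cleaned up after failure")),
    }
    results = run_job(job)
    assert results == {"stage-data": "success", "run-app": "failure",
                       "cleanup": "success"}
    ```

    A real execution service would additionally capture exit codes and configure the environment per task, as the abstract describes; this sketch shows only the conditional-start mechanism.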

  5. E-Governance and Service Oriented Computing Architecture Model

    NASA Astrophysics Data System (ADS)

    Tejasvee, Sanjay; Sarangdevot, S. S.

    2010-11-01

    E-Governance is the effective application of information and communication technology (ICT) to government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves various processes such as capturing, preserving, manipulating, and delivering information. E-Governance is meant to transform governance for the better from the citizens' point of view: to make it transparent, reliable, participatory, and accountable. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government with innovation, identifies the optimal way to deliver services to citizens, and supports implementation in a transparent and accountable manner. This paper also focuses on the E-government Service Manager as an essential factor of the service-oriented computing model, providing a dynamically extensible structural design in which every area or branch can introduce innovative services. The heart of this paper is a conceptual model that enables e-government communication among business, citizens, government, and autonomous bodies.

  6. Cloud based emergency health care information service in India.

    PubMed

    Karthikeyan, N; Sukanesh, R

    2012-12-01

    A hospital is a health care organization providing patient treatment by expert physicians, surgeons, and equipment. A report from a health care accreditation group says that miscommunication between patients and health care providers is the reason for the gap in providing emergency medical care to people in need. In developing countries, illiteracy is a major root cause of deaths resulting from undiagnosed diseases, constituting a serious public health problem. Mentally affected, differently abled, and unconscious patients cannot communicate their medical history to medical practitioners. Also, medical practitioners cannot view or edit DICOM images instantly. Our aim is to provide a palm vein pattern recognition based medical record retrieval system, using cloud computing, for the above mentioned people. Distributed computing technology is emerging in new forms such as Grid computing and Cloud computing, which promise to deliver Information Technology (IT) as a service. In this paper, we describe how these new forms of distributed computing can help modern health care industries. Cloud computing is extending its benefits to industrial sectors, especially in medical scenarios. In cloud computing, IT-related capabilities and resources are provided as services, via distributed computing, on demand. This paper is concerned with delivering software as a service (SaaS) by means of cloud computing, with the aim of bringing the emergency health care sector under one umbrella with physically secured patient records. In emergency healthcare treatment, the crucial information needed for decisions about patients is their previous health records; thus, ubiquitous access to appropriate records is essential. Palm vein pattern recognition promises secured patient record access. 
Our paper likewise presents an efficient means to view, edit, or transfer DICOM images instantly, which was a challenging task for medical practitioners in past years. We have developed two services for health care: 1. a cloud-based palm vein recognition system, and 2. distributed medical image processing tools for medical practitioners.

  7. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    NASA Technical Reports Server (NTRS)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
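    The two-part framework described above, a stochastic demand model feeding a resource-constrained service pool, is the classic shape of a discrete event simulation. The sketch below illustrates that structure with an event heap; the arrival and service distributions and the parameters are illustrative assumptions, not the authors' model.

    ```python
    # Minimal discrete event simulation in the spirit described above:
    # requests arrive with random inter-arrival times and contend for a
    # fixed pool of servers. Distributions and parameters are illustrative.
    import heapq
    import random

    def simulate(n_servers, n_requests, seed=1):
        rng = random.Random(seed)
        events = []  # min-heap of (time, kind, request_id)
        t = 0.0
        for i in range(n_requests):      # demand model: exponential inter-arrivals
            t += rng.expovariate(1.0)
            heapq.heappush(events, (t, "arrive", i))
        busy, queue, completed, delayed = 0, [], 0, 0
        while events:
            now, kind, i = heapq.heappop(events)
            if kind == "arrive":
                if busy < n_servers:     # resource constraint: server pool
                    busy += 1
                    heapq.heappush(events, (now + rng.expovariate(0.5), "depart", i))
                else:
                    delayed += 1
                    queue.append(i)
            else:                        # departure frees a server
                completed += 1
                if queue:                # admit a waiting request, if any
                    j = queue.pop(0)
                    heapq.heappush(events, (now + rng.expovariate(0.5), "depart", j))
                else:
                    busy -= 1
        return completed, delayed

    done, waited = simulate(n_servers=4, n_requests=200)
    assert done == 200  # every request eventually completes
    ```

    A study like the one described would run such a model across traditional and cloud architectures, varying the demand distributions per request type and the provisioning policy, and compare the resulting delay statistics.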

  8. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  9. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards—the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkits are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. 
This service makes it easy to share and interoperate geospatial resources by using Grid technology and extends Grid technology into the geoscience communities.
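    The catalogue pattern described above, registering resources with standards-style metadata and selecting the optimal replica among those the Grid manages, can be reduced to a toy sketch. The metadata fields and the bandwidth-based scoring below are simplified illustrations, not the ISO 19115/19119 schemas or the GMU/CSISS implementation.

    ```python
    # Toy sketch of a geospatial catalogue service: register resources with
    # keyword metadata and replica locations, then discover and rank replicas.
    # Fields and scoring are simplified illustrations only.
    catalogue = []

    def register(identifier, keywords, replicas):
        """replicas: list of (site, bandwidth_gbps) pairs managed by the Grid."""
        catalogue.append({"id": identifier,
                          "keywords": set(keywords),
                          "replicas": list(replicas)})

    def discover(keyword):
        """Return matching records with the highest-bandwidth replica first."""
        hits = [r for r in catalogue if keyword in r["keywords"]]
        for r in hits:
            r["replicas"].sort(key=lambda rep: -rep[1])  # crude replica selection
        return hits

    register("landsat-scene-42", ["landsat", "imagery"],
             [("site-a", 1.0), ("site-b", 10.0)])
    hits = discover("landsat")
    assert hits[0]["replicas"][0][0] == "site-b"  # the better replica wins
    ```

    A production catalogue would instead evaluate Grid information-service metrics (load, locality, availability) when ranking replicas, and expose the registry through standard Web or Grid service interfaces as the record describes.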

  10. Naval Open Architecture Machinery Control Systems for Next Generation Integrated Power Systems

    DTIC Science & Technology

    2012-05-01

    PORTABLE) OS / RTOS ADAPTATION MIDDLEWARE (FOR OS PORTABILITY) MACHINERY CONTROLLER FRAMEWORK MACHINERY CONTROL SYSTEM SERVICES POWER CONTROL SYSTEM...SERVICES SHIP SYSTEM SERVICES TTY 0 TTY N … OPERATING SYSTEM ( OS / RTOS ) COMPUTER HARDWARE UDP IP TCP RAW DEV 0 DEV N … POWER MANAGEMENT CONTROLLER...operating systems (DOS, Windows, Linux, OS /2, QNX, SCO Unix ...) COMPUTERS: ISA compatible motherboards, workstations and portables (Compaq, Dell

  11. A Web-based home welfare and care services support system using a pen type image sensor.

    PubMed

    Ogawa, Hidekuni; Yonezawa, Yoshiharu; Maki, Hiromichi; Sato, Haruhiko; Hahn, Allen W; Caldwell, W Morton

    2003-01-01

    A long-term care insurance law for elderly persons came into force in Japan two years ago. Home Helpers, who are employed by hospitals, care companies, or the welfare office, provide home welfare and care services for the elderly, such as cooking, bathing, washing, cleaning, and shopping. We developed a web-based home welfare and care services support system using wireless Internet mobile phones and Internet client computers, which employs a pen type image sensor. The pen type image sensor is used by the elderly as the entry device for their care requests. The client computer sends the requests to the server computer in the Home Helper central office, and the server computer automatically transfers them to the Home Helper's mobile phone. This newly developed home welfare and care services support system is easily operated by elderly persons and enables Home Helpers to save significant time and travel.

  12. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Klimentov, A.; Oleynik, D.; Petrosyan, A.; Atlas Collaboration

    2014-06-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirement of petabyte-scale data operations. In this paper we describe the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services, and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  13. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1996-01-01

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.
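    The decomposition in this patent record, three service functionalities each split along the Client-Server-Service model, yields the nine logical components the abstract counts. The sketch below only enumerates that structure and one plausible placement; the component names come from the abstract, while the deployment shown is an illustrative assumption.

    ```python
    # Sketch of the Client-Server-Service (CSS) decomposition: three service
    # functionalities x three CSS roles = nine logical components, deployed
    # across a local and a remote host. Placement here is illustrative.
    functionalities = ["human interface", "starter", "desired utility"]
    roles = ["client", "server", "service"]

    components = [(f, r) for f in functionalities for r in roles]
    assert len(components) == 9  # the nine logical components

    # one plausible deployment easing upgrades: client roles stay local,
    # giving the user the illusion that the remote utility resides locally
    placement = {(f, r): ("local host" if r == "client" else "remote host")
                 for f, r in components}
    local = [c for c, host in placement.items() if host == "local host"]
    assert len(local) == 3
    ```

    The patent's actual deployment merges two of the logical components into one Remote Object Client, so the physical layout differs from this naive placement; the point of the sketch is only the 3 x 3 decomposition.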

  14. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1997-12-09

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  15. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1999-01-01

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  16. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, A.M.

    1996-08-06

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  17. Remote information service access system based on a client-server-service model

    DOEpatents

    Konrad, Allan M.

    1997-01-01

    A local host computing system, a remote host computing system as connected by a network, and service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality, and a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component, and that Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides an illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.

  18. Lost in Cloud

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Shetye, Sandeep D.; Chilukuri, Sri; Sturken, Ian

    2012-01-01

    Cloud computing can reduce cost significantly because businesses can share computing resources. In recent years Small and Medium Businesses (SMB) have used Cloud effectively for cost saving and for sharing IT expenses. With the success of SMBs, many perceive that larger enterprises ought to move into the Cloud environment as well. Government agencies' stove-piped environments are being considered as candidates for potential use of Cloud, either as an enterprise entity or as pockets of small communities. Cloud Computing is the delivery of computing as a service rather than as a product, whereby shared resources, software, and information are provided to computers and other devices as a utility over a network. Underneath the offered services, there exists a modern infrastructure whose cost is often spread across its services or its investors. As NASA is considered an enterprise-class organization, like other enterprises, a shift has been occurring in perceiving its IT services as candidates for Cloud services. This paper discusses market trends in cloud computing from an enterprise angle and then addresses the topic of Cloud Computing for NASA in two possible forms. First, in the form of a public Cloud to support it as an enterprise, as well as to share it with the commercial sector and the public at large. Second, as a private Cloud wherein the infrastructure is operated solely for NASA, whether managed internally or by a third party and hosted internally or externally. The paper addresses the strengths and weaknesses of both paradigms of public and private Clouds, in both internally and externally operated settings. The content of the paper is from a NASA perspective but is applicable to any large enterprise with thousands of employees and contractors.

  19. 29 CFR 4.130 - Types of covered service contracts illustrated.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... services. (8) Chemical testing and analysis. (9) Clothing alteration and repair. (10) Computer services... maintenance and operation and engineering support services. (16) Exploratory drilling (other than part of...

  20. 29 CFR 4.130 - Types of covered service contracts illustrated.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... services. (8) Chemical testing and analysis. (9) Clothing alteration and repair. (10) Computer services... maintenance and operation and engineering support services. (16) Exploratory drilling (other than part of...

  1. 29 CFR 4.130 - Types of covered service contracts illustrated.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... services. (8) Chemical testing and analysis. (9) Clothing alteration and repair. (10) Computer services... construction). (17) Film processing. (18) Fire fighting and protection. (19) Fueling services. (20) Furniture...) Guard and watchman security service. (24) Inventory services. (25) Keypunching and keyverifying...

  2. 29 CFR 4.130 - Types of covered service contracts illustrated.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... services. (8) Chemical testing and analysis. (9) Clothing alteration and repair. (10) Computer services... construction). (17) Film processing. (18) Fire fighting and protection. (19) Fueling services. (20) Furniture...) Guard and watchman security service. (24) Inventory services. (25) Keypunching and keyverifying...

  3. 29 CFR 4.130 - Types of covered service contracts illustrated.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... services. (8) Chemical testing and analysis. (9) Clothing alteration and repair. (10) Computer services... construction). (17) Film processing. (18) Fire fighting and protection. (19) Fueling services. (20) Furniture...) Guard and watchman security service. (24) Inventory services. (25) Keypunching and keyverifying...

  4. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  5. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  6. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  7. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  8. 47 CFR 76.980 - Charges for customer changes.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....980 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES... charge for customer changes in service tiers effected solely by coded entry on a computer terminal or by... involve more than coded entry on a computer or other similarly simple method shall be based on actual cost...

  9. Education Technology Survey, 1995.

    ERIC Educational Resources Information Center

    Quality Education Data, Inc., Denver, CO.

    Primary research (in-depth telephone interviews) was conducted among elementary and secondary school educators in Spring 1995 to determine usage, attitudes, and barriers to usage for five electronic in-school services: Cable in the Classroom; computers, laserdisc or CD-ROM; Internet; online computer services such as America Online and Prodigy; and…

  10. Medical applications for high-performance computers in SKIF-GRID network.

    PubMed

    Zhuchkov, Alexey; Tverdokhlebov, Nikolay

    2009-01-01

    The paper presents a set of software services for massive mammography image processing using high-performance parallel computers of the SKIF family, which are linked into a service-oriented grid network. Experience with a prototype system implementation in two medical institutions is also described.

  11. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...

  12. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...

  13. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...

  14. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...

  15. 14 CFR 13.85 - Filing, service and computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Filing, service and computation of time. 13.85 Section 13.85 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PROCEDURAL RULES INVESTIGATIVE AND ENFORCEMENT PROCEDURES Orders of Compliance Under the Hazardous Materials...

  16. The Effect of In-Service Training of Computer Science Teachers on Scratch Programming Language Skills Using an Electronic Learning Platform on Programming Skills and the Attitudes towards Teaching Programming

    ERIC Educational Resources Information Center

    Alkaria, Ahmed; Alhassan, Riyadh

    2017-01-01

    This study was conducted to examine the effect of in-service training of computer science teachers in Scratch language using an electronic learning platform on acquiring programming skills and attitudes towards teaching programming. The sample of this study consisted of 40 middle school computer science teachers. They were assigned into two…

  17. Digital Representation for Communication of Product Definition Data. Revision

    DTIC Science & Technology

    1990-04-30

    Treuhandgesellschaft Electrical Thomas C. Estervog Boeing Computer Services John C. Faulkner SDRC Ayron L. Fears IBM Corp. MEMBERS OF THE IGES/PDES...Force Eugene F. Gurga Computervision C. Hayden Hamilton III PDA Engineering William Hammon Tandem Computer Yosef Haridim Boeing Electronics Co. C ...British Aerospace Raphael McBain General Dynamics Corp. Dennis McBurney Boeing Military Airplane Co Patrick McFadden Boeing Computer Services C. Kevin

  18. Extending Simple Network Management Protocol (SNMP) Beyond Network Management: A MIB Architecture for Network-Centric Services

    DTIC Science & Technology

    2007-03-01

    potential of moving closer to the goal of a fully service-oriented GIG by allowing even computing - and bandwidth-constrained elements to participate...the functionality provided by core network assets with relatively unlimited bandwidth and computing resources. Finally, the nature of information is...the Department of Defense is a requirement for ubiquitous computer connectivity. An espoused vehicle for delivering that ubiquity is the Global

  19. 76 FR 41234 - Advanced Scientific Computing Advisory Committee Charter Renewal

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-13

    ... Secretariat, General Services Administration, notice is hereby given that the Advanced Scientific Computing... advice and recommendations concerning the Advanced Scientific Computing program in response only to... Advanced Scientific Computing Research program and recommendations based thereon; --Advice on the computing...

  20. 20 CFR 726.308 - Service and computation of time.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Service and computation of time. 726.308 Section 726.308 Employees' Benefits EMPLOYMENT STANDARDS ADMINISTRATION, DEPARTMENT OF LABOR FEDERAL COAL MINE HEALTH AND SAFETY ACT OF 1969, AS AMENDED BLACK LUNG BENEFITS; REQUIREMENTS FOR COAL MINE OPERATOR...

  1. 31 CFR 560.539 - Official activities of certain international organizations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... items such as many laptop computers, personal computers, cell phones, personal digital assistants and... limited to: (1) The provision of services involving Iran necessary for carrying out the official business; (2) Purchasing Iranian-origin goods and services for use in carrying out the official business; (3...

  2. 31 CFR 560.539 - Official activities of certain international organizations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... items such as many laptop computers, personal computers, cell phones, personal digital assistants and... limited to: (1) The provision of services involving Iran necessary for carrying out the official business; (2) Purchasing Iranian-origin goods and services for use in carrying out the official business; (3...

  3. Computer Service Technician "COMPS." Curriculum Grant 1985.

    ERIC Educational Resources Information Center

    Schoolcraft Coll., Livonia, MI.

    This document is a curriculum guide for a program in computer service technology developed at Schoolcraft College, Livonia, Michigan. The program is designed to give students a strong background in the fundamentals of electricity, electronic devices, and basic circuits (digital and linear). The curriculum includes laboratory demonstrations of the…

  4. 12 CFR 403.9 - Fees.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SECURITY INFORMATION § 403.9 Fees. The following specific fees shall be applicable with respect to services... records, per hour or fraction thereof: (i) Professional $11.00 (ii) Clerical 6.00 (b) Computer service charges per second for actual use of computer central processing unit .25 (c) Copies made by photostat or...

  5. 12 CFR 403.9 - Fees.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SECURITY INFORMATION § 403.9 Fees. The following specific fees shall be applicable with respect to services... records, per hour or fraction thereof: (i) Professional $11.00 (ii) Clerical 6.00 (b) Computer service charges per second for actual use of computer central processing unit .25 (c) Copies made by photostat or...

  6. 12 CFR 403.9 - Fees.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... SECURITY INFORMATION § 403.9 Fees. The following specific fees shall be applicable with respect to services... records, per hour or fraction thereof: (i) Professional $11.00 (ii) Clerical 6.00 (b) Computer service charges per second for actual use of computer central processing unit .25 (c) Copies made by photostat or...

  7. 12 CFR 403.9 - Fees.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SECURITY INFORMATION § 403.9 Fees. The following specific fees shall be applicable with respect to services... records, per hour or fraction thereof: (i) Professional $11.00 (ii) Clerical 6.00 (b) Computer service charges per second for actual use of computer central processing unit .25 (c) Copies made by photostat or...

  8. 12 CFR 403.9 - Fees.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SECURITY INFORMATION § 403.9 Fees. The following specific fees shall be applicable with respect to services... records, per hour or fraction thereof: (i) Professional $11.00 (ii) Clerical 6.00 (b) Computer service charges per second for actual use of computer central processing unit .25 (c) Copies made by photostat or...

  9. Deploying the Win TR-20 computational engine as a web service

    USDA-ARS?s Scientific Manuscript database

    Despite its simplicity and limitations, the runoff curve number method remains a widely-used hydrologic modeling tool, and its use through the USDA Natural Resources Conservation Service (NRCS) computer application WinTR-20 is expected to continue for the foreseeable future. To facilitate timely up...

  10. 24 CFR 208.112 - Cost.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... increases. (b) At the owner's option, the cost of the computer software may include service contracts to... requirements. (c) The source of funds for the purchase of hardware or software, or contracting for services for... formatted data, including either the purchase and maintenance of computer hardware or software, or both, the...

  11. Computers in Knowledge-Based Fields.

    ERIC Educational Resources Information Center

    Myers, Charles A.

    Last in a series of research projects on the implications of technological change and automation, this study is concerned with the use of computers in formal education and educational administration; in library systems and subsystems; in legal, legislative, and related services; in medical and hospital services; and in national and centralized…

  12. The ALL-OUT Library; A Design for Computer-Powered, Multidimensional Services.

    ERIC Educational Resources Information Center

    Sleeth, Jim; LaRue, James

    1983-01-01

    Preliminary description of design of electronic library and home information delivery system highlights potentials of personal computer interface program (applying for service, assuring that users are valid, checking for measures, searching, locating titles) and incorporation of concepts used in other information systems (security checks,…

  13. Changes in Pre-Service Teachers' Algebraic Misconceptions by Using Computer-Assisted Instruction

    ERIC Educational Resources Information Center

    Lin, ByCheng-Yao; Ko, Yi-Yin; Kuo, Yu-Chun

    2014-01-01

    In order to carry out current reforms regarding algebra and technology in elementary school mathematics successfully, pre-service elementary mathematics teachers must be equipped with adequate understandings of algebraic concepts and self-confidence in using computers for their future teaching. This paper examines the differences in preservice…

  14. An On-line Microcomputer Course for Pre-service Teachers.

    ERIC Educational Resources Information Center

    Abkemeier, Mary K.

    This paper describes "Microcomputer Applications for Educators," a course at Fontbonne College for pre-service teachers which introduces the educational applications of the computer and related technologies. The course introduces students to the Macintosh computer, its operating system, Claris Works 4.0, and various other educational and…

  15. An Introduction to Archival Automation: A RAMP Study with Guidelines.

    ERIC Educational Resources Information Center

    Cook, Michael

    Developed under a contract with the International Council on Archives, these guidelines are designed to emphasize the role of automation techniques in archives and records services, provide an indication of existing computer systems used in different archives services and of specific computer applications at various stages of archives…

  16. Near real-time traffic routing

    NASA Technical Reports Server (NTRS)

    Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)

    2012-01-01

    A near real-time physical transportation network routing system comprising: a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data includes static network characteristics, an origin-destination data table, dynamic traffic information data and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.
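
    The primary-zone/buffer-zone partitioning described in the abstract can be sketched as follows. This is a minimal illustration of the idea (not the patented system): each section owns a primary slice of the network and additionally carries a buffer of links borrowed from adjacent sections, so simulation nodes can exchange boundary traffic. The data layout and names are assumptions.

```python
# Minimal sketch of splitting a road network into sections, each with a
# primary zone plus a buffer zone overlapping its neighbours. Not the
# patented algorithm; link IDs and structure are invented.

def partition(links, num_sections, buffer_size):
    """Split an ordered list of road links into sections. Each section owns
    a primary slice and buffers links from the adjacent sections."""
    size = len(links) // num_sections
    sections = []
    for i in range(num_sections):
        start = i * size
        end = (i + 1) * size if i < num_sections - 1 else len(links)
        primary = links[start:end]
        buffer = links[max(0, start - buffer_size):start] + links[end:end + buffer_size]
        sections.append({"primary": primary, "buffer": buffer})
    return sections

sections = partition(list(range(12)), num_sections=3, buffer_size=1)
# The middle section owns links 4-7 and buffers links 3 and 8.
print(sections[1])
```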

  17. Above the cloud computing: applying cloud computing principles to create an orbital services model

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Mohammad, Atif; Berk, Josh; Nervold, Anders K.

    2013-05-01

    Large satellites and exquisite planetary missions are generally self-contained. They have, onboard, all of the computational, communications and other capabilities required to perform their designated functions. Because of this, the satellite or spacecraft carries hardware that may be utilized only a fraction of the time; however, the full cost of development and launch is still borne by the program. Small satellites do not have this luxury. Due to mass and volume constraints, they cannot afford to carry numerous pieces of barely utilized equipment or large antennas. This paper proposes a cloud-computing model for exposing satellite services in an orbital environment. Under this approach, each satellite with available capabilities broadcasts a service description for each service that it can provide (e.g., general computing capacity, DSP capabilities, specialized sensing capabilities, transmission capabilities, etc.) and its orbital elements. Consumer spacecraft retain a cache of service providers and select one utilizing decision making heuristics (e.g., suitability of performance, opportunity to transmit instructions and receive results - based on the orbits of the two craft). The two craft negotiate service provisioning (e.g., when the service can be available and for how long) based on the operating rules prioritizing use of (and allowing access to) the service on the service provider craft, based on the credentials of the consumer. Service description, negotiation and sample service performance protocols are presented. The required components of each consumer or provider spacecraft are reviewed.
These include fully autonomous control capabilities (for provider craft), a lightweight orbit determination routine (to determine when consumer and provider craft can see each other and, possibly, pointing requirements for craft with directional antennas) and an authentication and resource utilization priority-based access decision making subsystem (for provider craft). Two prospective uses for the proposed system are presented: Earth-orbiting applications and planetary science applications. A mission scenario is presented for both uses to illustrate system functionality and operation. The performance of the proposed system is compared to traditional self-contained spacecraft performance, both in terms of task performance (e.g., how well / quickly / etc. was a given task performed) and task performance as a function of cost. The integration of the proposed service provider model is compared to other control architectures for satellites including traditional scripted control, top-down multi-tier autonomy and bottom-up multi-tier autonomy.
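
    The discovery step in the abstract (providers broadcast service descriptions, a consumer caches them and selects one heuristically) can be sketched in a few lines. The field names, scoring rule, and satellite names below are assumptions for illustration, not the paper's actual protocol.

```python
# Toy sketch of orbital service discovery: provider craft broadcast service
# ads; a consumer caches them and picks one with a simple heuristic
# (highest capacity among providers whose contact window is long enough).
# Fields and weights are invented, not the paper's protocol.

from dataclasses import dataclass

@dataclass
class ServiceAd:
    provider: str
    service: str            # e.g. "dsp", "compute", "downlink"
    capacity: float         # arbitrary performance units
    contact_minutes: float  # predicted visibility window from orbital elements

class ConsumerCache:
    def __init__(self):
        self.ads = []

    def receive(self, ad):
        """Cache a broadcast service description."""
        self.ads.append(ad)

    def select(self, service, min_contact):
        """Pick the highest-capacity provider whose window suffices."""
        candidates = [a for a in self.ads
                      if a.service == service and a.contact_minutes >= min_contact]
        return max(candidates, key=lambda a: a.capacity, default=None)

cache = ConsumerCache()
cache.receive(ServiceAd("sat-A", "dsp", capacity=5.0, contact_minutes=3.0))
cache.receive(ServiceAd("sat-B", "dsp", capacity=8.0, contact_minutes=1.0))
best = cache.select("dsp", min_contact=2.0)
print(best.provider)  # sat-A: sat-B has more capacity but too short a window
```

    In the paper's full design the selection would also weigh credentials and negotiated priorities; the sketch keeps only the cache-and-score skeleton.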

  18. Cloudbus Toolkit for Market-Oriented Cloud Computing

    NASA Astrophysics Data System (ADS)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.
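
    The SLA-oriented, market-based allocation that Cloudbus targets can be illustrated schematically: among candidate resources, choose the cheapest one that can still finish a job within its SLA deadline. Prices, speeds, and names below are invented for the example.

```python
# Schematic example of SLA-oriented resource allocation: pick the cheapest
# offer that still meets the job's deadline. All figures are invented.

def allocate(job_length, deadline, offers):
    """offers: list of (name, speed, price_per_unit_time).
    Return (name, total_cost) for the cheapest feasible offer, or None."""
    feasible = [(name, price * (job_length / speed))
                for name, speed, price in offers
                if job_length / speed <= deadline]
    return min(feasible, key=lambda x: x[1], default=None)

offers = [("slow-cheap", 1.0, 1.0), ("fast-pricey", 4.0, 3.0)]
choice = allocate(job_length=8.0, deadline=3.0, offers=offers)
# slow-cheap would take 8 time units > deadline 3, so fast-pricey
# (2 time units, cost 6.0) is selected despite its higher unit price.
print(choice)
```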

  19. Computer Literacy Project. A General Orientation in Basic Computer Concepts and Applications.

    ERIC Educational Resources Information Center

    Murray, David R.

    This paper proposes a two-part, basic computer literacy program for university faculty, staff, and students with no prior exposure to computers. The program described would introduce basic computer concepts and computing center service programs and resources; provide fundamental preparation for other computer courses; and orient faculty towards…

  20. Development of the Telehealth Usability Questionnaire (TUQ).

    PubMed

    Parmanto, Bambang; Lewis, Allen Nelson; Graham, Kristin M; Bertolet, Marnie H

    2016-01-01

    Current telehealth usability questionnaires are designed primarily for older technologies, where telehealth interaction is conducted over dedicated videoconferencing applications. However, telehealth services are increasingly conducted over computer-based systems that rely on commercial software and a user supplied computer interface. Therefore, a usability questionnaire that addresses the changes in telehealth service delivery and technology is needed. The Telehealth Usability Questionnaire (TUQ) was developed to evaluate the usability of telehealth implementation and services. This paper addresses: (1) the need for a new measure of telehealth usability, (2) the development of the TUQ, (3) intended uses for the TUQ, and (4) the reliability of the TUQ. Analyses indicate that the TUQ is a solid, robust, and versatile measure that can be used to measure the quality of the computer-based user interface and the quality of the telehealth interaction and services.

  1. Computer control of a robotic satellite servicer

    NASA Technical Reports Server (NTRS)

    Fernandez, K. R.

    1980-01-01

    The advantages that will accrue from the in-orbit servicing of satellites are listed. It is noted that a concept in satellite servicing which holds promise as a compromise between the high flexibility and adaptability of manned vehicles and the lower cost of an unmanned vehicle involves an unmanned servicer carrying a remotely supervised robotic manipulator arm. Because of deficiencies in sensor technology, robot servicing would require that satellites be designed according to a modular concept. A description is given of the servicer simulation hardware, the computer and interface hardware, and the software. It is noted that several areas require further development; these include automated docking, modularization of satellite design, reliable connector and latching mechanisms, development of manipulators for space environments, and development of automated diagnostic techniques.

  2. An Implemented Strategy for Campus Connectivity and Cooperative Computing.

    ERIC Educational Resources Information Center

    Halaris, Antony S.; Sloan, Lynda W.

    1989-01-01

    ConnectPac, a software package developed at Iona College to allow a computer user to access all services from a single personal computer, is described. ConnectPac uses mainframe computing to support a campus computing network, integrating personal and centralized computing into a menu-driven user environment. (Author/MLW)

  3. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  4. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  5. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  6. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  7. 48 CFR 552.216-72 - Placement of Orders.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Acquisition Service (FAS) will place all orders by EDI using computer-to-computer EDI. If computer-to-computer... EDI. (d) When computer-to-computer EDI procedures will be used to place orders, the Contractor shall... electronic orders are placed, the transaction sets used, security procedures, and guidelines for...

  8. The preparedness of hospital Health Information Services for system failures due to internal disasters.

    PubMed

    Lee, Cheens; Robinson, Kerin M; Wendt, Kate; Williamson, Dianne

    The unimpeded functioning of hospital Health Information Services (HIS) is essential for patient care, clinical governance, organisational performance measurement, funding and research. In an investigation of hospital Health Information Services' preparedness for internal disasters, all hospitals in the state of Victoria with the following characteristics were surveyed: they have a Health Information Service/Department; there is a Manager of the Health Information Service/Department; and their inpatient capacity is greater than 80 beds. Fifty percent of the respondents have experienced an internal disaster within the past decade, the majority affecting the Health Information Service. The most commonly occurring internal disasters were computer system failure and floods. Two-thirds of the hospitals have internal disaster plans; the most frequently occurring scenarios provided for are computer system failure, power failure and fire. More large hospitals have established back-up systems than medium- and small-size hospitals. Fifty-three percent of hospitals have a recovery plan for internal disasters. Hospitals typically self-rate as having a 'medium' level of internal disaster preparedness. Overall, large hospitals are better prepared for internal disasters than medium and small hospitals, and preparation for disruption of computer systems and medical record services is relatively high on their agendas.

  9. A Fine-Grained and Privacy-Preserving Query Scheme for Fog Computing-Enhanced Location-Based Service

    PubMed Central

    Yin, Fan; Tang, Xiaohu

    2017-01-01

    Location-based services (LBS), one of the most popular location-awareness applications, have been further developed to achieve low latency with the assistance of fog computing. However, privacy issues remain a research challenge in the context of fog computing. Therefore, in this paper, we present a fine-grained and privacy-preserving query scheme for fog computing-enhanced location-based services, hereafter referred to as FGPQ. In particular, mobile users can obtain the fine-grained searching result satisfying not only the given spatial range but also the searching content. Detailed privacy analysis shows that our proposed scheme indeed achieves the privacy preservation for the LBS provider and mobile users. In addition, extensive performance analyses and experiments demonstrate that the FGPQ scheme can significantly reduce computational and communication overheads and ensure low latency, which outperforms existing state-of-the-art schemes. Hence, our proposed scheme is more suitable for real-time LBS searching. PMID:28696395

  10. A Fine-Grained and Privacy-Preserving Query Scheme for Fog Computing-Enhanced Location-Based Service.

    PubMed

    Yang, Xue; Yin, Fan; Tang, Xiaohu

    2017-07-11

    Location-based services (LBS), one of the most popular location-awareness applications, have been further developed to achieve low latency with the assistance of fog computing. However, privacy issues remain a research challenge in the context of fog computing. Therefore, in this paper, we present a fine-grained and privacy-preserving query scheme for fog computing-enhanced location-based services, hereafter referred to as FGPQ. In particular, mobile users can obtain the fine-grained searching result satisfying not only the given spatial range but also the searching content. Detailed privacy analysis shows that our proposed scheme indeed achieves the privacy preservation for the LBS provider and mobile users. In addition, extensive performance analyses and experiments demonstrate that the FGPQ scheme can significantly reduce computational and communication overheads and ensure low latency, which outperforms existing state-of-the-art schemes. Hence, our proposed scheme is more suitable for real-time LBS searching.

  11. Scalable service architecture for providing strong service guarantees

    NASA Astrophysics Data System (ADS)

    Christin, Nicolas; Liebeherr, Joerg

    2002-07-01

    For the past decade, much Internet research has been devoted to providing different levels of service to applications. Initial proposals for service differentiation provided strong service guarantees, with strict bounds on delays, loss rates, and throughput, but required high overhead in terms of computational complexity and memory, both of which raise scalability concerns. Recently, the interest has shifted to service architectures with low overhead. However, these newer service architectures only provide weak service guarantees, which do not always address the needs of applications. In this paper, we describe a service architecture that supports strong service guarantees, can be implemented with low computational complexity, and requires maintaining only a small amount of state information. A key mechanism of the proposed service architecture is that it addresses scheduling and buffer management in a single algorithm. The presented architecture offers no solution for controlling the amount of traffic that enters the network. Instead, we plan on exploiting feedback mechanisms of TCP congestion control algorithms for the purpose of regulating the traffic entering the network.
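
    The key mechanism named in the abstract, handling scheduling and buffer management in a single algorithm, can be illustrated with a generic deadline-based sketch. This is not the authors' algorithm: here one routine both orders transmissions (earliest deadline first) and, when the buffer overflows, drops the least urgent packet.

```python
# Illustrative sketch of combined scheduling + buffer management: a single
# deadline-ordered queue decides both what to send next (earliest deadline
# first) and what to drop on overflow (furthest deadline). Generic EDF
# sketch, not the paper's algorithm.

import heapq

class DeadlineQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # min-heap of (deadline, packet)

    def enqueue(self, deadline, packet):
        heapq.heappush(self.heap, (deadline, packet))
        if len(self.heap) > self.capacity:
            # Buffer management: drop the least urgent packet.
            self.heap.remove(max(self.heap))
            heapq.heapify(self.heap)

    def dequeue(self):
        # Scheduling: transmit the most urgent packet next.
        return heapq.heappop(self.heap)[1] if self.heap else None

q = DeadlineQueue(capacity=2)
q.enqueue(30, "c")
q.enqueue(10, "a")
q.enqueue(20, "b")   # buffer full: "c" (deadline 30) is dropped
first = q.dequeue()
second = q.dequeue()
print(first, second)  # a b
```

    Using one deadline-ordered structure for both decisions is what keeps the per-packet state and computation small, which is the scalability point the paper makes.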

  12. Unidata's Vision for Transforming Geoscience by Moving Data Services and Software to the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Fisher, W.; Yoksas, T.

    2014-12-01

    Universities are facing many challenges: shrinking budgets, rapidly evolving information technologies, exploding data volumes, multidisciplinary science requirements, and high student expectations. These changes are upending traditional approaches to accessing and using data and software. It is clear that Unidata's products and services must evolve to support new approaches to research and education. After years of hype and ambiguity, cloud computing is maturing in usability in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Cloud services aimed at providing any resource, at any time, from any place, using any device are increasingly being embraced by all types of organizations. Given this trend and the enormous potential of cloud-based services, Unidata is moving to augment its products, services, data delivery mechanisms and applications to align with the cloud-computing paradigm. Specifically, Unidata is working toward establishing a community-based development environment that supports the creation and use of software services to build end-to-end data workflows. The design encourages the creation of services that can be broken into small, independent chunks that provide simple capabilities. Chunks could be used individually to perform a task, or chained into simple or elaborate workflows. The services will also be portable, allowing their use in researchers' own cloud-based computing environments.
In this talk, we present a vision for Unidata's future in cloud-enabled data services and discuss our initial efforts to deploy a subset of Unidata data services and tools in the Amazon EC2 and Microsoft Azure cloud environments, including the transfer of real-time meteorological data into its cloud instances, product generation using those data, and the deployment of TDS, McIDAS ADDE and AWIPS II data servers and the Integrated Data Server visualization tool.

  13. 7 CFR 2.98 - Director, Management Services.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... management services; information technology services related to end user office automation, desktop computers, enterprise networking support, handheld devices and voice telecommunications; with authority to take actions...

  14. 7 CFR 2.98 - Director, Management Services.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... management services; information technology services related to end user office automation, desktop computers, enterprise networking support, handheld devices and voice telecommunications; with authority to take actions...

  15. 7 CFR 2.98 - Director, Management Services.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... management services; information technology services related to end user office automation, desktop computers, enterprise networking support, handheld devices and voice telecommunications; with authority to take actions...

  16. Efficient computation of hashes

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.

    2014-06-01

    The sequential computation of hashes at the core of many distributed storage systems, found for example in grid services, can hinder service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgard engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
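    The tree mode the paper advocates can be illustrated with a minimal Merkle-style hash tree: leaf chunks are hashed independently (the parallelizable step), then sibling digests are combined pairwise up to a single root. This sketch uses Python's standard `hashlib` SHA-3 rather than the paper's Keccak prototype; the chunk size and the duplicate-last-node rule for odd levels are illustrative assumptions, not the paper's parameters.

```python
import hashlib

def merkle_root(data: bytes, chunk_size: int = 1024) -> bytes:
    """Hash fixed-size chunks independently, then combine sibling
    digests pairwise until a single root digest remains."""
    h = lambda b: hashlib.sha3_256(b).digest()
    # Leaf level: one digest per chunk; each leaf depends only on its
    # own chunk, so this is the work a parallel mode can distribute.
    level = [h(data[i:i + chunk_size])
             for i in range(0, len(data), chunk_size)] or [h(b"")]
    # Interior levels: hash concatenated sibling pairs up to the root.
    while len(level) > 1:
        if len(level) % 2:            # odd level: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

    Unlike a sequential Merkle-Damgard pass, where each block's digest depends on the previous one, the leaf level here can be farmed out to separate workers before a short combining phase.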

  17. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    PubMed Central

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

  18. Secure encapsulation and publication of biological services in the cloud computing environment.

    PubMed

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved.

  19. Command and data handling of science signals on Spacelab

    NASA Technical Reports Server (NTRS)

    Mccain, H. G.

    1975-01-01

    The Orbiter Avionics and the Spacelab Command and Data Management System (CDMS) combine to provide a relatively complete command, control, and data handling service to the instrument complement during a Shuttle Sortie Mission. The Spacelab CDMS services the instruments and the Orbiter in turn services the Spacelab. The CDMS computer system includes three computers, two I/O units, a mass memory, and a variable number of remote acquisition units. Attention is given to the CDMS high rate multiplexer, CDMS tape recorders, closed circuit television for the visual monitoring of payload bay and cabin area activities, methods of science data acquisition, questions of transmission and recording, CDMS experiment computer usage, and experiment electronics.

  20. Self-service for software development projects and HPC activities

    NASA Astrophysics Data System (ADS)

    Husejko, M.; Høimyr, N.; Gonzalez, A.; Koloventzos, G.; Asbury, D.; Trzcinska, A.; Agtzidis, I.; Botrel, G.; Otto, J.

    2014-05-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source development such as Sourceforge, GitHub and others. Furthermore, the contribution will discuss recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  1. Engineering and Computing Portal to Solve Environmental Problems

    NASA Astrophysics Data System (ADS)

    Gudov, A. M.; Zavozkin, S. Y.; Sotnikov, I. Y.

    2018-01-01

    This paper describes architecture and services of the Engineering and Computing Portal, which is considered to be a complex solution that provides access to high-performance computing resources, enables to carry out computational experiments, teach parallel technologies and solve computing tasks, including technogenic safety ones.

  2. Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Klems, Markus; Nimis, Jens; Tai, Stefan

    On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has long been an objective in distributed computing research and industry. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability of Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measuring the costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and in comparing these costs to conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real-world scenarios.
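    The kind of comparison such a framework supports can be reduced to a back-of-the-envelope sketch: the total cost of owned infrastructure (capital plus operating expense) against pay-per-use cloud charges for the hours actually consumed. Every figure below is invented for illustration; none comes from the paper.

```python
# Hypothetical figures for a 3-year total-cost comparison; all numbers
# below are assumptions for illustration only.
years = 3
server_capex = 12000.0           # up-front hardware purchase
server_opex_per_year = 2500.0    # power, cooling, administration
cloud_rate_per_hour = 0.45       # on-demand instance price
hours_used_per_year = 2000       # cloud is billed only for hours actually used

owned_tco = server_capex + years * server_opex_per_year
cloud_tco = years * hours_used_per_year * cloud_rate_per_hour

print(f"owned: {owned_tco:.2f}  cloud: {cloud_tco:.2f}")
```

    With low utilization the pay-per-use model wins easily; as utilization approaches continuous operation the balance can tip the other way, which is exactly the sensitivity a valuation framework is meant to expose.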

  3. Storm Warning Service

    NASA Technical Reports Server (NTRS)

    1993-01-01

    A Huntsville meteorologist of Baron Services, Inc. has formed a commercial weather advisory service. Weather information is based on data from Marshall Space Flight Center (MSFC) collected from antennas in Alabama and Tennessee. Bob Baron refines and enhances MSFC's real time display software. Computer data is changed to audio data for radio transmission, received by clients through an antenna and decoded by computer for display. Using his service, clients can monitor the approach of significant storms and schedule operations accordingly. Utilities and emergency management officials are able to plot a storm's path. A recent agreement with two other companies will promote continued development and marketing.

  4. Texas Education Computer Cooperative (TECC) Products and Services.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin.

    Designed to broaden awareness of the Texas Education Computer Cooperative (TECC) and its efforts to develop products and services which can be most useful to schools, this publication provides an outline of the need, purpose, organization, and funding of TECC and descriptions of 19 projects. The project descriptions are organized by fiscal year in…

  5. 77 FR 44684 - General Dynamics Itronix Corporation; A Subsidiary of General Dynamics Corporation, Including...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-30

    ... program management services for rugged laptop computers and rugged mobile devices. The worker group... of program management services for rugged laptop computers and rugged mobile devices, meet the worker... threatened to become totally or partially separated. Section 222(a)(2)(A)(i) has been met because the sales...

  6. Computers and the supply of radiology services: anatomy of a disruptive technology.

    PubMed

    Levy, Frank

    2008-10-01

    Over the next decade, computers will augment the supply of radiology services at a time when reimbursement rules are likely to tighten. Increased supply and slower growing demand will result in a radiology market that is more competitive, with less income growth, than the market of the past 15 years.

  7. Construction of the NASA Thesaurus: Computer Processing Support. Final Report.

    ERIC Educational Resources Information Center

    Hammond, William

    Details are given on the necessary computer processing services required to produce a NASA thesaurus. These services included (1) keypunching the terminology to specifications from approximately 19,000 Term Review Forms furnished by NASA; (2) modifying a set of programs to satisfy NASA specifications, principally to accommodate 42 character terms…

  8. A "Service-Learning Approach" to Teaching Computer Graphics

    ERIC Educational Resources Information Center

    Hutzel, Karen

    2007-01-01

    The author taught a computer graphics course through a service-learning framework to undergraduate and graduate students in the spring of 2003 at Florida State University (FSU). The students in this course participated in learning a software program along with youths from a neighboring, low-income, primarily African-American community. Together,…

  9. Business Models of High Performance Computing Centres in Higher Education in Europe

    ERIC Educational Resources Information Center

    Eurich, Markus; Calleja, Paul; Boutellier, Roman

    2013-01-01

    High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…

  10. MIX. The McGraw-Hill Information Exchange.

    ERIC Educational Resources Information Center

    Laliberte, Stephen M.

    1986-01-01

    "MIX" is an online publishing service and information exchange from the Educational Management Services Division of McGraw-Hill. Through computer conferencing and electronic mail, MIX provides access to a network of people across the country who are seeking ways to put computers to use to improve the quality of education. MIX is an…

  11. 48 CFR 9905.506-60 - Illustrations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., installs a computer service center to begin operations on May 1. The operating expense related to the new... operating expenses of the computer service center for the 8-month part of the cost accounting period may be... 48 Federal Acquisition Regulations System 7 2013-10-01 2012-10-01 true Illustrations. 9905.506-60...

  12. Education and Library Services for Community Information Utilities.

    ERIC Educational Resources Information Center

    Farquhar, John A.

    The concept of "computer utility"--the provision of computing and information service by a utility in the form of a national network to which any person desiring information could gain access--has been gaining interest among the public and among the technical community. This report on planning community information utilities discusses the…

  13. Learning with Personal Computers: Issues, Observations and Perspectives.

    ERIC Educational Resources Information Center

    Rowe, Helga A. H.; And Others

    This book is about learning and teaching with personal computers. It is aimed at teachers, student teachers, those responsible for pre-service and in-service teacher training, school administrators, and parents. The book is arranged in four sections. Part I includes two chapters providing a theoretical framework for learning and teaching with…

  14. 11 CFR 111.35 - If the respondent decides to challenge the alleged violation or proposed civil money penalty...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... staff; (4) Committee computer, software or Internet service provider failures; (5) A committee's failure... software despite the respondent seeking technical assistance from Commission personnel and resources; (2) A... Commission's or respondent's computer systems or Internet service provider; and (3) Severe weather or other...

  15. A Review of Computer Simulations in Teacher Education

    ERIC Educational Resources Information Center

    Bradley, Elizabeth Gates; Kendall, Brittany

    2014-01-01

    Computer simulations can provide guided practice for a variety of situations that pre-service teachers would not frequently experience during their teacher education studies. Pre-service teachers can use simulations to turn the knowledge they have gained in their coursework into real experience. Teacher simulation training has come a long way over…

  16. Auto-Generated Semantic Processing Services

    NASA Technical Reports Server (NTRS)

    Davis, Rodney; Hupf, Greg

    2009-01-01

    Auto-Generated Semantic Processing (AGSP) Services is a suite of software tools for automated generation of other computer programs, denoted cross-platform semantic adapters, that support interoperability of computer-based communication systems that utilize a variety of both new and legacy communication software running in a variety of operating-system/computer-hardware combinations. AGSP has numerous potential uses in military, space-exploration, and other government applications as well as in commercial telecommunications. The cross-platform semantic adapters take advantage of common features of computer-based communication systems to enforce semantics, messaging protocols, and standards of processing of streams of binary data to ensure integrity of data and consistency of meaning among interoperating systems. The auto-generation aspect of AGSP Services reduces development time and effort by emphasizing specification and minimizing implementation: In effect, the design, building, and debugging of software for effecting conversions among complex communication protocols, custom device mappings, and unique data-manipulation algorithms is replaced with metadata specifications that map to an abstract platform-independent communications model. AGSP Services is modular and has been shown to be easily integrable into new and legacy NASA flight and ground communication systems.

  17. S-Cube: Enabling the Next Generation of Software Services

    NASA Astrophysics Data System (ADS)

    Metzger, Andreas; Pohl, Klaus

    The Service Oriented Architecture (SOA) paradigm is increasingly adopted by industry for building distributed software systems. However, when designing, developing and operating innovative software services and service-based systems, several challenges exist. Those challenges include how to manage the complexity of those systems, how to establish, monitor and enforce Quality of Service (QoS) and Service Level Agreements (SLAs), as well as how to build those systems such that they can proactively adapt to dynamically changing requirements and context conditions. Developing foundational solutions for those challenges requires joint efforts of different research communities such as Business Process Management, Grid Computing, Service Oriented Computing and Software Engineering. This paper provides an overview of S-Cube, the European Network of Excellence on Software Services and Systems. S-Cube brings together researchers from leading research institutions across Europe, who join their competences to develop foundations, theories as well as methods and tools for future service-based systems.

  18. Migration of the CERN IT Data Centre Support System to ServiceNow

    NASA Astrophysics Data System (ADS)

    Alvarez Alonso, R.; Arneodo, G.; Barring, O.; Bonfillou, E.; Coelho dos Santos, M.; Dore, V.; Lefebure, V.; Fedorko, I.; Grossir, A.; Hefferman, J.; Mendez Lorenzo, P.; Moller, M.; Pera Mira, O.; Salter, W.; Trevisani, F.; Toteva, Z.

    2014-06-01

    The large potential and flexibility of the ServiceNow infrastructure based on "best practises" methods is allowing the migration of some of the ticketing systems traditionally used for the monitoring of the servers and services available at the CERN IT Computer Centre. This migration enables the standardization and globalization of the ticketing and control systems implementing a generic system extensible to other departments and users. One of the activities of the Service Management project together with the Computing Facilities group has been the migration of the ITCM structure based on Remedy to ServiceNow within the context of one of the ITIL processes called Event Management. The experience gained during the first months of operation has been instrumental towards the migration to ServiceNow of other service monitoring systems and databases. The usage of this structure is also extended to the service tracking at the Wigner Centre in Budapest.

  19. Agent-based user-adaptive service provision in ubiquitous systems

    NASA Astrophysics Data System (ADS)

    Saddiki, H.; Harroud, H.; Karmouch, A.

    2012-11-01

    With the increasing availability of smartphones, tablets and other computing devices, technology consumers have grown accustomed to performing all of their computing tasks anytime, anywhere and on any device. There is a greater need to support ubiquitous connectivity and accommodate users by providing software as network-accessible services. In this paper, we propose a MAS-based approach to adaptive service composition and provision that automates the selection and execution of a suitable composition plan for a given service. With agents capable of autonomous and intelligent behavior, the composition plan is selected in a dynamic negotiation driven by a utility-based decision-making mechanism; and the composite service is built by a coalition of agents each providing a component necessary to the target service. The same service can be built in variations for catering to dynamic user contexts and further personalizing the user experience. Also multiple services can be grouped to satisfy new user needs.
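    The utility-based selection step described above can be sketched as a simple scoring pass over candidate composition plans. The plan names, QoS attributes and weights below are invented for illustration and stand in for whatever the negotiating agents would actually measure.

```python
# Hypothetical candidate composition plans and QoS weights; both the
# attributes and the weight values are assumptions for illustration.
plans = [
    {"name": "wifi-local",  "latency_ms": 40.0,  "cost": 3.0},
    {"name": "cloud-cheap", "latency_ms": 180.0, "cost": 0.5},
    {"name": "cloud-fast",  "latency_ms": 60.0,  "cost": 2.0},
]

def utility(plan, w_latency=0.01, w_cost=1.0):
    """Higher is better: a weighted penalty on latency and cost."""
    return -(w_latency * plan["latency_ms"] + w_cost * plan["cost"])

# The agents' negotiation reduces, in this sketch, to picking the
# plan with maximum utility under the current weights.
best = max(plans, key=utility)
```

    With these weights the cheap remote plan wins; shifting weight toward latency (say, for an interactive user context) flips the choice to the local plan, which is how the same service can be composed differently per user context.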

  20. Grid workflow job execution service 'Pilot'

    NASA Astrophysics Data System (ADS)

    Shamardin, Lev; Kryukov, Alexander; Demichev, Andrey; Ilyin, Vyacheslav

    2011-12-01

    'Pilot' is a grid job execution service for workflow jobs. The main goal for the service is to automate computations with multiple stages since they can be expressed as simple workflows. Each job is a directed acyclic graph of tasks and each task is an execution of something on a grid resource (or 'computing element'). Tasks may be submitted to any WS-GRAM (Globus Toolkit 4) service. The target resources for the tasks execution are selected by the Pilot service from the set of available resources which match the specific requirements from the task and/or job definition. Some simple conditional execution logic is also provided. The 'Pilot' service is built on the REST concepts and provides a simple API through authenticated HTTPS. This service is deployed and used in production in a Russian national grid project GridNNN.
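    The job model described (a directed acyclic graph of tasks) maps directly onto a topological execution order. A minimal sketch using Python's standard `graphlib`, with an invented four-task workflow standing in for real WS-GRAM submissions:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical four-task workflow job: task -> its prerequisite tasks.
# In the real service each task would be submitted to a WS-GRAM
# computing element, not run locally.
job = {
    "stage-in":  set(),
    "simulate":  {"stage-in"},
    "render":    {"stage-in"},
    "stage-out": {"simulate", "render"},
}

def execute(job):
    """Run the DAG's tasks in an order that respects every dependency."""
    finished = []
    for task in TopologicalSorter(job).static_order():
        finished.append(task)     # placeholder for a real grid submission
    return finished

order = execute(job)
```

    Note that "simulate" and "render" have no ordering constraint between them, so a scheduler like Pilot is free to dispatch them to different computing elements concurrently.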

  1. The DFVLR main department for central data processing, 1976 - 1983

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Data processing, equipment and systems operation, operative and user systems, user services, computer networks and communications, text processing, computer graphics, and high power computers are discussed.

  2. The use of National Weather Service Data to Compute the Dose to the MEOI.

    PubMed

    Vickers, Linda

    2018-05-01

    The Turner method is the "benchmark method" for computing the stability class that is used to compute the X/Q (s/m³). The Turner method should be used to ascertain the validity of X/Q results determined by other methods. This paper used site-specific meteorological data obtained from the National Weather Service. The Turner method described herein is simple, quick, accurate, and transparent because all of the data, calculations, and results are visible for verification and validation against published literature.

  3. Computing Services and Assured Computing

    DTIC Science & Technology

    2006-05-01

    fighters’ ability to execute the mission.” Computing Services 4 We run IT Systems that: provide medical care pay the warfighters manage maintenance...users • 1,400 applications • 18 facilities • 180 software vendors • 18,000+ copies of executive software products • Virtually every type of mainframe and...

  4. From serological to computer cross-matching in nine hospitals.

    PubMed

    Georgsen, J; Kristensen, T

    1998-01-01

    In 1991 it was decided to reorganise the transfusion service of the County of Funen. The aims were to standardise and improve the quality of blood components, laboratory procedures and the transfusion service, and to reduce the number of outdated blood units. Part of the efficiency gains was reinvested in a dedicated computer system making it possible, among other things, to change the cross-match procedures from serological to computer cross-matching according to the ABCD concept. This communication describes how this transition was performed in terms of laboratory techniques, education of personnel and implementation of the computer system, and indicates the results obtained. The Funen Transfusion Service has by now performed more than 100,000 red cell transfusions based on ABCD cross-matching and has not encountered any problems. Major results are the significant reductions in cross-match procedures and blood grouping, as well as in the number of outdated blood components.

  5. Computer simulation and performance assessment of the packet-data service of the Aeronautical Mobile Satellite Service (AMSS)

    NASA Technical Reports Server (NTRS)

    Ferzali, Wassim; Zacharakis, Vassilis; Upadhyay, Triveni; Weed, Dennis; Burke, Gregory

    1995-01-01

    The ICAO Aeronautical Mobile Communications Panel (AMCP) completed the drafting of the Aeronautical Mobile Satellite Service (AMSS) Standards and Recommended Practices (SARPs) and the associated Guidance Material and submitted these documents to the ICAO Air Navigation Commission (ANC) for ratification in May 1994. This effort encompassed an extensive, multi-national SARPs validation. As part of this activity, the US Federal Aviation Administration (FAA) sponsored an effort to validate the SARPs via computer simulation. This paper provides a description of this effort. Specifically, it describes: (1) the approach selected for the creation of a high-fidelity AMSS computer model; (2) the test traffic generation scenarios; and (3) the resultant AMSS performance assessment. More recently, the AMSS computer model was also used to provide AMSS performance statistics in support of the RTCA standardization activities. This paper describes this effort as well.

  6. Key Lessons in Building "Data Commons": The Open Science Data Cloud Ecosystem

    NASA Astrophysics Data System (ADS)

    Patterson, M.; Grossman, R.; Heath, A.; Murphy, M.; Wells, W.

    2015-12-01

    Cloud computing technology has created a shift around data and data analysis by allowing researchers to push computation to data as opposed to having to pull data to an individual researcher's computer. Subsequently, cloud-based resources can provide unique opportunities to capture computing environments used both to access raw data in its original form and also to create analysis products which may be the source of data for tables and figures presented in research publications. Since 2008, the Open Cloud Consortium (OCC) has operated the Open Science Data Cloud (OSDC), which provides scientific researchers with computational resources for storing, sharing, and analyzing large (terabyte and petabyte-scale) scientific datasets. OSDC has provided compute and storage services to over 750 researchers in a wide variety of data intensive disciplines. Recently, internal users have logged about 2 million core hours each month. The OSDC also serves the research community by colocating these resources with access to nearly a petabyte of public scientific datasets in a variety of fields also accessible for download externally by the public. In our experience operating these resources, researchers are well served by "data commons," meaning cyberinfrastructure that colocates data archives, computing, and storage infrastructure and supports essential tools and services for working with scientific data. In addition to the OSDC public data commons, the OCC operates a data commons in collaboration with NASA and is developing a data commons for NOAA datasets. As cloud-based infrastructures for distributing and computing over data become more pervasive, we ask, "What does it mean to publish data in a data commons?" Here we present the OSDC perspective and discuss several services that are key in architecting data commons, including digital identifier services.

  7. Centralized Authorization Using a Direct Service, Part II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wachsmann, A

    Authorization is the process of deciding if entity X is allowed to have access to resource Y. Determining the identity of X is the job of the authentication process. One task of authorization in computer networks is to define and determine which user has access to which computers in the network. On Linux, the tendency exists to create a local account for each single user who should be allowed to log on to a computer. This is typically the case because a user not only needs login privileges to a computer but also additional resources like a home directory to actually do some work. Creating a local account on every computer takes care of all this. The problem with this approach is that these local accounts can be inconsistent with each other. The same user name could have a different user ID and/or group ID on different computers. Even more problematic is when two different accounts share the same user ID and group ID on different computers: user joe on computer1 could have user ID 1234 and group ID 56, and user jane on computer2 could have the same user ID 1234 and group ID 56. This is a big security risk in case shared resources like NFS are used. These two different accounts are the same to an NFS server, so these users can wipe out each other's files. The solution to this inconsistency problem is to have only one central, authoritative data source for this kind of information and a means of providing all your computers with access to this central source. This is what a "Directory Service" is. The two directory services most widely used for centralizing authorization data are the Network Information Service (NIS, formerly known as Yellow Pages or YP) and the Lightweight Directory Access Protocol (LDAP).
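    The inconsistency described (two different users sharing one numeric user ID on different machines) is easy to detect once account data is gathered in one place, which is precisely what a directory service guarantees by construction. A minimal sketch, with invented passwd-style tables for the two hosts from the text:

```python
# Invented passwd-style account tables for two hosts:
# user name -> (user ID, group ID) on each machine.
hosts = {
    "computer1": {"joe": (1234, 56)},
    "computer2": {"jane": (1234, 56)},
}

def uid_collisions(hosts):
    """Return each numeric user ID that is claimed by more than one
    distinct user name somewhere in the network."""
    claims = {}
    for accounts in hosts.values():
        for user, (uid, _gid) in accounts.items():
            claims.setdefault(uid, set()).add(user)
    return {uid: users for uid, users in claims.items() if len(users) > 1}
```

    Here `uid_collisions(hosts)` flags ID 1234 as claimed by both joe and jane, the NFS hazard described above; a directory service removes the hazard by making one account table authoritative for every host.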

  8. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    PubMed

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has transformed the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also drives up cloud users' costs. Therefore, multimedia cloud providers should try to minimize energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware (PA) algorithm to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce SLA violations dramatically.
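    The abstract does not spell out RUA's placement rule, but a generic utilization-aware consolidation heuristic conveys the idea behind VM consolidation: pack VMs onto as few hosts as possible (best-fit on remaining capacity) so that emptied hosts can be shut down. Host capacities and VM demands below are invented; this is a simplified sketch, not the paper's algorithm.

```python
# Invented host capacities and VM demands (normalized CPU shares).
# A generic best-fit-decreasing consolidation sketch, not the paper's
# exact RUA algorithm, whose details the abstract does not give.
hosts = {"h1": 1.0, "h2": 1.0}
vms = {"vm1": 0.6, "vm2": 0.5, "vm3": 0.3}

def place(hosts, vms):
    """Place each VM (largest first) on the candidate host with the
    least remaining capacity, concentrating load so that other hosts
    stay empty and can be powered down."""
    remaining = dict(hosts)
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        candidates = [h for h, r in remaining.items() if r >= demand]
        if not candidates:
            raise RuntimeError(f"no host can fit {vm}")
        target = min(candidates, key=lambda h: remaining[h])
        remaining[target] -= demand
        placement[vm] = target
    return placement, remaining
```

    The trade-off the paper measures appears even here: packing hosts more tightly saves energy but leaves less headroom per host, which is where SLA violations come from under variable workload.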

  9. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing

    PubMed Central

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-01-01

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users on a pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging on top of cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions but also raises cloud users' costs. Therefore, multimedia cloud providers should minimize their energy consumption as much as possible while satisfying consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we propose a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement and a power-aware (PA) algorithm that finds suitable hosts to shut down for energy saving. These two algorithms are combined and applied to cloud data centers to complete the process of VM consolidation. Simulation results show that there is a trade-off between a cloud data center's energy consumption and service-level agreement (SLA) violations. Moreover, the RUA algorithm handles variable workloads, preventing hosts from overloading after VM placement and dramatically reducing SLA violations. PMID:26901201

  10. A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.

    PubMed

    Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P

    2014-09-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a GARLI 2.0 web service that enables a user to quickly and easily submit thousands of maximum-likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The GARLI web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the GARLI web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  11. The Library as Leader: Computer Assisted Information Services at Northwestern University. A Report of the NULCAIS Committee on the Present Status, and Proposals for the Future, of Computer Assisted Information Services at Northwestern University Library.

    ERIC Educational Resources Information Center

    Northwestern Univ., Evanston, IL. Univ. Libraries.

    In March 1974, a study was undertaken at Northwestern University to examine the role of the library in providing information services based on computerized data bases. After taking an inventory of existing data bases at Northwestern and in the greater Chicago area, a committee suggested ways to continue and expand the scope of information…

  12. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information technology infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  13. The 30/20 GHz fixed communications systems service demand assessment. Volume 3: Appendices

    NASA Technical Reports Server (NTRS)

    Gabriszeski, T.; Reiner, P.; Rogers, J.; Terbo, W.

    1979-01-01

    The market analysis of voice, video, and data 18/30 GHz communications systems services and satellite transmission services is discussed. Detailed calculations, computer displays of traffic, survey questionnaires, and detailed service forecasts are presented.

  14. Government Cloud Computing Policies: Potential Opportunities for Advancing Military Biomedical Research.

    PubMed

    Lebeda, Frank J; Zalatoris, Jeffrey J; Scheerer, Julia B

    2018-02-07

    This position paper summarizes the development and present status of Department of Defense (DoD) and other government policies and guidance regarding cloud computing services. Given the heterogeneous and growing nature of biomedical big datasets, cloud computing services offer an opportunity to mitigate the associated storage and analysis requirements. Having on-demand network access to a shared pool of flexible computing resources creates a consolidated system that should reduce potential duplications of effort in military biomedical research. Interactive, online literature searches were performed with Google, at the Defense Technical Information Center, and at two National Institutes of Health research portfolio information sites. References cited within some of the collected documents also served as literature resources. We gathered, selected, and reviewed DoD and other government cloud computing policies and guidance published from 2009 to 2017. These policies were intended to consolidate computer resources within the government and reduce costs by decreasing the number of federal data centers and by migrating electronic data to cloud systems. Initial White House Office of Management and Budget information technology guidelines were developed for cloud usage, followed by policies and other documents from the DoD, the Defense Health Agency, and the Armed Services. Security standards from the National Institute of Standards and Technology, the General Services Administration, the DoD, and the Army were also developed. The General Services Administration and DoD Inspectors General monitored cloud usage by the DoD. A 2016 Government Accountability Office report characterized cloud computing as being economical, flexible, and fast. A congressionally mandated independent study reported that the DoD was active in offering a wide selection of commercial cloud services in addition to its milCloud system. Our findings from the Department of Health and Human Services indicated that the security infrastructure in cloud services may be more compliant with the Health Insurance Portability and Accountability Act of 1996 regulations than traditional methods. To gauge the DoD's adoption of cloud technologies, proposed metrics included cost factors, ease of use, automation, availability, accessibility, security, and policy compliance. Since 2009, plans and policies were developed for the use of cloud technology to help consolidate and reduce the number of data centers, which was expected to reduce costs, improve environmental factors, enhance information technology security, and maintain mission support for service members. Cloud technologies were also expected to improve employee efficiency and productivity. Federal cloud computing policies within the last decade also offered increased opportunities to advance military healthcare. It was assumed that these opportunities would benefit consumers of healthcare and health science data by allowing more access to centralized cloud computer facilities to store, analyze, search, and share relevant data, to enhance standardization, and to reduce potential duplications of effort. We recommend that cloud computing be considered by DoD biomedical researchers for increasing connectivity, presumably by facilitating communications and data sharing, among the various intra- and extramural laboratories. We also recommend that policies and other guidance be updated to include developing additional metrics that will help stakeholders evaluate the above-mentioned assumptions and expectations. Published by Oxford University Press on behalf of the Association of Military Surgeons of the United States 2018. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  15. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  16. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  17. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  18. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  19. 47 CFR 32.2124 - General purpose computers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...

  20. 47 CFR 80.771 - Method of computing coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Method of computing coverage. 80.771 Section 80... STATIONS IN THE MARITIME SERVICES Standards for Computing Public Coast Station VHF Coverage § 80.771 Method of computing coverage. Compute the +17 dBu contour as follows: (a) Determine the effective antenna...
