Science.gov

Sample records for additional computational costs

  1. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics.

    PubMed

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems. PMID:27176426
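    The abstract does not spell out the algorithm, but the general ring-polymer contraction idea it builds on (reducing a path-integral ring polymer to fewer beads by Fourier interpolation of the bead coordinates) can be sketched in a few lines. The function name and the one-dimensional, single-particle setting below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def contract_ring_polymer(q, m):
    """Contract a ring polymer from n beads to m beads (m < n) by
    Fourier interpolation: transform the bead positions to the
    normal modes of the free ring polymer, discard the
    high-frequency modes, and transform back on the coarser bead
    grid. The centroid (bead average) is preserved exactly."""
    n = len(q)
    modes = np.fft.rfft(q)        # free-ring-polymer normal modes
    kept = modes[: m // 2 + 1]    # keep only the lowest frequencies
    return np.fft.irfft(kept, n=m) * (m / n)
```

    Contraction preserves the centroid and low-frequency modes, which is what allows expensive ab initio quantities to be evaluated on the contracted polymer at near-classical cost.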

  2. Quantum ring-polymer contraction method: Including nuclear quantum effects at no additional computational cost in comparison to ab initio molecular dynamics

    NASA Astrophysics Data System (ADS)

    John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.

    2016-04-01

    We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.

  3. Cost of Computer Searching

    ERIC Educational Resources Information Center

    Chenery, Peter J.

    1973-01-01

    The program described has the primary objective of making Federally generated technology and research information available to public and private agencies. Cost analysis, data banks, and search strategies are explained. (Author/DH)

  4. Computational Process Modeling for Additive Manufacturing (OSU)

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the aerospace industry to "print" parts that traditionally are very complex, high-cost, or long-lead-time items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in the desired shape. The next layer of powder is applied, and the process is repeated so that, layer by layer, a very complex part is built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  5. Computational Process Modeling for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Bagg, Stacey; Zhang, Wei

    2014-01-01

    Computational Process and Material Modeling of Powder Bed additive manufacturing of IN 718. Optimize material build parameters with reduced time and cost through modeling. Increase understanding of build properties. Increase reliability of builds. Decrease time to adoption of process for critical hardware. Potential to decrease post-build heat treatments. Conduct single-track and coupon builds at various build parameters. Record build parameter information and QM Meltpool data. Refine Applied Optimization powder bed AM process model using data. Report thermal modeling results. Conduct metallography of build samples. Calibrate STK models using metallography findings. Run STK models using AO thermal profiles and report STK modeling results. Validate modeling with additional build. Photodiode Intensity measurements highly linear with power input. Melt Pool Intensity highly correlated to Melt Pool Size. Melt Pool size and intensity increase with power. Applied Optimization will use data to develop powder bed additive manufacturing process model.

  6. Computer-based electric energy cost management

    SciTech Connect

    Grant, D.C.; Gallant, R.W.

    1988-01-01

    Control over electrical energy operating costs and their associated administrative overheads can be greatly improved by using a computer to manage electric service contracts. The electrical power supervision system (EPSS) is particularly effective for oil and gas producers whose electric loads are both diversified and distributed over several geographic areas. The system allows for centralized control under a trained specialist who ensures that for each production facility the contract terms and electrical costs are optimized. In addition, this approach to electric energy management effectively reduces corporate overheads by automating invoice payment procedures and enhancing lines of communication with the electric utilities.

  7. Computer Maintenance Operations Center (CMOC), additional computer support equipment ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Computer Maintenance Operations Center (CMOC), additional computer support equipment - Beale Air Force Base, Perimeter Acquisition Vehicle Entry Phased-Array Warning System, Technical Equipment Building, End of Spencer Paul Road, north of Warren Shingle Road (14th Street), Marysville, Yuba County, CA

  8. A Study of Additional Costs of Second Language Instruction.

    ERIC Educational Resources Information Center

    McEwen, Nelly

    A study was conducted whose primary aim was to identify and explain additional costs incurred by Alberta, Canada school jurisdictions providing second language instruction in 1980. Additional costs were defined as those which would not have been incurred had the second language program not been in existence. Three types of additional costs were…

  9. Cut Costs with Thin Client Computing.

    ERIC Educational Resources Information Center

    Hartley, Patrick H.

    2001-01-01

    Discusses how school districts can considerably increase the number of administrative computers in their districts without a corresponding increase in costs by using the "Thin Client" component of the Total Cost of Ownership (TCO) model. TCO and Thin Client are described, including software and hardware components. An example of a Thin Client…

  10. Computed tomography characterisation of additive manufacturing materials.

    PubMed

    Bibb, Richard; Thompson, Darren; Winder, John

    2011-06-01

    Additive manufacturing, covering processes frequently referred to as rapid prototyping and rapid manufacturing, provides new opportunities in the manufacture of highly complex and custom-fitting medical devices and products. Whilst many medical applications of AM have been explored and physical properties of the resulting parts have been studied, the characterisation of AM materials in computed tomography has not been explored. The aim of this study was to determine the CT number of commonly used AM materials. There are many potential applications of the information resulting from this study in the design and manufacture of wearable medical devices, implants, prostheses and medical imaging test phantoms. A selection of 19 AM material samples was CT scanned and the resultant images analysed to ascertain the materials' CT number and appearance in the images. It was found that some AM materials have CT numbers very similar to those of human tissues; that FDM, SLA and SLS produce samples that appear uniform on CT images; and that 3D-printed materials show a variation in internal structure.
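    For readers unfamiliar with the quantity being measured: the CT number reported in such studies is conventionally expressed in Hounsfield units, defined relative to the linear attenuation coefficient of water. A minimal sketch of that standard definition (not code from the study):

```python
def ct_number(mu, mu_water):
    """CT number in Hounsfield units (HU): the linear attenuation
    coefficient mu of a material, expressed relative to water and
    scaled by 1000. Water is 0 HU; air is approximately -1000 HU."""
    return 1000.0 * (mu - mu_water) / mu_water
```

    On this scale, soft tissues cluster near 0 HU, which is why AM polymers with similar attenuation are candidates for tissue-equivalent imaging phantoms.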

  11. 20 CFR 404.278 - Additional cost-of-living increase.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Additional cost-of-living increase. 404.278 Section 404.278 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Cost-Of-Living Increases §...

  12. Computer-Controlled HVAC -- at Low Cost

    ERIC Educational Resources Information Center

    American School and University, 1974

    1974-01-01

    By tying into a computerized building-automation network, Schaumburg High School, Illinois, slashed its energy consumption by one-third. The remotely connected computer controls the mechanical system for the high school as well as other buildings in the community, with the cost being shared by all. (Author)

  13. The Hidden Costs of Wireless Computer Labs

    ERIC Educational Resources Information Center

    Daly, Una

    2005-01-01

    Various elementary schools and middle schools across the U.S. have purchased one or more mobile laboratories. Although the wireless labs have provided more classroom computing, teachers and technology aides still have mixed views about their cost-benefit ratio. This is because the proliferation of viruses and spyware has dramatically increased…

  14. New cement additive improves slurry properties and saves cost

    SciTech Connect

    Pollard, R.; Hibbeler, J.; DiLullo, G.; Shotton, E.A.

    1994-12-31

    A new cement additive has been developed which improves slurry performance and reduces cost. The additive is a vitrified aggregate of calcium-magnesium aluminosilicates with potential cementitious reactivity, hereafter abbreviated CMAS. CMAS has been used successfully on oil and gas wells throughout Indonesia. The purpose of this paper is to illustrate the technical enhancements and cost effectiveness of slurries incorporating CMAS. Laboratory data is presented and working mechanisms are defined to highlight CMAS's positive effect on compressive strength, fluid loss control, free water control, gas migration control, and resistance to strength retrogression and aggressive fluids. Finally, case studies and an economic analysis are presented to show the cost savings for actual well applications.

  15. 48 CFR 246.470-1 - Assessment of additional costs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Assessment of additional costs. 246.470-1 Section 246.470-1 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE CONTRACT MANAGEMENT QUALITY ASSURANCE Government Contract...

  16. Additive Manufacturing of Low Cost Upper Stage Propulsion Components

    NASA Technical Reports Server (NTRS)

    Protz, Christopher; Bowman, Randy; Cooper, Ken; Fikes, John; Taminger, Karen; Wright, Belinda

    2014-01-01

    NASA is currently developing Additive Manufacturing (AM) technologies and design tools aimed at reducing the costs and manufacturing time of regeneratively cooled rocket engine components. These Low Cost Upper Stage Propulsion (LCUSP) tasks are funded through NASA's Game Changing Development Program in the Space Technology Mission Directorate. The LCUSP project will develop a copper alloy additive manufacturing design process and develop and optimize the Electron Beam Freeform Fabrication (EBF3) manufacturing process to direct deposit a nickel alloy structural jacket and manifolds onto an SLM manufactured GRCop chamber and Ni-alloy nozzle. In order to develop these processes, the project will characterize both the microstructural and mechanical properties of the SLM-produced GRCop-84, and will explore and document novel design techniques specific to AM combustion device components. These manufacturing technologies will be used to build a 25K-class regenerative chamber and nozzle (to be used with tested DMLS injectors) that will be tested individually and as a system in hot fire tests to demonstrate the applicability of the technologies. These tasks are expected to bring costs and manufacturing time down as spacecraft propulsion systems typically comprise more than 70% of the total vehicle cost and account for a significant portion of the development schedule. Additionally, high pressure/high temperature combustion chambers and nozzles must be regeneratively cooled to survive their operating environment, causing their design to be time consuming and costly to build. LCUSP presents an opportunity to develop and demonstrate a process that can infuse these technologies into industry, build competition, and drive down costs of future engines.

  17. Cost Estimation of Laser Additive Manufacturing of Stainless Steel

    NASA Astrophysics Data System (ADS)

    Piili, Heidi; Happonen, Ari; Väistö, Tapio; Venkataramanan, Vijaikrishnan; Partanen, Jouni; Salminen, Antti

    Laser additive manufacturing (LAM) is a layer-wise fabrication method in which a laser beam melts metallic powder to form solid objects. Although 3D printing was invented 30 years ago, its industrial use remains quite limited, whereas the introduction of cheap consumer 3D printers in recent years has popularized the technology. Interest is focused more and more on the manufacturing of functional parts. The aim of this study is to define and discuss the current economic opportunities and restrictions of the LAM process. Manufacturing costs were studied for different build scenarios, each with a cost structure estimated from the calculated build time and from the costs of the machine, material and energy at optimized machine utilization. All manufacturing and time simulations in this study were carried out with a research machine equal to commercial EOS M series equipment. The study shows that the main expense in LAM is the investment cost of the LAM machine, compared to which the relative proportions of the energy and material costs are very low. The manufacturing time per part is the key factor in optimizing the costs of LAM.
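    The cost structure described in this abstract (machine investment amortized over build time, plus comparatively small material and energy terms) can be sketched as a simple per-part model. All parameter names and the exact decomposition below are assumptions for illustration, not the authors' model:

```python
def lam_part_cost(build_time_h, machine_price, machine_life_h,
                  utilization, material_kg, material_price_per_kg,
                  power_kw, energy_price_per_kwh):
    """Per-part cost estimate for laser additive manufacturing.
    The machine investment is amortized over the utilized machine
    hours and charged per hour of build time; material and energy
    costs are added on top."""
    machine_rate = machine_price / (machine_life_h * utilization)
    machine_cost = machine_rate * build_time_h
    material_cost = material_kg * material_price_per_kg
    energy_cost = power_kw * build_time_h * energy_price_per_kwh
    return machine_cost + material_cost + energy_cost
```

    Because the machine term scales directly with build time, shortening the manufacturing time per part dominates any savings on material or energy, consistent with the study's conclusion.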

  18. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. Paper evaluates computational efficiency of different computer architectural types in terms of relative cost and computing time.

  19. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
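    The cost-benefit trade-off described in this abstract can be illustrated with a toy experiment: calibrate a model on progressively larger samples and watch the error approach the noise floor with diminishing returns. A simple least-squares quadratic fit stands in for the PNN here; the synthetic data and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic pattern: a noisy quadratic relationship.
x = rng.uniform(-1.0, 1.0, 5000)
y = 3.0 * x**2 - x + rng.normal(0.0, 0.1, x.size)

def fit_error(n):
    """Calibrate a quadratic model on n sampled points (the 'cost')
    and return its RMS error over the full data set (the 'benefit').
    The error cannot drop below the noise floor, so additional
    samples eventually buy almost nothing."""
    idx = rng.choice(x.size, n, replace=False)
    coeffs = np.polyfit(x[idx], y[idx], 2)
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))

errors = {n: fit_error(n) for n in (10, 100, 1000)}
```

    Past a certain sample size the error sits at the noise floor, which is the point of diminishing returns the report describes; a stratified sampler would aim to reach that floor with fewer points.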

  20. 48 CFR 352.216-70 - Additional cost principles.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... needed to support the bids, proposals, and applications. (2) B & P costs of the current accounting period are allowable as indirect costs. (3) B & P costs of past accounting periods are unallowable in the current period. However, if the organization's established practice is to treat these costs by some...

  1. X-ray computed tomography for additive manufacturing: a review

    NASA Astrophysics Data System (ADS)

    Thompson, A.; Maskery, I.; Leach, R. K.

    2016-07-01

    In this review, the use of x-ray computed tomography (XCT) is examined, identifying the requirement for volumetric dimensional measurements in industrial verification of additively manufactured (AM) parts. The XCT technology and AM processes are summarised, and their historical use is documented. The use of XCT and AM as tools for medical reverse engineering is discussed, and the transition of XCT from a tool used solely for imaging to a vital metrological instrument is documented. The current states of the combined technologies are then examined in detail, separated into porosity measurements and general dimensional measurements. In the conclusions of this review, the limitation of resolution on improvement of porosity measurements and the lack of research regarding the measurement of surface texture are identified as the primary barriers to ongoing adoption of XCT in AM. The limitations of both AM and XCT regarding slow speeds and high costs, when compared to other manufacturing and measurement techniques, are also noted as general barriers to continued adoption of XCT and AM.

  2. Estimating the additional cost of disability: beyond budget standards.

    PubMed

    Wilkinson-Meyers, Laura; Brown, Paul; McNeill, Robert; Patston, Philip; Dylan, Sacha; Baker, Ronelle

    2010-11-01

    Disabled people have long advocated for sufficient resources to live a life with the same rights and responsibilities as non-disabled people. Identifying the unique resource needs of disabled people relative to the population as a whole and understanding the source of these needs is critical for determining adequate levels of income support and for prioritising service provision. Previous attempts to identify the resources and costs associated with disability have tended to rely on surveys of current resource use. These approaches have been criticised as being inadequate for identifying the resources that would be required to achieve a similar standard of living to non-disabled people and for not using methods that are acceptable to and appropriate for the disabled community. The challenge is therefore to develop a methodology that accurately identifies these unique resource needs, uses an approach that is acceptable to the disabled community, enables all disabled people to participate, and distinguishes 'needs' from 'wants.' This paper describes and presents the rationale for a mixed methodology for identifying and prioritising the resource needs of disabled people. The project is a partnership effort between disabled researchers, a disability support organisation and academic researchers in New Zealand. The method integrates a social model of disability framework and an economic cost model using a budget standards approach to identify additional support, equipment, travel and time required to live an 'ordinary life' in the community. A survey is then used to validate the findings and identify information gaps and resource priorities of the community. Both the theoretical basis of the approach and the practical challenges of designing and implementing a methodology that is acceptable to the disabled community, service providers and funding agencies are discussed. PMID:20933315

  3. 48 CFR 3452.216-70 - Additional cost principles.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... scientific, cost and other data needed to support the bids, proposals and applications. Bid and proposal... practice is to treat these costs by some other method, they may be accepted if they are found to...

  4. 48 CFR 3452.216-70 - Additional cost principles.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... scientific, cost, and other data needed to support the bids, proposals, and applications. Bid and proposal... practice is to treat these costs by some other method, they may be accepted if they are found to...

  5. 48 CFR 352.216-70 - Additional cost principles.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-Federal contracts, grants, and agreements, including the development of scientific, cost, and other data... method, they may be accepted if they are found to be reasonable and equitable. (4) B & P costs do...

  6. 48 CFR 352.216-70 - Additional cost principles.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... include independent research and development (IR & D) costs covered by the following paragraph, or pre-award costs covered by paragraph 36 of Attachment B to OMB Circular A-122. (b) IR & D costs. (1) IR & D...-Federal contracts, grants, or other agreements. (2) IR & D shall be allocated its proportionate share...

  7. 48 CFR 352.216-70 - Additional cost principles.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... include independent research and development (IR & D) costs covered by the following paragraph, or pre-award costs covered by paragraph 36 of Attachment B to OMB Circular A-122. (b) IR & D costs. (1) IR & D...-Federal contracts, grants, or other agreements. (2) IR & D shall be allocated its proportionate share...

  8. 48 CFR 352.216-70 - Additional cost principles.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... include independent research and development (IR & D) costs covered by the following paragraph, or pre-award costs covered by paragraph 36 of Attachment B to OMB Circular A-122. (b) IR & D costs. (1) IR & D...-Federal contracts, grants, or other agreements. (2) IR & D shall be allocated its proportionate share...

  9. 38 CFR 36.4404 - Computation of cost.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... cost of adaptations. Under section 2101(b) of Chapter 21, for the purpose of computing the amount of... market value of the adaptations, including installation costs, determined to be reasonably necessary,...

  10. Additional development of the XTRAN3S computer program

    NASA Technical Reports Server (NTRS)

    Borland, C. J.

    1989-01-01

    Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.

  11. Computed Tomography Inspection and Analysis for Additive Manufacturing Components

    NASA Technical Reports Server (NTRS)

    Beshears, Ronald D.

    2016-01-01

    Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.

  12. Cutting Technology Costs with Refurbished Computers

    ERIC Educational Resources Information Center

    Dessoff, Alan

    2010-01-01

    Many district administrators are finding that they can save money on computers by buying preowned ones instead of new ones. The practice has other benefits as well: It allows districts to give more computers to more students who need them, and it also promotes good environmental practices by keeping the machines out of landfills, where they…

  13. Estimating pressurized water reactor decommissioning costs: A user's manual for the PWR Cost Estimating Computer Program (CECP) software. Draft report for comment

    SciTech Connect

    Bierschbach, M.C.; Mencinsky, G.J.

    1993-10-01

    With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit to the US Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning PWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

  14. Estimating boiling water reactor decommissioning costs. A user's manual for the BWR Cost Estimating Computer Program (CECP) software: Draft report for comment

    SciTech Connect

    Bierschbach, M.C.

    1994-12-01

    With the issuance of the Decommissioning Rule (July 27, 1988), nuclear power plant licensees are required to submit to the U.S. Nuclear Regulatory Commission (NRC) for review, decommissioning plans and cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning BWR power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

  15. Estimating boiling water reactor decommissioning costs: A user's manual for the BWR Cost Estimating Computer Program (CECP) software. Final report

    SciTech Connect

    Bierschbach, M.C.

    1996-06-01

    Nuclear power plant licensees are required to submit to the US Nuclear Regulatory Commission (NRC) for review their decommissioning cost estimates. This user's manual and the accompanying Cost Estimating Computer Program (CECP) software provide a cost-calculating methodology to the NRC staff that will assist them in assessing the adequacy of the licensee submittals. The CECP, designed to be used on a personal computer, provides estimates for the cost of decommissioning boiling water reactor (BWR) power stations to the point of license termination. Such cost estimates include component, piping, and equipment removal costs; packaging costs; decontamination costs; transportation costs; burial costs; and manpower costs. In addition to costs, the CECP also calculates burial volumes, person-hours, crew-hours, and exposure person-hours associated with decommissioning.

  16. Low-Cost Computers for Education in Developing Countries

    ERIC Educational Resources Information Center

    James, Jeffrey

    2011-01-01

    This paper studies the distribution of computer use in a comparison between two of the most dominant suppliers of low-cost computers for education in developing countries (partly because they involve diametrically opposite ways of tackling the problem). The comparison is made in the context of an analytical framework which traces the changing…

  17. Thermodynamic cost of computation, algorithmic complexity and the information metric

    NASA Technical Reports Server (NTRS)

    Zurek, W. H.

    1989-01-01

    Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
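    Kolmogorov (algorithmic) complexity itself is uncomputable, so practical work on compression-based distances between binary strings typically substitutes a real compressor. The normalized compression distance below is a standard computable proxy in that spirit, offered as an illustration rather than Zurek's construction:

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance between two byte strings,
    using compressed length as a computable stand-in for
    Kolmogorov complexity: similar strings compress well together,
    so their distance is small."""
    ca = len(zlib.compress(a))
    cb = len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)
```

    The numerator approximates the extra information needed to describe one string given the other, mirroring the information-metric idea in the abstract.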

  18. Costs incurred by applying computer-aided design/computer-aided manufacturing techniques for the reconstruction of maxillofacial defects.

    PubMed

    Rustemeyer, Jan; Melenberg, Alex; Sari-Rieger, Aynur

    2014-12-01

    This study aims to evaluate the additional costs incurred by using a computer-aided design/computer-aided manufacturing (CAD/CAM) technique for reconstructing maxillofacial defects by analyzing typical cases. The medical charts of 11 consecutive patients who were subjected to the CAD/CAM technique were considered, and invoices from the companies providing the CAD/CAM devices were reviewed for every case. The number of devices used was significantly correlated with cost (r = 0.880; p < 0.001). Significant differences in mean costs were found between cases in which prebent reconstruction plates were used (€3346.00 ± €29.00) and cases in which they were not (€2534.22 ± €264.48; p < 0.001). Significant differences were also obtained between the costs of two, three and four devices, even when ignoring the cost of reconstruction plates. Additional fees provided by statutory health insurance covered a mean of 171.5% ± 25.6% of the cost of the CAD/CAM devices. Since the additional fees provide financial compensation, we believe that the CAD/CAM technique is suited for wide application and not restricted to complex cases. Where additional fees/funds are not available, the CAD/CAM technique might be unprofitable, so whether or not to use it remains a case-by-case decision weighing cost against benefit.

  19. Quantifying the Cost-Benefits of Computer Dental Management Systems

    PubMed Central

    Kerr, David R.

    1982-01-01

    There is little literature that attempts to quantify the cost benefits of computers in dental practice. This study surveyed eighteen vendors to establish a typical computer dental management system. Estimating the labor savings of such computers, the rate of return of these systems was calculated using the methodology employed in a study by Arthur Young and Company. Sensitivity to adjusted initial investment, applications and clerical salary was also calculated. Unlike the findings of the Young study, which assumed a fully automated office, this study found that the typical use of most computer dental management systems does not give an adequate return on investment.

  20. Cost analysis for computer supported multiple-choice paper examinations

    PubMed Central

    Mandel, Alexander; Hörnlein, Alexander; Ifland, Marianus; Lüneburg, Edeltraud; Deckert, Jürgen; Puppe, Frank

    2011-01-01

Introduction: Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support, or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, the quality, promptness and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. Methods: For the past three semesters, a computer program for the input and formatting of MC questions in medical and other paper-based examinations has been used and continuously improved at Wuerzburg University. In the winter semester (WS) 2009/10 eleven, in the summer semester (SS) 2010 twelve, and in WS 2010/11 thirteen medical examinations were administered with the program and automatically evaluated. For the last two semesters the remaining manual workload was recorded. Results: The effort for formatting and subsequent analysis, including adjustments of the analysis, of an average examination with about 140 participants and about 35 questions was 5-7 hours for exams without complications in WS 2009/10, about 2 hours in SS 2010, and about 1.5 hours in WS 2010/11. Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours in WS 2010/11. Discussion: For conventional multiple-choice exams, the computer-based formatting and evaluation of paper-based exams offers a significant time reduction for lecturers in comparison with the manual correction of paper-based exams, and compared to purely electronically conducted exams it needs a much simpler technological infrastructure and fewer staff during the exam. PMID:22205913
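The automatic evaluation described here amounts to scoring answer strings against a key and computing per-item statistics; a minimal sketch with hypothetical responses:

```python
# Answer key and responses: one string per student, one character per question
# (hypothetical data, not from the Wuerzburg exams)
key = "ABCDA"
responses = ["ABCDA", "ABCCA", "BBCDA", "ABCDD"]

# Score each student: number of answers matching the key
scores = [sum(r == k for r, k in zip(resp, key)) for resp in responses]

# Item difficulty: fraction of students answering each question correctly
difficulty = [
    sum(resp[i] == key[i] for resp in responses) / len(responses)
    for i in range(len(key))
]
print(scores)      # [5, 4, 4, 4]
print(difficulty)  # [0.75, 1.0, 1.0, 0.75, 0.75]
```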

  1. Software Requirements for a System to Compute Mean Failure Cost

    SciTech Connect

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali

    2010-01-01

In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. We also demonstrated this infrastructure with the results of security breakdowns for an e-commerce case. In this paper, we illustrate this infrastructure by an application that supports the computation of the Mean Failure Cost (MFC) for each stakeholder.
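One published formulation of the Mean Failure Cost chains matrices of stakes, dependencies, impacts, and threat probabilities; the sketch below follows that structure with hypothetical numbers, not the paper's e-commerce data:

```python
import numpy as np

# Hypothetical MFC inputs, following the matrix formulation MFC = ST x DP x IM x PT
ST = np.array([[900.0, 300.0],    # stakes: stakeholders x requirements ($/failure)
               [150.0, 600.0]])
DP = np.array([[0.6, 0.4],        # dependency: requirements x components
               [0.2, 0.8]])
IM = np.array([[0.5, 0.1],        # impact: components x threats
               [0.3, 0.7]])
PT = np.array([0.01, 0.02])       # threat emergence probabilities

mfc = ST @ DP @ IM @ PT           # mean failure cost per stakeholder, $/unit time
print(mfc.round(2))
```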

  2. CHARMM additive and polarizable force fields for biophysics and computer-aided drug design

    PubMed Central

    Vanommeslaeghe, K.

    2014-01-01

Background Molecular Mechanics (MM) is the method of choice for computational studies of biomolecular systems owing to its modest computational cost, which makes it possible to routinely perform molecular dynamics (MD) simulations on chemical systems of biophysical and biomedical relevance. Scope of Review As one of the main factors limiting the accuracy of MD results is the empirical force field used, the present paper offers a review of recent developments in the CHARMM additive force field, one of the most popular biomolecular force fields. Additionally, we present a detailed discussion of the CHARMM Drude polarizable force field, anticipating a growth in the importance and utilization of polarizable force fields in the near future. Throughout the discussion emphasis is placed on the force fields’ parametrization philosophy and methodology. Major Conclusions Recent improvements in the CHARMM additive force field are mostly related to newly found weaknesses in the previous generation of additive force fields. Beyond the additive approximation is the newly available CHARMM Drude polarizable force field, which allows for MD simulations of up to 1 microsecond on proteins, DNA, lipids and carbohydrates. General Significance Addressing the limitations ensures the reliability of the new CHARMM36 additive force field for the types of calculations that are presently coming into routine computational reach, while the availability of the Drude polarizable force field offers an inherently more accurate model of the underlying physical forces driving macromolecular structures and dynamics. PMID:25149274

  3. Additional support for the TDK/MABL computer program

    NASA Technical Reports Server (NTRS)

    Nickerson, G. R.; Dunn, Stuart S.

    1993-01-01

    An advanced version of the Two-Dimensional Kinetics (TDK) computer program was developed under contract and released to the propulsion community in early 1989. Exposure of the code to this community indicated a need for improvements in certain areas. In particular, the TDK code needed to be adapted to the special requirements imposed by the Space Transportation Main Engine (STME) development program. This engine utilizes injection of the gas generator exhaust into the primary nozzle by means of a set of slots. The subsequent mixing of this secondary stream with the primary stream with finite rate chemical reaction can have a major impact on the engine performance and the thermal protection of the nozzle wall. In attempting to calculate this reacting boundary layer problem, the Mass Addition Boundary Layer (MABL) module of TDK was found to be deficient in several respects. For example, when finite rate chemistry was used to determine gas properties, (MABL-K option) the program run times became excessive because extremely small step sizes were required to maintain numerical stability. A robust solution algorithm was required so that the MABL-K option could be viable as a rocket propulsion industry design tool. Solving this problem was a primary goal of the phase 1 work effort.

  4. 7 CFR 1710.253 - Engineering and cost studies-addition of generation capacity.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Engineering and cost studies-addition of generation... TO ELECTRIC LOANS AND GUARANTEES Construction Work Plans and Related Studies § 1710.253 Engineering... engineering and cost studies as specified by RUS. The studies shall cover a period from the beginning of...

  5. 7 CFR 1710.253 - Engineering and cost studies-addition of generation capacity.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Engineering and cost studies-addition of generation... TO ELECTRIC LOANS AND GUARANTEES Construction Work Plans and Related Studies § 1710.253 Engineering... engineering and cost studies as specified by RUS. The studies shall cover a period from the beginning of...

  6. 7 CFR 1710.253 - Engineering and cost studies-addition of generation capacity.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Engineering and cost studies-addition of generation... TO ELECTRIC LOANS AND GUARANTEES Construction Work Plans and Related Studies § 1710.253 Engineering... engineering and cost studies as specified by RUS. The studies shall cover a period from the beginning of...

  7. 7 CFR 1710.253 - Engineering and cost studies-addition of generation capacity.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Engineering and cost studies-addition of generation... TO ELECTRIC LOANS AND GUARANTEES Construction Work Plans and Related Studies § 1710.253 Engineering... engineering and cost studies as specified by RUS. The studies shall cover a period from the beginning of...

  8. 7 CFR 1710.253 - Engineering and cost studies-addition of generation capacity.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Engineering and cost studies-addition of generation... TO ELECTRIC LOANS AND GUARANTEES Construction Work Plans and Related Studies § 1710.253 Engineering... engineering and cost studies as specified by RUS. The studies shall cover a period from the beginning of...

  9. Computer-Based Demonstrations in Cognitive Psychology: Benefits and Costs

    ERIC Educational Resources Information Center

    Copeland, David E.; Scott, Jenna R.; Houska, Jeremy Ashton

    2010-01-01

    This study examined the costs and benefits of using demonstrations in an upper level psychology course. For 6 topics, half of the class read a chapter that explained the concept and theoretical explanations for the described effects, and the other half participated in a demonstration in addition to the reading. Students overwhelmingly reported…

  10. 42 CFR 413.355 - Additional payment: QIO photocopy and mailing costs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing Facilities § 413.355 Additional payment: QIO photocopy and mailing costs. An additional payment is made to a skilled nursing facility in accordance with § 476.78 of...

  11. 42 CFR 413.355 - Additional payment: QIO photocopy and mailing costs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing Facilities § 413.355 Additional payment: QIO photocopy and mailing costs. An additional payment is made to a skilled nursing facility in accordance with § 476.78 of...

  12. 42 CFR 413.355 - Additional payment: QIO photocopy and mailing costs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... RENAL DISEASE SERVICES; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing Facilities § 413.355 Additional payment: QIO photocopy and mailing costs. An additional payment is made to a skilled nursing facility in accordance with § 476.78 of...

  13. Additional extensions to the NASCAP computer code, volume 3

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Cooke, D. L.

    1981-01-01

    The ION computer code is designed to calculate charge exchange ion densities, electric potentials, plasma temperatures, and current densities external to a neutralized ion engine in R-Z geometry. The present version assumes the beam ion current and density to be known and specified, and the neutralizing electrons to originate from a hot-wire ring surrounding the beam orifice. The plasma is treated as being resistive, with an electron relaxation time comparable to the plasma frequency. Together with the thermal and electrical boundary conditions described below and other straightforward engine parameters, these assumptions suffice to determine the required quantities. The ION code, written in ASCII FORTRAN for UNIVAC 1100 series computers, is designed to be run interactively, although it can also be run in batch mode. The input is free-format, and the output is mainly graphical, using the machine-independent graphics developed for the NASCAP code. The executive routine calls the code's major subroutines in user-specified order, and the code allows great latitude for restart and parameter change.

  14. Additional extensions to the NASCAP computer code, volume 1

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.; Stannard, P. R.

    1981-01-01

    Extensions and revisions to a computer code that comprehensively analyzes problems of spacecraft charging (NASCAP) are documented. Using a fully three dimensional approach, it can accurately predict spacecraft potentials under a variety of conditions. Among the extensions are a multiple electron/ion gun test tank capability, and the ability to model anisotropic and time dependent space environments. Also documented are a greatly extended MATCHG program and the preliminary version of NASCAP/LEO. The interactive MATCHG code was developed into an extremely powerful tool for the study of material-environment interactions. The NASCAP/LEO, a three dimensional code to study current collection under conditions of high voltages and short Debye lengths, was distributed for preliminary testing.

  15. A Web-Based Computer-Tailored Alcohol Prevention Program for Adolescents: Cost-Effectiveness and Intersectoral Costs and Benefits

    PubMed Central

    2016-01-01

    Background Preventing excessive alcohol use among adolescents is important not only to foster individual and public health, but also to reduce alcohol-related costs inside and outside the health care sector. Computer tailoring can be both effective and cost-effective for working with many lifestyle behaviors, yet the available information on the cost-effectiveness of computer tailoring for reducing alcohol use by adolescents is limited as is information on the costs and benefits pertaining to sectors outside the health care sector, also known as intersectoral costs and benefits (ICBs). Objective The aim was to assess the cost-effectiveness of a Web-based computer-tailored intervention for reducing alcohol use and binge drinking by adolescents from a health care perspective (excluding ICBs) and from a societal perspective (including ICBs). Methods Data used were from the Alcoholic Alert study, a cluster randomized controlled trial with randomization at the level of schools into two conditions. Participants either played a game with tailored feedback on alcohol awareness after the baseline assessment (intervention condition) or received care as usual (CAU), meaning that they had the opportunity to play the game subsequent to the final measurement (waiting list control condition). Data were recorded at baseline (T0=January/February 2014) and after 4 months (T1=May/June 2014) and were used to calculate incremental cost-effectiveness ratios (ICERs), both from a health care perspective and a societal perspective. Stochastic uncertainty in the data was dealt with by using nonparametric bootstraps (5000 simulated replications). Additional sensitivity analyses were conducted based on excluding cost outliers. Subgroup cost-effectiveness analyses were conducted based on several background variables, including gender, age, educational level, religion, and ethnicity. Results From both the health care perspective and the societal perspective for both outcome measures, the
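The ICER reported by such analyses is the incremental cost divided by the incremental effect; a minimal sketch with hypothetical values:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    # Incremental cost-effectiveness ratio: extra cost per extra unit of effect
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-participant values: intervention vs. care as usual (CAU),
# with effect measured as binge-drinking episodes averted (not study data)
print(icer(cost_new=120.0, effect_new=0.15, cost_old=80.0, effect_old=0.10))
```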

  16. Additive Manufacturing of Anatomical Models from Computed Tomography Scan Data.

    PubMed

    Gür, Y

    2014-12-01

The purpose of the study presented here was to investigate the manufacturability of human anatomical models from Computed Tomography (CT) scan data via a 3D desktop printer which uses fused deposition modelling (FDM) technology. First, Digital Imaging and Communications in Medicine (DICOM) CT scan data were converted to 3D Standard Triangle Language (STL) format using the InVesalius digital imaging program. Once this STL file is obtained, a 3D physical version of the anatomical model can be fabricated by a desktop 3D FDM printer. As a case study, a patient's skull CT scan data was considered, and a tangible version of the skull was manufactured by a 3D FDM desktop printer. During the 3D printing process, the skull was built using acrylonitrile-butadiene-styrene (ABS) co-polymer plastic. The printed model showed that 3D FDM printing technology is able to fabricate anatomical models with high accuracy. As a result, the skull model can be used for preoperative surgical planning, medical training activities, and implant design and simulation, demonstrating the potential of FDM technology in the medical field. It will also improve communication between medical staff and patients. The current result indicates that a 3D desktop printer which uses FDM technology can be used to obtain accurate anatomical models. PMID:26336695
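The STL files mentioned above are simple lists of triangular facets; a minimal, self-contained sketch of an ASCII STL writer (the single facet is a hypothetical stand-in for a reconstructed surface):

```python
def write_ascii_stl(path, triangles):
    """Write triangles (each a list of three (x, y, z) vertex tuples) as ASCII STL.
    Normals are left at (0, 0, 0); most tools recompute them from vertex order."""
    with open(path, "w") as f:
        f.write("solid model\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid model\n")

# A single hypothetical facet, as a stand-in for a surface extracted from CT data
write_ascii_stl("demo.stl", [[(0, 0, 0), (1, 0, 0), (0, 1, 0)]])
```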

  17. Computational calculation of equilibrium constants: addition to carbonyl compounds.

    PubMed

    Gómez-Bombarelli, Rafael; González-Pérez, Marina; Pérez-Prior, María Teresa; Calle, Emilio; Casado, Julio

    2009-10-22

    Hydration reactions are relevant for understanding many organic mechanisms. Since the experimental determination of hydration and hemiacetalization equilibrium constants is fairly complex, computational calculations now offer a useful alternative to experimental measurements. In this work, carbonyl hydration and hemiacetalization constants were calculated from the free energy differences between compounds in solution, using absolute and relative approaches. The following conclusions can be drawn: (i) The use of a relative approach in the calculation of hydration and hemiacetalization constants allows compensation of systematic errors in the solvation energies. (ii) On average, the methodology proposed here can predict hydration constants within +/- 0.5 log K(hyd) units for aldehydes. (iii) Hydration constants can be calculated for ketones and carboxylic acid derivatives within less than +/- 1.0 log K(hyd), on average, at the CBS-Q level of theory. (iv) The proposed methodology can predict hemiacetal formation constants accurately at the MP2 6-31++G(d,p) level using a common reference. If group references are used, the results obtained using the much cheaper DFT-B3LYP 6-31++G(d,p) level are almost as accurate. (v) In general, the best results are obtained if a common reference for all compounds is used. The use of group references improves the results at the lower levels of theory, but at higher levels, this becomes unnecessary. PMID:19761202

  18. Computational Calculation of Equilibrium Constants: Addition to Carbonyl Compounds

    NASA Astrophysics Data System (ADS)

    Gómez-Bombarelli, Rafael; González-Pérez, Marina; Pérez-Prior, María Teresa; Calle, Emilio; Casado, Julio

    2009-09-01

    Hydration reactions are relevant for understanding many organic mechanisms. Since the experimental determination of hydration and hemiacetalization equilibrium constants is fairly complex, computational calculations now offer a useful alternative to experimental measurements. In this work, carbonyl hydration and hemiacetalization constants were calculated from the free energy differences between compounds in solution, using absolute and relative approaches. The following conclusions can be drawn: (i) The use of a relative approach in the calculation of hydration and hemiacetalization constants allows compensation of systematic errors in the solvation energies. (ii) On average, the methodology proposed here can predict hydration constants within ± 0.5 log Khyd units for aldehydes. (iii) Hydration constants can be calculated for ketones and carboxylic acid derivatives within less than ± 1.0 log Khyd, on average, at the CBS-Q level of theory. (iv) The proposed methodology can predict hemiacetal formation constants accurately at the MP2 6-31++G(d,p) level using a common reference. If group references are used, the results obtained using the much cheaper DFT-B3LYP 6-31++G(d,p) level are almost as accurate. (v) In general, the best results are obtained if a common reference for all compounds is used. The use of group references improves the results at the lower levels of theory, but at higher levels, this becomes unnecessary.
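The link between the computed free energy differences and the reported log K values is the standard relation K = exp(-ΔG/RT); a minimal sketch with a hypothetical ΔG, not a value from the paper:

```python
from math import log

R = 8.314462618e-3  # gas constant, kJ/(mol K)
T = 298.15          # temperature, K

def log10_K(delta_G):
    # Standard relation K = exp(-delta_G / (R T)), returned directly as log10 K
    # delta_G: free energy change of the reaction in solution, kJ/mol
    return -delta_G / (R * T * log(10))

# Hypothetical Delta G for a carbonyl hydration (kJ/mol)
print(round(log10_K(-5.7), 2))  # -> 1.0
```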

  20. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    NASA Astrophysics Data System (ADS)

    Goodwin, Bruce

    2015-03-01

This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examines their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the iterative engineering design and prototype cycle, thereby dramatically reducing the cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  1. Pupillary dynamics reveal computational cost in sentence planning.

    PubMed

    Sevilla, Yamila; Maldonado, Mora; Shalóm, Diego E

    2014-01-01

    This study investigated the computational cost associated with grammatical planning in sentence production. We measured people's pupillary responses as they produced spoken descriptions of depicted events. We manipulated the syntactic structure of the target by training subjects to use different types of sentences following a colour cue. The results showed higher increase in pupil size for the production of passive and object dislocated sentences than for active canonical subject-verb-object sentences, indicating that more cognitive effort is associated with more complex noncanonical thematic order. We also manipulated the time at which the cue that triggered structure-building processes was presented. Differential increase in pupil diameter for more complex sentences was shown to rise earlier as the colour cue was presented earlier, suggesting that the observed pupillary changes are due to differential demands in relatively independent structure-building processes during grammatical planning. Task-evoked pupillary responses provide a reliable measure to study the cognitive processes involved in sentence production.

  2. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    PubMed

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-01

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon World Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. PMID:25753841
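The cost-effectiveness comparison can be reduced to cost per job; a break-even sketch with hypothetical prices, not the paper's measured figures:

```python
# Hypothetical break-even sketch: in-house node vs. on-demand cloud instances
node_price = 8000.0          # purchase price of a physical compute node, USD
node_lifetime_jobs = 2000    # jobs the node completes over its service life
cloud_price_per_hour = 0.40  # on-demand instance, USD/hour (illustrative)
job_hours = 6.0              # wall time of one typical quantum chemistry job

in_house_cost_per_job = node_price / node_lifetime_jobs  # 4.00 USD
cloud_cost_per_job = cloud_price_per_hour * job_hours    # 2.40 USD
print(cloud_cost_per_job < in_house_cost_per_job)        # True
```

Under these (made-up) numbers the cloud is cheaper per job; the paper's point is that the balance flips for large numbers of large computations.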

  4. Manual of phosphoric acid fuel cell power plant cost model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimating system capital costs, and an economic analysis that determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.
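The levelized annual cost mentioned above is conventionally obtained by annualizing the capital cost with a capital recovery factor; a sketch with hypothetical plant figures:

```python
def capital_recovery_factor(rate, years):
    # Converts a present capital cost into an equivalent uniform annual cost
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

# Hypothetical fuel cell plant figures (illustrative only)
capital_cost = 2_000_000.0  # USD
annual_om = 120_000.0       # operating and maintenance, USD/year
rate, years = 0.08, 20      # discount rate and plant life

levelized_annual_cost = (capital_cost * capital_recovery_factor(rate, years)
                         + annual_om)
print(round(levelized_annual_cost))
```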

  5. Additives

    NASA Technical Reports Server (NTRS)

    Smalheer, C. V.

    1973-01-01

    The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.

  6. Low Cost Injection Mold Creation via Hybrid Additive and Conventional Manufacturing

    SciTech Connect

    Dehoff, Ryan R.; Watkins, Thomas R.; List, III, Frederick Alyious; Carver, Keith; England, Roger

    2015-12-01

    The purpose of the proposed project between Cummins and ORNL is to significantly reduce the cost of the tooling (machining and materials) required to create injection molds to make plastic components. Presently, the high cost of this tooling forces the design decision to make cast aluminum parts because Cummins typical production volumes are too low to allow injection molded plastic parts to be cost effective with the amortized cost of the injection molding tooling. In addition to reducing the weight of components, polymer injection molding allows the opportunity for the alternative cooling methods, via nitrogen gas. Nitrogen gas cooling offers an environmentally and economically attractive cooling option, if the mold can be manufactured economically. In this project, a current injection molding design was optimized for cooling using nitrogen gas. The various components of the injection mold tooling were fabricated using the Renishaw powder bed laser additive manufacturing technology. Subsequent machining was performed on the as deposited components to form a working assembly. The injection mold is scheduled to be tested in a projection setting at a commercial vendor selected by Cummins.

  7. SideRack: A Cost-Effective Addition to Commercial Zebrafish Housing Systems

    PubMed Central

    Burg, Leonard; Gill, Ryan; Balciuniene, Jorune

    2014-01-01

Commercially available aquatic housing systems provide excellent and relatively trouble-free hardware for rearing and housing juvenile as well as adult zebrafish. However, the cost of such systems is quite high and potentially prohibitive for smaller educational and research institutions. The need for tank space prompted us to experiment with various additions to our existing Aquaneering system. We also noted that the high water exchange rates typical of commercial systems are suboptimal for quick growth of juvenile fish. We devised a housing system we call “SideRack,” which contains 20 large tanks with air supply and slow water circulation. It enables cost-effective expansion of an existing fish facility, with the key additional benefit of increased growth and maturation rates of juvenile fish. PMID:24611601

  8. Computer program to perform cost and weight analysis of transport aircraft. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A digital computer program for evaluating the weight and costs of advanced transport designs was developed. The resultant program, intended for use at the preliminary design level, incorporates both batch mode and interactive graphics run capability. The basis of the weight and cost estimation method developed is a unique way of predicting the physical design of each detail part of a vehicle structure at a time when only configuration concept drawings are available. In addition, the technique relies on methods to predict the precise manufacturing processes and the associated material required to produce each detail part. Weight data are generated in four areas of the program. Overall vehicle system weights are derived on a statistical basis as part of the vehicle sizing process. Theoretical weights, actual weights, and the weight of the raw material to be purchased are derived as part of the structural synthesis and part definition processes based on the computed part geometry.

  9. Cost-Sensitive Boosting: Fitting an Additive Asymmetric Logistic Regression Model

    NASA Astrophysics Data System (ADS)

    Li, Qiu-Jie; Mao, Yao-Bin; Wang, Zhi-Quan; Xiang, Wen-Bo

Conventional machine learning algorithms like boosting tend to treat all misclassification errors equally, which is inadequate for certain cost-sensitive classification problems such as object detection. Although many cost-sensitive extensions of boosting, created by directly modifying the weighting strategy of the corresponding original algorithms, have been proposed and reported, they are heuristic in nature and proved effective only by empirical results, lacking sound theoretical analysis. This paper develops a framework from a statistical insight that can embody almost all existing cost-sensitive boosting algorithms: fitting an additive asymmetric logistic regression model by stage-wise optimization of certain criteria. Four cost-sensitive versions of boosting algorithms are derived, namely CSDA, CSRA, CSGA and CSLB, which respectively correspond to Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost and LogitBoost. Experimental results on the application of face detection show the effectiveness of the proposed learning framework in reducing the cumulative misclassification cost.
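The cost-sensitive weighting idea can be sketched as Discrete AdaBoost with per-example costs folded into the initial weights; this is a common, simple heuristic illustration, not the paper's derived CSDA update:

```python
import math

def stump_predict(x, thresh, sign):
    # Decision stump on 1-D data: +sign above the threshold, -sign below
    return sign if x >= thresh else -sign

def train_cost_sensitive_adaboost(xs, ys, costs, rounds=5):
    """Discrete AdaBoost with misclassification costs folded into the
    initial example weights (a simple cost-sensitive heuristic)."""
    w = [c / sum(costs) for c in costs]  # cost-weighted initial distribution
    ensemble = []
    for _ in range(rounds):
        # Exhaustively pick the stump minimizing the weighted error
        best = None
        for thresh in xs:
            for sign in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, thresh, sign) != y)
                if best is None or err < best[0]:
                    best = (err, thresh, sign)
        err, thresh, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thresh, sign))
        # Re-weight: misclassified examples gain weight, then normalize
        w = [wi * math.exp(-alpha * y * stump_predict(x, thresh, sign))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    s = sum(a * stump_predict(x, t, sgn) for a, t, sgn in ensemble)
    return 1 if s >= 0 else -1

# Hypothetical data: missing a positive (y = +1) is five times as costly
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [-1, -1, -1, 1, 1, 1]
costs = [1, 1, 1, 5, 5, 5]
model = train_cost_sensitive_adaboost(xs, ys, costs)
print([predict(model, x) for x in xs])  # -> [-1, -1, -1, 1, 1, 1]
```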

  10. The Cost of an Additional Disability-Free Life Year for Older Americans: 1992–2005

    PubMed Central

    Cai, Liming

    2013-01-01

    Objective To estimate the cost of an additional disability-free life year for older Americans in 1992–2005. Data Source This study used the 1992–2005 Medicare Current Beneficiary Survey, a longitudinal survey of Medicare beneficiaries with a rotating panel design. Study Design The analysis used a multistate life table model to estimate probabilities of transition among a discrete set of health states (nondisabled, disabled, and dead) for two panels of older Americans in 1992 and 2002. Health spending incurred between annual health interviews was estimated by a generalized linear mixed model. Health status, including death, was simulated for each member of the panel using these transition probabilities, and the associated health spending was cross-walked to the simulated health changes. Principal Findings Disability-free life expectancy (DFLE) increased significantly more than life expectancy during the study period. Assuming that 50 percent of the gains in DFLE between 1992 and 2005 were attributable to increases in spending, the average discounted cost per additional disability-free life year was $71,000. Differences across gender and racial/ethnic groups were small. Conclusions The cost of an additional disability-free life year was substantially below previous estimates based on mortality trends alone. PMID:22670874
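
    The multistate life table idea can be sketched as a small discrete-time Markov model over the three health states (the annual transition probabilities below are hypothetical, not estimates from the MCBS data):

```python
# hypothetical annual transition probabilities among nondisabled (N),
# disabled (D), and dead (an absorbing state)
P = {
    "N": {"N": 0.90, "D": 0.07, "dead": 0.03},
    "D": {"N": 0.15, "D": 0.75, "dead": 0.10},
}

def expected_years(start="N", horizon=50):
    """Step the state distribution forward one year at a time and
    accumulate expected years spent nondisabled (DFLE) and alive (LE)."""
    dist = {"N": 0.0, "D": 0.0, "dead": 0.0}
    dist[start] = 1.0
    dfle = le = 0.0
    for _ in range(horizon):
        dfle += dist["N"]
        le += dist["N"] + dist["D"]
        nxt = {"N": 0.0, "D": 0.0, "dead": dist["dead"]}
        for s in ("N", "D"):
            for t, p in P[s].items():
                nxt[t] += dist[s] * p
        dist = nxt
    return dfle, le

dfle, le = expected_years()
```

    Pairing each simulated year of health status with a spending estimate, as the study does with its generalized linear mixed model, then yields cost per additional disability-free year.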

  11. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Computation of adjusted average per capita cost... of adjusted average per capita cost (AAPCC). (a) Basic data. In computing the AAPCC, CMS uses the U.S. per capita incurred cost and adjusts it by the factors specified in paragraph (c) of this section...

  12. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Computation of adjusted average per capita cost... of adjusted average per capita cost (AAPCC). (a) Basic data. In computing the AAPCC, CMS uses the U.S. per capita incurred cost and adjusts it by the factors specified in paragraph (c) of this section...

  13. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Computation of adjusted average per capita cost... adjusted average per capita cost (AAPCC). (a) Basic data. In computing the AAPCC, CMS uses the U.S. per capita incurred cost and adjusts it by the factors specified in paragraph (c) of this section...

  14. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Computation of adjusted average per capita cost... of adjusted average per capita cost (AAPCC). (a) Basic data. In computing the AAPCC, CMS uses the U.S. per capita incurred cost and adjusts it by the factors specified in paragraph (c) of this section...

  15. Teaching with Technology: The Classroom Manager. Cost-Conscious Computing.

    ERIC Educational Resources Information Center

    Smith, Rhea; And Others

    1992-01-01

    Teachers discuss how to make the most of technology in the classroom during a tight economy. Ideas include recycling computer printer ribbons, buying replacement batteries for computer power supply packs, upgrading via software, and soliciting donated computer equipment. (SM)

  16. 25 CFR 170.602 - If a tribe incurs unforeseen construction costs, can it get additional funds?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... sufficient additional funds are awarded. (See 25 CFR 900.130(e).) Miscellaneous Provisions ... 25 Indians 1 2013-04-01 2013-04-01 false If a tribe incurs unforeseen construction costs, can it... Funding Process § 170.602 If a tribe incurs unforeseen construction costs, can it get additional...

  17. Air-stripper design and costing computer program

    SciTech Connect

    Dzombak, D.A.; Roy, S.B.; Fang, H.J.

    1993-10-01

    Packed-tower, countercurrent air-stripping is widely used to remove volatile organic compounds from contaminated water. An air stripper is designed using a well-developed mathematical model of the process. However, the number of variables in the model exceeds the number of constraining equations by two, with the result that a number of alternative air-stripper designs are possible for a particular water treatment objective. To select one or several designs associated with minimum capital and operating costs, it is necessary to develop a large number of possible designs and to estimate the costs associated with each. The air-stripper design and costing (ASDC) program, a microcomputer-based public-domain program, automates the iterative design and cost calculations and thus enables rapid, preliminary evaluation of alternative air-stripper designs and associated costs. In this article, the design methodology and cost-estimation techniques incorporated in ASDC are described, and ASDC cost predictions are compared with costs reported for actual operating air strippers.
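
    Because the design has two degrees of freedom, a program like ASDC can enumerate candidate designs and keep the cheapest one that meets the treatment objective. A schematic of that screening loop (the feasibility and cost functions below are toy stand-ins, not the actual ASDC correlations or transfer-unit equations):

```python
def meets_target(d, h, required_volume=6.0):
    """Toy feasibility check: treat removal capacity as proportional
    to packed volume (diameter squared times packing height)."""
    return d * d * h >= required_volume

def total_cost(d, h):
    """Toy capital-plus-operating cost for one candidate tower design."""
    return 20_000 * d * d + 4_000 * h + 1_500 * d * d * h

# enumerate diameter/height combinations, keep the cheapest feasible one
candidates = [(d, h)
              for d in (0.5, 1.0, 1.5, 2.0)
              for h in (2.0, 4.0, 6.0, 8.0)
              if meets_target(d, h)]
best = min(candidates, key=lambda dh: total_cost(*dh))
```

    The real program replaces both toy functions with the stripping model and cost-estimation techniques described in the article, but the iterate-and-compare structure is the same.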

  18. Cost-Effective Applications of Computer-Based Education

    ERIC Educational Resources Information Center

    Avner, R. A.

    1978-01-01

    Cost effective applications of CBE do exist; however, they demand detailed cost information for all appropriate alternatives to CAI, a thorough understanding of instructional design, and an expert knowledge of the relative capabilities of alternative media in supporting particular instructional approaches. (Author/RAO)

  19. Reducing Computer Costs of Students Using the SPSS.

    ERIC Educational Resources Information Center

    Soley, Lawrence C.; And Others

    1980-01-01

    Compares the cost of using the general mode and the integer mode of the Statistical Package for the Social Sciences. Indicates that the integer mode is generally more cost efficient and should be learned by journalism students planning to analyze research data. (TJ)

  20. How to produce personality neuroscience research with high statistical power and low additional cost.

    PubMed

    Mar, Raymond A; Spreng, R Nathan; Deyoung, Colin G

    2013-09-01

    Personality neuroscience involves examining relations between cognitive or behavioral variability and neural variables like brain structure and function. Such studies have uncovered a number of fascinating associations but require large samples, which are expensive to collect. Here, we propose a system that capitalizes on neuroimaging data commonly collected for separate purposes and combines it with new behavioral data to test novel hypotheses. Specifically, we suggest that groups of researchers compile a database of structural (i.e., anatomical) and resting-state functional scans produced for other task-based investigations and pair these data with contact information for the participants who contributed the data. This contact information can then be used to collect additional cognitive, behavioral, or individual-difference data that are then reassociated with the neuroimaging data for analysis. This would allow for novel hypotheses regarding brain-behavior relations to be tested on the basis of large sample sizes (with adequate statistical power) for low additional cost. This idea can be implemented at small scales at single institutions, among a group of collaborating researchers, or perhaps even within a single lab. It can also be implemented at a large scale across institutions, although doing so would entail a number of additional complications.
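
    The power argument can be made concrete with a standard Fisher-z approximation for detecting a nonzero brain-behavior correlation (a generic calculation, not one from the article; the effect size r = 0.2 and two-sided alpha = .05 are illustrative assumptions):

```python
import math

def corr_power(r, n):
    """Approximate power of a two-sided alpha = .05 test that a Pearson
    correlation differs from zero, using the Fisher z-transformation
    (the negligible opposite tail is ignored)."""
    z_crit = 1.959964                          # two-sided 5% cutoff
    ncp = math.atanh(r) * math.sqrt(n - 3)     # noncentrality
    return 0.5 * (1.0 + math.erf((ncp - z_crit) / math.sqrt(2.0)))

power_small = corr_power(0.2, 50)    # a typical single-study sample
power_pooled = corr_power(0.2, 500)  # a pooled multi-study database
```

    At r = 0.2, going from 50 to 500 participants moves power from well under one half to near certainty, which is the rationale for pooling scans across studies.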

  1. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Computation of adjusted average per capita cost (AAPCC). 417.588 Section 417.588 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF... adjusted average per capita cost (AAPCC). (a) Basic data. In computing the AAPCC, CMS uses the U.S....

  2. Some Useful Cost-Benefit Criteria for Evaluating Computer-Based Test Delivery Models and Systems

    ERIC Educational Resources Information Center

    Luecht, Richard M.

    2005-01-01

    Computer-based testing (CBT) is typically implemented using one of three general test delivery models: (1) multiple fixed testing (MFT); (2) computer-adaptive testing (CAT); or (3) multistage testing (MST). This article reviews some of the real cost drivers associated with CBT implementation--focusing on item production costs, the costs…

  3. Cost-Effective Additive Manufacturing in Space: HELIOS Technology Challenge Guide

    NASA Technical Reports Server (NTRS)

    DeVieneni, Alayna; Velez, Carlos Andres; Benjamin, David; Hollenbeck, Jay

    2012-01-01

    Welcome to the HELIOS Technology Challenge Guide. This document is intended to serve as a general road map for participants of the HELIOS Technology Challenge [HTC] Program and the associated inaugural challenge: HTC-01: Cost-Effective Additive Manufacturing in Space. Please note that this guide is not a rule book and is not meant to hinder the development of innovative ideas. Its primary goal is to highlight the objectives of the HTC-01 Challenge and to describe possible solution routes and pitfalls that such technology may encounter in space. Please also note that participants wishing to demonstrate any hardware developed under this program during any future HELIOS Technology Challenge showcase event(s) may be subject to event regulations to be published separately at a later date.

  4. ESF-X: a low-cost modular experiment computer for space flight experiments

    NASA Astrophysics Data System (ADS)

    Sell, Steven; Zapetis, Joseph; Littlefield, Jim; Vining, Joanne

    2004-08-01

    The high cost associated with spaceflight research often compels experimenters to scale back their research goals significantly for purely budgetary reasons; among experiment systems, control and data-collection electronics are a major contributor to total project cost. ESF-X was developed as an architecture demonstration in response to this need: it is a highly capable, radiation-protected experiment support computer, designed to be configurable on demand to each investigator's particular experiment needs, and operational in LEO for missions lasting up to several years (e.g., ISS EXPRESS) without scheduled service or maintenance. ESF-X can accommodate up to 255 data channels (I/O, A/D, D/A, etc.), allocated per customer request, with data rates up to 40 kHz. Additionally, ESF-X can be programmed using the graphical block-diagram-based programming languages Simulink and MATLAB. This represents a major cost-saving opportunity for future investigators, who can now obtain a customized, space-qualified experiment controller at steeply reduced cost compared to a new design, and without the performance compromises associated with using preexisting generic systems. This paper documents the functional benchtop prototype, which utilizes a combination of COTS and space-qualified components, along with unit-gravity-specific provisions appropriate for laboratory evaluation of the ESF-X design concept and its physical implementation.

  5. Additive Manufacturing for Cost Efficient Production of Compact Ceramic Heat Exchangers and Recuperators

    SciTech Connect

    Shulman, Holly; Ross, Nicole

    2015-10-30

    An additive manufacturing technique known as laminated object manufacturing (LOM) was used to fabricate compact ceramic heat exchanger prototypes. LOM uses precision CO2 laser cutting of ceramic green tapes, which are then precision stacked to build a 3D object with fine internal features. Modeling was used to develop prototype designs and predict the thermal response, stress, and efficiency in the ceramic heat exchangers. Build testing and materials analyses were used to provide feedback for the design selection. During this development process, laminated object manufacturing protocols were established. This included laser optimization, strategies for fine feature integrity, lamination fluid control, green handling, and firing profile. Three full-size prototypes were fabricated using two different designs. One prototype was selected for performance testing. During testing, cross-talk leakage prevented the application of a high pressure differential; however, the prototype successfully withstood the high-temperature operating conditions (1300 °F). In addition, analysis showed that the bulk of the part did not have cracks or leakage issues. This led to the development of a modular method for next-generation LOM heat exchangers. A scale-up cost analysis showed that, given a purpose-built LOM system, these ceramic heat exchangers would be affordable for their intended applications.

  6. Developing Specifications for a Low-Cost Computer System for Secondary Schools. PREP 38.

    ERIC Educational Resources Information Center

    Kleiner, George

    More and more secondary schools are becoming interested in introducing their students to computers and computer concepts. A central problem for such schools, however, is obtaining reliable computer service with capacity for all the students who are interested, but at a cost the school can afford. Although many schools use commercial or small-scale…

  7. Energy Drain by Computers Stifles Efforts at Cost Control

    ERIC Educational Resources Information Center

    Keller, Josh

    2009-01-01

    The high price of storing and processing data is hurting colleges and universities across the country. In response, some institutions are embracing greener technologies to keep costs down and help the environment. But compared with other industries, colleges and universities have been slow to understand the problem and to adopt energy-saving…

  8. Commodity CPU-GPU System for Low-Cost, High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhang, S.; Weiss, R. M.; Barnett, G. A.; Yuen, D. A.

    2009-12-01

    We have put together a desktop computer system for under $2,500 from commodity components, consisting of one quad-core CPU (Intel Core 2 Quad Q6600 Kentsfield 2.4GHz) and two high-end GPUs (nVidia's GeForce GTX 295 and Tesla C1060). A 1200-watt power supply is required. On this commodity system, we have constructed an easy-to-use hybrid computing environment, in which the Message Passing Interface (MPI) is used for managing the workloads, for transferring data among different GPU devices, and for minimizing demands on CPU memory. Test runs using the MAGMA (Matrix Algebra on GPU and Multicore Architectures) library show that the speedups for double-precision calculations can be greater than 10 (GPU vs. CPU) and are larger (> 20) for single-precision calculations. In addition, we have enabled the combination of Matlab with CUDA for interactive visualization through MPI, i.e., two GPU devices are used for simulation and one GPU device is used for visualizing the computing results as the simulation proceeds. Our experience with this commodity system has shown that running multiple applications on one GPU device or running one application across multiple GPU devices can be done as conveniently as on CPUs. With NVIDIA CEO Jen-Hsun Huang's claim that over the next 6 years GPU processing power will increase by 570x compared to 3x for CPUs, future low-cost commodity computers such as ours may be a remedy for the long wait queues of the world's supercomputers, especially for small- and mid-scale computation. Our goal here is to explore the limits and capabilities of this emerging technology and to get ourselves ready to run large-scale simulations on the next generation of computing environments, which we believe will hybridize CPU and GPU architectures.

  9. A Low Cost Microcomputer Laboratory for Investigating Computer Architecture.

    ERIC Educational Resources Information Center

    Mitchell, Eugene E., Ed.

    1980-01-01

    Described is a microcomputer laboratory at the United States Military Academy at West Point, New York, which provides easy access to non-volatile memory and a single input/output file system for 16 microcomputer laboratory positions. A microcomputer network that has a centralized data base is implemented using the concepts of computer network…

  10. Computers and Social Knowledge; Opportunities and Opportunity Cost.

    ERIC Educational Resources Information Center

    Hartoonian, Michael

    Educators must use computers to move society beyond the information age and toward the age of wisdom. The movement toward knowledge and wisdom constitutes an evolution beyond the "third wave" or electronic/information age, the phase of history in which, according to Alvin Toffler, we are now living. We are already moving into a fourth wave, the…

  11. Utilizing a Collaborative Cross Number Puzzle Game to Develop the Computing Ability of Addition and Subtraction

    ERIC Educational Resources Information Center

    Chen, Yen-Hua; Looi, Chee-Kit; Lin, Chiu-Pin; Shao, Yin-Juan; Chan, Tak-Wai

    2012-01-01

    While addition and subtraction is a key mathematical skill for young children, a typical activity for them in classrooms involves doing repetitive arithmetic calculation exercises. In this study, we explore a collaborative way for students to learn these skills in a technology-enabled way with wireless computers. Two classes, comprising a total of…

  12. Estimating development cost for a tailored interactive computer program to enhance colorectal cancer screening compliance.

    PubMed

    Lairson, David R; Chang, Yu-Chia; Bettencourt, Judith L; Vernon, Sally W; Greisinger, Anthony

    2006-01-01

    The authors used an actual-work estimate method to estimate the cost of developing a tailored interactive computer education program to improve compliance with colorectal cancer screening guidelines in a large multi-specialty group medical practice. Resource use was prospectively collected from time logs, administrative records, and a design and computing subcontract. Sensitivity analysis was performed to examine the uncertainty of the overhead cost rate and other parameters. The cost of developing the system was $328,866. The development cost was $52.79 per patient when amortized over a 7-year period with a cohort of 1,000 persons. About 20% of the cost was incurred in defining the theoretic framework and supporting literature, constructing the variables and survey, and conducting focus groups. About 41% of the cost was for developing the messages, algorithms, and constructing program elements, and the remaining cost was to create and test the computer education program. About 69% of the cost was attributable to personnel expenses. Development cost is rarely estimated but is important for feasibility studies and ex-ante economic evaluations of alternative interventions. The findings from this study may aid decision makers in planning, assessing, budgeting, and pricing development of tailored interactive computer-based interventions. PMID:16799126
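
    The per-patient figure can be reproduced with a standard capital-recovery (annuity) formula; the 3% discount rate below is an assumption, chosen because it yields a value close to the reported $52.79:

```python
def amortized_cost_per_patient(total_cost, years, cohort, rate):
    """Annualize a one-time development cost with the capital-recovery
    factor rate / (1 - (1 + rate)**-years), then spread the annual
    payment over the cohort."""
    crf = rate / (1.0 - (1.0 + rate) ** -years)
    return total_cost * crf / cohort

# assumed 3% discount rate (not stated in this abstract)
per_patient = amortized_cost_per_patient(328_866, 7, 1_000, 0.03)
```

    With no discounting the figure would be 328,866 / (7 x 1,000), about $46.98, so the reported $52.79 implies a modest positive discount rate.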

  13. Multiple sequence alignment with arbitrary gap costs: computing an optimal solution using polyhedral combinatorics.

    PubMed

    Althaus, Ernst; Caprara, Alberto; Lenhof, Hans-Peter; Reinert, Knut

    2002-01-01

    Multiple sequence alignment is one of the dominant problems in computational molecular biology. Numerous scoring functions and methods have been proposed, most of which result in NP-hard problems. In this paper we propose for the first time a general formulation for multiple alignment with arbitrary gap costs based on an integer linear program (ILP). In addition we describe a branch-and-cut algorithm to solve the ILP to optimality. We evaluate the performance of our approach in terms of running time and alignment quality using the BAliBase database of reference alignments. The results show that our implementation ranks among the best programs developed so far.
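
    What "arbitrary gap costs" means can be illustrated with a small pairwise dynamic program (a textbook recurrence for two sequences, not the authors' ILP/branch-and-cut method for multiple alignment; the gap-cost function g(k) = 2 + k and unit substitution costs are hypothetical):

```python
def align_cost(a, b, sub, g):
    """Minimum-cost global alignment of strings a and b, where a gap of
    length k costs g(k) and aligning x with y costs sub(x, y)."""
    n, m = len(a), len(b)
    D = [[float("inf")] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i and j:  # align a[i-1] with b[j-1]
                D[i][j] = min(D[i][j], D[i-1][j-1] + sub(a[i-1], b[j-1]))
            for k in range(1, i + 1):  # gap of length k in b
                D[i][j] = min(D[i][j], D[i-k][j] + g(k))
            for k in range(1, j + 1):  # gap of length k in a
                D[i][j] = min(D[i][j], D[i][j-k] + g(k))
    return D[n][m]

cost = align_cost("AA", "A",
                  sub=lambda x, y: 0.0 if x == y else 1.0,
                  g=lambda k: 2.0 + k)  # opening a gap costs extra
```

    Because g is passed in as a function, affine, logarithmic, or entirely irregular gap penalties all drop in unchanged; scaling this idea to many sequences at once is what motivates the ILP formulation.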

  14. Method for computing marginal costs associated with on-site energy technologies

    SciTech Connect

    Bright, R.; Davitian, H.

    1980-08-01

    A method for calculating long-run marginal costs for an electric utility is described. The method is especially suitable for computing the marginal costs associated with the use of small on-site energy technologies, i.e., cogenerators, solar heating and hot water systems, wind generators, etc., which are interconnected with electric utilities. In particular, both the costs a utility avoids when power is delivered to it from a facility with an on-site generator and the marginal cost to the utility of supplementary power sold to the facility can be calculated. A utility capacity expansion model is used to compute changes in the utility's costs when loads are modified by the use of the on-site technology. Changes in capacity-related costs and production costs are thus computed in an internally consistent manner. The variable nature of the generation/load pattern of the on-site technology is treated explicitly. The method yields several measures of utility costs that can be used to develop rates based on marginal avoided costs for on-site technologies, as well as marginal-cost rates for conventional utility customers.
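
    The avoided-cost computation reduces to differencing two model runs; a minimal sketch (the cost function and all numbers are hypothetical stand-ins for a real capacity-expansion model):

```python
def annual_utility_cost(peak_mw, energy_mwh,
                        capacity_cost=120_000.0, production_cost=35.0):
    """Toy annual cost: a capacity charge per MW of peak load plus a
    production cost per MWh served."""
    return peak_mw * capacity_cost + energy_mwh * production_cost

base = annual_utility_cost(peak_mw=100.0, energy_mwh=500_000.0)
# on-site generation shaves 2 MW of peak and supplies 8,000 MWh
modified = annual_utility_cost(peak_mw=98.0, energy_mwh=492_000.0)
avoided_cost_per_mwh = (base - modified) / 8_000.0
```

    The avoided cost comes out above the bare production cost because the on-site unit also displaces capacity, which is exactly the effect the method is designed to capture consistently.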

  15. 78 FR 32224 - Availability of Version 3.1.2 of the Connect America Fund Phase II Cost Model; Additional...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ...; Additional Discussion Topics in Connect America Cost Model Virtual Workshop AGENCY: Federal Communications... issues in the ongoing virtual workshop. DATES: Comments are due on or before June 18, 2013. If you... comments. Virtual Workshop: In addition to the usual methods for filing electronic comments, the...

  16. Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application

    NASA Astrophysics Data System (ADS)

    Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.

    2013-12-01

    The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility, and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni time-series analysis of aerosol absorption optical depth (388 nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and the local system to avoid data transfer delays. Analyses over 3-, 6-, 12-, and 24-month spans were run on both the Cloud and the local system, and the processing times were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of computing, storage, data requests, and data transfer in/out. The computing cost is calculated from an hourly rate, and the storage cost from a rate per gigabyte per month. Incoming data transfer is free; outgoing transfer is charged per gigabyte. The costs for a local server system consist of hardware/software purchases, system maintenance and updates, and operating costs. The results showed that the Cloud platform had 38% better performance and cost 36% less than the local system. This investigation shows the potential of Cloud computing to increase system performance and lower the overall cost of system management.
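
    The cost components listed can be combined into a simple monthly estimator (the rates below are placeholders, not actual Amazon pricing):

```python
def monthly_cloud_cost(compute_hours, hourly_rate,
                       storage_gb, storage_rate_gb_month,
                       egress_gb, egress_rate_gb):
    """Monthly bill: compute hours plus storage plus data transfer out
    (incoming transfer is free in the pricing model described above)."""
    return (compute_hours * hourly_rate
            + storage_gb * storage_rate_gb_month
            + egress_gb * egress_rate_gb)

# one always-on instance, 500 GB stored, 200 GB served (rates assumed)
cost = monthly_cloud_cost(compute_hours=720, hourly_rate=0.10,
                          storage_gb=500, storage_rate_gb_month=0.03,
                          egress_gb=200, egress_rate_gb=0.09)
```

    Comparing this monthly figure against the amortized hardware, maintenance, and operating costs of a local server is the core of the evaluation described in the abstract.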

  17. A Mathematical Model for Project Planning and Cost Analysis in Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Fitzgerald, William F.

    Computer-assisted instruction (CAI) has become sufficiently widespread to require attention to the relationships between its costs, administration and benefits. Despite difficulties in instituting them, quantifiable cost-effectiveness analyses offer several advantages. They allow educators to specify with precision anticipated instructional loads,…

  18. Two Computer Programs for Equipment Cost Estimation and Economic Evaluation of Chemical Processes.

    ERIC Educational Resources Information Center

    Kuri, Carlos J.; Corripio, Armando B.

    1984-01-01

    Describes two computer programs for use in process design courses: an easy-to-use equipment cost estimation program based on latest cost correlations available and an economic evaluation program which calculates two profitability indices. Comparisons between programed and hand-calculated results are included. (JM)

  19. Minnesota Computer Aided Library System (MCALS); University of Minnesota Subsystem Cost/Benefits Analysis.

    ERIC Educational Resources Information Center

    Lourey, Eugene D., Comp.

    The Minnesota Computer Aided Library System (MCALS) provides a basis of unification for library service program development in Minnesota for eventual linkage to the national information network. A prototype plan for communications functions is illustrated. A cost/benefits analysis was made to show the cost/effectiveness potential for MCALS. System…

  20. An Evaluation of the Costs of Computer-Assisted Instruction. Program Report No. 80-B7.

    ERIC Educational Resources Information Center

    Levin, Henry M.; Woo, Louis

    Cost data were collected from a study on the effectiveness of computer assisted instruction (CAI) for culturally disadvantaged children in the Los Angeles Unified School District. Based upon the resource ingredients approach to measuring costs, it was found that up to three daily 10-minute sessions of drill and practice could be provided for each…

  1. The Ruggedized STD Bus Microcomputer - A low cost computer suitable for Space Shuttle experiments

    NASA Technical Reports Server (NTRS)

    Budney, T. J.; Stone, R. W.

    1982-01-01

    Previous space flight computers have been costly in terms of both hardware and software. The Ruggedized STD Bus Microcomputer is based on the commercial Mostek/Pro-Log STD Bus. Ruggedized PC cards can be based on commercial cards from more than 60 manufacturers, reducing hardware cost and design time. Software costs are minimized by using standard 8-bit microprocessors and by debugging code using commercial versions of the ruggedized flight boards while the flight hardware is being fabricated.

  2. hPIN/hTAN: Low-Cost e-Banking Secure against Untrusted Computers

    NASA Astrophysics Data System (ADS)

    Li, Shujun; Sadeghi, Ahmad-Reza; Schmitz, Roland

    We propose hPIN/hTAN, a low-cost token-based e-banking protection scheme that remains secure even when the adversary has full control over the user's computer. Compared with existing hardware-based solutions, hPIN/hTAN depends on neither a second trusted channel, nor a secure keypad, nor a computationally expensive encryption module.

  3. Using a small/low cost computer in an information center

    NASA Technical Reports Server (NTRS)

    Wilde, D. U.

    1972-01-01

    Small/low-cost computers are available with I/O capacities that make them suitable for SDI and retrospective searching on any of the many commercially available databases. A small two-tape computer system is assumed, and an analysis of its run-time equations leads to a three-step search procedure. Run times and costs are shown as a function of file size, number of search terms, and input transmission rates. Actual examples verify that it is economically feasible for an information center to consider its own small, dedicated computer system.
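
    The run-time analysis can be sketched as a simple linear model (the coefficients are hypothetical; the original report derived its equations for a specific two-tape system):

```python
def search_run_time(file_records, n_terms,
                    records_per_sec=400.0, per_term_match_sec=0.002):
    """Estimated batch run time: one sequential pass over the file plus
    a per-record matching cost that grows with the number of terms."""
    pass_time = file_records / records_per_sec
    match_time = file_records * n_terms * per_term_match_sec
    return pass_time + match_time

t_5_terms = search_run_time(100_000, 5)
t_10_terms = search_run_time(100_000, 10)
```

    Under this kind of model, matching cost dominates as the term list grows, which is why file size and term count appear as the key variables in the run-time curves.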

  4. Transport Sector Marginal Abatement Cost Curves in Computable General Equilibrium Model

    NASA Astrophysics Data System (ADS)

    Tippichai, Atit; Fukuda, Atsushi; Morisugi, Hisayoshi

    In the last decade, computable general equilibrium (CGE) models have emerged as a standard tool for climate policy evaluation owing to their ability to prospectively elucidate the character and magnitude of the economic impacts of energy and environmental policies. Furthermore, marginal abatement cost (MAC) curves, which represent GHG emission reduction potentials and costs, can be derived from these top-down economic models. However, few studies have addressed MAC curves for a specific sector across the broad set of countries needed to allocate optimal emission reductions. This paper aims to explicitly describe the meaning and character of MAC curves for the transport sector in a CGE context, using the AIM/CGE model developed by Toshihiko Masui. We find that the MAC curves derived in this study are the inverse of the general-equilibrium reduction function for CO2 emissions. Moreover, the transport-sector MAC curves for six regions, including the USA, EU-15, Japan, China, India, and Brazil, are compared to the reduction potentials under 100 USD/tCO2 in 2020 from a bottom-up study. The results show that the rankings of regional reduction potentials in the transport sector from this study are almost the same as in the bottom-up study, except for the ranks of the EU-15 and China. In addition, the range of reduction potentials from this study is wider, and only the USA has higher potentials than those derived from the bottom-up study.
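
    In this framing a MAC curve is just the derivative of a total abatement cost function; a numerical sketch (the quadratic cost function is a hypothetical stand-in for the CGE model's response, not the AIM/CGE results):

```python
def abatement_cost(q):
    """Toy convex total cost (e.g., million USD) of abating q MtCO2."""
    return 2.0 * q + 0.05 * q * q

def mac(q, dq=1e-4):
    """Marginal abatement cost at q: central-difference derivative."""
    return (abatement_cost(q + dq) - abatement_cost(q - dq)) / (2.0 * dq)

# points on the MAC curve: (abatement level, marginal cost)
curve = [(q, mac(q)) for q in (0.0, 10.0, 20.0, 30.0)]
```

    Reading the curve at a carbon price (say 100 USD/tCO2) and inverting gives the reduction potential at that price, which is how potentials from a top-down model can be compared with bottom-up estimates.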

  5. 78 FR 12271 - Wireline Competition Bureau Seeks Additional Comment In Connect America Cost Model Virtual Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-22

    ... Virtual Workshop AGENCY: Federal Communications Commission. ACTION: Proposed rule. SUMMARY: In this... Site: http://fjallfoss.fcc.gov/ecfs2/ . Follow the instructions for submitting comments. Virtual...://www.fcc.gov/blog/wcb-cost-model-virtual-workshop-2012 . People with Disabilities: Contact the FCC...

  6. Government regulation and public opposition create high additional costs for field trials with GM crops in Switzerland.

    PubMed

    Bernauer, Thomas; Tribaldos, Theresa; Luginbühl, Carolin; Winzeler, Michael

    2011-12-01

    Field trials with GM crops are not only plant science experiments. They are also social experiments concerning the implications of government imposed regulatory constraints and public opposition for scientific activity. We assess these implications by estimating additional costs due to government regulation and public opposition in a recent set of field trials in Switzerland. We find that for every Euro spent on research, an additional 78 cents were spent on security, an additional 31 cents on biosafety, and an additional 17 cents on government regulatory supervision. Hence the total additional spending due to government regulation and public opposition was around 1.26 Euros for every Euro spent on the research per se. These estimates are conservative; they do not include additional costs that are hard to monetize (e.g. stakeholder information and dialogue activities, involvement of various government agencies). We conclude that further field experiments with GM crops in Switzerland are unlikely unless protected sites are set up to reduce these additional costs.
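
    The per-Euro overheads combine additively; a one-line check using the figures from the abstract:

```python
# additional cents spent per research Euro, from the abstract
overheads = {"security": 0.78, "biosafety": 0.31, "regulation": 0.17}
extra_per_euro = sum(overheads.values())        # about 1.26
total_per_research_euro = 1.0 + extra_per_euro  # about 2.26 spent overall
```

    As the authors note, these are conservative figures: hard-to-monetize items such as stakeholder dialogue would raise the ratio further.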

  7. Plant process computer replacements - techniques to limit installation schedules and costs

    SciTech Connect

    Baker, M.D.; Olson, J.L.

    1992-01-01

    Plant process computer systems, a standard fixture in all nuclear power plants, are used to monitor and display important plant process parameters. Scanning thousands of field sensors and alarming out-of-limit values, these computer systems are heavily relied on by control room operators. The original nuclear steam supply system (NSSS) vendor for the power plant often supplied the plant process computer. Designed using sixties and seventies technology, a plant's original process computer has been obsolete for some time. Driven by increased maintenance costs and new US Nuclear Regulatory Commission regulations such as NUREG-0737, Suppl. 1, many utilities have replaced their process computers with more modern computer systems. Given that computer systems are by their nature prone to rapid obsolescence, this replacement cycle will likely repeat. A process computer replacement project can be a significant capital expenditure and must be performed during a scheduled refueling outage. The object of the installation process is to install a working system on schedule. Experience gained by supervising several computer replacement installations has taught lessons that, if applied, will shorten the schedule and limit the risk of costly delays. Examples illustrating this technique are given. This paper and these examples deal only with the installation process and assume that the replacement computer system has been adequately designed and has undergone development and factory testing.

  8. Development and Validation of a Fast, Accurate and Cost-Effective Aeroservoelastic Method on Advanced Parallel Computing Systems

    NASA Technical Reports Server (NTRS)

    Goodwin, Sabine A.; Raj, P.

    1999-01-01

    Progress to date towards the development and validation of a fast, accurate and cost-effective aeroelastic method for advanced parallel computing platforms such as the IBM SP2 and the SGI Origin 2000 is presented in this paper. The ENSAERO code, developed at the NASA-Ames Research Center has been selected for this effort. The code allows for the computation of aeroelastic responses by simultaneously integrating the Euler or Navier-Stokes equations and the modal structural equations of motion. To assess the computational performance and accuracy of the ENSAERO code, this paper reports the results of the Navier-Stokes simulations of the transonic flow over a flexible aeroelastic wing body configuration. In addition, a forced harmonic oscillation analysis in the frequency domain and an analysis in the time domain are done on a wing undergoing a rigid pitch and plunge motion. Finally, to demonstrate the ENSAERO flutter-analysis capability, aeroelastic Euler and Navier-Stokes computations on an L-1011 wind tunnel model including pylon, nacelle and empennage are underway. All computational solutions are compared with experimental data to assess the level of accuracy of ENSAERO. As the computations described above are performed, a meticulous log of computational performance in terms of wall clock time, execution speed, memory and disk storage is kept. Code scalability is also demonstrated by studying the impact of varying the number of processors on computational performance on the IBM SP2 and the Origin 2000 systems.

  9. Addition of flexible body option to the TOLA computer program, part 1

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

    This report describes a flexible body option that was developed and added to the Takeoff and Landing Analysis (TOLA) computer program. The addition of the flexible body option to TOLA allows it to be used to study essentially any conventional type airplane in the ground operating environment. It provides the capability to predict the total motion of selected points on the airplane. The analytical methods incorporated in the program and operating instructions for the option are described. A program listing is included along with several example problems to aid in interpretation of the operating instructions and to illustrate program usage.

  10. Computer program to perform cost and weight analysis of transport aircraft. Volume 2: Technical volume

    NASA Technical Reports Server (NTRS)

    1973-01-01

    An improved method for estimating aircraft weight and cost using a unique and fundamental approach was developed. The results of this study were integrated into a comprehensive digital computer program, which is intended for use at the preliminary design stage of aircraft development. The program provides a means of computing absolute values for weight and cost, and enables the user to perform trade studies with a sensitivity to detail design and overall structural arrangement. Both batch and interactive graphics modes of program operation are available.

  11. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP-boards for standard workstations complemented by mathematical/statistical libraries is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation is presented of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP-board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
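
    The multiplicative Richardson-Lucy update at the heart of the restoration algorithm is compact enough to sketch. The following is a minimal 1-D NumPy illustration of the standard iteration, not the paper's i860 implementation; the test signal and point-spread function are invented for the example.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100):
    """Richardson-Lucy deconvolution (minimal 1-D sketch).

    Each iteration applies the multiplicative update
        estimate *= conv(observed / conv(estimate, psf), psf_mirror),
    where conv is linear convolution; FFT-based convolution of this
    kind maps naturally onto vector hardware."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        denom = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(denom, 1e-12)  # guard divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a sparse "star field" and restore it.
truth = np.zeros(64)
truth[20], truth[40] = 5.0, 3.0
psf = np.array([0.25, 0.5, 0.25])       # normalized 3-tap blur
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

    For noiseless data the iteration re-concentrates the flux spread by the point-spread function back toward the original point sources.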

  12. Healthcare-associated Staphylococcus aureus bloodstream infection: length of stay, attributable mortality, and additional direct costs.

    PubMed

    Primo, Mariusa Gomes Borges; Guilarde, Adriana Oliveira; Martelli, Celina M Turchi; Batista, Lindon Johnson de Abreu; Turchi, Marília Dalva

    2012-01-01

    This study aimed to determine the excess length of stay, extra expenditures, and mortality attributable to healthcare-associated S. aureus bloodstream infection (BSI) at a teaching hospital in central Brazil. The study design was a matched (1:1) case-control. Cases were defined as patients >13 years old, with a healthcare-associated S. aureus BSI. Controls included patients without an S. aureus BSI, who were matched to cases by gender, age (± 7 years), morbidity, and underlying disease. Data were collected from medical records and from the Brazilian National Hospital Information System (Sistema de Informações Hospitalares do Sistema Único de Saúde - SIH/SUS). A Wilcoxon rank sum test was performed to compare length of stay and costs between cases and controls. Differences in mortality between cases and controls were compared using McNemar's tests. The Mantel-Haenszel stratified analysis was performed to compare invasive device utilization. Data analyses were conducted using Epi Info 6.0 and Statistical Package for Social Sciences (SPSS 13.0). Eighty-four case-control pairs matched by gender, age, admission period, morbidity, and underlying disease were analyzed. The mean lengths of hospital stay were 48.3 and 16.2 days for cases and controls, respectively (p<0.01), yielding an excess hospital stay among cases of 32.1 days. The excess mortality among cases compared to controls that was attributable to S. aureus bloodstream infection was 45.2%. Cases had a higher risk of dying compared to controls (OR 7.3, 95% CI 3.1-21.1). Overall costs of hospitalization (SIH/SUS) reached US$ 123,065 for cases versus US$ 40,247 for controls (p<0.01). The cost of antimicrobial therapy was 6.7 fold higher for cases compared to controls. Healthcare-associated S. aureus BSI was associated with statistically significant increases in length of hospitalization, attributable mortality, and economic burden. Implementation of measures to minimize the risk of healthcare-associated bacterial

  13. Computational study of the rate constants and free energies of intramolecular radical addition to substituted anilines.

    PubMed

    Gansäuer, Andreas; Seddiqzai, Meriam; Dahmen, Tobias; Sure, Rebecca; Grimme, Stefan

    2013-01-01

    The intramolecular radical addition to aniline derivatives was investigated by DFT calculations. The computational methods were benchmarked by comparing the calculated values of the rate constant for the 5-exo cyclization of the hexenyl radical with the experimental values. The dispersion-corrected PW6B95-D3 functional provided very good results with deviations for the free activation barrier compared to the experimental values of only about 0.5 kcal mol−1 and was therefore employed in further calculations. Corrections for intramolecular London dispersion and solvation effects in the quantum chemical treatment are essential to obtain consistent and accurate theoretical data. For the investigated radical addition reaction it turned out that the polarity of the molecules is important and that a combination of electrophilic radicals with preferably nucleophilic arenes results in the highest rate constants. This is opposite to the Minisci reaction where the radical acts as nucleophile and the arene as electrophile. The substitution at the N-atom of the aniline is crucial. Methyl substitution leads to slower addition than phenyl substitution. Carbamates as substituents are suitable only when the radical center is not too electrophilic. No correlations between free reaction barriers and energies (ΔG‡ and ΔG_R) are found. Addition reactions leading to indanes or dihydrobenzofurans are too slow to be useful synthetically.
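
    The practical meaning of the quoted ~0.5 kcal mol−1 barrier deviation can be checked with the Eyring equation, a standard relation between a free activation barrier and a rate constant (this is a generic illustration, not code or data from the paper; the 10 kcal mol−1 barrier is an arbitrary example value).

```python
import math

def eyring_rate(dG_kcal, T=298.15):
    """Rate constant k (s^-1) from transition state theory:
    k = (kB*T/h) * exp(-dG‡ / (R*T))."""
    kB = 1.380649e-23    # Boltzmann constant, J/K
    h = 6.62607015e-34   # Planck constant, J*s
    R = 1.987204e-3      # gas constant, kcal/(mol*K)
    return (kB * T / h) * math.exp(-dG_kcal / (R * T))

# A 0.5 kcal/mol shift in the barrier changes the predicted rate
# constant by roughly a factor of 2.3 at room temperature.
ratio = eyring_rate(10.0) / eyring_rate(10.5)
```

    This is why sub-kcal accuracy in the benchmarked functional matters: errors in ΔG‡ enter the rate constant exponentially.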

  14. Computational study of the rate constants and free energies of intramolecular radical addition to substituted anilines.

    PubMed

    Gansäuer, Andreas; Seddiqzai, Meriam; Dahmen, Tobias; Sure, Rebecca; Grimme, Stefan

    2013-01-01

    The intramolecular radical addition to aniline derivatives was investigated by DFT calculations. The computational methods were benchmarked by comparing the calculated values of the rate constant for the 5-exo cyclization of the hexenyl radical with the experimental values. The dispersion-corrected PW6B95-D3 functional provided very good results with deviations for the free activation barrier compared to the experimental values of only about 0.5 kcal mol−1 and was therefore employed in further calculations. Corrections for intramolecular London dispersion and solvation effects in the quantum chemical treatment are essential to obtain consistent and accurate theoretical data. For the investigated radical addition reaction it turned out that the polarity of the molecules is important and that a combination of electrophilic radicals with preferably nucleophilic arenes results in the highest rate constants. This is opposite to the Minisci reaction where the radical acts as nucleophile and the arene as electrophile. The substitution at the N-atom of the aniline is crucial. Methyl substitution leads to slower addition than phenyl substitution. Carbamates as substituents are suitable only when the radical center is not too electrophilic. No correlations between free reaction barriers and energies (ΔG‡ and ΔG_R) are found. Addition reactions leading to indanes or dihydrobenzofurans are too slow to be useful synthetically. PMID:24062821

  15. Computational study of the rate constants and free energies of intramolecular radical addition to substituted anilines

    PubMed Central

    Seddiqzai, Meriam; Dahmen, Tobias; Sure, Rebecca

    2013-01-01

    The intramolecular radical addition to aniline derivatives was investigated by DFT calculations. The computational methods were benchmarked by comparing the calculated values of the rate constant for the 5-exo cyclization of the hexenyl radical with the experimental values. The dispersion-corrected PW6B95-D3 functional provided very good results with deviations for the free activation barrier compared to the experimental values of only about 0.5 kcal mol−1 and was therefore employed in further calculations. Corrections for intramolecular London dispersion and solvation effects in the quantum chemical treatment are essential to obtain consistent and accurate theoretical data. For the investigated radical addition reaction it turned out that the polarity of the molecules is important and that a combination of electrophilic radicals with preferably nucleophilic arenes results in the highest rate constants. This is opposite to the Minisci reaction where the radical acts as nucleophile and the arene as electrophile. The substitution at the N-atom of the aniline is crucial. Methyl substitution leads to slower addition than phenyl substitution. Carbamates as substituents are suitable only when the radical center is not too electrophilic. No correlations between free reaction barriers and energies (ΔG‡ and ΔG_R) are found. Addition reactions leading to indanes or dihydrobenzofurans are too slow to be useful synthetically. PMID:24062821

  16. Municipal Rebate Programs for Environmental Retrofits: An Evaluation of Additionality and Cost-Effectiveness

    ERIC Educational Resources Information Center

    Bennear, Lori S.; Lee, Jonathan M.; Taylor, Laura O.

    2013-01-01

    When policies incentivize voluntary activities that also take place in the absence of the incentive, it is critical to identify the additionality of the policy--that is, the degree to which the policy results in actions that would not have occurred otherwise. Rebate programs have become a common conservation policy tool for local municipalities…

  17. The economics of time shared computing: Congestion, user costs and capacity

    NASA Technical Reports Server (NTRS)

    Agnew, C. E.

    1982-01-01

    Time shared systems permit the fixed costs of computing resources to be spread over large numbers of users. However, bottleneck results in the theory of closed queueing networks can be used to show that this economy of scale will be offset by the increased congestion that results as more users are added to the system. If one considers the total costs, including the congestion cost, there is an optimal number of users for a system which equals the saturation value usually used to define system capacity.
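
    The saturation value referred to above is the standard asymptotic bound from closed queueing network analysis: with per-job service demands D_i at each resource and think time Z, throughput saturates once the number of users exceeds N* = (Z + ΣD_i) / max(D_i). A short sketch (the service demands and think time below are hypothetical):

```python
def saturation_point(service_demands, think_time=0.0):
    """Asymptotic-bound saturation point of a closed queueing network:
    N* = (Z + sum(D_i)) / max(D_i), where D_i are per-job service
    demands and Z is the user think time. Beyond N* users, added
    congestion at the bottleneck offsets further economies of scale."""
    total_demand = sum(service_demands)
    bottleneck = max(service_demands)
    return (think_time + total_demand) / bottleneck

# Hypothetical system: CPU and two disks with per-job demands in
# seconds, and 4 s of user think time.
nstar = saturation_point([0.20, 0.10, 0.05], think_time=4.0)
```

    Below N* the fixed costs are spread over more users faster than congestion cost grows; above it the bottleneck dominates, which is why the cost-minimizing population coincides with the saturation value.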

  18. Low-cost additive improved silage quality and anaerobic digestion performance of napiergrass.

    PubMed

    Lianhua, Li; Feng, Zhen; Yongming, Sun; Zhenhong, Yuan; Xiaoying, Kong; Xianyou, Zhou; Hongzhi, Niu

    2014-12-01

    Effects of molasses-alcoholic wastewater on the ensiling quality of napiergrass were investigated at ambient temperature, and its anaerobic digestion performance was assessed at mesophilic temperature. Results showed that the molasses-alcoholic wastewater had a positive effect on silage quality and anaerobic digestion performance. Lower pH values of 5.20-5.28, lower NH3-N contents of 32.65-36.60 g/kg and higher lactic acid contents of 56-61 mg/kg FM were obtained for the silage samples with molasses-alcoholic wastewater addition. A higher specific biogas yield of 273 mL/g VS was obtained for the sample with 11% molasses-alcoholic wastewater added. Therefore, 11% molasses-alcoholic wastewater addition is recommended.

  19. A nearly-linear computational-cost scheme for the forward dynamics of an N-body pendulum

    NASA Technical Reports Server (NTRS)

    Chou, Jack C. K.

    1989-01-01

    The dynamic equations of motion of an n-body pendulum with spherical joints are derived to be a mixed system of differential and algebraic equations (DAEs). The DAEs are kept in implicit form to save arithmetic and preserve the sparsity of the system and are solved by the robust implicit integration method. At each solution point, the predicted solution is corrected to its exact solution within a given tolerance using Newton's iterative method. For each iteration, a linear system of the form J ΔX = E has to be solved. The computational cost of solving this linear system directly by LU factorization is O(n³), and it can be reduced significantly by exploiting the structure of J. It is shown that by recognizing the recursive patterns and exploiting the sparsity of the system, the multiplicative and additive computational costs for solving J ΔX = E are O(n) and O(n²), respectively. The formulation and solution method for an n-body pendulum is presented. The computational cost is shown to be nearly linearly proportional to the number of bodies.
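
    As an illustration of how exploiting structure collapses the O(n³) dense-LU cost, a tridiagonal system can be solved in O(n) with the Thomas algorithm. The pendulum Jacobian J has a different, recursive sparsity pattern, so this is an analogy for the general technique rather than the authors' actual factorization:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n) by forward elimination and
    back substitution. a = sub-diagonal (a[0] unused), b = main
    diagonal, c = super-diagonal (c[-1] unused), d = right-hand side.
    A dense LU of the same system would cost O(n^3)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # pivot after elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# System with solution x = [1, 1, 1]:
#   [2 1 0] [x0]   [3]
#   [1 2 1] [x1] = [4]
#   [0 1 2] [x2]   [3]
x = thomas_solve([0.0, 1.0, 1.0], [2.0, 2.0, 2.0],
                 [1.0, 1.0, 0.0], [3.0, 4.0, 3.0])
```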

  20. Additional reductions in Medicare spending growth will likely require shifting costs to beneficiaries.

    PubMed

    Chernew, Michael E

    2013-05-01

    Policy makers have considerable interest in reducing Medicare spending growth. Clarity in the debate on reducing Medicare spending growth requires recognition of three important distinctions: the difference between public and total spending on health, the difference between the level of health spending and rate of health spending growth, and the difference between growth per beneficiary and growth in the number of beneficiaries in Medicare. The primary policy issue facing the US health care system is the rate of spending growth in public programs, and solving that problem will probably require reforms to the entire health care sector. The Affordable Care Act created a projected trajectory for Medicare spending per beneficiary that is lower than historical growth rates. Although opportunities for one-time savings exist, any long-term savings from Medicare, beyond those already forecast, will probably require a shift in spending from taxpayers to beneficiaries via higher beneficiary premium contributions (overall or via means testing), changes in eligibility, or greater cost sharing at the point of service.

  1. Open-source meteor detection software for low-cost single-board computers

    NASA Astrophysics Data System (ADS)

    Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.

    2016-01-01

    This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.

  2. Effectiveness of Multimedia Elements in Computer Supported Instruction: Analysis of Personalization Effects, Students' Performances and Costs

    ERIC Educational Resources Information Center

    Zaidel, Mark; Luo, XiaoHui

    2010-01-01

    This study investigates the efficiency of multimedia instruction at the college level by comparing the effectiveness of multimedia elements used in the computer supported learning with the cost of their preparation. Among the various technologies that advance learning, instructors and students generally identify interactive multimedia elements as…

  3. Cost-Effective Computing: Making the Most of Your PC Dollars.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1992-01-01

    Lists 27 suggestions for making cost-effective decisions when buying personal computers. Topics covered include physical comfort; modem speed; color graphics; institutional discounts; direct-order firms; brand names; replacing versus upgrading; expanding hard disk capacity; printers; software; wants versus needs; and RLIN (Research Libraries…

  4. Computational cost of full QCD simulations experienced by CP-PACS and JLQCD Collaborations

    NASA Astrophysics Data System (ADS)

    Ukawa, A.

    We summarize the experience of the CP-PACS and JLQCD Collaborations on the computational cost of two-flavor full QCD simulations with improved gauge and Wilson-type quark actions. Based on this experience, estimates are made of the Tflops·years necessary for advancing full QCD studies.

  5. Computer-Based Instruction: A Background Paper on its Status, Cost/Effectiveness and Telecommunications Requirements.

    ERIC Educational Resources Information Center

    Singh, Jai P.; Morgan, Robert P.

    In the slightly over twelve years since its inception, computer-based instruction (CBI) has shown the promise of being more cost-effective than traditional instruction for certain educational applications. Pilot experiments are underway to evaluate various CBI systems. Should these tests prove successful, a major problem confronting advocates of…

  6. Low-cost space-varying FIR filter architecture for computational imaging systems

    NASA Astrophysics Data System (ADS)

    Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.

    2010-01-01

    Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.
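
    The core idea (a bank of filters indexed by distance from the optical axis, exploiting the rotational symmetry of the PSF) can be sketched in a few lines. This is a simplified illustration of the general approach, not the paper's hardware architecture; the kernels and image are invented for the example.

```python
import numpy as np

def conv2d_same(img, k):
    """'Same'-size 2-D FIR filtering via shifted adds (zero-padded borders)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def radial_bank_sharpen(img, kernels):
    """Space-varying sharpening: filter with every kernel in the bank,
    then select per pixel by radial zone. For rotationally symmetric
    lens elements the aberration (and hence the needed sharpening)
    varies mainly with distance from the optical axis."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
    n = len(kernels)
    zone = np.minimum((r / (r.max() + 1e-12) * n).astype(int), n - 1)
    filtered = np.stack([conv2d_same(img, k) for k in kernels])
    return np.take_along_axis(filtered, zone[None], axis=0)[0]

ident = np.zeros((3, 3)); ident[1, 1] = 1.0          # pass-through center
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float)  # edge zone
img = np.arange(16.0).reshape(4, 4)
out = radial_bank_sharpen(img, [ident, sharpen])
```

    A real implementation would tune each kernel to the measured radial PSF variation and interpolate between zones to avoid seams; the per-zone selection is what keeps the cost close to a single fixed FIR pass.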

  7. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    NASA Astrophysics Data System (ADS)

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-01

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  8. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    SciTech Connect

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-29

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  9. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    DOE PAGES

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Kamath, C.; Khairallah, S. A.; Rubenchik, A. M.

    2015-12-29

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this study, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  10. Laser powder bed fusion additive manufacturing of metals; physics, computational, and materials challenges

    SciTech Connect

    King, W. E.; Anderson, A. T.; Ferencz, R. M.; Hodge, N. E.; Khairallah, S. A.; Kamath, C.; Rubenchik, A. M.

    2015-12-15

    The production of metal parts via laser powder bed fusion additive manufacturing is growing exponentially. However, the transition of this technology from production of prototypes to production of critical parts is hindered by a lack of confidence in the quality of the part. Confidence can be established via a fundamental understanding of the physics of the process. It is generally accepted that this understanding will be increasingly achieved through modeling and simulation. However, there are significant physics, computational, and materials challenges stemming from the broad range of length and time scales and temperature ranges associated with the process. In this paper, we review the current state of the art and describe the challenges that need to be met to achieve the desired fundamental understanding of the physics of the process.

  11. Cost effectiveness of a computer-delivered intervention to improve HIV medication adherence

    PubMed Central

    2013-01-01

    Background: High levels of adherence to medications for HIV infection are essential for optimal clinical outcomes and to reduce viral transmission, but many patients do not achieve required levels. Clinician-delivered interventions can improve patients’ adherence, but usually require substantial effort by trained individuals and may not be widely available. Computer-delivered interventions can address this problem by reducing required staff time for delivery and by making the interventions widely available via the Internet. We previously developed a computer-delivered intervention designed to improve patients’ level of health literacy as a strategy to improve their HIV medication adherence. The intervention was shown to increase patients’ adherence, but it was not clear that the benefits resulting from the increase in adherence could justify the costs of developing and deploying the intervention. The purpose of this study was to evaluate the relation of development and deployment costs to the effectiveness of the intervention.

    Methods: Costs of intervention development were drawn from accounting reports for the grant under which its development was supported, adjusted for costs primarily resulting from the project’s research purpose. Effectiveness of the intervention was drawn from results of the parent study. The relation of the intervention’s effects to changes in health status, expressed as utilities, was also evaluated in order to assess the net cost of the intervention in terms of quality adjusted life years (QALYs). Sensitivity analyses evaluated ranges of possible intervention effectiveness and durations of its effects, and costs were evaluated over several deployment scenarios.

    Results: The intervention’s cost effectiveness depends largely on the number of persons using it and the duration of its effectiveness. Even with modest effects for a small number of patients the intervention was associated with net cost savings in some scenarios and for

  12. A performance/cost evaluation for a GPU-based drug discovery application on volunteer computing.

    PubMed

    Guerrero, Ginés D; Imbernón, Baldomero; Pérez-Sánchez, Horacio; Sanz, Francisco; García, José M; Cecilia, José M

    2014-01-01

    Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and, thus, the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing to scale bioinformatics applications as an alternative to owning large GPU-based local infrastructures. We use as a benchmark a GPU-based drug discovery application called BINDSURF whose computational requirements go beyond those of a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor.

  13. A Performance/Cost Evaluation for a GPU-Based Drug Discovery Application on Volunteer Computing

    PubMed Central

    Guerrero, Ginés D.; Imbernón, Baldomero; García, José M.

    2014-01-01

    Bioinformatics is an interdisciplinary research field that develops tools for the analysis of large biological databases, and, thus, the use of high performance computing (HPC) platforms is mandatory for the generation of useful biological knowledge. The latest generation of graphics processing units (GPUs) has democratized the use of HPC as they push desktop computers to cluster-level performance. Many applications within this field have been developed to leverage these powerful and low-cost architectures. However, these applications still need to scale to larger GPU-based systems to enable remarkable advances in the fields of healthcare, drug discovery, genome research, etc. The inclusion of GPUs in HPC systems exacerbates power and temperature issues, increasing the total cost of ownership (TCO). This paper explores the benefits of volunteer computing to scale bioinformatics applications as an alternative to owning large GPU-based local infrastructures. We use as a benchmark a GPU-based drug discovery application called BINDSURF whose computational requirements go beyond those of a single desktop machine. Volunteer computing is presented as a cheap and valid HPC system for those bioinformatics applications that need to process huge amounts of data and where the response time is not a critical factor. PMID:25025055

  14. On Training Efficiency and Computational Costs of a Feed Forward Neural Network: A Review

    PubMed Central

    Laudani, Antonino; Lozito, Gabriele Maria; Riganti Fulginei, Francesco; Salvini, Alessandro

    2015-01-01

    The problem of choosing a suitable activation function for the hidden layer of a feed forward neural network has been widely investigated, and a comprehensive review is presented here. Since the nonlinear component of a neural network is the main contributor to the network mapping capabilities, the different choices that may lead to enhanced performances, in terms of training, generalization, or computational costs, are analyzed, both in general-purpose and in embedded computing environments. Finally, a strategy to convert a network configuration between different activation functions without altering the network mapping capabilities will be presented. PMID:26417368
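
    The closing idea, converting a trained network between activation functions without changing its mapping, can be illustrated with the identity tanh(z) = 2*sigmoid(2z) - 1. A minimal NumPy sketch (illustrative shapes and random weights, not the paper's procedure):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)  # hidden layer
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)  # output layer

def net_tanh(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Rewriting tanh(z) = 2*sigmoid(2z) - 1 gives an equivalent sigmoid network:
W1s, b1s = 2 * W1, 2 * b1                 # scale the pre-activations
W2s, b2s = 2 * W2, b2 - W2 @ np.ones(4)   # absorb the "-1" offset into the bias

def net_sigmoid(x):
    return W2s @ sigmoid(W1s @ x + b1s) + b2s

x = rng.standard_normal(3)
print(np.allclose(net_tanh(x), net_sigmoid(x)))  # True
```

    The transformed network reproduces the original mapping exactly, which is the sense in which such a conversion leaves the mapping capabilities unaltered.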

  15. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse...

  16. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse...

  17. Computers in Secondary Schools in Developing Countries: Costs and Other Issues (Including Original Data from South Africa and Zimbabwe).

    ERIC Educational Resources Information Center

    Cawthera, Andy

    This research is mainly concerned with the costs of computers in schools in developing countries. It starts with a brief overview of the information revolution and its consequences. It then briefly examines some of the arguments for the use of computers in schools in developing countries, before reviewing previous work undertaken on the costs of…

  18. Using additive manufacturing in accuracy evaluation of reconstructions from computed tomography.

    PubMed

    Smith, Erin J; Anstey, Joseph A; Venne, Gabriel; Ellis, Randy E

    2013-05-01

    Bone models derived from patient imaging and fabricated using additive manufacturing technology have many potential uses including surgical planning, training, and research. This study evaluated the accuracy of bone surface reconstruction of two diarthrodial joints, the hip and shoulder, from computed tomography. Image segmentation of the tomographic series was used to develop a three-dimensional virtual model, which was fabricated using fused deposition modelling. Laser scanning was used to compare cadaver bones, printed models, and intermediate segmentations. The overall bone reconstruction process had a reproducibility of 0.3 ± 0.4 mm. Production of the model had an accuracy of 0.1 ± 0.1 mm, while the segmentation had an accuracy of 0.3 ± 0.4 mm, indicating that segmentation accuracy was the key factor in reconstruction. Generally, the shape of the articular surfaces was reproduced accurately, with poorer accuracy near the periphery of the articular surfaces, particularly in regions with periosteum covering and where osteophytes were apparent.
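
    The reported stage accuracies are consistent with combining the production and segmentation errors in quadrature, if one assumes the two error sources are independent (an assumption, not something the abstract states):

```python
import math

production_mm = 0.1    # model fabrication accuracy (mean error)
segmentation_mm = 0.3  # image segmentation accuracy (mean error)

# Independent errors add in quadrature; segmentation clearly dominates.
overall_mm = math.sqrt(production_mm**2 + segmentation_mm**2)
print(round(overall_mm, 1))  # 0.3
```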

  19. Application of a single-board computer as a low-cost pulse generator

    NASA Astrophysics Data System (ADS)

    Fedrizzi, Marcus; Soria, Julio

    2015-09-01

    A BeagleBone Black (BBB) single-board open-source computer was implemented as a low-cost fully programmable pulse generator. The pulse generator makes use of the BBB Programmable Real-Time Unit (PRU) subsystem to achieve a deterministic temporal resolution of 5 ns, an RMS jitter of 290 ps and a timebase stability on the order of 10 ppm. A Python-based software framework has also been developed to simplify the usage of the pulse generator.
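
    The 5 ns resolution matches the PRU's 200 MHz single-cycle instruction clock, so pulse timings reduce to integer cycle counts. A hedged sketch of that conversion (a host-side helper under that assumption, not the actual PRU firmware or the authors' Python framework):

```python
PRU_CLOCK_HZ = 200_000_000  # 200 MHz -> 5 ns per single-cycle PRU instruction

def pulse_cycles(width_s: float) -> int:
    """Convert a desired pulse width in seconds to PRU instruction cycles."""
    return round(width_s * PRU_CLOCK_HZ)

print(pulse_cycles(1e-6))  # 200 cycles for a 1 us pulse
print(pulse_cycles(5e-9))  # 1 cycle: the 5 ns resolution floor
```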

  20. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    SciTech Connect

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered as a supplement to the Title I Design Report date March 1986 wherein energy related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

  1. Comparison of different strategies in prenatal screening for Down’s syndrome: cost effectiveness analysis of computer simulation

    PubMed Central

    Gagné, Geneviève; Bujold, Emmanuel; Douillard, Daniel; Forest, Jean-Claude; Reinharz, Daniel; Rousseau, François

    2009-01-01

    Objectives To assess and compare the cost effectiveness of three different strategies for prenatal screening for Down’s syndrome (integrated test, sequential screening, and contingent screenings) and to determine the most useful cut-off values for risk. Design Computer simulations to study integrated, sequential, and contingent screening strategies with various cut-offs leading to 19 potential screening algorithms. Data sources The computer simulation was populated with data from the Serum Urine and Ultrasound Screening Study (SURUSS), real unit costs for healthcare interventions, and a population of 110 948 pregnancies from the province of Québec for the year 2001. Main outcome measures Cost effectiveness ratios, incremental cost effectiveness ratios, and screening options’ outcomes. Results The contingent screening strategy dominated all other screening options: it had the best cost effectiveness ratio ($C26 833 per case of Down’s syndrome) with fewer procedure related euploid miscarriages and unnecessary terminations (respectively, 6 and 16 per 100 000 pregnancies). It also outperformed serum screening at the second trimester. In terms of the incremental cost effectiveness ratio, contingent screening was still dominant: compared with screening based on maternal age alone, the savings were $C30 963 per additional birth with Down’s syndrome averted. Contingent screening was the only screening strategy that offered early reassurance to the majority of women (77.81%) in first trimester and minimised costs by limiting retesting during the second trimester (21.05%). For the contingent and sequential screening strategies, the choice of cut-off value for risk in the first trimester test significantly affected the cost effectiveness ratios (respectively, from $C26 833 to $C37 260 and from $C35 215 to $C45 314 per case of Down’s syndrome), the number of procedure related euploid miscarriages (from 6 to 46 and from 6 to 45 per 100 000
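
    The incremental comparisons above follow the standard incremental cost effectiveness ratio (ICER), the extra cost per extra unit of effect of one strategy over another. A worked sketch with purely illustrative numbers (hypothetical, not the study's data):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical example: strategy A detects 90 cases for $2.5M,
# strategy B detects 80 cases for $2.2M.
print(icer(2_500_000, 2_200_000, 90, 80))  # 30000.0 per additional case detected
```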

  2. Predicting Cost/Performance Trade-Offs for Whitney: A Commodity Computing Cluster

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Nitzberg, Bill; VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)

    1997-01-01

    Recent advances in low-end processor and network technology have made it possible to build a "supercomputer" out of commodity components. We develop simple models of the NAS Parallel Benchmarks version 2 (NPB 2) to explore the cost/performance trade-offs involved in building a balanced parallel computer supporting a scientific workload. We develop closed form expressions detailing the number and size of messages sent by each benchmark. Coupling these with measured single processor performance, network latency, and network bandwidth, our models predict benchmark performance to within 30%. A comparison based on total system cost reveals that current commodity technology (200 MHz Pentium Pros with 100baseT Ethernet) is well balanced for the NPBs up to a total system cost of around $1,000,000.
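
    Closed-form message counts combine with measured machine parameters in the usual latency/bandwidth fashion. A minimal sketch of such a model (the functional form is the generic one; the parameter values are illustrative, not the paper's measurements):

```python
def predicted_time(comp_s, n_msgs, msg_bytes, latency_s, bandwidth_Bps):
    """Single-processor compute time plus per-message latency plus transfer time."""
    return comp_s + n_msgs * latency_s + (n_msgs * msg_bytes) / bandwidth_Bps

# Illustrative: 120 s of compute, 10,000 messages of 8 KB each,
# 100 us latency and 12.5 MB/s (100baseT-class) bandwidth.
t = predicted_time(120.0, 10_000, 8_192, 100e-6, 12.5e6)
print(round(t, 2))  # 127.55
```

    Varying the network terms against total system cost is what lets a model like this locate the balance point the paper describes.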

  3. Reducing annotation cost and uncertainty in computer-aided diagnosis through selective iterative classification

    NASA Astrophysics Data System (ADS)

    Riely, Amelia; Sablan, Kyle; Xiaotao, Thomas; Furst, Jacob; Raicu, Daniela

    2015-03-01

    Medical imaging technology has always provided radiologists with the opportunity to view and keep records of the anatomy of the patient. With the development of machine learning and intelligent computing, these images can be used to create Computer-Aided Diagnosis (CAD) systems, which can assist radiologists in analyzing image data in various ways to provide better health care to patients. This paper looks at increasing accuracy and reducing cost in creating CAD systems, specifically in predicting the malignancy of lung nodules in the Lung Image Database Consortium (LIDC). Much of the cost in creating an accurate CAD system stems from the need for multiple radiologist diagnoses or annotations of each image, since there is rarely a ground-truth diagnosis and even different radiologists' diagnoses of the same nodule often disagree. To resolve this issue, this paper outlines a method of selective iterative classification that predicts lung nodule malignancy by using multiple radiologist diagnoses only for cases that can benefit from them. Our method achieved 81% accuracy while costing only 46% as much as the method that indiscriminately used all annotations, which achieved a lower accuracy of 70% at greater cost.
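
    The selection idea, spending extra radiologist annotations only where a first read is ambiguous, can be sketched as follows (hypothetical ratings and threshold; the paper's actual classifier is more involved):

```python
from statistics import mean

# Each nodule: a list of radiologist malignancy ratings (1-5); the first
# rating is the "cheap" one, the rest cost extra to obtain.
nodules = [[1, 1, 2, 1], [5, 4, 5, 5], [3, 4, 2, 5], [2, 3, 3, 2]]

annotations_used = 0
predictions = []
for ratings in nodules:
    first = ratings[0]
    if first <= 2 or first >= 4:          # confident: one annotation suffices
        annotations_used += 1
        predictions.append(first >= 3)
    else:                                 # ambiguous: buy all annotations
        annotations_used += len(ratings)
        predictions.append(mean(ratings) >= 3)

print(annotations_used)  # 7 of the 16 available annotations consumed
print(predictions)       # [False, True, True, False]
```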

  4. Formation of gold nanostructures on copier paper surface for cost effective SERS active substrate - Effect of halide additives

    NASA Astrophysics Data System (ADS)

    Desmonda, Christa; Kar, Sudeshna; Tai, Yian

    2016-03-01

    In this study, we report the simple fabrication of an active substrate assisted by gold nanostructures (AuNS) for application in surface-enhanced Raman scattering (SERS) using copier paper, which is a biodegradable and cost-effective material. As cellulose is the main component of paper, it can behave as a reducing agent and as a capping molecule for the synthesis of AuNS on the paper substrate. AuNS can be directly generated on the surface of the copier paper by addition of halides. The AuNS thus synthesized were characterized by ultraviolet-visible spectroscopy, SEM, XRD, and XPS. In addition, the SERS effect of the AuNS-paper substrates synthesized by using various halides was investigated by using rhodamine 6G and melamine as probe molecules.

  5. Reducing metal alloy powder costs for use in powder bed fusion additive manufacturing: Improving the economics for production

    NASA Astrophysics Data System (ADS)

    Medina, Fransisco

    Titanium and its associated alloys have been used in industry for over 50 years and have become more popular in recent decades. Titanium has been most successful in areas where the high strength-to-weight ratio provides an advantage over aluminum and steels. Other advantages of titanium include biocompatibility and corrosion resistance. Electron Beam Melting (EBM) is an additive manufacturing (AM) technology that has been successfully applied in the manufacturing of titanium components for the aerospace and medical industries, with equivalent or better mechanical properties than parts fabricated via more traditional casting and machining methods. As the demand for titanium powder continues to increase, the price also increases. Spheroidized titanium powder from different vendors ranges in price from $260/kg to $450/kg; other spheroidized alloys such as niobium can cost as much as $1,200/kg. Alternative titanium powders produced by methods such as the Titanium Hydride-Dehydride (HDH) process and the Armstrong Commercially Pure Titanium (CPTi) process can be fabricated at a fraction of the cost of powders fabricated via gas atomization. The alternative powders can be spheroidized and blended. Current sectors in additive manufacturing such as the medical industry are concerned that there will not be enough spherical powder for production and are seeking other powder options. It is believed that the EBM technology can use a blend of spherical and angular powder to build fully dense parts with mechanical properties equal to those produced using traditional powders. Some of the challenges with angular and irregular powders are overcoming the poor flow characteristics and attaining the same or better packing densities as spherical powders. The goal of this research is to demonstrate the feasibility of utilizing alternative and lower-cost powders in the EBM process. As a result, reducing the cost of the raw material to reduce the overall cost of the product produced with

  6. Early treatment revisions by addition or switch for type 2 diabetes: impact on glycemic control, diabetic complications, and healthcare costs

    PubMed Central

    Schwab, Phil; Saundankar, Vishal; Bouchard, Jonathan; Wintfeld, Neil; Suehs, Brandon; Moretz, Chad; Allen, Elsie; DeLuzio, Antonio

    2016-01-01

    Background The study examined the prevalence of early treatment revisions after glycosylated hemoglobin (HbA1c) ≥9.0% (75 mmol/mol) and estimated the impact of early treatment revisions on glycemic control, diabetic complications, and costs. Research design and methods A retrospective cohort study of administrative claims data of plan members with type 2 diabetes and HbA1c ≥9.0% (75 mmol/mol) was completed. Treatment revision was identified as treatment addition or switch. Glycemic control was measured as HbA1c during 6–12 months following the first qualifying HbA1c ≥9.0% (75 mmol/mol) laboratory result. Complications severity (via Diabetes Complication Severity Index (DCSI)) and costs were measured after 12, 24, and 36 months. Unadjusted comparisons and multivariable models were used to examine the relationship between early treatment revision (within 90 days of HbA1c) and outcomes after controlling for potentially confounding factors measured during a 12-month baseline period. Results 8463 participants were included with a mean baseline HbA1c of 10.2% (75 mmol/mol). Early treatment revision was associated with greater reduction in HbA1c at 6–12 months (−2.10% vs −1.87%; p<0.001). No significant relationship was observed between early treatment revision and DCSI at 12, 24, or 36 months (p=0.931, p=0.332, and p=0.418). Total costs, medical costs, and pharmacy costs at 12, 24, or 36 months were greater for the early treatment revision group compared with the delayed treatment revision group (all p<0.05). Conclusions The findings suggest that in patients with type 2 diabetes mellitus, treatment revision within 90 days of finding an HbA1c ≥9.0% is associated with a greater level of near-term glycemic control and higher cost. The impact on end points such as diabetic complications may not be realized over relatively short time frames. PMID:26925237

  7. Avoiding Split Attention in Computer-Based Testing: Is Neglecting Additional Information Facilitative?

    ERIC Educational Resources Information Center

    Jarodzka, Halszka; Janssen, Noortje; Kirschner, Paul A.; Erkens, Gijsbert

    2015-01-01

    This study investigated whether design guidelines for computer-based learning can be applied to computer-based testing (CBT). Twenty-two students completed a CBT exam with half of the questions presented in a split-screen format that was analogous to the original paper-and-pencil version and half in an integrated format. Results show that students…

  8. Experiments with a low-cost system for computer graphics material model acquisition

    NASA Astrophysics Data System (ADS)

    Rushmeier, Holly; Lockerman, Yitzhak; Cartwright, Luke; Pitera, David

    2015-03-01

    We consider the design of an inexpensive system for acquiring material models for computer graphics rendering applications in animation, games and conceptual design. To be useful in these applications a system must be able to model a rich range of appearances in a computationally tractable form. The range of appearance of interest in computer graphics includes materials that have spatially varying properties, directionality, small-scale geometric structure, and subsurface scattering. To be computationally tractable, material models for graphics must be compact, editable, and efficient to numerically evaluate for ray tracing importance sampling. To construct appropriate models for a range of interesting materials, we take the approach of separating out directly and indirectly scattered light using high spatial frequency patterns introduced by Nayar et al. in 2006. To acquire the data at low cost, we use a set of Raspberry Pi computers and cameras clamped to miniature projectors. We explore techniques to separate out surface and subsurface indirect lighting. This separation would allow the fitting of simple, and so tractable, analytical models to features of the appearance model. The goal of the system is to provide models for physically accurate renderings that are visually equivalent to viewing the original physical materials.
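
    The high-frequency-pattern separation works because, with a pattern that illuminates half the scene, a pixel's maximum over pattern shifts sees the direct light plus half the global (indirect) light, while its minimum sees half the global light only. A per-pixel sketch under that 50%-duty assumption (synthetic radiances, not data from the Raspberry Pi rig):

```python
import numpy as np

# Synthetic per-pixel radiances (hypothetical): direct and global components.
direct = np.array([0.8, 0.2, 0.5])
global_ = np.array([0.2, 0.6, 0.1])

# With a 50%-duty high-frequency pattern (Nayar et al. 2006):
L_max = direct + 0.5 * global_  # pixel lit: direct plus half the global light
L_min = 0.5 * global_           # pixel unlit: half the global light only

direct_est = L_max - L_min
global_est = 2.0 * L_min
print(np.allclose(direct_est, direct), np.allclose(global_est, global_))  # True True
```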

  9. Evolutionary adaptive eye tracking for low-cost human computer interaction applications

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Shin, Hak Chul; Sung, Won Jun; Khim, Sarang; Kim, Honglak; Rhee, Phill Kyu

    2013-01-01

    We present an evolutionary adaptive eye-tracking framework aiming for low-cost human computer interaction. The main focus is to guarantee eye-tracking performance without using high-cost devices and strongly controlled situations. The performance optimization of eye tracking is formulated into the dynamic control problem of deciding on an eye tracking algorithm structure and associated thresholds/parameters, where the dynamic control space is denoted by genotype and phenotype spaces. The evolutionary algorithm is responsible for exploring the genotype control space, and the reinforcement learning algorithm organizes the evolved genotype into a reactive phenotype. The evolutionary algorithm encodes an eye-tracking scheme as a genetic code based on image variation analysis. Then, the reinforcement learning algorithm defines internal states in a phenotype control space limited by the perceived genetic code and carries out interactive adaptations. The proposed method can achieve optimal performance by compromising the difficulty in the real-time performance of the evolutionary algorithm and the drawback of the huge search space of the reinforcement learning algorithm. Extensive experiments were carried out using webcam image sequences and yielded very encouraging results. The framework can be readily applied to other low-cost vision-based human computer interactions in solving their intrinsic brittleness in unstable operational environments.

  10. Treatment of a simulated textile wastewater in a sequencing batch reactor (SBR) with addition of a low-cost adsorbent.

    PubMed

    Santos, Sílvia C R; Boaventura, Rui A R

    2015-06-30

    Color removal from textile wastewaters by a low-cost and reliable technology remains a challenge. Simultaneous biological treatment and adsorption is a known alternative for the treatment of wastewaters containing biodegradable and non-biodegradable contaminants. The present work aims at evaluating the treatability of a simulated textile wastewater by simultaneously combining biological treatment and adsorption in an SBR (sequencing batch reactor), using a low-cost adsorbent instead of a commercial one. The selected adsorbent was a metal hydroxide sludge (WS) from an electroplating industry. Direct Blue 85 dye (DB) was used in the preparation of the synthetic wastewater. Firstly, adsorption kinetics and equilibrium were studied with respect to several factors (temperature, pH, WS dosage, and the presence of salts and dyeing auxiliary chemicals in the aqueous media). At 25 °C and pH 4, 7 and 10, the maximum DB adsorption capacities in aqueous solution were 600, 339 and 98.7 mg/g, respectively. These values are quite considerable compared to others reported in the literature, but proved to be significantly reduced by the presence of dyeing auxiliary chemicals in the wastewater. The simulated textile wastewater treatment in the SBR led to BOD5 removals of 53-79%, but color removal was rather limited (10-18%). The performance was significantly enhanced by the addition of WS, with BOD5 removals above 91% and average color removals of 60-69%.
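
    Equilibrium capacities like those quoted above are commonly summarized with a Langmuir isotherm, q = qmax*K*C / (1 + K*C). A sketch of recovering qmax and K from data via the linearized form C/q = C/qmax + 1/(K*qmax), using synthetic values (illustrative, not the study's measurements):

```python
import numpy as np

qmax_true, K_true = 600.0, 0.05  # mg/g and L/mg, illustrative only
C = np.array([5.0, 10, 25, 50, 100, 200])      # equilibrium concentrations, mg/L
q = qmax_true * K_true * C / (1 + K_true * C)  # Langmuir isotherm

# Linearized Langmuir: C/q = C/qmax + 1/(K*qmax), so slope = 1/qmax.
slope, intercept = np.polyfit(C, C / q, 1)
qmax_fit = 1.0 / slope
K_fit = 1.0 / (intercept * qmax_fit)
print(round(qmax_fit, 1), round(K_fit, 3))  # 600.0 0.05
```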

  11. Addition of flexible body option to the TOLA computer program. Part 2: User and programmer documentation

    NASA Technical Reports Server (NTRS)

    Dick, J. W.; Benda, B. J.

    1975-01-01

    User and programmer oriented documentation for the flexible body option of the Takeoff and Landing Analysis (TOLA) computer program are provided. The user information provides sufficient knowledge of the development and use of the option to enable the engineering user to successfully operate the modified program and understand the results. The programmer's information describes the option structure and logic enabling a programmer to make major revisions to this part of the TOLA computer program.

  12. Dealing with electronic waste: modeling the costs and environmental benefits of computer monitor disposal.

    PubMed

    Macauley, Molly; Palmer, Karen; Shih, Jhih-Shyang

    2003-05-01

    The importance of information technology to the world economy has brought about a surge in demand for electronic equipment. With rapid technological change, a growing fraction of the increasing stock of many types of electronics becomes obsolete each year. We model the costs and benefits of policies to manage 'e-waste' by focusing on a large component of the electronic waste stream (computer monitors) and the environmental concerns associated with disposal of the lead embodied in the cathode ray tubes (CRTs) used in most monitors. We find that the benefits of avoiding health effects associated with CRT disposal appear far outweighed by the costs for a wide range of policies. For the stock of monitors disposed of in the United States in 1998, we find that policies restricting or banning some popular disposal options would increase disposal costs from about US$1 per monitor to between US$3 and US$20 per monitor. Policies to promote a modest amount of recycling of monitor parts, including lead, can be less expensive. In all cases, however, the costs of the policies exceed the value of the avoided health effects of CRT disposal.

  13. Computing confidence intervals on solution costs for stochastic grid generation expansion problems.

    SciTech Connect

    Woodruff, David L.; Watson, Jean-Paul

    2010-12-01

    A range of core operations and planning problems for the national electrical grid are naturally formulated and solved as stochastic programming problems, which minimize expected costs subject to a range of uncertain outcomes relating to, for example, uncertain demands or generator output. A critical decision issue relating to such stochastic programs is: How many scenarios are required to ensure a specific error bound on the solution cost? Scenarios are the key mechanism used to sample from the uncertainty space, and the number of scenarios drives computational difficulty. We explore this question in the context of a long-term grid generation expansion problem, using a bounding procedure introduced by Mak, Morton, and Wood. We discuss experimental results using problem formulations independently minimizing expected cost and down-side risk. Our results indicate that we can use a surprisingly small number of scenarios to yield tight error bounds in the case of expected cost minimization, which has key practical implications. In contrast, error bounds in the case of risk minimization are significantly larger, suggesting more research is required in this area in order to achieve rigorous solutions for decision makers.
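
    The scenario-count question is typically answered by solving many independent scenario batches and forming a confidence interval on the optimal-cost estimate, in the spirit of the Mak-Morton-Wood bound. A toy sketch on a newsvendor-style problem (everything here is illustrative, not the grid expansion model):

```python
import random
import statistics

random.seed(1)
PRICE, COST = 5.0, 3.0  # sell price and unit cost (illustrative)

def batch_optimal_cost(n_scenarios):
    """Best empirical expected negative profit over a grid of order quantities."""
    demands = [random.uniform(50, 150) for _ in range(n_scenarios)]
    def neg_profit(order):
        return -statistics.mean(PRICE * min(order, d) - COST * order for d in demands)
    return min(neg_profit(q) for q in range(50, 151, 5))

# Solve 30 independent batches of 200 scenarios each, then form a 95% CI
# on the mean of the batch optima (a lower-bound estimator for min problems).
batches = [batch_optimal_cost(200) for _ in range(30)]
mean = statistics.mean(batches)
half = 1.96 * statistics.stdev(batches) / len(batches) ** 0.5
print(f"cost estimate {mean:.1f} +/- {half:.1f}")
```

    Tightening the interval by increasing the scenarios per batch is exactly the trade-off the paper studies for the grid problem.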

  14. User's guide to SERICPAC: A computer program for calculating electric-utility avoided costs rates

    SciTech Connect

    Wirtshafter, R.; Abrash, M.; Koved, M.; Feldman, S.

    1982-05-01

    SERICPAC is a computer program developed to calculate average avoided-cost rates for decentralized power producers and cogenerators that sell electricity to electric utilities. SERICPAC works in tandem with SERICOST, a program to calculate avoided costs, and determines the appropriate rates for the buying and selling of electricity between electric utilities and qualifying facilities (QF) as stipulated under Section 210 of PURPA. SERICPAC contains simulation models for eight technologies including wind, hydro, biogas, and cogeneration. The simulations are converted into a diversified utility production profile, which can be either gross production or net production, the latter accounting for internal electricity usage by the QF. The program allows adjustments to the production to be made for scheduled and forced outages. The final output of the model is a technology-specific average annual rate. The report contains a description of the technologies and the simulations as well as a complete user's guide to SERICPAC.

  15. Versatile, low-cost, computer-controlled, sample positioning system for vacuum applications

    NASA Technical Reports Server (NTRS)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1991-01-01

    A versatile, low-cost, easy-to-implement, microprocessor-based motorized positioning system (MPS) suitable for accurate sample manipulation in a Secondary Ion Mass Spectrometry (SIMS) system, and for other ultra-high vacuum (UHV) applications, was designed and built at NASA LeRC. The system can be operated manually or under computer control. In the latter case, local as well as remote operation is possible via the IEEE-488 bus. The position of the sample can be controlled in three orthogonal linear coordinates and one angular coordinate.

  16. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing them when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that a set of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  17. Can Computer-Assisted Discovery Learning Foster First Graders' Fluency with the Most Basic Addition Combinations?

    ERIC Educational Resources Information Center

    Baroody, Arthur J.; Eiland, Michael D.; Purpura, David J.; Reid, Erin E.

    2013-01-01

    In a 9-month training experiment, 64 first graders with a risk factor were randomly assigned to computer-assisted structured discovery of the add-1 rule (e.g., the sum of 7 + 1 is the number after "seven" when we count), unstructured discovery learning of this regularity, or an active-control group. Planned contrasts revealed that the add-1…

  18. Low-cost computer-controlled current stimulator for the student laboratory.

    PubMed

    Güçlü, Burak

    2007-06-01

    Electrical stimulation of nerve and muscle tissues is frequently used for teaching core concepts in physiology. It is usually expensive to provide every student group in the laboratory with an individual stimulator. This article presents the design and application of a low-cost [about $100 (U.S.)] isolated stimulator that can be controlled by two analog-output channels (e.g., output channels of a data-acquisition card or onboard audio channels) of a computer. The device is based on a voltage-to-current converter circuit and can produce accurate monopolar and bipolar current pulses, pulse trains, arbitrary current waveforms, and a trigger output. The compliance of the current source is +/-15 V, and the maximum available current is +/-1.5 mA. The device was electrically tested by using the audio output of a personal computer. In this condition, the device had a dynamic range of 46 dB and the available pulse-width range was 0.1-10 ms. The device is easily programmable, and a freeware MATLAB script is posted on the World Wide Web. The practical use of the device was demonstrated by electrically stimulating the sciatic nerve of a frog and recording compound action potentials. The newly designed current stimulator is a flexible and effective tool for teaching in the physiology laboratory, and it can increase the efficiency of learning by maximizing performance-to-cost ratio.
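
    Driving such a voltage-to-current converter from an audio output amounts to synthesizing the pulse waveform in software. A hedged NumPy sketch of a charge-balanced bipolar pulse (illustrative parameters; not the article's posted MATLAB script):

```python
import numpy as np

FS = 44_100  # audio sample rate, Hz

def bipolar_pulse(width_ms, amplitude=1.0):
    """Positive phase followed by an equal negative phase, so net charge is ~zero."""
    n = round(FS * width_ms / 1000)
    return np.concatenate([amplitude * np.ones(n), -amplitude * np.ones(n)])

pulse = bipolar_pulse(1.0)  # 1 ms per phase -> 44 samples each at 44.1 kHz
print(len(pulse), float(pulse.sum()))  # 88 0.0
```

    The samples would then be scaled to the sound card's output range and played out to the converter's analog input.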

  19. A novel cost based model for energy consumption in cloud computing.

    PubMed

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. In the proposed model, the cache interference costs were considered; these costs were based upon the size of the data. The proposed model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.
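
    The shape of such a model can be sketched as follows; the terms (idle power, per-VM active power, and a cache-interference penalty per time-slice switch that grows with data size) are hypothetical stand-ins for the paper's calibrated CloudSim parameters:

```python
def energy_joules(duration_s, n_vms, quantum_s, data_mb,
                  p_idle=100.0, p_vm=20.0, penalty_j_per_mb=0.001):
    """Idle power + active per-VM power + cache-interference cost per VM switch."""
    base = p_idle * duration_s
    active = p_vm * n_vms * duration_s
    switches = duration_s / quantum_s  # time-shared scheduling context switches
    interference = switches * n_vms * data_mb * penalty_j_per_mb
    return base + active + interference

# Shrinking the quantum means more switches, hence more interference energy.
print(energy_joules(60, 4, 0.10, 256))
print(energy_joules(60, 4, 0.01, 256))
```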

  20. A novel cost based model for energy consumption in cloud computing.

    PubMed

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers also need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. In the proposed model, the cache interference costs were considered; these costs were based upon the size of the data. The proposed model was implemented in the CloudSim simulator, and the related simulation results indicate that the energy consumption may be considerable and that it can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716

  1. A Novel Cost Based Model for Energy Consumption in Cloud Computing

    PubMed Central

    Horri, A.; Dastghaibyfard, Gh.

    2015-01-01

Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. The proposed model accounts for cache interference costs, which depend on the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that energy consumption may be considerable and can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment. PMID:25705716
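The tradeoffs this abstract names (quantum parameter, data size, number of co-located VMs) can be illustrated with a toy utilization-plus-interference energy model. All function names, parameters, and values below are hypothetical, not taken from the paper:

```python
# Toy sketch of a time-shared-policy energy model (hypothetical parameters,
# not the authors' published model): host energy over an interval is idle
# power plus a utilization-proportional dynamic term, with an extra
# cache-interference penalty that grows with data size and co-located VMs.

def host_energy_joules(interval_s, utilization, n_vms, data_mb,
                       p_idle_w=70.0, p_max_w=250.0,
                       interference_j_per_mb=0.02):
    """Energy (J) consumed by one host over interval_s seconds."""
    dynamic_w = (p_max_w - p_idle_w) * min(utilization, 1.0)
    base_j = (p_idle_w + dynamic_w) * interval_s
    # Cache-interference cost: each extra co-located VM re-fills shared
    # cache lines, modeled here as proportional to the working-set size.
    interference_j = interference_j_per_mb * data_mb * max(n_vms - 1, 0)
    return base_j + interference_j

# A lightly loaded host with one VM pays no interference penalty:
e1 = host_energy_joules(60, 0.25, n_vms=1, data_mb=512)
# The same load split across four VMs costs more:
e4 = host_energy_joules(60, 0.25, n_vms=4, data_mb=512)
```

Under these assumptions the model reproduces the qualitative finding above: energy rises with the number of VMs per host and with data size, even at fixed utilization.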

  2. A Comprehensive and Cost-Effective Computer Infrastructure for K-12 Schools

    NASA Technical Reports Server (NTRS)

    Warren, G. P.; Seaton, J. M.

    1996-01-01

Since 1993, NASA Langley Research Center has been developing and implementing a low-cost Internet connection model, including system architecture, training, and support, to provide Internet access for an entire network of computers. This infrastructure allows local area networks which exceed 50 machines per school to independently access the complete functionality of the Internet by connecting to a central site, using state-of-the-art commercial modem technology, through a single standard telephone line. By locating high-cost resources at this central site and sharing these resources and their costs among the school districts throughout a region, a practical, efficient, and affordable infrastructure for providing scalable Internet connectivity has been developed. As the demand for faster Internet access grows, the model has a simple expansion path that eliminates the need to replace major system components and re-train personnel. Observations of typical Internet usage within an environment, particularly school classrooms, have shown that after an initial period of 'surfing,' the Internet traffic becomes repetitive. By automatically storing requested Internet information on a high-capacity networked disk drive at the local site (network-based disk caching), then updating this information only when it changes, well over 80 percent of the Internet traffic that leaves a location can be eliminated by retrieving the information from the local disk cache.
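The update-on-change caching idea described above can be sketched in a few lines. The `LocalDiskCache` class and its validator interface are hypothetical simplifications (a real cache would issue a conditional HTTP request rather than receive the origin's validator from the caller):

```python
# Minimal sketch of network-based disk caching: serve repeat requests from
# a local store and refetch only when the origin's validator (e.g., a
# Last-Modified stamp) changes. Interfaces are illustrative, not NASA's.

class LocalDiskCache:
    def __init__(self, fetch):
        self.fetch = fetch          # fetch(url) -> (validator, body)
        self.store = {}             # url -> (validator, body)
        self.hits = self.misses = 0

    def get(self, url, origin_validator):
        cached = self.store.get(url)
        if cached and cached[0] == origin_validator:
            self.hits += 1
            return cached[1]        # served locally; no upstream traffic
        self.misses += 1
        validator, body = self.fetch(url)
        self.store[url] = (validator, body)
        return body

calls = {"n": 0}
def fetch(url):
    calls["n"] += 1
    return ("v1", "body-" + url)

cache = LocalDiskCache(fetch)
first = cache.get("/index.html", "v1")   # miss: one upstream fetch
second = cache.get("/index.html", "v1")  # hit: served from local disk
```

With repetitive traffic, the hit ratio approaches the 80-percent figure quoted above, and only hits that fail validation generate upstream traffic.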

  3. The Effects of Computer-Assisted Instruction on Student Achievement in Addition and Subtraction at First Grade Level.

    ERIC Educational Resources Information Center

    Spivey, Patsy M.

    This study was conducted to determine whether the traditional classroom approach to instruction involving the addition and subtraction of number facts (digits 0-6) is more or less effective than the traditional classroom approach plus a commercially-prepared computer game. A pretest-posttest control group design was used with two groups of first…

  4. Identification of Students' Intuitive Mental Computational Strategies for 1, 2 and 3 Digits Addition and Subtraction: Pedagogical and Curricular Implications

    ERIC Educational Resources Information Center

    Ghazali, Munirah; Alias, Rohana; Ariffin, Noor Asrul Anuar; Ayub, Ayminsyadora

    2010-01-01

    This paper reports on a study to examine mental computation strategies used by Year 1, Year 2, and Year 3 students to solve addition and subtraction problems. The participants in this study were twenty five 7 to 9 year-old students identified as excellent, good and satisfactory in their mathematics performance from a school in Penang, Malaysia.…

  5. 12 CFR Appendix L to Part 226 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a)...

  6. An Example of the Application of Cost-Effectiveness Techniques in a Computer-Based Study Management System Evaluation.

    ERIC Educational Resources Information Center

    Stern, Hervey W.

    This paper considers some of the problems in implementing a cost-effectiveness analysis in training and education, and provides a specific example of an analysis that partially meets the cost-effectiveness analysis requirements. A computer-based study management system (SMS), which was implemented on a limited basis, was evaluated in the context…

  7. Engineering and environmental properties of thermally treated mixtures containing MSWI fly ash and low-cost additives.

    PubMed

    Polettini, A; Pomi, R; Trinci, L; Muntoni, A; Lo Mastro, S

    2004-09-01

    An experimental work was carried out to investigate the feasibility of application of a sintering process to mixtures composed of Municipal Solid Waste Incinerator (MSWI) fly ash and low-cost additives (waste from feldspar production and cullet). The proportions of the three constituents were varied to adjust the mixture compositions to within the optimal range for sintering. The material was compacted in cylindrical specimens and treated at 1100 and 1150 degrees C for 30 and 60 min. Engineering and environmental characteristics including weight loss, dimensional changes, density, open porosity, mechanical strength, chemical stability and leaching behavior were determined for the treated material, allowing the relationship between the degree of sintering and both mixture composition and treatment conditions to be singled out. Mineralogical analyses detected the presence of neo-formation minerals from the pyroxene group. Estimation of the extent of metal loss from the samples indicated that the potential for volatilization of species of Pb, Cd and Zn is still a matter of major concern when dealing with thermal treatment of incinerator ash. PMID:15268956

  8. Engineering and environmental properties of thermally treated mixtures containing MSWI fly ash and low-cost additives.

    PubMed

    Polettini, A; Pomi, R; Trinci, L; Muntoni, A; Lo Mastro, S

    2004-09-01

    An experimental work was carried out to investigate the feasibility of application of a sintering process to mixtures composed of Municipal Solid Waste Incinerator (MSWI) fly ash and low-cost additives (waste from feldspar production and cullet). The proportions of the three constituents were varied to adjust the mixture compositions to within the optimal range for sintering. The material was compacted in cylindrical specimens and treated at 1100 and 1150 degrees C for 30 and 60 min. Engineering and environmental characteristics including weight loss, dimensional changes, density, open porosity, mechanical strength, chemical stability and leaching behavior were determined for the treated material, allowing the relationship between the degree of sintering and both mixture composition and treatment conditions to be singled out. Mineralogical analyses detected the presence of neo-formation minerals from the pyroxene group. Estimation of the extent of metal loss from the samples indicated that the potential for volatilization of species of Pb, Cd and Zn is still a matter of major concern when dealing with thermal treatment of incinerator ash.

  9. 10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (a) The firm's real after-tax weighted average marginal cost of capital (K) is computed with equation... common stock expressed as a fraction. t=Marginal federal income tax rate for the current year. (b...% (B) The “beta” coefficient is computed with regression analysis techniques. The regression...

  10. Subsonic flutter analysis addition to NASTRAN. [for use with CDC 6000 series digital computers

    NASA Technical Reports Server (NTRS)

    Doggett, R. V., Jr.; Harder, R. L.

    1973-01-01

    A subsonic flutter analysis capability has been developed for NASTRAN, and a developmental version of the program has been installed on the CDC 6000 series digital computers at the Langley Research Center. The flutter analysis is of the modal type, uses doublet lattice unsteady aerodynamic forces, and solves the flutter equations by using the k-method. Surface and one-dimensional spline functions are used to transform from the aerodynamic degrees of freedom to the structural degrees of freedom. Some preliminary applications of the method to a beamlike wing, a platelike wing, and a platelike wing with a folded tip are compared with existing experimental and analytical results.

  11. Restructuring the introductory physics lab with the addition of computer-based laboratories.

    PubMed

    Pierri-Galvao, Monica

    2011-07-01

Nowadays, data acquisition software and sensors are widely used in introductory physics laboratories. This allows students to spend more time exploring the data collected by the computer, hence focusing more on the physical concepts. Very often, a faculty member is faced with the challenge of updating or introducing a microcomputer-based laboratory (MBL) at his or her institution. This article provides a list of experiments and the equipment needed to convert about half of the traditional labs in a 1-year introductory physics lab course into MBLs.

  12. Addition of higher order plate and shell elements into NASTRAN computer program

    NASA Technical Reports Server (NTRS)

    Narayanaswami, R.; Goglia, G. L.

    1976-01-01

    Two higher order plate elements, the linear strain triangular membrane element and the quintic bending element, along with a shallow shell element, suitable for inclusion into the NASTRAN (NASA Structural Analysis) program are described. Additions to the NASTRAN Theoretical Manual, Users' Manual, Programmers' Manual and the NASTRAN Demonstration Problem Manual, for inclusion of these elements into the NASTRAN program are also presented.

  13. Cost justification for an interactive Computer-Aided Design Drafting/Manufacturing system

    SciTech Connect

    Norton, F.J.

    1980-09-23

    Many factors influence the capital investment decision. System costs and benefits are weighed by methods of financial analysis to determine the advisability of an investment. Capital, expense, and benefits as related to Interactive Computer-Aided Design Drafting/Manufacturing (CADD/M) Systems are discussed and model calculations are included. An example is treated by the simple payback method and the more sophisticated methods of Net Present Value (NPV) and Internal Rate of Return (IRR). The NPV and IRR approaches include in the calculation the time value of money and provide a sounder foundation on which to base the purchase decision. It is hoped that an understanding of these techniques by technical personnel will make an optimum system purchase more likely.
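The two appraisal methods the abstract names can be sketched directly: NPV discounts each year's net benefit, and IRR is the rate at which NPV crosses zero (found here by bisection). The cash flows are a hypothetical CADD/M purchase, not figures from the report:

```python
# NPV and IRR for a capital purchase decision. Cash flows are illustrative:
# an up-front system cost followed by four years of net benefits.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the up-front cost at t = 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return via bisection on the NPV sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

flows = [-100_000, 40_000, 40_000, 40_000, 40_000]  # hypothetical system
```

At a 10% cost of capital the NPV is positive, so the purchase clears the hurdle rate; the IRR (here between 21% and 22%) gives the same verdict independent of a chosen discount rate, which is why the abstract calls these a sounder foundation than simple payback.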

  14. Processing power limits social group size: computational evidence for the cognitive costs of sociality.

    PubMed

    Dávid-Barrett, T; Dunbar, R I M

    2013-08-22

    Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses.

  15. Processing power limits social group size: computational evidence for the cognitive costs of sociality

    PubMed Central

    Dávid-Barrett, T.; Dunbar, R. I. M.

    2013-01-01

    Sociality is primarily a coordination problem. However, the social (or communication) complexity hypothesis suggests that the kinds of information that can be acquired and processed may limit the size and/or complexity of social groups that a species can maintain. We use an agent-based model to test the hypothesis that the complexity of information processed influences the computational demands involved. We show that successive increases in the kinds of information processed allow organisms to break through the glass ceilings that otherwise limit the size of social groups: larger groups can only be achieved at the cost of more sophisticated kinds of information processing that are disadvantageous when optimal group size is small. These results simultaneously support both the social brain and the social complexity hypotheses. PMID:23804623
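A minimal, hypothetical agent-based coordination sketch in the spirit of this hypothesis (not the authors' model): agents repeatedly move toward the average of the k peers they can "process", so richer information processing (larger k) lets larger groups reach consensus:

```python
import random

# Toy coordination model: each agent holds a real-valued opinion and moves
# halfway toward the mean of k randomly observed peers per step. The number
# of peers an agent can process stands in for its information-processing
# capacity. All parameters are illustrative.

def dispersion(states):
    m = sum(states) / len(states)
    return max(abs(s - m) for s in states)

def coordinate(n_agents, k, steps, seed=1):
    rng = random.Random(seed)
    states = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        new_states = []
        for i, s in enumerate(states):
            peers = rng.sample([j for j in range(n_agents) if j != i], k)
            target = sum(states[j] for j in peers) / k
            new_states.append(s + 0.5 * (target - s))  # move toward sample mean
        states = new_states
    return dispersion(states)
```

Under these assumptions a group whose agents process the whole group (k = n - 1) contracts toward consensus geometrically, while agents with small k converge more noisily; the model is only a cartoon of the coordination-capacity link argued above.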

  16. Low-cost monitoring of patients during unsupervised robot/computer assisted motivating stroke rehabilitation.

    PubMed

    Johnson, Michelle J; Shakya, Yuniya; Strachota, Elaine; Ahamed, Sheikh Iqbal

    2011-02-01

    There is a need for effective stroke rehabilitation systems that can be used in undersupervised/unsupervised environments such as the home to assist in improving and/or sustaining functional outcomes. We determined the stability, accuracy and usability of an extremely low-cost mobile robot for use with a robot/computer motivating rehabilitation device, TheraDrive. The robot provided cues to discourage excessive trunk movements and to encourage arm movements. The mobile robot system was positively received by potential users, and it was accurate and stable over time. Feedback from users suggests that finding the optimal frequency and type of encouragement and corrective feedback given by the robot helper will be critical for long-term acceptance.

  17. A simple, low-cost, data logging pendulum built from a computer mouse

    SciTech Connect

    Gintautas, Vadas; Hubler, Alfred

    2009-01-01

    Lessons and homework problems involving a pendulum are often a big part of introductory physics classes and laboratory courses from high school to undergraduate levels. Although laboratory equipment for pendulum experiments is commercially available, it is often expensive and may not be affordable for teachers on fixed budgets, particularly in developing countries. We present a low-cost, easy-to-build rotary sensor pendulum using the existing hardware in a ball-type computer mouse. We demonstrate how this apparatus may be used to measure both the frequency and coefficient of damping of a simple physical pendulum. This easily constructed laboratory equipment makes it possible for all students to have hands-on experience with one of the most important simple physical systems.
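The two quantities the abstract says the apparatus can measure (oscillation frequency and damping coefficient) can be recovered from logged peak data. The numbers below are synthetic, assuming an amplitude envelope A(t) = A0·exp(-gamma·t):

```python
import math

# Recover period and damping from the times and amplitudes of successive
# swing peaks: period from peak spacing, damping from the logarithmic
# decrement of the amplitude envelope. Data here are synthetic.

def period_and_damping(peak_times, peak_amps):
    n = len(peak_times)
    period = (peak_times[-1] - peak_times[0]) / (n - 1)
    # Log-decrement fit between the first and last peak.
    gamma = (math.log(peak_amps[0] / peak_amps[-1])
             / (peak_times[-1] - peak_times[0]))
    return period, gamma

# Synthetic logger output for a pendulum with T = 1.2 s and gamma = 0.05 1/s:
times = [1.2 * k for k in range(6)]
amps = [0.3 * math.exp(-0.05 * t) for t in times]
T, g = period_and_damping(times, amps)
```

With real mouse-encoder data the same fit applies after peak detection; averaging over many peaks, as here, suppresses the quantization noise of the low-cost rotary sensor.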

  18. Large Advanced Space Systems (LASS) computer-aided design program additions

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.

    1982-01-01

The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer-aided design program that permits integrating and interfacing of required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid-body controls module was modified to include solar pressure effects. The new model generator modules and appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, the antenna primary beam, and attitude control requirements.

  19. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
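The nearest-physical-state step can be sketched as a Euclidean projection of the candidate eigenvalues onto the probability simplex, which is (after an initial sort) a linear-time pass of the kind the abstract describes. This is an illustrative reimplementation, not the authors' code:

```python
import numpy as np

# Project the (possibly negative) eigenvalues of the candidate matrix mu
# onto the probability simplex, then rebuild the density matrix in mu's
# eigenbasis. Assumes the input eigenvalues sum to ~1 (trace preserved).

def closest_probability_vector(lam):
    """Closest vector in 2-norm with p >= 0 and sum(p) = sum(lam) = 1."""
    lam = np.asarray(lam, dtype=float)
    d = lam.size
    idx = np.argsort(lam)            # ascending
    p = lam.copy()
    deficit = 0.0
    # Walk from the smallest eigenvalue up: zero any entry that would be
    # pushed negative and spread the accumulated deficit over the rest.
    for k, i in enumerate(idx):
        share = deficit / (d - k)
        if p[i] + share < 0:
            deficit += p[i]
            p[i] = 0.0
        else:
            p[idx[k:]] += share
            break
    return p

def nearest_density_matrix(mu):
    """Closest physical state to Hermitian mu (trace ~1) in Frobenius norm."""
    w, v = np.linalg.eigh(mu)
    p = closest_probability_vector(w)
    return (v * p) @ v.conj().T      # scale eigenvector columns by p
```

For example, candidate eigenvalues (0.6, 0.5, -0.1) project to (0.55, 0.45, 0): the negative weight is removed and its deficit shared equally among the surviving eigenvalues, exactly the behavior the 2-norm demands.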

  20. Addition of visual noise boosts evoked potential-based brain-computer interface.

    PubMed

    Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili

    2014-05-14

Although noise has a proven beneficial role in brain function, there have been no prior attempts to exploit the stochastic resonance effect in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance due to enhancement of periodic components in brain responses, accompanied by suppression of higher harmonics. Offline results exhibited a bell-shaped, resonance-like dependence on noise, and online performance improvements of 7-36% were achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCIs in addressing human needs.

  1. Low cost, highly effective parallel computing achieved through a Beowulf cluster.

    PubMed

    Bitner, Marc; Skelton, Gordon

    2003-01-01

A Beowulf cluster is a means of bringing together several computers and using software and network components to make this cluster of computers appear and function as one computer with multiple parallel processors. A cluster of computers can provide computing power comparable to that usually found only in very expensive supercomputers or servers.

  2. Theory and computer simulation for the equation of state of additive hard-disk fluid mixtures

    NASA Astrophysics Data System (ADS)

    Barrio, C.; Solana, J. R.

    2001-01-01

    A procedure previously developed by the authors to obtain the equation of state for a mixture of additive hard spheres on the basis of a pure fluid equation of state is applied here to a binary mixture of additive hard disks in two dimensions. The equation of state depends on two parameters which are determined from the second and third virial coefficients for the mixture, which are known exactly. Results are compared with Monte Carlo calculations which are also reported. The agreement between theory and simulation is very good. For the fourth and fifth virial coefficients of the mixture, the equation of state gives results which are also in close agreement with exact numerical values reported in the literature.

  3. Improving Classroom Performance in Underachieving Preadolescents: The Additive Effects of Response Cost to a School-Home Note System.

    ERIC Educational Resources Information Center

    McCain, Alyson P.; Kelley, Mary Lou

    1994-01-01

    Compared the effectiveness of a school-home note with and without response cost on the disruptive and on-task behavior of three preadolescents. Inclusion of response cost was associated with marked improvements in attentiveness and stabilization of disruptive behavior as compared with that obtained with a traditional school-home note. (LKS)

  4. Low-cost, high-performance and efficiency computational photometer design

    NASA Astrophysics Data System (ADS)

    Siewert, Sam B.; Shihadeh, Jeries; Myers, Randall; Khandhar, Jay; Ivanov, Vitaly

    2014-05-01

Researchers at the University of Alaska Anchorage and the University of Colorado Boulder have built a low-cost, high-performance, high-efficiency, drop-in-place Computational Photometer (CP) to test in field applications ranging from port security and safety monitoring to environmental compliance monitoring and surveying. The CP integrates off-the-shelf visible-spectrum cameras with near- to long-wavelength infrared detectors and high-resolution digital snapshots in a single device. The proof of concept combines three or more detectors into a single multichannel imaging system that can time-correlate read-out, capture, and image-process all of the channels concurrently with high performance and energy efficiency. The dual-channel continuous read-out is combined with a third high-definition digital snapshot capability and has been designed using an FPGA (Field Programmable Gate Array) to capture, decimate, down-convert, re-encode, and transform images from two standard-definition CCD (Charge Coupled Device) cameras at 30 Hz. The continuous stereo vision can be time-correlated to megapixel high-definition snapshots. This proof of concept has been fabricated as a four-layer PCB (Printed Circuit Board) suitable for use in education and research for low-cost, high-efficiency field monitoring applications that need multispectral and three-dimensional imaging capabilities. Initial testing is in progress and includes field testing in ports, potential test flights in unmanned aerial systems, and future planned missions to image harsh environments in the Arctic, including volcanic plumes, ice formation, and Arctic marine life.

  5. Least-squares reverse-time migration with cost-effective computation and memory storage

    NASA Astrophysics Data System (ADS)

    Liu, Xuejian; Liu, Yike; Huang, Xiaogang; Li, Peng

    2016-06-01

Least-squares reverse-time migration (LSRTM), which involves several iterations of reverse-time migration (RTM) and Born modeling procedures, can provide subsurface images with better-balanced amplitudes, higher resolution and fewer artifacts than standard migration. However, the same source wavefield is repetitively computed during the Born modeling and RTM procedures of different iterations. We developed a new LSRTM method with modified excitation-amplitude imaging conditions, where the source wavefield for RTM is forward propagated only once while the maximum amplitude and its excitation time at each grid point are stored. Then, the RTM procedure of each iteration only involves: (1) backward propagation of the residual between Born-modeled and acquired data, and (2) implementation of the modified excitation-amplitude imaging condition by multiplying the maximum amplitude by the back-propagated data residuals only at the grid points that satisfy the imaging time at each time step. For a complex model, two or three local peak amplitudes and corresponding traveltimes should be identified and stored for all grid points so that multi-arrival information of the source wavefield can be utilized for imaging. Numerical experiments on a three-layer model and the Marmousi2 model demonstrate that the proposed LSRTM method saves substantial computation and memory cost.
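The excitation-amplitude bookkeeping can be sketched as follows. Array shapes and function names are hypothetical, and only a single stored peak per grid point is kept (the abstract also discusses storing two or three for multi-arrival imaging):

```python
import numpy as np

# Sketch of the modified excitation-amplitude imaging condition: one forward
# run keeps only the peak source amplitude and its excitation time step per
# grid point; each RTM iteration then images a grid point only at its stored
# excitation step, so the full source wavefield is never recomputed.

def forward_store_peaks(src_wavefield):
    """src_wavefield: (nt, nz, nx). Returns per-grid peak amp and its step."""
    peak_step = np.argmax(np.abs(src_wavefield), axis=0)            # (nz, nx)
    peak_amp = np.take_along_axis(src_wavefield, peak_step[None], axis=0)[0]
    return peak_amp, peak_step

def rtm_excitation_imaging(peak_amp, peak_step, receiver_wavefield):
    """Accumulate an image by multiplying the stored peak amplitude with the
    back-propagated data residual, only at each point's excitation step."""
    nt = receiver_wavefield.shape[0]
    image = np.zeros_like(peak_amp)
    for it in range(nt):
        mask = (peak_step == it)
        image[mask] += peak_amp[mask] * receiver_wavefield[it][mask]
    return image
```

The memory saving is the point: instead of the full (nt, nz, nx) source wavefield, each iteration needs only two (nz, nx) arrays per stored peak.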

  6. Enantioselective conjugate addition of nitro compounds to α,β-unsaturated ketones: an experimental and computational study.

    PubMed

    Manzano, Rubén; Andrés, José M; Álvarez, Rosana; Muruzábal, María D; de Lera, Ángel R; Pedrosa, Rafael

    2011-05-16

    A series of chiral thioureas derived from easily available diamines, prepared from α-amino acids, have been tested as catalysts in the enantioselective Michael additions of nitroalkanes to α,β-unsaturated ketones. The best results are obtained with the bifunctional catalyst prepared from L-valine. This thiourea promotes the reaction with high enantioselectivities and chemical yields for aryl/vinyl ketones, but the enantiomeric ratio for alkyl/vinyl derivatives is very modest. The addition of substituted nitromethanes led to the corresponding adducts with excellent enantioselectivity but very poor diastereoselectivity. Evidence for the isomerization of the addition products has been obtained from the reaction of chalcone with [D(3)]nitromethane, which shows that the final addition products epimerize under the reaction conditions. The epimerization explains the low diastereoselectivity observed in the formation of adducts with two adjacent tertiary stereocenters. Density functional studies of the transition structures corresponding to two alternative activation modes of the nitroalkanes and α,β-unsaturated ketones by the bifunctional organocatalyst have been carried out at the B3LYP/3-21G* level. The computations are consistent with a reaction model involving the Michael addition of the thiourea-activated nitronate to the ketone activated by the protonated amine of the organocatalyst. The enantioselectivities predicted by the computations are consistent with the experimental values obtained for aryl- and alkyl-substituted α,β-unsaturated ketones.

  7. Low-Dose Chest Computed Tomography for Lung Cancer Screening Among Hodgkin Lymphoma Survivors: A Cost-Effectiveness Analysis

    SciTech Connect

    Wattson, Daniel A.; Hunink, M.G. Myriam; DiPiro, Pamela J.; Das, Prajnan; Hodgson, David C.; Mauch, Peter M.; Ng, Andrea K.

    2014-10-01

    Purpose: Hodgkin lymphoma (HL) survivors face an increased risk of treatment-related lung cancer. Screening with low-dose computed tomography (LDCT) may allow detection of early stage, resectable cancers. We developed a Markov decision-analytic and cost-effectiveness model to estimate the merits of annual LDCT screening among HL survivors. Methods and Materials: Population databases and HL-specific literature informed key model parameters, including lung cancer rates and stage distribution, cause-specific survival estimates, and utilities. Relative risks accounted for radiation therapy (RT) technique, smoking status (>10 pack-years or current smokers vs not), age at HL diagnosis, time from HL treatment, and excess radiation from LDCTs. LDCT assumptions, including expected stage-shift, false-positive rates, and likely additional workup were derived from the National Lung Screening Trial and preliminary results from an internal phase 2 protocol that performed annual LDCTs in 53 HL survivors. We assumed a 3% discount rate and a willingness-to-pay (WTP) threshold of $50,000 per quality-adjusted life year (QALY). Results: Annual LDCT screening was cost effective for all smokers. A male smoker treated with mantle RT at age 25 achieved maximum QALYs by initiating screening 12 years post-HL, with a life expectancy benefit of 2.1 months and an incremental cost of $34,841/QALY. Among nonsmokers, annual screening produced a QALY benefit in some cases, but the incremental cost was not below the WTP threshold for any patient subsets. As age at HL diagnosis increased, earlier initiation of screening improved outcomes. Sensitivity analyses revealed that the model was most sensitive to the lung cancer incidence and mortality rates and expected stage-shift from screening. Conclusions: HL survivors are an important high-risk population that may benefit from screening, especially those treated in the past with large radiation fields including mantle or involved-field RT. Screening
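The decision rule used in this kind of analysis reduces to an incremental cost-effectiveness ratio compared against the willingness-to-pay threshold. The strategy costs and QALYs below are illustrative, not the study's results:

```python
# Incremental cost-effectiveness ratio (ICER): a new strategy is adopted
# when its extra cost per QALY gained falls below the willingness-to-pay
# threshold. Numbers are hypothetical.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio, $ per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

WTP = 50_000  # $ per QALY, the threshold used in the study

# Hypothetical screening vs. no-screening strategies:
ratio = icer(cost_new=18_000, qaly_new=14.30, cost_old=11_000, qaly_old=14.10)
cost_effective = ratio <= WTP
```

This is the comparison behind the $34,841/QALY figure quoted above: the ratio, not the absolute cost, is what is tested against the $50,000 threshold.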

  8. Projections of costs, financing, and additional resource requirements for low- and lower middle-income country immunization programs over the decade, 2011-2020.

    PubMed

    Gandhi, Gian; Lydon, Patrick; Cornejo, Santiago; Brenzel, Logan; Wrobel, Sandra; Chang, Hugh

    2013-04-18

    The Decade of Vaccines Global Vaccine Action Plan has outlined a set of ambitious goals to broaden the impact and reach of immunization across the globe. A projections exercise has been undertaken to assess the costs, financing availability, and additional resource requirements to achieve these goals through the delivery of vaccines against 19 diseases across 94 low- and middle-income countries for the period 2011-2020. The exercise draws upon data from existing published and unpublished global forecasts, country immunization plans, and costing studies. A combination of an ingredients-based approach and use of approximations based on past spending has been used to generate vaccine and non-vaccine delivery costs for routine programs, as well as supplementary immunization activities (SIAs). Financing projections focused primarily on support from governments and the GAVI Alliance. Cost and financing projections are presented in constant 2010 US dollars (US$). Cumulative total costs for the decade are projected to be US$57.5 billion, with 85% for routine programs and the remaining 15% for SIAs. Delivery costs account for 54% of total cumulative costs, and vaccine costs make up the remainder. A conservative estimate of total financing for immunization programs is projected to be $34.3 billion over the decade, with country governments financing 65%. These projections imply a cumulative funding gap of $23.2 billion. About 57% of the total resources required to close the funding gap are needed just to maintain existing programs and scale up other currently available vaccines (i.e., before adding in the additional costs of vaccines still in development). Efforts to mobilize additional resources, manage program costs, and establish mutual accountability between countries and development partners will all be necessary to ensure the goals of the Decade of Vaccines are achieved. Establishing or building on existing mechanisms to more comprehensively track resources and
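The headline figures above obey a simple identity worth making explicit: the cumulative funding gap is projected total cost minus projected financing (all in constant 2010 US$, billions):

```python
# Arithmetic check on the projection figures quoted in the abstract.
total_cost = 57.5          # US$ billions, 2011-2020 cumulative
financing = 34.3           # conservative financing projection
country_financed = 0.65    # share of financing from country governments

funding_gap = total_cost - financing           # the quoted $23.2B gap
country_funds = financing * country_financed   # government contribution
```

The same bookkeeping splits the total into the quoted shares: 85% routine vs. 15% SIAs, and 54% delivery vs. 46% vaccines.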

  9. Improving the precision and speed of Euler angles computation from low-cost rotation sensor data.

    PubMed

    Janota, Aleš; Šimák, Vojtech; Nemec, Dušan; Hrbček, Jozef

    2015-01-01

This article compares three different algorithms used to compute Euler angles from data obtained by an angular rate sensor (e.g., a MEMS gyroscope): algorithms based on a rotation matrix, on transforming angular velocity into time derivatives of the Euler angles, and on a unit quaternion expressing the rotation. The algorithms are compared by their computational efficiency and by the accuracy of the Euler angle estimates. If the attitude of the object is computed only from gyroscope data, the quaternion-based algorithm appears most suitable (having similar accuracy to the matrix-based algorithm, but taking approx. 30% fewer clock cycles on an 8-bit microcomputer). Integration of the time derivatives of the Euler angles has a singularity and therefore is not accurate over the full range of the object's attitude. Since the error in every real gyroscope system tends to increase with time due to offset and thermal drift, we also propose compensation measures based on additional sensors (a magnetic compass and an accelerometer). Vector data from these secondary sensors has to be transformed into the inertial frame of reference. While transforming a vector by a matrix is slightly faster than doing so by quaternion, a compensated sensor system using a matrix-based algorithm can be approximately 10% faster than one using quaternions (depending on implementation and hardware). PMID:25806874

  10. Improving the Precision and Speed of Euler Angles Computation from Low-Cost Rotation Sensor Data

    PubMed Central

    Janota, Aleš; Šimák, Vojtech; Nemec, Dušan; Hrbček, Jozef

    2015-01-01

    This article compares three algorithms for computing Euler angles from data obtained by an angular rate sensor (e.g., a MEMS gyroscope): algorithms based on a rotation matrix, on transforming the angular velocity into time derivatives of the Euler angles, and on a unit quaternion expressing the rotation. The algorithms are compared by their computational efficiency and by the accuracy of their Euler angle estimates. If the attitude of the object is computed only from gyroscope data, the quaternion-based algorithm appears most suitable (it has accuracy similar to the matrix-based algorithm but takes approx. 30% fewer clock cycles on an 8-bit microcomputer). Integrating the time derivatives of the Euler angles involves a singularity and is therefore not accurate over the full range of the object's attitude. Since the error of every real gyroscope system tends to grow with time due to offset and thermal drift, we also propose compensation measures based on additional sensors (a magnetic compass and an accelerometer). The vector data of these secondary sensors have to be transformed into the inertial frame of reference. While transforming a vector by a matrix is slightly faster than doing so by a quaternion, the compensated sensor system using a matrix-based algorithm can be approximately 10% faster than the system using quaternions (depending on implementation and hardware). PMID:25806874

  11. Improving the precision and speed of Euler angles computation from low-cost rotation sensor data.

    PubMed

    Janota, Aleš; Šimák, Vojtech; Nemec, Dušan; Hrbček, Jozef

    2015-03-23

    This article compares three algorithms for computing Euler angles from data obtained by an angular rate sensor (e.g., a MEMS gyroscope): algorithms based on a rotation matrix, on transforming the angular velocity into time derivatives of the Euler angles, and on a unit quaternion expressing the rotation. The algorithms are compared by their computational efficiency and by the accuracy of their Euler angle estimates. If the attitude of the object is computed only from gyroscope data, the quaternion-based algorithm appears most suitable (it has accuracy similar to the matrix-based algorithm but takes approx. 30% fewer clock cycles on an 8-bit microcomputer). Integrating the time derivatives of the Euler angles involves a singularity and is therefore not accurate over the full range of the object's attitude. Since the error of every real gyroscope system tends to grow with time due to offset and thermal drift, we also propose compensation measures based on additional sensors (a magnetic compass and an accelerometer). The vector data of these secondary sensors have to be transformed into the inertial frame of reference. While transforming a vector by a matrix is slightly faster than doing so by a quaternion, the compensated sensor system using a matrix-based algorithm can be approximately 10% faster than the system using quaternions (depending on implementation and hardware).

  12. 25 CFR 171.555 - What additional costs will I incur if I am granted a Payment Plan?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... AND WATER IRRIGATION OPERATION AND MAINTENANCE Financial Matters: Assessments, Billing, and... following costs: (a) An administrative fee to process your Payment Plan, as required by 31 CFR 901.9....

  13. 25 CFR 171.555 - What additional costs will I incur if I am granted a Payment Plan?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... AND WATER IRRIGATION OPERATION AND MAINTENANCE Financial Matters: Assessments, Billing, and... following costs: (a) An administrative fee to process your Payment Plan, as required by 31 CFR 901.9....

  14. 25 CFR 171.555 - What additional costs will I incur if I am granted a Payment Plan?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... AND WATER IRRIGATION OPERATION AND MAINTENANCE Financial Matters: Assessments, Billing, and... following costs: (a) An administrative fee to process your Payment Plan, as required by 31 CFR 901.9....

  15. 25 CFR 171.555 - What additional costs will I incur if I am granted a Payment Plan?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... AND WATER IRRIGATION OPERATION AND MAINTENANCE Financial Matters: Assessments, Billing, and... following costs: (a) An administrative fee to process your Payment Plan, as required by 31 CFR 901.9....

  16. 25 CFR 171.555 - What additional costs will I incur if I am granted a Payment Plan?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... AND WATER IRRIGATION OPERATION AND MAINTENANCE Financial Matters: Assessments, Billing, and... following costs: (a) An administrative fee to process your Payment Plan, as required by 31 CFR 901.9....

  17. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

    NASA Astrophysics Data System (ADS)

    Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández

    2013-08-01

    Although technology for the automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main obstacles to adoption are the cost and complexity of the setup procedures. In this paper, Eyegrade, a system for the automatic grading of multiple choice exams, is presented. While most current solutions are based on expensive scanners, Eyegrade offers a truly low-cost solution requiring only a regular off-the-shelf webcam. Additionally, Eyegrade performs both mark recognition and optical character recognition of handwritten student identification numbers, which avoids the use of bubbles in the answer sheet. Compared with similar webcam-based systems, the user interface in Eyegrade has been designed to provide a more efficient and error-free data collection procedure. The tool has been validated with a set of experiments that show its ease of use (both setup and operation), the reduction in grading time, and an increase in the reliability of the results when compared with conventional, more expensive systems.
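
    The mark-recognition step of such webcam graders can be reduced to a dark-pixel fill test per answer bubble. The sketch below is a hypothetical simplification, not Eyegrade's actual pipeline; the function name, threshold values, and the ambiguity rule are assumptions.

```python
def classify_bubbles(cells, fill_threshold=0.35):
    """cells: list of 2-D grayscale bubble images (lists of pixel rows,
    values 0-255). A bubble counts as filled when its fraction of dark
    pixels (< 128) reaches fill_threshold. Returns the index of the
    single filled bubble, or None when zero or several qualify
    (ambiguous marks are left for manual review)."""
    def fill_fraction(cell):
        pixels = [p for row in cell for p in row]
        return sum(p < 128 for p in pixels) / len(pixels)

    marked = [i for i, c in enumerate(cells) if fill_fraction(c) >= fill_threshold]
    return marked[0] if len(marked) == 1 else None
```

    A real system would first locate and deskew the answer grid (e.g., from corner markers in the webcam frame) before cropping the per-bubble cells fed to a classifier like this.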

  18. The Applications of Computers in Education in Developing Countries--with Specific Reference to the Cost-Effectiveness of Computer-Assisted Instruction.

    ERIC Educational Resources Information Center

    Lai, Kwok-Wing

    Designed to examine the application and cost-effectiveness of computer-assisted instruction (CAI) for secondary education in developing countries, this document is divided into eight chapters. A general introduction defines the research problem, describes the research methodology, and provides definitions of key terms used throughout the paper.…

  19. Accounting for polarization cost when using fixed charge force fields. II. Method and application for computing effect of polarization cost on free energy of hydration.

    PubMed

    Swope, William C; Horn, Hans W; Rice, Julia E

    2010-07-01

    Polarization cost is the energy needed to distort the wave function of a molecule from one appropriate to the gas phase to one appropriate for some condensed phase. Although it is not currently standard practice, polarization cost should be considered when deriving improved fixed charge force fields based on fits to certain types of experimental data and when using such force fields to compute observables that involve changes in molecular polarization. Building on earlier work, we present mathematical expressions and a method to estimate the effect of polarization cost on free energy and enthalpy implied by a charge model meant to represent a solvated state. The charge model can be any combination of point charges, higher-order multipoles, or even distributed charge densities, as long as they do not change in response to environment. The method is illustrated by computing the effect of polarization cost on free energies of hydration for the neutral amino acid side chain analogues as predicted using two popular fixed charge force fields and one based on electron densities computed using quantum chemistry techniques that employ an implicit model to represent aqueous solvent. From comparison of the computed and experimental hydration free energies, we find that two commonly used force fields are too underpolarized in their description of the solute-water interaction. On the other hand, a charge model based on the charge density from a hybrid density functional calculation that used an implicit model for aqueous solvent performs well for hydration free energies of these molecules after the correction for dipole polarization is applied. As such, an improved description of the density (e.g., B3LYP, MP2) in conjunction with an implicit solvent (e.g., PCM) or explicit solvent (e.g., QM/MM) approach may offer promise as a starting point for the development of improved fixed charge models for force fields.

  20. Computations on the primary photoreaction of Br2 with CO2: stepwise vs concerted addition of Br atoms.

    PubMed

    Xu, Kewei; Korter, Timothy M; Braiman, Mark S

    2015-04-01

    It was proposed previously that Br2-sensitized photolysis of liquid CO2 proceeds through a metastable primary photoproduct, CO2Br2. Possible mechanisms for such a photoreaction are explored here computationally. First, it is shown that the CO2Br radical is not stable in any geometry. This rules out a free-radical mechanism, for example, photochemical splitting of Br2 followed by stepwise addition of Br atoms to CO2, which in turn accounts for the lack of previously observed Br2 + CO2 photochemistry in the gas phase. A possible alternative mechanism in the liquid phase is formation of a weakly bound CO2:Br2 complex, followed by concerted photoaddition of Br2. This hypothesis is suggested by the previously published spectroscopic detection of a binary CO2:Br2 complex in the supersonically cooled gas phase. We compute a global binding-energy minimum of -6.2 kJ mol(-1) for such complexes, in a linear geometry. Two additional local minima were computed for perpendicular (C2v) and nearly parallel asymmetric planar geometries, both with binding energies near -5.4 kJ mol(-1). In these two latter geometries, C-Br and O-Br bond distances are simultaneously in the range of 3.5-3.8 Å, that is, perhaps suitable for a concerted photoaddition under the temperature and pressure conditions where Br2 + CO2 photochemistry has been observed.

  1. Phase Transition in Computing Cost of Overconstrained NP-Complete 3-SAT Problems

    NASA Astrophysics Data System (ADS)

    Woodson, Adam; O'Donnell, Thomas; Maniloff, Peter

    2002-03-01

    Many intractable, NP-Complete problems such as Traveling Salesman (TSP) and 3-Satisfiability (3-Sat), which arise in hundreds of computer science, industrial, and commercial applications, are now known to exhibit phase transitions in computational cost. While these problems appear not to have any structure which would make them amenable to attack with quantum computing, their critical behavior may allow physical insights derived from statistical mechanics and critical theory to shed light on these computationally ``hardest" of problems. While computational theory indicates that ``the intractability of the NP-Complete class resides solely in the exponential growth of the possible solutions" with the number of variables, n, the present work instead investigates the complex patterns of ``overlap" amongst 3-SAT clauses (their combined effects) when n-tuples of these act in succession to reduce the space of valid solutions. An exhaustive-search algorithm was used to eliminate `bad' states from amongst the `good' states residing within the spaces of all 2^n possible solutions of randomly generated 3-Sat problems. No backtracking or optimization heuristics were employed, nor was problem structure exploited (i.e., typical cases were generated), and the (k=3)-Sat propositional logic problems generated were in standard conjunctive normal form (CNF). Each problem had an effectively infinite number of clauses, m (i.e., with r = m/n >= 10), to ensure that every problem would not be satisfiable (i.e., that each would fail), and duplicate clauses were not permitted. This process was repeated for each of several low values of n (i.e., 4 <= n <= 20). The entire history of solution-state elimination as successive clauses were applied was archived until, in each instance, sufficient clauses had been applied to kill all possible solutions. An asymmetric, sigmoid-shaped phase transition is observed in Fg = F_g(m'/n), the fraction of the original 2^n ``good" solutions remaining valid as a
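
    The clause-by-clause elimination of `good' states described above can be reconstructed as a toy sketch. Assumed details (not stated in the abstract): clauses draw three distinct variables, duplicates are rejected, and the surviving fraction is recorded after each clause.

```python
import itertools
import random

def surviving_fractions(n, num_clauses, seed=0):
    """Exhaustively track the fraction of all 2^n truth assignments that
    still satisfy a random 3-SAT formula as clauses are applied one at a
    time (no backtracking, no heuristics, duplicate clauses rejected)."""
    rng = random.Random(seed)
    survivors = set(itertools.product((False, True), repeat=n))
    seen, fractions = set(), []
    while len(seen) < num_clauses:
        variables = rng.sample(range(n), 3)        # three distinct variables
        signs = tuple(rng.choice((False, True)) for _ in range(3))
        clause = tuple(sorted(zip(variables, signs)))
        if clause in seen:                         # no duplicate clauses
            continue
        seen.add(clause)
        # An assignment survives the clause if it satisfies any literal.
        survivors = {a for a in survivors if any(a[v] == s for v, s in clause)}
        fractions.append(len(survivors) / 2 ** n)
    return fractions
```

    Each clause over three distinct variables falsifies exactly 2^(n-3) assignments, so the first recorded fraction is always 7/8; the curve Fg(m/n) then decays toward zero as clauses accumulate.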

  2. Protecting child health and nutrition status with ready-to-use food in addition to food assistance in urban Chad: a cost-effectiveness analysis

    PubMed Central

    2013-01-01

    Background: Despite growing interest in the use of lipid nutrient supplements for preventing child malnutrition and morbidity, there is inconclusive evidence on the effectiveness, and no evidence on the cost-effectiveness, of this strategy. Methods: A cost-effectiveness analysis was conducted comparing costs and outcomes of two arms of a cluster randomized controlled trial implemented in eastern Chad during the 2010 hunger gap by Action contre la Faim France and Ghent University. This trial assessed the effect on child malnutrition and morbidity of a 5-month general distribution of staple rations, or staple rations plus a ready-to-use supplementary food (RUSF). RUSF was distributed to households with a child aged 6–36 months who was not acutely malnourished (weight-for-height ≥ 80% of the NCHS reference median and absence of bilateral pitting edema), to prevent acute malnutrition in these children. While the addition of RUSF to a staple ration did not result in a significant reduction in wasting rates, cost-effectiveness was assessed using successful secondary outcomes of cases of diarrhea and anemia (hemoglobin <110 g/L) averted among children receiving RUSF. Total costs of the program and incremental costs of RUSF and related management and logistics were estimated using accounting records and key informant interviews, and include costs to institutions and communities. An activity-based costing methodology was applied and incremental costs were calculated per episode of diarrhea and case of anemia averted. Results: Adding RUSF to a general food distribution increased total costs by 23%, resulting in an additional cost per child of 374 EUR, and an incremental cost per episode of diarrhea averted of 1,083 EUR and per case of anemia averted of 3,627 EUR. Conclusions: Adding RUSF to a staple ration was less cost-effective than other standard intervention options for averting diarrhea and anemia. This strategy holds potential to address a broad array of health and
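
    Figures like "1,083 EUR per episode of diarrhea averted" come from the standard incremental cost-effectiveness ratio, ΔCost / ΔEffect. A minimal sketch with hypothetical numbers (not the trial's data; the function name is illustrative):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    health outcome (e.g., EUR per episode of diarrhea averted)."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("no incremental effect; the ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical example: the supplemented program costs 1230 instead of
# 1000 and averts 25 episodes instead of 15, i.e., 230 extra cost for
# 10 extra episodes averted.
print(icer(1230, 1000, 25, 15))  # 23.0
```

    Comparing such ratios across interventions (as the Conclusions do) only makes sense when the outcome units match, which is why the study reports separate ratios for diarrhea and anemia.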

  3. Origins of stereoselectivity in the Diels-Alder addition of chiral hydroxyalkyl vinyl ketones to cyclopentadiene: a quantitative computational study.

    PubMed

    Bakalova, Snezhana M; Kaneti, Jose

    2008-12-18

    Modest basis set level MP2/6-31G(d,p) calculations on the Diels-Alder addition of S-1-alkyl-1-hydroxy-but-3-en-2-ones (1-hydroxy-1-alkyl methyl vinyl ketones) to cyclopentadiene correctly reproduce the trends in known experimental endo/exo and diastereoface selectivity. B3LYP theoretical results at the same or significantly higher basis set level, on the other hand, do not satisfactorily model observed endo/exo selectivities and are thus unsuitable for quantitative studies. The same is valid also with regard to subtle effects originating from, for example, conformational distributions of reactants. The latter shortcomings are not alleviated by the fact that observed diastereoface selectivities are well-reproduced by DFT calculations. Quantitative computational studies of large cycloaddition systems would require higher basis sets and better account for electron correlation than MP2, such as, for example, CCSD. Presently, however, with 30 or more non-hydrogen atoms, these computations are hardly feasible. We present quantitatively correct stereochemical predictions using a hybrid layered ONIOM computational approach, including the chiral carbon atom and the intramolecular hydrogen bond into a higher level, MP2/6-311G(d,p) or CCSD/6-311G(d,p), layer. Significant computational economy is achieved by taking account of surrounding bulky (alkyl) residues at 6-31G(d) in a low HF theoretical level layer. We conclude that theoretical calculations based on explicit correlated MO treatment of the reaction site are sufficiently reliable for the prediction of both endo/exo and diastereoface selectivity of Diels-Alder addition reactions. This is in line with the understanding of endo/exo selectivity originating from dynamic electron correlation effects of interacting pi fragments and diastereofacial selectivity originating from steric interactions of fragments outside of the Diels-Alder reaction site. PMID:18637663

  4. Financial Quality Control of In-Patient Chemotherapy in Germany: Are Additional Payments Cost-Covering for Pharmaco-Oncological Expenses?

    PubMed Central

    Jacobs, Volker R.; Mallmann, Peter

    2011-01-01

    Background: Cost-covering in-patient care is increasingly important for hospital providers in Germany, especially with regard to expensive oncological pharmaceuticals. Additional payments (Zusatzentgelte; ZE) on top of flat-rate diagnosis-related group (DRG) reimbursement can be claimed by hospitals for the in-patient use of selected medications. To verify cost coverage of in-patient chemotherapies, the costs of medication were compared to their revenues. Method: From January to June 2010, a retrospective cost-revenue study was performed at a German obstetrics/gynecology university clinic. The hospital pharmacy's list of in-patient oncological therapies for breast and gynecological cancer was checked for accuracy and compared with the documented ZEs and the costs and revenues for each oncological application. Results: N = 45 in-patient oncological therapies were identified in n = 18 patients, as well as n = 7 bisphosphonate applications; n = 11 ZEs were documented. Costs for oncological medication were € 33,752. The corresponding ZE revenues amounted to only € 13,980, resulting in a loss of € 19,772. None of the in-patient oncological therapies performed was cost-covering. Data discrepancies, incorrect documentation and cost attribution, and process aborts were identified. Conclusions: Routine financial quality control at the medicine-pharmacy administration interface has been implemented, with monthly comparison of costs and revenues, as well as admission status. Non-cost-covering therapies for in-patients should be converted to out-patient therapies. Necessary adjustments of clinic processes are being made according to these results, to avoid future losses. PMID:21673822

  5. A Low-Cost Computer-Controlled Arduino-Based Educational Laboratory System for Teaching the Fundamentals of Photovoltaic Cells

    ERIC Educational Resources Information Center

    Zachariadou, K.; Yiasemides, K.; Trougkakos, N.

    2012-01-01

    We present a low-cost, fully computer-controlled, Arduino-based, educational laboratory (SolarInsight) to be used in undergraduate university courses concerned with electrical engineering and physics. The major goal of the system is to provide students with the necessary instrumentation, software tools and methodology in order to learn fundamental…

  6. 12 CFR Appendix L to Part 226 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 12 Banks and Banking 3 2011-01-01 2011-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App....

  7. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions...

  8. 12 CFR Appendix K to Part 1026 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. K Appendix K to Part 1026—Total Annual Loan...

  9. 12 CFR Appendix K to Part 1026 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. K Appendix K to Part 1026—Total Annual Loan...

  10. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions...

  11. 12 CFR Appendix K to Part 226 - Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions...

  12. Resource Utilization and Costs during the Initial Years of Lung Cancer Screening with Computed Tomography in Canada

    PubMed Central

    Lam, Stephen; Tammemagi, Martin C.; Evans, William K.; Leighl, Natasha B.; Regier, Dean A.; Bolbocean, Corneliu; Shepherd, Frances A.; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R.; Mayo, John R.; McWilliams, Annette; Couture, Christian; English, John C.; Goffin, John; Hwang, David M.; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J.; Goss, Glenwood D.; Nicholas, Garth; Seely, Jean M.; Sekhon, Harmanjatinder S.; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N.; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D.; Tan, Wan C.; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J.

    2014-01-01

    Background: It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Methods: Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer’s perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. Results: The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400–$505) for the initial 18-months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553–$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone, ($47,792; 95% CI, $43,254–$52,200; p = 0.061). Conclusion: In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative intent treatment were lower than the average per-person cost of treating advanced stage lung cancer which infrequently results in a cure. PMID:25105438

  13. EPA evaluation of the SYNERGY-1 fuel additive under Section 511 of the Motor Vehicle Information and Cost Savings Act. Technical report

    SciTech Connect

    Syria, S.L.

    1981-06-01

    This document announces the conclusions of the EPA evaluation of the 'SYNERGY-1' device under provisions of Section 511 of the Motor Vehicle Information and Cost Savings Act. This additive is intended to improve fuel economy and exhaust emission levels of two and four cycle gasoline fueled engines.

  14. Low cost computer subsystem for the Solar Electric Propulsion Stage (SEPS)

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Solar Electric Propulsion Stage (SEPS) subsystem, which consists of the computer, custom input/output (I/O) unit, and tape recorder for mass storage of telemetry data, was studied. Computer software and interface requirements were developed along with computer and I/O unit design parameters. Redundancy implementation was emphasized. A reliability analysis was performed for the complete command computer subsystem. A SEPS fault-tolerant memory breadboard was constructed and its operation demonstrated.

  15. Development of Specifications for a Low Cost Computer System for Secondary Schools. Final Report.

    ERIC Educational Resources Information Center

    Kleiner, George

    The last few years have seen more and more secondary schools introduce computer concepts and some form of computer resource into their educational program--usually a commercial time-sharing service with a modest initial expenditure--but almost invariably the demand for terminal availability and computer usage suggest the need for alternatives.…

  16. Scaled opposite-spin CC2 for ground and excited states with fourth order scaling computational costs

    NASA Astrophysics Data System (ADS)

    Winter, Nina O. C.; Hättig, Christof

    2011-05-01

    An implementation of scaled opposite-spin CC2 (SOS-CC2) for ground and excited state energies is presented that requires only fourth order scaling computational costs. The SOS-CC2 method yields results with an accuracy comparable to the unscaled method. Furthermore the time-determining fifth order scaling steps in the algorithm can be replaced by only fourth order scaling computational costs using a "resolution of the identity" approximation for the electron repulsion integrals and a Laplace transformation of the orbital energy denominators. This leads to a significant reduction of computational costs especially for large systems. Timings for ground and excited state calculations are shown and the error of the Laplace transformation is investigated. An application to a chlorophyll molecule with 134 atoms results in a speed-up by a factor of five and demonstrates how the new implementation extends the applicability of the method. A SOS variant of the algebraic diagrammatic construction through second order ADC(2), which arises from a simplification of the SOS-CC2 model, is also presented. The SOS-ADC(2) model is a cost-efficient alternative in particular for future extensions to spectral intensities and excited state structure optimizations.
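
    The Laplace transformation referred to here rests on the identity 1/Δ = ∫₀^∞ exp(−Δt) dt, which lets a coupled orbital-energy denominator such as ε_a + ε_b − ε_i − ε_j factorize into separable exponentials over the orbital indices. A toy numerical check of the identity (production codes use optimized few-point minimax quadratures, not the plain trapezoid rule assumed here):

```python
import math

def laplace_inverse(delta, t_max=40.0, num_points=2000):
    """Approximate 1/delta (delta > 0) through the Laplace identity
    1/delta = integral_0^inf exp(-delta * t) dt, truncated at t_max and
    evaluated with the composite trapezoid rule."""
    h = t_max / num_points
    total = 0.5 * (1.0 + math.exp(-delta * t_max))  # endpoint terms
    for k in range(1, num_points):
        total += math.exp(-delta * k * h)
    return h * total
```

    With a handful of quadrature points t_k and weights w_k, the four-index denominator becomes a short sum of products of two-index exponential factors, which is what allows the fifth-order step to be replaced by fourth-order scaling work.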

  17. Low-cost computing and network communication for a point-of-care device to perform a 3-part leukocyte differential

    NASA Astrophysics Data System (ADS)

    Powless, Amy J.; Feekin, Lauren E.; Hutcheson, Joshua A.; Alapat, Daisy V.; Muldoon, Timothy J.

    2016-03-01

    Point-of-care approaches for 3-part leukocyte differentials (granulocyte, monocyte, and lymphocyte), traditionally performed using a hematology analyzer within a panel of tests called a complete blood count (CBC), are essential not only to reduce cost but to provide faster results in low resource areas. Recent developments in lab-on-a-chip devices have shown promise in reducing the size and reagents used, relating to a decrease in overall cost. Furthermore, smartphone diagnostic approaches have shown much promise in the area of point-of-care diagnostics, but the relatively high per-unit cost may limit their utility in some settings. We present here a method to reduce the computing cost of a simple epi-fluorescence imaging system using a Raspberry Pi (single-board computer, <$40) to perform a 3-part leukocyte differential comparable to results from a hematology analyzer. This system uses a USB color camera in conjunction with a leukocyte-selective vital dye (acridine orange) in order to determine a leukocyte count and differential from a low volume (<20 microliters) of whole blood obtained via fingerstick. Additionally, the system utilizes a "cloud-based" approach to send image data from the Raspberry Pi to a main server and return results back to the user, exporting the bulk of the computational requirements. Six images were acquired per minute with up to 200 cells per field of view. Preliminary results showed that the differential count varied significantly in monocytes with a 1 minute time difference, indicating the importance of time-gating to produce an accurate/consistent differential.

  18. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In this example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
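
    The selection process described above can be sketched as a hard-constraint filter followed by a weighted-sum ranking over normalized cost criteria. This is an illustrative reduction only; the report's actual criteria, scales, and availability/integrity model are assumptions here.

```python
def select_architecture(alternatives, weights, min_availability):
    """Pick the best alternative by weighted value, after filtering out
    candidates that violate the hard minimum-availability constraint.
    Every criterion in `weights` is a cost (power, weight, dollars), so
    each is normalized by its maximum over the feasible set and negated:
    the least-cost weighted mix wins."""
    feasible = [a for a in alternatives if a["availability"] >= min_availability]
    if not feasible:
        raise ValueError("no alternative satisfies the availability constraint")

    def value(alt):
        total = 0.0
        for criterion, w in weights.items():
            worst = max(a[criterion] for a in feasible)
            total -= w * (alt[criterion] / worst if worst else 0.0)
        return total

    return max(feasible, key=value)
```

    Iterating this evaluation with different weightings, as the report describes, can reveal whether one alternative dominates regardless of the relative importance assigned to power, weight, and cost.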

  19. Additive manufacturing of liquid/gas diffusion layers for low-cost and high-efficiency hydrogen production

    DOE PAGES

    Mo, Jingke; Zhang, Feng -Yuan; Dehoff, Ryan R.; Peter, William H.; Toops, Todd J.; Green, Jr., Johney Boyd

    2016-01-14

    The electron beam melting (EBM) additive manufacturing technology was used to fabricate titanium liquid/gas diffusion media with high corrosion resistance and well-controllable multifunctional parameters, including two-phase transport and excellent electrical/thermal conductivities. Their applications in proton exchange membrane electrolyzer cells have been explored in-situ in a cell and characterized ex-situ with SEM and XRD. Compared with conventional woven liquid/gas diffusion layers (LGDLs), much better performance is obtained with EBM-fabricated LGDLs due to their significant reduction of ohmic loss. The EBM technology exhibited several distinct advantages in fabricating gas diffusion layers: well-controllable pore morphology and structure, rapid prototyping, fast manufacturing, high customizability, and economy. In addition, by taking advantage of additive manufacturing, it is possible to fabricate complicated three-dimensional designs of virtually any shape from a digital model into one single solid object faster, cheaper, and easier, especially for titanium. More importantly, this development will provide LGDLs with control of pore size, pore shape, and pore distribution, and therefore porosity and permeability, which will be very valuable for developing models and validating simulations of electrolyzers with optimal and repeatable performance. Further, it will lead to a manufacturing solution that greatly simplifies PEMEC/fuel cell components and couples the LGDLs with other parts, since they can be easily integrated with this advanced manufacturing process.

  20. Low-Cost Magnetic Stirrer from Recycled Computer Parts with Optional Hot Plate

    ERIC Educational Resources Information Center

    Guidote, Armando M., Jr.; Pacot, Giselle Mae M.; Cabacungan, Paul M.

    2015-01-01

    Magnetic stirrers and hot plates are key components of science laboratories. However, these are not readily available in many developing countries due to their high cost. This article describes the design of a low-cost magnetic stirrer with hot plate from recycled materials. Some of the materials used are neodymium magnets and CPU fans from…

  1. Usability of a Low-Cost Head Tracking Computer Access Method following Stroke.

    PubMed

    Mah, Jasmine; Jutai, Jeffrey W; Finestone, Hillel; Mckee, Hilary; Carter, Melanie

    2015-01-01

    Assistive technology devices for computer access can facilitate social reintegration and promote independence for people who have had a stroke. This work describes the exploration of the usefulness and acceptability of a new computer access device called the Nouse™ (Nose-as-mouse). The device uses a standard webcam and video recognition algorithms to map the movement of the user's nose to a computer cursor, thereby allowing hands-free computer operation. Ten participants receiving in- or outpatient stroke rehabilitation completed a series of standardized and everyday computer tasks using the Nouse™ and then completed a device usability questionnaire. Task completion rates for the computer activities were high (90%), though only in the absence of time constraints. Most of the participants were satisfied with ease of use (70%) and liked using the Nouse™ (60%), indicating they could resume most of their usual computer activities apart from word processing using the device. The findings suggest that hands-free computer access devices like the Nouse™ may be an option for people who experience upper motor impairment caused by stroke and are highly motivated to resume personal computing. More research is necessary to further evaluate the effectiveness of this technology, especially in relation to other computer access assistive technology devices. PMID:26427744
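    The webcam-to-cursor mapping described above can be sketched as a simple coordinate transform. The function below is a hypothetical illustration (the Nouse™'s actual algorithm is not given in the abstract): it scales a detected nose position in the camera frame to screen coordinates, mirroring horizontally so that head motion and cursor motion agree.

```python
def nose_to_cursor(nose_xy, frame_size, screen_size):
    """Map a nose position in the webcam frame to screen coordinates.

    Hypothetical mapping for illustration only; the real device also
    applies tracking, smoothing, and click detection.
    """
    nx, ny = nose_xy
    fw, fh = frame_size
    sw, sh = screen_size
    # Mirror horizontally so moving the head right moves the cursor right.
    return int((1 - nx / fw) * sw), int(ny / fh * sh)

# Nose detected at (160, 120) in a 640x480 frame, 1920x1080 screen.
pos = nose_to_cursor((160, 120), (640, 480), (1920, 1080))
```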

  2. The application of cloud computing to scientific workflows: a study of cost and performance.

    PubMed

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.
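    The trade-off the authors evaluate, namely which applications run cheaply on the cloud, can be caricatured with a toy on-demand cost model; all rates below are placeholders, not any provider's actual pricing. Compute-heavy, data-light workflows fare better than data-heavy ones at the same CPU usage because egress charges dominate.

```python
def cloud_cost(cpu_hours, rate_per_hour, data_gb, egress_per_gb):
    """Rough on-demand cost: compute charges plus data-transfer charges.

    Illustrative model only; real cloud pricing also includes storage,
    request counts, and instance-type differences.
    """
    return cpu_hours * rate_per_hour + data_gb * egress_per_gb

# Same CPU usage, very different data volumes (placeholder rates).
compute_heavy = cloud_cost(1000, 0.10, 10, 0.09)
data_heavy    = cloud_cost(1000, 0.10, 5000, 0.09)
```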

  3. Linking process, structure, property, and performance for metal-based additive manufacturing: computational approaches with experimental support

    NASA Astrophysics Data System (ADS)

    Smith, Jacob; Xiong, Wei; Yan, Wentao; Lin, Stephen; Cheng, Puikei; Kafka, Orion L.; Wagner, Gregory J.; Cao, Jian; Liu, Wing Kam

    2016-04-01

    Additive manufacturing (AM) methods for rapid prototyping of 3D materials (3D printing) have become increasingly popular with a particular recent emphasis on those methods used for metallic materials. These processes typically involve an accumulation of cyclic phase changes. The widespread interest in these methods is largely stimulated by their unique ability to create components of considerable complexity. However, modeling such processes is exceedingly difficult due to the highly localized and drastic material evolution that often occurs over the course of the manufacture time of each component. Final product characterization and validation are currently driven primarily by experimental means as a result of the lack of robust modeling procedures. In the present work, the authors discuss primary detrimental hurdles that have plagued effective modeling of AM methods for metallic materials while also providing logical speculation into preferable research directions for overcoming these hurdles. The primary focus of this work encompasses the specific areas of high-performance computing, multiscale modeling, materials characterization, process modeling, experimentation, and validation for final product performance of additively manufactured metallic components.

  4. Reduction of computer usage costs in predicting unsteady aerodynamic loadings caused by control surface motions: Computer program description

    NASA Technical Reports Server (NTRS)

    Petrarca, J. R.; Harrison, B. A.; Redman, M. C.; Rowe, W. S.

    1979-01-01

    A digital computer program was developed to calculate unsteady loadings caused by motions of lifting surfaces with leading edge and trailing edge controls based on the subsonic kernel function approach. The pressure singularities at hinge line and side edges were extracted analytically as a preliminary step to solving the integral equation of collocation. The program calculates generalized aerodynamic forces for user supplied deflection modes. Optional intermediate output includes pressure at an array of points, and sectional generalized forces. From one to six controls on the half span can be accommodated.

  5. Neural Correlates of Task Cost for Stance Control with an Additional Motor Task: Phase-Locked Electroencephalogram Responses

    PubMed Central

    Hwang, Ing-Shiou; Huang, Cheng-Ya

    2016-01-01

    With appropriate reallocation of central resources, the ability to maintain an erect posture is not necessarily degraded by a concurrent motor task. This study investigated the neural control of a particular postural-suprapostural procedure involving brain mechanisms to solve crosstalk between posture and motor subtasks. Participants completed a single posture task and a dual-task while concurrently conducting force-matching and maintaining a tilted stabilometer stance at a target angle. Stabilometer movements and event-related potentials (ERPs) were recorded. The added force-matching task increased the irregularity of the postural response rather than the size of the postural response prior to force-matching. In addition, the added force-matching task during stabilometer stance led to marked topographic ERP modulation, with greater P2 positivity in the frontal and sensorimotor-parietal areas of the N1-P2 transitional phase and in the sensorimotor-parietal area of the late P2 phase. The time-frequency distribution of the ERP primary principal component revealed that the dual-task condition manifested more pronounced delta (1–4 Hz) and beta (13–35 Hz) synchronizations but suppressed theta activity (4–8 Hz) before force-matching. The dual-task condition also manifested coherent fronto-parietal delta activity in the P2 period. In addition to a decrease in postural regularity, this study reveals spatio-temporal and temporal-spectral reorganizations of ERPs in the fronto-sensorimotor-parietal network due to the added suprapostural motor task. For this particular postural-suprapostural task set, the behavioral and neural data suggest a facilitatory role of the autonomous postural response and an expansion of central resources, with increasing interregional interactions, for task-shifting and planning of the suprapostural motor task. PMID:27010634

  6. Can low-cost VOR and Omega receivers suffice for RNAV - A new computer-based navigation technique

    NASA Technical Reports Server (NTRS)

    Hollaar, L. A.

    1978-01-01

    It is shown that although RNAV is particularly valuable for the personal transportation segment of general aviation, it has not gained complete acceptance. This is due, in part, to its high cost and the special handling it requires from air traffic control. VOR/DME RNAV calculations are ideally suited for analog computers, and the use of microprocessor technology has been suggested for reducing RNAV costs. Three navigation systems, VOR, Omega, and DR, are compared with respect to common navigational difficulties, such as station geometry, siting errors, ground disturbances, and terminal area coverage. The Kalman filtering technique is described, with reference to its disadvantages in a system built on standard microprocessors. An integrated navigation system, using input data from various low-cost sensor systems, is presented and current simulation studies are noted.
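    The Kalman filtering mentioned above can be illustrated with a minimal scalar filter. This is a textbook sketch under simplified assumptions (constant state, scalar measurement), not the integrated navigation system's actual implementation:

```python
def kalman_step(x_est, p_est, z, q, r):
    """One predict/update cycle of a scalar Kalman filter.

    x_est, p_est: prior state estimate and its variance
    z:            new noisy measurement
    q, r:         process and measurement noise variances
    """
    p_pred = p_est + q                 # predict: uncertainty grows by q
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_est + k * (z - x_est)    # blend prediction and measurement
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Fuse a stream of noisy position fixes around a true value of 10.0,
# initialising the estimate from the first fix.
fixes = [10.3, 9.8, 10.1, 9.9, 10.2]
x, p = fixes[0], 0.5
for z in fixes[1:]:
    x, p = kalman_step(x, p, z, q=0.01, r=0.5)
```

    With each update the estimate converges toward the true value while its variance shrinks, which is why the technique suits fusing several low-cost sensors.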

  7. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  8. Evaluating Computer-Assisted Career Guidance Systems: A Critique of the Differential Feature-Cost Approach.

    ERIC Educational Resources Information Center

    Oliver, Laurel W.

    1990-01-01

    Finds the feature-cost analysis method (Sampson et al., CE 521 972) a useful tool, but suggests that users need to determine which criteria are most important to them on the basis of a needs assessment. (SK)

  9. Model implementation for dynamic computation of system cost for advanced life support

    NASA Technical Reports Server (NTRS)

    Levri, J. A.; Vaccari, D. A.

    2004-01-01

    Life support system designs for long-duration space missions have a multitude of requirements drivers, such as mission objectives, political considerations, cost, crew wellness, inherent mission attributes, as well as many other influences. Evaluation of requirements satisfaction can be difficult, particularly at an early stage of mission design. Because launch cost is a critical factor and relatively easy to quantify, it is a point of focus in early mission design. The method used to determine launch cost influences the accuracy of the estimate. This paper discusses the appropriateness of dynamic mission simulation in estimating the launch cost of a life support system. This paper also provides an abbreviated example of a dynamic simulation life support model and possible ways in which such a model might be utilized for design improvement.

  10. Thoracoabdominal Computed Tomography in Trauma Patients: A Cost-Consequences Analysis

    PubMed Central

    van Vugt, Raoul; Kool, Digna R.; Brink, Monique; Dekker, Helena M.; Deunk, Jaap; Edwards, Michael J.

    2014-01-01

    Background: CT is increasingly used during the initial evaluation of blunt trauma patients. In this era of increasing cost-awareness, the pros and cons of CT have to be assessed. Objectives: This study was performed to evaluate cost-consequences of different diagnostic algorithms that use thoracoabdominal CT in primary evaluation of adult patients with high-energy blunt trauma. Materials and Methods: We compared three different algorithms in which CT was applied as an immediate diagnostic tool (rush CT), a diagnostic tool after limited conventional work-up (routine CT), and a selective tool (selective CT). Probabilities of detecting and missing clinically relevant injuries were retrospectively derived. We collected data on radiation exposure and performed a micro-cost analysis on a reference case-based approach. Results: Both rush and routine CT detected all thoracoabdominal injuries in 99.1% of the patients during primary evaluation (n = 1040). Selective CT missed one or more diagnoses in 11% of the patients, for which a change of treatment was necessary in 4.8%. The rush CT algorithm cost € 2676 (US$ 3660) per patient with a mean radiation dose of 26.40 mSv per patient. Routine CT cost € 2815 (US$ 3850) and resulted in the same radiation exposure. Selective CT resulted in a lower radiation dose (23.23 mSv) and cost € 2771 (US$ 3790). Conclusions: Rush CT seems to result in the lowest costs and is comparable in terms of radiation dose exposure and diagnostic certainty with routine CT after a limited conventional work-up. However, selective CT results in less radiation dose exposure but a slightly higher cost and less certainty. PMID:25337521
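    Using the per-patient figures quoted in the abstract, the cost and dose comparison reduces to a small lookup:

```python
# Per-patient cost (EUR) and mean radiation dose (mSv) for the three
# CT algorithms, as reported in the abstract.
algorithms = {
    "rush CT":      {"cost_eur": 2676, "dose_mSv": 26.40},
    "routine CT":   {"cost_eur": 2815, "dose_mSv": 26.40},
    "selective CT": {"cost_eur": 2771, "dose_mSv": 23.23},
}

cheapest = min(algorithms, key=lambda a: algorithms[a]["cost_eur"])
lowest_dose = min(algorithms, key=lambda a: algorithms[a]["dose_mSv"])
```

    The lookup reproduces the paper's conclusion: rush CT minimises cost while selective CT minimises dose, so the choice is a genuine trade-off rather than a dominance relation.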

  11. Computer Literacy Act of 1984. Report together with Minority Views [and] Computer Literacy Act of 1983. Report together with Additional Views. To accompany H.R. 3750.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Committee on Education and Labor.

    These two reports contain supporting material intended to accompany the Computer Literacy Acts of 1983 and 1984 (H. R. 3750). This bill was designed to promote the use of computer technologies in elementary and secondary schools by authorizing: (1) grants to local school districts, particularly in poor areas, to purchase computer hardware; (2)…

  12. Rural road characteristics and vehicle operating costs in developing countries. [Computer Aided Rural Transport Analysis (C. A. R. T. A. )

    SciTech Connect

    Crossley, C.P.

    1981-12-01

    The primary phase of transportation at the smallholder level, from village to local market, is a particularly important aspect of increasing agricultural production in developing countries. The realistic prediction of vehicle operating costs on the (largely) unsurfaced roads in this sector is a useful input to development planning, and a computer program has been developed to produce such predictions from first principles. When compared with survey results obtained by the Transport and Road Research Laboratory in Kenya, it is found that correlation is satisfactorily close. The program can also be used to predict the effects, on the operating costs of various vehicles, of changing road characteristics (gradient, curvature, roughness, rolling resistance and traction). It is found that rolling resistance and road roughness are the factors most likely to influence operating costs, due to their effects on vehicle speeds, fuel consumption and service/repair costs. Small, cheap machines are not necessarily superior to larger vehicles in terms of costs per ton kilometer and fuel, particularly where the available load is sufficient to allow the larger vehicle to be utilized reasonably fully. 11 refs.
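    As a rough illustration of how road characteristics feed into operating costs, the sketch below computes fuel use per kilometre from rolling resistance and gradient. The model and every coefficient in it are illustrative simplifications, not the C.A.R.T.A. program's actual equations:

```python
def fuel_per_km(mass_kg, rolling_coeff, gradient, fuel_l_per_mj=0.08):
    """Very simplified fuel use (litres/km) from road characteristics.

    Resistive force = weight * (rolling coefficient + gradient);
    energy per km is that force times 1000 m. All numbers are
    hypothetical placeholders for illustration.
    """
    g = 9.81
    force_n = mass_kg * g * (rolling_coeff + gradient)   # newtons
    energy_mj_per_km = force_n * 1000 / 1e6              # MJ per km
    return energy_mj_per_km * fuel_l_per_mj

# Same 3-tonne vehicle on a smooth versus a rough (high rolling
# resistance) road, both at a 2% gradient.
smooth = fuel_per_km(3000, rolling_coeff=0.01, gradient=0.02)
rough = fuel_per_km(3000, rolling_coeff=0.04, gradient=0.02)
```

    Even this toy model reproduces the qualitative finding above: raising rolling resistance directly raises per-kilometre fuel use.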

  13. Non-additive benefit or cost? Disentangling the indirect effects that occur when plants bearing extrafloral nectaries and honeydew-producing insects share exotic ant mutualists

    PubMed Central

    Savage, Amy M.; Rudgers, Jennifer A.

    2013-01-01

    Background and Aims In complex communities, organisms often form mutualisms with multiple different partners simultaneously. Non-additive effects may emerge among species linked by these positive interactions. Ants commonly participate in mutualisms with both honeydew-producing insects (HPI) and their extrafloral nectary (EFN)-bearing host plants. Consequently, HPI and EFN-bearing plants may experience non-additive benefits or costs when these groups co-occur. The outcomes of these interactions are likely to be influenced by variation in preferences among ants for honeydew vs. nectar. In this study, a test was made for non-additive effects on HPI and EFN-bearing plants resulting from sharing exotic ant guards. Preferences of the dominant exotic ant species for nectar vs. honeydew resources were also examined. Methods Ant access, HPI and nectar availability were manipulated on the EFN-bearing shrub, Morinda citrifolia, and ant and HPI abundances, herbivory and plant growth were assessed. Ant-tending behaviours toward HPI across an experimental gradient of nectar availability were also tracked in order to investigate mechanisms underlying ant responses. Key Results The dominant ant species, Anoplolepis gracilipes, differed from less invasive ants in response to multiple mutualists, with reductions in plot-wide abundances when nectar was reduced, but no response to HPI reduction. Conversely, at sites where A. gracilipes was absent or rare, abundances of less invasive ants increased when nectar was reduced, but declined when HPI were reduced. Non-additive benefits were found at sites dominated by A. gracilipes, but only for M. citrifolia plants. Responses of HPI at these sites supported predictions of the non-additive cost model. Interestingly, the opposite non-additive patterns emerged at sites dominated by other ants. Conclusions It was demonstrated that strong non-additive benefits and costs can both occur when a plant and herbivore share mutualist partners. 

  14. Computer simulation of energy use, greenhouse gas emissions, and costs for alternative methods of processing fluid milk.

    PubMed

    Tomasula, P M; Datta, N; Yee, W C F; McAloon, A J; Nutter, D W; Sampedro, F; Bonnaillie, L M

    2014-07-01

    Computer simulation is a useful tool for benchmarking electrical and fuel energy consumption and water use in a fluid milk plant. In this study, a computer simulation model of the fluid milk process based on high temperature, short time (HTST) pasteurization was extended to include models for processes for shelf-stable milk and extended shelf-life milk that may help prevent the loss or waste of milk that leads to increases in the greenhouse gas (GHG) emissions for fluid milk. The models were for UHT processing, crossflow microfiltration (MF) without HTST pasteurization, crossflow MF followed by HTST pasteurization (MF/HTST), crossflow MF/HTST with partial homogenization, and pulsed electric field (PEF) processing, and were incorporated into the existing model for the fluid milk process. Simulation trials were conducted assuming a production rate for the plants of 113.6 million liters of milk per year to produce only whole milk (3.25%) and 40% cream. Results showed that GHG emissions in the form of process-related CO₂ emissions, defined as CO₂ equivalents (e)/kg of raw milk processed (RMP), and specific energy consumptions (SEC) for electricity and natural gas use for the HTST process alone were 37.6 g of CO₂e/kg of RMP, 0.14 MJ/kg of RMP, and 0.13 MJ/kg of RMP, respectively. Emissions of CO₂ and SEC for electricity and natural gas use were highest for the PEF process, with values of 99.1 g of CO₂e/kg of RMP, 0.44 MJ/kg of RMP, and 0.10 MJ/kg of RMP, respectively, and lowest for the UHT process at 31.4 g of CO₂e/kg of RMP, 0.10 MJ/kg of RMP, and 0.17 MJ/kg of RMP. Estimated unit production costs associated with the various processes were lowest for the HTST process and MF/HTST with partial homogenization at $0.507/L and highest for the UHT process at $0.60/L. The increase in shelf life associated with the UHT and MF processes may eliminate some of the supply chain product and consumer losses and waste of milk and compensate for the small increases in GHG
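    Tabulating the abstract's figures makes the process comparison explicit; SEC here sums the electricity and natural-gas terms:

```python
# Process-related emissions (g CO2e per kg raw milk processed) and
# specific energy consumption (MJ/kg, electricity + natural gas),
# taken from the abstract.
processes = {
    "HTST": {"ghg": 37.6, "sec": 0.14 + 0.13},
    "PEF":  {"ghg": 99.1, "sec": 0.44 + 0.10},
    "UHT":  {"ghg": 31.4, "sec": 0.10 + 0.17},
}

lowest_ghg = min(processes, key=lambda p: processes[p]["ghg"])
highest_ghg = max(processes, key=lambda p: processes[p]["ghg"])
```

    The ordering matches the text: PEF is the most emission-intensive process per kilogram of raw milk and UHT the least, even though UHT carries the highest unit production cost.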

  16. Low-Cost Computer-Controlled Current Stimulator for the Student Laboratory

    ERIC Educational Resources Information Center

    Guclu, Burak

    2007-01-01

    Electrical stimulation of nerve and muscle tissues is frequently used for teaching core concepts in physiology. It is usually expensive to provide every student group in the laboratory with an individual stimulator. This article presents the design and application of a low-cost [about $100 (U.S.)] isolated stimulator that can be controlled by two…

  17. Grading Multiple Choice Exams with Low-Cost and Portable Computer-Vision Techniques

    ERIC Educational Resources Information Center

    Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández

    2013-01-01

    Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…

  18. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs

    PubMed Central

    Handford, Matthew L.; Srinivasan, Manoj

    2016-01-01

    Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost – even lower than assuming that the non-amputee’s ankle torques are cost-free. PMID:26857747
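    The simultaneous optimization described above minimizes a weighted sum of human and prosthesis costs; sweeping the weight traces out Pareto optimal designs. The sketch below shows the weighted-sum mechanics only; the cost functions and the single design variable are hypothetical stand-ins, not the paper's musculoskeletal or prosthesis models:

```python
def pareto_sweep(human_cost, prosthesis_cost, designs, weights):
    """For each weight w, pick the design minimising
    w * human_cost + (1 - w) * prosthesis_cost.

    The set of winners across weights approximates the Pareto front.
    """
    front = set()
    for w in weights:
        best = min(
            designs,
            key=lambda d: w * human_cost(d) + (1 - w) * prosthesis_cost(d),
        )
        front.add(best)
    return front

# Hypothetical designs: prosthesis peak power (W). More actuator power
# lowers the human's metabolic proxy but raises the device's energy proxy.
designs = [50, 100, 150, 200]
human = lambda p: 300 - 0.8 * p   # illustrative metabolic proxy
device = lambda p: 0.5 * p        # illustrative prosthesis energy proxy
front = pareto_sweep(human, device, designs, [i / 10 for i in range(11)])
```

    Because both proxies are linear here, only the extreme designs appear on the front; with the paper's nonlinear costs, intermediate designs become optimal at intermediate weights.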

  19. Virtual Grower: Estimating Greenhouse Energy Costs and Plant Growth Using New Computer Software

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Greenhouse crop production is a complex, integrated system wherein a change in one component inevitably influences different, sometimes seemingly disparate components. For example, growers may modify their heating schedules to reduce energy costs, but a cooler temperature set-point can delay crop d...

  20. Enhancing simulation of efficiency with analytical tools. [combining computer simulation and analytical techniques for cost reduction

    NASA Technical Reports Server (NTRS)

    Seltzer, S. M.

    1974-01-01

    Some means of combining both computer simulation and analytical techniques are indicated in order to mutually enhance their efficiency as design tools and to motivate those involved in engineering design to consider using such combinations. While the idea is not new, heavy reliance on computers often seems to overshadow the potential utility of analytical tools. Although the example used is drawn from the area of dynamics and control, the principles espoused are applicable to other fields. In the example the parameter plane stability analysis technique is described briefly and extended beyond that reported in the literature to increase its utility (through a simple set of recursive formulas) and its applicability (through the portrayal of the effect of varying the sampling period of the computer). The numerical values that were rapidly selected by analysis were found to be correct for the hybrid computer simulation for which they were needed. This obviated the need for cut-and-try methods to choose the numerical values, thereby saving both time and computer utilization.

  1. Computer analysis of effects of altering jet fuel properties on refinery costs and yields

    NASA Technical Reports Server (NTRS)

    Breton, T.; Dunbar, D.

    1984-01-01

    This study was undertaken to evaluate the adequacy of future U.S. jet fuel supplies, the potential for large increases in the cost of jet fuel, and to what extent a relaxation in jet fuel properties would remedy these potential problems. The results of the study indicate that refiners should be able to meet jet fuel output requirements in all regions of the country within the current Jet A specifications during the 1990-2010 period. The results also indicate that it will be more difficult to meet Jet A specifications on the West Coast, because the feedstock quality is worse and the required jet fuel yield (jet fuel/crude refined) is higher than in the East. The results show that jet fuel production costs could be reduced by relaxing fuel properties. Potential cost savings in the East (PADDs I-IV) through property relaxation were found to be about 1.3 cents/liter (5 cents/gallon) in January 1, 1981 dollars between 1990 and 2010. However, the savings from property relaxation were all obtained within the range of current Jet A specifications, so there is no financial incentive to relax Jet A fuel specifications in the East. In the West (PADD V) the potential cost savings from lowering fuel quality were considerably greater than in the East. Cost savings from 2.7 to 3.7 cents/liter (10-14 cents/gallon) were found. In contrast to the East, on the West Coast a significant part of the savings was obtained through relaxation of the current Jet A fuel specifications.
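    The quoted savings can be cross-checked with a US-gallon-to-litre conversion (3.785 L per US gallon):

```python
LITRES_PER_US_GALLON = 3.785

def cents_per_litre(cents_per_gallon):
    """Convert a fuel cost from cents/US gallon to cents/litre."""
    return cents_per_gallon / LITRES_PER_US_GALLON

# Eastern savings: 5 cents/gallon, quoted as about 1.3 cents/liter.
east = cents_per_litre(5)
# Western savings: 10-14 cents/gallon, quoted as 2.7-3.7 cents/liter.
west = (cents_per_litre(10), cents_per_litre(14))
```

    The conversion confirms the abstract's pairings are internally consistent.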

  2. Low cost SCR lamp driver indicates contents of digital computer registers

    NASA Technical Reports Server (NTRS)

    Cliff, R. A.

    1967-01-01

    Silicon Controlled Rectifier /SCR/ lamp driver is adapted for use in integrated circuit digital computers where it indicates the contents of the various registers. The threshold voltage at which visual indication begins is very sharply defined and can be adjusted to suit particular system requirements.

  3. The Effect of Emphasizing Mathematical Structure in the Acquisition of Whole Number Computation Skills (Addition and Subtraction) By Seven- and Eight-Year Olds: A Clinical Investigation.

    ERIC Educational Resources Information Center

    Uprichard, A. Edward; Collura, Carolyn

    This investigation sought to determine the effect of emphasizing mathematical structure in the acquisition of computational skills by seven- and eight-year-olds. The meaningful development-of-structure approach emphasized closure, commutativity, associativity, and the identity element of addition; the inverse relationship between addition and…

  4. Computer program to assess impact of fatigue and fracture criteria on weight and cost of transport aircraft

    NASA Technical Reports Server (NTRS)

    Tanner, C. J.; Kruse, G. S.; Oman, B. H.

    1975-01-01

    A preliminary design analysis tool for rapidly performing trade-off studies involving fatigue, fracture, static strength, weight, and cost is presented. Analysis subprograms were developed for fatigue life, crack growth life, and residual strength; and linked to a structural synthesis module which in turn was integrated into a computer program. The part definition module of a cost and weight analysis program was expanded to be compatible with the upgraded structural synthesis capability. The resultant vehicle design and evaluation program is named VDEP-2. It is an accurate and useful tool for estimating purposes at the preliminary design stage of airframe development. A sample case along with an explanation of program applications and input preparation is presented.

  5. Low-cost digital image processing on a university mainframe computer. [considerations in selecting and/or designing instructional systems

    NASA Technical Reports Server (NTRS)

    Williams, T. H. L.

    1981-01-01

    The advantages and limitations of using university mainframe computers in digital image processing instruction are listed. Aspects to be considered when designing software for this purpose include not only the intended audience, but also the capabilities of the system regarding the size of the image/subimage; preprocessing and enhancement functions; geometric correction and registration techniques; classification strategy and algorithm; multitemporal analysis; and ancillary data and geographic information systems. The user/software/hardware interaction as well as acquisition and operating costs must also be considered.

  6. Development and implementation of a low cost micro computer system for LANDSAT analysis and geographic data base applications

    NASA Technical Reports Server (NTRS)

    Faust, N.; Jordon, L.

    1981-01-01

    Since the implementation of the GRID and IMGRID computer programs for multivariate spatial analysis in the early 1970s, geographic data analysis has moved from large computers to minicomputers and now to microcomputers, with a radical reduction in the costs associated with planning analyses. Once NIMGRID (new IMGRID), a raster-oriented geographic information system, was implemented on the microcomputer, programs designed to process LANDSAT data for use as one element in a geographic data base were employed. Programs for training-field selection, supervised and unsupervised classification, and image enhancement were added. Enhancements to the color graphics capabilities of the microsystem allow display of three channels of LANDSAT data in color-infrared format. The basic microcomputer hardware needed to run NIMGRID and most LANDSAT analyses is listed, as well as the software available for LANDSAT processing.

  7. Sensitive and cost-effective LC-MS/MS method for quantitation of CVT-6883 in human urine using sodium dodecylbenzenesulfonate additive to eliminate adsorptive losses.

    PubMed

    Chen, Chungwen; Bajpai, Lakshmikant; Mollova, Nevena; Leung, Kwan

    2009-04-01

    CVT-6883, a novel selective A2B adenosine receptor antagonist currently under clinical development, is highly lipophilic and exhibits high affinity for non-specific binding to container surfaces, resulting in very low recovery in urine assays. Our study showed that the use of sodium dodecylbenzenesulfonate (SDBS), a low-cost additive, eliminated non-specific binding problems in the analysis of CVT-6883 in human urine without compromising sensitivity. A new sensitive and selective LC-MS/MS method for quantitation of CVT-6883 in the range of 0.200-80.0 ng/mL using the SDBS additive was therefore developed and validated for the analysis of human urine samples. The recoveries during sample collection, handling, and extraction for the analyte and internal standard (d5-CVT-6883) were higher than 87%. CVT-6883 was found stable under the following conditions: in extract, at ambient temperature for 3 days and under refrigeration (5 °C) for 6 days; in human urine containing 4 mM SDBS, after three freeze/thaw cycles, at ambient temperature for 26 h, under refrigeration (5 °C) for 94 h, and in a freezer set to -20 °C for at least 2 months. The results demonstrated that the validated method is sufficiently sensitive, specific, and cost-effective for the analysis of CVT-6883 in human urine and will provide a powerful tool to support the clinical programs for CVT-6883.

  8. [Validation of a computer program for detection of hospital malnutrition and analysis of hospital costs].

    PubMed

    Fernández Valdivia, Antonia; Rodríguez Rodríguez, José María; Valero Aguilera, Beatriz; Lobo Támer, Gabriela; Pérez de la Cruz, Antonio Jesús; García Larios, José Vicente

    2015-07-01

    Introduction: serum albumin is one of the methods for diagnosing malnutrition, owing to the simplicity and low cost of its determination. Objectives: the main objective is to validate and implement a computer program, based on serum albumin determination, that allows malnourished patients, or those at risk of malnutrition, to be detected and treated early; a further objective is the evaluation of costs by diagnosis-related groups. Methods: the study design is a dynamic, prospective cohort study including hospital discharges from November 2012 through March 2014. The study population comprised patients over 14 years of age admitted to the various services of a medical-surgical hospital of the Complejo Hospitalario Universitario de Granada whose serum albumin levels were below 3.5 g/dL, for a total of 307 patients. Results: of the 307 patients, 141 presented malnutrition (program sensitivity: 45.9%). 54.7% of the patients were men and 45.3% women. The mean age was 65.68 years. The median length of stay was 16 days. 13.4% of the patients died. The mean cost of the DRGs was €5,958.30, and the mean cost after detection of malnutrition was €11,376.48. Conclusions: the algorithm implemented by the computer program identifies almost half of malnourished hospitalized patients. Recording the diagnosis of malnutrition is essential.

  9. Matched filtering of gravitational waves from inspiraling compact binaries: Computational cost and template placement

    NASA Astrophysics Data System (ADS)

    Owen, Benjamin J.; Sathyaprakash, B. S.

    1999-07-01

    We estimate the number of templates, computational power, and storage required for a one-step matched filtering search for gravitational waves from inspiraling compact binaries. Our estimates for the one-step search strategy should serve as benchmarks for the evaluation of more sophisticated strategies such as hierarchical searches. We use a discrete family of two-parameter wave form templates based on the second post-Newtonian approximation for binaries composed of nonspinning compact bodies in circular orbits. We present estimates for all of the large- and mid-scale interferometers now under construction: LIGO (three configurations), VIRGO, GEO600, and TAMA. To search for binaries with components more massive than m_min = 0.2 M_sun while losing no more than 10% of events due to coarseness of template spacing, the initial LIGO interferometers will require about 1.0×10^11 flops (floating point operations per second) for data analysis to keep up with data acquisition. This is several times higher than estimated in previous work by Owen, in part because of the improved family of templates and in part because we use more realistic (higher) sampling rates. Enhanced LIGO, GEO600, and TAMA will require computational power similar to initial LIGO. Advanced LIGO will require 7.8×10^11 flops, and VIRGO will require 4.8×10^12 flops to take full advantage of its broad target noise spectrum. If the templates are stored rather than generated as needed, storage requirements range from 1.5×10^11 real numbers for TAMA to 6.2×10^14 for VIRGO. The computational power required scales roughly as m_min^(-8/3) and the storage as m_min^(-13/3). Since these scalings are perturbed by the curvature of the parameter space at second post-Newtonian order, we also provide estimates for a search with m_min = 1 M_sun. Finally, we sketch and discuss an algorithm for placing the templates in the parameter space.
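
    The power laws quoted above make the cost of lowering the mass cutoff easy to estimate. The sketch below scales the initial-LIGO benchmark (about 1.0×10^11 flops at m_min = 0.2 M_sun) to m_min = 1 M_sun using the rough flops ~ m_min^(-8/3) scaling; it ignores the parameter-space curvature corrections the abstract mentions, so it is illustrative only.

```python
# Illustrative scaling of matched-filter cost with minimum component mass,
# using the rough power law quoted in the abstract: flops ~ m_min^(-8/3).
# Curvature corrections at second post-Newtonian order are ignored here.

def scaled_cost(cost_ref, m_ref, m_new, exponent):
    """Scale a reference cost from m_ref to m_new via a power law."""
    return cost_ref * (m_new / m_ref) ** exponent

# Initial LIGO benchmark from the abstract: ~1.0e11 flops at m_min = 0.2 M_sun.
flops_02 = 1.0e11
flops_10 = scaled_cost(flops_02, 0.2, 1.0, -8.0 / 3.0)
print(f"Estimated flops at m_min = 1 M_sun: {flops_10:.2e}")
```

    Raising the cutoff from 0.2 to 1 M_sun cuts the required computational power by roughly a factor of 70, which is why the authors quote both mass cutoffs.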

  10. Computational evaluation of cellular metabolic costs successfully predicts genes whose expression is deleterious

    PubMed Central

    Wagner, Allon; Zarecki, Raphy; Reshef, Leah; Gochev, Camelia; Sorek, Rotem; Gophna, Uri; Ruppin, Eytan

    2013-01-01

    Gene suppression and overexpression are both fundamental tools in linking genotype to phenotype in model organisms. Computational methods have proven invaluable in studying and predicting the deleterious effects of gene deletions, and yet parallel computational methods for overexpression are still lacking. Here, we present Expression-Dependent Gene Effects (EDGE), an in silico method that can predict the deleterious effects resulting from overexpression of either native or foreign metabolic genes. We first test and validate EDGE’s predictive power in bacteria through a combination of small-scale growth experiments that we performed and analysis of extant large-scale datasets. Second, a broad cross-species analysis, ranging from microorganisms to multiple plant and human tissues, shows that genes that EDGE predicts to be deleterious when overexpressed are indeed typically down-regulated. This reflects a universal selection force keeping the expression of potentially deleterious genes in check. Third, EDGE-based analysis shows that cancer genetic reprogramming specifically suppresses genes whose overexpression impedes proliferation. The magnitude of this suppression is large enough to enable an almost perfect distinction between normal and cancerous tissues based solely on EDGE results. We expect EDGE to advance our understanding of human pathologies associated with up-regulation of particular transcripts and to facilitate the utilization of gene overexpression in metabolic engineering. PMID:24198337

  11. Taming the Electronic Structure of Diradicals through the Window of Computationally Cost Effective Multireference Perturbation Theory.

    PubMed

    Sinha Ray, Suvonil; Ghosh, Anirban; Chattopadhyay, Sudip; Chaudhuri, Rajat K

    2016-07-28

    Recently a state-specific multireference perturbation theory (SSMRPT) with an improved virtual orbitals complete active space configuration interaction (IVO-CASCI) reference function has been proposed for treating the electronic structures of radicals such as methylene, m-benzyne, pyridyne, and the pyridynium cation. This new development in MRPT, termed IVO-SSMRPT, is able to describe the structure of radicaloids with reasonable accuracy even with small reference spaces. IVO-SSMRPT is also capable of predicting the correct ordering of the lowest singlet-triplet gaps. Investigation of the first three electronic states of the oxygen molecule has also been used to assess our method. The agreement of our estimates with the available, far more expensive, state-of-the-art benchmark ab initio calculations is creditable. The IVO-SSMRPT method provides an effective avenue, with a manageable cost/accuracy ratio, for accurately dealing with radicaloid systems possessing varying degrees of quasidegeneracy. PMID:27355260

  12. Avoiding the Enumeration of Infeasible Elementary Flux Modes by Including Transcriptional Regulatory Rules in the Enumeration Process Saves Computational Costs

    PubMed Central

    Jungreuthmayer, Christian; Ruckerbauer, David E.; Gerstl, Matthias P.; Hanscho, Michael; Zanghellini, Jürgen

    2015-01-01

    Despite the significant progress made in recent years, the computation of the complete set of elementary flux modes of large or even genome-scale metabolic networks is still impossible. We introduce a novel approach to speed up the calculation of elementary flux modes by including transcriptional regulatory information in the analysis of metabolic networks. Taking gene regulation into account dramatically reduces the solution space and allows the presented algorithm to continually eliminate biologically infeasible modes at an early stage of the computation procedure. Thereby, computational costs, such as runtime, memory usage, and disk space, are greatly reduced. Moreover, we show that the application of transcriptional rules identifies non-trivial system-wide effects on metabolism. Using the presented algorithm pushes the size of metabolic networks that can be studied by elementary flux modes to new and much higher limits without loss of predictive quality. This makes unbiased, system-wide predictions in large-scale metabolic networks possible without resorting to any optimization principle. PMID:26091045
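
    The core idea of the abstract — discarding modes that violate a transcriptional rule rather than enumerating them — can be illustrated with a toy filter. The reaction names and the mutual-exclusion rule below are hypothetical, not taken from the paper; a real implementation would apply such tests inside the enumeration loop rather than to a precomputed list.

```python
# Toy illustration of pruning biologically infeasible elementary flux
# modes with a transcriptional regulatory rule. Each mode is modeled as
# the set of reactions it uses; a rule stating that two reactions are
# never co-expressed discards any mode containing both.

modes = [
    {"rxn_glycolysis", "rxn_tca"},
    {"rxn_glycolysis", "rxn_fermentation"},
    {"rxn_tca", "rxn_fermentation"},        # violates the rule below
]

def feasible(mode):
    # Hypothetical regulatory rule: respiration (rxn_tca) and
    # fermentation are mutually exclusive in this toy network.
    return not ({"rxn_tca", "rxn_fermentation"} <= mode)

kept = [m for m in modes if feasible(m)]
print(f"{len(kept)} of {len(modes)} modes are regulatory-feasible")
```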

  13. Leveraging Cloud Computing to Improve Storage Durability, Availability, and Cost for MER Maestro

    NASA Technical Reports Server (NTRS)

    Chang, George W.; Powell, Mark W.; Callas, John L.; Torres, Recaredo J.; Shams, Khawaja S.

    2012-01-01

    The Maestro for MER (Mars Exploration Rover) software is the premiere operation and activity planning software for the Mars rovers, and it is required to deliver all of the processed image products to scientists on demand. These data span multiple storage arrays sized at 2 TB, and a backup scheme ensures data are not lost. In a catastrophe, these data would currently be recovered at 20 GB/hour, taking several days for a full restoration. A seamless solution provides access to highly durable, highly available, scalable, and cost-effective storage capabilities. This approach also employs a novel technique that enables storage of the majority of the data on the cloud and some data locally. This feature is used to store the most recent data locally in order to guarantee utmost reliability in case of an outage or disconnect from the Internet. It also obviates any changes to the software that generates the most recent data set, as that software retains the same file-system interface it had before the updates.

  14. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...
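
    The compounded-daily computation described in this excerpt — each trading day's net performance divided by that day's beginning net asset value, with the daily rates compounded over the period — can be sketched as follows. The NAV and net-performance figures are hypothetical, for illustration only.

```python
# Sketch of the compounded-daily rate-of-return method described above:
# each trading day's return is net performance divided by beginning net
# asset value, and the daily returns are geometrically compounded.

def period_rate_of_return(daily_nav, daily_net_perf):
    """Compound daily returns (net performance / beginning NAV) over a period."""
    growth = 1.0
    for begin_nav, net_perf in zip(daily_nav, daily_net_perf):
        growth *= 1.0 + net_perf / begin_nav
    return growth - 1.0

navs  = [1000.0, 1010.0, 1005.0]   # hypothetical beginning NAV each trading day
perfs = [10.0,   -5.0,   20.1]     # hypothetical net performance each trading day

ror = period_rate_of_return(navs, perfs)
print(f"Period rate of return: {ror:.4%}")
```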

  15. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...

  16. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...

  17. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...

  18. 17 CFR Appendix B to Part 4 - Adjustments for Additions and Withdrawals in the Computation of Rate of Return

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Return Method Rate of return for a period may be calculated by computing the net performance divided by the beginning net asset value for each trading day in the period and compounding each daily rate of... commodity pool operator or commodity trading advisor may present to the Commission proposals regarding...

  19. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays.

    PubMed

    Goodman, Corey W; Major, Heather J; Walls, William D; Sheffield, Val C; Casavant, Thomas L; Darbro, Benjamin W

    2015-04-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and to how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high-throughput, low-cost analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher-resolution microarray to confirm calls from a lower-resolution microarray and provides a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis, and receiver operating characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as the log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs: the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and the comparison of CNV profiles between different microarray experiments.
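
    The threshold-calibration step the abstract describes — using per-probe truth labels to pick a log2 ratio cutoff via ROC analysis — can be sketched in miniature. The sketch below sweeps candidate cutoffs and picks the one maximizing Youden's J (TPR minus FPR); the probe ratios and labels are hypothetical, and CNV-ROC itself derives truth from a higher-resolution array rather than from given labels.

```python
# Minimal sketch of ROC-style calibration of a |log2 ratio| threshold,
# in the spirit of CNV-ROC: sweep candidate cutoffs and keep the one
# maximizing Youden's J = TPR - FPR.

def calibrate_threshold(log2_ratios, truth):
    """Return the |log2 ratio| cutoff that best separates true CNV probes."""
    p = sum(truth)              # true CNV probes
    n = len(truth) - p          # normal probes
    best_t, best_j = None, -1.0
    for t in sorted({abs(r) for r in log2_ratios}):
        tp = sum(1 for r, y in zip(log2_ratios, truth) if abs(r) >= t and y)
        fp = sum(1 for r, y in zip(log2_ratios, truth) if abs(r) >= t and not y)
        j = tp / p - fp / n
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical probes: ratios near 0 are normal; |ratio| >= ~0.4 are CNVs.
ratios = [0.05, -0.10, 0.58, -0.45, 0.02, 0.80, -0.03, -0.60]
truth  = [False, False, True, True, False, True, False, True]

t, j = calibrate_threshold(ratios, truth)
print(f"calibrated |log2 ratio| cutoff = {t}, Youden J = {j}")
```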

  20. CNV-ROC: A cost effective, computer-aided analytical performance evaluator of chromosomal microarrays

    PubMed Central

    Goodman, Corey W.; Major, Heather J.; Walls, William D.; Sheffield, Val C.; Casavant, Thomas L.; Darbro, Benjamin W.

    2016-01-01

    Chromosomal microarrays (CMAs) are routinely used in both research and clinical laboratories; yet, little attention has been given to the estimation of genome-wide true and false negatives during the assessment of these assays and to how such information could be used to calibrate various algorithmic metrics to improve performance. Low-throughput, locus-specific methods such as fluorescence in situ hybridization (FISH), quantitative PCR (qPCR), or multiplex ligation-dependent probe amplification (MLPA) preclude rigorous calibration of various metrics used by copy number variant (CNV) detection algorithms. To aid this task, we have established a comparative methodology, CNV-ROC, which is capable of performing a high-throughput, low-cost analysis of CMAs that takes into consideration genome-wide true and false negatives. CNV-ROC uses a higher-resolution microarray to confirm calls from a lower-resolution microarray and provides a true measure of genome-wide performance metrics at the resolution offered by microarray testing. CNV-ROC also provides a very precise comparison of CNV calls between two microarray platforms without the need to establish an arbitrary degree of overlap. Comparison of CNVs across microarrays is done on a per-probe basis, and receiver operating characteristic (ROC) analysis is used to calibrate algorithmic metrics, such as the log2 ratio threshold, to enhance CNV calling performance. CNV-ROC addresses a critical and consistently overlooked aspect of analytical assessments of genome-wide techniques like CMAs: the measurement and use of genome-wide true and false negative data for the calculation of performance metrics and the comparison of CNV profiles between different microarray experiments. PMID:25595567

  1. Modeling the economic costs of disasters and recovery: analysis using a dynamic computable general equilibrium model

    NASA Astrophysics Data System (ADS)

    Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.

    2014-04-01

    Disaster damages have negative effects on the economy, whereas reconstruction investment has positive effects. The aim of this study is to model the economic costs of disasters and recovery, including the positive effects of reconstruction activities. The computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and, furthermore, avoid the double-counting problem. In order to factor both shocks into the CGE model, direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investments restores the capital stock in a given period; an investment-driven dynamic model is formulated according to available reconstruction data; and the rest of a given country's saving is set as an endogenous variable to balance the fixed investment. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. Output from S1 was found to be closer to real data than that from S2. Economic loss under S2 is roughly 1.5 times that under S1. The gap in the economic aggregate between S1 and S0 is reduced to 3% at the end of government-led reconstruction activity, a level that should take another four years to achieve under S2.
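
    The scenario comparison above reduces to simple arithmetic once the three output paths are in hand: losses are the cumulative shortfalls of S1 and S2 relative to the S0 baseline. The yearly output figures below are hypothetical (not the Wenchuan data), chosen so the loss ratio echoes the paper's roughly 1.5× finding.

```python
# Sketch of how the three-scenario CGE comparison quantifies losses:
# S0 is business-as-usual output; economic loss under S1 (with
# reconstruction) and S2 (without) is each path's cumulative shortfall
# relative to S0. All figures are hypothetical.

s0 = [100, 104, 108, 112]   # baseline output path
s1 = [100,  98, 104, 110]   # disaster + reconstruction investment
s2 = [100,  97, 102, 107]   # disaster, no reconstruction

loss_s1 = sum(base - out for base, out in zip(s0, s1))
loss_s2 = sum(base - out for base, out in zip(s0, s2))
print(f"loss with reconstruction: {loss_s1}, without: {loss_s2}, "
      f"ratio: {loss_s2 / loss_s1:.1f}")
```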

  2. Cost-Benefit Analysis for ECIA Chapter 1 and State DPPF Programs Comparing Groups Receiving Regular Program Instruction and Groups Receiving Computer Assisted Instruction/Computer Management System (CAI/CMS). 1986-87.

    ERIC Educational Resources Information Center

    Chamberlain, Ed

    A cost benefit study was conducted to determine the effectiveness of a computer assisted instruction/computer management system (CAI/CMS) as an alternative to conventional methods of teaching reading within Chapter 1 and DPPF funded programs of the Columbus (Ohio) Public Schools. The Chapter 1 funded Compensatory Language Experiences and Reading…

  3. Cost and Resource Utilization Associated with Use of Computed Tomography to Evaluate Chest Pain in the Emergency Department: The ROMICAT Study

    PubMed Central

    Hulten, Edward; Goehler, Alexander; Bittencourt, Marcio; Bamberg, Fabian; Schlett, Christopher L.; Truong, Quynh A.; Nichols, John; Nasir, Khurram; Rogers, Ian S.; Gazelle, Scott G.; Nagurney, John T.; Hoffmann, Udo; Blankstein, Ron

    2013-01-01

    Background Coronary computed tomography angiography (cCTA) allows rapid non-invasive exclusion of obstructive coronary artery disease (CAD). However, concern exists as to whether implementation of cCTA in the assessment of patients presenting to the emergency room with acute chest pain will lead to increased downstream testing and costs compared to alternative strategies. Our aim was to compare observed actual costs of usual care (UC) with projected costs of a strategy including early cCTA in the evaluation of patients with acute chest pain in the Rule Out Myocardial Infarction Using Computed Tomography (ROMICAT I) study. Methods and Results We compared the cost and hospital length of stay of UC observed among 368 patients enrolled in the ROMICAT I trial with the projected costs of management based on cCTA. Costs of UC were determined by an electronic cost accounting system. Notably, UC was not influenced by cCTA results, as patients and caregivers were blinded to them. Costs after early implementation of cCTA were estimated assuming changes in management based on cCTA findings of the presence and severity of CAD. Sensitivity analysis was used to test the influence of key variables on both outcomes and costs. We determined that, in comparison to UC, cCTA-guided triage, whereby patients with no CAD are discharged, could reduce total hospital costs by 23% (p < 0.001). However, when the prevalence of obstructive CAD increases, index hospitalization cost increases, such that when the prevalence of ≥50% stenosis is greater than 28-33%, the use of cCTA becomes more costly than UC. Conclusion cCTA may be a cost-saving tool in acute chest pain populations with a prevalence of potentially obstructive CAD lower than 30%. However, increased cost would be anticipated in populations with a higher prevalence of disease. PMID:24021693

  4. Towards a Low-Cost Real-Time Photogrammetric Landslide Monitoring System Utilising Mobile and Cloud Computing Technology

    NASA Astrophysics Data System (ADS)

    Chidburee, P.; Mills, J. P.; Miller, P. E.; Fieber, K. D.

    2016-06-01

    Close-range photogrammetric techniques offer a potentially low-cost approach in terms of implementation and operation for initial assessment and monitoring of landslide processes over small areas. In particular, the Structure-from-Motion (SfM) pipeline is now extensively used to help overcome many constraints of traditional digital photogrammetry, offering increased user-friendliness to nonexperts, as well as lower costs. However, a landslide monitoring approach based on the SfM technique also presents some potential drawbacks due to the difficulty in managing and processing a large volume of data in real-time. This research addresses the aforementioned issues by attempting to combine a mobile device with cloud computing technology to develop a photogrammetric measurement solution as part of a monitoring system for landslide hazard analysis. The research presented here focusses on (i) the development of an Android mobile application; (ii) the implementation of SfM-based open-source software in the Amazon cloud computing web service; and (iii) performance assessment through a simulated environment using data collected at a recognized landslide test site in North Yorkshire, UK. Whilst the landslide monitoring mobile application is under development, this paper describes experiments carried out to ensure effective performance of the system in the future. Investigations presented here describe the initial assessment of a cloud-implemented approach, which is developed around the well-known VisualSFM algorithm. Results are compared to point clouds obtained from alternative SfM 3D reconstruction approaches considering a commercial software solution (Agisoft PhotoScan) and a web-based system (Autodesk 123D Catch). Investigations demonstrate that the cloud-based photogrammetric measurement system is capable of providing results of centimeter-level accuracy, evidencing its potential to provide an effective approach for quantifying and analyzing landslide hazard at a local scale.

  5. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition.

    PubMed

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work presents an important implication for future work: that a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953

  7. A low-cost computer-controlled Arduino-based educational laboratory system for teaching the fundamentals of photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Zachariadou, K.; Yiasemides, K.; Trougkakos, N.

    2012-11-01

    We present a low-cost, fully computer-controlled, Arduino-based educational laboratory (SolarInsight) to be used in undergraduate university courses concerned with electrical engineering and physics. The major goal of the system is to provide students with the necessary instrumentation, software tools, and methodology to learn fundamental concepts of semiconductor physics by exploring the process of an experimental physics inquiry. The system runs under the Windows operating system and is composed of a data acquisition/control board, power supply and processing boards, sensing elements, a graphical user interface, and data analysis software. The data acquisition/control board is based on the Arduino open-source electronics prototyping platform. The graphical user interface and the communication with the Arduino are developed in the C# and C++ programming languages, respectively, using the Microsoft Visual Studio 2010 Professional IDE, which is freely available to students. Finally, the data analysis is performed using the open-source, object-oriented framework ROOT. Currently the system supports five teaching activities, each corresponding to an independent tab in the user interface. SolarInsight has been partially developed in the context of a diploma thesis conducted within the Technological Educational Institute of Piraeus under the co-supervision of academic staff of the Physics and Electronic Computer Systems departments.

  8. Reduced computational cost, totally symmetric angular quadrature sets for discrete ordinates radiation transport. Master's thesis

    SciTech Connect

    Oder, J.M.

    1997-12-01

    Several new quadrature sets for use in the discrete ordinates method of solving the Boltzmann neutral particle transport equation are derived. These symmetric quadratures extend the traditional symmetric quadratures by allowing ordinates perpendicular to one or two of the coordinate axes. Comparable accuracy with fewer required ordinates is obtained. Quadratures up to seventh order are presented. The validity and efficiency of the quadratures are then tested and compared with the Sn level-symmetric quadratures relative to a Monte Carlo benchmark solution. The criteria for comparison include current through the surface, scalar flux at the surface, volume-averaged scalar flux, and time required for convergence. Appreciable computational cost was saved when the new quadratures were used in an unstructured tetrahedral-cell code with highly accurate characteristic methods. However, no appreciable savings in computation time were found with the new quadratures compared with traditional Sn methods on a regular Cartesian mesh using the standard diamond-difference method. These quadratures are recommended for use in three-dimensional calculations on an unstructured mesh.
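As an illustrative aside (not from the thesis, whose ordinate sets are not reproduced in the abstract), a symmetric quadrature of this kind must satisfy moment conditions on the unit sphere; the simple S2 level-symmetric set can be checked in a few lines:

```python
import itertools
import math

# S2 level-symmetric set: one ordinate per octant at mu = eta = xi = 1/sqrt(3),
# each carrying weight 1/8 (weights normalised to sum to 1 over the sphere).
mu = 1.0 / math.sqrt(3.0)
ordinates = [(sx * mu, sy * mu, sz * mu)
             for sx, sy, sz in itertools.product((-1, 1), repeat=3)]
weights = [1.0 / 8.0] * len(ordinates)

# Zeroth moment: the weights must integrate a constant exactly.
total = sum(weights)

# Second moment: the normalised integral of mu^2 over the sphere is 1/3.
second = sum(w * om[0] ** 2 for w, om in zip(weights, ordinates))
print(total, second)
```

The extended sets in the thesis add ordinates on the coordinate planes while preserving exactly these kinds of moment conditions.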

  9. A feasibility study on direct methanol fuel cells for laptop computers based on a cost comparison with lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Wee, Jung-Ho

    This paper compares the total cost of direct methanol fuel cell (DMFC) and lithium (Li)-ion battery systems when applied as the power supply for laptop computers in the Korean environment. The average power output and operational time of the laptop computers were assumed to be 20 W and 3000 h, respectively. Considering the status of their technologies and with certain conditions assumed, the total costs were calculated to be US$140 for the Li-ion battery and US$362 for the DMFC. The manufacturing costs of the DMFC and Li-ion battery systems were calculated to be US$16.65 W⁻¹ and US$0.77 (W h)⁻¹, and the energy consumption costs to be US$0.00051 (W h)⁻¹ and US$0.00032 (W h)⁻¹, respectively. The higher fuel consumption cost of the DMFC system was due to the methanol (MeOH) crossover loss. Therefore, the requirements for DMFCs to be able to compete with Li-ion batteries in terms of energy cost include reducing the crossover level to an order of magnitude of 10⁻⁹ and the MeOH price to under US$0.5 kg⁻¹. Under these conditions, if the DMFC manufacturing cost could be reduced to US$6.30 W⁻¹, then the DMFC system would become at least as competitive as the Li-ion battery system for powering laptop computers in Korea.
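The DMFC total quoted above follows from the stated unit costs and the 20 W / 3000 h assumptions; a back-of-the-envelope check (the arithmetic only, not the paper's full model):

```python
power_w = 20.0   # assumed average laptop power output (W)
hours = 3000.0   # assumed operational lifetime (h)

dmfc_manufacturing = 16.65 * power_w        # US$16.65 per W of stack
dmfc_energy = 0.00051 * power_w * hours     # US$0.00051 per Wh consumed
dmfc_total = dmfc_manufacturing + dmfc_energy
print(round(dmfc_total, 1))  # 363.6, close to the reported US$362
```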

  10. Costs Associated with Implementation of Computer-Assisted Clinical Decision Support System for Antenatal and Delivery Care: Case Study of Kassena-Nankana District of Northern Ghana

    PubMed Central

    Dalaba, Maxwell Ayindenaba; Akweongo, Patricia; Williams, John; Saronga, Happiness Pius; Tonchev, Pencho; Sauerborn, Rainer; Mensah, Nathan; Blank, Antje; Kaltschmidt, Jens; Loukanova, Svetla

    2014-01-01

    Objective This study analyzed the cost of implementing a computer-assisted Clinical Decision Support System (CDSS) in selected health care centres in Ghana. Methods A descriptive cross-sectional study was conducted in the Kassena-Nankana district (KND). CDSS was deployed in selected health centres in KND as an intervention to manage patients attending antenatal clinics and the labour ward. The CDSS users were mainly nurses, who were trained. Activities and associated costs involved in the implementation of CDSS (pre-intervention and intervention) were collected for the period 2009–2013 from the provider perspective. The ingredients approach was used for the cost analysis. Costs were grouped into personnel, training, overheads (recurrent costs) and equipment costs (capital cost). We calculated cost without annualizing capital cost to represent financial cost, and cost with annualized capital costs to represent economic cost. Results Twenty-two trained CDSS users (at least 2 users per health centre) participated in the study. Between April 2012 and March 2013, users managed 5,595 antenatal clients and 872 labour clients using the CDSS. We observed a decrease in the proportion of complications during delivery (pre-intervention 10.74% versus post-intervention 9.64%) and a reduction in the number of maternal deaths (pre-intervention 4 deaths versus post-intervention 1 death). The overall financial cost of CDSS implementation was US$23,316, approximately US$1,060 per CDSS user trained. Of the total cost of implementation, 48% (US$11,272) was pre-intervention cost and 52% (US$12,044) was intervention cost. Equipment costs accounted for the largest proportion of financial cost: 34% (US$7,917). When economic cost was considered, the total cost of implementation was US$17,128, lower than the financial cost by 26.5%. Conclusions The study provides useful information on the implementation of CDSS at health facilities to enhance health workers' adherence to practice guidelines.
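The gap between financial and economic cost comes from annualizing capital. A minimal sketch of the standard equivalent-annual-cost formula; the 3% discount rate and 5-year equipment life below are illustrative assumptions, not values reported by the study:

```python
def equivalent_annual_cost(capital, rate, years):
    """Spread a one-off capital outlay over its useful life,
    discounting future years at the given rate."""
    annuity_factor = (1 - (1 + rate) ** -years) / rate
    return capital / annuity_factor

# Illustrative: the study's US$7,917 equipment total, annualized over an
# assumed 5-year useful life at an assumed 3% discount rate.
print(round(equivalent_annual_cost(7917, 0.03, 5), 2))
```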

  11. Improved operating scenarios of the DIII-D tokamak as a result of the addition of UNIX computer systems

    SciTech Connect

    Henline, P.A.

    1995-10-01

    The increased use of UNIX-based computer systems for machine control, data handling and analysis has greatly enhanced the operating scenarios and operating efficiency of the DIII-D tokamak. This paper will describe some of these UNIX systems and their specific uses. These include the plasma control system, the electron cyclotron heating control system, the analysis of electron temperature and density measurements, and the general data acquisition system (which is collecting over 130 Mbytes of data). The speed and total capability of these systems have dramatically affected the ability to operate DIII-D. The improved operating scenarios include better plasma shape control, due to the more thorough MHD calculations done between shots, and the new ability to see the time dependence of profile data as it relates across different spatial locations in the tokamak. Other analyses that enable improved operation are also described.

  12. Mechanistic and computational studies of the atom transfer radical addition of CCl4 to styrene catalyzed by copper homoscorpionate complexes.

    PubMed

    Muñoz-Molina, José María; Sameera, W M C; Álvarez, Eleuterio; Maseras, Feliu; Belderrain, Tomás R; Pérez, Pedro J

    2011-03-21

    Experimental as well as theoretical studies have been carried out with the aim of elucidating the mechanism of the atom transfer radical addition (ATRA) of styrene and carbon tetrachloride with a Tp(x)Cu(NCMe) complex as the catalyst precursor (Tp(x) = hydrotris(pyrazolyl)borate ligand). The studies shown herein demonstrate the effect of different variables on the kinetic behavior. A mechanistic proposal consistent with theoretical and experimental data is presented.

  13. Computer image analysis: an additional tool for the identification of processed poultry and mammal protein containing bones.

    PubMed

    Pinotti, L; Fearn, T; Gulalp, S; Campagnoli, A; Ottoboni, M; Baldi, A; Cheli, F; Savoini, G; Dell'Orto, V

    2013-01-01

    The aims of this study were (1) to evaluate the potential of image analysis measurements, in combination with the official analytical methods for the detection of constituents of animal origin in feedstuffs, to distinguish between poultry and mammals; and (2) to identify possible markers that can be used in routine analysis. For this purpose, 14 mammal and seven poultry samples, comprising a total of 1081 bone fragment lacunae, were analysed by combining the microscopic methods with computer image analysis. The distributions of 30 different measured size and shape variables of the bone lacunae were studied both within and between the two zoological classes. In all cases a considerable overlap between classes meant that classification of individual lacunae was problematic, though a clear separation in the means did allow successful classification of samples on the basis of averages. The variables most useful for classification were those related to size, for example lacuna area. The approach shows considerable promise but will need further study using a larger number of samples with a wider range.

  14. Additional value of computer assisted semen analysis (CASA) compared to conventional motility assessments in pig artificial insemination.

    PubMed

    Broekhuijse, M L W J; Soštarić, E; Feitsma, H; Gadella, B M

    2011-11-01

    In order to obtain a more standardised semen motility evaluation, Varkens KI Nederland has introduced a computer assisted semen analysis (CASA) system in all their pig AI laboratories. The repeatability of CASA was enhanced by standardising: 1) an optimal sample temperature (39 °C); 2) an optimal dilution factor; 3) optimal mixing of semen and dilution buffer by using mechanical mixing; 4) the slide chamber depth; and, together with the previous points, 5) the optimal training of technicians working with the CASA system; and 6) the use of a standard operating procedure (SOP). Once laboratory technicians were trained in using this SOP, they achieved a coefficient of variation of < 5%, which was superior to the variation found when the SOP was not strictly used. Microscopic semen motility assessments by eye were subjective and not comparable to the data obtained by standardised CASA. CASA results are preferable, as accurate continuous motility data are generated rather than the discrete 10% motility increments produced by laboratory technicians' motility estimates. The higher variability of sperm motility found with CASA and the continuous motility values allow better analysis of the relationship between semen motility characteristics and fertilising capacity. The benefits of standardised CASA for AI are discussed, both with respect to estimating the correct dilution factor of the ejaculate for the production of artificial insemination (AI) doses (critical for reducing the number of sperm per AI dose) and thus to getting more reliable fertility data from these AI doses in return.
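The dilution-factor use of CASA output mentioned above is simple arithmetic: the number of doses per ejaculate follows from the measured concentration and motile fraction. A sketch with illustrative numbers (none are from the paper):

```python
def ai_doses(ejaculate_ml, conc_per_ml, motile_fraction, sperm_per_dose):
    """Number of AI doses available from one ejaculate, based on the
    CASA-measured concentration and motile fraction."""
    motile_sperm = ejaculate_ml * conc_per_ml * motile_fraction
    return motile_sperm / sperm_per_dose

# Illustrative boar ejaculate: 250 mL at 250e6 sperm/mL, 85% motile,
# targeting 2e9 motile sperm per dose.
print(ai_doses(250, 250e6, 0.85, 2e9))
```

An overestimated motile fraction inflates the dilution factor and under-doses the AI portions, which is why the standardised (rather than by-eye) motility figure matters here.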

  15. Advanced space power requirements and techniques. Task 1: Mission projections and requirements. Volume 3: Appendices. [cost estimates and computer programs

    NASA Technical Reports Server (NTRS)

    Wolfe, M. G.

    1978-01-01

    Contents: (1) general study guidelines and assumptions; (2) launch vehicle performance and cost assumptions; (3) satellite programs 1959 to 1979; (4) initiative mission and design characteristics; (5) satellite listing; (6) spacecraft design model; (7) spacecraft cost model; (8) mission cost model; and (9) nominal and optimistic budget program cost summaries.

  16. Novel low-cost 2D/3D switchable autostereoscopic system for notebook computers and other portable devices

    NASA Astrophysics Data System (ADS)

    Eichenlaub, Jesse B.

    1995-03-01

    Mounting a lenticular lens in front of a flat panel display is a well known, inexpensive, and easy way to create an autostereoscopic system. Such a lens produces half-resolution 3D images because half the pixels on the LCD are seen by the left eye and half by the right eye. This may be acceptable for graphics, but it makes full resolution text, as displayed by common software, nearly unreadable. Very fine alignment tolerances normally preclude the possibility of removing and replacing the lens in order to switch between 2D and 3D applications. Lenticular lens based displays are therefore limited to use as dedicated 3D devices. DTI has devised a technique which removes this limitation, allowing switching between full resolution 2D and half resolution 3D imaging modes. A second element, in the form of a concave lenticular lens array whose shape is exactly the negative of the first lens, is mounted on a hinge so that it can be swung down over the first lens array. When so positioned the two lenses cancel optically, allowing the user to see full resolution 2D for text or numerical applications. The two lenses, having complementary shapes, naturally tend to nestle together and snap into perfect alignment when pressed together, thus obviating any need for user-operated alignment mechanisms. This system represents an ideal solution for laptop and notebook computer applications. It was devised to meet the stringent requirements of a laptop computer manufacturer, including very compact size, very low cost, little impact on existing manufacturing or assembly procedures, and compatibility with existing full resolution 2D text-oriented software as well as 3D graphics. Similar requirements apply to high-end electronic calculators, several models of which now use LCDs for the display of graphics.

  17. Development of ANFIS models for air quality forecasting and input optimization for reducing the computational cost and time

    NASA Astrophysics Data System (ADS)

    Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila

    2016-03-01

    This study aims to develop adaptive neuro-fuzzy inference system (ANFIS) models for forecasting daily air pollution concentrations of five air pollutants [sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3) and particulate matter (PM10)] in the atmosphere of a megacity (Howrah). Air pollution in the city is rising in parallel with economic growth, and thus observing, forecasting and controlling the air pollution becomes increasingly important due to the health impact. ANFIS serves as a basis for constructing a set of fuzzy IF-THEN rules, with appropriate membership functions to generate the stipulated input-output pairs. The ANFIS model predictor considers the values of meteorological factors (pressure, temperature, relative humidity, dew point, visibility, wind speed, and precipitation) and the previous day's pollutant concentration in different combinations as the inputs to predict the 1-day-advance and same-day air pollution concentration. The concentration values of the five air pollutants and seven meteorological parameters of Howrah during the period 2009 to 2011 were used for development of the ANFIS model. Collinearity tests were conducted to eliminate the redundant input variables. A forward selection (FS) method is used for selecting the different subsets of input variables. Application of collinearity tests and FS techniques reduces the number of input variables and subsets, which helps in reducing the computational cost and time. The performances of the models were evaluated on the basis of four statistical indices (coefficient of determination, normalized mean square error, index of agreement, and fractional bias).
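The forward selection step can be sketched generically: greedily add whichever input most reduces the model error. In this illustration an ordinary least-squares fit stands in for the (much heavier) ANFIS predictor, and the data are synthetic:

```python
import numpy as np

def forward_select(X, y, max_inputs):
    """Greedy forward selection of input columns by training MSE.
    A linear least-squares fit stands in for the ANFIS predictor."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(max_inputs):
        def mse(cols):
            A = np.column_stack([X[:, cols], np.ones(len(y))])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return float(np.mean((A @ coef - y) ** 2))
        best = min(remaining, key=lambda j: mse(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))          # e.g. 7 candidate meteorological inputs
y = 3 * X[:, 2] - 2 * X[:, 5] + rng.normal(scale=0.1, size=200)
print(forward_select(X, y, 2))         # picks the two informative columns
```

Because each candidate subset costs one model fit, pruning inputs first (here via collinearity tests in the paper) is what actually cuts the computational cost and time.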

  18. 1,4-Addition of bis(iodozincio)methane to α,β-unsaturated ketones: chemical and theoretical/computational studies.

    PubMed

    Sada, Mutsumi; Furuyama, Taniyuki; Komagawa, Shinsuke; Uchiyama, Masanobu; Matsubara, Seijiro

    2010-09-10

    1,4-Addition of bis(iodozincio)methane to simple α,β-unsaturated ketones does not proceed well; the reaction is slightly endothermic according to DFT calculations. In the presence of chlorotrimethylsilane, the reaction proceeded efficiently to afford a silyl enol ether of a β-zinciomethyl ketone. The C–Zn bond of the silyl enol ether could be used in a cross-coupling reaction to form another C–C bond in a one-pot reaction. In contrast, 1,4-addition of the dizinc reagent to enones carrying an acyloxy group proceeded very efficiently without any additive. In this case, the product was a 1,3-diketone, which was generated in a novel tandem reaction. A theoretical/computational study indicates that the whole reaction pathway is exothermic, and that the two zinc atoms of bis(iodozincio)methane accelerate each step cooperatively as effective Lewis acids. PMID:20645344

  19. Corrigendum to "Development of ANFIS model for air quality forecasting and input optimization for reducing the computational cost and time" [Atmos. Environ. 128 (2016) 246-262]

    NASA Astrophysics Data System (ADS)

    Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila

    2016-10-01

    In the paper entitled "Development of ANFIS model for air quality forecasting and input optimization for reducing the computational cost and time" the correlation coefficient values of O3 with the other parameters (shown in Table 4) were mistakenly written from some other results. But, the analyses were done based on the actual results. The actual values are listed in the revised Table 4.

  20. JPL Energy Consumption Program (ECP) documentation: A computer model simulating heating, cooling and energy loads in buildings. [low cost solar array efficiency

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Chai, V. W.; Lascu, D.; Urbenajo, R.; Wong, P.

    1978-01-01

    The engineering manual provides complete companion documentation of the structure of the main program and subroutines, the preparation of input data, the interpretation of output results, access and use of the program, and the detailed description of all the analytic and logical expressions and flow charts used in the computations and program structure. A numerical example is provided and solved completely to show the sequence of computations followed. The program is carefully structured to reduce both the user's time and costs without sacrificing accuracy. The user can expect a CPU-time cost of approximately $5.00 per building zone, excluding printing costs. The accuracy, on the other hand, measured by the deviation of simulated consumption from watt-hour meter readings, was found in many simulation tests not to exceed a ±10 percent margin.

  1. Enhanced computational prediction of polyethylene wear in hip joints by incorporating cross-shear and contact pressure in addition to load and sliding distance: effect of head diameter.

    PubMed

    Kang, Lu; Galvin, Alison L; Fisher, John; Jin, Zhongmin

    2009-05-11

    A new definition of the experimental wear factor was established and reported as a function of cross-shear motion and contact pressure, using a multi-directional pin-on-plate wear testing machine for conventional polyethylene in the present study. An independent computational wear model was developed by incorporating the cross-shear motion and the contact pressure-dependent wear factor into Archard's law, in addition to load and sliding distance. The computational prediction of wear volume was directly compared with simulator testing of a polyethylene hip joint with a 28 mm diameter. The effect of increasing the femoral head size was subsequently considered and was shown to increase wear, as a result of increased sliding distance and reduced contact pressure. PMID:19261286
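A hedged sketch of the extended Archard formulation described above; the functional form of the wear factor k(cross-shear, pressure) below is purely illustrative, since the experimentally fitted factor is not given in the abstract:

```python
def wear_volume(load_n, sliding_m, cross_shear, pressure_mpa, k0=1e-7):
    """Archard's law, V = k * W * s, with the wear factor k made a function
    of cross-shear ratio and contact pressure (illustrative, assumed form:
    k grows with cross-shear and falls with contact pressure)."""
    k = k0 * cross_shear / pressure_mpa   # mm^3 / (N*m), assumed shape
    return k * load_n * sliding_m

# Larger head: longer sliding distance and lower contact pressure,
# both of which push the predicted wear volume up under this model.
small_head = wear_volume(2000, 20, 0.2, 8.0)
large_head = wear_volume(2000, 28, 0.2, 5.0)
print(small_head < large_head)  # True
```

This reproduces the qualitative head-diameter trend reported above; the quantitative prediction depends entirely on the experimentally fitted k surface.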

  2. The role of additional computed tomography in the decision-making process on the secondary prevention in patients after systemic cerebral thrombolysis

    PubMed Central

    Sobolewski, Piotr; Kozera, Grzegorz; Szczuchniak, Wiktor; Nyka, Walenty M

    2016-01-01

    Introduction Patients with ischemic stroke undergoing intravenous (iv)-thrombolysis are routinely controlled with computed tomography on the second day to assess stroke evolution and hemorrhagic transformation (HT). However, the benefits of an additional computed tomography (aCT) performed over the next days after iv-thrombolysis have not been determined. Methods We retrospectively screened 287 Caucasian patients with ischemic stroke who were consecutively treated with iv-thrombolysis from 2008 to 2012. The results of computed tomography performed on the second (control computed tomography) and seventh (aCT) day after iv-thrombolysis were compared in 274 patients (95.5%); 13 subjects (4.5%), who died before the seventh day after admission, were excluded from the analysis. Results aCTs revealed a higher incidence of HT than control computed tomographies (14.2% vs 6.6%; P=0.003). Patients with HT on aCT showed a higher median National Institutes of Health Stroke Scale score on admission than those without HT (13.0 vs 10.0; P=0.01) and a higher presence of ischemic changes >1/3 of the middle cerebral artery territory (66.7% vs 35.2%; P<0.01). The presence of HT on aCT correlated with the National Institutes of Health Stroke Scale score on admission (rpbi 0.15; P<0.01) and with ischemic changes >1/3 of the middle cerebral artery territory (phi=0.03), and was associated with 3-month mortality (phi=0.03). Conclusion aCT after iv-thrombolysis enables higher detection of HT, which is related to higher 3-month mortality. Thus, patients with severe middle cerebral artery infarction may benefit from aCT in the decision-making process on the secondary prophylaxis. PMID:26730196

  3. ANL/RBC: a computer code for the analysis of Rankine bottoming cycles, including system cost evaluation and off-design performance

    SciTech Connect

    McLennan, G.A.

    1986-05-01

    This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to calculate the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.
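The cost functions described above, defined in equation form or as numerical tables and then combined, can be modelled as composable functions; a sketch in Python rather than the code's actual input format, with illustrative component costs:

```python
from bisect import bisect_left

def equation_cost(a, b, exponent):
    """Cost = a + b * size**exponent (a common scaling-law form)."""
    return lambda size: a + b * size ** exponent

def tabular_cost(sizes, costs):
    """Piecewise-linear interpolation through tabulated (size, cost) points."""
    def cost(size):
        i = min(max(bisect_left(sizes, size), 1), len(sizes) - 1)
        x0, x1, y0, y1 = sizes[i - 1], sizes[i], costs[i - 1], costs[i]
        return y0 + (y1 - y0) * (size - x0) / (x1 - x0)
    return cost

def combined(*fns):
    """Sum several component cost functions into one system cost function."""
    return lambda size: sum(f(size) for f in fns)

# Illustrative components (not values from the report):
hx = equation_cost(5000.0, 120.0, 0.8)                     # equation form
turbine = tabular_cost([10, 100, 1000], [2e4, 9e4, 6e5])   # tabular form
system = combined(hx, turbine)
print(system(100))
```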

  4. ANL/RBC: A computer code for the analysis of Rankine bottoming cycles, including system cost evaluation and off-design performance

    NASA Technical Reports Server (NTRS)

    Mclennan, G. A.

    1986-01-01

    This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to determine the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.

  5. Potlining Additives

    SciTech Connect

    Rudolf Keller

    2004-08-10

    In this project, a concept to improve the performance of aluminum production cells by introducing potlining additives was examined and tested. Boron oxide was added to cathode blocks, and titanium was dissolved in the metal pool; this resulted in the formation of titanium diboride and caused the molten aluminum to wet the carbonaceous cathode surface. Such wetting reportedly leads to operational improvements and extended cell life. In addition, boron oxide suppresses cyanide formation. This final report presents and discusses the results of this project. Substantial economic benefits for the practical implementation of the technology are projected, especially for modern cells with graphitized blocks. For example, with an energy savings of about 5% and an increase in pot life from 1500 to 2500 days, a cost savings of $0.023 per pound of aluminum produced is projected for a 200 kA pot.

  6. An Economic Evaluation of a Video- and Text-Based Computer-Tailored Intervention for Smoking Cessation: A Cost-Effectiveness and Cost-Utility Analysis of a Randomized Controlled Trial

    PubMed Central

    Stanczyk, Nicola E.; Smit, Eline S.; Schulz, Daniela N.; de Vries, Hein; Bolman, Catherine; Muris, Jean W. M.; Evers, Silvia M. A. A.

    2014-01-01

    Background Although evidence exists for the effectiveness of web-based smoking cessation interventions, information about the cost-effectiveness of these interventions is limited. Objective The study investigated the cost-effectiveness and cost-utility of two web-based computer-tailored (CT) smoking cessation interventions (video- vs. text-based CT) compared to a control condition that received general text-based advice. Methods In a randomized controlled trial, respondents were allocated to the video-based condition (N = 670), the text-based condition (N = 708) or the control condition (N = 721). Societal costs, smoking status, and quality-adjusted life years (QALYs; EQ-5D-3L) were assessed at baseline and at six- and twelve-month follow-up. The incremental costs per abstinent respondent and per QALY gained were calculated. To account for uncertainty, bootstrapping techniques and sensitivity analyses were carried out. Results No significant differences were found between the three conditions regarding demographics, baseline values of outcomes and societal costs over the three months prior to baseline. Analyses using prolonged abstinence as the outcome measure indicated that from a willingness to pay of €1,500, the video-based intervention was likely to be the most cost-effective treatment, whereas from a willingness to pay of €50,400, the text-based intervention was likely to be the most cost-effective. With regard to cost-utilities, when quality of life was used as the outcome measure, the control condition had the highest probability of being the most preferable treatment. Sensitivity analyses yielded comparable results. Conclusion The video-based CT smoking cessation intervention was the most cost-effective treatment for smoking abstinence after twelve months, varying the willingness to pay per abstinent respondent from €0 up to €80,000. With regard to cost-utility, the control condition seemed to be the most preferable treatment. Probably, more time will be
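The willingness-to-pay thresholds above come from the standard incremental cost-effectiveness calculus; a minimal sketch with illustrative numbers (not the trial's data):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

def net_monetary_benefit(cost, effect, wtp):
    """Effect valued at the willingness-to-pay (WTP), minus cost; at a given
    WTP, the condition with the highest NMB is the preferred treatment."""
    return wtp * effect - cost

# Illustrative arms: (societal cost per respondent, abstinence rate).
video, control = (120.0, 0.18), (80.0, 0.15)
print(icer(video[0], control[0], video[1], control[1]))
print(net_monetary_benefit(*video, wtp=1500) > net_monetary_benefit(*control, wtp=1500))
```

Sweeping the WTP from €0 upward and checking which arm has the highest (bootstrapped) net benefit at each value is what yields thresholds like the €1,500 and €50,400 reported above.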

  7. The cumulative cost of additional wakefulness: dose-response effects on neurobehavioral functions and sleep physiology from chronic sleep restriction and total sleep deprivation

    NASA Technical Reports Server (NTRS)

    Van Dongen, Hans P A.; Maislin, Greg; Mullington, Janet M.; Dinges, David F.

    2003-01-01

    were near-linearly related to the cumulative duration of wakefulness in excess of 15.84 h (s.e. 0.73 h). CONCLUSIONS: Since chronic restriction of sleep to 6 h or less per night produced cognitive performance deficits equivalent to up to 2 nights of total sleep deprivation, it appears that even relatively moderate sleep restriction can seriously impair waking neurobehavioral functions in healthy adults. Sleepiness ratings suggest that subjects were largely unaware of these increasing cognitive deficits, which may explain why the impact of chronic sleep restriction on waking cognitive functions is often assumed to be benign. Physiological sleep responses to chronic restriction did not mirror waking neurobehavioral responses, but cumulative wakefulness in excess of 15.84 h predicted performance lapses across all four experimental conditions. This suggests that sleep debt is perhaps best understood as resulting in additional wakefulness that has a neurobiological "cost" which accumulates over time.
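The predictor identified above, cumulative wakefulness in excess of 15.84 h per day, can be written directly (the 15.84 h threshold is from the study; the wake schedule below is illustrative):

```python
CRITICAL_WAKE_H = 15.84  # daily wakefulness beyond which deficits accumulate

def cumulative_excess_wakefulness(daily_wake_hours):
    """Sum, over days, of wakefulness beyond the ~15.84 h daily threshold;
    performance lapses were near-linear in this quantity."""
    return sum(max(0.0, w - CRITICAL_WAKE_H) for w in daily_wake_hours)

# 6 h of sleep per night -> 18 h awake per day, for 14 days:
# 14 days x 2.16 h/day of excess wakefulness = 30.24 h of accumulated "cost".
print(cumulative_excess_wakefulness([18.0] * 14))
```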

  8. Development of cost-effective media to increase the economic potential for larger-scale bioproduction of natural food additives by Lactobacillus rhamnosus, Debaryomyces hansenii, and Aspergillus niger.

    PubMed

    Salgado, José Manuel; Rodríguez, Noelia; Cortés, Sandra; Domínguez, José Manuel

    2009-11-11

    Yeast extract (YE) is the most common nitrogen source in a variety of bioprocesses in spite of its high cost. Therefore, the use of YE in culture media is one of the major technical hurdles to be overcome for the development of low-cost fermentation routes, making the search for alternative, cheaper nitrogen sources particularly desirable. The aim of the current study is to develop cost-effective media based on corn steep liquor (CSL) and locally available vinasses in order to increase the economic potential for larger-scale bioproduction. Three microorganisms were evaluated: Lactobacillus rhamnosus, Debaryomyces hansenii, and Aspergillus niger. The amino acid profile and protein concentration were relevant for xylitol and citric acid production by D. hansenii and A. niger, respectively. Metals also played an important role in citric acid production; meanwhile, D. hansenii showed a strong dependence on the initial amount of Mg(2+). Under the best conditions, 28.8 g lactic acid/L (Q(LA) = 0.800 g/(L·h), Y(LA/S) = 0.95 g/g), 35.3 g xylitol/L (Q(xylitol) = 0.380 g/(L·h), Y(xylitol/S) = 0.69 g/g), and 13.9 g citric acid/L (Q(CA) = 0.146 g/(L·h), Y(CA/S) = 0.63 g/g) were obtained. The economic efficiency (E(p/euro)) parameter identifies vinasses as a lower-cost and more effective nutrient source in comparison to CSL.
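The Q and Y figures follow from their definitions (volumetric productivity and product-per-substrate yield); a quick consistency check on the lactic-acid numbers, assuming the roughly 36 h fermentation time they imply (the time and substrate amounts below are inferred, not reported):

```python
def volumetric_productivity(titer_g_per_l, hours):
    """Q: grams of product per litre of broth per hour."""
    return titer_g_per_l / hours

def product_yield(product_g, substrate_g):
    """Y: grams of product per gram of substrate consumed."""
    return product_g / substrate_g

# Lactic acid: 28.8 g/L over an inferred 36 h matches the quoted Q_LA.
print(volumetric_productivity(28.8, 36.0))   # 0.800 g/(L*h)
# 28.8 g from an inferred ~30.3 g substrate matches the quoted Y_LA/S.
print(round(product_yield(28.8, 30.3), 2))   # 0.95 g/g
```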

  9. 16 CFR 4.8 - Costs for obtaining Commission records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Costs for obtaining Commission records. 4.8... format (scanning) 2.50 per page. Computer programming 8.00 per qtr. hour. Other Fees: Computer Tape 18.50... category, and adding 16 percent to reflect the cost of additional benefits accorded to government...

  10. Price and cost estimation

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.

    1979-01-01

    The Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. This versatile and flexible tool significantly reduces computation time and errors, as well as the typing and reproduction time involved in preparing cost estimates.

  11. Computer simulation to predict energy use, greenhouse gas emissions and costs for production of fluid milk using alternative processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computer simulation is a useful tool for benchmarking the electrical and fuel energy consumption and water use in a fluid milk plant. In this study, a computer simulation model of the fluid milk process based on high temperature short time (HTST) pasteurization was extended to include models for pr...

  12. Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics.

    PubMed

    Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo

    2016-01-01

    The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART, which requires the use of reflective markers placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the American National Institute for Occupational Safety and Health (NIOSH). The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement with this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising for promoting the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER'S SUMMARY: The study is motivated by the increasing interest in on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
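The NIOSH lifting equation behind the risk multipliers above scales a 23 kg load constant by six reducing multipliers; a compact sketch (the frequency and coupling multipliers, normally read from tables, are assumed to be 1.0 here):

```python
def recommended_weight_limit(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    """Revised NIOSH lifting equation: RWL = LC*HM*VM*DM*AM*FM*CM.
    h: horizontal hand distance, v: vertical hand height, d: vertical travel
    distance (cm), a: asymmetry angle (deg). FM and CM normally come from
    lookup tables; they are assumed to be 1.0 in this sketch."""
    lc = 23.0                           # load constant, kg
    hm = min(1.0, 25.0 / h_cm)          # horizontal multiplier
    vm = 1.0 - 0.003 * abs(v_cm - 75.0) # vertical multiplier
    dm = 0.82 + 4.5 / max(d_cm, 25.0)   # distance multiplier (DM = 1 at D <= 25)
    am = 1.0 - 0.0032 * a_deg           # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

# Ideal lift (H=25 cm, V=75 cm, D=25 cm, no twist) recovers the 23 kg constant.
rwl = recommended_weight_limit(25, 75, 25, 0)
print(rwl, 15.0 / rwl)  # lifting index (load / RWL) for a 15 kg load
```

Both acquisition systems in the study feed the same equation; they differ only in how the posture angles and distances (H, V, D, A) are measured.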

  13. Synthesis of Bridged Heterocycles via Sequential 1,4- and 1,2-Addition Reactions to α,β-Unsaturated N-Acyliminium Ions: Mechanistic and Computational Studies.

    PubMed

    Yazici, Arife; Wille, Uta; Pyne, Stephen G

    2016-02-19

    Novel tricyclic bridged heterocyclic systems can be readily prepared from sequential 1,4- and 1,2-addition reactions of allyl and 3-substituted allylsilanes to indolizidine and quinolizidine α,β-unsaturated N-acyliminium ions. These reactions involve a novel N-assisted, transannular 1,5-hydride shift. Such a mechanism was supported by examining the reaction of a dideuterated indolizidine, α,β-unsaturated N-acyliminium ion precursor, which provided specifically dideuterated tricyclic bridged heterocyclic products, and from computational studies. In contrast, the corresponding pyrrolo[1,2-a]azepine system did not provide the corresponding tricyclic bridged heterocyclic product and gave only a bis-allyl adduct, while more substituted versions gave novel furo[3,2-d]pyrrolo[1,2-a]azepine products. Such heterocyclic systems would be expected to be useful scaffolds for the preparation of libraries of novel compounds for new drug discovery programs. PMID:26816207

  15. Low-cost computer classification of land cover in the Portland area, Oregon, by signature extension techniques

    USGS Publications Warehouse

    Gaydos, Leonard

    1978-01-01

The cost of classifying 5,607 square kilometers (2,165 sq. mi.) in the Portland area was less than 8 cents per square kilometer ($0.0788, or $0.2041 per square mile). Besides the cost savings, this and other signature extension techniques may be useful in completing land use and land cover mapping in other large areas where multispectral and multitemporal Landsat data are available in digital form but other source materials are generally lacking.

  16. 48 CFR 49.303-4 - Adjustment of indirect costs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... costs. 49.303-4 Section 49.303-4 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Additional Principles for Cost-Reimbursement Contracts... compute indirect costs for other contracts performed during the applicable accounting period....

  17. 48 CFR 49.303-4 - Adjustment of indirect costs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... costs. 49.303-4 Section 49.303-4 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Additional Principles for Cost-Reimbursement Contracts... compute indirect costs for other contracts performed during the applicable accounting period....

  18. 48 CFR 49.303-4 - Adjustment of indirect costs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... costs. 49.303-4 Section 49.303-4 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACT MANAGEMENT TERMINATION OF CONTRACTS Additional Principles for Cost-Reimbursement Contracts... compute indirect costs for other contracts performed during the applicable accounting period....

  19. ECG boy: low-cost medical instrumentation using mass-produced, hand-held entertainment computers. A preliminary report.

    PubMed

    Rohde, M M; Bement, S L; Lupa, R S

    1998-01-01

A prototype low-cost, portable ECG monitor, the "ECG Boy," is described. A mass-produced hand-held video game platform is the basis for a complete three-lead, driven right-leg electrocardiogram (ECG). The ECG circuitry is planned to fit in a standard modular cartridge that is inserted in a production Nintendo "Gameboy." The combination is slightly smaller than a paperback book and weighs less than 500 g. The unit contains essential safety features such as optical isolation and is powered by 9-V and AA batteries. Functionally, the ECG Boy permits viewing ECG recordings in real time on the integrated screen. The user can select both the lead displayed on the screen and the time scale used. A 1-mV reference allows for calibration. Other ECG enhancements such as data transmission via telephone can be easily and inexpensively added to this system. The ECG Boy is intended as a proof of concept for a new class of low-cost biomedical instruments. Rising health care costs coupled with tightened funding have created an acute demand for low-cost medical equipment that satisfies safety and quality standards. A mass-produced microprocessor-based platform designed for the entertainment market can keep costs low while providing a functional basis for a biomedical instrument.

  20. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems

    NASA Astrophysics Data System (ADS)

    Li, Ying

    2016-09-01

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g. superconducting circuits or quantum dots, is studied in this paper. Errors caused by topologically unprotected quantum systems need to be corrected with error correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer, and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about a thousand normal qubits.

  1. Middleware enabling computational self-reflection: exploring the need for and some costs of selfreflecting networks with application to homeland defense

    NASA Astrophysics Data System (ADS)

    Kramer, Michael J.; Bellman, Kirstie L.; Landauer, Christopher

    2002-07-01

This paper reviews and examines the definitions of Self-Reflection and Active Middleware. It then illustrates a conceptual framework for understanding and enumerating the costs of Self-Reflection and Active Middleware at increasing levels of application, and reviews some applications of Self-Reflection and Active Middleware to simulations. Finally, it considers the application, and the additional kinds of costs, of applying Self-Reflection and Active Middleware to sharing information among the organizations expected to participate in Homeland Defense.

  2. 10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... structure which is debt. Wp=Fraction of existing capital structure which is preferred equity. We=Fraction of existing capital structure which is common equity and retained earnings. R d=Predicted nominal cost of long... sixty months of data. The first month (t=1) is sixty months before the month in which the firm's...
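The snippet above defines the capital-structure fractions (Wd, Wp, We) and component costs; the weighted average they imply can be sketched as follows. The deflation of the nominal rate to a real rate via the Fisher relation is an assumption about the appendix's later steps, not quoted text:

```python
def nominal_wacc(wd, wp, we, rd, rp, re):
    """Weighted-average nominal cost of capital from the capital-structure
    fractions (Wd + Wp + We = 1) and the component costs of debt,
    preferred equity, and common equity."""
    assert abs(wd + wp + we - 1.0) < 1e-9, "fractions must sum to 1"
    return wd * rd + wp * rp + we * re

def real_cost(nominal_rate, expected_inflation):
    """Convert a nominal rate to a real rate (Fisher relation)."""
    return (1.0 + nominal_rate) / (1.0 + expected_inflation) - 1.0
```

For example, a 50/10/40 structure with component costs of 8%, 9%, and 12% gives a nominal WACC of 9.7%.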

  3. 10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... structure which is debt. Wp=Fraction of existing capital structure which is preferred equity. We=Fraction of existing capital structure which is common equity and retained earnings. R d=Predicted nominal cost of long... sixty months of data. The first month (t=1) is sixty months before the month in which the firm's...

  4. Design and implementation of a medium speed communications interface and protocol for a low cost, refreshed display computer

    NASA Technical Reports Server (NTRS)

    Phyne, J. R.; Nelson, M. D.

    1975-01-01

The design and implementation of hardware and software systems involved in using a 40,000 bit/second communication line as the connecting link between an IMLAC PDS 1-D display computer and a Univac 1108 computer system were described. The IMLAC consists of two independent processors sharing a common memory. The display processor generates the deflection and beam control currents as it interprets a program contained in the memory; the minicomputer has a general instruction set and is responsible for starting and stopping the display processor and for communicating with the outside world through the keyboard, teletype, light pen, and communication line. The processing time associated with each data byte was minimized by designing the input and output processes as finite state machines which automatically sequence from each state to the next. Several tests of the communication link and the IMLAC software were made using a special low-capacity, computer-grade cable between the IMLAC and the Univac.
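The byte-driven finite state machines described above can be sketched generically: each incoming byte advances the machine one state with minimal per-byte work. The SOH/length/checksum framing below is an illustrative assumption, not the actual IMLAC-Univac protocol:

```python
SOH = 0x01  # hypothetical start-of-frame byte

class ReceiveFSM:
    """Byte-level receive state machine: WAIT_SOH -> WAIT_LEN -> DATA -> CSUM.
    Each feed() call does only the small amount of work needed to advance
    one state, mirroring the per-byte design goal in the abstract."""

    def __init__(self):
        self.state = "WAIT_SOH"
        self.length = 0
        self.payload = []
        self.frames = []          # completed, checksum-verified frames

    def feed(self, byte):
        if self.state == "WAIT_SOH":
            if byte == SOH:
                self.state = "WAIT_LEN"
        elif self.state == "WAIT_LEN":
            self.length = byte
            self.payload = []
            self.state = "DATA" if byte else "CSUM"
        elif self.state == "DATA":
            self.payload.append(byte)
            if len(self.payload) == self.length:
                self.state = "CSUM"
        elif self.state == "CSUM":
            if byte == sum(self.payload) & 0xFF:   # simple additive checksum
                self.frames.append(bytes(self.payload))
            self.state = "WAIT_SOH"
```

Feeding the bytes `[0x01, 2, 10, 20, 30]` yields one verified two-byte frame and returns the machine to its idle state.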

  5. Computer architecture providing high-performance and low-cost solutions for fast fMRI reconstruction

    NASA Astrophysics Data System (ADS)

    Chao, Hui; Goddard, J. Iain

    1998-07-01

Due to the dynamic nature of brain studies in functional magnetic resonance imaging (fMRI), fast pulse sequences such as echo planar imaging (EPI) and spiral are often used for higher temporal resolution. Hundreds of frames of two-dimensional (2-D) images or multiple three-dimensional (3-D) images are often acquired to cover a larger space and time range. Therefore, fMRI often requires a much larger data storage, faster data transfer rate and higher processing power than conventional MRI. In Mercury Computer Systems' PCI-based embedded computer system, the computer architecture allows the concurrent use of a DMA engine for data transfer and the CPU for data processing. This architecture allows a multicomputer to distribute processing and data with minimal time spent transferring data. Different types and numbers of processors are available to optimize system performance for the application. The fMRI reconstruction was first implemented in Mercury's PCI-based embedded computer system by using one digital signal processing (DSP) chip, with the host computer running under the Windows NT platform. Double buffers in SRAM or cache were created for concurrent I/O and processing. The fMRI reconstruction was then implemented in parallel using multiple DSP chips. Data transfer and interprocessor synchronization were carefully managed to optimize algorithm efficiency. The image reconstruction times were measured with different numbers of processors ranging from one to 10. With one DSP chip, the timing for reconstructing 100 fMRI images measuring 128 × 64 pixels was 1.24 seconds, which is already faster than most existing commercial MRI systems. This PCI-based embedded multicomputer architecture, which has a nearly linear improvement in performance, provides high performance for fMRI processing. In summary, this embedded multicomputer system allows the choice of computer topologies to fit the specific application to achieve maximum system performance.
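The double-buffering pattern described above, where the DMA engine fills one buffer while the CPU processes the other, can be sketched with two threads and a bounded queue. This is a generic analogue of the ping-pong scheme, not Mercury's actual API:

```python
import threading
import queue

def double_buffered_pipeline(produce_chunk, process_chunk, n_chunks):
    """Overlap transfer and compute: a bounded queue of size 2 plays the
    role of the double buffer, so while the consumer processes one chunk
    the producer fills the next, mimicking the DMA-engine/CPU concurrency
    described for the Mercury system."""
    buffers = queue.Queue(maxsize=2)   # two in-flight buffers = double buffering
    results = []

    def producer():
        for i in range(n_chunks):
            buffers.put(produce_chunk(i))   # stands in for a DMA transfer
        buffers.put(None)                   # end-of-stream sentinel

    t = threading.Thread(target=producer)
    t.start()
    while True:
        buf = buffers.get()
        if buf is None:
            break
        results.append(process_chunk(buf))  # stands in for reconstruction work
    t.join()
    return results
```

With real workloads, transfer and compute run concurrently, so total time approaches max(transfer, compute) per chunk rather than their sum.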

  6. Cost-Estimation Program

    NASA Technical Reports Server (NTRS)

    Cox, Brian

    1995-01-01

The COSTIT computer program estimates the cost of an electronic design by reading an item-list file and a file containing the cost of each item. The accuracy of the cost estimate depends on the accuracy of the cost-list file. COSTIT was written with the AWK utility for Sun4-series computers running SunOS 4.x and for IBM PC-series and compatible computers running MS-DOS. The Sun version is NPO-19587; the PC version is NPO-19157.
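The join-and-sum that COSTIT performs can be re-created in a few lines. The whitespace-separated "item qty" and "item unit_cost" file formats below are assumptions for illustration; the abstract does not document COSTIT's actual formats:

```python
def estimate_cost(item_lines, cost_lines):
    """AWK-style join of an item list ("item qty") against a cost list
    ("item unit_cost"), totaling the design cost. Items missing from the
    cost list are reported, since the estimate's accuracy depends
    entirely on the cost-list file."""
    costs = {}
    for line in cost_lines:
        name, unit = line.split()
        costs[name] = float(unit)
    total = 0.0
    missing = []
    for line in item_lines:
        name, qty = line.split()
        if name in costs:
            total += int(qty) * costs[name]
        else:
            missing.append(name)
    return total, missing
```

Feeding it four resistors at $0.05 and two capacitors at $0.10, plus one unpriced part, yields a $0.40 total and flags the unpriced item.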

  7. 10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... determining the current average yield on newly issued bonds—industrial or utility as appropriate—which have...—industrial or utility as appropriate—which has the same rating as the firm's most recent preferred stock...% (B) The “beta” coefficient is computed with regression analysis techniques. The regression...

  8. 10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...—industrial or utility as appropriate—which have the same rating as the firm's most recent debt issue. (3) The... newly issued preferred stock—industrial or utility as appropriate—which has the same rating as the firm... by Ibbotson and Sinquefield(1)—9.2% (B) The “beta” coefficient is computed with regression...
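The "beta" regression and the 9.2% Ibbotson-Sinquefield premium quoted above feed a CAPM-style cost of equity. A minimal sketch (ordinary least-squares slope plus Re = Rf + beta × premium); the appendix's exact sixty-month window and any adjustments are not reproduced here:

```python
def beta_ols(firm_returns, market_returns):
    """"Beta" as the OLS slope of firm returns regressed on market
    returns -- the regression analysis the appendix refers to."""
    n = len(firm_returns)
    mx = sum(market_returns) / n
    my = sum(firm_returns) / n
    cov = sum((x - mx) * (y - my)
              for x, y in zip(market_returns, firm_returns))
    var = sum((x - mx) ** 2 for x in market_returns)
    return cov / var

def capm_cost_of_equity(risk_free, beta, market_premium=0.092):
    """CAPM: Re = Rf + beta * premium; 0.092 is the 9.2% equity risk
    premium attributed to Ibbotson and Sinquefield in the appendix."""
    return risk_free + beta * market_premium
```

A firm whose returns move twice as much as the market's has beta 2; with beta 1 and a 5% risk-free rate the implied nominal cost of equity is 14.2%.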

  9. Setting up a Low-Cost Lab Management System for a Multi-Purpose Computing Laboratory Using Virtualisation Technology

    ERIC Educational Resources Information Center

    Mok, Heng Ngee; Lee, Yeow Leong; Tan, Wee Kiat

    2012-01-01

    This paper describes how a generic computer laboratory equipped with 52 workstations is set up for teaching IT-related courses and other general purpose usage. The authors have successfully constructed a lab management system based on decentralised, client-side software virtualisation technology using Linux and free software tools from VMware that…

  10. OPTIM: Computer program to generate a vertical profile which minimizes aircraft fuel burn or direct operating cost. User's guide

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A profile of altitude, airspeed, and flight path angle as a function of range between a given set of origin and destination points for particular models of transport aircraft provided by NASA is generated. Inputs to the program include the vertical wind profile, the aircraft takeoff weight, the costs of time and fuel, certain constraint parameters and control flags. The profile can be near optimum in the sense of minimizing: (1) fuel, (2) time, or (3) a combination of fuel and time (direct operating cost (DOC)). The user can also, as an option, specify the length of time the flight is to span. The theory behind the technical details of this program is also presented.
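The three optimization modes above reduce to one objective, DOC = Cf·fuel + Ct·time, with either cost set to zero for the pure-fuel or pure-time cases. A toy sketch over candidate (fuel, time) pairs; the real program optimizes full altitude/airspeed profiles, which this does not attempt:

```python
def direct_operating_cost(fuel_kg, time_hr, cost_fuel, cost_time):
    """DOC combines fuel burn and flight time: DOC = Cf*fuel + Ct*time."""
    return cost_fuel * fuel_kg + cost_time * time_hr

def best_profile(profiles, cost_fuel, cost_time):
    """Pick the candidate (fuel, time) pair with minimum DOC. Setting
    cost_time = 0 reduces this to minimum fuel and cost_fuel = 0 to
    minimum time, mirroring OPTIM's three modes."""
    return min(profiles,
               key=lambda p: direct_operating_cost(p[0], p[1],
                                                   cost_fuel, cost_time))
```

A slower profile that burns less fuel wins when time is free, but loses once the cost of time dominates.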

  11. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal energy storage coupled with district heating or cooling systems. Volume I. Main text

    SciTech Connect

    Huber, H.D.; Brown, D.R.; Reilly, R.W.

    1982-04-01

    A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains the main text, including introduction, program description, input data instruction, a description of the output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.
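The life-cycle cost of delivered thermal energy that the submodels above compute can be illustrated with a generic levelized-cost calculation: discounted cash flows divided by discounted energy delivered. End-of-year discounting is an assumption here, not a detail taken from the AQUASTOR documentation:

```python
def levelized_cost(cash_flows, energy_delivered, discount_rate):
    """Levelized life-cycle cost of energy: present value of annual cash
    flows divided by present value of annual delivered energy (end-of-year
    discounting assumed). Units: cost per unit energy."""
    pv_cost = sum(c / (1.0 + discount_rate) ** (t + 1)
                  for t, c in enumerate(cash_flows))
    pv_energy = sum(e / (1.0 + discount_rate) ** (t + 1)
                    for t, e in enumerate(energy_delivered))
    return pv_cost / pv_energy
```

With constant annual costs and deliveries the levelized cost equals the simple ratio regardless of the discount rate; varying flows is where discounting changes the answer.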

  12. A computational study of the addition of ReO3L (L = Cl(-), CH3, OCH3 and Cp) to ethenone.

    PubMed

    Aniagyei, Albert; Tia, Richard; Adei, Evans

    2016-01-01

The periselectivity and chemoselectivity of the addition of transition metal oxides of the type ReO3L (L = Cl, CH3, OCH3 and Cp) to ethenone have been explored at the M06 and B3LYP/LACVP* levels of theory. The activation barriers and reaction energies for the stepwise and concerted addition pathways involving multiple spin states have been computed. In the reaction of ReO3L (L = Cl(-), OCH3, CH3 and Cp) with ethenone, the concerted [2 + 2] addition of the metal oxide across the C=C and C=O double bonds to form either metalla-2-oxetane-3-one or metalla-2,4-dioxolane is kinetically the most favored over the formation of metalla-2,5-dioxolane-3-one from the direct [3 + 2] addition pathway. The trends in activation energies for the formation of metalla-2-oxetane-3-one and metalla-2,4-dioxolane are Cp < Cl(-) < OCH3 < CH3 and Cp < OCH3 < CH3 < Cl(-), respectively; the corresponding trends in reaction energies are Cp < OCH3 < Cl(-) < CH3 and Cp < CH3 < OCH3 < Cl(-). The concerted [3 + 2] addition of the metal oxide across the C=C double bond of the ethenone to form metalla-2,5-dioxolane-3-one is thermodynamically the most favored for the ligand L = Cp. The direct [2 + 2] addition pathways leading to the formation of metalla-2-oxetane-3-one and metalla-2,4-dioxolane are thermodynamically the most favored for the ligands L = OCH3 and Cl(-). The differences between the calculated [2 + 2] activation barriers for the addition of the metal oxide LReO3 across the C=C and C=O functionalities of ethenone are small except for the cases of L = Cl(-) and OCH3. The rearrangements between metalla-2-oxetane-3-one and metalla-2,5-dioxolane-3-one, even though feasible, are unfavorable due to the high activation energies of their rate-determining steps. For the rearrangement of metalla-2-oxetane-3-one to metalla-2,5-dioxolane-3-one, the trends in activation barriers are found to follow the order OCH3 < Cl(-) < CH3 < Cp. The trends in the activation energies for

  13. Accuracy of a Low-Cost Novel Computer-Vision Dynamic Movement Assessment: Potential Limitations and Future Directions

    NASA Astrophysics Data System (ADS)

    McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.

    2016-04-01

The aim of the study was to perform a preliminary validation of a low cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) between CAPTURE and Vicon were compared. After correction algorithms were applied to the raw CAPTURE data acceptable levels of accuracy and precision were achieved. The knee flexion angle measured for three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has potential to provide an objective method for assessing lower limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants the need for future research to examine more fully the potential implications of the use of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.
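Knee flexion angles like those compared above reduce, in both marker-based and markerless pipelines, to an angle between the thigh and shank segments computed from three joint positions. A generic sketch of that segment-angle computation; it is not the specific algorithm used by Vicon or CAPTURE:

```python
import math

def knee_flexion_deg(hip, knee, ankle):
    """Knee flexion as the angle between the thigh (knee->hip) and shank
    (knee->ankle) segments, reported as deviation from a straight leg
    (0 degrees = full extension). Inputs are 3-D joint positions."""
    thigh = [h - k for h, k in zip(hip, knee)]
    shank = [a - k for a, k in zip(ankle, knee)]
    dot = sum(t * s for t, s in zip(thigh, shank))
    norm = math.dist(hip, knee) * math.dist(ankle, knee)
    # clamp guards against tiny floating-point overshoot outside [-1, 1]
    between = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return 180.0 - between
```

A perfectly straight leg gives 0°, and a right-angle bend gives 90°; the ±3° deviations reported above are against angles of this kind.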

  14. Additive Manufacturing of Single-Crystal Superalloy CMSX-4 Through Scanning Laser Epitaxy: Computational Modeling, Experimental Process Development, and Process Parameter Optimization

    NASA Astrophysics Data System (ADS)

    Basak, Amrita; Acharya, Ranadip; Das, Suman

    2016-08-01

This paper focuses on additive manufacturing (AM) of single-crystal (SX) nickel-based superalloy CMSX-4 through scanning laser epitaxy (SLE). SLE, a powder bed fusion-based AM process, was explored for the purpose of producing crack-free, dense deposits of CMSX-4 on top of similar chemistry investment-cast substrates. Optical microscopy and scanning electron microscopy (SEM) investigations revealed the presence of dendritic microstructures that consisted of fine γ' precipitates within the γ matrix in the deposit region. Computational fluid dynamics (CFD)-based process modeling, statistical design of experiments (DoE), and microstructural characterization techniques were combined to produce metallurgically bonded single-crystal deposits of more than 500 μm height in a single pass along the entire length of the substrate. A customized quantitative metallography based image analysis technique was employed for automatic extraction of various deposit quality metrics from the digital cross-sectional micrographs. The processing parameters were varied, and optimal processing windows were identified to obtain good quality deposits. The results reported here represent one of the few successes obtained in producing single-crystal epitaxial deposits through a powder bed fusion-based metal AM process and thus demonstrate the potential of SLE to repair and manufacture single-crystal hot section components of gas turbine systems from nickel-based superalloy powders.

  15. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation

    PubMed Central

    Loewe, Axel; Schulze, Walther H. W.; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2–11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold. PMID:26587538
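The K point deviation defined above is the baseline deviation at the minimum of the ST-segment envelope signal. A sketch under one plausible reading of that definition, taking the envelope as the per-sample maximum absolute deviation across leads; this is an interpretation for illustration, not the paper's exact recipe:

```python
def k_point_deviation(st_segments):
    """Given baseline-corrected ST-segment samples, one list per lead,
    build the envelope (per-sample max absolute deviation across leads)
    and return (deviation, sample index) at the envelope minimum."""
    n = len(st_segments[0])
    envelope = [max(abs(lead[i]) for lead in st_segments) for i in range(n)]
    k = min(range(n), key=envelope.__getitem__)
    return envelope[k], k

def st_integral(lead, dt):
    """The conventional alternative feature: ST time integral of one lead
    (sum of samples times the sampling interval)."""
    return sum(lead) * dt
```

Because the envelope takes a maximum over leads, its minimum picks the instant where even the worst-affected lead deviates least, which is the "K point" under this reading.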

  16. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation.

    PubMed

    Loewe, Axel; Schulze, Walther H W; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2-11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold.

  17. Tandem β-elimination/hetero-michael addition rearrangement of an N-alkylated pyridinium oxime to an O-alkylated pyridine oxime ether: an experimental and computational study.

    PubMed

    Picek, Igor; Vianello, Robert; Šket, Primož; Plavec, Janez; Foretić, Blaženka

    2015-02-20

    A novel OH(-)-promoted tandem reaction involving C(β)-N(+)(pyridinium) cleavage and ether C(β)-O(oxime) bond formation in aqueous media has been presented. The study fully elucidates the fascinating reaction behavior of N-benzoylethylpyridinium-4-oxime chloride in aqueous media under mild reaction conditions. The reaction journey begins with the exclusive β-elimination and formation of pyridine-4-oxime and phenyl vinyl ketone and ends with the formation of O-alkylated pyridine oxime ether. A combination of experimental and computational studies enabled the introduction of a new type of rearrangement process that involves a unique tandem reaction sequence. We showed that (E)-O-benzoylethylpyridine-4-oxime is formed in aqueous solution by a base-induced tandem β-elimination/hetero-Michael addition rearrangement of (E)-N-benzoylethylpyridinium-4-oximate, the novel synthetic route to this engaging target class of compounds. The complete mechanistic picture of this rearrangement process was presented and discussed in terms of the E1cb reaction scheme within the rate-limiting β-elimination step.

  18. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    SciTech Connect

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  19. User manual for AQUASTOR: a computer model for cost analysis of aquifer thermal-energy storage oupled with district-heating or cooling systems. Volume II. Appendices

    SciTech Connect

    Huber, H.D.; Brown, D.R.; Reilly, R.W.

    1982-04-01

A computer model called AQUASTOR was developed for calculating the cost of district heating (cooling) using thermal energy supplied by an aquifer thermal energy storage (ATES) system. The AQUASTOR model can simulate ATES district heating systems using stored hot water or ATES district cooling systems using stored chilled water. AQUASTOR simulates the complete ATES district heating (cooling) system, which consists of two principal parts: the ATES supply system and the district heating (cooling) distribution system. The supply system submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the ATES supply system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. The model combines the technical characteristics of the supply system and the technical characteristics of the distribution system with financial and tax conditions for the entities operating the two systems into one techno-economic model. This provides the flexibility to individually or collectively evaluate the impact of different economic and technical parameters, assumptions, and uncertainties on the cost of providing district heating (cooling) with an ATES system. This volume contains all the appendices, including supply and distribution system cost equations and models, descriptions of predefined residential districts, key equations for the cooling degree-hour methodology, a listing of the sample case output, and Appendix H, which contains the indices for supply input parameters, distribution input parameters, and AQUASTOR subroutines.

  20. Reducing Communication in Algebraic Multigrid Using Additive Variants

    SciTech Connect

    Vassilevski, Panayot S.; Yang, Ulrike Meier

    2014-02-12

Algebraic multigrid (AMG) has proven to be an effective scalable solver on many high performance computers. However, its increasing communication complexity on coarser levels has been shown to seriously impact its performance on computers with high communication cost. Additive AMG variants provide increased parallelism and fewer messages per cycle, but generally exhibit slower convergence. Here we present various new additive variants with convergence rates that are significantly improved compared to the classical additive algebraic multigrid method, and we investigate their potential for decreased communication and improved communication-computation overlap, features that are essential for good performance on future exascale architectures.

  1. Cost-effective pediatric head and body phantoms for computed tomography dosimetry and its evaluation using pencil ion chamber and CT dose profiler.

    PubMed

    Saravanakumar, A; Vaideki, K; Govindarajan, K N; Jayakumar, S; Devanand, B

    2015-01-01

    In the present work, a pediatric head and body phantom was fabricated using polymethyl methacrylate (PMMA) at a low cost when compared to commercially available phantoms for the purpose of computed tomography (CT) dosimetry. The dimensions of head and body phantoms were 10 cm diameter, 15 cm length and 16 cm diameter, 15 cm length, respectively. The dose from a 128-slice CT machine received by the head and body phantom at the center and periphery were measured using a 100 mm pencil ion chamber and 150 mm CT dose profiler (CTDP). Using these values, the weighted computed tomography dose index (CTDIw) and in turn the volumetric CTDI (CTDIv) were calculated for various combinations of tube voltage and current-time product. A similar study was carried out using standard calibrated phantom and the results have been compared with the fabricated ones to ascertain that the performance of the latter is equivalent to that of the former. Finally, CTDIv measured using fabricated and standard phantoms were compared with respective values displayed on the console. The difference between the values was well within the limits specified by Atomic Energy Regulatory Board (AERB), India. These results indicate that the cost-effective pediatric phantom can be employed for CT dosimetry.
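CTDIw and CTDIvol computed above from the center and periphery measurements follow standard definitions, which the abstract does not spell out: a one-third/two-thirds weighting of the center and (mean) periphery readings, then division by the helical pitch. A sketch assuming those standard formulas:

```python
def ctdi_weighted(ctdi_center, ctdi_periphery):
    """Weighted CT dose index from pencil-chamber measurements:
    CTDIw = (1/3) * center + (2/3) * mean(periphery)."""
    mean_periphery = sum(ctdi_periphery) / len(ctdi_periphery)
    return ctdi_center / 3.0 + 2.0 * mean_periphery / 3.0

def ctdi_volume(ctdi_w, pitch):
    """Volumetric CTDI accounts for helical pitch: CTDIvol = CTDIw / pitch."""
    return ctdi_w / pitch
```

For example, a 9 mGy center reading with 12 mGy at four peripheral positions gives CTDIw = 11 mGy, and at pitch 1.375 a CTDIvol of 8 mGy.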

  2. Cost-effective pediatric head and body phantoms for computed tomography dosimetry and its evaluation using pencil ion chamber and CT dose profiler

    PubMed Central

    Saravanakumar, A.; Vaideki, K.; Govindarajan, K. N.; Jayakumar, S.; Devanand, B.

    2015-01-01

    In the present work, a pediatric head and body phantom was fabricated from polymethyl methacrylate (PMMA) at a low cost compared with commercially available phantoms for the purpose of computed tomography (CT) dosimetry. The dimensions of the head and body phantoms were 10 cm diameter, 15 cm length and 16 cm diameter, 15 cm length, respectively. The doses received at the center and periphery of the head and body phantoms from a 128-slice CT machine were measured using a 100 mm pencil ion chamber and a 150 mm CT dose profiler (CTDP). Using these values, the weighted computed tomography dose index (CTDIw) and, in turn, the volumetric CTDI (CTDIv) were calculated for various combinations of tube voltage and current-time product. A similar study was carried out using a standard calibrated phantom, and the results were compared with those from the fabricated phantoms to ascertain that the performance of the latter is equivalent to that of the former. Finally, the CTDIv values measured using the fabricated and standard phantoms were compared with the respective values displayed on the console. The differences were well within the limits specified by the Atomic Energy Regulatory Board (AERB), India. These results indicate that the cost-effective pediatric phantom can be employed for CT dosimetry. PMID:26500404
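The CTDI quantities named in these abstracts combine the center and peripheral pencil-chamber readings with a standard weighting, and divide by pitch for the volumetric index. A minimal sketch, using hypothetical readings rather than the paper's measurements:

```python
def ctdi_w(ctdi_center, ctdi_periphery):
    """Weighted CTDI (mGy) from 100 mm pencil-chamber readings at the
    phantom center and periphery, using the standard 1/3 + 2/3 weighting."""
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

def ctdi_vol(ctdi_w_value, pitch):
    """Volumetric CTDI: weighted CTDI divided by the helical pitch."""
    return ctdi_w_value / pitch

# Hypothetical readings (mGy), not the paper's measured values
center, periphery = 20.0, 22.0
w = ctdi_w(center, periphery)
print(round(w, 2))                   # 21.33
print(round(ctdi_vol(w, 1.0), 2))    # 21.33 at pitch 1
```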

  3. Construction and field test of a programmable and self-cleaning auto-sampler controlled by a low-cost one-board computer

    NASA Astrophysics Data System (ADS)

    Stadler, Philipp; Farnleitner, Andreas H.; Zessner, Matthias

    2016-04-01

    This presentation describes in depth how a low-cost micro-computer was used to substantially improve established measuring systems through the construction and implementation of a purpose-built complementary device for on-site sample pretreatment. A fully automated on-site device was developed and field-tested that enables water sampling with simultaneous filtration as well as an effective cleaning procedure for the device's components. The described auto-sampler is controlled by a low-cost one-board computer and designed for sample pretreatment, with minimal sample alteration, to meet the requirements of on-site measurement devices that cannot handle coarse suspended solids within the measurement procedure or cycle. The automated sample pretreatment was tested for over one year for rapid, on-site enzymatic activity (beta-D-glucuronidase, GLUC) determination in sediment-laden stream water. The formerly used proprietary sampling set-up was assumed to significantly damp the measurement signal owing to its susceptibility to clogging and to debris and biofilm accumulation. Results show that the installation of the developed apparatus considerably enhanced the error-free running time of connected measurement devices and increased the measurement accuracy to a previously unmatched quality.

  4. Development and implementation of a low-cost phantom for quality control in cone beam computed tomography.

    PubMed

    Batista, W O; Navarro, M V T; Maia, A F

    2013-12-01

    A phantom for quality control in cone beam computed tomography (CBCT) scanners was designed and constructed, and a methodology for testing was developed. The phantom had a polymethyl methacrylate structure filled with water and plastic objects that allowed the assessment of parameters related to quality control. The phantom allowed the evaluation of essential parameters in CBCT as well as the evaluation of linear and angular dimensions. The plastics used in the phantom were chosen so that their density and linear attenuation coefficient were similar to those of human facial structures. Three types of CBCT equipment, with two different technological concepts, were evaluated. The results of the assessment of the accuracy of linear and angular dimensions agreed with the existing standards. However, other parameters such as computed tomography number accuracy, uniformity and high-contrast detail did not meet the tolerances established in current regulations or the manufacturer's specifications. The results demonstrate the importance of establishing specific protocols and phantoms, which meet the specificities of CBCT. The practicality of implementation, the quality control test results for the proposed phantom and the consistency of the results using different equipment demonstrate its adequacy.

  5. Performance, throughput, and cost of in-home training for the Army Reserve: Using asynchronous computer conferencing as an alternative to resident training

    SciTech Connect

    Hahn, H.A.; Ashworth, R.L. Jr.; Phelps, R.H.; Byers, J.C.

    1990-01-01

    Asynchronous computer conferencing (ACC) was investigated as an alternative to resident training for the Army Reserve Component (RC). Specifically, the goals were to (1) evaluate the performance and throughput of ACC as compared with traditional Resident School instruction and (2) determine the cost-effectiveness of developing and implementing ACC. Fourteen RC students took a module of the Army Engineer Officer Advanced Course (EOAC) via ACC. Course topics included Army doctrine, technical engineering subjects, leadership, and presentation skills. Resident content was adapted for presentation via ACC. The programs of instruction for ACC and the equivalent resident course were identical; only the media used for presentation were changed. Performance on tests, homework, and practical exercises; self-assessments of learning; throughput; and cost data were the measures of interest. Comparison data were collected on RC students taking the course in residence. Results indicated that there were no performance differences between the two groups. Students taking the course via ACC perceived greater learning benefit than did students taking the course in residence. Resident throughput was superior to ACC throughput, both in terms of the number of students completing the course and the time to complete it. Despite this, ACC was more cost-effective than resident training.

  6. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
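The variogram analogy underlying VARS can be sketched with a Monte Carlo estimate of the directional variogram of a response surface, gamma(h) = 0.5 E[(f(x + h e_i) - f(x))^2]. This is a simplified illustration of the core idea only, not the STAR-VARS algorithm or its star-based sampling; the test function and sample counts are arbitrary.

```python
import numpy as np

def factor_variogram(f, factor, h_values, n_factors, n_samples=2000, seed=0):
    """Empirical directional variogram gamma(h) = 0.5*E[(f(x+h*e_i)-f(x))^2]
    for one factor i, estimated by sampling the unit hypercube. A rapidly
    rising gamma at small h marks a factor the response is highly sensitive
    to -- the analogy to variogram analysis that VARS builds on."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, size=(n_samples, n_factors))
    out = []
    for h in h_values:
        Xh = X.copy()
        Xh[:, factor] = np.clip(Xh[:, factor] + h, 0.0, 1.0)
        out.append(0.5 * np.mean((f(Xh) - f(X)) ** 2))
    return np.array(out)

# Toy response surface: factor 0 drives the response, factor 1 barely does
f = lambda X: np.sin(4 * np.pi * X[:, 0]) + 0.1 * X[:, 1]

g0 = factor_variogram(f, 0, [0.05, 0.1, 0.2], n_factors=2)
g1 = factor_variogram(f, 1, [0.05, 0.1, 0.2], n_factors=2)
print(g0[0] > g1[0])  # True: factor 0 dominates at small perturbations
```

Derivative-based (Morris) behavior corresponds to gamma at very small h, and variance-based (Sobol) behavior to gamma at large h, which is the sense in which both are special cases of the variogram view.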

  7. GME: at what cost?

    PubMed

    Young, David W

    2003-11-01

    Current computing methods impede determining the real cost of graduate medical education. However, a more accurate estimate could be obtained if policy makers would allow for the application of basic cost-accounting principles, including consideration of department-level costs, unbundling of joint costs, and other factors.

  8. Troubleshooting Costs

    NASA Astrophysics Data System (ADS)

    Kornacki, Jeffrey L.

    Seventy-six million cases of foodborne disease occur each year in the United States alone. Medical and lost productivity costs of the most common pathogens are estimated to be $5.6-$9.4 billion. Product recalls, whether from foodborne illness or spoilage, result in added costs to manufacturers in a variety of ways. These may include expenses associated with lawsuits from real or allegedly stricken individuals and lawsuits from shorted customers. Other costs include those associated with efforts involved in finding the source of the contamination and eliminating it and include time when lines are shut down and therefore non-productive, additional non-routine testing, consultant fees, time and personnel required to overhaul the entire food safety system, lost market share to competitors, and the cost associated with redesign of the factory and redesign or acquisition of more hygienic equipment. The cost associated with an effective quality assurance plan is well worth the effort to prevent the situations described.

  9. Cost of energy evaluation

    NASA Technical Reports Server (NTRS)

    Hasbrouck, T. M.

    1979-01-01

    The estimated cost per kilowatt hour, the wind resources in the utility's service area, and the reliability of the units are considered in computing the cost of energy of the wind turbine generator system.

  10. The Cambridge Face Tracker: Accurate, Low Cost Measurement of Head Posture Using Computer Vision and Face Recognition Software

    PubMed Central

    Thomas, Peter B. M.; Baltrušaitis, Tadas; Robinson, Peter; Vivian, Anthony J.

    2016-01-01

    Purpose We validate a video-based method of head posture measurement. Methods The Cambridge Face Tracker uses neural networks (constrained local neural fields) to recognize facial features in video. The relative position of these facial features is used to calculate head posture. First, we assess the accuracy of this approach against videos in three research databases where each frame is tagged with a precisely measured head posture. Second, we compare our method to a commercially available mechanical device, the Cervical Range of Motion device: four subjects each adopted 43 distinct head postures that were measured using both methods. Results The Cambridge Face Tracker achieved confident facial recognition in 92% of the approximately 38,000 frames of video from the three databases. The respective mean error in absolute head posture was 3.34°, 3.86°, and 2.81°, with a median error of 1.97°, 2.16°, and 1.96°. The accuracy decreased with more extreme head posture. Comparing The Cambridge Face Tracker to the Cervical Range of Motion Device gave correlation coefficients of 0.99 (P < 0.0001), 0.96 (P < 0.0001), and 0.99 (P < 0.0001) for yaw, pitch, and roll, respectively. Conclusions The Cambridge Face Tracker performs well under real-world conditions and within the range of normally-encountered head posture. It allows useful quantification of head posture in real time or from precaptured video. Its performance is similar to that of a clinically validated mechanical device. It has significant advantages over other approaches in that subjects do not need to wear any apparatus, and it requires only low cost, easy-to-setup consumer electronics. Translational Relevance Noncontact assessment of head posture allows more complete clinical assessment of patients, and could benefit surgical planning in future. PMID:27730008

  11. A Low-Cost, Computer-Interfaced Drawing Pad for fMRI Studies of Dysgraphia and Dyslexia

    PubMed Central

    Reitz, Frederick; Richards, Todd; Wu, Kelvin; Boord, Peter; Askren, Mary; Lewis, Thomas; Berninger, Virginia

    2013-01-01

    We have developed a pen and writing tablet for use by subjects during fMRI scanning. The pen consists of two jacketed, multi-mode optical fibers routed to the tip of a hollowed-out ball-point pen. The pen has been further modified by addition of a plastic plate to maintain a perpendicular pen-tablet orientation. The tablet is simply a non-metallic frame holding a paper print of continuously varying color gradients. The optical fibers are routed out of the MRI bore to a light-tight box in an adjacent control room. Within the box, light from a high intensity LED is coupled into one of the fibers, while the other fiber abuts a color sensor. Light from the LED exits the pen tip, illuminating a small spot on the tablet, and the resulting reflected light is routed to the color sensor. Given a lookup table of position for each color on the tablet, the coordinates of the pen on the tablet may be displayed and digitized in real-time. While simple and inexpensive, the system achieves sufficient resolution to grade writing tasks testing dysgraphic and dyslexic phenomena. PMID:23595203
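The position-from-color step described above can be sketched as a nearest-neighbor lookup against the printed gradient. The encoding below (one channel varying with x, another with y) and all dimensions are hypothetical illustrations; the paper's actual gradient print is not specified here.

```python
import numpy as np

# Hypothetical tablet: one "color" channel encodes x, another encodes y,
# mimicking a continuously varying printed color gradient.
W, H = 256, 256
xs, ys = np.meshgrid(np.arange(W), np.arange(H))
tablet = np.stack([xs / (W - 1), ys / (H - 1)], axis=-1)  # (H, W, 2)

# Lookup table: every printed color alongside its tablet coordinate.
colors = tablet.reshape(-1, 2)
coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)

def pen_position(measured_color):
    """Nearest-color lookup: return the (x, y) tablet coordinate whose
    printed color best matches the sensor reading."""
    i = np.argmin(np.sum((colors - measured_color) ** 2, axis=1))
    return tuple(coords[i])

# A sensor reading taken over the point (100, 200), with slight noise
reading = tablet[200, 100] + np.random.default_rng(1).normal(0, 0.001, 2)
print(pen_position(reading))  # close to (100, 200)
```

In practice the lookup table would be built by imaging the actual print, and the resolution is bounded by how distinguishable neighboring colors are to the sensor under noise.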

  12. Ramjet cost estimating handbook

    NASA Technical Reports Server (NTRS)

    Emmons, H. T.; Norwood, D. L.; Rasmusen, J. E.; Reynolds, H. E.

    1978-01-01

    Research conducted under Air Force Contract F33615-76-C-2043 to generate cost data and to establish a cost methodology that accurately predicts the production costs of ramjet engines is presented. The cost handbook contains a description of over one hundred and twenty-five different components which are defined as baseline components. The cost estimator selects from the handbook the appropriate components to fit his ramjet assembly, computes the cost from cost computation data sheets in the handbook, and totals all of the appropriate cost elements to arrive at the total engine cost. The methodology described in the cost handbook addresses many different ramjet types from simple podded arrangements of the liquid fuel ramjet to the more complex integral rocket/ramjet configurations including solid fuel ramjets and solid ducted rockets. It is applicable to a range of sizes from 6 in diameter to 18 in diameter and to production quantities up to 5000 engines.

  13. The Frozen Cage Model: A Computationally Low-Cost Tool for Predicting the Exohedral Regioselectivity of Cycloaddition Reactions Involving Endohedral Metallofullerenes.

    PubMed

    Garcia-Borràs, Marc; Romero-Rivera, Adrian; Osuna, Sílvia; Luis, Josep M; Swart, Marcel; Solà, Miquel

    2012-05-01

    Functionalization of endohedral metallofullerenes (EMFs) is an active line of research that is important for obtaining nanomaterials with unique properties that might be used in a variety of fields, ranging from molecular electronics to biomedical applications. Such functionalization is commonly achieved by means of cycloaddition reactions. The scarcity of both experimental and theoretical studies analyzing the exohedral regioselectivity of cycloaddition reactions involving EMFs translates into a poor understanding of the EMF reactivity. From a theoretical point of view, the main obstacle is the high computational cost associated with this kind of studies. To alleviate the situation, we propose an approach named the frozen cage model (FCM) based on single point energy calculations at the optimized geometries of the empty cage products. The FCM represents a fast and computationally inexpensive way to perform accurate qualitative predictions of the exohedral regioselectivity of cycloaddition reactions in EMFs. Analysis of the Dimroth approximation, the activation strain or distortion/interaction model, and the noncluster energies in the Diels-Alder cycloaddition of s-cis-1,3-butadiene to X@D3h-C78 (X = Ti2C2, Sc3N, and Y3N) EMFs provides a justification of the method.

  14. Balancing act of template bank construction: Inspiral waveform template banks for gravitational-wave detectors and optimizations at fixed computational cost

    NASA Astrophysics Data System (ADS)

    Keppel, Drew

    2013-06-01

    Gravitational-wave searches for signals from inspiraling compact binaries have relied on matched filtering banks of waveforms (called template banks) to try to extract the signal waveforms from the detector data. These template banks have been constructed using four main considerations, the region of parameter space of interest, the sensitivity of the detector, the matched filtering bandwidth, and the sensitivity one is willing to lose due to the granularity of template placement, the latter of which is governed by the minimal match. In this work we describe how the choice of the lower frequency cutoff, the lower end of the matched filter frequency band, can be optimized for detection. We also show how the minimal match can be optimally chosen in the case of limited computational resources. These techniques are applied to searches for binary neutron star signals that have been previously performed when analyzing Initial LIGO and Virgo data and will be performed analyzing Advanced LIGO and Advanced Virgo data using the expected detector sensitivity. By following the algorithms put forward here, the volume sensitivity of these searches is predicted to improve without increasing the computational cost of performing the search.
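The role of the minimal match can be made concrete with the usual back-of-envelope bound: template-signal mismatch scales the recovered SNR, hence the sensitive distance, and the sensitive volume scales with distance cubed. This sketch shows only that standard rule of thumb, not the fixed-computational-cost optimization the paper derives.

```python
def max_event_loss(minimal_match):
    """Worst-case fraction of detectable signals lost to template-bank
    granularity: recovered SNR is reduced by at most the minimal match,
    sensitive distance scales with SNR, and volume with distance cubed,
    so the retained fraction of events is at least minimal_match**3."""
    return 1.0 - minimal_match ** 3

for mm in (0.90, 0.95, 0.97, 0.99):
    print(f"MM = {mm:.2f}: up to {100 * max_event_loss(mm):.1f}% of events lost")
```

A higher minimal match recovers more events per template but requires denser template placement, which is exactly the computational-cost trade-off the abstract refers to.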

  15. Food additives

    PubMed Central

    Spencer, Michael

    1974-01-01

    Food additives are discussed from the food technology point of view. The reasons for their use are summarized: (1) to protect food from chemical and microbiological attack; (2) to even out seasonal supplies; (3) to improve their eating quality; (4) to improve their nutritional value. The various types of food additives are considered, e.g. colours, flavours, emulsifiers, bread and flour additives, preservatives, and nutritional additives. The paper concludes with consideration of those circumstances in which the use of additives is (a) justified and (b) unjustified. PMID:4467857

  16. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    NASA Astrophysics Data System (ADS)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called 'Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  17. Program Demand Cost Model for Alaskan Schools. Eighth Edition.

    ERIC Educational Resources Information Center

    Morgan, Michael; Mearig, Tim; Coffee, Nathan

    The State of Alaska Department of Education has created a handbook for establishing budgets for the following three types of construction projects: new schools or additions; renovations; and combined new work and renovations. The handbook supports a demand cost model computer program that includes detailed renovation cost data, itemized by…

  18. Breaking Barriers in Polymer Additive Manufacturing

    SciTech Connect

    Love, Lonnie J; Duty, Chad E; Post, Brian K; Lind, Randall F; Lloyd, Peter D; Kunc, Vlastimil; Peter, William H; Blue, Craig A

    2015-01-01

    Additive Manufacturing (AM) enables the creation of complex structures directly from a computer-aided design (CAD). However, limitations prevent the technology from realizing its full potential: AM has been criticized for being slow and expensive, with limited build size. Oak Ridge National Laboratory (ORNL) has developed a large-scale AM system that improves upon each of these areas by more than an order of magnitude. The Big Area Additive Manufacturing (BAAM) system directly converts low-cost pellets into a large, three-dimensional part at a rate exceeding 25 kg/h. By breaking these traditional barriers, it is possible for polymer AM to penetrate new manufacturing markets.

  19. Magnetic fusion energy and computers

    SciTech Connect

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups.

  20. 24 CFR 908.108 - Cost.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... computer hardware or software, or both, the cost of contracting for those services, or the cost of... operating budget. At the HA's option, the cost of the computer software may include service contracts...

  1. 24 CFR 908.108 - Cost.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... computer hardware or software, or both, the cost of contracting for those services, or the cost of... operating budget. At the HA's option, the cost of the computer software may include service contracts...

  2. Additivity of Factor Effects in Reading Tasks Is Still a Challenge for Computational Models: Reply to Ziegler, Perry, and Zorzi (2009)

    ERIC Educational Resources Information Center

    Besner, Derek; O'Malley, Shannon

    2009-01-01

    J. C. Ziegler, C. Perry, and M. Zorzi (2009) have claimed that their connectionist dual process model (CDP+) can simulate the data reported by S. O'Malley and D. Besner. Most centrally, they have claimed that the model simulates additive effects of stimulus quality and word frequency on the time to read aloud when words and nonwords are randomly…

  3. Computer modelling integrated with micro-CT and material testing provides additional insight to evaluate bone treatments: Application to a beta-glycan derived whey protein mice model.

    PubMed

    Sreenivasan, D; Tu, P T; Dickinson, M; Watson, M; Blais, A; Das, R; Cornish, J; Fernandez, J

    2016-01-01

    The primary aim of this study was to evaluate the influence of a whey protein diet on computationally predicted mechanical strength of murine bones in both trabecular and cortical regions of the femur. There was no significant influence on mechanical strength in cortical bone observed with increasing whey protein treatment, consistent with cortical tissue mineral density (TMD) and bone volume changes observed. Trabecular bone showed a significant decline in strength with increasing whey protein treatment when nanoindentation derived Young's moduli were used in the model. When microindentation, micro-CT phantom density or normalised Young's moduli were included in the model a non-significant decline in strength was exhibited. These results for trabecular bone were consistent with both trabecular bone mineral density (BMD) and micro-CT indices obtained independently. The secondary aim of this study was to characterise the influence of different sources of Young's moduli on computational prediction. This study aimed to quantify the predicted mechanical strength in 3D from these sources and evaluate if trends and conclusions remained consistent. For cortical bone, predicted mechanical strength behaviour was consistent across all sources of Young's moduli. There was no difference in treatment trend observed when Young's moduli were normalised. In contrast, trabecular strength due to whey protein treatment significantly reduced when material properties from nanoindentation were introduced. Other material property sources were not significant but emphasised the strength trend over normalised material properties. This shows strength at the trabecular level was attributed to both changes in bone architecture and material properties.

  4. New Federal Cost Accounting Regulations

    ERIC Educational Resources Information Center

    Wolff, George J.; Handzo, Joseph J.

    1973-01-01

    Discusses a new set of indirect cost accounting procedures which must be followed by school districts wishing to recover any indirect costs of administering federal grants and contracts. Also discusses the amount of indirect costs that may be recovered, computing indirect costs, classifying project costs, and restricted grants. (Author/DN)

  5. CD-ROM & Computer Software.

    ERIC Educational Resources Information Center

    Mews, Alison

    1997-01-01

    Reviews six CD-ROMs and computer software packages that can serve as resource materials for education. In addition to costs, hardware requirements, and thematic links, includes features such as interface and screen design, ease of access, installation, educational uses, animation and sound, educational challenges and benefits, and student…

  6. Do We Really Need Additional Contrast-Enhanced Abdominal Computed Tomography for Differential Diagnosis in Triage of Middle-Aged Subjects With Suspected Biliary Pain

    PubMed Central

    Hwang, In Kyeom; Lee, Yoon Suk; Kim, Jaihwan; Lee, Yoon Jin; Park, Ji Hoon; Hwang, Jin-Hyeok

    2015-01-01

    Abstract Enhanced computed tomography (CT) is widely used for evaluating acute biliary pain in the emergency department (ED). However, concern about radiation exposure from CT has also increased. We investigated the usefulness of pre-contrast CT for differential diagnosis in middle-aged subjects with suspected biliary pain. A total of 183 subjects, who visited the ED for suspected biliary pain from January 2011 to December 2012, were included. Retrospectively, pre-contrast phase and multiphase CT findings were reviewed and the detection rate of findings suggesting disease requiring significant treatment by noncontrast CT (NCCT) was compared with cases detected by multiphase CT. Approximately 70% of total subjects had a significant condition, including 1 case of gallbladder cancer and 126 (68.8%) cases requiring intervention (122 biliary stone-related diseases, 3 liver abscesses, and 1 liver hemangioma). The rate of overlooking malignancy without contrast enhancement was calculated to be 0% to 1.5%. Biliary stones and liver space-occupying lesions were found equally on NCCT and multiphase CT. Calculated probable rates of overlooking acute cholecystitis and biliary obstruction were maximally 6.8% and 4.2% respectively. Incidental significant finding unrelated with pain consisted of 1 case of adrenal incidentaloma, which was also observed in NCCT. NCCT might be sufficient to detect life-threatening or significant disease requiring early treatment in young adults with biliary pain. PMID:25700321

  7. Computer Recreations.

    ERIC Educational Resources Information Center

    Dewdney, A. K.

    1989-01-01

    Reviews the performance of computer programs for writing poetry and prose, including MARK V. SHANEY, MELL, POETRY GENERATOR, THUNDER THOUGHT, and ORPHEUS. Discusses the writing principles of the programs. Provides additional information on computer magnification techniques. (YP)

  8. Phosphazene additives

    DOEpatents

    Harrup, Mason K; Rollins, Harry W

    2013-11-26

    An additive comprising a phosphazene compound that has at least two reactive functional groups and at least one capping functional group bonded to phosphorus atoms of the phosphazene compound. One of the at least two reactive functional groups is configured to react with cellulose and the other of the at least two reactive functional groups is configured to react with a resin, such as an amine resin or a polycarboxylic acid resin. The at least one capping functional group is selected from the group consisting of a short chain ether group, an alkoxy group, or an aryloxy group. Also disclosed are an additive-resin admixture, a method of treating a wood product, and a wood product.

  9. Computer simulation for the growing probability of additional offspring with an advantageous reversal allele in the decoupled continuous-time mutation-selection model

    NASA Astrophysics Data System (ADS)

    Gill, Wonpyong

    2016-01-01

    This study calculated the growing probability of additional offspring with the advantageous reversal allele in an asymmetric sharply-peaked landscape using the decoupled continuous-time mutation-selection model. The growing probability was calculated for various population sizes, N, sequence lengths, L, selective advantages, s, fitness parameters, k, and measuring parameters, C. The saturated growing probability in the stochastic region was approximately the effective selective advantage, s*, when C≫1/Ns* and s*≪1. The present study suggests that the growing probability in the stochastic region in the decoupled continuous-time mutation-selection model can be described using the theoretical formula for the growing probability in the Moran two-allele model. The selective advantage ratio, which is the ratio of the effective selective advantage to the selective advantage, does not depend on the population size, selective advantage, measuring parameter, or fitness parameter; instead, it decreases with increasing sequence length.
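For reference, the fixation (growing) probability in the standard Moran two-allele model that the abstract compares against has a well-known closed form. This is the textbook formula only, with the paper's effective selective advantage s* playing the role of s; the mapping to s* is the paper's contribution and is not reproduced here.

```python
def moran_fixation_probability(N, s):
    """Fixation probability of one mutant with relative fitness r = 1 + s
    in a haploid Moran population of size N. Reduces to 1/N for s = 0 and
    approaches s/(1+s) when N*s >> 1."""
    if s == 0.0:
        return 1.0 / N
    r = 1.0 + s
    return (1.0 - 1.0 / r) / (1.0 - r ** (-N))

print(round(moran_fixation_probability(1000, 0.01), 4))  # 0.0099
print(moran_fixation_probability(1000, 0.0))             # 0.001
```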

  10. Indirect Costs of Health Research--How They are Computed, What Actions are Needed. Report by the Comptroller General of the United States.

    ERIC Educational Resources Information Center

    Comptroller General of the U.S., Washington, DC.

    A review by the General Accounting Office of various aspects of indirect costs associated with federal health research grants is presented. After an introduction detailing the scope of the review and defining indirect costs and federal participation, the report focuses on the causes of the rapid increase of indirect costs. Among findings was that…

  11. The Hidden Costs of Owning a Microcomputer.

    ERIC Educational Resources Information Center

    McDole, Thomas L.

    Before purchasing computer hardware, individuals must consider the costs associated with the setup and operation of a microcomputer system. Included among the initial costs of purchasing a computer are the costs of the computer, one or more disk drives, a monitor, and a printer as well as the costs of such optional peripheral devices as a plotter…

  12. ADVANCED COMPUTATIONAL METHODS IN DOSE MODELING: APPLICATION OF COMPUTATIONAL BIOPHYSICAL TRANSPORT, COMPUTATIONAL CHEMISTRY, AND COMPUTATIONAL BIOLOGY

    EPA Science Inventory

    Computational toxicology (CompTox) leverages the significant gains in computing power and computational techniques (e.g., numerical approaches, structure-activity relationships, bioinformatics) realized over the last few years, thereby reducing costs and increasing efficiency i...

  13. Systems engineering and integration: Cost estimation and benefits analysis

    NASA Technical Reports Server (NTRS)

    Dean, ED; Fridge, Ernie; Hamaker, Joe

    1990-01-01

Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analysis process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are

  14. Reducing Communication in Algebraic Multigrid Using Additive Variants

    DOE PAGES

    Vassilevski, Panayot S.; Yang, Ulrike Meier

    2014-02-12

Algebraic multigrid (AMG) has proven to be an effective scalable solver on many high performance computers. However, its increasing communication complexity on coarser levels has been shown to seriously impact its performance on computers with high communication cost. Additive AMG variants provide increased parallelism and fewer messages per cycle, but generally exhibit slower convergence. Here we present several new additive variants with convergence rates that are significantly improved over the classical additive algebraic multigrid method, and investigate their potential for decreased communication and improved communication-computation overlap, features that are essential for good performance on future exascale architectures.
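The additive idea described above, computing the smoother correction and the coarse-grid correction from the same residual so that both can be evaluated concurrently, can be sketched on a toy two-level problem. This is a hypothetical pure-Python 1D Poisson example; the paper's actual AMG variants are considerably more elaborate:

```python
def matvec(A, x):
    """Dense matrix-vector product (pure Python, for illustration)."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def additive_two_level(A, b, x, P, coarse_solve, omega=0.7):
    """One additive two-level cycle: the weighted-Jacobi correction and
    the coarse-grid correction are both formed from the SAME residual,
    so they can run in parallel (one communication phase, unlike the
    multiplicative cycle, which applies them sequentially)."""
    n, nc = len(b), len(P[0])
    r = [b[i] - y for i, y in enumerate(matvec(A, x))]
    s = [omega * r[i] / A[i][i] for i in range(n)]                    # smoother
    rc = [sum(P[i][k] * r[i] for i in range(n)) for k in range(nc)]   # restrict
    ec = coarse_solve(rc)                                             # coarse solve
    c = [sum(P[i][k] * ec[k] for k in range(nc)) for i in range(n)]   # prolong
    return [x[i] + s[i] + c[i] for i in range(n)]

# Toy problem: 4-point 1D Poisson with piecewise-constant interpolation.
A = [[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 2]]
P = [[1, 0], [1, 0], [0, 1], [0, 1]]
# Galerkin coarse operator P^T A P = [[2, -1], [-1, 2]]; apply its inverse.
coarse_solve = lambda rc: [(2 * rc[0] + rc[1]) / 3.0, (rc[0] + 2 * rc[1]) / 3.0]

x1 = additive_two_level(A, [1.0, 1.0, 1.0, 1.0], [0.0] * 4, P, coarse_solve)
```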

  15. Automated Water Analyser Computer Supported System (AWACSS) Part II: Intelligent, remote-controlled, cost-effective, on-line, water-monitoring measurement system.

    PubMed

    Tschmelak, Jens; Proll, Guenther; Riedt, Johannes; Kaiser, Joachim; Kraemmer, Peter; Bárzaga, Luis; Wilkinson, James S; Hua, Ping; Hole, J Patrick; Nudd, Richard; Jackson, Michael; Abuknesha, Ram; Barceló, Damià; Rodriguez-Mozaz, Sara; de Alda, Maria J López; Sacher, Frank; Stien, Jan; Slobodník, Jaroslav; Oswald, Peter; Kozmenko, Helena; Korenková, Eva; Tóthová, Lívia; Krascsenits, Zoltan; Gauglitz, Guenter

    2005-02-15

A novel analytical system AWACSS (Automated Water Analyser Computer Supported System) based on immunochemical technology has been evaluated that can measure several organic pollutants at low nanogram per litre level in a single few-minutes analysis without any prior sample pre-concentration or pre-treatment steps. Having in mind actual needs of water-sector managers related to the implementation of the Drinking Water Directive (DWD) [98/83/EC, 1998. Council Directive (98/83/EC) of 3 November 1998 relating to the quality of water intended for human consumption. Off. J. Eur. Commun. L330, 32-54] and Water Framework Directive (WFD) [2000/60/EC, 2000. Directive 2000/60/EC of the European Parliament and of the Council of 23 October 2000 establishing a framework for Community action in the field of water policy. Off. J. Eur. Commun. L327, 1-72], drinking, ground, surface, and waste waters were major media used for the evaluation of the system performance. The first part article gave the reader an overview of the aims and scope of the AWACSS project as well as details about basic technology, immunoassays, software, and networking developed and utilised within the research project. The second part reports on the system performance, first real sample measurements, and an international collaborative trial (inter-laboratory tests) to compare the biosensor with conventional analytical methods. The systems' capability for analysing a wide range of environmental organic micro-pollutants, such as modern pesticides, endocrine disrupting compounds and pharmaceuticals in surface, ground, drinking and waste water is shown. In addition, a protocol using reconstitution of extracts of solid samples, developed and applied for analysis of river sediments and food samples, is presented. Finally, the overall performance of the AWACSS system in comparison to the conventional analytical techniques, which included liquid and gas chromatographic systems with diode-array UV and mass

  16. Inventory of available methods and processes for assessing the benefits, costs, and impacts of demand-side options: Volume 3 -- Description and review of computer tools for integrated planning. Final report

    SciTech Connect

    Heffner, G.; Johansen, S.; Limaye, D.; Rose, M.; McDonald, C.

    1998-03-01

The purpose of this review is to describe computer models and tools that are being used in different countries by utilities and governments to address various issues related to the planning, analysis, and forecasting of the benefits, costs, and impacts of demand-side management (DSM) options. A wide range of models are surveyed, including those for: load forecasting (energy and peak, load shapes, and consumption); identification of DSM options; identifying planning/assessment criteria; screening of DSM options; assessing technical and economic DSM potential; collecting data on customer needs and characteristics; market assessment and market penetration analysis; estimating achievable DSM potential; designing DSM programs; measuring and evaluating DSM program impacts; performing benefit/cost analysis; conducting production costing and capacity expansion analysis of supply options; and integration of supply and demand options. Many of the tools described are used for several of these activities. Most tools are commercially available.

  17. Results of an Experimental Program to Provide Low Cost Computer Searches of the NASA Information File to University Graduate Students in the Southeast. Final Report.

    ERIC Educational Resources Information Center

    Smetana, Frederick O.; Phillips, Dennis M.

    In an effort to increase dissemination of scientific and technological information, a program was undertaken whereby graduate students in science and engineering could request a computer-produced bibliography and/or abstracts of documents identified by the computer. The principal resource was the National Aeronautics and Space Administration…

  18. Cost-effectiveness of colorectal cancer screening - an overview.

    PubMed

    Lansdorp-Vogelaar, Iris; Knudsen, Amy B; Brenner, Hermann

    2010-08-01

    There are several modalities available for a colorectal cancer (CRC) screening program. When determining which CRC screening program to implement, the costs of such programs should be considered in comparison to the health benefits they are expected to provide. Cost-effectiveness analysis provides a tool to do this. In this paper we review the evidence on the cost-effectiveness of CRC screening. Published studies universally indicate that when compared with no CRC screening, all screening modalities provide additional years of life at a cost that is deemed acceptable by most industrialized nations. Many recent studies even find CRC screening to be cost-saving. However, when the alternative CRC screening strategies are compared against each other in an incremental cost-effectiveness analysis, no single optimal strategy emerges across the studies. There is consensus that the new technologies of stool DNA testing, computed tomographic colonography and capsule endoscopy are not yet cost-effective compared with the established CRC screening tests.
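The incremental cost-effectiveness comparison described above reduces to a ratio of cost and effect differences between strategies; a minimal sketch with hypothetical numbers (no figures here are from the review):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of health effect (e.g. dollars per life-year gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical comparison of a screening strategy against no screening
# (costs in $/person, effects in life-years/person). A strategy that is
# cheaper AND more effective than its comparator is cost-saving.
ratio = icer(cost_new=2500.0, effect_new=20.10,
             cost_old=2000.0, effect_old=20.05)
```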

  19. GeoComputation 2009

    SciTech Connect

    Xue, Yong; Hoffman, Forrest M; Liu, Dingsheng

    2009-01-01

    The tremendous computing requirements of today's algorithms and the high costs of high-performance supercomputers drive us to share computing resources. The emerging computational Grid technologies are expected to make feasible the creation of a computational environment handling many PetaBytes of distributed data, tens of thousands of heterogeneous computing resources, and thousands of simultaneous users from multiple research institutions.

  20. Coping with distributed computing

    SciTech Connect

    Cormell, L.

    1992-09-01

    The rapid increase in the availability of high performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desk top is well-known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent, he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by providing some examples of the approaches taken at various HEP institutions. In addition, a brief review of commercial directions or products for distributed computing and management will be given.

  1. FeO2/MgO(1 0 0) supported cluster: Computational pursual for a low-cost and low-temperature CO nanocatalyst

    NASA Astrophysics Data System (ADS)

    Zamora, A. Y.; Reveles, J. U.; Mejia-Olvera, R.; Baruah, T.; Zope, R. R.

    2014-09-01

CO oxidation of only 0.23 eV. Regarding the CO catalytic activity of iron oxide species at low temperatures, it has been experimentally observed that thin oxide films on metals may indeed exhibit greatly enhanced catalytic activity compared to the underlying metal substrate under the same conditions [24]. In addition to the above studies, low-temperature CO oxidation over Ag-supported ultrathin MgO films was recently reported [17]. In this case, the activation barrier (0.7 eV) was lower than the corresponding barrier of CO oxidation on Pt(1 1 1) of 0.9 eV. The gas-phase reaction (½O2 + CO → CO2) was calculated to present an overall exothermicity of 3.2 eV. Importantly, this study showed the possibility of generating a catalyst in which the CO adsorption energy of only 0.4 eV is low enough to prevent CO poisoning, therefore enabling a low-temperature CO oxidation route and addressing the cold-start problem [25]. Despite the above-mentioned studies, the development of active and stable catalysts, without noble metals, for low-temperature CO oxidation under an ambient atmosphere remains a significant challenge. Earlier reports, as mentioned above, indicate that Fe2O3 is the most active iron oxide surface toward CO oxidation at high temperatures (∼300 °C) [8]. Furthermore, a number of theoretical and experimental cluster studies have also shown that selected iron oxide compositions and charge states are the most reactive toward CO oxidation, i.e. FeO2, Fe2O3, FeO2-, Fe2O3-, FeO+, FeO2+, Fe2O+, Fe2O2+ and Fe2O3+ [26,27]. The aim of the proposed work is to carry out a detailed investigation that will provide information about the electronic, geometrical, and catalytic properties of the iron oxide FeO2 cluster adsorbed on defect-free MgO(1 0 0) surfaces in the quest for a low-cost and low-temperature CO nanocatalyst. Note that thin oxide films have been found more active at low temperature [24] than the iron oxide surfaces [14]. Our objective is to show

  2. Magnetic-fusion energy and computers

    SciTech Connect

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups.

  3. User manual for GEOCITY: a computer model for cost analysis of geothermal district-heating-and-cooling systems. Volume I. Main text

    SciTech Connect

    Huber, H.D.; Fassbender, L.L.; Bloomster, C.H.

    1982-09-01

    The purpose of this model is to calculate the costs of residential space heating, space cooling, and sanitary water heating or process heating (cooling) using geothermal energy from a hydrothermal reservoir. The model can calculate geothermal heating and cooling costs for residential developments, a multi-district city, or a point demand such as an industrial factory or commercial building. GEOCITY simulates the complete geothermal heating and cooling system, which consists of two principal parts: the reservoir and fluid transmission system and the distribution system. The reservoir and fluid transmission submodel calculates the life-cycle cost of thermal energy supplied to the distribution system by simulating the technical design and cash flows for the exploration, development, and operation of the reservoir and fluid transmission system. The distribution system submodel calculates the life-cycle cost of heat (chill) delivered by the distribution system to the end-users by simulating the technical design and cash flows for the construction and operation of the distribution system. Geothermal space heating is assumed to be provided by circulating hot water through radiators, convectors, fan-coil units, or other in-house heating systems. Geothermal process heating is provided by directly using the hot water or by circulating it through a process heat exchanger. Geothermal space or process cooling is simulated by circulating hot water through lithium bromide/water absorption chillers located at each building. Retrofit costs for both heating and cooling applications can be input by the user. The life-cycle cost of thermal energy from the reservoir and fluid transmission system to the distribution system and the life-cycle cost of heat (chill) to the end-users are calculated using discounted cash flow analysis.
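The discounted cash flow calculation underlying the life-cycle cost of delivered heat can be sketched as a generic levelized-cost computation. The figures are hypothetical and this is not GEOCITY's actual cost model:

```python
def levelized_cost(costs, energy, rate):
    """Life-cycle (levelized) cost of delivered energy via discounted
    cash flow: discounted total costs divided by discounted output."""
    disc_costs = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1.0 + rate) ** t for t, e in enumerate(energy))
    return disc_costs / disc_energy

# Hypothetical 3-year schedule: capital outlay in year 0 (no output),
# then O&M costs while the system delivers heat.
lc = levelized_cost(costs=[1000.0, 100.0, 100.0],
                    energy=[0.0, 500.0, 500.0],
                    rate=0.05)
```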

  4. Computation Directorate 2008 Annual Report

    SciTech Connect

    Crawford, D L

    2009-03-25

    Whether a computer is simulating the aging and performance of a nuclear weapon, the folding of a protein, or the probability of rainfall over a particular mountain range, the necessary calculations can be enormous. Our computers help researchers answer these and other complex problems, and each new generation of system hardware and software widens the realm of possibilities. Building on Livermore's historical excellence and leadership in high-performance computing, Computation added more than 331 trillion floating-point operations per second (teraFLOPS) of power to LLNL's computer room floors in 2008. In addition, Livermore's next big supercomputer, Sequoia, advanced ever closer to its 2011-2012 delivery date, as architecture plans and the procurement contract were finalized. Hyperion, an advanced technology cluster test bed that teams Livermore with 10 industry leaders, made a big splash when it was announced during Michael Dell's keynote speech at the 2008 Supercomputing Conference. The Wall Street Journal touted Hyperion as a 'bright spot amid turmoil' in the computer industry. Computation continues to measure and improve the costs of operating LLNL's high-performance computing systems by moving hardware support in-house, by measuring causes of outages to apply resources asymmetrically, and by automating most of the account and access authorization and management processes. These improvements enable more dollars to go toward fielding the best supercomputers for science, while operating them at less cost and greater responsiveness to the customers.

  6. The social and psychological costs of punishing.

    PubMed

    Adams, Gabrielle S; Mullen, Elizabeth

    2012-02-01

    We review evidence of the psychological and social costs associated with punishing. We propose that these psychological and social costs should be considered (in addition to material costs) when searching for evidence of costly punishment "in the wild."

  7. Theoretical effect of modifications to the upper surface of two NACA airfoils using smooth polynomial additional thickness distributions which emphasize leading edge profile and which vary quadratically at the trailing edge. [using flow equations and a CDC 7600 computer

    NASA Technical Reports Server (NTRS)

    Merz, A. W.; Hague, D. S.

    1975-01-01

    An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions to the upper surface of the NACA 64-206 and 64 sub 1 - 212 airfoils. The additional thickness distribution had the form of a continuous mathematical function which disappears at both the leading edge and the trailing edge. The function behaves as a polynomial of order epsilon sub 1 at the leading edge, and a polynomial of order epsilon sub 2 at the trailing edge. Epsilon sub 2 is a constant and epsilon sub 1 is varied over a range of practical interest. The magnitude of the additional thickness, y, is a second input parameter, and the effect of varying epsilon sub 1 and y on the aerodynamic performance of the airfoil was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic airfoils, and all calculations employ the full potential flow equations for two dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.
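A function with the stated properties, vanishing at both edges, behaving like a polynomial of order epsilon sub 1 at the leading edge and quadratically at the trailing edge, can be sketched as below. The specific form x**eps1 * (1 - x)**eps2 is an illustrative assumption, not necessarily the function used in the report:

```python
def additional_thickness(x, y, eps1, eps2=2.0):
    """Illustrative additional-thickness distribution over chordwise
    position x in [0, 1]: vanishes at both edges, behaves like x**eps1
    near the leading edge (x = 0) and like (1 - x)**eps2 (quadratic for
    eps2 = 2) near the trailing edge (x = 1)."""
    return y * x ** eps1 * (1.0 - x) ** eps2

# Sample along the chord; the peak magnitude scales with the input y.
pts = [additional_thickness(i / 10.0, y=0.02, eps1=0.5) for i in range(11)]
```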

  8. Opportunities for Success: Cost-Effective Programs for Children, Update, 1990. Report together with Additional Minority Views and Dissenting Views of the Select Committee on Children, Youth, and Families, One Hundred First Congress, Second Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. House Select Committee on Children, Youth, and Families.

    This report on effective programs for children updates the 1988 report by providing new and stronger documentation of the programs' benefits and cost effectiveness. Eight programs and types of programs are discussed in Part I and four program areas that warrant attention are discussed in Part II. Part I reports on: (1) the Special Supplemental…

  10. Additive Manufacturing Infrared Inspection

    NASA Technical Reports Server (NTRS)

    Gaddy, Darrell

    2014-01-01

Additive manufacturing is a rapid prototyping technology that allows parts to be built in a series of thin layers from plastics, ceramics, and metals. Metallic additive manufacturing is an emerging form of rapid prototyping that allows complex structures to be built using various metallic powders. Significant time and cost savings have also been observed using metallic additive manufacturing compared with traditional techniques. Development of the metallic additive manufacturing technology has advanced significantly over the last decade, although many of the techniques to inspect parts made from these processes have not advanced significantly or have limitations. Several external geometry inspection techniques exist such as Coordinate Measurement Machines (CMM), Laser Scanners, Structured Light Scanning Systems, or even traditional calipers and gages. All of the aforementioned techniques are limited to external geometry and contours or must use a contact probe to inspect limited internal dimensions. This presentation will document the development of a process for real-time dimensional inspection technique and digital quality record of the additive manufacturing process using Infrared camera imaging and processing techniques.

  11. Educational Costs.

    ERIC Educational Resources Information Center

    Arnold, Robert

    Problems in educational cost accounting and a new cost accounting approach are described in this paper. The limitations of the individualized cost (student units) approach and the comparative cost approach (in the form of fund-function-object) are illustrated. A new strategy, an activity-based system of accounting, is advocated. Borrowed from…

  12. Tackifier for addition polyimides

    NASA Technical Reports Server (NTRS)

    Butler, J. M.; St.clair, T. L.

    1980-01-01

A modification to the addition polyimide, LaRC-160, was prepared to improve tack and drape and increase prepreg out-time. The essentially solventless, high-viscosity laminating resin is synthesized from low-cost liquid monomers. The modified version takes advantage of a reactive, liquid plasticizer which is used in place of solvent and helps solve a major problem of maintaining good prepreg tack and drape, or the ability of the prepreg to adhere to adjacent plies and conform to a desired shape during the lay-up process. This alternate solventless approach allows both longer life of the polymer prepreg and the processing of low-void laminates. This approach appears to be applicable to all addition polyimide systems.

  13. ENERGY COSTS OF IAQ CONTROL THROUGH INCREASED VENTILATION IN A SMALL OFFICE IN A WARM, HUMID CLIMATE: PARAMETRIC ANALYSIS USING THE DOE-2 COMPUTER MODEL

    EPA Science Inventory

    The report gives results of a series of computer runs using the DOE-2.1E building energy model, simulating a small office in a hot, humid climate (Miami). These simulations assessed the energy and relative humidity (RH) penalties when the outdoor air (OA) ventilation rate is inc...

  14. Benefits and Costs of Ultraviolet Fluorescent Lighting

    PubMed Central

    Lestina, Diane C.; Miller, Ted R.; Knoblauch, Richard; Nitzburg, Marcia

    1999-01-01

Objective To demonstrate the improvements in detection and recognition distances using fluorescent roadway delineation and auxiliary ultraviolet (UVA) headlights and to determine the reduction in crashes needed to recover the increased costs of the UVA/fluorescent technology. Methods Field study comparisons with and without UVA headlights. Crash types potentially reduced by UVA/fluorescent technology were estimated using annual crash and injury incidence data from the General Estimates System (1995–1996) and the 1996 Fatality Analysis Reporting System. Crash costs were computed based on body region and threat-to-life injury severity. Results Significant improvements in detection and recognition distances for pedestrian scenarios, ranging from 34% to 117%. A 19% reduction in nighttime motor vehicle crashes involving pedestrians or pedal-cycles will pay for the additional UVA headlight costs. Alternatively, a 5.5% reduction in all relevant nighttime crashes will pay for the additional costs of UVA headlights and fluorescent highway paint combined. Conclusions If the increased detection and recognition distances resulting from using UVA/fluorescent technology as shown in this field study reduce relevant crashes by even small percentages, the benefit-cost ratios will still be greater than 2; thus, the UVA/fluorescent technology is very cost-effective and a definite priority for crash reductions.
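The break-even reasoning above, that a given percentage reduction in relevant crash costs pays for the added technology, is a simple ratio; a sketch with hypothetical dollar figures, not the study's actual cost data:

```python
def breakeven_reduction(added_cost, relevant_crash_cost):
    """Fractional reduction in relevant crash costs needed for the
    added technology cost to pay for itself."""
    return added_cost / relevant_crash_cost

def benefit_cost_ratio(benefits, costs):
    """Benefit-cost ratio; values above 1 favour adoption."""
    return benefits / costs

# Hypothetical: $50M of added headlight cost against $900M of relevant
# nighttime crash costs implies roughly a 5.6% break-even reduction.
frac = breakeven_reduction(50e6, 900e6)
```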

  15. Outline of cost-benefit analysis and a case study

    NASA Technical Reports Server (NTRS)

    Kellizy, A.

    1978-01-01

    The methodology of cost-benefit analysis is reviewed and a case study involving solar cell technology is presented. Emphasis is placed on simplifying the technique in order to permit a technical person not trained in economics to undertake a cost-benefit study comparing alternative approaches to a given problem. The role of economic analysis in management decision making is discussed. In simplifying the methodology it was necessary to restrict the scope and applicability of this report. Additional considerations and constraints are outlined. Examples are worked out to demonstrate the principles. A computer program which performs the computational aspects appears in the appendix.
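The computational core of such a cost-benefit comparison is a net-present-value calculation over discounted cash flows; a minimal sketch with hypothetical alternatives (the appendix program mentioned above is not reproduced here):

```python
def npv(rate, cash_flows):
    """Net present value of a cash-flow stream (year 0 first)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Two hypothetical alternatives at a 5% discount rate; the option with
# the higher NPV is preferred, all else being equal.
npv_a = npv(0.05, [-1000.0, 400.0, 400.0, 400.0])
npv_b = npv(0.05, [-600.0, 250.0, 250.0, 250.0])
```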

  16. Cost Validation Using PRICE H

    NASA Technical Reports Server (NTRS)

    Jack, John; Kwan, Eric; Wood, Milana

    2011-01-01

PRICE H was introduced into the JPL cost estimation tool set circa 2003. It became more available at JPL when IPAO funded the NASA-wide site license for all NASA centers. PRICE H was mainly used as one of the cost tools to validate proposal grassroots cost estimates. Program offices at JPL view PRICE H as an additional crosscheck to Team X (JPL Concurrent Engineering Design Center) estimates. PRICE H became widely accepted circa 2007 at JPL when the program offices moved away from grassroots cost estimation for Step 1 proposals. PRICE H is now one of the key cost tools used for cost validation, cost trades, and independent cost estimates.

  17. What does an MRI scan cost?

    PubMed

    Young, David W

    2015-11-01

    Historically, hospital departments have computed the costs of individual tests or procedures using the ratio of cost to charges (RCC) method, which can produce inaccurate results. To determine a more accurate cost of a test or procedure, the activity-based costing (ABC) method must be used. Accurate cost calculations will ensure reliable information about the profitability of a hospital's DRGs. PMID:26685437
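    The contrast between the two costing methods named in this abstract can be sketched directly. The figures below are purely illustrative assumptions, not data from the article.

```python
def rcc_cost(charge, dept_costs, dept_charges):
    """Ratio-of-cost-to-charges: scale the test's charge by the
    department-wide cost/charge ratio. Simple, but can misstate the
    cost of any test that differs from the departmental average."""
    return charge * (dept_costs / dept_charges)

def abc_cost(activities):
    """Activity-based costing: sum (driver quantity x rate) over the
    activities the test actually consumes."""
    return sum(qty * rate for qty, rate in activities)

# Hypothetical MRI scan:
rcc = rcc_cost(charge=2_000, dept_costs=6_000_000, dept_charges=12_000_000)
abc = abc_cost([
    (0.75, 400),  # scanner time: 0.75 h at $400/h
    (1.0, 150),   # technologist time: 1 h at $150/h
    (0.25, 300),  # radiologist read: 0.25 h at $300/h
    (1.0, 85),    # supplies and contrast per scan
])
print(rcc)  # 1000.0
print(abc)  # 610.0
```

    The same scan yields different costs under the two methods; ABC traces the resources the scan actually uses, which is why the abstract argues it is needed for reliable DRG profitability figures.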

  18. 24 CFR 908.108 - Cost.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 4 2011-04-01 2011-04-01 false Cost. 908.108 Section 908.108..., RENTAL VOUCHER, AND MODERATE REHABILITATION PROGRAMS § 908.108 Cost. (a) General. The costs of the... computer hardware or software, or both, the cost of contracting for those services, or the cost...

  19. 24 CFR 908.108 - Cost.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Cost. 908.108 Section 908.108..., RENTAL VOUCHER, AND MODERATE REHABILITATION PROGRAMS § 908.108 Cost. (a) General. The costs of the... computer hardware or software, or both, the cost of contracting for those services, or the cost...

  20. 42 CFR 413.24 - Adequate cost data and cost finding.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... percentages of cost in relation to total costs will be allocated this way. The combined total costs of... Reimbursement Questionnaire. Additionally, a cost report for a teaching hospital is rejected for lack...

  1. 42 CFR 413.24 - Adequate cost data and cost finding.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... percentages of cost in relation to total costs will be allocated this way. The combined total costs of... Reimbursement Questionnaire. Additionally, a cost report for a teaching hospital is rejected for lack...

  2. Cost goals

    NASA Technical Reports Server (NTRS)

    Hoag, J.

    1981-01-01

    Cost goal activities for the point focusing parabolic dish program are reported. Cost goals involve three tasks: (1) determination of the value of the dish systems to potential users; (2) the cost targets of the dish system are set out; (3) the value side and cost side are integrated to provide information concerning the potential size of the market for parabolic dishes. The latter two activities are emphasized.

  3. Tracking Costs

    ERIC Educational Resources Information Center

    Erickson, Paul W.

    2010-01-01

    Even though there's been a slight reprieve in energy costs, the reality is that the cost of non-renewable energy is increasing, and state education budgets are shrinking. One way to keep energy and operations costs from overshadowing education budgets is to develop a 10-year energy audit plan to eliminate waste. First, facility managers should…

  4. On the accuracy of the GIAO-DFT calculation of 15N NMR chemical shifts of the nitrogen-containing heterocycles--a gateway to better agreement with experiment at lower computational cost.

    PubMed

    Samultsev, Dmitry O; Semenov, Valentin A; Krivdin, Leonid B

    2014-05-01

    The main factors affecting the accuracy and computational cost of the gauge-independent atomic orbital density functional theory (GIAO-DFT) calculation of (15)N NMR chemical shifts in the representative series of key nitrogen-containing heterocycles--azoles and azines--have been systematically analyzed. In the calculation of (15)N NMR chemical shifts, the best result has been achieved with the KT3 functional used in combination with Jensen's pcS-3 basis set (GIAO-DFT-KT3/pcS-3) resulting in the value of mean absolute error as small as 5 ppm for a range exceeding 270 ppm in a benchmark series of 23 compounds with an overall number of 41 different (15)N NMR chemical shifts. Another essential finding is that basically, the application of the locally dense basis set approach is justified in the calculation of (15)N NMR chemical shifts within the 3-4 ppm error that results in a dramatic decrease in computational cost. Based on the present data, we recommend GIAO-DFT-KT3/pcS-3//pc-2 as one of the most effective locally dense basis set schemes for the calculation of (15)N NMR chemical shifts.

  5. Computer-assisted assignment of functional domains in the nonstructural polyprotein of hepatitis E virus: delineation of an additional group of positive-strand RNA plant and animal viruses.

    PubMed

    Koonin, E V; Gorbalenya, A E; Purdy, M A; Rozanov, M N; Reyes, G R; Bradley, D W

    1992-09-01

    Computer-assisted comparison of the nonstructural polyprotein of hepatitis E virus (HEV) with proteins of other positive-strand RNA viruses allowed the identification of the following putative functional domains: (i) RNA-dependent RNA polymerase, (ii) RNA helicase, (iii) methyltransferase, (iv) a domain of unknown function ("X" domain) flanking the papain-like protease domains in the polyproteins of animal positive-strand RNA viruses, and (v) papain-like cysteine protease domain distantly related to the putative papain-like protease of rubella virus (RubV). Comparative analysis of the polymerase and helicase sequences of positive-strand RNA viruses belonging to the so-called "alpha-like" supergroup revealed grouping between HEV, RubV, and beet necrotic yellow vein virus (BNYVV), a plant furovirus. Two additional domains have been identified: one showed significant conservation between HEV, RubV, and BNYVV, and the other showed conservation specifically between HEV and RubV. The large nonstructural proteins of HEV, RubV, and BNYVV retained similar domain organization, with the exceptions of relocation of the putative protease domain in HEV as compared to RubV and the absence of the protease and X domains in BNYVV. These observations show that HEV, RubV, and BNYVV encompass partially conserved arrays of distinctive putative functional domains, suggesting that these viruses constitute a distinct monophyletic group within the alpha-like supergroup of positive-strand RNA viruses. PMID:1518855

  6. Designers' unified cost model

    NASA Technical Reports Server (NTRS)

    Freeman, W.; Ilcewicz, L.; Swanson, G.; Gutowski, T.

    1992-01-01

    The Structures Technology Program Office (STPO) at NASA LaRC has initiated development of a conceptual and preliminary designers' cost prediction model. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. This paper presents the team members, approach, goals, plans, and progress to date for development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).

  7. "When Cost Measures Contradict"

    SciTech Connect

    Montgomery, W. D.; Smith, A. E.; Biggar, S. L.; Bernstein, P. M.

    2003-05-09

    When regulators put forward new economic or regulatory policies, there is a need to compare the costs and benefits of these new policies to existing policies and other alternatives to determine which policy is most cost-effective. For command-and-control policies it is quite difficult to compute costs, but for more market-based policies economists have had a great deal of success employing general equilibrium models to assess a policy's costs. Not all cost measures, however, arrive at the same ranking. Furthermore, cost measures can produce contradictory results for a specific policy. These problems make it difficult for a policy-maker to determine the best policy. For a cost measure to be of value, one would like to be confident of two things. First, one wants to be sure whether the policy is a winner or a loser. Second, one wants to be confident that the measure produces the correct policy ranking. That is, one wants to have confidence in a cost measure's ability to correctly rank policies from most beneficial to most harmful. This paper empirically analyzes these two properties of different cost measures as they pertain to assessing the costs of carbon abatement policies, especially the Kyoto Protocol, under alternative assumptions about implementation.

  8. The Economics of Computers.

    ERIC Educational Resources Information Center

    Sharpe, William F.

    A microeconomic theory is applied in this book to computer services and costs and for the benefit of those who are decision-makers in the selection, financing, and use of computers. Subtopics of the theory discussed include value and demand; revenue and profits; time and risk; and costs, inputs, and outputs. Application of the theory is explained…

  9. Performance Boosting Additive

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Mainstream Engineering Corporation was awarded Phase I and Phase II contracts from Goddard Space Flight Center's Small Business Innovation Research (SBIR) program in early 1990. With support from the SBIR program, Mainstream Engineering Corporation has developed a unique low cost additive, QwikBoost (TM), that increases the performance of air conditioners, heat pumps, refrigerators, and freezers. Because of the energy and environmental benefits of QwikBoost, Mainstream received the Tibbetts Award at a White House Ceremony on October 16, 1997. QwikBoost was introduced at the 1998 International Air Conditioning, Heating, and Refrigeration Exposition. QwikBoost is packaged in a handy 3-ounce can (pressurized with R-134a) and will be available for automotive air conditioning systems in summer 1998.

  10. Cutting Transportation Costs.

    ERIC Educational Resources Information Center

    Lewis, Barbara

    1982-01-01

    Beginning on the front cover, this article tells how school districts are reducing their transportation costs. Particularly effective measures include the use of computers for bus maintenance and scheduling, school board ownership of buses, and the conversion of gasoline-powered buses to alternative fuels. (Author/MLF)

  11. Modeling costs of plastics recycling

    SciTech Connect

    Not Available

    1993-10-01

    This article describes TCM, a computer spreadsheet technique to simulate process costs. In a technical cost model, cost is assigned to each unit operation in a process flow diagram. Costs are summarized corresponding to unit operations, each representing a single machine or station with an associated production rate. Each station is characterized by factors including number of laborers, equipment and tooling costs, and other investment and operating costs. Technical cost models can be used to: simulate costs of manufacturing; establish direct comparisons between material, process, and design alternatives; investigate the effect of changes in the process options on overall cost; identify limiting process steps and parameters; determine merits of specific process and design improvements.
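    The unit-operation structure this abstract describes maps naturally onto a small model: each station carries labor, amortized equipment, and operating costs, scaled by its production rate, and the stations sum to a process cost. The stations and figures below are illustrative assumptions, not numbers from the article.

```python
from dataclasses import dataclass

@dataclass
class Station:
    """One unit operation in a technical cost model (TCM)."""
    name: str
    rate_kg_per_h: float    # production rate of this station
    laborers: int
    wage_per_h: float
    equip_cost: float       # installed equipment and tooling cost
    amort_h: float          # hours over which equipment cost is spread
    operating_per_h: float  # energy, maintenance, consumables

    def cost_per_kg(self) -> float:
        hourly = (self.laborers * self.wage_per_h
                  + self.equip_cost / self.amort_h
                  + self.operating_per_h)
        return hourly / self.rate_kg_per_h

# Hypothetical plastics-recycling line:
line = [
    Station("shredding", 500, 1, 25, 400_000, 20_000, 30),
    Station("washing",   450, 2, 25, 600_000, 20_000, 45),
    Station("extrusion", 400, 1, 30, 900_000, 20_000, 60),
]

for s in line:
    print(f"{s.name}: ${s.cost_per_kg():.3f}/kg")
total = sum(s.cost_per_kg() for s in line)
print(f"total: ${total:.3f}/kg")
```

    Because each station is parameterized separately, the model supports exactly the uses listed above: swapping a process option changes one station's inputs, and the limiting step is simply the station with the highest per-kilogram cost.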

  12. On Convergence of Development Costs and Cost Models for Complex Spaceflight Instrument Electronics

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Patel, Umeshkumar D.; Kasa, Robert L.; Hestnes, Phyllis; Brown, Tammy; Vootukuru, Madhavi

    2008-01-01

    Development costs of a few recent spaceflight instrument electrical and electronics subsystems have diverged from respective heritage cost model predictions. The cost models used are Grass Roots, Price-H and Parametric Model. These cost models originated in the military and industry around 1970 and were successfully adopted and patched by NASA on a mission-by-mission basis for years. However, the complexity of new instruments has recently changed rapidly by orders of magnitude. This is most obvious in the complexity of a representative spaceflight instrument electronics data system. It is now required to perform intermediate processing of digitized data apart from conventional processing of science phenomenon signals from multiple detectors. This involves on-board instrument formatting of computational operands from raw data (for example, images), multi-million operations per second on large volumes of data in reconfigurable hardware (in addition to processing on a general-purpose embedded or standalone instrument flight computer), as well as making decisions for on-board system adaptation and resource reconfiguration. The instrument data system is now tasked to perform more functions, such as forming packets and instrument-level data compression of more than one data stream, which are traditionally performed by the spacecraft command and data handling system. It is furthermore required that the electronics box for new complex instruments be developed for single-digit-watt power consumption and small size, be lightweight, and deliver super-computing capabilities. The conflict between the actual development costs of newer complex instruments and their electronics components' heritage cost model predictions seems to be irreconcilable. This conflict and an approach to its resolution are addressed in this paper by determining the complexity parameters and complexity index, and their use in an enhanced cost model.

  13. Computational Tracking of Mental Health in Youth: Latin American Contributions to a Low-Cost and Effective Solution for Early Psychiatric Diagnosis.

    PubMed

    Mota, Natália Bezerra; Copelli, Mauro; Ribeiro, Sidarta

    2016-06-01

    The early onset of mental disorders can lead to serious cognitive damage, and timely interventions are needed in order to prevent them. In patients of low socioeconomic status, as is common in Latin America, it can be hard to identify children at risk. Here, we briefly introduce the problem by reviewing the scarce epidemiological data from Latin America regarding the onset of mental disorders, and discussing the difficulties associated with early diagnosis. Then we present computational psychiatry, a new field to which we and other Latin American researchers have contributed methods particularly relevant for the quantitative investigation of psychopathologies manifested during childhood. We focus on new technologies that help to identify mental disease and provide prodromal evaluation, so as to promote early differential diagnosis and intervention. To conclude, we discuss the application of these methods to clinical and educational practice. A comprehensive and quantitative characterization of verbal behavior in children, from hospitals and laboratories to homes and schools, may lead to more effective pedagogical and medical intervention. PMID:27254827

  14. Computer security in DOE distributed computing systems

    SciTech Connect

    Hunteman, W.J.

    1990-01-01

    The modernization of DOE facilities amid limited funding is creating pressure on DOE facilities to find innovative approaches to their daily activities. Distributed computing systems are becoming cost-effective solutions to improved productivity. This paper defines and describes typical distributed computing systems in the DOE. The special computer security problems present in distributed computing systems are identified and compared with traditional computer systems. The existing DOE computer security policy supports only basic networks and traditional computer systems and does not address distributed computing systems. A review of the existing policy requirements is followed by an analysis of the policy as it applies to distributed computing systems. Suggested changes in the DOE computer security policy are identified and discussed. The long lead time in updating DOE policy will require guidelines for applying the existing policy to distributed systems. Some possible interim approaches are identified and discussed. 2 refs.

  15. Energy and cost analysis of a solar-hydrogen combined heat and power system for remote power supply using a computer simulation

    SciTech Connect

    Shabani, Bahman; Andrews, John; Watkins, Simon

    2010-01-15

    A simulation program, based on Visual Pascal, for sizing and techno-economic analysis of the performance of solar-hydrogen combined heat and power systems for remote applications is described. The accuracy of the submodels is checked by comparing the real performances of the system's components, obtained from experimental measurements, with model outputs. The use of the heat generated by the PEM fuel cell, and any unused excess hydrogen, is investigated for hot water production or space heating while the solar-hydrogen system is supplying electricity. A 5 kWh daily demand profile and the solar radiation profile of Melbourne have been used in a case study to investigate the typical techno-economic characteristics of the system to supply a remote household. The simulation shows that by harnessing both the thermal load and excess hydrogen it is possible to increase the average yearly energy efficiency of the fuel cell in the solar-hydrogen system from just below 40% up to about 80% in both heat and power generation (based on the higher heating value of hydrogen). The fuel cell in the system is conventionally sized to meet the peak of the demand profile. However, an economic optimisation analysis illustrates that installing a larger fuel cell could lead to up to a 15% reduction in the unit cost of the electricity, to an average of just below 90 c/kWh over the assessment period of 30 years. Further, for an economically optimal size of the fuel cell, nearly half of the yearly energy demand for hot water of the remote household could be supplied by heat recovery from the fuel cell and by utilising unused hydrogen in the exit stream. Such a system could then complement a conventional solar water heating system by providing the boosting energy (usually in the order of 40% of the total) normally obtained from gas or electricity. (author)

  16. 28 CFR 100.11 - Allowable costs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) COST RECOVERY REGULATIONS, COMMUNICATIONS... reimbursement under section 109(e) CALEA are: (1) All reasonable plant costs directly associated with the... undergoes major modifications; (2) Additional reasonable plant costs directly associated with making...

  17. Cloud Computing for radiologists.

    PubMed

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  18. Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps only second to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  19. Testing the Cost Yardstick in Cost-Quality Studies.

    ERIC Educational Resources Information Center

    Finch, James N.

    1967-01-01

    To discover how costs affect quality, 16 different methods of computing educational costs are developed and correlated with a cluster of "quality related" factors (QRC). Data for the correlation were obtained from 1,055 city school districts in 48 states. The QRC is composed of staffing adequacy variables, measures of teacher quality, and…

  20. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    PubMed Central

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  1. A lightweight distributed framework for computational offloading in mobile cloud computing.

    PubMed

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on modern Smartphones. However, such applications are still constrained by limitations in the processing potential, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and therefore offers a lightweight solution for computational offloading in MCC.

  2. 49 CFR 1139.3 - Cost study.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... (d) Where cost studies are developed through the use of computer processing techniques, there shall... 49 Transportation 8 2010-10-01 2010-10-01 false Cost study. 1139.3 Section 1139.3 Transportation... Commodities § 1139.3 Cost study. (a) The respondents shall submit a cost study. Highway Form B may be used...

  3. CAI: Its Cost and Its Role.

    ERIC Educational Resources Information Center

    Pressman, Israel; Rosenbloom, Bruce

    1984-01-01

    Describes and evaluates costs of hardware, software, training, and maintenance for computer assisted instruction (CAI) as they relate to total system cost. An example of an educational system provides an illustration of CAI cost analysis. Future developments, cost effectiveness, affordability, and applications in public and private environments…

  4. The Challenge of Computers.

    ERIC Educational Resources Information Center

    Leger, Guy

    Computers may change teachers' lifestyles, teaching styles, and perhaps even their personal values. A brief survey of the history of computers demonstrates the incredible pace at which computer technology is moving ahead. The cost and size of microchips will continue to decline dramatically over the next 20 years, while the capability and variety…

  5. Assessing the Costs of Adequacy in California Public Schools: A Cost Function Approach

    ERIC Educational Resources Information Center

    Imazeki, Jennifer

    2008-01-01

    In this study, a cost function is used to estimate the costs for California districts to meet the achievement goals set out for them by the state. I calculate estimates of base costs (i.e., per pupil costs in a district with relatively low levels of student need) and marginal costs (i.e., the additional costs associated with specific student…
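    The base-plus-marginal structure described in this abstract can be sketched as a simple per-pupil cost function. The weight values below are purely illustrative assumptions, not estimates from the study.

```python
def per_pupil_cost(base, need_shares, marginal_weights):
    """base: per-pupil cost in a district with low student need;
    need_shares: fraction of pupils in each need category;
    marginal_weights: extra cost per pupil in that category,
    expressed as a fraction of the base cost."""
    extra = sum(share * w for share, w in zip(need_shares, marginal_weights))
    return base * (1 + extra)

# Assumed example: 30% of pupils in a category adding 40% of base each,
# 10% in a category adding 25% of base each.
print(f"{per_pupil_cost(8_000, [0.30, 0.10], [0.40, 0.25]):.2f}")
```

    The point of the functional form is the one the abstract makes: total adequacy cost separates into a base component common to all districts and marginal components that scale with each district's student-need composition.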

  6. Distributed computing at the SSCL

    SciTech Connect

    Cormell, L.; White, R.

    1993-05-01

    The rapid increase in the availability of high-performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desktop is well known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory. In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.

  7. Costing imaging procedures.

    PubMed

    Bretland, P M

    1988-01-01

    The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting. PMID:3349241
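    The costing approach this abstract describes (variable costs traced to each procedure, with semi-variable and fixed costs absorbed across the department's workload) can be sketched as follows. The overhead is absorbed per procedure here for simplicity, one of several possible absorption bases, and the figures are illustrative assumptions, not data from the article.

```python
def procedure_cost(variable_cost, overhead_total, annual_volume):
    """Variable cost per procedure plus an absorbed share of the
    department's semi-variable and fixed overheads."""
    return variable_cost + overhead_total / annual_volume

overheads = 300_000 + 500_000  # semi-variable staffing + fixed/capital costs, $

# Per-procedure cost at two activity levels (variable cost $12/exam):
print(procedure_cost(12.0, overheads, 40_000))  # 32.0
print(procedure_cost(12.0, overheads, 50_000))  # 28.0
```

    The second print illustrates the abstract's observation that where more procedures are undertaken, the semi-variable and fixed elements of per-procedure cost decrease, since the same overhead is spread over a larger volume.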

  8. Costing imaging procedures.

    PubMed

    Bretland, P M

    1988-01-01

    The existing National Health Service financial system makes comprehensive costing of any service very difficult. A method of costing using modern commercial methods has been devised, classifying costs into variable, semi-variable and fixed and using the principle of overhead absorption for expenditure not readily allocated to individual procedures. It proved possible to establish a cost spectrum over the financial year 1984-85. The cheapest examinations were plain radiographs outside normal working hours, followed by plain radiographs, ultrasound, special procedures, fluoroscopy, nuclear medicine, angiography and angiographic interventional procedures in normal working hours. This differs from some published figures, particularly those in the Körner report. There was some overlap between fluoroscopic interventional and the cheaper nuclear medicine procedures, and between some of the more expensive nuclear medicine procedures and the cheaper angiographic ones. Only angiographic and the few more expensive nuclear medicine procedures exceed the cost of the inpatient day. The total cost of the imaging service to the district was about 4% of total hospital expenditure. It is shown that where more procedures are undertaken, the semi-variable and fixed (including capital) elements of the cost decrease (and vice versa) so that careful study is required to assess the value of proposed economies. The method is initially time-consuming and requires a computer system with 512 Kb of memory, but once the basic costing system is established in a department, detailed financial monitoring should become practicable. The necessity for a standard comprehensive costing procedure of this nature, based on sound cost accounting principles, appears inescapable, particularly in view of its potential application to management budgeting.

  9. The costs of asthma.

    PubMed

    Barnes, P J; Jonsson, B; Klim, J B

    1996-04-01

    At present, asthma represents a substantial burden on health care resources in all countries so far studied. The costs of asthma are largely due to uncontrolled disease, and are likely to rise as its prevalence and severity increase. Costs could be significantly reduced if disease control is improved. A large proportion of the total cost of illness is derived from treating the consequences of poor asthma control-direct costs, such as emergency room use and hospitalizations. Indirect costs, which include time off work or school and early retirement, are incurred when the disease is not fully controlled and becomes severe enough to have an effect on daily life. In addition, quality of life assessments show that asthma has a significant socioeconomic impact, not only on the patients themselves, but on the whole family. Underuse of prescribed therapy, which includes poor compliance, significantly contributes towards the poor control of asthma. The consequences of poor compliance in asthma include increased morbidity and sometimes mortality, and increased health care expenditure. To improve asthma management, international guidelines have been introduced which recommend an increase in the use of prophylactic therapy. The resulting improvements in the control of asthma will reduce the number of hospitalizations associated with asthma, and may ultimately produce a shift within direct costs, with subsequent reductions in indirect costs. In addition, costs may be reduced by improving therapeutic interventions and through effective patient education programmes. This paper reviews current literature on the costs of asthma to assess how effectively money is spent and, by estimating the proportion of the cost attributable to uncontrolled disease, will identify where financial savings might be made. PMID:8726924

  10. Progress Toward Automated Cost Estimation

    NASA Technical Reports Server (NTRS)

    Brown, Joseph A.

    1992-01-01

    Report discusses efforts to develop standard system of automated cost estimation (ACE) and computer-aided design (CAD). Advantage of system is time saved and accuracy enhanced by automating extraction of quantities from design drawings, consultation of price lists, and application of cost and markup formulas.

  11. Program Tracks Cost Of Travel

    NASA Technical Reports Server (NTRS)

    Mauldin, Lemuel E., III

    1993-01-01

    Travel Forecaster is menu-driven, easy-to-use computer program that plans, forecasts cost, and tracks actual vs. planned cost of business-related travel of division or branch of organization and compiles information into data base to aid travel planner. Ability of program to handle multiple trip entries makes it valuable time-saving device.

  12. Identification of cost effective energy conservation measures

    NASA Technical Reports Server (NTRS)

    Bierenbaum, H. S.; Boggs, W. H.

    1978-01-01

    In addition to a successful program of readily implemented conservation actions for reducing building energy consumption at Kennedy Space Center, recent detailed analyses have identified further substantial savings for buildings representative of technical facilities designed when energy costs were low. The techniques employed for determination of these energy savings consisted of facility configuration analysis, power and lighting measurements, detailed computer simulations and simulation verifications. Use of these methods resulted in identification of projected energy savings as large as $330,000 a year (approximately a two-year break-even period) in a single building. Application of these techniques to other commercial buildings is discussed.

  13. Reducing reconditioning costs using computerized CP technology

    SciTech Connect

    Rizzo, M.E.; Wildman, T.A.

    1997-12-01

    New data collection technology and improved data interpretation diminish the need to spend hundreds of thousands or even millions of dollars to recondition poorly coated pipelines without compromising safety. Application of alternative cathodic protection criteria rewards companies with additional resources to remain competitive. This paper examines the results of applying a combination of technologies that matured throughout the 1980s: Global Positioning Satellites, rugged field computers, fast analog-to-digital converters, solid-state interruption devices, and interpretation of oscillographic cathodic protection waveprints. Cost-effective application of sound engineering principles assures safe pipeline operation, exceeds the letter and the spirit of NACE and DOT requirements, and yields significant financial returns.

  14. General aviation design synthesis utilizing interactive computer graphics

    NASA Technical Reports Server (NTRS)

    Galloway, T. L.; Smith, M. R.

    1976-01-01

    Interactive computer graphics is a fast growing area of computer application, due to such factors as substantial cost reductions in hardware, general availability of software, and expanded data communication networks. In addition to allowing faster and more meaningful input/output, computer graphics permits the use of data in graphic form to carry out parametric studies for configuration selection and for assessing the impact of advanced technologies on general aviation designs. The incorporation of interactive computer graphics into a NASA developed general aviation synthesis program is described, and the potential uses of the synthesis program in preliminary design are demonstrated.

  15. Mobile Computing for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Swietek, Gregory E. (Technical Monitor)

    1994-01-01

    The use of commercial computer technology in specific aerospace mission applications can reduce the cost and project cycle time required for the development of special-purpose computer systems. Additionally, the pace of technological innovation in the commercial market has made new computer capabilities available for demonstrations and flight tests. Three areas of research and development being explored by the Portable Computer Technology Project at NASA Ames Research Center are the application of commercial client/server network computing solutions to crew support and payload operations, the analysis of requirements for portable computing devices, and testing of wireless data communication links as extensions to the wired network. This paper will present computer architectural solutions to portable workstation design including the use of standard interfaces, advanced flat-panel displays and network configurations incorporating both wired and wireless transmission media. It will describe the design tradeoffs used in selecting high-performance processors and memories, interfaces for communication and peripheral control, and high resolution displays. The packaging issues for safe and reliable operation aboard spacecraft and aircraft are presented. The current status of wireless data links for portable computers is discussed from a system design perspective. An end-to-end data flow model for payload science operations from the experiment flight rack to the principal investigator is analyzed using capabilities provided by the new generation of computer products. A future flight experiment on-board the Russian MIR space station will be described in detail including system configuration and function, the characteristics of the spacecraft operating environment, the flight qualification measures needed for safety review, and the specifications of the computing devices to be used in the experiment. The software architecture chosen shall be presented. An analysis of the

  16. Trends; Integrating computer systems

    SciTech Connect

    de Buyl, M.

    1991-11-04

    This paper reports that computers are invaluable tools in assisting E and P managers with their information management and analysis tasks. Oil companies and software houses are striving to adapt their products and work practices to capitalize on the rapid evolution in computer hardware performance and affordability. Ironically, an investment in computers aimed at reducing risk and cost also contains an element of added risk and cost. Hundreds of millions of dollars have been spent by the oil industry in purchasing hardware and software and in developing software. Unfortunately, these investments may not have completely fulfilled the industry's expectations. The lower return on computer science investments is due to: Unmet expectations in productivity gains. Premature computer hardware and software obsolescence. Inefficient data transfer between software applications. Hidden costs of computer support personnel and vendors.

  17. Integration of High-Performance Computing into Cloud Computing Services

    NASA Astrophysics Data System (ADS)

    Vouk, Mladen A.; Sills, Eric; Dreher, Patrick

    High-Performance Computing (HPC) projects span a spectrum of computer hardware implementations ranging from peta-flop supercomputers, high-end tera-flop facilities running a variety of operating systems and applications, to mid-range and smaller computational clusters used for HPC application development, pilot runs and prototype staging clusters. What they all have in common is that they operate as a stand-alone system rather than a scalable and shared user re-configurable resource. The advent of cloud computing has changed the traditional HPC implementation. In this article, we will discuss a very successful production-level architecture and policy framework for supporting HPC services within a more general cloud computing infrastructure. This integrated environment, called Virtual Computing Lab (VCL), has been operating at NC State since fall 2004. Nearly 8,500,000 HPC CPU-Hrs were delivered by this environment to NC State faculty and students during 2009. In addition, we present and discuss operational data that show that integration of HPC and non-HPC (or general VCL) services in a cloud can substantially reduce the cost of delivering cloud services (down to cents per CPU hour).
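
The closing "cents per CPU hour" claim is simple unit arithmetic. The annual cost below is hypothetical; only the ~8.5M delivered CPU-hour figure comes from the article:

```python
def cents_per_cpu_hour(annual_cost_usd, delivered_cpu_hours):
    """Unit cost of delivered compute, in US cents per CPU-hour."""
    return 100.0 * annual_cost_usd / delivered_cpu_hours

# Hypothetical: $500,000/year spread over the ~8.5M HPC CPU-hours cited
rate = cents_per_cpu_hour(500_000, 8_500_000)  # ~5.9 cents per CPU-hour
```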

  18. The Prevalence of Phosphorus Containing Food Additives in Top Selling Foods in Grocery Stores

    PubMed Central

    León, Janeen B.; Sullivan, Catherine M.; Sehgal, Ashwini R.

    2013-01-01

    Objective: To determine the prevalence of phosphorus-containing food additives in best selling processed grocery products and to compare the phosphorus content of a subset of top selling foods with and without phosphorus additives. Design: The labels of 2394 best selling branded grocery products in northeast Ohio were reviewed for phosphorus additives. The top 5 best selling products containing phosphorus additives from each food category were matched with similar products without phosphorus additives and analyzed for phosphorus content. Four days of sample meals consisting of foods with and without phosphorus additives were created and daily phosphorus and pricing differentials were computed. Setting: Northeast Ohio. Main outcome measures: Presence of phosphorus-containing food additives; phosphorus content. Results: 44% of the best selling grocery items contained phosphorus additives. The additives were particularly common in the prepared frozen foods (72%), dry food mixes (70%), packaged meat (65%), bread & baked goods (57%), soup (54%), and yogurt (51%) categories. Phosphorus additive-containing foods averaged 67 mg phosphorus/100 g more than matched non-additive-containing foods (p=.03). Sample meals comprised mostly of phosphorus additive-containing foods had 736 mg more phosphorus per day compared to meals consisting of only additive-free foods. Phosphorus additive-free meals cost an average of $2.00 more per day. Conclusion: Phosphorus additives are common in best selling processed groceries and contribute significantly to their phosphorus content. Moreover, phosphorus additive foods are less costly than phosphorus additive-free foods. As a result, persons with chronic kidney disease may purchase these popular low-cost groceries and unknowingly increase their intake of highly bioavailable phosphorus. PMID:23402914
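
The daily differential reported in the abstract is an average-of-averages computation. The four-day totals below are invented, chosen only so the arithmetic lands on the reported 736 mg/day; the real study used measured phosphorus content:

```python
# Hypothetical daily phosphorus totals (mg/day) over four sample-meal days
additive_meals_mg = [1800, 1750, 1900, 1820]   # mostly additive-containing foods
additive_free_mg  = [1050, 1020, 1160, 1096]   # additive-free foods only

def mean(xs):
    return sum(xs) / len(xs)

phosphorus_differential = mean(additive_meals_mg) - mean(additive_free_mg)  # 736.0
```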

  19. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    PubMed

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE. PMID:25247298
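
One plausible reading of the percentage comparisons above is a relative difference against the faster (or cheaper) provider. The wall-clock times below are hypothetical, not the study's measurements:

```python
def pct_more(a, b):
    """Percent by which a exceeds b (relative difference against b)."""
    return 100.0 * (a - b) / b

# Hypothetical assembly wall-clock times in hours, with GCE faster than EMR
emr_hours, gce_hours = 15.3, 10.0
slowdown = pct_more(emr_hours, gce_hours)  # EMR ~53% slower than GCE
```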


  1. Benchmarking Undedicated Cloud Computing Providers for Analysis of Genomic Datasets

    PubMed Central

    Yazar, Seyhan; Gooden, George E. C.; Mackey, David A.; Hewitt, Alex W.

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5–78.2) for E.coli and 53.5% (95% CI: 34.4–72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5–303.1) and 173.9% (95% CI: 134.6–213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE. PMID:25247298

  2. An iron–oxygen intermediate formed during the catalytic cycle of cysteine dioxygenase

    PubMed Central

    Tchesnokov, E. P.; Faponle, A. S.; Davies, C. G.; Quesne, M. G.; Turner, R.; Fellner, M.; Souness, R. J.; Wilbanks, S. M.

    2016-01-01

    Cysteine dioxygenase is a key enzyme in the breakdown of cysteine, but its mechanism remains controversial. A combination of spectroscopic and computational studies provides the first evidence of a short-lived intermediate in the catalytic cycle. The intermediate decays within 20 ms and has absorption maxima at 500 and 640 nm. PMID:27297454

  3. SEASAT economic assessment. Volume 10: The SATIL 2 program (a program for the evaluation of the costs of an operational SEASAT system as a function of operational requirements and reliability. [computer programs for economic analysis and systems analysis of SEASAT satellite systems

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The SATIL 2 computer program was developed to assist with the programmatic evaluation of alternative approaches to establishing and maintaining a specified mix of operational sensors on spacecraft in an operational SEASAT system. The program computes the probability distributions of events (i.e., number of launch attempts, number of spacecraft purchased, etc.), annual recurring cost, and present value of recurring cost. This is accomplished for the specific task of placing a desired mix of sensors in orbit in an optimal fashion in order to satisfy a specified sensor demand function. Flow charts are shown, and printouts of the programs are given.
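
The "present value of recurring cost" output can be sketched as standard discounting. The cost stream and rate below are hypothetical, not SATIL 2 inputs:

```python
def present_value(annual_recurring_costs, discount_rate):
    """Discount a stream of annual recurring costs back to year zero."""
    return sum(cost / (1.0 + discount_rate) ** year
               for year, cost in enumerate(annual_recurring_costs, start=1))

# Hypothetical: $100/year for three years at a 10% discount rate
pv = present_value([100.0, 100.0, 100.0], 0.10)  # ~248.69
```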

  4. Textbooks: Costs and Issues

    ERIC Educational Resources Information Center

    Mize, Rita

    2004-01-01

    As community colleges seek to be as accessible as possible to students and attempt to retain low enrollment fees, manageable parking fees, and waiver of fees for those with financial needs, an additional and significant cost, for textbooks and supplies, has not been addressed systematically. While fees for a full-time student are $390 per…

  5. Operating Dedicated Data Centers - Is It Cost-Effective?

    NASA Astrophysics Data System (ADS)

    Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.

    2014-06-01

    The advent of cloud computing centres such as Amazon's EC2 and Google's Computing Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of likely future cost effectiveness of dedicated computing resources is also presented.

  6. Distributed computing environments for future space control systems

    NASA Technical Reports Server (NTRS)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  7. 42 CFR 412.84 - Payment for extraordinarily high-cost cases (cost outliers).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... by the cost to charge ratios applicable to operating and capital costs, respectively, as described in... capital cost-to-charge ratios used to adjust covered charges are computed annually by the intermediary for..., 2003, statewide cost-to-charge ratios are used in those instances in which a hospital's operating...

  8. 42 CFR 412.84 - Payment for extraordinarily high-cost cases (cost outliers).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... by the cost to charge ratios applicable to operating and capital costs, respectively, as described in... capital cost-to-charge ratios used to adjust covered charges are computed annually by the intermediary for..., 2003, statewide cost-to-charge ratios are used in those instances in which a hospital's operating...

  9. 42 CFR 412.84 - Payment for extraordinarily high-cost cases (cost outliers).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... by the cost to charge ratios applicable to operating and capital costs, respectively, as described in... capital cost-to-charge ratios used to adjust covered charges are computed annually by the intermediary for..., 2003, statewide cost-to-charge ratios are used in those instances in which a hospital's operating...
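
A minimal sketch of the mechanism in these regulatory excerpts, assuming the usual outlier structure: a case's cost is estimated as covered charges times the cost-to-charge ratio, and outlier payment is based on estimated costs above a fixed-loss threshold. The 0.8 marginal rate below is illustrative only, not quoted from the excerpts:

```python
def estimated_cost(covered_charges, cost_to_charge_ratio):
    """Estimate a case's cost from billed charges; per the excerpts, the
    ratio is computed annually by the intermediary."""
    return covered_charges * cost_to_charge_ratio

def outlier_payment(covered_charges, ratio, fixed_loss_threshold, marginal_rate=0.8):
    """Simplified sketch: pay a fraction of estimated costs above the
    threshold. The marginal_rate default is illustrative, not regulatory."""
    cost = estimated_cost(covered_charges, ratio)
    return max(0.0, cost - fixed_loss_threshold) * marginal_rate
```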

  10. Cost Control

    ERIC Educational Resources Information Center

    Foreman, Phillip

    2009-01-01

    Education administrators involved in construction initiatives unanimously agree that when it comes to change orders, less is more. Change orders have a negative rippling effect of driving up building costs and producing expensive project delays that often interfere with school operations and schedules. Some change orders are initiated by schools…

  11. Low Cost Hydrogen Production Platform

    SciTech Connect

    Timothy M. Aaron, Jerome T. Jankowiak

    2009-10-16

    A technology and design evaluation was carried out for the development of a turnkey hydrogen production system in the range of 2.4 - 12 kg/h of hydrogen. The design is based on existing SMR technology and existing chemical processes and technologies to meet the design objectives. Consequently, the system design consists of a steam methane reformer, PSA system for hydrogen purification, natural gas compression, steam generation and all components and heat exchangers required for the production of hydrogen. The focus of the program is on packaging, system integration and an overall step change in the cost of capital required for the production of hydrogen at small scale. To assist in this effort, subcontractors were brought in to evaluate the design concepts and to assist in meeting the overall goals of the program. Praxair supplied the overall system and process design and the subcontractors were used to evaluate the components and system from a manufacturing and overall design optimization viewpoint. Design for manufacturing and assembly (DFMA) techniques, computer models and laboratory/full-scale testing of components were utilized to optimize the design during all phases of the design development. Early in the program evaluation, a review of existing Praxair hydrogen facilities showed that over 50% of the installed cost of a SMR based hydrogen plant is associated with the high temperature components (reformer, shift, steam generation, and various high temperature heat exchange). The main effort of the initial phase of the program was to develop an integrated high temperature component for these related functions. Initially, six independent concepts were developed and the processes were modeled to determine overall feasibility. The six concepts were eventually narrowed down to the highest potential concept. A US patent was awarded in February 2009 for the Praxair integrated high temperature component design. A risk analysis of the high temperature component was

  12. Consumer Security Perceptions and the Perceived Influence on Adopting Cloud Computing: A Quantitative Study Using the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Paquet, Katherine G.

    2013-01-01

    Cloud computing may provide cost benefits for organizations by eliminating the overhead costs of software, hardware, and maintenance (e.g., license renewals, upgrading software, servers and their physical storage space, administration along with funding a large IT department). In addition to the promised savings, the organization may require…

  13. The updated algorithm of the Energy Consumption Program (ECP): A computer model simulating heating and cooling energy loads in buildings

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.

    1979-01-01

    The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and compute thermal and electric energy consumption and cost. This article reports on the new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
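
The ±10% accuracy claim amounts to a simple tolerance check against metered consumption; a minimal sketch:

```python
def within_tolerance(simulated_kwh, metered_kwh, tol=0.10):
    """Check a simulated energy total against the meter reading,
    using the ±10% band mentioned in the abstract."""
    return abs(simulated_kwh - metered_kwh) <= tol * metered_kwh
```

For example, a simulation of 108 kWh against a 100 kWh meter reading passes, while 115 kWh does not.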

  14. Cost considerations for interstellar missions

    NASA Astrophysics Data System (ADS)

    Andrews, Dana G.

    This paper examines the technical and economic feasibility of interstellar exploration. Three candidate interstellar propulsion systems are evaluated with respect to technical viability and compared on an estimated cost basis. Two of the systems, the laser-propelled lightsail (LPL) and the particle-beam propelled magsail (PBPM), appear to be technically feasible and capable of supporting one-way probes to nearby star systems within the lifetime of the principal investigators, if enough energy is available. The third propulsion system, the antimatter rocket, requires additional proof-of-concept demonstrations before its feasibility can be evaluated. Computer simulations of the acceleration and deceleration interactions of LPL and PBPM were completed and spacecraft configurations optimized for minimum energy usage are noted. The optimum LPL transfers about ten percent of the laser beam energy into kinetic energy of the spacecraft while the optimum PBPM transfers about thirty percent. Since particle beam generators are roughly twice as energy efficient as large lasers, the PBPM propulsion system requires roughly one-sixth the busbar electrical energy an LPL system would require to launch an identical payload. The total beam energy requirement for an interstellar probe mission is roughly 10^20 joules, which would require the complete fissioning of one thousand tons of uranium assuming thirty-five percent powerplant efficiency. This is roughly equivalent to a recurring cost per flight of $3.0 billion in reactor-grade enriched uranium at today's prices. Therefore, interstellar flight is an expensive proposition, but not unaffordable, if the nonrecurring costs of building the powerplant can be minimized.
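
The "one-sixth the busbar electrical energy" figure follows directly from the quoted efficiencies (10% vs. 30% beam-to-kinetic-energy transfer, with particle-beam generators roughly twice as efficient as large lasers). The absolute energy and efficiency values below are hypothetical:

```python
def busbar_energy(payload_kinetic_energy_j, beam_to_ke_fraction, generator_efficiency):
    """Electrical (busbar) energy needed to deliver a given payload
    kinetic energy through a beamed-propulsion system."""
    return payload_kinetic_energy_j / (beam_to_ke_fraction * generator_efficiency)

ke = 1.0e19          # J, hypothetical payload kinetic energy
eta_laser = 0.25     # hypothetical laser wall-plug efficiency
lpl = busbar_energy(ke, 0.10, eta_laser)        # lightsail: 10% transfer
pbpm = busbar_energy(ke, 0.30, 2 * eta_laser)   # magsail: 30% transfer, 2x generator
ratio = lpl / pbpm   # 6.0: the PBPM needs one-sixth the busbar energy
```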

  15. Computer viruses

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    The worm, Trojan horse, bacterium, and virus are destructive programs that attack information stored in a computer's memory. Virus programs, which propagate by incorporating copies of themselves into other programs, are a growing menace in the late-1980s world of unprotected, networked workstations and personal computers. Limited immunity is offered by memory protection hardware, digitally authenticated object programs, and antibody programs that kill specific viruses. Additional immunity can be gained from the practice of digital hygiene, primarily the refusal to use software from untrusted sources. Full immunity requires attention in a social dimension, the accountability of programmers.

  16. Computer systems

    NASA Technical Reports Server (NTRS)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  17. Computing technology in the 1980's. [computers

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Advances in computing technology have been led by consistently improving semiconductor technology. The semiconductor industry has turned out ever faster, smaller, and less expensive devices since transistorized computers were first introduced 20 years ago. For the next decade, there appear to be new advances possible, with the rate of introduction of improved devices at least equal to the historic trends. The implication of these projections is that computers will enter new markets and will truly be pervasive in business, home, and factory as their cost diminishes and their computational power expands to new levels. The computer industry as we know it today will be greatly altered in the next decade, primarily because the raw computer system will give way to computer-based turn-key information and control systems.

  18. Computational chemistry

    NASA Technical Reports Server (NTRS)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry has application in studying catalysis and the properties of polymers, both of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  19. Additive Manufacturing of Hybrid Circuits

    NASA Astrophysics Data System (ADS)

    Sarobol, Pylin; Cook, Adam; Clem, Paul G.; Keicher, David; Hirschfeld, Deidre; Hall, Aaron C.; Bell, Nelson S.

    2016-07-01

    There is a rising interest in developing functional electronics using additively manufactured components. Considerations in materials selection and pathways to forming hybrid circuits and devices must demonstrate useful electronic function; must enable integration; and must complement the complex shape, low cost, high volume, and high functionality of structural but generally electronically passive additively manufactured components. This article reviews several emerging technologies being used in industry and research/development to provide integration advantages of fabricating multilayer hybrid circuits or devices. First, we review a maskless, noncontact, direct write (DW) technology that excels in the deposition of metallic colloid inks for electrical interconnects. Second, we review a complementary technology, aerosol deposition (AD), which excels in the deposition of metallic and ceramic powder as consolidated, thick conformal coatings and is additionally patternable through masking. Finally, we show examples of hybrid circuits/devices integrated beyond 2-D planes, using combinations of DW or AD processes and conventional, established processes.

  20. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The idea of cost decomposition is summarized to aid in determining the relative cost (or 'price') of each component of a linear dynamic system under quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.
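
    The decomposition can be illustrated with a small numerical sketch. In one common formulation (assumed here for illustration, not necessarily Skelton and Yousuff's exact definitions), a stable system driven by white noise has a steady-state covariance X solving the Lyapunov equation AX + XAᵀ + W = 0; the total quadratic cost is tr(QX), and the cost attributed to each component is the corresponding diagonal entry of QX, so the component costs sum to the total. The matrices below are invented for illustration:

```python
import numpy as np

def solve_lyapunov(A, W):
    """Solve A X + X A^T + W = 0 via the Kronecker-product identity."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -W.flatten()).reshape(n, n)

# Stable system x' = A x + w with disturbance intensity W and cost weight Q
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
W = np.eye(2)
Q = np.diag([1.0, 4.0])

X = solve_lyapunov(A, W)           # steady-state state covariance
component_costs = np.diag(Q @ X)   # cost attributed to each state/component
total_cost = np.trace(Q @ X)
print(component_costs, total_cost)
```

    The two component costs sum exactly to the total cost, which is what makes such a decomposition usable for "pricing" the components of a system.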

  1. Computational Biology, Advanced Scientific Computing, and Emerging Computational Architectures

    SciTech Connect

    2007-06-27

    This CRADA was established at the start of FY02 with $200 K from IBM and matching funds from DOE to support post-doctoral fellows in collaborative research between International Business Machines and Oak Ridge National Laboratory to explore effective use of emerging petascale computational architectures for the solution of computational biology problems. 'No cost' extensions of the CRADA were negotiated with IBM for FY03 and FY04.

  2. A Home Computer Primer.

    ERIC Educational Resources Information Center

    Stone, Antonia

    1982-01-01

    Provides general information on currently available microcomputers, computer programs (software), hardware requirements, software sources, costs, computer games, and programming. Includes a list of popular microcomputers, providing price category, model, list price, software (cassette, tape, disk), monitor specifications, amount of random access…

  3. Using Technology to Control Costs

    ERIC Educational Resources Information Center

    Ho, Simon; Schoenberg, Doug; Richards, Dan; Morath, Michael

    2009-01-01

    In this article, the authors examine the use of technology to control costs in the child care industry. One of these technology solutions is Software-as-a-Service (SaaS). SaaS solutions can help child care providers save money in many aspects of center management. In addition to cost savings, SaaS solutions are also particularly appealing to…

  4. Using technology to support investigations in the electronic age: tracking hackers to large scale international computer fraud

    NASA Astrophysics Data System (ADS)

    McFall, Steve

    1994-03-01

    With the increase in business automation and the widespread availability and low cost of computer systems, law enforcement agencies have seen a corresponding increase in criminal acts involving computers. The examination of computer evidence is a new field of forensic science with numerous opportunities for research and development. Research is needed to develop new software utilities to examine computer storage media, expert systems capable of finding criminal activity in large amounts of data, and to find methods of recovering data from chemically and physically damaged computer storage media. In addition, defeating encryption and password protection of computer files is also a topic requiring more research and development.

  5. Advanced Fuel Cycle Cost Basis

    SciTech Connect

    D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert

    2007-04-01

    This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 26 cost modules—24 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, and high-level waste.

  6. Advanced Fuel Cycle Cost Basis

    SciTech Connect

    D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider

    2008-03-01

    This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.

  7. Advanced Fuel Cycle Cost Basis

    SciTech Connect

    D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider

    2009-12-01

    This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.

  8. Additive Manufacturing: Making Imagination the Major Limitation

    NASA Astrophysics Data System (ADS)

    Zhai, Yuwei; Lados, Diana A.; LaGoy, Jane L.

    2014-05-01

    Additive manufacturing (AM) refers to an advanced technology used for the fabrication of three-dimensional near-net-shaped functional components directly from computer models, using unit materials. The fundamentals and working principle of AM offer several advantages, including near-net-shape capabilities, superior design and geometrical flexibility, innovative multi-material fabrication, reduced tooling and fixturing, shorter cycle time for design and manufacturing, instant local production at a global scale, and material, energy, and cost efficiency. Well suited to the demands of the modern manufacturing climate, AM is viewed as the new industrial revolution, making its way into a continuously increasing number of industries, such as aerospace, defense, automotive, medical, architecture, art, jewelry, and food. This overview was created to relate the historical evolution of the AM technology to its state-of-the-art developments and emerging applications. Generic thoughts on the microstructural characteristics, properties, and performance of AM-fabricated materials will also be discussed, primarily related to metallic materials. This write-up will introduce the general reader to specifics of the AM field vis-à-vis advantages and common techniques, materials and properties, current applications, and future opportunities.

  9. Computational mechanics

    SciTech Connect

    Goudreau, G.L.

    1993-03-01

    The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

  10. Metal Additive Manufacturing: A Review

    NASA Astrophysics Data System (ADS)

    Frazier, William E.

    2014-06-01

    This paper reviews the state-of-the-art of an important, rapidly emerging manufacturing technology that is alternatively called additive manufacturing (AM), direct digital manufacturing, free form fabrication, or 3D printing. A broad contextual overview of metallic AM is provided. AM has the potential to revolutionize the global parts manufacturing and logistics landscape. It enables distributed manufacturing and the production of parts on demand while offering the potential to reduce cost, energy consumption, and carbon footprint. This paper explores the material science, processes, and business considerations associated with achieving these performance gains. It is concluded that a paradigm shift is required in order to fully exploit the potential of AM.

  11. Impact of Classroom Computer Use on Computer Anxiety.

    ERIC Educational Resources Information Center

    Lambert, Matthew E.; And Others

    Increasing use of computer programs for undergraduate psychology education has raised concern over the impact of computer anxiety on educational performance. Additionally, some researchers have indicated that classroom computer use can exacerbate pre-existing computer anxiety. To evaluate the relationship between in-class computer use and computer…

  12. [Food additives and healthiness].

    PubMed

    Heinonen, Marina

    2014-01-01

    Additives are used for improving food structure or preventing its spoilage, for example. Many substances used as additives are also naturally present in food. The safety of additives is evaluated according to commonly agreed principles. If high concentrations of an additive cause adverse health effects for humans, a limit of acceptable daily intake (ADI) is set for it. An additive is a risk only when ADI is exceeded. The healthiness of food is measured on the basis of nutrient density and scientifically proven effects.

  13. Cost-Effectiveness of Clinical Decision Support System in Improving Maternal Health Care in Ghana

    PubMed Central

    Dalaba, Maxwell Ayindenaba; Akweongo, Patricia; Aborigo, Raymond Akawire; Saronga, Happiness Pius; Williams, John; Blank, Antje; Kaltschmidt, Jens; Sauerborn, Rainer; Loukanova, Svetla

    2015-01-01

    Objective This paper investigated the cost-effectiveness of a computer-assisted Clinical Decision Support System (CDSS) in the identification of maternal complications in Ghana. Methods A cost-effectiveness analysis was performed in a before- and after-intervention study. Analysis was conducted from the provider's perspective. The intervention area was the Kassena-Nankana district, where the computer-assisted CDSS was used by midwives in maternal care in six selected health centres. Six selected health centres in the Builsa district served as the non-intervention group, where the normal Ghana Health Service activities were being carried out. Results Computer-assisted CDSS increased the detection of pregnancy complications during antenatal care (ANC) in the intervention health centres (before-intervention = 9/1,000 ANC attendance; after-intervention = 12/1,000 ANC attendance; P-value = 0.010). In the intervention health centres, there was a decrease in the number of complications during labour by 1.1%, though the difference was not statistically significant (before-intervention = 107/1,000 labour clients; after-intervention = 96/1,000 labour clients; P-value = 0.305). Also, at the intervention health centres, the average cost per pregnancy complication detected during ANC (cost-effectiveness ratio) decreased from US$17,017.58 (before-intervention) to US$15,207.5 (after-intervention). The incremental cost-effectiveness ratio (ICER) was estimated at US$1,142. Considering only additional costs (the cost of the computer-assisted CDSS), the cost per pregnancy complication detected was US$285. Conclusions Computer-assisted CDSS has the potential to identify complications during pregnancy and yield a marginal reduction in labour complications. Implementing computer-assisted CDSS is more costly but more effective in the detection of pregnancy complications compared to routine maternal care, hence making the decision to implement CDSS very complex. Policy makers should however be guided by whether
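
    The incremental cost-effectiveness ratio reported above follows the standard formula: extra cost divided by extra effect. A minimal sketch with made-up numbers (not the study's underlying cost and effect data):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical figures: an intervention costing $600 more that detects
# 3 additional complications per 1,000 ANC attendances
print(icer(1000.0, 400.0, 12.0, 9.0))  # 200.0 dollars per extra complication detected
```

    Plugging in the study's actual incremental costs and detected-complication counts would reproduce its reported ratios.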

  14. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increasing time required for their analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Compute Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
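
    The speed-versus-economy trade-off the authors quantify can be sketched as follows; `train_model` and the hourly rate are placeholders of my own, not the paper's actual pipeline or Amazon's pricing:

```python
from concurrent.futures import ThreadPoolExecutor

def train_model(n_samples):
    """Placeholder for building one ligand-based model on n_samples records."""
    return sum(i * i for i in range(n_samples)) % 97  # dummy model "score"

def build_all(sizes, workers):
    """Build one model per data set, with several builds in flight at once."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(train_model, sizes))

def cloud_cost(hours_per_model, n_models, workers, rate_per_hour):
    """Total spend stays roughly constant; more workers just finish sooner."""
    wall_hours = hours_per_model * n_models / workers
    return wall_hours * workers * rate_per_hour, wall_hours

print(build_all([1000, 2000, 3000], workers=2))
print(cloud_cost(1.5, 8, workers=4, rate_per_hour=0.10))
```

    Because billed instance-hours scale with workers while wall-clock time scales inversely, the dollar cost of a fixed batch of models is easy to quantify up front, which is the economic transparency the abstract highlights.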

  15. Mobile HIV Screening in Cape Town, South Africa: Clinical Impact, Cost and Cost-Effectiveness

    PubMed Central

    Bassett, Ingrid V.; Govindasamy, Darshini; Erlwanger, Alison S.; Hyle, Emily P.; Kranzer, Katharina; van Schaik, Nienke; Noubary, Farzad; Paltiel, A. David; Wood, Robin; Walensky, Rochelle P.; Losina, Elena; Bekker, Linda-Gail; Freedberg, Kenneth A.

    2014-01-01

    Background Mobile HIV screening may facilitate early HIV diagnosis. Our objective was to examine the cost-effectiveness of adding a mobile screening unit to current medical facility-based HIV testing in Cape Town, South Africa. Methods and Findings We used the Cost Effectiveness of Preventing AIDS Complications International (CEPAC-I) computer simulation model to evaluate two HIV screening strategies in Cape Town: 1) medical facility-based testing (the current standard of care) and 2) addition of a mobile HIV-testing unit intervention in the same community. Baseline input parameters were derived from a Cape Town-based mobile unit that tested 18,870 individuals over 2 years: prevalence of previously undiagnosed HIV (6.6%), mean CD4 count at diagnosis (males 423/µL, females 516/µL), CD4 count-dependent linkage to care rates (males 31%–58%, females 49%–58%), mobile unit intervention cost (includes acquisition, operation and HIV test costs, $29.30 per negative result and $31.30 per positive result). We conducted extensive sensitivity analyses to evaluate input uncertainty. Model outcomes included site of HIV diagnosis, life expectancy, medical costs, and the incremental cost-effectiveness ratio (ICER) of the intervention compared to medical facility-based testing. We considered the intervention to be “very cost-effective” when the ICER was less than South Africa's annual per capita Gross Domestic Product (GDP) ($8,200 in 2012). We projected that, with medical facility-based testing, the discounted (undiscounted) HIV-infected population life expectancy was 132.2 (197.7) months; this increased to 140.7 (211.7) months with the addition of the mobile unit. The ICER for the mobile unit was $2,400/year of life saved (YLS). Results were most sensitive to the previously undiagnosed HIV prevalence, linkage to care rates, and frequency of HIV testing at medical facilities. Conclusion The addition of mobile HIV screening to current testing programs can improve survival
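
    The study's decision rule is compact enough to restate in code. The numbers below come from the abstract (discounted life-expectancy months, the US$2,400/YLS ICER, and the US$8,200 per-capita GDP threshold); the helper functions are hypothetical names of my own:

```python
def years_of_life_saved(le_new_months, le_old_months):
    """Convert a gain in life-expectancy months into years of life saved (YLS)."""
    return (le_new_months - le_old_months) / 12.0

def very_cost_effective(icer_per_yls, gdp_per_capita):
    """The study's rule of thumb: 'very cost-effective' if ICER < per-capita GDP."""
    return icer_per_yls < gdp_per_capita

print(round(years_of_life_saved(140.7, 132.2), 2))  # discounted gain per person, in YLS
print(very_cost_effective(2400, 8200))              # True: well under the GDP threshold
```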

  16. Artist meets computer

    NASA Astrophysics Data System (ADS)

    Faggin, Marzia

    1997-04-01

    I would like to share my experience of using the computer for creating art. I am a graphic designer originally trained without any exposure to the computer. I graduated in July of 1994 from a four-year curriculum of graphic design at the Istituto Europeo di Design in Milan, Italy. Italy is famous for its excellent design capability. Art and beauty influence the life of nearly every Italian. Everywhere you look on the streets there is art, from grandiose architecture to the displays in shop windows. A keen esthetic sense and a search for and appreciation of quality permeate all aspects of Italian life, manifesting in the way people cut their hair, the style of their clothes, and how furniture and everyday objects are designed. Italian taste is fine-tuned to the appreciation of refined textiles, and quality materials are often enhanced by simple design. The Italian culture has a long history of excellent artisanship, and good craftsmanship is highly appreciated. Gadgets have never been popular in Italian society. Gadgets are considered useless objects which add nothing to a person's life, and since they cost money they are actually viewed as a waste. The same is true for food: except in the big cities filled with tourists, fast food chains have never survived. Genuine and simple food is what people truly desire. A typical Italian sandwich, for example, is minimalist; the essential ingredients are left alone without additional sauces, because if something is delicious by itself, why would anyone want to disguise its taste?

  17. Cincinnati Big Area Additive Manufacturing (BAAM)

    SciTech Connect

    Duty, Chad E.; Love, Lonnie J.

    2015-03-04

    Oak Ridge National Laboratory (ORNL) worked with Cincinnati Incorporated (CI) to demonstrate Big Area Additive Manufacturing which increases the speed of the additive manufacturing (AM) process by over 1000X, increases the size of parts by over 10X and shows a cost reduction of over 100X. ORNL worked with CI to transition the Big Area Additive Manufacturing (BAAM) technology from a proof-of-principle (TRL 2-3) demonstration to a prototype product stage (TRL 7-8).

  18. Commodity-Based Computing Clusters at PPPL.

    NASA Astrophysics Data System (ADS)

    Wah, Darren; Davis, Steven L.; Johansson, Marques; Klasky, Scott; Tang, William; Valeo, Ernest

    2002-11-01

    In order to cost-effectively facilitate mid-scale serial and parallel computations and code development, a number of commodity-based clusters have been built at PPPL. A recent addition is the PETREL cluster, consisting of 100 dual-processor machines, both Intel and AMD, interconnected by a 100Mbit switch. Sixteen machines have an additional Myrinet 2000 interconnect. Also underway is the implementation of a Prototype Topical Computing Facility which will explore the effectiveness and scaling of cluster computing for larger scale fusion codes, specifically including those being developed under the SCIDAC auspices. This facility will consist of two parts: a 64 dual-processor node cluster, with high speed interconnect, and a 16 dual-processor node cluster, utilizing gigabit networking, built for the purpose of exploring grid-enabled computing. The initial grid explorations will be in collaboration with the Princeton University Institute for Computational Science and Engineering (PICSciE), where a 16 processor cluster dedicated to investigation of grid computing is being built. The initial objectives are to (1) grid-enable the GTC code and an MHD code, making use of MPICH-G2 and (2) implement grid-enabled interactive visualization using DXMPI and the Chromium API.

  19. Computers and Computer Resources.

    ERIC Educational Resources Information Center

    Bitter, Gary

    1980-01-01

    This resource directory provides brief evaluative descriptions of six popular home computers and lists selected sources of educational software, computer books, and magazines. For a related article on microcomputers in the schools, see p53-58 of this journal issue. (SJL)

  20. Ubiquitous Computing: The Universal Use of Computers on College Campuses.

    ERIC Educational Resources Information Center

    Brown, David G., Ed.

    This book is a collection of vignettes from 13 universities where everyone on campus has his or her own computer. These 13 institutions have instituted "ubiquitous computing" in very different ways at very different costs. The chapters are: (1) "Introduction: The Ubiquitous Computing Movement" (David G. Brown); (2) "Dartmouth College" (Malcolm…

  1. Amorphous Computing

    NASA Astrophysics Data System (ADS)

    Sussman, Gerald

    2002-03-01

    Digital computers have always been constructed to behave as precise arrangements of reliable parts, and our techniques for organizing computations depend upon this precision and reliability. Two emerging technologies, however, are beginning to undercut these assumptions about constructing and programming computers. These technologies -- microfabrication and bioengineering -- will make it possible to assemble systems composed of myriad information-processing units at almost no cost, provided: 1) that not all the units need to work correctly; and 2) that there is no need to manufacture precise geometrical arrangements or interconnection patterns among them. Microelectronic mechanical components are becoming so inexpensive to manufacture that we can anticipate combining logic circuits, microsensors, actuators, and communications devices integrated on the same chip to produce particles that could be mixed with bulk materials, such as paints, gels, and concrete. Imagine coating bridges or buildings with smart paint that can sense and report on traffic and wind loads and monitor the structural integrity of the bridge. A smart paint coating on a wall could sense vibrations, monitor the premises for intruders, or cancel noise. Even more striking, there has been such astounding progress in understanding the biochemical mechanisms in individual cells that it appears we'll be able to harness these mechanisms to construct digital-logic circuits. Imagine a discipline of cellular engineering that could tailor-make biological cells that function as sensors and actuators, as programmable delivery vehicles for pharmaceuticals, as chemical factories for the assembly of nanoscale structures. Fabricating such systems seems to be within our reach, even if it is not yet within our grasp. Fabrication, however, is only part of the story. We can envision producing vast quantities of individual computing elements, whether microfabricated particles, engineered cells, or macromolecular computing

  2. Transmetalation from B to Rh in the course of the catalytic asymmetric 1,4-addition reaction of phenylboronic acid to enones: a computational comparison of diphosphane and diene ligands.

    PubMed

    Li, You-Gui; He, Gang; Qin, Hua-Li; Kantchev, Eric Assen B

    2015-02-14

    Transmetalation is a key elementary reaction of many important catalytic reactions. Among these, 1,4-addition of arylboronic acids to organic acceptors such as α,β-unsaturated ketones has emerged as one of the most important methods for asymmetric C-C bond formation. A key intermediate for the B-to-Rh transfer arising from quaternization on a boronic acid by a Rh-bound hydroxide (the active catalyst) has been proposed. Herein, DFT calculations (IEFPCM/PBE0/DGDZVP level of theory) establish the viability of this proposal, and characterize the associated pathways. The delivery of phenylboronic acid in the orientation suited for the B-to-Rh transfer from the very beginning is energetically preferable, and occurs with expulsion of Rh-coordinated water molecules. For the bulkier binap ligand, the barriers are higher (particularly for the phenylboronic acid activation step) due to a less favourable entropy term to the free energy, in accordance with the experimentally observed slower transmetalation rate. PMID:25422851

  4. Automatic Computer Mapping of Terrain

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.

    1971-01-01

    Computer processing of 17 wavelength bands of visible, reflective infrared, and thermal infrared scanner spectrometer data, and of three wavelength bands derived from color aerial film has resulted in successful automatic computer mapping of eight or more terrain classes in a Yellowstone National Park test site. The tests involved: (1) supervised and non-supervised computer programs; (2) special preprocessing of the scanner data to reduce computer processing time and cost, and improve the accuracy; and (3) studies of the effectiveness of the proposed Earth Resources Technology Satellite (ERTS) data channels in the automatic mapping of the same terrain, based on simulations, using the same set of scanner data. The following terrain classes have been mapped with greater than 80 percent accuracy in a 12-square-mile area with 1,800 feet of relief: (1) bedrock exposures, (2) vegetated rock rubble, (3) talus, (4) glacial kame meadow, (5) glacial till meadow, (6) forest, (7) bog, and (8) water. In addition, shadows of clouds and cliffs are depicted, but were greatly reduced by using preprocessing techniques.
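
    The supervised programs mentioned above assign each pixel to a terrain class based on its vector of band values. A minimal nearest-mean sketch in that spirit, with three invented bands standing in for the actual scanner channels:

```python
import numpy as np

def train_class_means(pixels, labels):
    """Supervised step: mean spectral signature per terrain class."""
    return {c: pixels[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(pixels, means):
    """Assign each pixel to the class whose mean signature is nearest."""
    classes = list(means)
    dists = np.stack([np.linalg.norm(pixels - means[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Toy 3-band training pixels for two classes: water (low reflectance) and forest
train = np.array([[0.10, 0.10, 0.05], [0.12, 0.09, 0.06],
                  [0.30, 0.50, 0.40], [0.28, 0.55, 0.42]])
labels = np.array(["water", "water", "forest", "forest"])
means = train_class_means(train, labels)
print(classify(np.array([[0.11, 0.10, 0.05], [0.29, 0.52, 0.41]]), means))
```

    Real classifiers of this era also used maximum-likelihood decision rules, but the nearest-mean version shows the basic supervised train-then-label flow.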

  5. Software for Tracking Costs of Mars Projects

    NASA Technical Reports Server (NTRS)

    Wong, Alvin; Warfield, Keith

    2003-01-01

    The Mars Cost Tracking Model is a computer program that administers a system set up for tracking the costs of future NASA projects that pertain to Mars. Previously, no such tracking system existed, and documentation was written in a variety of formats and scattered in various places. It was difficult to justify costs or even track the history of costs of a spacecraft mission to Mars. The present software enables users to maintain all cost-model definitions, documentation, and justifications of cost estimates in one computer system that is accessible via the Internet. The software provides sign-off safeguards to ensure the reliability of information entered into the system. This system may eventually be used to track the costs of projects other than only those that pertain to Mars.

  6. Additional Security Considerations for Grid Management

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.

    2003-01-01

    The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as to how they can be managed.

  7. Cloud Computing: An Overview

    NASA Astrophysics Data System (ADS)

    Qian, Ling; Luo, Zhiguo; Du, Yujian; Guo, Leitao

In order to support the maximum number of users and elastic services with minimal resources, Internet service providers invented cloud computing. Within a few years, cloud computing has become one of the hottest technologies. From the publication of Google's core papers beginning in 2003, to the commercialization of Amazon EC2 in 2006, to the service offering of AT&T Synaptic Hosting, cloud computing has evolved from internal IT systems to a public service, from a cost-saving tool to a revenue generator, and from ISPs to telecoms. This paper introduces the concept, history, pros and cons of cloud computing, as well as its value chain and standardization efforts.

  8. Parametric Cost Deployment

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1995-01-01

    Parametric cost analysis is a mathematical approach to estimating cost. Parametric cost analysis uses non-cost parameters, such as quality characteristics, to estimate the cost to bring forth, sustain, and retire a product. This paper reviews parametric cost analysis and shows how it can be used within the cost deployment process.
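The abstract above describes estimating cost from non-cost parameters. A minimal sketch of one common form of such a model, a power-law cost estimating relationship (CER), is shown below; the coefficient values and the use of mass as the driving parameter are illustrative assumptions, not taken from the paper.

```python
# Hypothetical parametric cost estimating relationship (CER) of the
# power-law form cost = a * parameter^b, often used in parametric
# cost analysis. Coefficients a and b below are illustrative only.

def parametric_cost(weight_kg: float, a: float = 12_000.0, b: float = 0.85) -> float:
    """Estimate cost (dollars) from a non-cost parameter, here mass in kg."""
    return a * weight_kg ** b

# Estimates for a range of masses; larger items cost more, but with
# economies of scale (b < 1 means cost grows slower than mass).
estimates = {w: round(parametric_cost(w), 2) for w in (10, 100, 1000)}
print(estimates)
```

In practice the coefficients would be fitted by regression against historical project data rather than assumed.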

  9. Costing climate change.

    PubMed

    Reay, David S

    2002-12-15

    Debate over how, when, and even whether man-made greenhouse-gas emissions should be controlled has grown in intensity even faster than the levels of greenhouse gas in our atmosphere. Many argue that the costs involved in reducing emissions outweigh the potential economic damage of human-induced climate change. Here, existing cost-benefit analyses of greenhouse-gas reduction policies are examined, with a view to establishing whether any such global reductions are currently worthwhile. Potential for, and cost of, cutting our own individual greenhouse-gas emissions is then assessed. I find that many abatement strategies are able to deliver significant emission reductions at little or no net cost. Additionally, I find that there is huge potential for individuals to simultaneously cut their own greenhouse-gas emissions and save money. I conclude that cuts in global greenhouse-gas emissions, such as those of the Kyoto Protocol, cannot be justifiably dismissed as posing too large an economic burden.

  10. Parallel Processing Creates a Low-Cost Growth Path.

    ERIC Educational Resources Information Center

    Shekhel, Alex; Freeman, Eva

    1987-01-01

Discusses the advantages of parallel processor computers in terms of expandability, cost, performance, and reliability, and suggests that such computers be used in library automation systems as a cost-effective approach to planning for the growth of information services and computer applications. (CLB)

  11. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Fletcher, James C. (Inventor); Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1992-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  12. Polyimide processing additives

    NASA Technical Reports Server (NTRS)

    Pratt, J. Richard (Inventor); St.clair, Terry L. (Inventor); Stoakley, Diane M. (Inventor); Burks, Harold D. (Inventor)

    1993-01-01

    A process for preparing polyimides having enhanced melt flow properties is described. The process consists of heating a mixture of a high molecular weight poly-(amic acid) or polyimide with a low molecular weight amic acid or imide additive in the range of 0.05 to 15 percent by weight of the additive. The polyimide powders so obtained show improved processability, as evidenced by lower melt viscosity by capillary rheometry. Likewise, films prepared from mixtures of polymers with additives show improved processability with earlier onset of stretching by TMA.

  13. Adaptive computation algorithm for RBF neural network.

    PubMed

    Han, Hong-Gui; Qiao, Jun-Fei

    2012-02-01

A novel learning algorithm is proposed for nonlinear modelling and identification using radial basis function neural networks. The proposed method simplifies neural network training through the use of an adaptive computation algorithm (ACA). In addition, the convergence of the ACA is analyzed by the Lyapunov criterion. The proposed algorithm offers two important advantages. First, the model performance can be significantly improved through the ACA, and the modelling error is uniformly ultimately bounded. Second, the proposed ACA can reduce computational cost and accelerate the training speed. The proposed method is then employed to model a classical nonlinear system with a limit cycle and to identify a nonlinear dynamic system. Computational complexity analysis and simulation results demonstrate its effectiveness.
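For context, a minimal radial-basis-function network forward pass with Gaussian units is sketched below. This shows only the fixed-structure model; the paper's adaptive computation algorithm (ACA), which adapts the network during training, is not reproduced here, and the centers, widths, and weights are illustrative assumptions.

```python
import numpy as np

# Minimal RBF network forward pass with Gaussian units:
#   y(x) = sum_j w_j * exp(-||x - c_j||^2 / (2 * sigma_j^2))
# The network parameters below are illustrative, not from the paper.

def rbf_forward(x, centers, widths, weights):
    """Evaluate a Gaussian RBF network at input vector x."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances to centers
    phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian activations
    return float(phi @ weights)               # weighted sum -> scalar output

centers = np.array([[0.0], [1.0]])   # two hidden units on a 1-D input
widths = np.array([0.5, 0.5])
weights = np.array([1.0, -1.0])
print(rbf_forward(np.array([0.0]), centers, widths, weights))
```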

  14. Realistic costs of carbon capture

    SciTech Connect

Al Juaied, Mohammed (Belfer Center for Science and International Affairs); Whitmore, Adam

    2009-07-01

    There is a growing interest in carbon capture and storage (CCS) as a means of reducing carbon dioxide (CO2) emissions. However there are substantial uncertainties about the costs of CCS. Costs for pre-combustion capture with compression (i.e. excluding costs of transport and storage and any revenue from EOR associated with storage) are examined in this discussion paper for First-of-a-Kind (FOAK) plant and for more mature technologies, or Nth-of-a-Kind plant (NOAK). For FOAK plant using solid fuels the levelised cost of electricity on a 2008 basis is approximately 10 cents/kWh higher with capture than for conventional plants (with a range of 8-12 cents/kWh). Costs of abatement are found typically to be approximately US$150/tCO2 avoided (with a range of US$120-180/tCO2 avoided). For NOAK plants the additional cost of electricity with capture is approximately 2-5 cents/kWh, with costs of the range of US$35-70/tCO2 avoided. Costs of abatement with carbon capture for other fuels and technologies are also estimated for NOAK plants. The costs of abatement are calculated with reference to conventional SCPC plant for both emissions and costs of electricity. Estimates for both FOAK and NOAK are mainly based on cost data from 2008, which was at the end of a period of sustained escalation in the costs of power generation plant and other large capital projects. There are now indications of costs falling from these levels. This may reduce the costs of abatement and costs presented here may be 'peak of the market' estimates. If general cost levels return, for example, to those prevailing in 2005 to 2006 (by which time significant cost escalation had already occurred from previous levels), then costs of capture and compression for FOAK plants are expected to be US$110/tCO2 avoided (with a range of US$90-135/tCO2 avoided). For NOAK plants costs are expected to be US$25-50/tCO2. Based on these considerations a likely representative range of costs of abatement from CCS excluding

  15. Additional Types of Neuropathy

    MedlinePlus

Charcot's Joint, also called neuropathic arthropathy, ... can stop bone destruction and aid healing. Cranial neuropathy affects the 12 pairs of nerves ...

  16. Food Additives and Hyperkinesis

    ERIC Educational Resources Information Center

    Wender, Ester H.

    1977-01-01

    The hypothesis that food additives are causally associated with hyperkinesis and learning disabilities in children is reviewed, and available data are summarized. Available from: American Medical Association 535 North Dearborn Street Chicago, Illinois 60610. (JG)

  17. Smog control fuel additives

    SciTech Connect

    Lundby, W.

    1993-06-29

    A method is described of controlling, reducing or eliminating, ozone and related smog resulting from photochemical reactions between ozone and automotive or industrial gases comprising the addition of iodine or compounds of iodine to hydrocarbon-base fuels prior to or during combustion in an amount of about 1 part iodine per 240 to 10,000,000 parts fuel, by weight, to be accomplished by: (a) the addition of these inhibitors during or after the refining or manufacturing process of liquid fuels; (b) the production of these inhibitors for addition into fuel tanks, such as automotive or industrial tanks; or (c) the addition of these inhibitors into combustion chambers of equipment utilizing solid fuels for the purpose of reducing ozone.

  18. Additive manufacturing of hybrid circuits

    DOE PAGES

    Bell, Nelson S.; Sarobol, Pylin; Cook, Adam; Clem, Paul G.; Keicher, David M.; Hirschfeld, Deidre; Hall, Aaron Christopher

    2016-03-26

There is a rising interest in developing functional electronics using additively manufactured components. Considerations in materials selection and pathways to forming hybrid circuits and devices must demonstrate useful electronic function; must enable integration; and must complement the complex shape, low cost, high volume, and high functionality of structural but generally electronically passive additively manufactured components. This article reviews several emerging technologies being used in industry and research/development to provide integration advantages of fabricating multilayer hybrid circuits or devices. First, we review a maskless, noncontact, direct write (DW) technology that excels in the deposition of metallic colloid inks for electrical interconnects. Second, we review a complementary technology, aerosol deposition (AD), which excels in the deposition of metallic and ceramic powder as consolidated, thick conformal coatings and is additionally patternable through masking. As a result, we show examples of hybrid circuits/devices integrated beyond 2-D planes, using combinations of DW or AD processes and conventional, established processes.

  19. A new application for food customization with additive manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Serenó, L.; Vallicrosa, G.; Delgado, J.; Ciurana, J.

    2012-04-01

Additive Manufacturing (AM) technologies have emerged as a freeform approach capable of producing almost any complete three-dimensional (3D) object from computer-aided design (CAD) data by successively adding material layer by layer. Despite the broad range of possibilities, commercial AM technologies remain complex and expensive, making them suitable only for niche applications. The development of the Fab@Home system as an open AM technology opened a new range of possibilities for processing different materials, such as edible products. The main objective of this work is to analyze and optimize the manufacturing capacity of this system when producing 3D edible objects. A new heated syringe deposition tool was developed and several process parameters were optimized to adapt this technology to consumers' needs. The results of this study show the potential of this system to produce customized edible objects without requiring specialized operator knowledge, thereby saving manufacturing costs compared to traditional technologies.

  20. Health and economic costs of physical inactivity.

    PubMed

    Kruk, Joanna

    2014-01-01

Physical inactivity has reached epidemic levels in developed countries and is recognized as a serious public health problem. Recent evidence shows that a high percentage of individuals worldwide are physically inactive, i.e. do not achieve the WHO's present recommendation of 150 minutes of moderate- to vigorous-intensity activity per week in addition to usual activities. A sedentary lifestyle is one of the leading causes of death and a high risk factor for several chronic diseases, such as cancer, cardiovascular disease, type 2 diabetes, and osteoporosis. This article summarizes evidence for the relative risk of the civilization diseases attributable to physical inactivity and the most important conclusions from recent investigations computing the economic costs specific to physical inactivity. The findings provide the health and economic arguments people need to understand the consequences of a sedentary lifestyle. They may also be useful for public health policy in creating programmes for the prevention of physical inactivity.

  1. 38 CFR 17.274 - Cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Cost sharing. 17.274 Section 17.274 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS MEDICAL Civilian... the beneficiary cost share. (b) In addition to the beneficiary cost share, an annual (calendar...

  2. 34 CFR 304.21 - Allowable costs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Grantee § 304.21 Allowable costs. In addition to the allowable costs established in the Education Department General Administrative Regulations in 34 CFR 75.530 through 75.562, the following items are... 34 Education 2 2011-07-01 2010-07-01 true Allowable costs. 304.21 Section 304.21...

  3. 34 CFR 304.21 - Allowable costs.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Grantee § 304.21 Allowable costs. In addition to the allowable costs established in the Education Department General Administrative Regulations in 34 CFR 75.530 through 75.562, the following items are... 34 Education 2 2010-07-01 2010-07-01 false Allowable costs. 304.21 Section 304.21...

  4. Assessing the Cost Efficiency of Italian Universities

    ERIC Educational Resources Information Center

    Agasisti, Tommaso; Salerno, Carlo

    2007-01-01

    This study uses Data Envelopment Analysis to evaluate the cost efficiency of 52 Italian public universities. In addition to being one of the first such cost studies of the Italian system, it explicitly takes into account the internal cost structure of institutions' education programs; a task not prevalent in past Data Envelopment Analysis studies…

  5. 34 CFR 304.21 - Allowable costs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 2 2012-07-01 2012-07-01 false Allowable costs. 304.21 Section 304.21 Education... Grantee § 304.21 Allowable costs. In addition to the allowable costs established in the Education Department General Administrative Regulations in 34 CFR 75.530 through 75.562, the following items...

  6. 34 CFR 304.21 - Allowable costs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 34 Education 2 2014-07-01 2013-07-01 true Allowable costs. 304.21 Section 304.21 Education... Grantee § 304.21 Allowable costs. In addition to the allowable costs established in the Education Department General Administrative Regulations in 34 CFR 75.530 through 75.562, the following items...

  7. 34 CFR 304.21 - Allowable costs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 34 Education 2 2013-07-01 2013-07-01 false Allowable costs. 304.21 Section 304.21 Education... Grantee § 304.21 Allowable costs. In addition to the allowable costs established in the Education Department General Administrative Regulations in 34 CFR 75.530 through 75.562, the following items...

  8. The cost of waste: Coatings

    SciTech Connect

    Rice, S.

    1996-06-01

Some of the greatest opportunities for tapping into hidden profit potential at industrial coatings manufacturing plants may be in their waste or, rather, in their ability to eliminate the root causes of waste generation. This is because the total cost of waste (TCOW) does not appear only in a plant's cost to dispose of or recycle its waste. TCOW has four principal components, each of which appears on a different line in the monthly financial accounting report. An additional potential component--the production plant capacity and personnel that are utilized producing controllable waste instead of product for sale and profit--fails to show up at all. Expanding the focus of waste reduction from merely reducing an individual component's costs to eliminating the root causes of controllable waste generation provides significant additional profits and frees plant production equipment and people to make more product for sale and profit, and to reduce per-unit manufacturing costs.

  9. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems without compromising performance, through the use of many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, as can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  10. Phenylethynyl Containing Reactive Additives

    NASA Technical Reports Server (NTRS)

    Connell, John W. (Inventor); Smith, Joseph G., Jr. (Inventor); Hergenrother, Paul M. (Inventor)

    2002-01-01

Phenylethynyl containing reactive additives were prepared from aromatic diamine containing phenylethynyl groups and various ratios of phthalic anhydride and 4-phenylethynylphthalic anhydride in glacial acetic acid to form the imide in one step or in N-methyl-2-pyrrolidinone to form the amide acid intermediate. The reactive additives were mixed in various amounts (10% to 90%) with oligomers containing either terminal or pendent phenylethynyl groups (or both) to reduce the melt viscosity and thereby enhance processability. Upon thermal cure, the additives react and become chemically incorporated into the matrix and effect an increase in crosslink density relative to that of the host resin. This resultant increase in crosslink density has advantageous consequences on the cured resin properties such as higher glass transition temperature and higher modulus as compared to that of the host resin.

  11. Computer-based Approaches to Patient Education

    PubMed Central

    Lewis, Deborah

    1999-01-01

    All articles indexed in MEDLINE or CINAHL, related to the use of computer technology in patient education, and published in peer-reviewed journals between 1971 and 1998 were selected for review. Sixty-six articles, including 21 research-based reports, were identified. Forty-five percent of the studies were related to the management of chronic disease. Thirteen studies described an improvement in knowledge scores or clinical outcomes when computer-based patient education was compared with traditional instruction. Additional articles examined patients' computer experience, socioeconomic status, race, and gender and found no significant differences when compared with program outcomes. Sixteen of the 21 research-based studies had effect sizes greater than 0.5, indicating a significant change in the described outcome when the study subjects participated in computer-based patient education. The findings from this review support computer-based education as an effective strategy for transfer of knowledge and skill development for patients. The limited number of research studies (N = 21) points to the need for additional research. Recommendations for new studies include cost-benefit analysis and the impact of these new technologies on health outcomes over time. PMID:10428001

  12. Cloud Computing and Its Applications in GIS

    NASA Astrophysics Data System (ADS)

    Kang, Cao

    2011-12-01

    of cloud computing. This paper presents a parallel Euclidean distance algorithm that works seamlessly with the distributed nature of cloud computing infrastructures. The mechanism of this algorithm is to subdivide a raster image into sub-images and wrap them with a one pixel deep edge layer of individually computed distance information. Each sub-image is then processed by a separate node, after which the resulting sub-images are reassembled into the final output. It is shown that while any rectangular sub-image shape can be used, those approximating squares are computationally optimal. This study also serves as a demonstration of this subdivide and layer-wrap strategy, which would enable the migration of many truly spatial GIS algorithms to cloud computing infrastructures. However, this research also indicates that certain spatial GIS algorithms such as cost distance cannot be migrated by adopting this mechanism, which presents significant challenges for the development of cloud-based GIS systems. The third article is entitled "A Distributed Storage Schema for Cloud Computing based Raster GIS Systems". This paper proposes a NoSQL Database Management System (NDDBMS) based raster GIS data storage schema. NDDBMS has good scalability and is able to use distributed commodity computers, which make it superior to Relational Database Management Systems (RDBMS) in a cloud computing environment. In order to provide optimized data service performance, the proposed storage schema analyzes the nature of commonly used raster GIS data sets. It discriminates two categories of commonly used data sets, and then designs corresponding data storage models for both categories. As a result, the proposed storage schema is capable of hosting and serving enormous volumes of raster GIS data speedily and efficiently on cloud computing infrastructures. In addition, the scheme also takes advantage of the data compression characteristics of Quadtrees, thus promoting efficient data storage. 
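The "subdivide and layer-wrap" strategy described in the abstract above, splitting a raster into sub-images each wrapped with a one-pixel edge layer so nodes can process tiles independently, can be sketched as follows. The tile size, the constant padding at the raster border, and the function name are illustrative assumptions.

```python
import numpy as np

# Sketch of subdividing a raster into tiles, each carrying a one-pixel
# halo of neighboring data, so tiles can be processed on separate nodes
# and reassembled. Padding value and tile size are illustrative.

def tile_with_halo(raster: np.ndarray, tile: int, pad_value: float = 0.0):
    """Yield (row, col, sub-image) triples; each sub-image has a 1-px halo."""
    padded = np.pad(raster, 1, constant_values=pad_value)
    h, w = raster.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            # +2 accounts for the halo on both sides of the tile
            yield r, c, padded[r:r + min(tile, h - r) + 2,
                               c:c + min(tile, w - c) + 2]

raster = np.arange(16, dtype=float).reshape(4, 4)
tiles = list(tile_with_halo(raster, 2))
print(len(tiles), tiles[0][2].shape)
```

The abstract's observation that near-square tiles are computationally optimal follows from the halo overhead scaling with tile perimeter while useful work scales with tile area.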

  13. Neutron Characterization for Additive Manufacturing

    NASA Technical Reports Server (NTRS)

    Watkins, Thomas; Bilheux, Hassina; An, Ke; Payzant, Andrew; DeHoff, Ryan; Duty, Chad; Peter, William; Blue, Craig; Brice, Craig A.

    2013-01-01

    Oak Ridge National Laboratory (ORNL) is leveraging decades of experience in neutron characterization of advanced materials together with resources such as the Spallation Neutron Source (SNS) and the High Flux Isotope Reactor (HFIR) shown in Fig. 1 to solve challenging problems in additive manufacturing (AM). Additive manufacturing, or three-dimensional (3-D) printing, is a rapidly maturing technology wherein components are built by selectively adding feedstock material at locations specified by a computer model. The majority of these technologies use thermally driven phase change mechanisms to convert the feedstock into functioning material. As the molten material cools and solidifies, the component is subjected to significant thermal gradients, generating significant internal stresses throughout the part (Fig. 2). As layers are added, inherent residual stresses cause warping and distortions that lead to geometrical differences between the final part and the original computer generated design. This effect also limits geometries that can be fabricated using AM, such as thin-walled, high-aspect- ratio, and overhanging structures. Distortion may be minimized by intelligent toolpath planning or strategic placement of support structures, but these approaches are not well understood and often "Edisonian" in nature. Residual stresses can also impact component performance during operation. For example, in a thermally cycled environment such as a high-pressure turbine engine, residual stresses can cause components to distort unpredictably. Different thermal treatments on as-fabricated AM components have been used to minimize residual stress, but components still retain a nonhomogeneous stress state and/or demonstrate a relaxation-derived geometric distortion. Industry, federal laboratory, and university collaboration is needed to address these challenges and enable the U.S. to compete in the global market. Work is currently being conducted on AM technologies at the ORNL

  14. Unraveling Higher Education's Costs.

    ERIC Educational Resources Information Center

    Gordon, Gus; Charles, Maria

    1998-01-01

    The activity-based costing (ABC) method of analyzing institutional costs in higher education involves four procedures: determining the various discrete activities of the organization; calculating the cost of each; determining the cost drivers; tracing cost to the cost objective or consumer of each activity. Few American institutions have used the…

  15. Avoiding costly remediation

    SciTech Connect

    Scheels, R.H.

    1997-10-01

    Some oil and gas pipeline operations require equipment with hydraulic or oil circulation systems. These are subject to oil leaks or spills due to equipment malfunctions as well as normal operation. The potential liability and actual remediation and shutdown costs helped create the need for more environmentally friendly hydraulic fluids. Mobil has developed readily biodegradable, virtually nontoxic hydraulic fluids, Mobil EAL 224H and Mobil EAL Syndrajoc Series oils (EAL stands for Environmental Awareness Lubricants). The first is vegetable oil-based, while the others are formulated from high viscosity-index synthetic ester base stocks. Both use virtually nontoxic additive packages. These hydraulic fluids are described.

  16. High Performance Computing CFRD -- Final Technial Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    National Engineering and Environmental Laboratory (INEEL), it is crucial to know the capabilities of a software package’s SMP (shared memory processor) version or cluster (distributed memory) version. Of utmost importance is knowledge of a software package’s cost and implementation challenges. Additionally, it is important to determine the hardware performance of a computing workstation. The level of performance of software is inextricably tied to the computer hardware upon which it is run. Bechtel can do more for its clients in the same amount of time and/or solve more complex problems if computer workstations and associated software are optimized. As a Bechtel Management and Operations Facility, INEEL engineers and scientists find solutions to problems important to Bechtel. Both INEEL engineers and managers must be informed and educated in high performance computing (HPC) techniques and issues to better accomplish their research.

  17. Multifunctional fuel additives

    SciTech Connect

    Baillargeon, D.J.; Cardis, A.B.; Heck, D.B.

    1991-03-26

    This paper discusses a composition comprising a major amount of a liquid hydrocarbyl fuel and a minor low-temperature flow properties improving amount of an additive product of the reaction of a suitable diol and product of a benzophenone tetracarboxylic dianhydride and a long-chain hydrocarbyl aminoalcohol.

  18. Biobased lubricant additives

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Fully biobased lubricants are those formulated using all biobased ingredients, i.e. biobased base oils and biobased additives. Such formulations provide the maximum environmental, safety, and economic benefits expected from a biobased product. Currently, there are a number of biobased base oils that...

  19. Speed test results and hardware/software study of computational speed problem, appendix D

    NASA Technical Reports Server (NTRS)

    1984-01-01

The HP9845C is a desktop computer which is tested and evaluated for processing speed. A study was made to determine the availability and approximate cost of computers and/or hardware accessories necessary to meet the 20 ms sample period speed requirements. Additional requirements were that the control algorithm could be programmed in a high-level language and that the machine have sufficient storage to store the data from a complete experiment.

  20. Calculator program aids well cost management

    SciTech Connect

    Doyle, C.J.

    1982-01-18

    A TI-59 calculator program designed to track well costs on daily and weekly bases can dramatically facilitate the task of monitoring well expenses. The program computes the day total, cumulative total, cumulative item-row totals, and day-week total. For carrying these costs throughout the drilling project, magnetic cards can store the individual and total cumulative well expenses.
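The bookkeeping the TI-59 program performs, daily item costs rolled into day totals, cumulative item-row totals, and a running project total, can be sketched in modern terms as below. The class name and cost categories are illustrative assumptions, not from the original program.

```python
from collections import defaultdict

# Sketch of daily/cumulative well-cost tracking in the spirit of the
# TI-59 program: each day's item costs update per-item cumulative
# totals and the running project total.

class WellCostTracker:
    def __init__(self):
        self.item_totals = defaultdict(float)  # cumulative total per cost item
        self.project_total = 0.0               # cumulative total for the well

    def record_day(self, day_costs: dict) -> float:
        """Add one day's costs; return that day's total."""
        day_total = sum(day_costs.values())
        for item, cost in day_costs.items():
            self.item_totals[item] += cost
        self.project_total += day_total
        return day_total

tracker = WellCostTracker()
print(tracker.record_day({"rig": 8500.0, "mud": 1200.0}))   # 9700.0
print(tracker.record_day({"rig": 8500.0, "bits": 3000.0}))  # 11500.0
print(tracker.project_total)                                # 21200.0
```

Where the TI-59 used magnetic cards to persist cumulative totals between sessions, a modern equivalent would simply serialize the tracker's state to disk.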