1979-12-01
team programming in reducing software development costs relative to ad hoc approaches and improving software product quality relative to...are interpreted as demonstrating the advantages of disciplined team programming in reducing software development costs relative to ad hoc approaches...is due partially to the cost and impracticality of a valid experimental setup within a production environment. Thus the question remains, are
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard
2017-06-06
Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.
Travel costs associated with flood closures of state highways near Centralia/Chehalis, Washington.
DOT National Transportation Integrated Search
2014-09-01
This report discusses the travel costs associated with the closure of roads in the greater Centralia/Chehalis, Washington region due to 100-year flood conditions starting on the Chehalis River. The costs were computed for roadway closures on I-5,...
Reliability and cost analysis methods
NASA Technical Reports Server (NTRS)
Suich, Ronald C.
1991-01-01
In the design phase of a system, how does a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability? When is the increased cost justified? High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. We should not consider either the cost of the subsystem or the expected cost due to subsystem failure separately but should minimize the total of the two costs, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure. This final report discusses the Combined Analysis of Reliability, Redundancy, and Cost (CARRAC) methods which were developed under Grant Number NAG 3-1100 from the NASA Lewis Research Center. CARRAC methods and a CARRAC computer program employ five models which can be used to cover a wide range of problems. The models contain an option which can include repair of failed modules.
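The decision posed at the start of this abstract reduces to a one-line comparison. A minimal sketch of the total-cost rule, with invented dollar figures (only the .990/.995 reliabilities come from the abstract; nothing here is from the CARRAC program itself):

```python
def total_cost(subsystem_cost, reliability, failure_cost):
    """Total expected cost = subsystem cost + P(failure) * cost of failure."""
    return subsystem_cost + (1.0 - reliability) * failure_cost

FAILURE_COST = 50_000_000  # assumed cost incurred if the subsystem fails
options = {
    "A (.990 reliability, $1.0M)": total_cost(1_000_000, 0.990, FAILURE_COST),
    "B (.995 reliability, $1.4M)": total_cost(1_400_000, 0.995, FAILURE_COST),
}
for name, cost in options.items():
    print(f"{name}: total expected cost ${cost:,.0f}")
# A: $1.5M, B: $1.65M -- the $400k premium for B buys only $250k of
# expected-failure savings, so the cheaper, less reliable subsystem wins.
```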
DOT National Transportation Integrated Search
1978-05-01
The User Delay Cost Model (UDCM) is a Monte Carlo computer simulation of essential aspects of Terminal Control Area (TCA) air traffic movements that would be affected by facility outages. The model can also evaluate delay effects due to other factors...
Collaborative Autonomous Unmanned Aerial - Ground Vehicle Systems for Field Operations
2007-08-31
very limited payload capabilities of small UVs, sacrificing minimal computational power and run time, adhering at the same time to the low cost...configuration has been chosen because of its high computational capabilities, low power consumption, multiple I/O ports, size, low heat emission and cost. This...due to their high power to weight ratio, small packaging, and wide operating temperatures. Power distribution is controlled by the 120 Watt ATX power
Balancing reliability and cost to choose the best power subsystem
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1991-01-01
A mathematical model is presented for computing total (spacecraft) subsystem cost including both the basic subsystem cost and the expected cost due to the failure of the subsystem. This model is then used to determine power subsystem cost as a function of reliability and redundancy. Minimum cost and maximum reliability and/or redundancy are not generally equivalent. Two example cases are presented. One is a small satellite, and the other is an interplanetary spacecraft.
Low-Cost Magnetic Stirrer from Recycled Computer Parts with Optional Hot Plate
ERIC Educational Resources Information Center
Guidote, Armando M., Jr.; Pacot, Giselle Mae M.; Cabacungan, Paul M.
2015-01-01
Magnetic stirrers and hot plates are key components of science laboratories. However, these are not readily available in many developing countries due to their high cost. This article describes the design of a low-cost magnetic stirrer with hot plate from recycled materials. Some of the materials used are neodymium magnets and CPU fans from…
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally-occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of uncertain variables to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing range of scatter) of uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of fiber modulus (design parameter) in the longitudinal direction.
Modelling DC responses of 3D complex fracture networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beskardes, Gungor Didem; Weiss, Chester Joseph
2018-03-01
Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model the electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.
ERIC Educational Resources Information Center
Liao, Yuan
2011-01-01
The virtualization of computing resources, as represented by the sustained growth of cloud computing, continues to thrive. Information Technology departments are building their private clouds due to the perception of significant cost savings by managing all physical computing resources from a single point and assigning them to applications or…
Lessons Learned on Management of CAS Development.
ERIC Educational Resources Information Center
Boyadjieff, Kiril
1995-01-01
Computer-assisted studies (CAS) attract foreign language professionals' attention due to the reliability of personal computers, the decreasing cost of available technology, and the new generation of students for whom electronic media are a familiar habitat. This article focuses on a project of the Defense Language Institute that produced over…
Herd-Level Mastitis-Associated Costs on Canadian Dairy Farms
Aghamohammadi, Mahjoob; Haine, Denis; Kelton, David F.; Barkema, Herman W.; Hogeveen, Henk; Keefe, Gregory P.; Dufour, Simon
2018-01-01
Mastitis imposes considerable and recurring economic losses on the dairy industry worldwide. The main objective of this study was to estimate herd-level costs incurred by expenditures and production losses associated with mastitis on Canadian dairy farms in 2015, based on producer reports. Previously published mastitis economic frameworks were used to develop an economic model with the most important cost components. Components investigated were divided between clinical mastitis (CM), subclinical mastitis (SCM), and other cost components (i.e., preventive measures and product quality). A questionnaire was mailed to 374 dairy producers randomly selected from the Canadian National Dairy Study 2015 to collect data on these cost components, and 145 dairy producers returned a completed questionnaire. For each herd, costs due to the different mastitis-related components were computed by applying the values reported by the dairy producer to the developed economic model. Then, for each herd, the proportion of the costs attributable to a specific component was computed by dividing absolute costs for this component by total herd mastitis-related costs. Median self-reported CM incidence was 19 cases/100 cow-years and mean self-reported bulk milk somatic cell count was 184,000 cells/mL. Most producers reported using post-milking teat disinfection (97%) and dry cow therapy (93%), and a substantial proportion of producers reported using pre-milking teat disinfection (79%) and wearing gloves during milking (77%). Mastitis costs were substantial (662 CAD per milking cow per year for a typical Canadian dairy farm), with a large portion of the costs (48%) attributed to SCM, and 34 and 15% due to CM and implementation of preventive measures, respectively. For SCM, the two most important cost components were the subsequent milk yield reduction and culling (72 and 25% of SCM costs, respectively). For CM, the first, second, and third most important cost components were culling (48% of CM costs), milk yield reduction following the CM events (34%), and discarded milk (11%), respectively. This study is the first since 1990 to investigate costs of mastitis in Canada. The model developed in the current study can be used to compute mastitis costs at the herd and national level in Canada. PMID:29868620
Methods for evaluating and ranking transportation energy conservation programs
NASA Astrophysics Data System (ADS)
Santone, L. C.
1981-04-01
The energy conservation programs are assessed in terms of petroleum savings, incremental costs to consumers, probability of technical and market success, and external impacts due to environmental, economic, and social factors. Three ranking functions and a policy matrix are used to evaluate the programs. The net present value measure, which computes the present worth of petroleum savings less the present worth of costs, is modified by dividing by the present value of DOE funding to obtain a net present value per program dollar. The comprehensive ranking function takes external impacts into account. Procedures are described for making computations of the ranking functions and the attributes that require computation. Computations are made for the electric vehicle, Stirling engine, gas turbine, and MPG mileage guide program.
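The modified net-present-value measure described here is a one-line formula. A small sketch with an assumed discount rate and invented cash flows:

```python
def pv(flows, rate):
    """Present value of annual cash flows, year 0 first."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

rate = 0.07                                # assumed discount rate
petroleum_savings = [0, 5e6, 12e6, 20e6]   # $/year, invented
consumer_costs = [1e6, 2e6, 3e6, 3e6]      # $/year, invented
doe_funding = [4e6, 3e6, 0, 0]             # $/year, invented

npv = pv(petroleum_savings, rate) - pv(consumer_costs, rate)
print("NPV per program dollar:", npv / pv(doe_funding, rate))
```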
Authentication of Radio Frequency Identification Devices Using Electronic Characteristics
ERIC Educational Resources Information Center
Chinnappa Gounder Periaswamy, Senthilkumar
2010-01-01
Radio frequency identification (RFID) tags are low-cost devices that are used to uniquely identify the objects to which they are attached. Due to the low cost and size that is driving the technology, a tag has limited computational capabilities and resources. This limitation makes the implementation of conventional security protocols to prevent…
2013-08-01
cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended...MCMC and splitting sampling schemes. Our proposed SS/STP method is presented in Section 4, including accuracy bounds and computational effort
Lahiri, Supriya; Tempesti, Tommaso; Gangopadhyay, Somnath
2016-02-01
To estimate cost-effectiveness ratios and net costs of a training intervention to reduce morbidity among porters who carry loads without mechanical assistance in a developing country informal sector setting. Pre- and post-intervention survey data (n = 100) were collected in a prospective study: differences in physical/mental composite scores and pain scale scores were computed. Costs and economic benefits of the intervention were monetized with a net-cost model. Significant changes in physical composite scores (2.5), mental composite scores (3.2), and pain scale scores (-1.0) led to cost-effectiveness ratios of $6.97, $5.41, and $17.91, respectively. Multivariate analysis showed that program adherence enhanced effectiveness. The net cost of the intervention was -$5979.00 due to a reduction in absenteeism. Workplace ergonomic training is cost-effective and should be implemented where other engineering-control interventions are precluded due to infrastructural constraints.
Solving wood chip transport problems with computer simulation.
Dennis P. Bradley; Sharon A. Winsauer
1976-01-01
Efficient chip transport operations are difficult to achieve due to frequent and often unpredictable changes in distance to market, chipping rate, time spent at the mill, and equipment costs. This paper describes a computer simulation model that allows a logger to design an efficient transport system in response to these changing factors.
Due to the computational cost of running regional-scale numerical air quality models, reduced form models (RFM) have been proposed as computationally efficient simulation tools for characterizing the pollutant response to many different types of emission reductions. The U.S. Envi...
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer which minimizes the power of the beamformed output while maintaining the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix to be a scalar matrix σI and we subsequently obtain the apodization weights and the beamformed output without computing the matrix inverse. To do that, the QR decomposition algorithm is used and can be executed at low cost, and therefore, the computational complexity is reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
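For readers unfamiliar with the linear-algebra shortcut, the sketch below shows one generic QR route to the MV weights: factor the snapshot matrix once, then replace the covariance inverse with two triangular solves. This is not necessarily the authors' exact σI construction, and the array sizes and steering vector are invented:

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
L, N = 16, 64                        # subarray size, snapshot count (assumed)
Y = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))
a = np.ones(L, dtype=complex)        # steering vector for the look direction

# R = Y Y^H / N = (U^H U) / N, where Y^H = Q U is the thin QR factorization.
_, U = np.linalg.qr(Y.conj().T, mode="reduced")   # U: L x L upper triangular

# Solve R z = a with two O(L^2) triangular solves -- no explicit inverse.
v = solve_triangular(U.conj().T, N * a, lower=True)
z = solve_triangular(U, v, lower=False)

w = z / (a.conj() @ z)               # MV apodization weights
print(abs(a.conj() @ w))             # distortionless constraint: = 1.0
```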
Can low-cost VOR and Omega receivers suffice for RNAV - A new computer-based navigation technique
NASA Technical Reports Server (NTRS)
Hollaar, L. A.
1978-01-01
It is shown that although RNAV is particularly valuable for the personal transportation segment of general aviation, it has not gained complete acceptance. This is due, in part, to its high cost and the special handling required by air traffic control. VOR/DME RNAV calculations are ideally suited for analog computers, and the use of microprocessor technology has been suggested for reducing RNAV costs. Three navigation systems, VOR, Omega, and DR, are compared for common navigational difficulties, such as station geometry, siting errors, ground disturbances, and terminal area coverage. The Kalman filtering technique is described with reference to the disadvantages when using a system including standard microprocessors. An integrated navigation system, using input data from various low-cost sensor systems, is presented and current simulation studies are noted.
Carel, R S
1982-04-01
The cost-effectiveness of a computerized ECG interpretation system in an ambulatory health care organization has been evaluated in comparison with a conventional (manual) system. The automated system was shown to be more cost-effective at a minimum load of 2,500 patients/month. At larger monthly loads an even greater cost-effectiveness was found, the average cost/ECG being about $2. In the manual system the cost/unit is practically independent of patient load. This is primarily due to the fact that 87% of the cost/ECG is attributable to wages and fees of highly trained personnel. In the automated system, on the other hand, the cost/ECG is heavily dependent on examinee load. This is due to the relatively large impact of equipment depreciation on fixed (and total) cost. Utilization of a computer-assisted system leads to marked reduction in cardiologists' interpretation time, substantially shorter turnaround time (of unconfirmed reports), and potential provision of simultaneous service at several remotely located "heart stations."
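The load dependence described above is just a fixed-versus-variable cost split. A back-of-the-envelope sketch, with invented monthly figures chosen only so the break-even lands near the 2,500-patient load reported in the abstract:

```python
fixed_monthly = 4000.0    # assumed equipment depreciation etc., $/month
variable_per_ecg = 0.40   # assumed per-ECG consumables/compute, $
manual_per_ecg = 2.00     # manual interpretation, roughly load-independent

for load in (500, 1000, 2500, 5000, 10000):
    automated = fixed_monthly / load + variable_per_ecg
    print(f"{load:>6}/month: automated ${automated:.2f}/ECG vs "
          f"manual ${manual_per_ecg:.2f}/ECG")
# With these numbers the automated cost/ECG matches the manual cost at
# exactly 2,500 ECGs/month and keeps dropping as the load grows.
```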
MARC and the Library Service Center: Automation at Bargain Rates.
ERIC Educational Resources Information Center
Pearson, Karl M.
Despite recent research and development in the field of library automation, libraries have been unable to reap the benefits promised by technology due to the high cost of building and maintaining their own computer-based systems. Time-sharing and disc mass storage devices will bring automation costs, if spread over a number of users, within the…
Quanbeck, Andrew; Lang, Katharine; Enami, Kohei; Brown, Richard L
2010-02-01
A previous cost-benefit analysis found Screening, Brief Intervention, and Referral to Treatment (SBIRT) to be cost-beneficial from a societal perspective. This paper develops a cost-benefit model that includes the employer's perspective by considering the costs of absenteeism and impaired presenteeism due to problem drinking. We developed a Monte Carlo simulation model to estimate the costs and benefits of SBIRT implementation to an employer. We first presented the likely costs of problem drinking to a theoretical Wisconsin firm that does not currently provide SBIRT services. We then constructed a cost-benefit model in which the firm funds SBIRT for its employees. The net present value of SBIRT adoption was computed by comparing costs due to problem drinking both with and without the program. When absenteeism and impaired presenteeism costs were considered from the employer's perspective, the net present value of SBIRT adoption was $771 per employee. We concluded that implementing SBIRT is cost-beneficial from the employer's perspective and recommend that Wisconsin employers consider covering SBIRT services for their employees.
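The structure of the employer-perspective model lends itself to a few lines of Monte Carlo. A sketch with entirely invented distributions; only the model shape (avoided absenteeism/presenteeism losses minus program cost) follows the paper:

```python
import random

def npv_per_employee(rng):
    program_cost = rng.normalvariate(300, 50)           # SBIRT cost/employee
    p_problem_drinking = rng.betavariate(2, 18)         # prevalence
    loss_if_untreated = rng.normalvariate(5000, 1500)   # absenteeism + presenteeism
    effectiveness = rng.betavariate(4, 6)               # fraction of loss avoided
    return p_problem_drinking * loss_if_untreated * effectiveness - program_cost

rng = random.Random(42)
draws = [npv_per_employee(rng) for _ in range(100_000)]
print(f"mean NPV/employee: ${sum(draws) / len(draws):,.0f}")
print(f"P(NPV > 0): {sum(d > 0 for d in draws) / len(draws):.2%}")
```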
Identifying High Potential Well Targets with 3D Seismic and Mineralogy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R. J.
2015-10-30
Seismic reflection is the primary tool used in petroleum exploration and production, but its use in geothermal exploration is less standard, in part due to cost but also due to the challenges in identifying the highly-permeable zones essential for economic hydrothermal systems [e.g. Louie et al., 2011; Majer, 2003]. Newer technology, such as wireless sensors and low-cost high performance computing, has helped reduce the cost and effort needed to conduct 3D surveys. The second difficulty, identifying permeable zones, has been less tractable so far. Here we report on the use of seismic attributes from a 3D seismic survey to identify and map permeable zones in a hydrothermal area.
Low-cost space-varying FIR filter architecture for computational imaging systems
NASA Astrophysics Data System (ADS)
Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.
2010-01-01
Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.
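One way to picture the filter-bank idea: quantize each pixel's distance from the optical axis into a few annular bands and sharpen each band with its own small FIR kernel. The sketch below is a deliberately crude illustration with invented kernels and band edges, not the paper's hardware architecture:

```python
import numpy as np
from scipy.signal import convolve2d

def unsharp_kernel(strength):
    """3x3 high-boost kernel; larger strength sharpens more."""
    k = -strength * np.ones((3, 3)) / 8.0
    k[1, 1] = 1.0 + strength
    return k

def radial_sharpen(image, band_edges, strengths):
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)  # 0..1
    out = np.zeros_like(image, dtype=float)
    lo = 0.0
    for hi, s in zip(band_edges, strengths):
        band = (r >= lo) & (r < hi)
        # Simple but wasteful: filter the full frame, keep only this band.
        out[band] = convolve2d(image, unsharp_kernel(s), mode="same")[band]
        lo = hi
    return out

img = np.random.rand(128, 128)
# Aberrations grow with radius, so sharpen harder toward the edge of field.
restored = radial_sharpen(img, band_edges=[0.4, 0.8, 1.01],
                          strengths=[0.5, 1.0, 2.0])
```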
Current state and future direction of computer systems at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Rogers, James L. (Editor); Tucker, Jerry H. (Editor)
1992-01-01
Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased there has been an equally dramatic reduction in cost. This constant cost performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.
Secure data sharing in public cloud
NASA Astrophysics Data System (ADS)
Venkataramana, Kanaparti; Naveen Kumar, R.; Tatekalva, Sandhya; Padmavathamma, M.
2012-04-01
Secure multi-party protocols have been proposed for entities (organizations or individuals) that don't fully trust each other to share sensitive information. Many types of entities need to collect, analyze, and disseminate data rapidly and accurately, without exposing sensitive information to unauthorized or untrusted parties. Solutions based on secure multiparty computation (SMC) guarantee privacy and correctness, but at an extra communication cost (too high to be practical) and computation cost. This high overhead motivates us to extend SMC to the cloud environment, which provides large computation and communication capacity and allows SMC to be run between multiple clouds (private, public, or hybrid). A cloud may encompass many high-capacity servers that act as hosts participating in the computation (IaaS and PaaS) of the final result, controlled by a Cloud Trusted Authority (CTA) for secret sharing within the cloud. The communication between two clouds is controlled by a High Level Trusted Authority (HLTA), one of the hosts in a cloud, which provides MgaaS (Management as a Service). Due to the high security risk in clouds, the HLTA generates and distributes public and private keys using the Carmichael-R-Prime-RSA algorithm for the exchange of private data in SMC between itself and the clouds. Within a cloud, the CTA creates a group key for secure communication between the hosts, based on keys sent by the HLTA, for the exchange of intermediate values and shares used to compute the final result. Since this scheme is extended to clouds (due to their high availability and scalability to increase computation power), it becomes practical to implement SMC for privacy preservation in data mining at low cost for the clients.
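As a concrete picture of the secret-sharing building block mentioned here (a generic textbook scheme, not the paper's Carmichael-R-Prime-RSA protocol): each party splits its private value into additive shares modulo a public prime, each host sums the shares it holds, and only the aggregate is ever revealed:

```python
import random

P = 2**61 - 1  # public prime modulus

def share(secret, n_hosts, rng):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [rng.randrange(P) for _ in range(n_hosts - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

rng = random.Random(7)
private_inputs = [12, 45, 33]     # one secret per party
n_hosts = 4
host_totals = [0] * n_hosts       # each host accumulates one share per party
for secret in private_inputs:
    for host, s in enumerate(share(secret, n_hosts, rng)):
        host_totals[host] = (host_totals[host] + s) % P
# Hosts publish only their totals; the sum is recovered, inputs stay hidden.
print(sum(host_totals) % P)       # -> 90 == 12 + 45 + 33
```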
Optical Design Using Small Dedicated Computers
NASA Astrophysics Data System (ADS)
Sinclair, Douglas C.
1980-09-01
Since the time of the 1975 International Lens Design Conference, we have developed a series of optical design programs for Hewlett-Packard desktop computers. The latest programs in the series, OSLO-25G and OSLO-45G, have most of the capabilities of general-purpose optical design programs, including optimization based on exact ray-trace data. The computational techniques used in the programs are similar to ones used in other programs, but the creative environment experienced by a designer working directly with these small dedicated systems is typically much different from that obtained with shared-computer systems. Some of the differences are due to the psychological factors associated with using a system having zero running cost, while others are due to the design of the program, which emphasizes graphical output and ease of use, as opposed to computational speed.
NASA Astrophysics Data System (ADS)
Stadler, Philipp; Farnleitner, Andreas H.; Zessner, Matthias
2016-04-01
This presentation describes in depth how a low-cost micro-computer was used to substantially improve established measuring systems through the construction and implementation of a purpose-built complementary device for on-site sample pretreatment. A fully automated on-site device was developed and field-tested that enables water sampling with simultaneous filtration as well as an effective cleaning procedure for the device's components. The described auto-sampler is controlled by a low-cost one-board computer and designed for sample pre-treatment, with minimal sample alteration, to meet the requirements of on-site measurement devices that cannot handle coarse suspended solids within the measurement procedure or cycle. The automated sample pretreatment was tested for over one year for rapid and on-site enzymatic activity (beta-D-glucuronidase, GLUC) determination in sediment-laden stream water. The formerly used proprietary sampling set-up was assumed to lead to a significant damping of the measurement signal due to its susceptibility to clogging, debris, and biofilm accumulation. Results show that the installation of the developed apparatus considerably enhanced the error-free running time of connected measurement devices and increased the measurement accuracy to an up-to-now unmatched quality.
Value Engineering: An Application to Computer Software
1995-06-01
Ref. 4: P.2081 [Parentheses added] Figure 5 shows the cost function C(x) graphed with the Total Value function TV(x). It can be seen that for any...to be meaningful and accurate for the use...since cost structures for each software development project...maintainability quality characteristics due to long-term considerations affecting life-cycle costs. VE applications provide alternative
NASA Technical Reports Server (NTRS)
Garrocq, C. A.; Hurley, M. J.; Dublin, M.
1973-01-01
A baseline implementation plan, including alternative implementation approaches for critical software elements and variants to the plan, was developed. The basic philosophy was aimed at: (1) a progressive release of capability for three major computing systems, (2) an end product that was a working tool, (3) giving participation to industry, government agencies, and universities, and (4) emphasizing the development of critical elements of the IPAD framework software. The results of these tasks indicate an IPAD first release capability 45 months after go-ahead, a five year total implementation schedule, and a total developmental cost of 2027 man-months and 1074 computer hours. Several areas of operational cost increases were identified mainly due to the impact of additional equipment needed and additional computer overhead. The benefits of an IPAD system were related mainly to potential savings in engineering man-hours, reduction of design-cycle calendar time, and indirect upgrading of product quality and performance.
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty to accurately model complex flows, and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With successful reduction of computation time, the issue of accuracy, and appropriateness of turbulence models will become more tractable.
General aviation design synthesis utilizing interactive computer graphics
NASA Technical Reports Server (NTRS)
Galloway, T. L.; Smith, M. R.
1976-01-01
Interactive computer graphics is a fast growing area of computer application, due to such factors as substantial cost reductions in hardware, general availability of software, and expanded data communication networks. In addition to allowing faster and more meaningful input/output, computer graphics permits the use of data in graphic form to carry out parametric studies for configuration selection and for assessing the impact of advanced technologies on general aviation designs. The incorporation of interactive computer graphics into a NASA developed general aviation synthesis program is described, and the potential uses of the synthesis program in preliminary design are demonstrated.
An efficient hybrid pseudospectral/finite-difference scheme for solving the TTI pure P-wave equation
NASA Astrophysics Data System (ADS)
Zhan, Ge; Pestana, Reynam C.; Stoffa, Paul L.
2013-04-01
The pure P-wave equation for modelling and migration in tilted transversely isotropic (TTI) media has attracted more and more attention in imaging seismic data with anisotropy. The desirable feature is that it is absolutely free of shear-wave artefacts and the consequent alleviation of numerical instabilities generally suffered by some systems of coupled equations. However, due to several forward-backward Fourier transforms in wavefield updating at each time step, the computational cost is significant, and thereby hampers its prevalence. We propose to use a hybrid pseudospectral (PS) and finite-difference (FD) scheme to solve the pure P-wave equation. In the hybrid solution, most of the cost-consuming wavenumber terms in the equation are replaced by inexpensive FD operators, which in turn accelerates the computation and reduces the computational cost. To demonstrate the benefit in cost saving of the new scheme, 2D and 3D reverse-time migration (RTM) examples using the hybrid solution to the pure P-wave equation are carried out, and respective runtimes are listed and compared. Numerical results show that the hybrid strategy demands less computation time and is faster than using the PS method alone. Furthermore, this new TTI RTM algorithm with the hybrid method is computationally less expensive than that with the FD solution to conventional TTI coupled equations.
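A 1D toy makes the substitution concrete: the pseudospectral derivative costs two FFTs per application, while the finite-difference stencil that replaces it is a handful of adds and multiplies. The example below (invented grid, not the TTI operators themselves) checks that a 4th-order stencil closely matches the spectral result:

```python
import numpy as np

n, L = 256, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
f = np.sin(3 * x)

# Pseudospectral second derivative: forward and inverse FFT per application.
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
d2_ps = np.real(np.fft.ifft(-(k**2) * np.fft.fft(f)))

# 4th-order centered finite-difference second derivative: 5-point stencil.
fm2, fm1 = np.roll(f, 2), np.roll(f, 1)
fp1, fp2 = np.roll(f, -1), np.roll(f, -2)
d2_fd = (-fm2 + 16 * fm1 - 30 * f + 16 * fp1 - fp2) / (12 * dx**2)

print(np.max(np.abs(d2_ps - d2_fd)))  # tiny: FD can stand in for spectral
```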
Pyshkin, P V; Luo, Da-Wei; Jing, Jun; You, J Q; Wu, Lian-Ao
2016-11-25
Holonomic quantum computation (HQC) may not show its full potential in quantum speedup due to the prerequisite of a long coherent runtime imposed by the adiabatic condition. Here we show that the conventional HQC can be dramatically accelerated by using external control fields, of which the effectiveness is exclusively determined by the integral of the control fields in the time domain. This control scheme can be realized with net zero energy cost and it is fault-tolerant against fluctuation and noise, significantly relaxing the experimental constraints. We demonstrate how to realize the scheme via decoherence-free subspaces. In this way we unify quantum robustness merits of this fault-tolerant control scheme, the conventional HQC and decoherence-free subspace, and propose an expedited holonomic quantum computation protocol.
Strapdown cost trend study and forecast
NASA Technical Reports Server (NTRS)
Eberlein, A. J.; Savage, P. G.
1975-01-01
The potential cost advantages offered by advanced strapdown inertial technology in future commercial short-haul aircraft are summarized. The initial procurement cost and six year cost-of-ownership, which includes spares and direct maintenance cost were calculated for kinematic and inertial navigation systems such that traditional and strapdown mechanization costs could be compared. Cost results for the inertial navigation systems showed that initial costs and the cost of ownership for traditional triple redundant gimbaled inertial navigators are three times the cost of the equivalent skewed redundant strapdown inertial navigator. The net cost advantage for the strapdown kinematic system is directly attributable to the reduction in sensor count for strapdown. The strapdown kinematic system has the added advantage of providing a fail-operational inertial navigation capability for no additional cost due to the use of inertial grade sensors and attitude reference computers.
A Framework for Automating Cost Estimates in Assembly Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calton, T.L.; Peters, R.R.
1998-12-09
When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead-time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success, and lower life cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.
Guidelines and Options for Computer Access from a Reclined Position.
Grott, Ray
2015-01-01
Many people can benefit from working in a reclined position when accessing a computer. This can be due to disabilities involving musculoskeletal weakness, or the need to offload pressure on the spine or elevate the legs. Although there are "reclining workstations" on the market that work for some people, potentially better solutions tailored to individual needs can be configured at modest cost by following some basic principles.
High-speed multiple sequence alignment on a reconfigurable platform.
Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf
2006-01-01
Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences by popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
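The recurrence each processing element evaluates is ordinary sequence-alignment dynamic programming. Below is a plain software sketch of one pairwise distance (edit distance); the FPGA streams one sequence through a linear array of PEs, but the arithmetic per cell is the same:

```python
def dp_distance(a: str, b: str) -> int:
    """Edit distance via the standard DP recurrence, one row at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # match/substitute
        prev = curr
    return prev[-1]

print(dp_distance("GATTACA", "GCATGCU"))  # -> 4
```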
The science of computing - Parallel computation
NASA Technical Reports Server (NTRS)
Denning, P. J.
1985-01-01
Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic components technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing also lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A computer recently introduced features a 'hypercube' architecture with 128 processors connected in networks at 5, 6 or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, COBOL, etc., rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.
Multi-strategy based quantum cost reduction of linear nearest-neighbor quantum circuit
NASA Astrophysics Data System (ADS)
Tan, Ying-ying; Cheng, Xue-yun; Guan, Zhi-jin; Liu, Yang; Ma, Haiying
2018-03-01
With the development of reversible and quantum computing, the study of reversible and quantum circuits has also developed rapidly. Due to physical constraints, most quantum circuits require quantum gates to interact on adjacent quantum bits. However, many existing nearest-neighbor quantum circuits have a large quantum cost. Therefore, how to effectively reduce quantum cost is becoming a popular research topic. In this paper, we propose multiple optimization strategies to reduce the quantum cost of the circuit; that is, we reduce quantum cost through MCT gate decomposition, nearest-neighbor transformation, and circuit simplification, respectively. The experimental results show that the proposed strategies can effectively reduce the quantum cost, and the maximum optimization rate is 30.61% compared to the corresponding results.
Cone beam computed tomography: basics and applications in dentistry.
Venkatesh, Elluru; Elluru, Snehal Venkatesh
2017-01-01
The introduction of cone beam computed tomography (CBCT) devices changed the way oral and maxillofacial radiology is practiced. CBCT was embraced into dental settings very rapidly due to its compact size, low cost, and low ionizing radiation exposure when compared to medical computed tomography. Like medical CT, CBCT offers 3-dimensional evaluation of the maxillofacial region with minimal distortion. This article provides an overview of the basics of CBCT technology and reviews the specific application of CBCT technology to the oral and maxillofacial region with a few illustrations.
Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.
2004-01-01
Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite-differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to the difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached full maturity for production codes, especially in parallel computing environments.
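The complex-variable technique named in the title (complex-step differentiation) is easy to demonstrate: evaluating f at x + ih gives f'(x) ≈ Im[f(x + ih)]/h with no subtractive cancellation, so h can be made absurdly small. A sketch on a classic test function; this illustrates the general method, not the report's CFD implementation:

```python
import numpy as np

def f(x):
    # Standard complex-step test function (Squire & Trapp style).
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

x0 = 1.5
deriv_cs = np.imag(f(x0 + 1j * 1e-200)) / 1e-200     # complex step
deriv_fd = (f(x0 + 1e-8) - f(x0 - 1e-8)) / 2e-8      # central difference
print(deriv_cs, deriv_fd)  # complex step matches to machine precision
```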
Integrated risk/cost planning models for the US Air Traffic system
NASA Technical Reports Server (NTRS)
Mulvey, J. M.; Zenios, S. A.
1985-01-01
A prototype network planning model for the U.S. Air Traffic control system is described. The model encompasses the dual objectives of managing collision risks and transportation costs where traffic flows can be related to these objectives. The underlying structure is a network graph with nonseparable convex costs; the model is solved efficiently by capitalizing on its intrinsic characteristics. Two specialized algorithms for solving the resulting problems are described: (1) truncated Newton, and (2) simplicial decomposition. The feasibility of the approach is demonstrated using data collected from a control center in the Midwest. Computational results with different computer systems are presented, including a vector supercomputer (CRAY-XMP). The risk/cost model has two primary uses: (1) as a strategic planning tool using aggregate flight information, and (2) as an integrated operational system for forecasting congestion and monitoring (controlling) flow throughout the U.S. In the latter case, access to a supercomputer is required due to the model's enormous size.
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
NASA Astrophysics Data System (ADS)
Septiani, Eka Lutfi; Widiyastuti, W.; Winardi, Sugeng; Machmudah, Siti; Nurtono, Tantular; Kusdianto
2016-02-01
Flame-assisted spray dryers are widely used for large-scale production of nanoparticles because of their capability. A numerical approach is needed to predict combustion and particle production in scale-up and optimization, due to the difficulty of experimental observation and its relatively high cost. Computational Fluid Dynamics (CFD) can provide the momentum, energy, and mass transfer, so CFD is more efficient than experiment in terms of time and cost. Here, two turbulence models, k-ɛ and Large Eddy Simulation, were compared and applied in a flame-assisted spray dryer system. The energy source for particle drying was obtained from combustion between LPG as fuel and air as oxidizer and carrier gas, modelled as non-premixed combustion in the simulation. Silica particles were used for particle modelling, from a silica sol solution precursor. From several comparisons of the results, i.e., flame contour, temperature distribution, and particle size distribution, the Large Eddy Simulation turbulence model can provide the closest data to the experimental results.
High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing
NASA Astrophysics Data System (ADS)
Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.
2015-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled high-tolerance computing in order to achieve large-scale computing as well as operational cost savings.
Security and Cloud Outsourcing Framework for Economic Dispatch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarker, Mushfiqur R.; Wang, Jianhui; Li, Zuyi
2017-04-24
The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm of solving these issues consists of in-house high performance computing infrastructures, which have the drawbacks of high capital expenditures, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. As a result, the security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gain and costs outperform the in-house infrastructure.
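The masking transform at the heart of such frameworks can be illustrated in a few lines: scale and permute the variables (preserving nonnegativity), mix the constraint rows with a random invertible matrix, let the cloud solve the disguised LP, and undo the transform locally. This is a generic affine-masking toy under invented data, not necessarily the paper's exact scheme:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Private dispatch-like LP: minimize c.x  s.t.  A x = b, x >= 0.
c = np.array([3.0, 5.0, 4.0])            # generator costs (invented)
A = np.array([[1.0, 1.0, 1.0]])          # total generation meets load
b = np.array([10.0])

# Client-side masking: x = R y with R = (permutation)(positive diagonal),
# which keeps y >= 0 equivalent to x >= 0; Q mixes the constraint rows.
R = np.eye(3)[:, rng.permutation(3)] * rng.uniform(0.5, 2.0, size=3)
Q = rng.uniform(0.5, 2.0, size=(1, 1))   # any invertible 1x1 works here

masked = linprog(R.T @ c, A_eq=Q @ A @ R, b_eq=Q @ b)  # solved "in the cloud"
x = R @ masked.x                          # client recovers the true solution
print(x, c @ x)                           # -> [10, 0, 0], cost 30
```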
The vehicle design evaluation program - A computer-aided design procedure for transport aircraft
NASA Technical Reports Server (NTRS)
Oman, B. H.; Kruse, G. S.; Schrader, O. E.
1977-01-01
The vehicle design evaluation program is described. This program is a computer-aided design procedure that provides a vehicle synthesis capability for vehicle sizing, external load analysis, structural analysis, and cost evaluation. The vehicle sizing subprogram provides geometry, weight, and balance data for aircraft using JP, hydrogen, or methane fuels. The structural synthesis subprogram uses a multistation analysis for aerodynamic surfaces and fuselages to develop theoretical weights and geometric dimensions. The parts definition subprogram uses the geometric data from the structural analysis and develops the predicted fabrication dimensions, parts material raw stock buy requirements, and predicted actual weights. The cost analysis subprogram uses detail part data in conjunction with standard hours, realization factors, labor rates, and material data to develop the manufacturing costs. The program is used to evaluate overall design effects on subsonic commercial type aircraft due to parameter variations.
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.
1975-01-01
An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.
The economic burden of meningitis to households in Kassena-Nankana district of Northern Ghana.
Akweongo, Patricia; Dalaba, Maxwell A; Hayden, Mary H; Awine, Timothy; Nyaaba, Gertrude N; Anaseba, Dominic; Hodgson, Abraham; Forgor, Abdulai A; Pandya, Rajul
2013-01-01
To estimate the direct and indirect costs of meningitis to households in the Kassena-Nankana District of Ghana, a cost-of-illness (COI) survey was conducted between 2010 and 2011. The COI was computed from a retrospective review of 80 meningitis cases' answers to questions about direct medical costs, direct non-medical costs incurred, and productivity losses due to a recent meningitis incident. The average direct and indirect cost of treating meningitis in the district was GH¢152.55 (US$101.7) per household. This is equivalent to about two months' minimum wage earned by Ghanaians in unskilled paid jobs in 2009. Households lost 29 days of work per meningitis case, and thus those in minimum-wage paid jobs lost a monthly minimum wage of GH¢76.85 (US$51.23) due to the illness. Patients who were insured spent an average of GH¢38.5 (US$25.67) in direct medical costs, while the uninsured patients spent as much as GH¢177.9 (US$118.6) per case. Patients with sequelae incurred additional costs of GH¢22.63 (US$15.08) per case. The least poor were more exposed to meningitis than the poorest. Meningitis is a debilitating but preventable disease that affects people living in the Sahel and in poorer conditions. The cost of meningitis treatment may further impoverish these households. Widespread mass vaccination would save households an equivalent of GH¢175.18 (US$117) and avert impairment due to meningitis.
Lou, Der-Chyuan; Lee, Tian-Fu; Lin, Tsung-Hung
2015-05-01
Authenticated key agreement schemes for telecare medicine information systems allow patients, doctors, nurses, and health visitors to access medical information systems and obtain remote services efficiently and conveniently over an open network. To achieve higher security, many authenticated key agreement schemes add biometric keys for identification in addition to passwords and smartcards. Because of their many transmissions and high computational costs, these schemes are inefficient in both communication and computation. This investigation develops two secure and efficient authenticated key agreement schemes for telecare medicine information systems using biometric keys and extended chaotic maps. One scheme is synchronization-based, while the other is nonce-based. Compared with related approaches, the proposed schemes not only retain the same security properties as previous schemes but also provide users with privacy protection, fewer transmissions, and lower computational cost.
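The primitive behind chaotic-map key agreement can be pictured with the semigroup property of Chebyshev polynomials, T_a(T_b(x)) = T_ab(x), which enables a Diffie-Hellman-style exchange. The following is a minimal real-valued sketch for intuition only; practical schemes use extended chaotic maps over finite fields to avoid the precision and security limits of floating point, and the seed and exponents below are arbitrary toy values.

```python
import math

def chebyshev(n: int, x: float) -> float:
    """T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
    return math.cos(n * math.acos(x))

x = 0.53        # public seed value in [-1, 1]
a, b = 7, 11    # Alice's and Bob's secret integers

# Each party publishes T_secret(x).
alice_pub = chebyshev(a, x)
bob_pub = chebyshev(b, x)

# Semigroup property: T_a(T_b(x)) == T_b(T_a(x)) == T_{ab}(x).
alice_key = chebyshev(a, bob_pub)
bob_key = chebyshev(b, alice_pub)

assert abs(alice_key - bob_key) < 1e-9
print(alice_key, chebyshev(a * b, x))  # both equal T_{ab}(x)
```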
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Minnis, P.; Palikonda, R.; Smith, W. L., Jr.; Spangenberg, D.
2016-12-01
The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) processes and derives near real-time (NRT) global cloud products from operational geostationary satellite imager datasets. These products are used in NRT to improve forecast models, support aircraft icing warnings, and support aircraft field campaigns. Next-generation satellites, such as the Japanese Himawari-8 and the upcoming NOAA GOES-R, present challenges for NRT data processing and product dissemination due to their increased temporal and spatial resolution: the data volume is expected to grow approximately tenfold. This increase will require additional IT resources to keep up with processing demands and satisfy NRT requirements, and such resources are not readily available due to cost and other technical limitations. To anticipate and meet these computing resource requirements, we have employed a hybrid cloud computing environment to augment the generation of SatCORPS products. This paper describes the workflow to ingest, process, and distribute SatCORPS products and the technologies used. Lessons learned from working on both AWS Cloud and GovCloud are discussed: benefits, similarities, and differences that could impact the decision to use cloud computing and storage. A detailed cost analysis is presented, and future cloud utilization, parallelization, and architecture layout for GOES-R are discussed.
Simplified Models for Accelerated Structural Prediction of Conjugated Semiconducting Polymers
Henry, Michael M.; Jones, Matthew L.; Oosterhout, Stefan D.; ...
2017-11-08
We perform molecular dynamics simulations of poly(benzodithiophene-thienopyrrolodione) (BDT-TPD) oligomers in order to evaluate the accuracy with which unoptimized molecular models can predict experimentally characterized morphologies. The predicted morphologies are characterized using simulated grazing-incidence X-ray scattering (GIXS) and compared to the experimental scattering patterns. We find that approximating the aromatic rings in BDT-TPD with rigid bodies, rather than combinations of bond, angle, and dihedral constraints, results in 14% lower computational cost and provides nearly equivalent structural predictions compared to the flexible model case. The predicted glass transition temperature of BDT-TPD (410 +/- 32 K) is found to be in agreement with experiments. Predicted morphologies demonstrate short-range structural order due to stacking of the chain backbones (π-π stacking around 3.9 Å) and long-range spatial correlations due to the self-organization of backbone stacks into 'ribbons' (lamellar ordering around 20.9 Å), representing the best-to-date computational predictions of the structure of complex conjugated oligomers. We find that expensive simulated annealing schedules are not needed to predict experimental structures here, with instantaneous quenches providing nearly equivalent predictions at a fraction of the computational cost of annealing. We therefore suggest utilizing rigid bodies and fast cooling schedules for high-throughput screening studies of semiflexible polymers and oligomers to utilize their significant computational benefits where appropriate.
Cost aware cache replacement policy in shared last-level cache for hybrid memory based fog computing
NASA Astrophysics Data System (ADS)
Jia, Gangyong; Han, Guangjie; Wang, Hao; Wang, Feng
2018-04-01
Fog computing requires a large main memory capacity to decrease latency and increase the Quality of Service (QoS). However, dynamic random access memory (DRAM), the commonly used random access memory, cannot be included in a fog computing system due to its high power consumption. In recent years, non-volatile memories (NVM) such as Phase-Change Memory (PCM) and Spin-Transfer Torque RAM (STT-RAM), with their low power consumption, have emerged to replace DRAM. Moreover, the recently proposed hybrid main memory, consisting of both DRAM and NVM, has shown promising advantages in terms of scalability and power consumption. However, the drawbacks of NVM, such as long read/write latencies, give rise to asymmetric cache miss costs in the hybrid main memory. Current last-level cache (LLC) policies assume a uniform miss cost, resulting in poor LLC performance and adding to the cost of using NVM. In order to minimize the cache miss cost in the hybrid main memory, we propose a cost-aware cache replacement policy (CACRP) that reduces the number of cache misses to NVM and improves the cache performance of a hybrid memory system. Experimental results show that our CACRP performs better in the LLC, improving performance by up to 43.6% (15.5% on average) compared to LRU.
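The idea of weighting eviction decisions by miss cost can be sketched as a small variation on LRU (an illustrative toy model under assumed cost weights, not the paper's CACRP algorithm): among the least-recently-used candidate lines, evict the one whose backing memory is cheapest to refetch from.

```python
from collections import OrderedDict

# Assumed relative miss costs: refetching from NVM is pricier than from DRAM.
MISS_COST = {"DRAM": 1.0, "NVM": 3.0}

class CostAwareCache:
    def __init__(self, capacity: int, window: int = 4):
        self.capacity = capacity
        self.window = window          # how many LRU candidates to consider
        self.lines = OrderedDict()    # tag -> backing memory type, LRU order

    def access(self, tag: str, backing: str) -> bool:
        """Returns True on a hit, False on a miss (after filling the line)."""
        if tag in self.lines:
            self.lines.move_to_end(tag)   # refresh recency
            return True
        if len(self.lines) >= self.capacity:
            # Candidates = the `window` least-recently-used lines.
            candidates = list(self.lines.items())[: self.window]
            # Evict the candidate with the lowest refetch cost.
            victim = min(candidates, key=lambda kv: MISS_COST[kv[1]])[0]
            del self.lines[victim]
        self.lines[tag] = backing
        return False

cache = CostAwareCache(capacity=3)
for tag, mem in [("a", "NVM"), ("b", "DRAM"), ("c", "NVM"), ("d", "DRAM")]:
    cache.access(tag, mem)
print(list(cache.lines))  # "b" (cheap DRAM line) was evicted, not "a" (NVM)
```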
The Semiautomated Test System: A Tool for Standardized Performance Testing.
ERIC Educational Resources Information Center
Ramsey, H. Rudy
For performance tests to be truly standardized, they must be administered in a way that will minimize variation due to operator intervention and errors. Through such technological developments as low-cost digital computers and digital logic modules, automatic test administration without restriction of test content has become possible. A…
Beyond Passwords: Usage and Policy Transformation
2007-03-01
... case scenario for lost productivity due to users leaving their CAC at work, in their computer, is costing 261 work-years per year with an estimated ... Currently, the primary method for network authentication on the ...
Extreme-scale Algorithms and Solver Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack
A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch to the point of preventing the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism from multicore processors; an increase in system fault rates, requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and data movement that is costly in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.
Metal oxide resistive random access memory based synaptic devices for brain-inspired computing
NASA Astrophysics Data System (ADS)
Gao, Bin; Kang, Jinfeng; Zhou, Zheng; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan
2016-04-01
The traditional Boolean computing paradigm based on the von Neumann architecture faces great challenges for future information technology applications such as big data, the Internet of Things (IoT), and wearable devices, due to limitations in processing capability such as binary data storage and computing, non-parallel data processing, and the bus bottleneck between memory units and logic units. The brain-inspired neuromorphic computing paradigm is believed to be one of the promising solutions for realizing more complex functions at a lower cost. To perform such brain-inspired computing with low cost and low power consumption, novel devices for use as electronic synapses are needed. Metal oxide resistive random access memory (ReRAM) devices have emerged as the leading candidate for electronic synapses. This paper comprehensively addresses recent work on the design and optimization of metal oxide ReRAM-based synaptic devices. A performance enhancement methodology and optimized operation scheme to achieve analog resistive switching and low-energy training behavior are provided. A three-dimensional vertical synapse network architecture is proposed for high-density integration and low-cost fabrication. The impacts of ReRAM synaptic device features on the performance of neuromorphic systems are also discussed on the basis of a constructed neuromorphic visual system with a pattern recognition function. Possible solutions to achieve high recognition accuracy and efficiency in neuromorphic systems are presented.
Simulation of a navigator algorithm for a low-cost GPS receiver
NASA Technical Reports Server (NTRS)
Hodge, W. F.
1980-01-01
The analytical structure of an existing navigator algorithm for a low-cost Global Positioning System receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four-component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.
Vallejo-Torres, Laura; Pujol, Miquel; Shaw, Evelyn; Wiegand, Irith; Vigo, Joan Miquel; Stoddart, Margaret; Grier, Sally; Gibbs, Julie; Vank, Christiane; Cuperus, Nienke; van den Heuvel, Leo; Eliakim-Raz, Noa; Carratala, Jordi; Vuong, Cuong; MacGowan, Alasdair; Babich, Tanya; Leibovici, Leonard; Addy, Ibironke; Morris, Stephen
2018-04-12
Complicated urinary tract infections (cUTIs) impose a high burden on healthcare systems and are a frequent cause of hospitalisation. The aims of this paper are to estimate the cost per episode of patients hospitalised due to cUTI and to explore the factors associated with cUTI-related healthcare costs in eight countries with a high prevalence of multidrug resistance (MDR). This is a multinational observational, retrospective study. The mean cost per episode was computed by multiplying the volume of healthcare use for each patient by the unit cost of each item of care and summing across all components. Costs were measured from the hospital perspective. Patient-level regression analyses were used to identify the factors explaining variation in cUTI-related costs. The study was conducted in 20 hospitals in eight countries with a high prevalence of multidrug-resistant Gram-negative bacteria (Bulgaria, Greece, Hungary, Israel, Italy, Romania, Spain and Turkey). Data were obtained from 644 episodes of patients hospitalised due to cUTI. The mean cost per case was €5700, with considerable variation between countries (highest value €7740 in Turkey; lowest value €4028 in Israel), mainly due to differences in length of hospital stay. Factors associated with higher costs per patient were: type of admission, infection source, infection severity, the Charlson comorbidity index and presence of MDR. The mean cost per hospitalised case of cUTI was substantial and varied significantly between countries. A better knowledge of the reasons for variations in length of stay could facilitate a better standardised quality of care for patients with cUTI and allow a more efficient allocation of healthcare resources. Urgent admissions and infections due to an indwelling urinary catheter, resulting in septic shock or severe sepsis, in patients with comorbidities and presenting MDR were related to a higher cost.
Cost effective campaigning in social networks
NASA Astrophysics Data System (ADS)
Kotnis, Bhushan; Kuri, Joy
2016-05-01
Campaigners are increasingly using online social networking platforms to promote products, ideas and information. A popular method of promoting a product or even an idea is to incentivize individuals to evangelize it vigorously by providing them with referral rewards in the form of discounts, cash backs, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives to the entire population, so incentives must be allocated judiciously to appropriate individuals to ensure the highest possible outreach size. We do this by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals to incentivize so as to minimize the expected cost while ensuring a given outreach size. We also solve the complementary problem of computing the set of individuals to incentivize so as to maximize the outreach size for a given cost budget. The optimization problem turns out to be nontrivial: it involves quantities that must be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, the optimization problems can be solved by solving a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind.
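A minimal sketch of how such an allocation reduces to a linear program, with entirely hypothetical numbers rather than the paper's percolation-derived quantities: suppose the population is split into degree classes, each with a per-individual incentive cost and a linearized contribution to expected outreach, and we choose the incentivized fraction of each class.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical degree classes: population size, incentive cost per individual,
# and (linearized) expected extra outreach per incentivized individual.
pop = np.array([5000, 3000, 500])        # low-, mid-, high-degree users
cost = np.array([1.0, 2.0, 10.0])        # reward cost per individual
reach = np.array([2.0, 6.0, 40.0])       # extra outreach per individual
target = 25000.0                         # required expected outreach size

# Decision variable f_i = fraction of class i that is incentivized.
#   minimize  sum_i pop_i * cost_i * f_i
#   s.t.      sum_i pop_i * reach_i * f_i >= target,  0 <= f_i <= 1
res = linprog(
    c=pop * cost,
    A_ub=[-(pop * reach)],   # sign flipped: linprog uses <= constraints
    b_ub=[-target],
    bounds=[(0.0, 1.0)] * 3,
)
print(res.x, res.fun)  # optimal fractions per class and the minimum cost
```

As expected, the solver fills the most cost-effective (high-degree) class first and tops up with the next-best class until the outreach target is met.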
Prevalence of temporary social security benefits due to respiratory disease in Brazil.
Ildefonso, Simone de Andrade Goulart; Barbosa-Branco, Anadergh; Albuquerque-Oliveira, Paulo Rogério
2009-01-01
To determine the prevalence of temporary social security benefits due to respiratory disease granted to employees, as well as the number of lost workdays and costs resulting from those in Brazil between 2003 and 2004. Cross-sectional study using data obtained from the Unified System of Benefits of the Brazilian Institute of Social Security (INSS, Instituto Nacional de Seguro Social) and the Brazilian Social Registry Database. Data regarding gender, age, diagnosis and type of economic activity, as well as type, duration and cost of benefits, were compiled. Respiratory diseases accounted for 1.3% of the total number of temporary social security benefits granted by INSS, with a prevalence rate of 9.92 (per 10,000 employment contracts). Females and individuals older than 50 years of age were the most affected. Non-work-related benefits were more common than were work-related benefits. The most prevalent diseases were pneumonia, asthma and COPD, followed by laryngeal and vocal cord diseases. The most prevalent types of economic activity were auxiliary transportation equipment manufacturing, tobacco product manufacturing and computer-related activities. The mean duration of benefits was 209.68 days, with a mean cost of R$ 4,495.30 per occurrence. Respiratory diseases caused by exogenous agents demanded longer sick leave (mean, 296.72 days) and greater cost (mean, R$ 7,105.74). The most prevalent diseases were airway diseases and pneumonia. Workers from auxiliary transportation equipment manufacturing, tobacco product manufacturing and computer-related activities were the most affected. Diseases caused by exogenous agents demanded longer sick leaves and resulted in greater costs.
Wetland mapping from digitized aerial photography. [Sheboygen Marsh, Sheboygen County, Wisconsin
NASA Technical Reports Server (NTRS)
Scarpace, F. L.; Quirk, B. K.; Kiefer, R. W.; Wynn, S. L.
1981-01-01
Computer-assisted interpretation of small-scale aerial imagery was found to be a cost-effective and accurate method of mapping complex vegetation patterns when high-resolution information is desired. This type of technique is suited to problems such as monitoring changes in species composition due to environmental factors and is a feasible method of monitoring and mapping large areas of wetlands. The technique has the added advantage of producing output in a computer-compatible form that can be transformed into any georeference system of interest.
Cost/performance of solar reflective surfaces for parabolic dish concentrators
NASA Technical Reports Server (NTRS)
Bouquet, F.
1980-01-01
Materials for highly reflective surfaces for use in parabolic dish solar concentrators are discussed. Important factors concerning mirror performance are summarized, and typical costs are treated briefly. Capital investment cost/performance ratios for various materials are computed specifically for double-curvature parabolic concentrators using a mathematical model. The results are given in terms of initial investment cost for reflective surfaces per thermal kilowatt delivered to the receiver cavity for operating temperatures from 400 to 1400 C. Although second-surface glass mirrors are emphasized, first-surface, chemically brightened and anodized aluminum surfaces as well as second-surface, metallized polymeric films are treated. Conventional glass mirrors have the lowest cost/performance ratios, followed closely by aluminum reflectors. Ranges in the data due to uncertainties in cost and mirror reflectance factors are given.
Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fijany, A.; Milman, M.; Redding, D.
1994-12-31
In this paper, massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, implementing this control algorithm poses a computationally challenging problem: it demands a sustained computational throughput on the order of 10 GFlops. The authors develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
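For context, the conventional fast Poisson solvers that such an algorithm competes with can be illustrated with a standard DST-based solver for the discrete Poisson equation with zero Dirichlet boundaries (a generic sketch, not the Fast Invariant Imbedding algorithm itself):

```python
import numpy as np
from scipy.fft import dstn, idstn

def poisson_solve(f: np.ndarray, h: float) -> np.ndarray:
    """Solve the 5-point discrete Poisson equation  Lap(u) = f  on an
    n-by-n interior grid with zero Dirichlet boundaries, via DST-I."""
    n = f.shape[0]
    k = np.arange(1, n + 1)
    # Eigenvalues of the 1-D second-difference operator.
    lam = (2.0 * np.cos(np.pi * k / (n + 1)) - 2.0) / h**2
    f_hat = dstn(f, type=1, norm="ortho")          # diagonalize the Laplacian
    u_hat = f_hat / (lam[:, None] + lam[None, :])  # divide by 2-D eigenvalues
    return idstn(u_hat, type=1, norm="ortho")

# Quick check against the 5-point stencil on a small grid.
n, h = 32, 1.0 / 33
rng = np.random.default_rng(1)
f = rng.standard_normal((n, n))
u = poisson_solve(f, h)
lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) + np.roll(u, 1, 1)
       + np.roll(u, -1, 1) - 4 * u) / h**2
# Residual in the interior is at machine-precision level.
print(np.max(np.abs(lap[1:-1, 1:-1] - f[1:-1, 1:-1])))
```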
USDA-ARS?s Scientific Manuscript database
Immunoassay for low molecular weight food contaminants, such as pesticides, veterinary drugs, and mycotoxins is now a well-established technique which meets the demands for a rapid, reliable, and cost-effective analytical method. However, due to limited understanding of the fundamental aspects of i...
Kinetic energy classification and smoothing for compact B-spline basis sets in quantum Monte Carlo
Krogel, Jaron T.; Reboredo, Fernando A.
2018-01-25
Quantum Monte Carlo calculations of defect properties of transition metal oxides have become feasible in recent years due to increases in computing power. As the system size has grown, availability of on-node memory has become a limiting factor. Saving memory while minimizing computational cost is now a priority. The main growth in memory demand stems from the B-spline representation of the single-particle orbitals, especially for heavier elements such as transition metals where semi-core states are present. Despite the associated memory costs, splines are computationally efficient. In this paper, we explore alternatives to reduce the memory usage of splined orbitals without significantly affecting numerical fidelity or computational efficiency. We make use of the kinetic energy operator to both classify and smooth the occupied set of orbitals prior to splining. By using a partitioning scheme based on the per-orbital kinetic energy distributions, we show that memory savings of about 50% are possible for select transition metal oxide systems. Finally, for production supercells of practical interest, our scheme incurs a performance penalty of less than 5%.
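The classification step can be pictured with plane-wave orbitals, where the kinetic energy of each orbital follows directly from its Fourier coefficients: orbitals whose spectral weight sits at low |G| tolerate a coarser B-spline mesh. The following is an illustrative sketch on assumed toy data, not the production implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy data: plane-wave coefficients c_G for each orbital on a G grid.
n_orb, n_g = 8, 512
g2 = np.sort(rng.uniform(0.0, 50.0, size=n_g))        # |G|^2 values (a.u.)
decay = rng.uniform(5.0, 40.0, size=(n_orb, 1))       # per-orbital smoothness
coeff = rng.standard_normal((n_orb, n_g)) * np.exp(-g2 / decay)
coeff /= np.linalg.norm(coeff, axis=1, keepdims=True)  # normalize orbitals

# Per-orbital kinetic energy: KE_i = sum_G |c_iG|^2 |G|^2 / 2 (Hartree a.u.).
kinetic = 0.5 * np.sum(np.abs(coeff) ** 2 * g2, axis=1)

# Partition: smooth (low-KE) orbitals go on a coarse spline mesh, while
# sharp (high-KE, e.g. semi-core) orbitals keep the fine mesh.
threshold = np.median(kinetic)
mesh = np.where(kinetic <= threshold, "coarse", "fine")
for i, (ke, m) in enumerate(zip(kinetic, mesh)):
    print(f"orbital {i}: KE = {ke:6.2f} Ha -> {m} mesh")
```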
Poonam Khanijo Ahluwalia; Nema, Arvind K
2011-07-01
Selection of optimum locations for new facilities and decisions regarding capacities at the proposed facilities are major concerns for municipal authorities and managers. Whether a single facility is preferred over multiple facilities of smaller capacities varies with the priorities given to cost and to associated risks, such as environmental risk, health risk, or the risk perceived by society. Currently, waste streams such as computer waste are managed using rudimentary practices in a flourishing unorganized sector, mainly backyard workshops, in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to address uncertainty in waste generation quantities while simultaneously analyzing the tradeoffs between cost and associated risks. The present study addresses these issues in a multi-time-step, multi-objective decision-support model that can handle the multiple objectives of cost, environmental risk, socially perceived risk and health risk while selecting the optimum configuration of existing and proposed facilities (locations and capacities).
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools, a complex software stack, and large, scalable compute and data analysis resources, due to the large computational cost associated with Monte Carlo workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This creates a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for the development of sharable and reproducible distributed parallel computational experiments.
Kaushal, Rajendra Kumar; Nema, Arvind K
2013-09-01
This article assesses the potential health risk posed by carcinogenic and non-carcinogenic substances, namely lead (Pb), cadmium (Cd), copper, chromium (Cr(VI)), zinc, nickel and mercury, present in e-waste. A multi-objective, multi-stakeholder approach based on a strategic game theory model has been developed, considering cost as well as human health risk. The trade-off between the cost difference of a hazardous-substances-free (HSF) and a hazardous-substances-containing (HS) desktop computer and the risks they pose at the time of disposal has been analyzed. The cancer risk due to dust inhalation for workers at a recycling site in Bangalore for Pb, Cr(VI) and Cd was found to be 4, 33 and 101 in 1 million, respectively. Pb and Cr(VI) pose a very high risk from dust ingestion in slums near the recycling site: 175 and 81 in 1 million for children, and 24 and 11 in 1 million for adults, respectively. The concentration of Pb at a battery workshop in Mayapuri, Delhi (hazard quotient = 3.178) was found to pose adverse health hazards. The government may impose an appropriate penalty on the land disposal of computer waste and/or may give manufacturers an incentive for producing HSF computers, for example by relaxing taxes, but there should be no such incentive for manufacturing HS-containing computers.
Bounding the Resource Availability of Partially Ordered Events with Constant Resource Impact
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2004-01-01
We compare existing techniques to bound the resource availability of partially ordered events. We first show that, contrary to intuition, two existing techniques, one due to Laborie and one due to Muscettola, are not strictly comparable in terms of the size of the search trees generated under chronological search with a fixed heuristic. We describe a generalization of these techniques, called the Flow Balance Constraint, that tightly bounds the amount of available resource for a set of partially ordered events with piecewise-constant resource impact. We prove that the new technique generates smaller proof trees under chronological search with a fixed heuristic, at little increase in computational expense. We then show how to construct tighter resource bounds, but at increased computational cost.
HTSFinder: Powerful Pipeline of DNA Signature Discovery by Parallel and Distributed Computing
Karimi, Ramin; Hajdu, Andras
2016-01-01
Comprehensive efforts toward low-cost sequencing in the past few years have led to the growth of complete genome databases. In parallel with this effort, fast and cost-effective methods and applications have been developed to meet the strong need to accelerate sequence analysis. Identification is the very first step of this task. Due to the difficulties, high costs, and computational challenges of alignment-based approaches, an alternative universal identification method is highly desirable. As an alignment-free approach, DNA signatures have provided new opportunities for the rapid identification of species. In this paper, we present an effective pipeline, HTSFinder (high-throughput signature finder), with a corresponding k-mer generator, GkmerG (genome k-mers generator). Using this pipeline, we determine the frequency of k-mers from the available complete genome databases for the detection of extensive DNA signatures in a reasonably short time. Our application can detect both unique and common signatures in arbitrarily selected target and nontarget databases. Hadoop and MapReduce, as parallel and distributed computing tools running on commodity hardware, are used in this pipeline. This approach brings the power of high-performance computing to ordinary desktop personal computers for discovering DNA signatures in large databases such as bacterial genomes. The considerable number of detected unique and common DNA signatures of the target database brings opportunities to improve the identification process not only for polymerase chain reaction and microarray assays but also for more complex scenarios such as metagenomics and next-generation sequencing analysis.
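The core k-mer signature idea is simple to sketch in plain Python (a single-machine toy version of the counting step; the actual pipeline distributes the same computation with Hadoop/MapReduce, and the genome strings below are hypothetical stand-ins for FASTA records):

```python
def kmers(seq: str, k: int):
    """Yield all overlapping k-mers of a sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i : i + k]

def unique_signatures(targets, nontargets, k=8):
    """k-mers present in every target genome but absent from all nontargets."""
    common = None
    for genome in targets:
        ks = set(kmers(genome, k))
        common = ks if common is None else common & ks
    forbidden = set()
    for genome in nontargets:
        forbidden |= set(kmers(genome, k))
    return common - forbidden

# Toy genomes standing in for full bacterial sequences.
targets = ["ACGTACGTGGAACGT", "TTACGTACGTGGAAC"]
nontargets = ["GGAACCTTGGAACCTT"]
print(sorted(unique_signatures(targets, nontargets, k=6)))
```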
NASA Astrophysics Data System (ADS)
Xue, ShiChuan; Wu, JunJie; Xu, Ping; Yang, XueJun
2018-02-01
Quantum computing offers computing capability superior to classical computing because of its superposition feature. Distinguishing several quantum states from quantum algorithm outputs is often a vital computational task. In most cases, the quantum states tend to be non-orthogonal due to superposition; quantum mechanics has proved that perfect discrimination cannot be achieved by measurement, forcing repeated measurements. Hence, it is important to determine the optimum measurement method, one requiring fewer repetitions and giving a lower error rate. However, extending current measurement approaches, which mainly target quantum cryptography, to multi-qubit situations for quantum computing confronts challenges, such as conducting global operations at considerable cost in the experimental realm. Therefore, in this study, we propose an optimum subsystem method to avoid these difficulties. We provide an analysis comparing the reduced subsystem method and the global minimum-error method for two-qubit problems; the conclusions have been verified experimentally. The results show that the subsystem method can effectively discriminate non-orthogonal two-qubit states, such as separable states, entangled pure states, and mixed states; the cost of the experimental process is significantly reduced, in most circumstances, with an acceptable error rate. We believe the optimal subsystem method is the most valuable and promising approach for multi-qubit quantum computing applications.
NASA Astrophysics Data System (ADS)
Kim, Jeonglae; Pope, Stephen B.
2014-05-01
A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.
Multicore Programming Challenges
NASA Astrophysics Data System (ADS)
Perrone, Michael
The computer industry is facing fundamental challenges that are driving a major change in the design of computer processors. Due to restrictions imposed by quantum physics, one historical path to higher computer processor performance - by increased clock frequency - has come to an end. Increasing clock frequency now leads to power consumption costs that are too high to justify. As a result, we have seen in recent years that the processor frequencies have peaked and are receding from their high point. At the same time, competitive market conditions are giving business advantage to those companies that can field new streaming applications, handle larger data sets, and update their models to market conditions faster. The desire for newer, faster and larger is driving continued demand for higher computer performance.
An 'electronic' extramural course in epidemiology and medical statistics.
Ostbye, T
1989-03-01
This article describes an extramural university course in epidemiology and medical statistics taught using a computer conferencing system, microcomputers and data communications. Computer conferencing was shown to be a powerful, yet quite easily mastered, vehicle for distance education. It allows health personnel unable to attend regular classes due to geographical or time constraints to take part in an interactive learning environment at low cost. This overcomes part of the intellectual and social isolation associated with traditional correspondence courses. The teaching of epidemiology and medical statistics is well suited to computer conferencing, even if the asynchronicity of the medium makes discussion of the most complex statistical concepts a little cumbersome. Computer conferencing may also prove to be a useful tool for teaching other medical and health-related subjects.
Implementing Computer Based Laboratories
NASA Astrophysics Data System (ADS)
Peterson, David
2001-11-01
Physics students at Francis Marion University will complete several required laboratory exercises utilizing computer-based Vernier probes. The simple pendulum, acceleration due to gravity, simple harmonic motion, radioactive half-lives, and radiation inverse-square-law experiments will be incorporated into calculus-based and algebra-based physics courses. Assessment of student learning and faculty satisfaction will be carried out through surveys and test results. Cost-effectiveness and time-effectiveness assessments will be presented. Majors in Computational Physics, Health Physics, Engineering, Chemistry, Mathematics and Biology take these courses, and assessments will be categorized by major. To enhance the computer skills of students enrolled in the courses, MAPLE will be used for further analysis of the data acquired during the experiments. Assessment of these enhancement exercises will also be presented.
Operating Dedicated Data Centers - Is It Cost-Effective?
NASA Astrophysics Data System (ADS)
Ernst, M.; Hogue, R.; Hollowell, C.; Strecker-Kellog, W.; Wong, A.; Zaytsev, A.
2014-06-01
The advent of cloud computing centres such as Amazon's EC2 and Google's Compute Engine has elicited comparisons with dedicated computing clusters. Discussions on appropriate usage of cloud resources (both academic and commercial) and costs have ensued. This presentation discusses a detailed analysis of the costs of operating and maintaining the RACF (RHIC and ATLAS Computing Facility) compute cluster at Brookhaven National Lab and compares them with the cost of cloud computing resources under various usage scenarios. An extrapolation of the likely future cost-effectiveness of dedicated computing resources is also presented.
User/Tutor Optimal Learning Path in E-Learning Using Comprehensive Neuro-Fuzzy Approach
ERIC Educational Resources Information Center
Fazlollahtabar, Hamed; Mahdavi, Iraj
2009-01-01
Internet evolution has affected all industrial, commercial, and especially learning activities in the new context of e-learning. Due to its cost, time, and flexibility advantages, e-learning has been adopted by participants as an alternative training method. With the development of computer-based devices and new methods of teaching, e-learning has emerged. The…
Faries, Douglas E; Nyhuis, Allen W; Ascher-Svanum, Haya
2009-05-27
Schizophrenia is a severe, chronic, and costly illness that adversely impacts patients' lives and health care payer budgets. Cost comparisons of treatment regimens are, therefore, important to health care payers and researchers. Pre-Post analyses ("mirror-image"), where outcomes prior to a medication switch are compared to outcomes post-switch, are commonly used in such research. However, medication changes often occur during a costly crisis event. Patients may relapse, be hospitalized, have a medication change, and then spend a period of time with intense use of costly resources (post-medication switch). While many advantages and disadvantages of Pre-Post methodology have been discussed, issues regarding the attributability of costs incurred around the time of medication switching have not been fully investigated. Medical resource use data, including medications and acute-care services (hospitalizations, partial hospitalizations, emergency department) were collected for patients with schizophrenia who switched antipsychotics (n = 105) during a 1-year randomized, naturalistic, antipsychotic cost-effectiveness schizophrenia trial. Within-patient changes in total costs per day were computed during the pre- and post-medication change periods. In addition to the standard Pre-Post analysis comparing costs pre- and post-medication change, we investigated the sensitivity of results to varying assumptions regarding the attributability of acute care service costs occurring just after a medication switch that were likely due to initial medication failure. Fifty-six percent of all costs incurred during the first week on the newly initiated antipsychotic were likely due to treatment failure with the previous antipsychotic. Standard analyses suggested an average increase in cost-per-day for each patient of $2.40 after switching medications. However, sensitivity analyses removing costs incurred post-switch that were potentially due to the failure of the initial medication suggested decreases in costs in the range of $4.77 to $9.69 per day post-switch. Pre-Post cost analyses are sensitive to the approach used to handle acute-service costs occurring just after a medication change. Given the importance of quality economic research on the cost of switching treatments, thorough sensitivity analyses should be performed to identify the impact of crisis events around the time of medication change.
Wieser, Simon; Plessow, Rafael; Eichler, Klaus; Malek, Olivia; Capanzana, Mario V; Agdeppa, Imelda; Bruegger, Urs
2013-12-11
Micronutrient deficiencies (MNDs) are a chronic lack of vitamins and minerals and constitute a huge public health problem. MNDs have severe health consequences and are particularly harmful during early childhood due to their impact on the physical and cognitive development. We estimate the costs of illness due to iron deficiency (IDA), vitamin A deficiency (VAD) and zinc deficiency (ZnD) in 2 age groups (6-23 and 24-59 months) of Filipino children by socio-economic strata in 2008. We build a health economic model simulating the consequences of MNDs in childhood over the entire lifetime. The model is based on a health survey and a nutrition survey carried out in 2008. The sample populations are first structured into 10 socio-economic strata (SES) and 2 age groups. Health consequences of MNDs are modelled based on information extracted from literature. Direct medical costs, production losses and intangible costs are computed and long term costs are discounted to present value. Total lifetime costs of IDA, VAD and ZnD amounted to direct medical costs of 30 million dollars, production losses of 618 million dollars and intangible costs of 122,138 disability adjusted life years (DALYs). These costs can be interpreted as the lifetime costs of a 1-year cohort affected by MNDs between the age of 6-59 months. Direct medical costs are dominated by costs due to ZnD (89% of total), production losses by losses in future lifetime (90% of total) and intangible costs by premature death (47% of total DALY losses) and losses in future lifetime (43%). Costs of MNDs differ considerably between SES as costs in the poorest third of the households are 5 times higher than in the wealthiest third. MNDs lead to substantial costs in 6-59-month-old children in the Philippines. Costs are highly concentrated in the lower SES and in children 6-23 months old. These results may have important implications for the design, evaluation and choice of the most effective and cost-effective policies aimed at the reduction of MNDs.
Improving the performance of extreme learning machine for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong
2015-05-01
Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but at much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets, greatly reducing computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections result in only mildly suboptimal performance.
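The "extremely simple training step" is a single linear solve once the hidden layer is fixed at random, as the following minimal ELM sketch illustrates (generic textbook ELM with an assumed toy dataset, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(3)

def elm_train(X, T, n_hidden=50, steepness=1.0, reg=1e-3):
    """Train an ELM: random hidden layer + ridge-regression output weights."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))     # fixed random weights
    b = rng.standard_normal(n_hidden)                   # fixed random biases
    H = 1.0 / (1.0 + np.exp(-steepness * (X @ W + b)))  # sigmoid activations
    # Regularized least squares: beta = (H^T H + reg I)^-1 H^T T
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta, steepness

def elm_predict(X, model):
    W, b, beta, s = model
    H = 1.0 / (1.0 + np.exp(-s * (X @ W + b)))
    return H @ beta

# Toy two-class problem: one-hot targets, predicted class = argmax.
X = rng.standard_normal((200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]
model = elm_train(X, T)
acc = np.mean(elm_predict(X, model).argmax(axis=1) == y)
print(f"training accuracy: {acc:.2f}")
```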
Zeindlhofer, Veronika; Schröder, Christian
2018-06-01
Based on their tunable properties, ionic liquids have attracted significant interest as replacements for conventional organic solvents in biomolecular applications. Following a Gartner hype cycle, expectations for this new class of solvents dropped after the initial hype due to high viscosity, hydrolysis, and toxicity problems, as well as their high cost. Since not all possible combinations of cations and anions can be tested experimentally, fundamental knowledge of the interaction of ionic liquid ions with water and with biomolecules is mandatory to optimize the solvation behavior, the biodegradability, and the cost of the ionic liquid. Here, we report on current computational approaches to characterize the impact of ionic liquid ions on the structure and dynamics of a biomolecule and its solvation layer, to explore the full potential of ionic liquids.
Optimized 4-bit Quantum Reversible Arithmetic Logic Unit
NASA Astrophysics Data System (ADS)
Ayyoub, Slimani; Achour, Benslama
2017-08-01
Reversible logic has received great attention in recent years due to its ability to reduce power dissipation. The main goals in designing reversible logic are to decrease the quantum cost, the depth of the circuits, and the number of garbage outputs. The arithmetic logic unit (ALU) is an important part of a central processing unit (CPU), serving as its execution unit. This paper presents a complete design of a new reversible arithmetic logic unit (ALU) that can be part of a programmable reversible computing device such as a quantum computer. The proposed ALU is based on a reversible low-power control unit and a full adder with small performance parameters, built from double Peres gates. The presented ALU can produce the largest number (28) of arithmetic and logic functions and has the smallest quantum cost and delay compared with existing designs.
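The building block is easy to verify in software: a Peres gate maps (a, b, c) to (a, a XOR b, (a AND b) XOR c), and cascading two of them yields a reversible full adder. The sketch below is a quick truth-table check of this standard construction, not the paper's exact circuit.

```python
from itertools import product

def peres(a: int, b: int, c: int):
    """Peres gate: (a, b, c) -> (a, a^b, (a & b) ^ c). Reversible."""
    return a, a ^ b, (a & b) ^ c

def full_adder(a: int, b: int, cin: int):
    """Two cascaded Peres gates implement a reversible full adder."""
    _, p, q = peres(a, b, 0)        # p = a^b, q = a&b
    _, s, cout = peres(p, cin, q)   # s = a^b^cin, cout = ((a^b)&cin) ^ (a&b)
    return s, cout

# Verify against the arithmetic definition for all 8 inputs.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin
print("reversible full adder verified for all inputs")
```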
Li, Sean S; Copeland-Halperin, Libby R; Kaminsky, Alexander J; Li, Jihui; Lodhi, Fahad K; Miraliakbari, Reza
2018-06-01
Computer-aided surgical simulation (CASS) has redefined surgery, improving precision and reducing reliance on intraoperative trial-and-error manipulations. CASS is typically provided by third-party services; however, it may be cost-effective for some hospitals to develop in-house programs. This study provides the first cost analysis comparison among traditional (no CASS), commercial CASS, and in-house CASS for head and neck reconstruction. The costs of three-dimensional (3D) pre-operative planning for mandibular and maxillary reconstructions were obtained from an in-house CASS program at our large tertiary care hospital in Northern Virginia, as well as from a commercial provider (Synthes, Paoli, PA). A cost comparison was performed among these modalities and extrapolated in-house CASS costs were derived. The calculations were based on estimated CASS use with cost structures similar to our institution's, and sunk costs were amortized over 10 years. Average operating room time was estimated at 10 hours, with an average of 2 hours saved with CASS. The hourly cost of the operating room to the hospital (including anesthesia and other ancillary costs) was estimated at $4,614/hour. Per case, traditional cases were $46,140, commercial CASS cases were $40,951, and in-house CASS cases were $38,212. Annual in-house CASS costs were $39,590. CASS reduced operating room time, likely due to improved efficiency and accuracy. Our data demonstrate that hospitals with a cost structure similar to ours that perform more than 27 cases of 3D head and neck reconstruction per year can see a financial benefit from developing an in-house CASS program.
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using enough cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlated across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than the high-bandwidth bitmap data constantly generated by the cameras. This approach offers greater flexibility than conventional systems without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
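The essence of the SROM approach can be sketched with a one-dimensional random input: the continuous random variable is replaced by a small set of samples with optimized weights that match its low-order moments, so uncertainty propagation requires only a handful of deterministic model runs. The sketch below is a generic illustration under an assumed Gaussian input and a stand-in model, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Random input X ~ N(1.0, 0.2^2), e.g. an uncertain design imperfection.
mu, sigma, m = 1.0, 0.2, 5
samples = norm.ppf(np.linspace(0.1, 0.9, m), loc=mu, scale=sigma)

# Choose weights so the SROM matches the first few moments of X.
target = np.array([norm.moment(r, loc=mu, scale=sigma) for r in (1, 2, 3)])

def moment_error(w):
    approx = np.array([np.sum(w * samples**r) for r in (1, 2, 3)])
    return np.sum(((approx - target) / target) ** 2)

res = minimize(
    moment_error,
    x0=np.full(m, 1.0 / m),
    bounds=[(0.0, 1.0)] * m,
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
)
weights = res.x

# Propagate through a deterministic model g with only m evaluations.
g = lambda x: np.exp(-x) * x**2          # stand-in for an expensive solver
srom_mean = np.sum(weights * g(samples))
mc_mean = np.mean(g(norm.rvs(mu, sigma, size=200_000, random_state=4)))
print(f"SROM mean: {srom_mean:.4f}   Monte Carlo mean: {mc_mean:.4f}")
```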
A computational imaging target specific detectivity metric
NASA Astrophysics Data System (ADS)
Preece, Bradley L.; Nehmetallah, George
2017-05-01
Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to play a major role in next-generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. Due to the diversity of CI system designs available today or proposed for the near future, there are significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we develop a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. The detectivity metric is therefore designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions of the systems to a minimum.
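The core quantity is easy to state: for a known target signature s in noise with covariance C, the optimal linear matched-filter SNR is sqrt(s^T C^{-1} s). The sketch below computes this in plain NumPy; the signal and covariance are synthetic stand-ins, and none of the paper's standardization details are reproduced.

```python
import numpy as np

def matched_filter_snr(signal, noise_cov):
    """Optimal linear (prewhitened) matched-filter SNR: sqrt(s^T C^{-1} s)."""
    w = np.linalg.solve(noise_cov, signal)   # solve instead of explicit inverse
    return float(np.sqrt(signal @ w))

# Toy example: a known target signature in correlated measurement noise.
rng = np.random.default_rng(0)
s = rng.normal(size=16)                      # target signal in computational space
A = rng.normal(size=(16, 16))
C = A @ A.T + 16 * np.eye(16)                # symmetric positive-definite covariance
print(matched_filter_snr(s, C))
```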
Super-resolution using a light inception layer in convolutional neural network
NASA Astrophysics Data System (ADS)
Mou, Qinyang; Guo, Jun
2018-04-01
Recently, several models based on CNN architectures have achieved great results on the Single Image Super-Resolution (SISR) problem. In this paper, we propose a super-resolution (SR) method using a light inception layer in a convolutional network (LICN). Due to the strong representation ability of our well-designed inception layer, which can learn richer representations with fewer parameters, we can build our model with a shallow architecture that reduces the effect of the vanishing gradients problem and saves computational cost. Our model strikes a balance between computational speed and the quality of the result. Compared with state-of-the-art results, we produce comparable or better results with faster computational speed.
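As a rough illustration of what a parameter-lean inception layer can look like, here is a hedged PyTorch sketch: parallel 1x1/3x3/5x5 branches with 1x1 bottlenecks, concatenated channel-wise. The branch widths and activation are assumptions, not the LICN paper's exact design.

```python
import torch
import torch.nn as nn

class LightInception(nn.Module):
    """Sketch of a 'light' inception layer: 1x1 bottlenecks keep the
    parameter count low before the wider 3x3 and 5x5 convolutions."""
    def __init__(self, in_ch=64, out_ch=64):
        super().__init__()
        b = out_ch // 4
        self.branch1 = nn.Conv2d(in_ch, b, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, b, kernel_size=1),            # bottleneck
            nn.Conv2d(b, b, kernel_size=3, padding=1))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, b, kernel_size=1),            # bottleneck
            nn.Conv2d(b, 2 * b, kernel_size=5, padding=2))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Concatenate branch outputs along the channel dimension.
        return self.act(torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x)], dim=1))

x = torch.randn(1, 64, 32, 32)
print(LightInception()(x).shape)   # torch.Size([1, 64, 32, 32])
```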
Network Community Detection based on the Physarum-inspired Computational Framework.
Gao, Chao; Liang, Mingxin; Li, Xianghua; Zhang, Zili; Wang, Zhen; Zhou, Zhili
2016-12-13
Community detection is a crucial and essential problem in the structural analytics of complex networks, which can help us understand and predict the characteristics and functions of complex networks. Many methods, ranging from optimization-based algorithms to heuristic-based algorithms, have been proposed for solving such a problem. Due to the inherent complexity of identifying network structure, how to design an effective algorithm with higher accuracy and lower computational cost still remains an open problem. Inspired by the computational capability and positive feedback mechanism of the foraging process of Physarum, a large amoeba-like cell consisting of a dendritic network of tube-like pseudopodia, a general Physarum-based computational framework for community detection is proposed in this paper. Based on the proposed framework, the inter-community edges can be distinguished from the intra-community edges in a network, and the positive feedback of the solving process in an algorithm can be further enhanced; these are used to improve the efficiency of original optimization-based and heuristic-based community detection algorithms, respectively. Some typical algorithms (e.g., genetic algorithm, ant colony optimization algorithm, and Markov clustering algorithm) and real-world datasets have been used to evaluate the efficiency of our proposed computational framework. Experiments show that the algorithms optimized by the Physarum-inspired computational framework perform better than the original ones, in terms of accuracy and computational cost. Moreover, a computational complexity analysis verifies the scalability of our framework.
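For readers who want to reproduce the evaluation step, the snippet below shows a common way to score a detected partition with modularity using networkx; it illustrates the kind of accuracy comparison described above, not the Physarum framework itself.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Score a baseline community-detection result on a standard benchmark graph;
# an enhanced algorithm would be compared against its original the same way.
G = nx.karate_club_graph()
communities = greedy_modularity_communities(G)
print(len(communities), "communities, modularity =", modularity(G, communities))
```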
Modeling radiative transfer with the doubling and adding approach in a climate GCM setting
NASA Astrophysics Data System (ADS)
Lacis, A. A.
2017-12-01
The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, due primarily to computational cost. The accurate methods for calculating multiple scattering that are available are far too computationally expensive for climate GCM applications. Two-stream-type radiative transfer approximations may be computationally fast enough, but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method that is being used in the GISS climate GCM, which is an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single Gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-Gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid-box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle dependence.
Are You Afraid of Taking an Online Foreign Language Test?
ERIC Educational Resources Information Center
Garcia Laborda, Jesus; Robles, Valencia
2017-01-01
Computer-based testing has become a prevailing tendency in education. Each year, a large number of students take online language tests all over the world. In fact, these tests are becoming more and more widely used due to their low cost of delivery. However, many students are forced to take them despite their interests, feelings and…
The Brain Computer Interface Future: Time for a Strategy
2013-02-14
electrophysiological activity can be measured by electroencephalography (EEG), electrocorticography (ECoG), magnetoencephalography (MEG), or signal activity...magnetic resonance imaging (MRI) or near infrared spectroscopy. Currently EEG is the most widely used BCI interface due to high temporal...resolution, less user risk, and lower costs.12 EEG technology has been widely available for many decades but has significantly expanded as researchers
Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dekeyser, W., E-mail: Wouter.Dekeyser@kuleuven.be; Reiter, D.; Baelmans, M.
As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
Assessment of the risk due to release of carbon fiber in civil aircraft accidents, phase 2
NASA Technical Reports Server (NTRS)
Pocinki, L.; Cornell, M. E.; Kaplan, L.
1980-01-01
The risk associated with the potential use of carbon fiber composite material in commercial jet aircraft is investigated. A simulation model developed to generate risk profiles for several airports is described. The risk profiles show the probability that the cost due to accidents in any year exceeds a given amount. The computer model simulates aircraft accidents with fire, release of fibers, their downwind transport and infiltration of buildings, equipment failures, and resulting economic impact. The individual airport results were combined to yield the national risk profile.
Total variation-based neutron computed tomography
NASA Astrophysics Data System (ADS)
Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick
2018-05-01
We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We demonstrate the effectiveness of the algorithm in the significantly low-angular-sampling case using synthetic test problems as well as data obtained from a high-flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles is used.
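A hedged sketch of the split Bregman machinery is shown below for the simpler problem of anisotropic TV denoising with periodic boundaries; the paper's tomographic reconstruction adds the projection operator to the data term, and all parameter values here are illustrative.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding, the closed-form solution of the d-subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_split_bregman(f, mu=20.0, lam=2.0, n_iter=50):
    n1, n2 = f.shape
    Dx  = lambda u: np.roll(u, -1, axis=1) - u        # forward differences
    Dy  = lambda u: np.roll(u, -1, axis=0) - u
    DxT = lambda v: np.roll(v,  1, axis=1) - v        # adjoints (periodic)
    DyT = lambda v: np.roll(v,  1, axis=0) - v

    # Eigenvalues of DxT Dx + DyT Dy under periodic boundaries, enabling an
    # exact FFT solve of the quadratic u-subproblem.
    k1 = np.arange(n1)[:, None]; k2 = np.arange(n2)[None, :]
    L = 4 * np.sin(np.pi * k1 / n1) ** 2 + 4 * np.sin(np.pi * k2 / n2) ** 2

    u = f.copy()
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    for _ in range(n_iter):
        rhs = mu * f + lam * (DxT(dx - bx) + DyT(dy - by))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (mu + lam * L)))
        ux, uy = Dx(u), Dy(u)
        dx, dy = shrink(ux + bx, 1.0 / lam), shrink(uy + by, 1.0 / lam)
        bx, by = bx + ux - dx, by + uy - dy           # Bregman updates
    return u

noisy = (np.kron(np.eye(2), np.ones((32, 32)))
         + 0.3 * np.random.default_rng(1).normal(size=(64, 64)))
print(tv_denoise_split_bregman(noisy).std())
```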
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
Hanly, Paul A; Sharp, Linda
2014-03-26
Most measures of the cancer burden take a public health perspective. Cancer also has a significant economic impact on society. To assess this economic burden, we estimated years of potential productive life lost (YPPLL) and costs of lost productivity due to premature cancer-related mortality in Ireland. All cancers combined and the 10 sites accounting for most deaths in men and in women were considered. To compute YPPLL, deaths in 5-year age-bands between 15 and 64 years were multiplied by average working-life expectancy. Valuation of costs, using the human capital approach, involved multiplying YPPLL by age-and-gender specific gross wages, and adjusting for unemployment and workforce participation. Sensitivity analyses were conducted around retirement age and wage growth, labour force participation, employment and discount rates, and to explore the impact of including household production and caring costs. Costs were expressed in €2009. Total YPPLL was lower in men than women (men = 10,873; women = 12,119). Premature cancer-related mortality costs were higher in men (men: total cost = €332 million, cost/death = €290,172, cost/YPPLL = €30,558; women: total cost = €177 million, cost/death = €159,959, cost/YPPLL = €14,628). Lung cancer had the highest premature mortality cost (€84.0 million; 16.5% of total costs), followed by cancers of the colorectum (€49.6 million; 9.7%), breast (€49.4 million; 9.7%) and brain & CNS (€42.4 million: 8.3%). The total economic cost of premature cancer-related mortality in Ireland amounted to €509.5 million or 0.3% of gross domestic product. An increase of one year in the retirement age increased the total all-cancer premature mortality cost by 9.9% for men and 5.9% for women. The inclusion of household production and caring costs increased the total cost to €945.7 million. Lost productivity costs due to cancer-related premature mortality are significant. The higher premature mortality cost in males than females reflects higher wages and rates of workforce participation. Productivity costs provide an alternative perspective on the cancer burden on society and may inform cancer control policy decisions.
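The valuation pipeline described above (deaths multiplied by working-life expectancy, then wages adjusted for labour-market rates) can be summarized in a few lines of Python. All numbers in this sketch are hypothetical placeholders rather than the study's Irish data; it only shows the structure of the human capital calculation.

```python
# Human-capital costing sketch; every figure below is hypothetical.
deaths_by_ageband = {(45, 49): 120, (50, 54): 180, (55, 59): 210}
retirement_age = 65
annual_wage = 35_000.0          # hypothetical gross wage (EUR)
participation = 0.70            # workforce participation rate (hypothetical)
employment = 0.87               # 1 - unemployment rate (hypothetical)
discount = 0.05

total_yppll, total_cost = 0.0, 0.0
for (lo, hi), deaths in deaths_by_ageband.items():
    midpoint = (lo + hi + 1) / 2                 # mid-age of the 5-year band
    working_years = max(retirement_age - midpoint, 0)
    total_yppll += deaths * working_years        # YPPLL = deaths x working years
    # Discounted wage stream per death, adjusted for labour-market rates.
    for year in range(int(working_years)):
        total_cost += (deaths * annual_wage * participation * employment
                       / (1 + discount) ** year)

print(f"YPPLL = {total_yppll:.0f}, cost = EUR {total_cost:,.0f}")
```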
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
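The expensive kernel being discussed, the action of a matrix function on a vector, can be illustrated with SciPy's expm_multiply (based on a scaled truncated-Taylor algorithm). This computes only the exponential, not the full family of phi-functions EPI methods need, and the stiff operator below is a generic 1D Laplacian used as an assumption for demonstration:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# w = exp(t*A) v without ever forming the dense matrix exponential.
n = 1000
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (n + 1) ** 2  # stiff
v = np.sin(np.pi * (np.arange(1, n + 1) / (n + 1)))
w = expm_multiply(1e-4 * A, v)   # action of the matrix exponential on v
print(np.linalg.norm(w))
```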
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around times for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment based on market forces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran
Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
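As one concrete example of what "settling and viscosity models as functions of volume fraction" can look like, here is a hedged sketch using two standard closures (Richardson-Zaki hindered settling and Krieger-Dougherty viscosity); the paper fits its own functions to cellulose-suspension data, so the forms and coefficients below are assumptions.

```python
import numpy as np

def settling_velocity(phi, v0=1e-4, n=4.65):
    """Richardson-Zaki hindered settling speed [m/s] vs solids fraction."""
    return v0 * (1.0 - phi) ** n

def suspension_viscosity(phi, mu0=1e-3, phi_max=0.6, intrinsic=2.5):
    """Krieger-Dougherty effective viscosity [Pa s]; diverges near phi_max."""
    return mu0 * (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

phi = np.linspace(0.0, 0.5, 6)
print(settling_velocity(phi))
print(suspension_viscosity(phi))
```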
Code of Federal Regulations, 2011 CFR
2011-01-01
... of the employee doing the work. (2) For computer searches for records, the direct costs of computer... $15.00.

Fee Amounts Table

Type of fee                   Amount of fee
Manual Search and Review      Prorated salary costs
Computer Search               Direct costs
Photocopy                     $0.15 a page
Other Reproduction Costs      Direct costs
Elective...
Ellingwood, Nathan D; Yin, Youbing; Smith, Matthew; Lin, Ching-Long
2016-04-01
Faster and more accurate methods for image registration are important for research involved in conducting population-based studies that utilize medical imaging, as well as for improvements in clinical applications. We present a novel computation- and memory-efficient multi-level method on graphics processing units (GPU) for performing registration of two computed tomography (CT) volumetric lung images. We developed a computation- and memory-efficient Diffeomorphic Multi-level B-Spline Transform Composite (DMTC) method to implement nonrigid mass-preserving registration of two CT lung images on GPU. The framework consists of a hierarchy of B-Spline control grids of increasing resolution. A similarity criterion known as the sum of squared tissue volume difference (SSTVD) was adopted to preserve lung tissue mass. The use of SSTVD requires the calculation of the tissue volume, the Jacobian, and their derivatives, which makes its implementation on GPU challenging due to memory constraints. The DMTC method enabled reduced computation and memory storage of variables, with minimal communication between the GPU and the Central Processing Unit (CPU), due to the ability to pre-compute values. The method was assessed on six healthy human subjects. The resultant GPU-generated displacement fields were compared against the previously validated CPU counterpart fields, showing good agreement with an average normalized root mean square error (nRMS) of 0.044±0.015. Runtime and performance speedup are compared between single-threaded CPU, multi-threaded CPU, and GPU algorithms. The best performance speedup occurs at the highest resolution in the GPU implementation for the SSTVD cost and cost gradient computations, with a speedup of 112 times that of the single-threaded CPU version and 11 times over the twelve-threaded version when considering average time per iteration, using an Nvidia Tesla K20X GPU. The proposed GPU-based DMTC method outperforms its multi-threaded CPU version in terms of runtime. Total registration time was reduced to 2.9 min with the GPU version, compared to 12.8 min with the twelve-threaded CPU version and 112.5 min with a single-threaded CPU. Furthermore, the GPU implementation discussed in this work can be adapted for use with other cost functions that require calculation of the first derivatives.
An Integrated Data-Driven Strategy for Safe-by-Design Nanoparticles: The FP7 MODERN Project.
Brehm, Martin; Kafka, Alexander; Bamler, Markus; Kühne, Ralph; Schüürmann, Gerrit; Sikk, Lauri; Burk, Jaanus; Burk, Peeter; Tamm, Tarmo; Tämm, Kaido; Pokhrel, Suman; Mädler, Lutz; Kahru, Anne; Aruoja, Villem; Sihtmäe, Mariliis; Scott-Fordsmand, Janeck; Sorensen, Peter B; Escorihuela, Laura; Roca, Carlos P; Fernández, Alberto; Giralt, Francesc; Rallo, Robert
2017-01-01
The development and implementation of safe-by-design strategies is key for the safe development of future generations of nanotechnology-enabled products. The safety testing of the huge variety of nanomaterials that can be synthesized is unfeasible due to time and cost constraints. Computational modeling facilitates the implementation of alternative testing strategies in a time- and cost-effective way. The development of predictive nanotoxicology models requires the use of high-quality experimental data on the structure, physicochemical properties and bioactivity of nanomaterials. The FP7 Project MODERN has developed and evaluated the main components of a computational framework for the evaluation of the environmental and health impacts of nanoparticles. This chapter describes each of the elements of the framework, including aspects related to data generation, management and integration; development of nanodescriptors; establishment of nanostructure-activity relationships; identification of nanoparticle categories; hazard ranking and risk assessment.
The 3D Hough Transform for plane detection in point clouds: A review and a new accumulator design
NASA Astrophysics Data System (ADS)
Borrmann, Dorit; Elseberg, Jan; Lingemann, Kai; Nüchter, Andreas
2011-03-01
The Hough Transform is a well-known method for detecting parameterized objects. It is the de facto standard for detecting lines and circles in 2-dimensional data sets. For 3D data it has attracted little attention so far. Even for the 2D case, high computational costs have led to the development of numerous variations of the Hough Transform. In this article we evaluate different variants of the Hough Transform with respect to their applicability to detect planes in 3D point clouds reliably. Apart from computational costs, the main problem is the representation of the accumulator. Usual implementations favor geometrical objects with certain parameters due to uneven sampling of the parameter space. We present a novel approach to designing the accumulator that focuses on achieving the same size for each cell, and compare it to existing designs.
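To illustrate the design goal of equally sized accumulator cells, the sketch below bins plane normals into latitude bands whose longitude-bin counts shrink toward the poles. It conveys the idea of an (approximately) equal-area accumulator; the band and bin counts are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def build_bands(n_bands=32):
    """Latitude bands with longitude-bin counts proportional to cos(lat),
    so cells near the poles do not get oversampled as in a naive grid."""
    edges = np.linspace(-np.pi / 2, np.pi / 2, n_bands + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    bins_per_band = np.maximum(1, np.round(2 * n_bands * np.cos(centers)).astype(int))
    return edges, bins_per_band

def cell_index(normal, edges, bins_per_band):
    """Map a unit plane normal to its (band, longitude-bin) accumulator cell."""
    lat = np.arcsin(np.clip(normal[2], -1.0, 1.0))
    lon = np.arctan2(normal[1], normal[0])
    band = max(min(np.searchsorted(edges, lat) - 1, len(bins_per_band) - 1), 0)
    nb = bins_per_band[band]
    return band, int((lon + np.pi) / (2 * np.pi) * nb) % nb

edges, bins_per_band = build_bands()
print(cell_index(np.array([0.0, 0.0, 1.0]), edges, bins_per_band))
print(bins_per_band[:5], bins_per_band[len(bins_per_band) // 2])
```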
From Phonomecanocardiography to Phonocardiography computer aided
NASA Astrophysics Data System (ADS)
Granados, J.; Tavera, F.; López, G.; Velázquez, J. M.; Hernández, R. T.; López, G. A.
2017-01-01
Because many doctors lack the training to identify heart disorders by conventional auscultation, it is necessary to add an objective and methodological analysis to support this technique. To obtain information on the performance of the heart and diagnose heart disease through a simple, cost-effective procedure based on a data acquisition system, we have obtained phonocardiograms (PCG), which are graphic records of the sounds emitted by the heart. A program of acoustic, visual and artificial vision recognition was elaborated to interpret them. Based on the results of previous research by cardiologists, a code for interpreting PCGs and associated diseases was elaborated. A site for experimental sampling of cardiac data was also created within the university campus. Computer-aided phonocardiography is a viable, low-cost procedure that provides additional medical information for diagnosing complex heart diseases. We show some preliminary results.
Pupillary dynamics reveal computational cost in sentence planning.
Sevilla, Yamila; Maldonado, Mora; Shalóm, Diego E
2014-01-01
This study investigated the computational cost associated with grammatical planning in sentence production. We measured people's pupillary responses as they produced spoken descriptions of depicted events. We manipulated the syntactic structure of the target by training subjects to use different types of sentences following a colour cue. The results showed a greater increase in pupil size for the production of passive and object-dislocated sentences than for active canonical subject-verb-object sentences, indicating that more cognitive effort is associated with more complex noncanonical thematic order. We also manipulated the time at which the cue that triggered structure-building processes was presented. The differential increase in pupil diameter for more complex sentences was shown to rise earlier as the colour cue was presented earlier, suggesting that the observed pupillary changes are due to differential demands in relatively independent structure-building processes during grammatical planning. Task-evoked pupillary responses provide a reliable measure to study the cognitive processes involved in sentence production.
Optimized Quasi-Interpolators for Image Reconstruction.
Sacht, Leonardo; Nehab, Diego
2015-12-01
We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.
Quantifying uncertainty and computational complexity for pore-scale simulations
NASA Astrophysics Data System (ADS)
Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.
2016-12-01
Pore-scale simulation is an essential tool to understand the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. However, in practice, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters and hence the prediction results uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely-resolved spatio-temporal scales, which further limits our data/sample collection. To address those challenges, we propose a novel framework based on the generalized polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. To be specific, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
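A minimal, one-dimensional flavor of the gPC surrogate idea is sketched below: fit probabilists' Hermite coefficients to a handful of model runs with a standard-normal input, then read the mean and variance directly off the coefficients instead of running a large Monte Carlo ensemble. The "model" and sample counts are stand-ins.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander, hermeval
from math import factorial

def model(xi):                       # expensive simulator stand-in
    return np.exp(0.3 * xi) + 0.1 * xi**2

deg = 5
xi_train = np.random.default_rng(2).normal(size=30)   # few realizations
coeffs, *_ = np.linalg.lstsq(hermevander(xi_train, deg),
                             model(xi_train), rcond=None)

# Orthogonality of He_k under N(0,1): E[He_j He_k] = k! * delta_jk,
# so statistics come straight from the coefficients.
mean = coeffs[0]
var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, deg + 1))
print(f"surrogate mean={mean:.4f}, var={var:.4f}")
print("surrogate at xi=1:", hermeval(1.0, coeffs))
```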
A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation
NASA Astrophysics Data System (ADS)
Qiang, Z.; Zeng, L.; Wu, L.
2016-12-01
Due to the strong spatial heterogeneity of landfill, uncertainty is ubiquitous in the gas transport process in landfill. To accurately characterize the landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a liquid-gas coupling model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of Kalman filter to assimilate the current gas pressure data. To further reduce the computation cost, the functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained for PCE. Illustrated with numerical case studies, this proposed method shows significant superiority in computation efficiency compared with the traditional MC based iterative EnKF. The developed method has promising potential in reliable prediction and management of landfill gas production.
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), which is one of the highest-resolution localization methods. When using MUSIC, a computation of the eigenvectors of a correlation matrix is required for the estimation, which often incurs high computational costs. Especially in the situation of a moving source, this becomes a crucial drawback because the estimation must be conducted at every observation time. Moreover, since the correlation matrix varies its characteristics due to spatial-temporal non-stationarity, the matrix has to be estimated using only a few observed samples, which degrades the estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) is applied for sequentially estimating the eigenvectors spanning the subspace. In the PAST, the eigen-decomposition is not required, and therefore it is possible to reduce the computational costs. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.
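For orientation, the sketch below implements textbook narrowband MUSIC for a uniform linear array; the eigen-decomposition of the correlation matrix it performs is precisely the repeated cost that the PAST tracker is introduced to avoid. The array geometry, SNR, and search grid are assumptions.

```python
import numpy as np

m, n_src, snap = 8, 1, 200
d = 0.5                                    # element spacing in wavelengths
theta_true = np.deg2rad(20.0)
rng = np.random.default_rng(3)

a = lambda th: np.exp(-2j * np.pi * d * np.arange(m) * np.sin(th))  # steering
S = rng.normal(size=snap) + 1j * rng.normal(size=snap)              # source
X = np.outer(a(theta_true), S) + 0.1 * (rng.normal(size=(m, snap))
                                        + 1j * rng.normal(size=(m, snap)))
R = X @ X.conj().T / snap                  # sample correlation matrix
w, V = np.linalg.eigh(R)                   # ascending eigenvalues
En = V[:, : m - n_src]                     # noise subspace

# MUSIC pseudospectrum peaks where the steering vector is orthogonal
# to the noise subspace.
grid = np.deg2rad(np.linspace(-90, 90, 721))
p = [1.0 / np.linalg.norm(En.conj().T @ a(th)) ** 2 for th in grid]
print("estimated DOA [deg]:", np.rad2deg(grid[int(np.argmax(p))]))
```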
Nenov, Artur; Mukamel, Shaul; Garavelli, Marco; Rivalta, Ivan
2015-08-11
First-principles simulations of two-dimensional electronic spectroscopy in the ultraviolet region (2DUV) require computationally demanding multiconfigurational approaches that can resolve doubly excited and charge transfer states, the spectroscopic fingerprints of coupled UV-active chromophores. Here, we propose an efficient approach to reduce the computational cost of accurate simulations of 2DUV spectra of benzene, phenol, and their dimer (i.e., the minimal models for studying electronic coupling of UV-chromophores in proteins). We first establish the multiconfigurational recipe with the highest accuracy by comparison with experimental data, providing reference gas-phase transition energies and dipole moments that can be used to construct exciton Hamiltonians involving high-lying excited states. We show that by reducing the active spaces and the number of configuration state functions within restricted active space schemes, the computational cost can be significantly decreased without loss of accuracy in predicting 2DUV spectra. The proposed recipe has been successfully tested on a realistic model proteic system in water. Accounting for line broadening due to thermal and solvent-induced fluctuations allows for direct comparison with experiments.
Transitioning EEG experiments away from the laboratory using a Raspberry Pi 2.
Kuziek, Jonathan W P; Shienh, Axita; Mathewson, Kyle E
2017-02-01
Electroencephalography (EEG) experiments are typically performed in controlled laboratory settings to minimise noise and produce reliable measurements. These controlled conditions also reduce the applicability of the obtained results to more varied environments and may limit their relevance to everyday situations. Advances in computer portability may increase the mobility and applicability of EEG results while decreasing costs. In this experiment we show that stimulus presentation using a Raspberry Pi 2 computer provides a low cost, reliable alternative to a traditional desktop PC in the administration of EEG experimental tasks. Significant and reliable MMN and P3 activity, typical event-related potentials (ERPs) associated with an auditory oddball paradigm, were measured while experiments were administered using the Raspberry Pi 2. While latency differences in ERP triggering were observed between systems, these differences reduced power only marginally, likely due to the reduced processing power of the Raspberry Pi 2. An auditory oddball task administered using the Raspberry Pi 2 produced similar ERPs to those derived from a desktop PC in a laboratory setting. Despite temporal differences and slight increases in trials needed for similar statistical power, the Raspberry Pi 2 can be used to design and present auditory experiments comparable to a PC. Our results show that the Raspberry Pi 2 is a low cost alternative to the desktop PC when administering EEG experiments and, due to its small size and low power consumption, will enable mobile EEG experiments unconstrained by a traditional laboratory setting.
GPU-based High-Performance Computing for Radiation Therapy
Jia, Xun; Ziegenhein, Peter; Jiang, Steve B.
2014-01-01
Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this article, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms is also presented. PMID:24486639
Control mechanism of double-rotator-structure ternary optical computer
NASA Astrophysics Data System (ADS)
Kai, SONG; Liping, YAN
2017-03-01
Double-rotator-structure ternary optical processor (DRSTOP) has two key characteristics, namely, giant data-bit parallel computing and a reconfigurable processor; it can handle thousands of data bits in parallel and can run much faster than electronic computers and other optical computing systems developed so far. In order to put DRSTOP into practical application, this paper establishes a series of methods, namely, a task classification method, a data-bit allocation method, a control information generation method, a control information formatting and sending method, and a method for obtaining decoded results. These methods form the control mechanism of DRSTOP, which makes DRSTOP an automated computing platform. Compared with traditional calculation tools, the DRSTOP computing platform can ease the contradiction between high energy consumption and big-data computing by greatly reducing the cost of communications and I/O. Finally, the paper designed a set of experiments for the DRSTOP control mechanism to verify its feasibility and correctness. Experimental results showed that the control mechanism is correct, feasible and efficient.
Burden of suicide in Poland in 2012: how could it be measured and how big is it?
Orlewska, Katarzyna; Orlewska, Ewa
2018-04-01
The aim of our study was to estimate the health-related and economic burden of suicide in Poland in 2012 and to demonstrate the effects of using different assumptions on the disease burden estimation. Years of life lost (YLL) were calculated by multiplying the number of deaths by the remaining life expectancy. Local expected YLL (LEYLL) and standard expected YLL (SEYLL) were computed using Polish life expectancy tables and WHO standards, respectively. In the base case analysis LEYLL and SEYLL were computed with 3.5 and 0% discount rates, respectively, and no age-weighting. Premature mortality costs were calculated using a human capital approach, with discounting at 5%, and are reported in Polish zloty (PLN) (1 euro = 4.3 PLN). The impact of applying different assumptions on base-case estimates was tested in sensitivity analyses. The total LEYLLs and SEYLLs due to suicide were 109,338 and 279,425, respectively, with 88% attributable to male deaths. The cost of male premature mortality (2,808,854,532 PLN) was substantially higher than for females (177,852,804 PLN). Discounting and age-weighting have a large effect on the base case estimates of LEYLLs. The greatest impact on the estimates of suicide-related premature mortality costs was due to the value of the discount rate. Our findings provide quantitative evidence on the burden of suicide. In our opinion each of the demonstrated methods brings something valuable to the evaluation of the impact of suicide on a given population, but LEYLLs and premature mortality costs estimated according to national guidelines have the potential to be useful for local public health policymakers.
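The strong sensitivity to the discount rate reported above follows directly from the discounted-YLL formula: each death contributing L remaining years counts only (1 - e^(-rL))/r discounted years. The sketch below uses a hypothetical life expectancy to show the size of the effect.

```python
import numpy as np

def yll_per_death(life_expectancy, r):
    """Discounted years of life lost per death (continuous discounting)."""
    if r == 0:
        return life_expectancy
    return (1 - np.exp(-r * life_expectancy)) / r

L = 45.0   # remaining life expectancy at the age of death (hypothetical)
for r in (0.0, 0.035):
    print(f"r={r:.3f}: YLL per death = {yll_per_death(L, r):.1f}")
# r=0.000 gives 45.0 years; r=0.035 gives ~22.7 years -- roughly half,
# which is why the discount rate dominates the sensitivity analyses.
```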
The Hidden Costs of Owning a Microcomputer.
ERIC Educational Resources Information Center
McDole, Thomas L.
Before purchasing computer hardware, individuals must consider the costs associated with the setup and operation of a microcomputer system. Included among the initial costs of purchasing a computer are the costs of the computer, one or more disk drives, a monitor, and a printer as well as the costs of such optional peripheral devices as a plotter…
Faries, Douglas E; Nyhuis, Allen W; Ascher-Svanum, Haya
2009-01-01
Background: Schizophrenia is a severe, chronic, and costly illness that adversely impacts patients' lives and health care payer budgets. Cost comparisons of treatment regimens are, therefore, important to health care payers and researchers. Pre-Post analyses ("mirror-image"), where outcomes prior to a medication switch are compared to outcomes post-switch, are commonly used in such research. However, medication changes often occur during a costly crisis event. Patients may relapse, be hospitalized, have a medication change, and then spend a period of time with intense use of costly resources (post-medication switch). While many advantages and disadvantages of Pre-Post methodology have been discussed, issues regarding the attributability of costs incurred around the time of medication switching have not been fully investigated.

Methods: Medical resource use data, including medications and acute-care services (hospitalizations, partial hospitalizations, emergency department) were collected for patients with schizophrenia who switched antipsychotics (n = 105) during a 1-year randomized, naturalistic, antipsychotic cost-effectiveness schizophrenia trial. Within-patient changes in total costs per day were computed during the pre- and post-medication change periods. In addition to the standard Pre-Post analysis comparing costs pre- and post-medication change, we investigated the sensitivity of results to varying assumptions regarding the attributability of acute care service costs occurring just after a medication switch that were likely due to initial medication failure.

Results: Fifty-six percent of all costs incurred during the first week on the newly initiated antipsychotic were likely due to treatment failure with the previous antipsychotic. Standard analyses suggested an average increase in cost-per-day for each patient of $2.40 after switching medications. However, sensitivity analyses removing costs incurred post-switch that were potentially due to the failure of the initial medication suggested decreases in costs in the range of $4.77 to $9.69 per day post-switch.

Conclusion: Pre-Post cost analyses are sensitive to the approach used to handle acute-service costs occurring just after a medication change. Given the importance of quality economic research on the cost of switching treatments, thorough sensitivity analyses should be performed to identify the impact of crisis events around the time of medication change. PMID:19473545
Filippini, D; Tejle, K; Lundström, I
2005-08-15
The computer screen photo-assisted technique (CSPT), a method for substance classification based on spectral fingerprinting that involves just a computer screen and a web camera as the measuring platform, is used here for the evaluation of a prospective enzyme-linked immunosorbent assay (ELISA). An anti-neutrophil cytoplasm antibodies (ANCA) ELISA test, typically used for diagnosing patients suffering from chronic inflammatory disorders of the skin, joints, blood vessels and other tissues, is comparatively tested with a standard microplate reader and CSPT, yielding equivalent results at a fraction of the instrumental cost. The CSPT approach is discussed as a distributed measuring platform allowing decentralized measurements in routine applications, while keeping centralized information management due to its naturally network-embedded operation.
Reducing the cost of using collocation to compute vibrational energy levels: Results for CH2NH.
Avila, Gustavo; Carrington, Tucker
2017-08-14
In this paper, we improve the collocation method for computing vibrational spectra that was presented in the work of Avila and Carrington, Jr. [J. Chem. Phys. 143, 214108 (2015)]. Known quadrature and collocation methods using a Smolyak grid require storing intermediate vectors with more elements than points on the Smolyak grid. This is due to the fact that grid labels are constrained among themselves and basis labels are constrained among themselves. We show that by using the so-called hierarchical basis functions, one can significantly reduce the memory required. In this paper, the intermediate vectors have only as many elements as the Smolyak grid. The ideas are tested by computing energy levels of CH2NH.
Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.
Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro
2018-04-16
In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total amount of computational complexity severely increases with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that using the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.
NASA Astrophysics Data System (ADS)
Abrahams, Rachel
2017-06-01
Intermediate alloy steels are widely used in applications where both high strength and toughness are required for extreme/dynamic loading environments. Steels containing greater than 10% Ni-Co-Mo are amongst the highest-strength martensitic steels, due to their high levels of solution strengthening and preservation of toughness through nano-scaled, secondary-hardening, semi-coherent hcp-M2C carbides. While these steels have high yield strengths (σy(0.2%) > 1200 MPa) with high impact toughness values (CVN at -40 °C > 30 J), they are often cost-prohibitive due to the material and processing cost of nickel and cobalt. Early stage-I steels such as ES-1 (Eglin Steel) were developed in response to the high cost of nickel-cobalt steels and performed well in extreme shock environments due to the presence of analogous nano-scaled hcp-Fe2.4C epsilon carbides. Unfortunately, the persistence of W-bearing carbides limited the use of ES-1 to relatively thin sections. In this study, we discuss the background and accelerated development cycle of AF96, an alternative Cr-Mo-Ni-Si stage-I temper steel, using low-cost heuristic and Integrated Computational Materials Engineering (ICME)-assisted methods. The microstructure of AF96 was tailored to mimic that of ES-1, while reducing the stability of detrimental phases and improving ease of processing in industrial environments. AF96 is amenable to casting and forging, deeply hardenable, and scalable to 100,000 kg melt quantities. When produced at the industrial scale, it was found that AF96 exhibits near-statistically identical mechanical properties to ES-1 at 50% of the cost.
NASA Technical Reports Server (NTRS)
Kim, B. F.; Moorjani, K.; Phillips, T. E.; Adrian, F. J.; Bohandy, J.; Dolecek, Q. E.
1993-01-01
A method for characterization of granular superconducting thin films has been developed which encompasses both the morphological state of the sample and its fabrication process parameters. The broad scope of this technique is due to the synergism between experimental measurements and their interpretation using numerical simulation. Two novel technologies form the substance of this system: the magnetically modulated resistance method for characterizing superconductors; and a powerful new computer peripheral, the Parallel Information Processor card, which provides enhanced computing capability for PC computers. This enhancement allows PC computers to operate at speeds approaching that of supercomputers. This makes atomic scale simulations possible on low cost machines. The present development of this system involves the integration of these two technologies using mesoscale simulations of thin film growth. A future stage of development will incorporate atomic scale modeling.
Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon
2018-01-01
Many modern applications of AI, such as web search, mobile browsing, image processing, and natural language processing, rely on finding similar items from a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
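A minimal random-hyperplane LSH sketch is given below for cosine similarity: items with identical sign patterns share a bucket, so candidate neighbors are retrieved without an all-pairs scan. The hashing family, sizes, and single-table setup are illustrative; the paper evaluates four LSH variants on Hadoop, and in practice multiple hash tables are used to boost recall.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n_bits, n_items = 64, 16, 10_000
H = rng.normal(size=(n_bits, dim))            # random hyperplanes

def signature(x):
    """Sign pattern across the hyperplanes = hashable bucket key."""
    return tuple((H @ x > 0).astype(np.int8))

buckets = {}
items = rng.normal(size=(n_items, dim))
for i, x in enumerate(items):
    buckets.setdefault(signature(x), []).append(i)

# A near-duplicate of item 0 very likely lands in the same bucket.
query = items[0] + 0.01 * rng.normal(size=dim)
candidates = buckets.get(signature(query), [])
print(len(buckets), "buckets;", "item 0 in candidates:", 0 in candidates)
```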
GT-WGS: an efficient and economic tool for large-scale WGS analyses based on the AWS cloud service.
Wang, Yiqi; Li, Gen; Ma, Mark; He, Fazhong; Song, Zhuo; Zhang, Wei; Wu, Chengkun
2018-01-19
Whole-genome sequencing (WGS) plays an increasingly important role in clinical practice and public health. Due to the big data size, WGS data analysis is usually compute-intensive and IO-intensive. Currently it usually takes 30 to 40 h to finish a 50× WGS analysis task, which is far from the ideal speed required by the industry. Furthermore, the high-end infrastructure required by WGS computing is costly in terms of time and money. In this paper, we aim to improve the time efficiency of WGS analysis and minimize the cost by elastic cloud computing. We developed a distributed system, GT-WGS, for large-scale WGS analyses utilizing Amazon Web Services (AWS). Our system won the first prize in the Wind and Cloud challenge held by the Genomics and Cloud Technology Alliance (GCTA) conference committee. The system makes full use of the dynamic pricing mechanism of AWS. We evaluated the performance of GT-WGS with a 55× WGS dataset (400GB fastq) provided by the GCTA 2017 competition. In the best case, it took only 18.4 min to finish the analysis and the AWS cost of the whole process was only 16.5 US dollars. The accuracy of GT-WGS is 99.9% consistent with that of the Genome Analysis Toolkit (GATK) best practice. We also evaluated the performance of GT-WGS on a real-world dataset provided by the XiangYa hospital, which consists of a 5× whole-genome dataset with 500 samples; on average, GT-WGS managed to finish one 5× WGS analysis task in 2.4 min at a cost of $3.6. WGS is already playing an important role in guiding therapeutic intervention. However, its application is limited by time cost and computing cost. GT-WGS excels as an efficient and affordable WGS analysis tool to address this problem. The demo video and supplementary materials of GT-WGS can be accessed at https://github.com/Genetalks/wgs_analysis_demo.
An effective and secure key-management scheme for hierarchical access control in E-medicine system.
Odelu, Vanga; Das, Ashok Kumar; Goswami, Adrijit
2013-04-01
Recently, several hierarchical access control schemes have been proposed in the literature to provide security for e-medicine systems. However, most of them are either insecure against the man-in-the-middle attack or require high storage and computational overheads. Wu and Chen proposed a key management method to solve dynamic access control problems in a user hierarchy based on a hybrid cryptosystem. Though their scheme improves computational efficiency over Nikooghadam et al.'s approach, it suffers from large storage space for public parameters in the public domain and computational inefficiency due to costly elliptic curve point multiplication. Recently, Nikooghadam and Zakerolhosseini showed that Wu-Chen's scheme is vulnerable to the man-in-the-middle attack. In order to remedy this security weakness in Wu-Chen's scheme, they proposed a secure scheme which is again based on ECC (elliptic curve cryptography) and an efficient one-way hash function. However, their scheme incurs a huge computational cost for providing verification of public information in the public domain, as it uses an ECC digital signature, which is costly when compared to a symmetric-key cryptosystem. In this paper, we propose an effective access control scheme in a user hierarchy which is based only on a symmetric-key cryptosystem and an efficient one-way hash function. We show that our scheme significantly reduces the storage space for both public and private domains, and the computational complexity, when compared to Wu-Chen's scheme, Nikooghadam-Zakerolhosseini's scheme, and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against different attacks, including the man-in-the-middle attack. Moreover, dynamic access control problems are also solved efficiently in our scheme compared to other related schemes, making our scheme much more suitable for practical applications of e-medicine systems.
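The symmetric-key building block such schemes rest on can be shown in a few lines: deriving each child key from its parent with a one-way hash lets access flow down the hierarchy while making upward key recovery infeasible. This is a generic textbook construction under assumed names, not the paper's full scheme (which adds public parameters, dynamic updates, and formal security arguments).

```python
import hashlib

def derive_key(parent_key: bytes, child_id: str) -> bytes:
    """One-way derivation: parent can compute child keys, not vice versa."""
    return hashlib.sha256(parent_key + child_id.encode()).digest()

root = hashlib.sha256(b"master-secret-demo").digest()   # hypothetical root key
k_cardiology = derive_key(root, "dept:cardiology")
k_doctor = derive_key(k_cardiology, "role:doctor")

# The department can recompute the doctor's key on demand...
assert derive_key(k_cardiology, "role:doctor") == k_doctor
# ...but inverting SHA-256 to climb from k_doctor back to k_cardiology is
# computationally infeasible, which is what enforces the hierarchy.
print(k_doctor.hex()[:16])
```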
Personal Computer Based Controller For Switched Reluctance Motor Drives
NASA Astrophysics Data System (ADS)
Mang, X.; Krishnan, R.; Adkar, S.; Chandramouli, G.
1987-10-01
The switched reluctance motor (SRM) has recently gained considerable attention in the variable speed drive market. Two important factors that have contributed to this are the simplicity of construction and the possibility of developing low-cost controllers with a minimum number of switching devices in the drive circuits. This is mainly due to the state-of-the-art of present digital circuits technology and the low cost of switching devices. The control of this motor drive is under research. Optimized performance of the SRM drive is very dependent on the integration of the controller, converter and the motor. This research on system integration involves considerable changes in the control algorithms and their implementation. A personal computer (PC) based controller is very appropriate for this purpose. Accordingly, the present paper is concerned with the design of a PC-based controller for an SRM. The PC allows for real-time microprocessor control with the possibility of on-line system parameter modifications. Software reconfiguration of this controller is easier than for a hardware-based controller. User friendliness is a natural consequence of such a system. Considering the low cost of PCs, this controller offers an excellent cost-effective means of studying control strategies for the SRM drive in greater detail than in the past.
Potential field cellular automata model for pedestrian flow
NASA Astrophysics Data System (ADS)
Zhang, Peng; Jian, Xiao-Xia; Wong, S. C.; Choi, Keechoo
2012-02-01
This paper proposes a cellular automata model of pedestrian flow that defines a cost potential field, which takes into account the costs of travel time and discomfort, for a pedestrian to move to an empty neighboring cell. The formulation is based on a reconstruction of the density distribution and the underlying physics, including the rule for resolving conflicts, which is comparable to that in the floor field cellular automaton model. However, we assume that each pedestrian is familiar with the surroundings, thereby minimizing his or her instantaneous cost. This, in turn, helps reduce the randomness in selecting a target cell, improving on existing cellular automata models as well as the computational efficiency. In the presence of two pedestrian groups, which are distinguished by their destinations, the cost distribution for each group is magnified due to the strong interaction between the two groups. As a typical phenomenon, the formation of lanes in the counter flow is reproduced.
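A hedged sketch of the basic update rule follows: each pedestrian moves to the empty neighboring cell with the lowest cost potential, and simultaneous claims on a cell are resolved randomly, loosely mirroring the conflict rule mentioned above. The static distance-to-exit potential here stands in for the paper's full travel-time-plus-discomfort field.

```python
import numpy as np

def step(occupancy, potential):
    """One CA update: move each pedestrian toward lower cost potential."""
    moves = {}
    n1, n2 = occupancy.shape
    for i, j in zip(*np.nonzero(occupancy)):
        best, best_cost = (i, j), potential[i, j]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n1 and 0 <= nj < n2 and not occupancy[ni, nj]:
                if potential[ni, nj] < best_cost:
                    best, best_cost = (ni, nj), potential[ni, nj]
        moves.setdefault(best, []).append((i, j))
    new = np.zeros_like(occupancy)
    rng = np.random.default_rng()
    for target, claimants in moves.items():
        winner = claimants[rng.integers(len(claimants))]  # resolve conflicts
        new[target] = 1
        for who in claimants:
            if who != winner:
                new[who] = 1                              # losers stay put
    return new

n = 10
pot = np.fromfunction(lambda i, j: np.hypot(i - 0, j - (n - 1)), (n, n))
occ = np.zeros((n, n), dtype=int); occ[5:8, 0:2] = 1
print(int(occ.sum()), int(step(occ, pot).sum()))          # pedestrians conserved
```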
DEEP-SaM - Energy-Efficient Provisioning Policies for Computing Environments
NASA Astrophysics Data System (ADS)
Bodenstein, Christian; Püschel, Tim; Hedwig, Markus; Neumann, Dirk
The cost of electricity for datacenters is a substantial operational cost that can and should be managed, not only to save energy, but also due to the ecological commitment inherent in power consumption. Often, pursuing this goal results in chronic underutilization of resources, a luxury most resource providers do not have in light of their corporate commitments. This work proposes, formalizes and numerically evaluates DEEP-SaM for clearing provisioning markets, based on the maximization of welfare, subject to utility-level-dependent energy costs and customer satisfaction levels. We focus specifically on linear power models and the implications of the inherent fixed costs related to the energy consumption of modern datacenters and cloud environments. We rigorously test the model by running multiple simulation scenarios and evaluate the results critically. We conclude with positive results and implications for the long-term sustainable management of modern datacenters.
NASA Astrophysics Data System (ADS)
Maity, H.; Biswas, A.; Bhattacharjee, A. K.; Pal, A.
In this paper, we have proposed a design of a quantum-cost (QC) optimized 4-bit reversible universal shift register (RUSR) using a reduced number of reversible logic gates. The proposed design is very useful in quantum computing due to its low QC, small number of reversible logic gates and low delay. The QC, number of gates, and garbage outputs (GOs) are 64, 8 and 16, respectively, for the proposed work. The improvement over previous work is also presented: the QC is improved by 5.88% to 70.9% and the gate count by 60% to 83.33%, compared with the latest reported results.
Cost Considerations in Nonlinear Finite-Element Computing
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.
1985-01-01
Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. Paper evaluates the computational efficiency of different computer architecture types in terms of relative cost and computing time.
Schnapauff, D; Collettini, F; Steffen, I; Wieners, G; Hamm, B; Gebauer, B; Maurer, M H
2016-02-25
To analyse and compare the costs of hepatic tumor ablation with computed tomography (CT)-guided high-dose-rate brachytherapy (CT-HDRBT) and CT-guided radiofrequency ablation (CT-RFA) as two alternative minimally invasive treatment options for hepatocellular carcinoma (HCC). An activity-based process model was created determining the working steps and required staff for CT-RFA and CT-HDRBT. Prorated costs of equipment use (purchase, depreciation, and maintenance), costs of staff, and expenditure for disposables were identified in a sample of 20 patients (10 treated by CT-RFA and 10 by CT-HDRBT) and compared. A sensitivity and break-even analysis was performed to analyse the dependence of costs on the number of patients treated annually with both methods. Costs of CT-RFA were nearly stable, with mean overall costs of approximately 1909 €, 1847 €, 1816 € and 1801 € per patient when treating 25, 50, 100 or 200 patients annually, as the main factor influencing the costs of this procedure was the single-use RFA probe. Mean costs of CT-HDRBT decreased significantly per ablation with a rising number of patients treated annually, with prorated costs of 3442 €, 1962 €, 1222 € and 852 € when treating 25, 50, 100 or 200 patients, due to the low costs of single-use disposables compared with high annual fixed costs, which decrease proportionally per patient with a higher number of patients treated annually. A break-even between both methods was reached when treating at least 55 patients annually. Although CT-HDRBT is a more complex procedure with more staff involved, it can be performed at lower costs per patient from the perspective of the medical provider when treating more than 55 patients compared with CT-RFA, mainly due to lower costs for disposables and a decreasing share of fixed costs with an increasing number of treatments.
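The reported break-even behaves like a simple fixed-versus-variable cost split. Back-solving the four per-patient figures quoted above against cost(n) = variable + fixed/n gives roughly 1786 € + 3086 €/n for CT-RFA and 482 € + 74,000 €/n for CT-HDRBT (an illustrative fit, not the study's internal accounting), which reproduces the stated break-even:

    # Per-patient cost model cost(n) = variable + fixed / n, with the two
    # parameters back-solved from the per-patient figures in the abstract.
    rfa   = {"variable": 1786.0, "fixed":  3086.0}   # euros (illustrative fit)
    hdrbt = {"variable":  482.0, "fixed": 74000.0}

    def per_patient(m, n):
        return m["variable"] + m["fixed"] / n

    n = 1
    while per_patient(hdrbt, n) > per_patient(rfa, n):   # find break-even caseload
        n += 1
    print(n)  # -> 55, matching the reported break-even of 55 patients annually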
ERIC Educational Resources Information Center
Casey, James B.
1998-01-01
Explains how a public library can compute the actual cost of distributing tax forms to the public by listing all direct and indirect costs and demonstrating the formulae and necessary computations. Supplies directions for calculating costs involved for all levels of staff as well as associated public relations efforts, space, and utility costs.…
Gilles, Luc; Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Ellerbroek, Brent
2013-05-01
This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt. 45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG's computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A 28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF's cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF's sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.
QM/MM free energy simulations: recent progress and challenges
Lu, Xiya; Fang, Dong; Ito, Shingo; Okamoto, Yuko; Ovchinnikov, Victor
2016-01-01
Due to the higher computational cost relative to pure molecular mechanical (MM) simulations, hybrid quantum mechanical/molecular mechanical (QM/MM) free energy simulations particularly require a careful balancing of computational cost and accuracy. Here we review several recent developments in free energy methods most relevant to QM/MM simulations and discuss several topics motivated by these developments, using simple but informative examples that involve processes in water. For chemical reactions, we highlight the value of invoking enhanced sampling techniques (e.g., replica exchange) in umbrella sampling calculations and the value of including collective environmental variables (e.g., hydration level) in metadynamics simulations; we also illustrate the sensitivity of string calculations, especially of the free energy along the path, to various parameters in the computation. Alchemical free energy simulations with a specific thermodynamic cycle are used to probe the effect of including the first solvation shell in the QM region when computing solvation free energies. For cases where high-level QM/MM potential functions are needed, we analyze two different approaches: the QM/MM-MFEP method of Yang and co-workers and perturbative corrections to low-level QM/MM free energy results. For the examples analyzed here, both approaches seem productive, although care needs to be exercised when analyzing the perturbative corrections. PMID:27563170
Software Solution Saves Dollars
ERIC Educational Resources Information Center
Trotter, Andrew
2004-01-01
This article discusses computer software that can give classrooms and computer labs the capabilities of costly PCs at a small fraction of the cost. A growing number of cost-conscious school districts are finding budget relief in low-cost computer software known as "open source" that can do everything from manage school Web sites to equip…
Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration
NASA Astrophysics Data System (ADS)
Zhou, Jian; Qi, Jinyi
2011-03-01
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
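A sketch of how a factored system matrix is applied, with scipy sparse placeholders standing in for the sinogram-blurring, geometric-projection, and image-blurring factors (all shapes, densities, and names here are illustrative, not the paper's fitted matrices):

    import numpy as np
    from scipy import sparse

    n_pix, n_lor = 128 * 128, 20000   # image voxels, lines of response (illustrative)

    B_img  = sparse.random(n_pix, n_pix, density=1e-4, format="csr")  # image blurring
    G      = sparse.random(n_lor, n_pix, density=1e-4, format="csr")  # line-integral projector
    B_sino = sparse.random(n_lor, n_lor, density=1e-4, format="csr")  # detector response

    def forward(x):
        # Forward projection image -> sinogram, applied factor by factor, so the
        # dense product B_sino @ G @ B_img is never formed or stored.
        return B_sino @ (G @ (B_img @ x))

    def back(y):
        # Matched back projection sinogram -> image (transpose of the model).
        return B_img.T @ (G.T @ (B_sino.T @ y))

    y = forward(np.random.rand(n_pix))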
Offodile, Anaeze C; Chatterjee, Abhishek; Vallejo, Sergio; Fisher, Carla S; Tchou, Julia C; Guo, Lifei
2015-04-01
Computed tomographic angiography is a diagnostic tool increasingly used for preoperative vascular mapping in abdomen-based perforator flap breast reconstruction. This study compared the use of computed tomographic angiography with the conventional practice of Doppler ultrasonography only in postmastectomy reconstruction using a cost-utility model. Following a comprehensive literature review, a decision analytic model was created using the three most clinically relevant health outcomes in free autologous breast reconstruction with computed tomographic angiography versus Doppler ultrasonography only. Cost and utility estimates for each health outcome were used to derive the quality-adjusted life-years and the incremental cost-utility ratio. One-way sensitivity analysis was performed to scrutinize the robustness of the authors' results. Six studies and 782 patients were identified. Cost-utility analysis revealed a baseline cost savings of $3179 and a gain in quality-adjusted life-years of 0.25, yielding an incremental cost-utility ratio of -$12,716 and implying a dominant choice favoring preoperative computed tomographic angiography. Sensitivity analysis revealed that computed tomographic angiography was costlier when the operative time difference between the two techniques was less than 21.3 minutes. However, the clinical advantage of computed tomographic angiography over Doppler ultrasonography only meant that computed tomographic angiography would remain the cost-effective option even if it offered no additional operating time advantage. The authors' results show that computed tomographic angiography is a cost-effective technology for identifying lower abdominal perforators for autologous breast reconstruction. Although the perfect study would be a randomized controlled trial of the two approaches with true cost accrual, the authors' results represent the best available evidence.
Reviews on Security Issues and Challenges in Cloud Computing
NASA Astrophysics Data System (ADS)
An, Y. Z.; Zaaba, Z. F.; Samsudin, N. F.
2016-11-01
Cloud computing is an Internet-based computing service, provided by third parties, that allows the sharing of resources and data among devices. It is widely used in many organizations nowadays and is becoming more popular because it changes how the Information Technology (IT) of an organization is organized and managed. It provides many benefits, such as simplicity and lower costs, almost unlimited storage, minimal maintenance, easy utilization, backup and recovery, continuous availability, quality of service, automated software integration, scalability, flexibility and reliability, easy access to information, elasticity, quick deployment and a lower barrier to entry. While the use of cloud computing services increases in this new era, their security issues become a challenge. Cloud computing must be safe and secure enough to ensure the privacy of its users. This paper first outlines the architecture of cloud computing, then discusses the most common security issues of using the cloud and some solutions to them, since security is one of the most critical aspects of cloud computing due to the sensitivity of users' data.
Design and synthesis of the superionic conductor Na10SnP2S12
NASA Astrophysics Data System (ADS)
Richards, William D.; Tsujimura, Tomoyuki; Miara, Lincoln J.; Wang, Yan; Kim, Jae Chul; Ong, Shyue Ping; Uechi, Ichiro; Suzuki, Naoki; Ceder, Gerbrand
2016-03-01
Sodium-ion batteries are emerging as candidates for large-scale energy storage due to their low cost and the wide variety of cathode materials available. As battery size and adoption in critical applications increases, safety concerns are resurfacing due to the inherent flammability of organic electrolytes currently in use in both lithium and sodium battery chemistries. Development of solid-state batteries with ionic electrolytes eliminates this concern, while also allowing novel device architectures and potentially improving cycle life. Here we report the computation-assisted discovery and synthesis of a high-performance solid-state electrolyte material: Na10SnP2S12, with room temperature ionic conductivity of 0.4 mS cm-1 rivalling the conductivity of the best sodium sulfide solid electrolytes to date. We also computationally investigate the variants of this compound where tin is substituted by germanium or silicon and find that the latter may achieve even higher conductivity.
On nonlinear finite element analysis in single-, multi- and parallel-processors
NASA Technical Reports Server (NTRS)
Utku, S.; Melosh, R.; Islam, M.; Salama, M.
1982-01-01
Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem; therefore, the feasibility of the finite element method for nonlinear analysis is established. The organization and flow of data for various types of digital computers, such as single-processor/single-level-memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel processors, with and without substructuring (i.e. partitioning), are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparably sized substructures to parallel processors is exploited. Under Cholesky-type factorization schemes, the efficiency of parallel processing is shown to decrease due to occasionally shared data, just as it does due to shared facilities.
Human performance cognitive-behavioral modeling: a benefit for occupational safety.
Gore, Brian F
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
Computer-assisted Behavioral Therapy and Contingency Management for Cannabis Use Disorder
Budney, Alan J.; Stanger, Catherine; Tilford, J. Mick; Scherer, Emily; Brown, Pamela C.; Li, Zhongze; Li, Zhigang; Walker, Denise
2015-01-01
Computer-assisted behavioral treatments hold promise for enhancing access to and reducing the costs of treatments for substance use disorders. This study assessed the efficacy of a computer-assisted version of an efficacious, multicomponent treatment for cannabis use disorders (CUD), i.e., motivational enhancement therapy, cognitive-behavioral therapy, and abstinence-based contingency management (MET/CBT/CM). An initial cost comparison was also performed. Seventy-five adult participants, 59% African American, seeking treatment for CUD received either MET only (BRIEF), therapist-delivered MET/CBT/CM (THERAPIST), or computer-delivered MET/CBT/CM (COMPUTER). During treatment, the THERAPIST and COMPUTER conditions engendered longer durations of continuous cannabis abstinence than BRIEF (p < .05), but did not differ from each other. Abstinence rates and reductions in days of use over time were maintained in COMPUTER at least as well as in THERAPIST. COMPUTER averaged approximately $130 (p < .05) less per case than THERAPIST in therapist costs, which offset most of the costs of CM. Results add to promising findings that illustrate the potential of computer-assisted delivery methods to enhance access to evidence-based care, reduce costs, and possibly improve outcomes. The observed maintenance effects and the cost findings require replication in larger clinical trials. PMID:25938629
Identification of Computational and Experimental Reduced-Order Models
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Hong, Moeljo S.; Bartels, Robert E.; Piatak, David J.; Scott, Robert C.
2003-01-01
The identification of computational and experimental reduced-order models (ROMs) for the analysis of unsteady aerodynamic responses and for efficient aeroelastic analyses is presented. For the identification of a computational aeroelastic ROM, the CFL3Dv6.0 computational fluid dynamics (CFD) code is used. Flutter results for the AGARD 445.6 Wing and for a Rigid Semispan Model (RSM) computed using CFL3Dv6.0 are presented, including discussion of associated computational costs. Modal impulse responses of the unsteady aerodynamic system are computed using the CFL3Dv6.0 code and transformed into state-space form. The unsteady aerodynamic state-space ROM is then combined with a state-space model of the structure to create an aeroelastic simulation using the MATLAB/SIMULINK environment. The MATLAB/SIMULINK ROM is then used to rapidly compute aeroelastic transients, including flutter. The ROM shows excellent agreement with the aeroelastic analyses computed using the CFL3Dv6.0 code directly. For the identification of experimental unsteady pressure ROMs, results are presented for two configurations: the RSM and a Benchmark Supercritical Wing (BSCW). Both models were used to acquire unsteady pressure data due to pitching oscillations on the Oscillating Turntable (OTT) system at the Transonic Dynamics Tunnel (TDT). A deconvolution scheme involving a step input in pitch and the resultant step response in pressure, for several pressure transducers, is used to identify the unsteady pressure impulse responses. The identified impulse responses are then used to predict the pressure responses due to pitching oscillations at several frequencies. Comparisons with the experimental data are then presented.
Spacelab experiment computer study. Volume 1: Executive summary (presentation)
NASA Technical Reports Server (NTRS)
Lewis, J. L.; Hodges, B. C.; Christy, J. O.
1976-01-01
A quantitative cost for various Spacelab flight hardware configurations is provided, along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented, based on utilization of a central experiment computer with optional auxiliary equipment. Groundrules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented and analysed, and the options, along with their cost considerations, are discussed. It is concluded that Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.
Ogah, Okechukwu S.; Stewart, Simon; Onwujekwe, Obinna E.; Falase, Ayodele O.; Adebayo, Saheed O.; Olunuga, Taiwo; Sliwa, Karen
2014-01-01
Background: Heart failure (HF) is a deadly, disabling and often costly syndrome worldwide. Unfortunately, there is a paucity of data describing its economic impact in sub-Saharan Africa, a region in which the number of relatively younger cases will inevitably rise. Methods: Health economic data were extracted from a prospective HF registry in a tertiary hospital situated in Abeokuta, southwest Nigeria. Outpatient and inpatient costs were computed from a representative cohort of 239 HF cases, including the personnel, diagnostic and treatment resources used for their management over a 12-month period. Indirect costs were also calculated. The annual cost per person was then calculated. Results: Mean age of the cohort was 58.0±15.1 years and 53.1% were men. The total computed cost of care of HF in Abeokuta was 76,288,845 Nigerian Naira (US$508,595), translating to 319,200 Naira (US$2,128) per patient per year. The total cost of inpatient care (46% of total health care expenditure) was estimated as 34,996,477 Naira (about US$301,230). This comprised 17,899,977 Naira (50.9%, US$114,600) for direct costs and 17,806,500 Naira (49.1%, US$118,710) for indirect costs. Outpatient cost was estimated as 41,292,368 Naira (US$275,282). The relatively high cost of outpatient care was largely due to the cost of transportation for monthly follow-up visits. Payments were mostly made through out-of-pocket spending. Conclusion: The economic burden of HF in Nigeria is particularly high considering the relatively young age of affected cases, a minimum wage of 18,000 Naira (US$120) per month and the considerable component of out-of-pocket spending by those affected. Health reforms designed to mitigate the individual and societal burden imposed by the syndrome are required. PMID:25415310
Gaussian polarizable-ion tight binding.
Boleininger, Max; Guilbert, Anne Ay; Horsfield, Andrew P
2016-10-14
To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).
NASA Technical Reports Server (NTRS)
Plankey, B.
1981-01-01
A computer program called ECPVER (Energy Consumption Program - Verification) was developed to simulate all energy loads for any number of buildings. The program computes simulated daily, monthly, and yearly energy consumption, which can be compared with actual meter readings for the same time period. Such comparison can lead to validation of the model under a variety of conditions, allowing it to be used to predict future energy savings due to energy conservation measures. Predicted energy savings can then be compared with actual savings to verify the effectiveness of those energy conservation changes. This verification procedure is planned to be an important advancement in the Deep Space Network Energy Project, which seeks to reduce energy cost and consumption at all DSN Deep Space Stations.
Bethel, EW; Bauer, A; Abbasi, H; ...
2016-06-10
The considerable interest in the high performance computing (HPC) community regarding analyzing and visualizing data without first writing to disk, i.e., in situ processing, is due to several factors. First is an I/O cost savings, where data is analyzed/visualized while being generated, without first being stored to a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis might expose complex behavior missed in coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers and practitioners using in situ methods in extreme-scale HPC with the goal of presenting existing methods, infrastructures, and a range of computational science and engineering applications using in situ analysis and visualization.
2018-01-01
Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items in a large database of complex objects. Due to the very large scale of the data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment at very large scale. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space. PMID:29346410
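The abstract does not say which four LSH variants were evaluated, so purely as an illustration, here is the random-hyperplane variant for cosine similarity: items whose bit signatures collide land in the same bucket, and only that bucket is searched at query time:

    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_bits = 128, 16
    planes = rng.standard_normal((n_bits, dim))   # one random hyperplane per bit

    def lsh_key(v):
        # 16-bit signature: which side of each hyperplane v falls on; similar
        # vectors (small angle between them) agree on most bits and so collide.
        bits = planes @ v > 0
        return int(bits.dot(1 << np.arange(n_bits)))

    db = rng.standard_normal((10000, dim))
    buckets = {}
    for i, v in enumerate(db):
        buckets.setdefault(lsh_key(v), []).append(i)

    query = db[42] + 0.01 * rng.standard_normal(dim)  # near-duplicate of item 42
    candidates = buckets.get(lsh_key(query), [])      # tiny candidate set, not 10000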
Code of Federal Regulations, 2010 CFR
2010-04-01
... computer hardware or software, or both, the cost of contracting for those services, or the cost of... operating budget. At the HA's option, the cost of the computer software may include service contracts to...
Enabling technologies for fiber optic sensing
NASA Astrophysics Data System (ADS)
Ibrahim, Selwan K.; Farnan, Martin; Karabacak, Devrez M.; Singer, Johannes M.
2016-04-01
In order for fiber optic sensors to compete with electrical sensors, several critical parameters need to be addressed, such as performance, cost, size and reliability. Relying on technologies developed in different industrial sectors helps to achieve this goal in a more efficient and cost-effective way. FAZ Technology has developed a tunable-laser-based optical interrogator built on technologies from the telecommunication sector, and optical transducers/sensors based on components sourced from the automotive market. By combining Fiber Bragg Grating (FBG) sensing technology with the above, high-speed, high-precision, reliable quasi-distributed optical sensing systems for temperature, pressure, acoustics, acceleration, etc. have been developed. Careful design is needed to filter out any sources of measurement drift/error due to different effects, e.g. polarization and birefringence, coating imperfections, sensor packaging, etc. Also, to achieve high-speed and high-performance optical sensing systems, combining and synchronizing multiple optical interrogators, similar to what has been done with computer processors to deliver supercomputing power, is an attractive solution. This path can be achieved by using photonic integrated circuit (PIC) technology, which opens the door to scaling up and delivering powerful optical sensing systems in an efficient and cost-effective way.
Cost-effective cloud computing: a case study using the comparative genomics tool, Roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster, and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool Roundup as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service Elastic MapReduce and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared, which determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost-savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable to other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
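The scheduling idea reduces to predicting each comparison's runtime from genome size and packing the longest jobs first onto the least-loaded node (classic LPT scheduling). A sketch with a hypothetical runtime model, not the authors' fitted one:

    import heapq

    def schedule(jobs, n_nodes, predict):
        # jobs: genome-to-genome comparison tasks; predict: estimated runtime
        # of a job (hypothetical model); returns node -> list of assigned jobs.
        loads = [(0.0, node) for node in range(n_nodes)]
        heapq.heapify(loads)
        plan = {node: [] for node in range(n_nodes)}
        for job in sorted(jobs, key=predict, reverse=True):  # longest job first
            load, node = heapq.heappop(loads)                # least-loaded node
            plan[node].append(job)
            heapq.heappush(loads, (load + predict(job), node))
        return plan

    # Assume runtime scales with the product of the two genome sizes.
    plan = schedule([(3e9, 3e9), (1e8, 2e8), (5e8, 3e9)], n_nodes=2,
                    predict=lambda j: j[0] * j[1])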
NASA Astrophysics Data System (ADS)
Mohammadi, Hadi
Use of the Patch Vulnerability Management (PVM) process should be seriously considered for any networked computing system. The PVM process prevents the operating system (OS) and software applications from being attacked due to security vulnerabilities, which lead to system failures and critical data leakage. The purpose of this research is to create and design a Security and Critical Patch Management Process (SCPMP) framework based on Systems Engineering (SE) principles. This framework will assist Information Technology Department Staff (ITDS) in reducing IT operating time and costs and mitigating the risk of security and vulnerability attacks. Further, this study evaluates implementation of the SCPMP in the networked computing systems of an academic environment in order to: 1. Meet patch management requirements by applying SE principles. 2. Reduce the cost of IT operations and PVM cycles. 3. Improve current PVM methodologies to prevent networked computing systems from becoming the targets of security vulnerability attacks. 4. Embed a Maintenance Optimization Tool (MOT) in the proposed framework. The MOT allows IT managers to make the most practicable choice of methods for deploying and installing released patches and vulnerability remediation. In recent years, a variety of frameworks have been developed for security practices in every networked computing system to protect computer workstations from becoming compromised or vulnerable to security attacks, which can expose important information and critical data. I have developed a new mechanism for implementing PVM that maximizes security-vulnerability maintenance, protects the OS and software packages, and minimizes SCPMP cost. To increase computing system security in any diverse environment, particularly in academia, one must apply SCPMP. I propose an optimal maintenance policy that allows ITDS to measure and estimate the variation of PVM cycles based on their department's requirements. My results demonstrate that the MOT optimizes the process of implementing SCPMP in academic workstations.
Cyber-Physical Trade-Offs in Distributed Detection Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Yao, David K. Y.; Chin, J. C.
2010-01-01
We consider a network of sensors that measure the scalar intensity due to the background, or to a source combined with background, inside a two-dimensional monitoring area. The sensor measurements may be random due to the underlying nature of the source and background, due to sensor errors, or both. The detection problem is to infer the presence of a source of unknown intensity and location based on the sensor measurements. In the conventional approach, detection decisions are made at the individual sensors and then combined at the fusion center, for example using the majority rule. We show that, at increased communication and computation cost, a more complex fusion algorithm based on raw measurements achieves better detection performance under smooth and non-smooth source intensity functions, Lipschitz conditions on the probability ratios, and a minimum packing number for the state space. We show that these conditions for trade-offs between the cyber costs and physical detection performance apply to two detection problems: (i) point radiation sources amidst background radiation, and (ii) sources and background with Gaussian distributions.
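The trade-off being quantified, in its simplest form: each sensor ships one bit (a local threshold test, fused by majority vote) versus shipping raw measurements for a centralized likelihood-ratio test. A toy Gaussian sketch, not the paper's detector:

    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, mu_bg, mu_src, sigma = 9, 1.0, 1.6, 0.5
    x = rng.normal(mu_src, sigma, n_sensors)   # measurements with a source present

    # Cheap fusion: one bit per sensor, majority vote at the fusion center.
    votes = x > (mu_bg + mu_src) / 2
    decide_majority = votes.sum() > n_sensors // 2

    # Costly fusion: raw measurements, centralized log-likelihood ratio test.
    llr = (np.sum((x - mu_bg) ** 2) - np.sum((x - mu_src) ** 2)) / (2 * sigma**2)
    decide_llr = llr > 0   # threshold at 0 for equal priors

The likelihood-ratio rule uses strictly more information than the vote, which is the source of the performance gap the paper trades against communication and computation cost.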
An evaluation of superminicomputers for thermal analysis
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Vidal, J. B.; Jones, G. K.
1962-01-01
The feasibility and cost effectiveness of solving thermal analysis problems on superminicomputers is demonstrated. Conventional thermal analysis and the changing computer environment, computer hardware and software used, six thermal analysis test problems, performance of superminicomputers (CPU time, accuracy, turnaround, and cost) and comparison with large computers are considered. Although the CPU times for superminicomputers were 15 to 30 times greater than the fastest mainframe computer, the minimum cost to obtain the solutions on superminicomputers was from 11 percent to 59 percent of the cost of mainframe solutions. The turnaround (elapsed) time is highly dependent on the computer load, but for large problems, superminicomputers produced results in less elapsed time than a typically loaded mainframe computer.
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf-based plant species recognition plays an important role in ecological protection; however, its application to large and modern leaf databases has been a long-standing obstacle due to computational cost and feasibility. Recognizing these limitations, we propose a Jaccard distance based sparse representation (JDSR) method, which adopts a two-stage, coarse-to-fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage includes a Jaccard distance based weighted sparse representation based classification (WSRC), which aims to approximately represent the test sample in the training space and classify it by the approximation residuals. Since the training model of our JDSR method involves much fewer but more informative representatives, this method is expected to overcome the limitation of high computational and memory costs in traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC-based plant recognition methods in terms of both accuracy and computational speed.
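A sketch of the coarse stage, assuming binarized leaf-feature vectors (the names and the choice of k are illustrative): the Jaccard distance to every training sample prunes the classes handed to the finer weighted-SRC stage:

    import numpy as np

    def jaccard_distance(a, b):
        # 1 - |intersection| / |union| for binary feature vectors a and b.
        union = np.logical_or(a, b).sum()
        if union == 0:
            return 1.0
        return 1.0 - np.logical_and(a, b).sum() / union

    def candidate_classes(test, train_X, train_y, k=5):
        # Coarse stage: keep only the classes of the k nearest training samples;
        # the fine WSRC stage then solves a sparse approximation restricted to
        # these candidates (train_y is a numpy array of class labels).
        d = np.array([jaccard_distance(test, x) for x in train_X])
        return set(train_y[np.argsort(d)[:k]])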
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate structure between PDD and analysis of variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD, with the PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
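A schematic of the regression-plus-pruning step, assuming an orthonormal basis whose first member is the constant function; the paper's PDD/ANOVA bookkeeping and adaptive enrichment are richer than this sketch:

    import numpy as np

    def fit_sparse_expansion(basis, X, y, tol=1e-3):
        # basis: list of callables phi_j mapping an (n, d) sample array to (n,)
        # values, with basis[0] the constant term; X: samples; y: model outputs.
        Phi = np.column_stack([phi(X) for phi in basis])
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        # Approximate variance contribution of each non-constant term, then
        # retain only terms above a fraction tol of the total (adaptivity).
        var = coef[1:] ** 2 * Phi[:, 1:].var(axis=0)
        keep = [0] + [j + 1 for j in range(var.size) if var[j] > tol * var.sum()]
        return keep, coef[keep]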
CUDAICA: GPU Optimization of Infomax-ICA EEG Analysis
Raimondo, Federico; Kamienkowski, Juan E.; Sigman, Mariano; Fernandez Slezak, Diego
2012-01-01
In recent years, Independent Component Analysis (ICA) has become a standard to identify relevant dimensions of data in neuroscience. ICA is a very reliable method for analyzing data, but it is computationally very costly, making its use for online analysis of data, as in brain-computer interfaces, almost completely prohibitive. We show a roughly 25-fold speedup of ICA at almost no cost (a fast video card). EEG data, which consist of many repetitions of independent signals in multiple channels, are very suitable for processing using the vector processors included in graphical processing units. We profiled the implementation of this algorithm and detected two main types of operations responsible for the processing bottleneck, taking almost 80% of computing time: vector-matrix and matrix-matrix multiplications. Merely replacing calls to basic linear algebra functions with the standard CUBLAS routines provided by GPU manufacturers does not increase performance, due to CUDA kernel launch overhead. Instead, we developed a GPU-based solution that, compared with the original BLAS and CUBLAS versions, obtains a 25x performance increase for the ICA calculation. PMID:22811699
Purkayastha, Sagar N; Byrne, Michael D; O'Malley, Marcia K
2012-01-01
Gaming controllers are attractive devices for research due to their onboard sensing capabilities and low cost. However, a proper quantitative analysis regarding their suitability for motion capture, rehabilitation, and use as input devices for teleoperation and gesture recognition has yet to be conducted. In this paper, a detailed analysis of the sensors of two of these controllers, the Nintendo Wiimote and the Sony Playstation 3 Sixaxis, is presented. The acceleration and angular velocity data from the sensors of these controllers were compared and correlated with computed acceleration and angular velocity data derived from a high-resolution encoder. The results show high correlation between the sensor data from the controllers and the computed data derived from the position data of the encoder. From these results, it can be inferred that the Wiimote is more consistent and better suited for motion capture applications and as an input device than the Sixaxis. The applications of these findings are discussed with respect to potential research ventures.
NASA Astrophysics Data System (ADS)
Kerst, Stijn; Shyrokau, Barys; Holweg, Edward
2018-05-01
This paper proposes a novel semi-analytical bearing model addressing the flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models, due to their assumptions of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier-series-based compliance approximation. The resulting model is computationally cheap and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.
Platform Architecture for Decentralized Positioning Systems.
Kasmi, Zakaria; Norrdine, Abdelmoumen; Blankenbach, Jörg
2017-04-26
A platform architecture for positioning systems is essential for the realization of a flexible localization system which interacts with other systems and supports various positioning technologies and algorithms. The decentralized processing of a position enables pushing the application-level knowledge into the mobile station and avoids communication with a central unit such as a server or base station. In addition, the calculation of the position on low-cost and resource-constrained devices presents a challenge due to the limited computing and storage capacity, as well as the power supply. We therefore propose a platform architecture that enables the design of a system with reusability of components, extensibility (e.g., with other positioning technologies) and interoperability. Furthermore, the position is computed on a low-cost device such as a microcontroller, which simultaneously performs additional tasks such as data collection or preprocessing based on an operating system. The platform architecture is designed, implemented and evaluated on the basis of two positioning systems: a field-strength system and a time-of-arrival-based positioning system.
An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2015-10-21
To solve the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
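The auto-adaptive idea in miniature: mutation and crossover rates are derived from the population's state each generation instead of being hand-calibrated. A generic sketch (the fitness function, the encoding, and the watershed/economic modules are all placeholders):

    import random

    def adaptive_rates(fitnesses):
        # One common heuristic: raise mutation as the population converges
        # (small fitness spread), keep it low while it is still diverse.
        spread = (max(fitnesses) - min(fitnesses)) / (abs(max(fitnesses)) + 1e-9)
        return (0.02, 0.9) if spread > 0.1 else (0.2, 0.6)   # (p_mut, p_cx)

    def evolve(pop, fitness, n_gen=100):
        # pop: list of real-valued chromosomes (lists of floats), len(pop) >= 4.
        n = len(pop)
        for _ in range(n_gen):
            f = [fitness(ind) for ind in pop]
            p_mut, p_cx = adaptive_rates(f)          # no manual calibration step
            order = sorted(range(n), key=f.__getitem__, reverse=True)
            pop = [pop[i] for i in order[: n // 2]]  # elitist truncation selection
            while len(pop) < n:
                a, b = random.sample(pop, 2)
                child = ([x if random.random() < 0.5 else y for x, y in zip(a, b)]
                         if random.random() < p_cx else list(a))
                pop.append([g + random.gauss(0, 0.1) if random.random() < p_mut
                            else g for g in child])  # gene-wise Gaussian mutation
        return pop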
NASA Technical Reports Server (NTRS)
Rivera, J. M.; Simpson, R. W.
1980-01-01
The aerial relay system network design problem is discussed. A generalized branch-and-bound based algorithm is developed which can consider a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is basically useful for small networks, due to its computation time increasing exponentially with the number of variables.
Parallel-Computing Architecture for JWST Wavefront-Sensing Algorithms
2011-09-01
results due to the increasing cost and complexity of each test. 2. ALGORITHM OVERVIEW Phase retrieval is an image-based wavefront-sensing... broadband illumination problems we have found that hand-tuning the right matrix sizes can account for a speedup of 86x. This comes from hand-picking... "Wavefront Sensing and Control". Proceedings of SPIE (2007) vol. 6687 (08). [5] Greenhouse, M. A., Drury, M. P., Dunn, J. L., Glazer, S. D., Greville, E
The Modeling, Simulation and Comparison of Interconnection Networks for Parallel Processing.
1987-12-01
performs better at a lower hardware cost than do the single-stage cube and mesh networks. As a result, the designer of a parallel processing system is... attempted, and in most cases succeeded, in designing and implementing faster, more powerful systems. Due to design innovations and technological advances... largely to the computational complexity of the algorithms executed. In the von Neumann machine, instructions must be executed in a sequential manner. Design
Manual of phosphoric acid fuel cell power plant cost model and computer program
NASA Technical Reports Server (NTRS)
Lu, C. Y.; Alkasab, K. A.
1984-01-01
Cost analysis of a phosphoric acid fuel cell power plant includes two parts: a method for estimating system capital costs, and an economic analysis which determines the levelized annual cost of operating the system used in the capital cost estimation. A FORTRAN computer program has been developed for this cost analysis.
Heuristic Approach for Configuration of a Grid-Tied Microgrid in Puerto Rico
NASA Astrophysics Data System (ADS)
Rodriguez, Miguel A.
The high electricity rates that consumers are charged by the utility grid in Puerto Rico have created an energy crisis around the island. This situation is due to the island's dependence on imported fossil fuels. In order to aid the transition from fossil-fuel-based electricity to electricity from renewable and alternative sources, this research work focuses on reducing the cost of electricity for Puerto Rico by finding the optimal microgrid configuration for a set number of consumers from the residential sector. The Hybrid Optimization Modeling for Energy Renewables (HOMER) software, developed by NREL, is utilized as an aid in determining the optimal microgrid setting. The problem is also approached via convex optimization; specifically, an objective function C(t) is formulated to be minimized. The cost function depends on the energy supplied by the grid, the energy supplied by renewable sources, the energy not supplied due to outages, and any excess energy sold to the utility, all on a yearly basis. A term for the social cost of carbon is also considered in the cost function. Once the microgrid settings from HOMER are obtained, they are evaluated via the optimized function C(t), which in turn assesses the true optimality of the microgrid configuration. A microgrid to supply 10 consumers is considered; each consumer can possess a different microgrid configuration. The cost function C(t) is minimized, and the Net Present Value (NPV) and cost of electricity are computed for each configuration in order to assess its true feasibility. Results show that the greater the penetration of components into the microgrid and the greater the energy produced by the renewable sources, the greater the energy not supplied due to outages. The proposed method demonstrates that adding large amounts of renewable components to a microgrid does not necessarily translate into economic benefits for the consumer; in fact, there is a trade-off between cost and the addition of elements that must be considered. Any configuration that further increases microgrid components will result in increased NPV and increased cost of electricity, rendering such configurations infeasible.
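A sketch of an objective of the form described, with made-up prices; the thesis's C(t) weights the grid energy, renewable energy, energy not supplied during outages, exported energy, and the social cost of carbon:

    def yearly_cost(e_grid, e_ren, e_unserved, e_sold,
                    price_grid=0.22, lcoe_ren=0.12, voll=1.50,
                    feed_in=0.09, scc=0.02):
        # Energies in kWh/year, prices in $/kWh (all values illustrative).
        return (price_grid * e_grid      # energy purchased from the utility
                + lcoe_ren * e_ren       # cost of renewable generation
                + voll * e_unserved      # value of lost load during outages
                - feed_in * e_sold       # revenue from excess energy exported
                + scc * e_grid)          # social cost of carbon on grid energy

    # Compare two candidate configurations for one consumer:
    base   = yearly_cost(e_grid=6000, e_ren=0,    e_unserved=300, e_sold=0)
    hybrid = yearly_cost(e_grid=2500, e_ren=4000, e_unserved=50,  e_sold=500)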
Parallel computing in experimental mechanics and optical measurement: A review (II)
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Kemao, Qian
2018-05-01
With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have been successfully applied to the measurement of various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in the pursuit of higher image resolution for higher accuracy, the computational burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high-speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, including digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.
NASA Astrophysics Data System (ADS)
Bertin, N.; Upadhyay, M. V.; Pradalier, C.; Capolungo, L.
2015-09-01
In this paper, we propose a novel full-field approach based on the fast Fourier transform (FFT) technique to compute mechanical fields in periodic discrete dislocation dynamics (DDD) simulations of anisotropic materials: the DDD-FFT approach. By coupling the FFT-based approach to the discrete-continuous model, the present approach benefits from the high computational efficiency of the FFT algorithm while allowing for a discrete representation of dislocation lines. It is demonstrated that the computational time associated with the new DDD-FFT approach is significantly lower than that of current DDD approaches when a large number of dislocation segments is involved, for both isotropic and anisotropic elasticity. Furthermore, for fine Fourier grids, the treatment of anisotropic elasticity comes at a computational cost similar to that of an isotropic simulation. Thus, the proposed approach paves the way towards achieving scale transition from DDD to mesoscale plasticity, especially due to the method's ability to incorporate inhomogeneous elasticity.
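The computational core that the speedup rests on: on a periodic grid, the mechanical fields are a convolution of a Green's-operator kernel with the source distribution, which the FFT evaluates in O(N log N). A generic scalar sketch (the actual anisotropic elastic Green's operator is tensorial and more involved):

    import numpy as np

    def periodic_convolution(source, kernel_hat):
        # source: real field on a periodic grid (e.g., an eigenstrain density);
        # kernel_hat: precomputed Fourier transform of the Green's kernel.
        return np.real(np.fft.ifftn(np.fft.fftn(source) * kernel_hat))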
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne
2018-01-01
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
Modeling Cardiac Electrophysiology at the Organ Level in the Peta FLOPS Computing Age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Lawrence; Bishop, Martin; Hoetzl, Elena
2010-09-30
Despite a steep increase in available compute power, in-silico experimentation with highly detailed models of the heart remains challenging due to the high computational cost involved. It is hoped that next-generation high performance computing (HPC) resources will lead to significant reductions in execution times and enable a new class of in-silico applications. However, performance gains with these new platforms can only be achieved by engaging a much larger number of compute cores, necessitating strongly scalable numerical techniques. So far, strong scalability has been demonstrated only for a moderate number of cores, orders of magnitude below the range required to achieve the desired performance boost. In this study, strong scalability of currently used techniques to solve the bidomain equations is investigated. Benchmark results suggest that scalability is limited to 512-4096 cores within the range of relevant problem sizes, even when systems are carefully load-balanced and advanced IO strategies are employed.
NASA Astrophysics Data System (ADS)
Rusz, Ján; Lubk, Axel; Spiegelberg, Jakob; Tyutyunnikov, Dmitry
2017-12-01
The complex interplay of elastic and inelastic scattering, amenable to different levels of approximation, constitutes the major challenge for the computation and hence interpretation of TEM-based spectroscopic methods. The two major approaches to calculate inelastic scattering cross sections of fast electrons on crystals—Yoshioka-equations-based forward propagation and the reciprocal wave method—are founded on two conceptually differing schemes: a numerical forward integration of each inelastically scattered wave function, yielding the exit density matrix, and a computation of inelastic scattering matrix elements using elastically scattered initial and final states (double channeling). Here, we compare both approaches and show that the latter is computationally competitive with the former by exploiting analytical integration schemes over multiple excited states. Moreover, we show how to include full nonlocality of the inelastic scattering event, neglected in the forward propagation approaches, at no additional computing cost in the reciprocal wave method. Detailed simulations show in some cases significant errors due to the z-locality approximation and hence pitfalls in the interpretation of spectroscopic TEM results.
Bossard, B.; Renard, J. M.; Capelle, P.; Paradis, P.; Beuscart, M. C.
2000-01-01
Investing in information technology has become a crucial process in hospital management today. Medical and administrative managers are faced with difficulties in measuring medical information technology costs and benefits due to the complexity of the domain. This paper proposes a preimplementation methodology for evaluating and appraising material, process and human costs and benefits. Based on the users' needs and an organizational process analysis, the methodology provides an evaluative set of financial and non-financial indicators which can be integrated in a decision-making and investment evaluation process. We describe the first results obtained after a few months of operation for the Computer-Based Patient Record (CPR) project. Its full acceptance, in spite of some difficulties, encourages us to diffuse the method for the entire project. PMID:11079851
Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.
Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei
2018-06-15
Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of the Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and the limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, a theoretical analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game, and the condition of Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm scales well with an increasing number of IoT sensors.
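A minimal sketch of the best-response iteration that the finite improvement property of a potential game licenses. The congestion-style cost model, the parameter values and the helper names are assumptions for illustration, not the paper's COD algorithm itself.

# Each sensor repeatedly picks its cheaper option (local vs. offload);
# in a potential game this loop terminates at a Nash equilibrium.
import random

N = 8                      # number of IoT sensors
random.seed(0)
LOCAL_COST = [random.uniform(2.0, 4.0) for _ in range(N)]  # assumed local processing cost
BASE_OFFLOAD = 1.0         # assumed offloading cost with no contention
CONGESTION = 0.5           # assumed extra cost per other offloading sensor

def offload_cost(decisions, i):
    # Offloading cost grows with the number of sensors sharing the cloudlet/AP.
    others = sum(decisions) - decisions[i]
    return BASE_OFFLOAD + CONGESTION * others

decisions = [0] * N        # 0 = compute locally, 1 = offload
improved = True
while improved:            # finite improvement property guarantees termination
    improved = False
    for i in range(N):
        best = 1 if offload_cost(decisions, i) < LOCAL_COST[i] else 0
        if best != decisions[i]:
            decisions[i] = best
            improved = True

print("equilibrium decisions:", decisions)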
An Energy Integrated Dispatching Strategy of Multi- energy Based on Energy Internet
NASA Astrophysics Data System (ADS)
Jin, Weixia; Han, Jun
2018-01-01
The energy internet is a new way of using energy: it achieves energy efficiency and low cost by scheduling a variety of different forms of energy. Particle Swarm Optimization (PSO) is an advanced algorithm with few parameters, high computational precision and fast convergence speed. By tuning the parameters ω, c1 and c2, PSO can improve its convergence speed and calculation accuracy. The objective of the optimization model is the lowest fuel cost that can meet the electricity, heating and cooling loads after all the renewable energy has been absorbed. Due to the differing energy structures and prices in different regions, the optimization strategy needs to be determined according to the algorithm and the model.
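A minimal PSO sketch showing the role of the inertia weight ω (w below) and the acceleration coefficients c1 and c2. The dispatch objective is replaced by a simple quadratic stand-in, and all parameter values are assumptions.

import numpy as np

def fuel_cost(x):                    # stand-in for the multi-energy dispatch cost
    return np.sum((x - 3.0) ** 2, axis=1)

n_particles, dim, iters = 30, 5, 100
w, c1, c2 = 0.7, 1.5, 1.5            # assumed inertia and acceleration coefficients

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, (n_particles, dim))   # positions (e.g., unit outputs)
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = fuel_cost(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = fuel_cost(x)
    mask = val < pbest_val
    pbest[mask], pbest_val[mask] = x[mask], val[mask]
    gbest = pbest[np.argmin(pbest_val)]

print("best cost:", pbest_val.min())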
Photovoltaics and electric utilities
NASA Astrophysics Data System (ADS)
Bright, R.; Leigh, R.; Sills, T.
1981-12-01
The long-term value of grid-connected, residential photovoltaic (PV) systems is determined. The value of the PV electricity is defined as the full avoided cost in accordance with the Public Utilities Regulatory Policies Act of 1978. The avoided cost is computed using a long-range utility planning approach to measure revenue requirement changes in response to the time-phased introduction of PV systems into the grid. A case study approach to three utility systems is used. The changing value of PV electricity over a twenty-year period from 1985 is presented, and the fuel and capital savings due to PV are analyzed. These values are translated into measures of breakeven capital investment under several options of power interchange and pricing.
Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing
NASA Astrophysics Data System (ADS)
Klems, Markus; Nimis, Jens; Tai, Stefan
On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability for Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measure costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and to compare these costs to conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real world scenarios.
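A toy illustration of the kind of cost comparison such a framework supports: pay-per-use cloud versus fixed on-premise provisioning. All figures and function names are invented for the sketch.

def cloud_cost(hours_used, rate_per_hour=0.90):
    # Pay-per-use: total cost scales with actual usage.
    return hours_used * rate_per_hour

def onprem_cost(hardware=50_000.0, yearly_ops=8_000.0, years=3):
    # Conventional IT: up-front hardware plus fixed yearly operations.
    return hardware + yearly_ops * years

usage_hours = 3 * 365 * 6            # assumed 6 busy hours/day over 3 years
print("cloud:  $", round(cloud_cost(usage_hours), 2))
print("onprem: $", round(onprem_cost(), 2))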
Performance Comparison of Mainframe, Workstations, Clusters, and Desktop Computers
NASA Technical Reports Server (NTRS)
Farley, Douglas L.
2005-01-01
A performance evaluation of a variety of computers frequently found in a scientific or engineering research environment was conducted using synthetic and application program benchmarks. From a performance perspective, emerging commodity processors have superior performance relative to legacy mainframe computers. In many cases, the PC clusters exhibited performance comparable to traditional mainframe hardware when 8-12 processors were used. The main advantage of the PC clusters was their cost. Regardless of whether the clusters were built from new computers or created from retired computers, their performance-to-cost ratio was superior to that of the legacy mainframe computers. Finally, the typical annual maintenance cost of legacy mainframe computers is several times the cost of new equipment such as multiprocessor PC workstations. The savings from eliminating the annual maintenance fee on legacy hardware can result in a yearly increase in total computational capability for an organization.
NASA Technical Reports Server (NTRS)
Summers, Geoffrey P.; Walters, Robert J.; Messenger, Scott R.; Burke, Edward A.
1995-01-01
An analysis embodied in a PC computer program is presented which quantitatively demonstrates how the availability of radiation-hard solar cells can minimize the cost of a global satellite communication system. The chief distinction between the currently proposed systems, such as Iridium, Odyssey and Ellipsat, is the number of satellites employed and their operating altitudes. Analysis of the major costs associated with implementing these systems shows that operation within the earth's radiation belts can reduce the total system cost by as much as a factor of two, so long as radiation-hard components, including solar cells, can be used. A detailed evaluation of several types of planar solar cells is given, including commercially available Si and GaAs/Ge cells, and InP/Si cells which are under development. The computer program calculates the end-of-life (EOL) power density of solar arrays taking into account the cell geometry, coverglass thickness, support frame, electrical interconnects, etc. The EOL power density can be determined for any altitude from low earth orbit (LEO) to geosynchronous (GEO) and for equatorial to polar planes of inclination. The mission duration can be varied over the entire range planned for the proposed satellite systems. An algorithm is included in the program for determining the degradation of cell efficiency for different cell technologies due to proton and electron irradiation. The program can be used to determine the optimum configuration for any cell technology for a particular orbit and for a specified mission life. Several examples of applying the program are presented, in which it is shown that the EOL power density of different technologies can vary by an order of magnitude for certain missions. Therefore, although a relatively radiation-soft technology can be made to provide the required EOL power by simply increasing the size of the array, the impact on the total system budget could be unacceptable due to increased launch and hardware costs. In aggregate these factors can account for more than a 10% increase in the total system cost. Since the estimated total costs of proposed global coverage systems range from $1 billion to $9 billion, the availability of radiation-hard solar cells could make a decisive difference in the selection of a particular constellation architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J
The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators in the LAMMPS molecular dynamics software for distributed-memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
Autonomous Sun-Direction Estimation Using Partially Underdetermined Coarse Sun Sensor Configurations
NASA Astrophysics Data System (ADS)
O'Keefe, Stephen A.
In recent years there has been a significant increase in interest in smaller satellites as lower-cost alternatives to traditional satellites, particularly with the rise in popularity of the CubeSat. Due to stringent mass, size, and often budget constraints, these small satellites rely on making the most of inexpensive hardware components and sensors, such as coarse sun sensors (CSS) and magnetometers. More expensive high-accuracy sun sensors often combine multiple measurements, and use specialized electronics, to deterministically solve for the direction of the Sun. Alternatively, cosine-type CSS output a voltage relative to the input light and are attractive due to their very low cost, simplicity of manufacture, small size, and minimal power consumption. This research investigates using coarse sun sensors for performing robust attitude estimation in order to point a spacecraft at the Sun after deployment from a launch vehicle, or following a system fault. As an alternative to using a large number of sensors, this thesis explores sun-direction estimation techniques with low computational costs that function well with underdetermined sets of CSS. Single-point estimators are coupled with simultaneous nonlinear control to achieve sun-pointing within a small percentage of a single orbit despite the partially underdetermined nature of the sensor suite. Leveraging an extensive analysis of the sensor models involved, sequential filtering techniques are shown to be capable of estimating the sun direction to within a few degrees, with no a priori attitude information and using only CSS, despite the significant noise and biases present in the system. Detailed numerical simulations are used to compare and contrast the performance of the five different estimation techniques, with and without rate gyro measurements, their sensitivity to rate gyro accuracy, and their computation time. One of the key concerns with reducing the number of CSS is sensor degradation and failure. In this thesis, a Modified Rodrigues Parameter based CSS calibration filter suitable for autonomous on-board operation is developed. The sensitivity of this method's accuracy to the available Earth albedo data is evaluated and compared to the required computational effort. The calibration filter is expanded to perform sensor fault detection, and promising results are shown for reduced-resolution albedo models. All of the methods discussed provide alternative attitude determination and control system algorithms for small satellite missions looking to use inexpensive, small sensors due to size, power, or budget limitations.
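A minimal sketch of single-point sun-direction estimation from cosine-type CSS outputs via least squares. The sensor normals, noise level and activity threshold are illustrative assumptions, not the thesis's estimators.

import numpy as np

normals = np.array([                  # assumed CSS boresight directions (body frame)
    [1, 0, 0], [-1, 0, 0],
    [0, 1, 0], [0, -1, 0],
    [0, 0, 1],
], dtype=float)

sun_true = np.array([0.6, 0.3, 0.74])
sun_true /= np.linalg.norm(sun_true)

# Cosine-type CSS: output ~ max(n . s, 0), plus measurement noise.
rng = np.random.default_rng(1)
volts = np.clip(normals @ sun_true, 0.0, None) + 0.01 * rng.standard_normal(5)

active = volts > 0.05                 # use only illuminated sensors
H, y = normals[active], volts[active]
s_hat, *_ = np.linalg.lstsq(H, y, rcond=None)   # solve H s ~ y in least squares
s_hat /= np.linalg.norm(s_hat)
print("estimated sun direction:", np.round(s_hat, 3))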
MDTS: automatic complex materials design using Monte Carlo tree search.
M Dieb, Thaer; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji
2017-01-01
Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in computer Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously on various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at https://github.com/tsudalab/MDTS.
Genome assembly reborn: recent computational challenges
2009-01-01
Research into genome assembly algorithms has experienced a resurgence due to new challenges created by the development of next generation sequencing technologies. Several genome assemblers have been published in recent years specifically targeted at the new sequence data; however, the ever-changing technological landscape leads to the need for continued research. In addition, the low cost of next generation sequencing data has led to an increased use of sequencing in new settings. For example, the new field of metagenomics relies on large-scale sequencing of entire microbial communities instead of isolate genomes, leading to new computational challenges. In this article, we outline the major algorithmic approaches for genome assembly and describe recent developments in this domain. PMID:19482960
Findings in resting-state fMRI by differences from K-means clustering.
Chyzhyk, Darya; Graña, Manuel
2014-01-01
Resting-state fMRI has a growing number of studies with diverse aims, always centered on some kind of functional connectivity biomarker obtained from correlation with seed regions, or by analytical decomposition of the signal towards the localization of the spatial distribution of functional connectivity patterns. In general, studies are computationally costly and very sensitive to noise and to the preprocessing of data. In this paper we consider clustering by K-means as an exploratory procedure which can provide some results with little computational effort, due to efficient implementations that are readily available. We demonstrate the approach on a dataset of schizophrenia patients, finding differences between patients with and without auditory hallucinations.
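A minimal sketch of the exploratory K-means step, using scikit-learn's efficient implementation. The data here are synthetic stand-ins; in practice the rows would be preprocessed resting-state BOLD time courses.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_voxels, n_timepoints = 1000, 150
data = rng.standard_normal((n_voxels, n_timepoints))   # stand-in time series

km = KMeans(n_clusters=8, n_init=10, random_state=0)
labels = km.fit_predict(data)       # cluster voxels by time-course similarity
print(np.bincount(labels))          # voxels per cluster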
Probabilistic Analysis of Solid Oxide Fuel Cell Based Hybrid Gas Turbine System
NASA Technical Reports Server (NTRS)
Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.
2003-01-01
The emergence of fuel cell systems and hybrid fuel cell systems requires the evolution of analysis strategies for evaluating thermodynamic performance. A gas turbine thermodynamic cycle integrated with a fuel cell was computationally simulated and probabilistically evaluated in view of the several uncertainties in the thermodynamic performance parameters. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the uncertainties in the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design and make it cost effective. The analysis leads to the selection of criteria for gas turbine performance.
Hassan, Cesare; Pickhardt, Perry J; Pickhardt, Perry; Laghi, Andrea; Kim, Daniel H; Kim, Daniel; Zullo, Angelo; Iafrate, Franco; Di Giulio, Lorenzo; Morini, Sergio
2008-04-14
In addition to detecting colorectal neoplasia, abdominal computed tomography (CT) with colonography technique (CTC) can also detect unsuspected extracolonic cancers and abdominal aortic aneurysms (AAA). The efficacy and cost-effectiveness of this combined abdominal CT screening strategy are unknown. A computerized Markov model was constructed to simulate the occurrence of colorectal neoplasia, extracolonic malignant neoplasm, and AAA in a hypothetical cohort of 100,000 subjects from the United States who were 50 years of age. Simulated screening with CTC, using a 6-mm polyp size threshold for reporting, was compared with a competing model of optical colonoscopy (OC), both without and with abdominal ultrasonography for AAA detection (OC-US strategy). In the simulated population, CTC was the dominant screening strategy, gaining an additional 1458 and 462 life-years compared with the OC and OC-US strategies and being less costly, with savings of $266 and $449 per person, respectively. The additional gains for CTC were largely due to a decrease in AAA-related deaths, whereas the modeled benefit from extracolonic cancer downstaging was a relatively minor factor. In the sensitivity analysis, OC-US became more cost-effective only when the CTC sensitivity for large polyps dropped to 61% or when broad variations in costs were simulated, such as an increase in CTC cost from $814 to $1300 or a decrease in OC cost from $1100 to $500. With the OC-US approach, suboptimal compliance had a strong negative influence on efficacy and cost-effectiveness. The estimated mortality from CT-induced cancer was less than the estimated colonoscopy-related mortality (8 vs 22 deaths), both of which were minor compared with the positive benefit from screening. When the detection of extracolonic findings such as AAA and extracolonic cancer is considered in addition to colorectal neoplasia in our model simulation, CT colonography is a dominant screening strategy (ie, more clinically effective and more cost-effective) over both colonoscopy and colonoscopy with one-time ultrasonography.
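A minimal sketch of a Markov cohort model of the kind described: a cohort steps annually through health states under assumed transition probabilities, accumulating costs and life-years. All states, probabilities and costs below are invented, not the article's model.

import numpy as np

states = ["well", "cancer", "dead"]
P = np.array([                 # assumed annual transition matrix (rows sum to 1)
    [0.985, 0.010, 0.005],     # well -> well / cancer / dead
    [0.000, 0.850, 0.150],     # cancer -> cancer / dead
    [0.000, 0.000, 1.000],     # dead is absorbing
])
cost = np.array([50.0, 20_000.0, 0.0])   # assumed annual cost per state

cohort = np.array([100_000.0, 0.0, 0.0])
total_cost, life_years = 0.0, 0.0
for year in range(30):
    total_cost += cohort @ cost
    life_years += cohort[:2].sum()       # alive states accrue a life-year
    cohort = cohort @ P                  # advance the cohort one year

print(f"cost per person: ${total_cost / 100_000:,.0f}")
print(f"life-years per person: {life_years / 100_000:.1f}")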
Unstructured mesh adaptivity for urban flooding modelling
NASA Astrophysics Data System (ADS)
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain causes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process. For example, high-resolution meshes around the buildings and steep regions are placed when the flooding water reaches these regions. In this work a flooding event that happened in 2002 in Glasgow, Scotland, United Kingdom, has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results while having a low computational cost.
Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup
Kudtarkar, Parul; DeLuca, Todd F.; Fusaro, Vincent A.; Tonellato, Peter J.; Wall, Dennis P.
2010-01-01
Background: Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource—Roundup—using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Methods: Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. Results: We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure. PMID:21258651
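The scheduling idea, estimating each job's runtime and ordering submissions so a fixed pool of workers idles less, can be sketched as follows. The job names, runtime estimates and two-worker setting are invented assumptions, not Roundup's actual model.

import heapq

jobs = [("gA-gB", 1.0), ("gA-gC", 3.0), ("gB-gC", 3.0),
        ("gA-gD", 4.0), ("gB-gD", 5.0), ("gC-gD", 8.0)]  # (pair, est. hours)

def makespan(job_list, workers=2):
    # Greedy list scheduling: each job goes to the earliest-free worker.
    finish = [0.0] * workers
    heapq.heapify(finish)
    for _, hours in job_list:
        t = heapq.heappop(finish)
        heapq.heappush(finish, t + hours)
    return max(finish)

print("submission order:", makespan(jobs))                               # 15.0
print("longest first   :", makespan(sorted(jobs, key=lambda j: -j[1])))  # 12.0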
A hybrid computational strategy to address WGS variant analysis in >5000 samples.
Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli
2016-09-10
The decreasing costs of sequencing are driving the need for cost-effective and real-time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of the typical computing resources available to most research labs. Other infrastructures, like the cloud AWS environment and supercomputers, also have limitations that make large-scale joint variant calling infeasible, and infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling strategies. We present a high-throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of cloud AWS, supercomputers and local high-performance computing infrastructures. We present a novel binning approach for large-scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis on the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset, in which joint calling, imputation and phasing of over 5300 whole genome samples was produced in under 6 weeks using four state-of-the-art callers. The callers used were SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4000-core in-house cluster at Baylor College of Medicine, IBM power PC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and the ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and transferred only a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low-coverage data can be accomplished in a scalable, cost-effective and fast manner by using heterogeneous computing platforms without compromising the quality of variants.
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2005-01-01
Over the past 30 years, numerical methods and simulation tools for fluid dynamic problems have advanced as a new discipline, namely, computational fluid dynamics (CFD). Although a wide spectrum of flow regimes is encountered in many areas of science and engineering, simulation of compressible flow has been the major driver for developing computational algorithms and tools. This is probably due to the large demand for predicting the aerodynamic performance characteristics of flight vehicles, such as commercial, military, and space vehicles. As flow analysis is required to be more accurate and computationally efficient for both commercial and mission-oriented applications (such as those encountered in meteorology, aerospace vehicle development, general fluid engineering and biofluid analysis), CFD tools for engineering become increasingly important for predicting safety, performance and cost. This paper presents the author's perspective on the maturity of CFD, especially from an aerospace engineering point of view.
Practical Designs of Brain-Computer Interfaces Based on the Modulation of EEG Rhythms
NASA Astrophysics Data System (ADS)
Wang, Yijun; Gao, Xiaorong; Hong, Bo; Gao, Shangkai
A brain-computer interface (BCI) is a communication channel which does not depend on the brain's normal output pathways of peripheral nerves and muscles [1-3]. It supplies paralyzed patients with a new approach to communicate with the environment. Among various brain monitoring methods employed in current BCI research, electroencephalogram (EEG) is the main interest due to its advantages of low cost, convenient operation and non-invasiveness. In present-day EEG-based BCIs, the following signals have been paid much attention: visual evoked potential (VEP), sensorimotor mu/beta rhythms, P300 evoked potential, slow cortical potential (SCP), and movement-related cortical potential (MRCP). Details about these signals can be found in chapter "Brain Signals for Brain-Computer Interfaces". These systems offer some practical solutions (e.g., cursor movement and word processing) for patients with motor disabilities.
32 CFR 701.52 - Computation of fees.
Code of Federal Regulations, 2010 CFR
2010-07-01
... correspondence and preparation costs, these fees are not recoupable from the requester. (b) DD 2086, Record of... costs, as requesters may solicit a copy of that document to ensure accurate computation of fees. Costs... 32 National Defense 5 2010-07-01 2010-07-01 false Computation of fees. 701.52 Section 701.52...
12 CFR 1070.22 - Fees for processing requests for CFPB records.
Code of Federal Regulations, 2013 CFR
2013-01-01
... CFPB shall charge the requester for the actual direct cost of the search, including computer search time, runs, and the operator's salary. The fee for computer output will be the actual direct cost. For... and the cost of operating the computer to process a request) equals the equivalent dollar amount of...
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa
2000-01-01
The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.
A survey of computer search service costs in the academic health sciences library.
Shirley, S
1978-01-01
The Norris Medical Library, University of Southern California, has recently completed an extensive survey of costs involved in the provision of computer search services beyond vendor charges for connect time and printing. In this survey costs for such items as terminal depreciation, repair contract, personnel time, and supplies are analyzed. Implications of this cost survey are discussed in relation to planning and price setting for computer search services. PMID:708953
Individual titanium zygomatic implant
NASA Astrophysics Data System (ADS)
Nekhoroshev, M. V.; Ryabov, K. N.; Avdeev, E. V.
2018-03-01
Custom individual implants for the reconstruction of craniofacial defects have gained importance due to better qualitative characteristics than their generic counterparts, i.e. plates, which must be bent according to patient needs. The additive manufacturing of individual implants allows reducing the cost and improving the quality of implants. In this paper, the authors describe the design of zygomatic implant models based on computed tomography (CT) data. The fabrication of the implants will be carried out with 3D printing by the selective laser melting machine SLM 280HL.
53rd Course Molecular Physics and Plasmas in Hypersonics 2
2013-09-09
…between CO2 symmetric and bending modes (11) proceeds fast due to the Fermi resonance between the frequencies of these modes and can be considered as… of local maximization of the collision frequency given by Eq. (11) allows a strong reduction of the computational cost and it is verified a… called arc-jets or DC-Plasmatron [25, 26]. PWTs using an Inductively Coupled Plasma (ICP) torch, based on Radio-Frequency (RF) discharge, are so-called…
Robust optimization of a tandem grating solar thermal absorber
NASA Astrophysics Data System (ADS)
Choi, Jongin; Kim, Mingeon; Kang, Kyeonghwan; Lee, Ikjin; Lee, Bong Jae
2018-04-01
Ideal solar thermal absorbers need a high spectral absorptance over the broad solar spectrum to utilize solar radiation effectively. The majority of recent studies on solar thermal absorbers focus on achieving nearly perfect absorption using nanostructures whose characteristic dimension is smaller than the wavelength of sunlight. However, precise fabrication of such nanostructures is not easy in practice; that is, unavoidable errors always occur to some extent in the dimensions of fabricated nanostructures, causing an undesirable deviation in absorption performance between the designed structure and the actually fabricated one. In order to minimize the variation in the solar absorptance due to fabrication error, robust optimization can be performed during the design process. However, the optimization of a solar thermal absorber considering all design variables often requires tremendous computational cost to find an optimal combination of design variables with robustness as well as high performance. To achieve this goal, we apply robust optimization using the Kriging method and the genetic algorithm for designing a tandem grating solar absorber. By constructing a surrogate model through the Kriging method, the computational cost can be substantially reduced because an exact calculation of the performance for every combination of variables is not necessary. Using the surrogate model and the genetic algorithm, we successfully design an effective solar thermal absorber exhibiting a low level of performance degradation due to fabrication uncertainty in the design variables.
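A minimal sketch of surrogate-assisted robust optimization in the spirit described: fit a Gaussian process (Kriging) model to a few expensive evaluations, then let a simple genetic loop search for a design whose surrogate-predicted performance holds up under perturbation. The objective, variable ranges and GA details are assumptions, not the paper's method.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_absorptance(x):          # stand-in for the electromagnetic solver
    return np.exp(-np.sum((x - 0.5) ** 2, axis=-1) * 8.0)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (40, 2))         # sampled designs (e.g., grating depth/period)
y = expensive_absorptance(X)
gp = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(X, y)

def robust_fitness(pop, sigma=0.05, k=16):
    # Mean surrogate performance under an assumed fabrication error ~ N(0, sigma).
    noise = rng.normal(0, sigma, (k, *pop.shape))
    perturbed = np.clip(pop + noise, 0, 1).reshape(-1, 2)
    return gp.predict(perturbed).reshape(k, -1).mean(axis=0)

pop = rng.uniform(0, 1, (30, 2))
for _ in range(40):                    # simple GA: keep best half, mutate children
    fit = robust_fitness(pop)
    parents = pop[np.argsort(fit)[-15:]]
    children = parents + rng.normal(0, 0.05, parents.shape)
    pop = np.clip(np.vstack([parents, children]), 0, 1)

print("best robust design:", pop[np.argmax(robust_fitness(pop))])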
Achelrod, Dmitrij; Welte, Tobias; Schreyögg, Jonas; Stargardt, Tom
2016-09-01
To curb costs and improve health outcomes in chronic obstructive pulmonary disease (COPD), a nationwide disease management programme (DMP) was introduced in Germany in 2005, yet its effectiveness has not been comprehensively evaluated. The aim was to examine the effects of the German COPD DMP over three years on costs and health resource utilisation from the payer perspective, as well as on process quality, morbidity and mortality. A retrospective, population-based cohort study design was applied, using administrative data. After eliminating differences in observable characteristics between the DMP and the control group with entropy balancing, difference-in-differences estimators were computed to account for time-invariant unobservable heterogeneity. 215,104 individuals were included in the analysis, of whom 25,269 were enrolled in the DMP. DMP patients had a reduced mortality hazard ratio (0.89, 95% CI: 0.84-0.94) but incurred excess costs of €553 per year. DMP enrolees revealed higher healthcare utilisation, with larger shares of individuals being hospitalised (3.14%), consulting an outpatient clinic due to exacerbations (11.13%) and receiving pharmaceutical prescriptions (2.78). However, the average length of hospitalisation due to COPD fell by 0.49 days, and adherence to medication guidelines as well as indicators of morbidity improved. The German COPD DMP achieved significant improvements in mortality, morbidity and process quality, but at higher costs. Given the low ICER per life-year gained, the COPD DMP may constitute a cost-effective option to promote COPD population health.
A DSMC Study of Low Pressure Argon Discharge
NASA Astrophysics Data System (ADS)
Hash, David; Meyyappan, M.
1997-10-01
Work toward a self-consistent plasma simulation using the DSMC method for examination of the flowfields of low-pressure high-density plasma reactors is presented. Presently, DSMC simulations for these applications involve either treating the electrons as a fluid or imposing experimentally determined values for the electron number density profile. In either approach, the electrons themselves are not physically simulated. Self-consistent plasma DSMC simulations have been conducted for aerospace applications, but at a severe computational cost due in part to the scalar architectures on which the codes were employed. The present work attempts to conduct such simulations at a more reasonable cost using a plasma version of the object-oriented parallel Cornell DSMC code, MONACO, on an IBM SP-2. Due to the availability of experimental data, the GEC reference cell is chosen for preliminary investigations. An argon discharge is examined, affording a simple chemistry set with eight gas-phase reactions and five species: Ar, Ar^+, Ar^*, Ar_2, and e, where Ar^* is a metastable.
Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.
Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun
2017-07-01
In recent years, taking photos and capturing videos with mobile devices has become increasingly popular. Emerging applications based on depth reconstruction techniques have been developed, such as Google's Lens Blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to unstable image quality and uncontrolled scene conditions in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. In particular, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues and to constrain the stereo matching at a finer scale. The proposed framework has been evaluated on challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.
Gøthesen, Øystein; Slover, James; Havelin, Leif; Askildsen, Jan Erik; Malchau, Henrik; Furnes, Ove
2013-07-06
The use of Computer Assisted Surgery (CAS) for knee replacements is intended to improve the alignment of knee prostheses in order to reduce the number of revision operations. Is the cost-effectiveness of computer-assisted surgery influenced by patient volume and age? By employing a Markov model, we analysed the cost-effectiveness of computer-assisted surgery versus conventional arthroplasty with respect to implant survival and operation volume in two theoretical Norwegian age cohorts. We obtained mortality and hospital cost data over a 20-year period from Norwegian registers. We presumed that the cost of an intervention would need to be below NOK 500,000 per QALY (Quality Adjusted Life Year) gained to be considered cost-effective. The added cost of computer-assisted surgery, provided it has no impact on implant survival, is NOK 1037 and NOK 1414 per quality-adjusted life-year for 60- and 75-year-olds respectively at a volume of 25 prostheses per year, and NOK 128 and NOK 175 respectively at a volume of 250 prostheses per year. Sensitivity analyses showed that the 10-year implant survival in cohort 1 needs to rise from 89.8% to 90.6% at 25 prostheses per year, and from 89.8% to 89.9% at 250 prostheses per year, for computer-assisted surgery to be considered cost-effective. In cohort 2, the required improvement is a rise from 95.1% to 95.4% at 25 prostheses per year, and from 95.10% to 95.14% at 250 prostheses per year. The cost of using computer navigation for total knee replacements may be acceptable for 60-year-old as well as 75-year-old patients if the technique increases the implant survival rate even marginally and the department has a high operation volume. A low-volume department might not achieve cost-effectiveness unless computer navigation has a more significant impact on implant survival, and may thus defer the investment until such data are available.
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
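The claimed formula reduces to a simple sum, which a minimal sketch with invented figures makes concrete:

def future_facility_conditions(maintenance_cost: float,
                               modernization_factor: float,
                               backlog_factor: float) -> float:
    # Future conditions = time-period-specific maintenance cost
    # + modernization factor + backlog factor, per the claim above.
    return maintenance_cost + modernization_factor + backlog_factor

print(future_facility_conditions(120_000.0, 45_000.0, 30_000.0))  # invented figures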
COMPUTER PROGRAM FOR CALCULATING THE COST OF DRINKING WATER TREATMENT SYSTEMS
This FORTRAN computer program calculates the construction and operation/maintenance costs for 45 centralized unit treatment processes for water supply. The calculated costs are based on various design parameters and raw water quality. These cost data are applicable to small size ...
Soergel, Philipp; Makowski, Lars; Schippert, Cordula; Staboulidou, Ismini; Hille, Ursula; Hillemanns, Peter
2012-02-01
Cervical intraepithelial neoplasia (CIN) represents the precursor of invasive cervical cancer and is associated with human papillomavirus (HPV) infection, against which two vaccines have been approved in recent years. The standard treatments for high-grade CIN are conisation procedures, which are associated with an increased risk of subsequent pregnancy complications such as premature delivery and possible subsequent life-long disability. HPV vaccination therefore has the potential to decrease neonatal morbidity and mortality, which has not been taken into account in published cost-effectiveness models. We calculated the possible reduction rate of conisations for different vaccination strategies for Germany. Using this rate, we computed the reduction of conisation-associated preterm deliveries, life-long disability and neonatal death due to prematurity. The number of life-years saved (LYS) and the gain in quality-adjusted life-years (QALYs) were estimated, and the incremental costs per LYS / additional QALY were calculated. The reduction of conisation procedures was highest in scenario I (vaccination coverage of 90% prior to HPV exposure), at about 50%. The costs per LYS or additional QALY were lowest in scenarios I, II and III, at 45,101 € or 43,505-47,855 €, and rose to 60,544 € or 58,401-64,240 € in scenario V (50% vaccinated prior to sexual activity + an additional 20% catch-up at a mean age of 20 y). Regarded as "vaccines against conisation-related neonatal morbidity and mortality" alone, the HPV 16/18 vaccines already have the potential to be cost-effective. This effect adds to the reduction of cervical cancer cases and the decreased costs of screening for CIN. Further studies on the cost-effectiveness of HPV vaccination should take this significant amount of neonatal morbidity and mortality into account.
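The cost-per-LYS and cost-per-QALY figures above are incremental cost-effectiveness ratios; a minimal sketch of the computation, with invented per-person numbers:

def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    # ICER = (C_new - C_old) / (E_new - E_old), e.g. euros per QALY gained.
    return (cost_new - cost_old) / (effect_new - effect_old)

# Assumed per-person costs and QALYs for vaccination vs. no vaccination:
print(f"{icer(1200.0, 300.0, 22.14, 22.12):,.0f} per QALY")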
Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches
NASA Astrophysics Data System (ADS)
Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo
This paper presents an optimal production and distribution management scheme for the structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, the production of heat and electric power, regional heat demand, electric power bidding and sales, and the transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear programming (MILP) problem whose objective is to minimize the overall cost of the integrated DHS while satisfying the operational constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.
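A minimal MILP sketch in the flavor described, with a two-segment piecewise linear production cost and a binary start-up cost for one heat unit, using the PuLP library. The demands, cost slopes and limits are invented assumptions, not the paper's model.

import pulp

periods = range(4)
demand = [30, 60, 90, 50]            # assumed heat demand per period (MWh)

prob = pulp.LpProblem("district_heating", pulp.LpMinimize)
# Two linear segments of the production cost: cheap up to 50 MWh, dearer above.
q1 = pulp.LpVariable.dicts("q1", periods, 0, 50)    # segment 1 output
q2 = pulp.LpVariable.dicts("q2", periods, 0, 50)    # segment 2 output
on = pulp.LpVariable.dicts("on", periods, cat="Binary")
start = pulp.LpVariable.dicts("start", periods, cat="Binary")

for t in periods:
    prob += q1[t] + q2[t] == demand[t]              # meet heat demand
    prob += q1[t] + q2[t] <= 100 * on[t]            # unit must be on to produce
    prev = on[t - 1] if t > 0 else 0
    prob += start[t] >= on[t] - prev                # start-up indicator (stairwise cost)

prob += pulp.lpSum(20 * q1[t] + 35 * q2[t] + 500 * start[t] for t in periods)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total cost:", pulp.value(prob.objective))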
Estimating costs and performance of systems for machine processing of remotely sensed data
NASA Technical Reports Server (NTRS)
Ballard, R. J.; Eastwood, L. F., Jr.
1977-01-01
This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.
Gega, L; Norman, I J; Marks, I M
2007-03-01
Exposure therapy is effective for phobic anxiety disorders (specific phobias, agoraphobia, social phobia) and panic disorder. Despite their high prevalence in the community, sufferers often get no treatment, or if they do, it is usually after a long delay. This is largely due to the scarcity of healthcare professionals trained in exposure therapy, which is due, in part, to the high cost of training. The traditional teaching methods employed are labour-intensive, being based mainly on role-play in small groups with feedback and coaching from experienced trainers. In an attempt to increase knowledge and skills in exposure therapy, there is now some interest in providing relevant teaching as part of pre-registration nurse education. Computer programs have been developed to teach terminology and simulate clinical scenarios for health professionals, and offer a potentially cost-effective alternative to traditional teaching methods. The aims were to test whether student nurses would learn about exposure therapy for phobia/panic as well by computer-aided self-instruction as by face-to-face teaching, and to compare the individual and combined effects of two educational methods: traditional face-to-face teaching, comprising a presentation with discussion and questions/answers by a specialist cognitive behaviour nurse therapist, and a computer-aided self-instructional programme based on a self-help programme for patients with phobia/panic called FearFighter. Outcomes were students' knowledge, skills and satisfaction. The design was a randomised controlled trial with a crossover, completed on 2 consecutive days of 4 h per day. Participants were ninety-two mental health pre-registration nursing students, of mixed gender, age and ethnic origin, with no previous training in cognitive behaviour therapy, studying at one UK university. The two teaching methods led to similar improvements in knowledge and skills, and to similar satisfaction, when used alone. Using them in tandem conferred no added benefit. Computer-aided self-instruction was more efficient, as it saved teacher preparation and delivery time and needed no specialist tutor; it saved almost all preparation time and delivery effort for the expert teacher. When added to past results in medical students, the present results in nurses justify the use of computer-aided self-instruction for learning about exposure therapy for phobia/panic and research into its value for other areas of health education.
26 CFR 7.57(d)-1 - Election with respect to straight line recovery of intangibles.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Tax Reform Act of 1976. Under this election taxpayers may use cost depletion to compute straight line... wells to which the election applies, cost depletion to compute straight line recovery of intangibles for... whether or not the taxpayer uses cost depletion in computing taxable income. (5) The election is made by a...
The Processing Cost of Reference Set Computation: Acquisition of Stress Shift and Focus
ERIC Educational Resources Information Center
Reinhart, Tanya
2004-01-01
Reference set computation -- the construction of a (global) comparison set to determine whether a given derivation is appropriate in context -- comes with a processing cost. I argue that this cost is directly visible at the acquisition stage: In those linguistic areas in which it has been independently established that such computation is indeed…
Some Useful Cost-Benefit Criteria for Evaluating Computer-Based Test Delivery Models and Systems
ERIC Educational Resources Information Center
Luecht, Richard M.
2005-01-01
Computer-based testing (CBT) is typically implemented using one of three general test delivery models: (1) multiple fixed testing (MFT); (2) computer-adaptive testing (CAT); or (3) multistage testing (MST). This article reviews some of the real cost drivers associated with CBT implementation--focusing on item production costs, the costs…
Satellite broadcasting system study
NASA Technical Reports Server (NTRS)
1972-01-01
The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum-cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum-cost system are shown graphically. Topics discussed include: the main line control program, the ground segment model, the space segment model, cost models and launch vehicle selection. Several examples of minimum-cost systems resulting from the computer program are presented. A listing of the computer program is also included.
NASA Astrophysics Data System (ADS)
Chetty, S.; Field, L. A.
2014-12-01
SWIMS III is a low-cost, autonomous sensor data gathering platform developed specifically for extreme/harsh cold environments. The Arctic Ocean's continuing decrease in summer-time ice is related to rapidly diminishing multi-year ice caused by the effects of climate change. Ice911 Research aims to develop environmentally inert materials that, when deployed, will increase the albedo, enabling the formation and/or preservation of multi-year ice. SWIMS III's sophisticated autonomous sensors are designed to measure the albedo, weather, water temperature and other environmental parameters. The platform uses low-cost, high-accuracy/precision sensors and an extreme-environment command and data handling computer system with satellite and terrestrial wireless solutions. The system also incorporates tilt sensors and sonar-based ice thickness sensors. It is lightweight and can be deployed by hand by a single person. This presentation covers the technical and design challenges in developing and deploying these platforms.
Event-Triggered Adaptive Dynamic Programming for Continuous-Time Systems With Control Constraints.
Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo
2016-08-31
In this paper, an event-triggered near-optimal control structure is developed for nonlinear continuous-time systems with control constraints. Due to the saturating actuators, a nonquadratic cost function is introduced and the Hamilton-Jacobi-Bellman (HJB) equation for constrained nonlinear continuous-time systems is formulated. In order to solve the HJB equation, an actor-critic framework is presented, in which the critic network approximates the cost function and the action network estimates the optimal control law. In addition, the control signal is transmitted in an aperiodic manner to reduce the computational and transmission costs. Both networks are updated only at the trigger instants decided by the event-triggered condition. Detailed Lyapunov analysis is provided to guarantee that the closed-loop event-triggered system is ultimately bounded. Three case studies demonstrate the effectiveness of the proposed method.
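A minimal sketch of the event-triggered idea described above, not the paper's actor-critic implementation: the control is recomputed and transmitted only when the gap between the current state and the last-sampled state exceeds a threshold. The plant, gain, and trigger threshold below are illustrative assumptions.

```python
import numpy as np

# Toy event-triggered state-feedback loop (illustrative only; the paper's
# actor-critic networks and HJB machinery are not reproduced here).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # example plant dx/dt = Ax + Bu
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.2]])                 # assumed stabilizing feedback gain
dt, T = 0.01, 10.0
x = np.array([1.0, 0.0])
x_event = x.copy()                         # state at the last trigger instant
u = -K @ x_event
events = 0

for k in range(int(T / dt)):
    # Event-triggered condition: update only when the deviation from the
    # last-transmitted state grows too large relative to the current state.
    if np.linalg.norm(x - x_event) > 0.1 * np.linalg.norm(x) + 1e-3:
        x_event = x.copy()
        u = -K @ x_event                   # control held constant between events
        events += 1
    x = x + dt * (A @ x + B @ u).ravel()   # forward-Euler plant integration

print(f"{events} control updates instead of {int(T/dt)} periodic ones")
```

The point of the trigger condition is exactly the trade described in the abstract: far fewer transmissions and control computations than a periodic implementation, at the price of a bounded (rather than asymptotically exact) closed loop.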
Progressive Damage and Failure Analysis of Composite Laminates
NASA Astrophysics Data System (ADS)
Joseph, Ashith P. K.
Composite materials are widely used across industries for structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance and material property tailorability. To fully exploit the capability of composites, the load-carrying capacity of parts made from them must be known. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure-analysis tools for composites has led industries to rely more on coupon- and component-level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon-level tests to fully characterize their behavior, which makes the entire testing process time consuming and costly. The alternative is to use virtual testing tools that can predict the complex failure mechanisms accurately, reducing the cost to only the associated computational expense and yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) accurate representation of failure mechanisms -- the failure progression predicted by the virtual tool must match that observed in experiments, and a tool has to be assessed based on the mechanisms it can capture; (2) computational efficiency -- the greatest advantages of a virtual tool are the savings in time and money, so computational efficiency is one of the most needed features; (3) applicability to a wide range of problems -- structural parts are subjected to a variety of loading conditions, including static, dynamic and fatigue, and a good virtual testing tool should make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool that can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis tool is validated by comparing simulations against experiments for a selected number of quasi-static loading cases.
Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah
2017-03-24
Wireless Sensor Networks (WSNs) consist of lightweight devices that measure sensitive data and are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) face severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored polynomial distribution-based key establishment schemes and identified the issue that the resultant polynomial value is either storage intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we propose an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The session key management protocol is established by applying a symmetric polynomial for group members, with the group head acting as the responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and storage. The security justification of the proposed scheme has been completed using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
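A toy sketch of the symmetric-polynomial idea underlying such key establishment schemes (the classic Blundo construction, not EKM's exact protocol): each node stores a univariate share of a symmetric bivariate polynomial, and any two nodes can compute the same pairwise key without further communication. The modulus, degree, and node identifiers are illustrative assumptions.

```python
import random

P = 2_147_483_647            # toy prime modulus (far too small for real use)
T = 3                        # polynomial degree (collusion threshold)
random.seed(1)

# Symmetric coefficient matrix a[i][j] == a[j][i] defines
# f(x, y) = sum_{i,j} a[i][j] * x^i * y^j  (mod P), so f(x, y) == f(y, x).
a = [[0] * (T + 1) for _ in range(T + 1)]
for i in range(T + 1):
    for j in range(i, T + 1):
        a[i][j] = a[j][i] = random.randrange(P)

def share(node_id):
    """Univariate share g(y) = f(node_id, y), stored as T+1 coefficients."""
    return [sum(a[i][j] * pow(node_id, i, P) for i in range(T + 1)) % P
            for j in range(T + 1)]

def pairwise_key(my_share, peer_id):
    """Evaluate the stored share at the peer's id: g(peer_id)."""
    return sum(c * pow(peer_id, j, P) for j, c in enumerate(my_share)) % P

s_alice, s_bob = share(17), share(42)
assert pairwise_key(s_alice, 42) == pairwise_key(s_bob, 17)  # same key
```

The storage/computation trade-off the abstract refers to is visible here: each node stores T+1 field elements, and regenerating the polynomial on every join/leave or key refresh repeats the whole share distribution.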
NASA Astrophysics Data System (ADS)
Garousi Nejad, I.; He, S.; Tang, Q.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Tarboton, D. G.; Ohara, N.; Lin, H.
2017-12-01
Spatial scale is one of the main considerations in hydrological modeling of snowmelt in mountainous areas. The size of model elements controls the degree to which variability can be explicitly represented versus what must be parameterized using effective properties, such as averages or other subgrid variability parameterizations, that may degrade the quality of model simulations. For snowmelt modeling, terrain parameters such as slope, aspect, vegetation and elevation play an important role in the timing and quantity of snowmelt that serves as an input to hydrologic runoff generation processes. In general, higher resolution enhances the accuracy of the simulation, since fine meshes represent and preserve the spatial variability of atmospheric and surface characteristics better than coarse resolution. However, this increases computational cost, and there may be a scale beyond which the model response does not improve due to diminishing sensitivity to variability and the irreducible uncertainty associated with the spatial interpolation of inputs. This paper examines the influence of spatial resolution on the snowmelt process using simulations of, and data from, the Animas River watershed, an alpine mountainous area in Colorado, USA, using ADHydro, an unstructured distributed physically based hydrological model developed for a parallel computing environment. Five spatial resolutions (30 m, 100 m, 250 m, 500 m, and 1 km) were used to investigate the variations in hydrologic response. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of ADHydro to obtain a balance between representing spatial variability and the computational cost. According to the results, variations in the input variables and parameters due to using different spatial resolutions resulted in changes in the obtained hydrological variables, especially snowmelt, both at the basin scale and distributed across the model mesh.
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
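A compact sketch of variance-based global sensitivity analysis of the kind the abstract mentions, using the standard Saltelli pick-freeze estimator on a cheap toy model (the Ishigami function), not the scramjet simulations themselves. Sample sizes and the model are illustrative assumptions.

```python
import numpy as np

def model(x):
    # Ishigami test function standing in for an expensive flow simulation.
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))     # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol index S_i via the pick-freeze estimator:
# S_i ~ mean(fB * (f(A with column i taken from B) - fA)) / Var(f)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    Si = np.mean(fB * (model(ABi) - fA)) / var
    print(f"S_{i+1} ~ {Si:.3f}")           # known values: 0.314, 0.442, 0.0
```

Parameters with near-zero indices contribute little output variance and can be frozen, which is exactly the stochastic-dimension reduction described above.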
A Stochastic Spiking Neural Network for Virtual Screening.
Morro, A; Canals, V; Oliver, A; Alomar, M L; Galan-Prado, F; Ballester, P J; Rossello, J L
2018-04-01
Virtual screening (VS) has become a key computational tool in early drug design, and screening performance is of high relevance due to the large volume of data that must be processed to identify molecules with the sought activity-related pattern. At the same time, hardware implementations of spiking neural networks (SNNs) are emerging as a computing technique that can be applied to parallelize processes that normally carry a high cost in terms of computing time and power. Consequently, SNNs represent an attractive alternative for time-consuming processing tasks such as VS. In this brief, we present a smart stochastic spiking neural architecture that implements the ultrafast shape recognition (USR) algorithm, achieving two orders of magnitude of speed improvement with respect to USR software implementations. The neural system is implemented in hardware using field-programmable gate arrays, allowing a highly parallelized USR implementation. The results show that, due to the high parallelization of the system, millions of compounds can be checked in reasonable times. From these results, we can state that the proposed architecture is a feasible methodology to efficiently enhance time-consuming data-mining processes such as 3-D molecular similarity search.
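A plain-numpy sketch of the USR descriptor itself, i.e. the software baseline the hardware accelerates, not the spiking implementation: twelve moments of the atomic distance distributions from four reference points, compared with the usual inverse-Manhattan score. The moment variant (mean/std/skew) is one common choice and is an assumption here.

```python
import numpy as np

def usr_descriptor(coords):
    """12-element USR shape descriptor from an (n_atoms, 3) coordinate array."""
    ctd = coords.mean(axis=0)                                      # centroid
    cst = coords[np.argmin(np.linalg.norm(coords - ctd, axis=1))]  # closest to ctd
    fct = coords[np.argmax(np.linalg.norm(coords - ctd, axis=1))]  # farthest from ctd
    ftf = coords[np.argmax(np.linalg.norm(coords - fct, axis=1))]  # farthest from fct
    desc = []
    for ref in (ctd, cst, fct, ftf):
        d = np.linalg.norm(coords - ref, axis=1)   # distance distribution
        mu, sigma = d.mean(), d.std()
        skew = np.mean((d - mu) ** 3) / (sigma ** 3 if sigma > 0 else 1.0)
        desc.extend([mu, sigma, skew])
    return np.array(desc)

def usr_similarity(d1, d2):
    """USR score in (0, 1]: inverse of one plus the mean absolute difference."""
    return 1.0 / (1.0 + np.abs(d1 - d2).mean())

mol_a = np.random.default_rng(0).normal(size=(30, 3))
mol_b = mol_a + 0.05          # rigid translation: shape is unchanged
print(usr_similarity(usr_descriptor(mol_a), usr_descriptor(mol_b)))  # 1.0
```

Because the descriptor depends only on internal distances, it is invariant to translation and rotation, which is what makes the comparison of millions of compounds a pure, easily parallelized vector operation.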
An Autonomous Underwater Recorder Based on a Single Board Computer.
Caldas-Morgan, Manuel; Alvarez-Rosario, Alexander; Rodrigues Padovese, Linilson
2015-01-01
As industrial activities continue to grow on the Brazilian coast, underwater sound measurements are becoming of great scientific importance, as they are essential to evaluate the impact of these activities on local ecosystems. In this context, the use of commercial underwater recorders is not always the most feasible alternative, due to their high cost and lack of flexibility. Designing and building more affordable alternatives from scratch can become complex because it requires profound knowledge in areas such as electronics and low-level programming. With the aim of providing a solution, a highly flexible, low-cost alternative to commercial recorders was built around a Raspberry Pi single-board computer. A properly working prototype was assembled, and it demonstrated adequate performance levels in all tested situations. The prototype was equipped with a power management module, which was thoroughly evaluated and is estimated to allow great battery savings on long-term scheduled recordings. The underwater recording device was successfully deployed at selected locations along the Brazilian coast, where it adequately recorded animal and man-made acoustic events, among others. Although its power consumption may not be as efficient as that of commercial and/or micro-processed solutions, the advantages offered by the proposed device are its high customizability, lower development time and, inherently, its cost.
Design and synthesis of the superionic conductor Na10SnP2S12
Richards, William D.; Tsujimura, Tomoyuki; Miara, Lincoln J.; Wang, Yan; Kim, Jae Chul; Ong, Shyue Ping; Uechi, Ichiro; Suzuki, Naoki; Ceder, Gerbrand
2016-01-01
Sodium-ion batteries are emerging as candidates for large-scale energy storage due to their low cost and the wide variety of cathode materials available. As battery size and adoption in critical applications increases, safety concerns are resurfacing due to the inherent flammability of organic electrolytes currently in use in both lithium and sodium battery chemistries. Development of solid-state batteries with ionic electrolytes eliminates this concern, while also allowing novel device architectures and potentially improving cycle life. Here we report the computation-assisted discovery and synthesis of a high-performance solid-state electrolyte material: Na10SnP2S12, with room temperature ionic conductivity of 0.4 mS cm−1 rivalling the conductivity of the best sodium sulfide solid electrolytes to date. We also computationally investigate the variants of this compound where tin is substituted by germanium or silicon and find that the latter may achieve even higher conductivity. PMID:26984102
Implementing ADM1 for plant-wide benchmark simulations in Matlab/Simulink.
Rosen, C; Vrecko, D; Gernaey, K V; Pons, M N; Jeppsson, U
2006-01-01
The IWA Anaerobic Digestion Model No. 1 (ADM1) was presented in 2002 and is expected to represent the state-of-the-art model within this field in the future. Due to its complexity, the implementation of the model is not a simple task, and several computational aspects need to be considered, in particular if the ADM1 is to be included in dynamic simulations of plant-wide or even integrated systems. In this paper, the experiences gained from a Matlab/Simulink implementation of ADM1 into the extended COST/IWA Benchmark Simulation Model (BSM2) are presented. Aspects related to system stiffness, model interfacing with the ASM family, mass balances, acid-base equilibrium and algebraic solvers for pH and other troublesome state variables, numerical solvers and simulation time are discussed. The main conclusion is that, if implemented properly, the ADM1 will also produce high-quality results in dynamic plant-wide simulations including noise, discrete sub-systems, etc., without imposing any major restrictions due to extensive computational efforts.
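The pH mentioned above is one of the algebraic states typically obtained by solving a charge-balance equation at every integration step. A minimal sketch for a single weak acid in water, solved by bisection; the species, constants and concentration are illustrative assumptions, and the BSM2 implementation balances many more ion pairs.

```python
import math

Ka = 10 ** -4.76        # acetic acid dissociation constant (assumed example)
Kw = 1e-14              # ion product of water
C_tot = 0.01            # total acetate concentration, mol/L

def charge_balance(h):
    """[H+] - [OH-] - [Ac-]; the root is the equilibrium proton concentration."""
    return h - Kw / h - Ka * C_tot / (Ka + h)

# Simple bisection on [H+]; robust because the function is monotone in h.
lo, hi = 1e-14, 1.0
for _ in range(200):
    mid = math.sqrt(lo * hi)         # geometric midpoint suits the log scale
    if charge_balance(mid) > 0:
        hi = mid
    else:
        lo = mid

print(f"pH ~ {-math.log10(math.sqrt(lo * hi)):.2f}")   # ~ 3.4 for this example
```

Solving such algebraic sub-problems with a dedicated root finder, instead of adding them as fast ODE states, is one standard way to tame the stiffness the abstract describes.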
A fast dynamic grid adaption scheme for meteorological flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiedler, B.H.; Trapp, R.J.
1993-10-01
The continuous dynamic grid adaption (CDGA) technique is applied to a compressible, three-dimensional model of a rising thermal. The computational cost, per grid point per time step, of using CDGA instead of a fixed, uniform Cartesian grid is about 53% of the total cost of the model with CDGA. The use of general curvilinear coordinates contributes 11.7% to this total, calculating and moving the grid 6.1%, and continually updating the transformation relations 20.7%. Costs due to calculations that involve the gridpoint velocities (as well as some substantial unexplained costs) contribute the remaining 14.5%. A simple way to limit the cost of calculating the grid is presented. The grid is adapted by solving an elliptic equation for gridpoint coordinates on a coarse grid and then interpolating the full finite-difference grid. In this application, the additional costs per grid point of CDGA are shown to be easily offset by the savings resulting from the reduction in the required number of grid points. In the simulation of the thermal, costs are reduced by a factor of 3 as compared with those of a companion model with a fixed, uniform Cartesian grid. 8 refs., 8 figs.
ERIC Educational Resources Information Center
Dennis, J. Richard; Thomson, David
This paper is concerned with a low cost alternative for providing computer experience to secondary school students. The brief discussion covers the programmable calculator and its relevance for teaching the concepts and the rudiments of computer programming and for computer problem solving. A list of twenty-five programming activities related to…
Computers in Education: Their Use and Cost, Education Automation Monograph Number 2.
ERIC Educational Resources Information Center
American Data Processing, Inc., Detroit, MI.
This monograph on the cost and use of computers in education consists of two parts. Part I is a report of the President's Science Advisory Committee concerning the cost and use of the computer in undergraduate, secondary, and higher education. In addition, the report contains a discussion of the interaction between research and educational uses of…
A computer program for analysis of fuelwood harvesting costs
George B. Harpole; Giuseppe Rensi
1985-01-01
The fuelwood harvesting computer program (FHP) is written in FORTRAN 60 and designed to select a collection of harvest units and systems from among alternatives to satisfy specified energy requirements at the lowest cost per million Btu as recovered in a boiler, or per thousand pounds of H2O evaporative capacity in kiln drying. Computed energy costs are used as a…
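The selection criterion above is a cost per unit of recovered energy; a tiny arithmetic illustration with hypothetical numbers (not taken from the FHP program) of how such a figure is formed.

```python
# Hypothetical numbers for a single harvest unit; not from the FHP program.
harvest_cost = 18.50        # $ per green ton, delivered
btu_per_ton = 9_000_000     # recoverable Btu per green ton of fuelwood
boiler_efficiency = 0.65    # fraction of fuel energy recovered in the boiler

usable_btu_per_ton = btu_per_ton * boiler_efficiency
cost_per_million_btu = harvest_cost / (usable_btu_per_ton / 1_000_000)
print(f"${cost_per_million_btu:.2f} per million Btu recovered")  # ~ $3.16
```

Ranking candidate harvest units and systems by this figure, subject to the total energy requirement, is the kind of lowest-cost selection the abstract describes.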
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions K Appendix K to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. K Appendix K to Part 226—Total Annual Loan Cost Rate Computations for Reverse Mortgage...
Towards dynamic remote data auditing in computational clouds.
Sookhak, Mehdi; Akhunzada, Adnan; Gani, Abdullah; Khurram Khan, Muhammad; Anuar, Nor Badrul
2014-01-01
Cloud computing is a significant shift of computational paradigm in which computing as a utility and storing data remotely have great potential. Enterprises and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the data owners' lack of control and physical possession of the data. To better address this issue, researchers have focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are applicable only to static archive data and cannot audit dynamically updated outsourced data. We propose an effectual RDA technique based on algebraic signature properties for cloud storage systems and also present a new data structure capable of efficiently supporting dynamic data operations such as append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable to large-scale data with minimum computation cost. The comparative analysis with state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server.
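A toy sketch of the algebraic-signature property that makes such auditing cheap: the signature of a blockwise sum equals the sum of the signatures, so an auditor can check a linear combination of blocks against a linear combination of stored signatures. Real schemes work in a Galois field; the prime field and parameters here are illustrative assumptions.

```python
P = 2_305_843_009_213_693_951      # Mersenne prime 2^61 - 1 (toy modulus)
ALPHA = 7                          # fixed public element of the field

def algebraic_signature(block):
    """sig(b) = sum_i b[i] * ALPHA^i (mod P); compresses a block to one element."""
    return sum(b * pow(ALPHA, i, P) for i, b in enumerate(block)) % P

b1 = [12, 99, 7, 4]
b2 = [3, 50, 11, 8]
combined = [(x + y) % P for x, y in zip(b1, b2)]

# Homomorphism exploited by remote data auditing: the server returns one
# combined block, and the auditor verifies it against combined signatures.
assert algebraic_signature(combined) == \
       (algebraic_signature(b1) + algebraic_signature(b2)) % P
```

This is why the auditor's communication cost stays constant: one aggregated block and one field element suffice to spot-check many stored blocks.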
PCS: a pallet costing system for wood pallet manufacturers (version 1.0 for Windows®)
A. Jefferson, Jr. Palmer; Cynthia D. West; Bruce G. Hansen; Marshall S. White; Hal L. Mitchell
2002-01-01
The Pallet Costing System (PCS) is a computer-based, Microsoft Windows® application that computes the total and per-unit cost of manufacturing an order of wood pallets. Information about the manufacturing facility, along with the pallet-order requirements provided by the customer, is used in determining production cost. The major cost factors addressed by PCS...
Development of a High Resolution 3D Infant Stomach Model for Surgical Planning
NASA Astrophysics Data System (ADS)
Chaudry, Qaiser; Raza, S. Hussain; Lee, Jeonggyu; Xu, Yan; Wulkan, Mark; Wang, May D.
Medical surgical procedures have not changed much during the past century due to the lack of an accurate, low-cost workbench for testing any new improvement. Increasingly cheap and powerful computer technologies have made computer-based surgery planning and training feasible. In our work, we have developed an accurate 3D stomach model, which aims to improve the surgical procedure that treats infant pediatric and neonatal gastro-esophageal reflux disease (GERD). We generate the 3D infant stomach model based on in vivo computed tomography (CT) scans of an infant. CT is a widely used clinical imaging modality that is cheap, but with low spatial resolution. To improve the model accuracy, we use the high-resolution Visible Human Project (VHP) in model building. Next, we add soft muscle material properties to make the 3D model deformable. Then we use virtual reality techniques such as haptic devices to make the 3D stomach model deform upon touching force. This accurate 3D stomach model provides a workbench for testing new GERD treatment surgical procedures. It has the potential to reduce or eliminate the extensive cost associated with animal testing when improving any surgical procedure and, ultimately, to reduce the risk associated with infant GERD surgery.
NASA Technical Reports Server (NTRS)
Reddy, C. J.; Deshpande, M. D.; Cockrell, C. R.; Beck, F. B.
2004-01-01
The hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique has become popular over the last few years due to its flexibility in handling arbitrarily shaped objects with complex materials. One of the disadvantages of this technique, however, is the computational cost involved in obtaining solutions over a frequency range, as computations are repeated for each frequency. In this paper, the application of the Model Based Parameter Estimation (MBPE) method [1] with the hybrid FEM/MoM technique is presented for fast computation of the frequency response of cavity-backed apertures [2,3]. In MBPE, the electric field is expanded as a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is calculated at different frequencies, from which the frequency response is obtained.
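The core of MBPE is fitting a rational function to a response from local derivative information; a minimal numpy sketch of the underlying Padé construction from Taylor coefficients, demonstrated on exp(x) rather than a cavity-backed aperture. The [2/2] orders and test point are illustrative assumptions.

```python
import numpy as np
from math import factorial

def pade_from_taylor(c, L, M):
    """Numerator a (deg L) and denominator b (deg M, b[0] = 1) such that
    (sum a_k x^k) / (sum b_k x^k) matches the series c through order L+M."""
    # Solve sum_{j=1..M} b_j * c[L+k-j] = -c[L+k] for k = 1..M.
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    rhs = -np.array([c[L + k] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    a = np.array([sum(b[j] * c[k - j] for j in range(0, min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

# Taylor coefficients of exp(x); a [2/2] rational fit from five local
# derivatives stays accurate over a wide interval, mirroring how MBPE
# sweeps frequency from derivatives at a single expansion point.
c = [1.0 / factorial(k) for k in range(8)]
a, b = pade_from_taylor(c, 2, 2)
x = 0.5
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(approx, np.exp(x))      # ~ 1.64865 vs 1.64872
```

The payoff is the one named in the abstract: one expensive solve (plus derivatives) replaces a full re-solve at every frequency point of the sweep.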
48 CFR 42.709-4 - Computing interest.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Computing interest. 42.709... MANAGEMENT CONTRACT ADMINISTRATION AND AUDIT SERVICES Indirect Cost Rates 42.709-4 Computing interest. For 42.709-1(a)(1)(ii), compute interest on any paid portion of the disallowed cost as follows: (a) Consider...
48 CFR 42.709-4 - Computing interest.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Computing interest. 42.709... MANAGEMENT CONTRACT ADMINISTRATION AND AUDIT SERVICES Indirect Cost Rates 42.709-4 Computing interest. For 42.709-1(a)(1)(ii), compute interest on any paid portion of the disallowed cost as follows: (a) Consider...
48 CFR 42.709-4 - Computing interest.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Computing interest. 42.709... MANAGEMENT CONTRACT ADMINISTRATION AND AUDIT SERVICES Indirect Cost Rates 42.709-4 Computing interest. For 42.709-1(a)(1)(ii), compute interest on any paid portion of the disallowed cost as follows: (a) Consider...
1991-09-01
back of a paper or plastic card. A decoder reads the flux reversals and translates them into letters and numbers for processing by a computer. The best...read without decoding. In the past 3 or 4 years, OCR technology has been improved significantly due mostly to the availability of relatively low-cost...transaction readers, and hand-held readers. Page readers scan pages of text either directly from paper or from digitized images of documents stored in the
Economic models for management of resources in peer-to-peer and grid computing
NASA Astrophysics Data System (ADS)
Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David
2001-07-01
The accelerated development of Peer-to-Peer (P2P) and Grid computing has positioned them as promising next-generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments are a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The owners of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real-world market, there exist various economic models for setting the price of goods based on supply and demand and their value to the user. They include commodity markets, posted prices, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value, and the infrastructure necessary to realize them. In addition to the normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline- and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed, which contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
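A toy sketch of the kind of cost-optimizing, deadline-constrained choice a broker like Nimrod/G makes under a posted-price model; the resources, prices and throughputs below are entirely hypothetical.

```python
# Posted-price model: each resource advertises a fixed price and throughput.
# The broker picks the cheapest total cost among resources meeting the deadline.
resources = [
    {"name": "peer-A", "price_per_cpu_hour": 0.9, "jobs_per_hour": 120},
    {"name": "peer-B", "price_per_cpu_hour": 0.4, "jobs_per_hour": 40},
    {"name": "grid-C", "price_per_cpu_hour": 3.0, "jobs_per_hour": 300},
]

def choose(n_jobs, deadline_hours):
    feasible = [r for r in resources
                if n_jobs / r["jobs_per_hour"] <= deadline_hours]
    if not feasible:
        return None
    # Total cost = hours of use * posted price; minimize over feasible choices.
    return min(feasible,
               key=lambda r: (n_jobs / r["jobs_per_hour"]) * r["price_per_cpu_hour"])

print(choose(600, 6.0)["name"])    # loose deadline: cheaper peer-A suffices
print(choose(600, 2.5)["name"])    # tight deadline: forces the faster grid-C
```

Tightening the deadline shifts the optimum toward faster, more expensive resources, which is the essence of deadline- and cost-based scheduling.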
Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment
NASA Astrophysics Data System (ADS)
Nigg, D. W.; Wheeler, F. J.
1981-01-01
A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University, which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level are computed using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out of the top surface of the roof and that it is nearly as accurate as, and much less costly than, multi-dimensional techniques.
Parameterized reduced-order models using hyper-dual numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses, so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
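A minimal sketch of hyper-dual arithmetic, the ingredient the report uses for the parameterization derivatives: one function evaluation yields the value, both first derivatives, and the mixed second derivative, all exact to machine precision (no step-size error). Only addition and multiplication are implemented here; the test function is an assumption.

```python
class HyperDual:
    """Number a + b*e1 + c*e2 + d*e1*e2 with e1^2 = e2^2 = 0.
    Evaluating f(HyperDual(x, 1, 1, 0)) returns f(x), f'(x) twice, and f''(x)."""
    def __init__(self, f, e1=0.0, e2=0.0, e12=0.0):
        self.f, self.e1, self.e2, self.e12 = f, e1, e2, e12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f + o.f, self.e1 + o.e1,
                         self.e2 + o.e2, self.e12 + o.e12)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f * o.f,
                         self.f * o.e1 + self.e1 * o.f,
                         self.f * o.e2 + self.e2 * o.f,
                         self.f * o.e12 + self.e1 * o.e2
                         + self.e2 * o.e1 + self.e12 * o.f)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x * x + 2 * x   # f' = 9x^2 + 2, f'' = 18x

y = f(HyperDual(2.0, 1.0, 1.0, 0.0))
print(y.f, y.e1, y.e12)            # 28.0, 38.0, 36.0 -- exact, no step size
```

Because the nilpotent parts carry the derivatives through every operation, there is no subtractive cancellation, which is what makes the approach attractive for sensitivities of ROM matrices with respect to model parameters.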
Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.
Junker, André; Brenner, Karl-Heinz
2018-03-01
The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints, even on today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, allows large-size problems to be solved approximation-free. In the case of a modal representation, we achieve this by avoiding the computationally complex eigenmode decomposition, reducing the numerical cost from O(N^3) to O(N log N) and enabling the simulation of structures such as certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal
NASA Astrophysics Data System (ADS)
Satheeskumaran, S.; Sabrigiriraj, M.
2016-06-01
Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) due to their low computational requirements, but they exhibit a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible; with field-programmable gate arrays, pipelined architectures can be used to enhance system performance. The pipelined architecture can improve the operating efficiency of the adaptive filter and reduce power consumption. The technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
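A plain-numpy sketch of baseline LMS adaptive noise cancellation of the kind such architectures accelerate (not the paper's variable step-size delayed variant): a reference noise input is filtered to match the noise leaking into the primary channel, and the error output is the cleaned signal. The toy signals, noise path, and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 5000, 8, 0.01
t = np.arange(n)
ecg = np.sin(2 * np.pi * t / 180)                    # toy "ECG" component
noise_ref = rng.normal(size=n)                       # reference noise input
noise_in_ecg = np.convolve(noise_ref, [0.7, -0.3, 0.2])[:n]  # unknown causal path
primary = ecg + noise_in_ecg                         # contaminated signal

w = np.zeros(taps)
out = np.zeros(n)
for k in range(taps, n):
    x = noise_ref[k - taps + 1:k + 1][::-1]          # current reference taps
    e = primary[k] - w @ x                           # error = cleaned sample
    w += mu * e * x                                  # LMS weight update
    out[k] = e

print("residual noise power:", np.mean((out[1000:] - ecg[1000:]) ** 2))
```

The per-sample work is just two length-`taps` dot products, which is why LMS maps so naturally onto the pipelined FPGA structures the abstract discusses.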
Non-linear wave phenomena in Josephson elements for superconducting electronics
NASA Astrophysics Data System (ADS)
Christiansen, P. L.; Parmentier, R. D.; Skovgaard, O.
1985-07-01
The long and intermediate-length Josephson tunnel junction oscillator with overlap geometry, in linear and circular configurations, is investigated by computational solution of the perturbed sine-Gordon equation model and by experimental measurements; the model predicts the experimental results very well. Line oscillators as well as ring oscillators are treated. For long junctions, soliton perturbation methods are developed and turn out to be efficient prediction tools, also providing physical understanding of the dynamics of the oscillator. For intermediate-length junctions, expansions in terms of linear cavity modes reduce computational costs. The narrow linewidth of the electromagnetic radiation (typically 1 kHz for a line at 10 GHz) is demonstrated experimentally; corresponding computer simulations, requiring a relative accuracy of less than 10^-7, are performed on the CRAY-1-S supercomputer. The broadening of the linewidth due to external microradiation and internal thermal noise is determined.
Venko, Katja; Roy Choudhury, A; Novič, Marjana
2017-01-01
The structural and functional details of transmembrane proteins are vastly underexplored, mostly due to experimental difficulties regarding their solubility and stability. Currently, the majority of transmembrane protein structures are still unknown, and this presents a huge experimental and computational challenge. Nowadays, thanks to X-ray crystallography and NMR spectroscopy, over 3000 structures of membrane proteins have been solved, among them only a few hundred unique ones. Due to the vast biological and pharmaceutical interest in elucidating the structure and functional mechanisms of transmembrane proteins, several computational methods have been developed to overcome the experimental gap. If combined with experimental data, the computational information enables rapid, low-cost and successful predictions of the molecular structure of unsolved proteins. The reliability of the predictions depends on the availability and accuracy of the experimental data associated with structural information. In this review, the following methods are proposed for in silico structure elucidation: sequence-dependent predictions of transmembrane regions, predictions of transmembrane helix-helix interactions, helix arrangements in membrane models, and testing their stability with molecular dynamics simulations. We also demonstrate the usage of these computational methods by proposing a model for the molecular structure of the transmembrane protein bilitranslocase. Bilitranslocase is a bilirubin membrane transporter which shares similar tissue distribution and functional properties with some members of the Organic Anion Transporter family and is the only member classified in the Bilirubin Transporter Family. Given its unique properties, bilitranslocase is a potentially interesting drug target.
ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks, where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors so that accurate solutions can be computed, and it is also important to know whether all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework for aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely a 6-node toggle switch, an 11-node phage-lambda epigenetic circuit, and a 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
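A small sketch of the building block behind the macrostate decomposition mentioned above: the steady state of a truncated birth-death process, obtained directly from the null space of its generator matrix. The rates and truncation size are toy assumptions; this is not the ACME algorithm itself.

```python
import numpy as np

# Steady state of a truncated birth-death process; toy rates, not ACME.
N = 50                       # truncation (finite buffer) size
birth, death = 8.0, 1.0      # constant birth rate, per-copy death rate

# Generator matrix A: columns sum to zero; dP/dt = A P.
A = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        A[i + 1, i] += birth         # transition i -> i+1
        A[i, i] -= birth
    if i > 0:
        A[i - 1, i] += death * i     # transition i -> i-1
        A[i, i] -= death * i

# Steady state: null vector of A, normalized to a probability distribution.
w, v = np.linalg.eig(A)
p = np.real(v[:, np.argmin(np.abs(w))])
p = np.abs(p) / np.abs(p).sum()
print("mean copy number:", (np.arange(N + 1) * p).sum())  # ~ birth/death = 8
```

Choosing the buffer size N so that the discarded tail probability is below a tolerance is the truncation-error question the abstract's a priori method answers without trial solutions.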
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollingsworth, Jeff
2014-07-31
The purpose of this project was to develop tools and techniques to improve the ability of computational scientists to investigate and correct problems (bugs) in their programs. Specifically, the University of Maryland component of this project focused on the problems associated with the finite number of bits available in a computer to represent numeric values. In large-scale scientific computation, numbers are frequently added to and multiplied with each other billions of times, so even small errors due to the representation of numbers can accumulate into big errors. However, using too many bits to represent a number results in additional computation, memory, and energy costs, so it is critical to find the right size for numbers. This project focused on several aspects of this general problem. First, we developed a tool to look for cancellations: the catastrophic loss of precision that occurs when an addition cancels the leading bits of two numbers whose representations are nearly equal in magnitude and opposite in sign. Second, we developed a suite of tools to allow programmers to identify exactly how much precision is required for each operation in their program. This allows programmers to verify that enough precision is available and, more importantly, to find cases where extra precision could be eliminated to allow the program to use less memory, computer time, or energy. These tools use advanced binary modification techniques to allow the analysis of actual optimized code. The system, called Craft, has been applied to a number of benchmarks and real applications.
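A tiny sketch of the cancellation check at the heart of such a tool: compare the binary exponent of an addition's result against the larger operand's exponent; a big drop means leading bits were lost. This is an illustrative stand-in, not the Craft implementation.

```python
import math

def cancellation_bits(a, b):
    """Bits of leading precision lost when computing a + b in floating point."""
    result = a + b
    if result == 0.0 or a == 0.0 or b == 0.0:
        return 0
    big = max(math.frexp(abs(a))[1], math.frexp(abs(b))[1])  # operand exponents
    return max(0, big - math.frexp(abs(result))[1])          # exponent drop

# Catastrophic cancellation: the operands agree in their leading ~30 bits.
a = 1.0 + 2.0 ** -30
b = -1.0
print(cancellation_bits(a, b))      # 30 -- most of the result is noise
print(cancellation_bits(1.0, 1.0))  # 0  -- benign addition
```

An instrumentation tool applies a check like this at every floating-point add in the running binary and reports sites where the loss repeatedly exceeds a threshold.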
Converting laserdisc video to digital video: a demonstration project using brain animations.
Jao, C S; Hier, D B; Brint, S U
1995-01-01
Interactive laserdiscs are of limited value in large group learning situations due to the expense of establishing multiple workstations. The authors implemented an alternative to laserdisc video by using indexed digital video combined with an expert system. High-quality video was captured from a laserdisc player and combined with waveform audio into an audio-video-interleave (AVI) file format in the Microsoft Video-for-Windows environment (Microsoft Corp., Seattle, WA). With the use of an expert system, a knowledge-based computer program provided random access to these indexed AVI files. The program can be played on any multimedia computer without the need for laserdiscs. This system offers a high level of interactive video without the overhead and cost of a laserdisc player.
3D Product Development for Loose-Fitting Garments Based on Parametric Human Models
NASA Astrophysics Data System (ADS)
Krzywinski, S.; Siegmund, J.
2017-10-01
Researchers and commercial suppliers worldwide pursue the objective of a more transparent garment construction process that is computationally linked to a virtual body, in order to save development costs over the long term. The current aim is not to transfer the complete pattern-making step to a 3D design environment, but to work out basic constructions in 3D that provide excellent fit due to their accurate construction and morphological pattern grading (automatic change of sizes in 3D) with respect to sizes and body types. After a computer-aided derivation of 2D pattern parts, these can be made available to the industry as a basis on which to create more fashionable variations.
A bee-hive frequency selective surface for Wi-Max and GPS applications
NASA Astrophysics Data System (ADS)
Ray, A.; Kahar, M.; Sarkar, P. P.
2013-10-01
The paper presents investigations of a bee-hive cell, concentric-aperture frequency selective surface (FSS) tuned to pass 1.5 GHz for global positioning system applications and 3.5 GHz for worldwide interoperability for microwave access applications. The designed dual-band FSS screen is easy to fabricate with low-cost materials and exhibits low weight, with two broad transmission bands whose maximum recorded -10 dB transmission percentage bandwidth is 68.67%. Due to the symmetrical nature of the design, the FSS is insensitive to variation of the RF incidence angle under 60° rotations. A computationally efficient method for analysing this FSS is presented, and experimental investigation is performed using a standard microwave test bench. The computed and experimental results are observed to be in close agreement.
Asynchronous sampled-data approach for event-triggered systems
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Memon, Azhar M.
2017-11-01
While aperiodically triggered network control systems save a considerable amount of communication bandwidth, they also pose challenges, such as the coupling between control and event-condition design; the optimisation of available resources such as control, communication and computation power; and time delays due to computation and the communication network. With this motivation, the paper presents separate designs of the control and event-triggering mechanism, thus simplifying the overall analysis; an asynchronous linear quadratic Gaussian controller, which handles delays and the aperiodic nature of transmissions; and a novel event mechanism, which compares the cost of the aperiodic system against a reference periodic implementation. The proposed scheme is simulated on a linearised wind turbine model for pitch angle control, and the results show significant improvement over the periodic counterpart.
Lee, Soohyun; Seo, Chae Hwa; Alver, Burak Han; Lee, Sanghyuk; Park, Peter J
2015-09-03
RNA-seq has been widely used for genome-wide expression profiling. RNA-seq data typically consists of tens of millions of short sequenced reads from different transcripts. However, due to sequence similarity among genes and among isoforms, the source of a given read is often ambiguous. Existing approaches for estimating expression levels from RNA-seq reads tend to compromise between accuracy and computational cost. We introduce a new approach for quantifying transcript abundance from RNA-seq data. EMSAR (Estimation by Mappability-based Segmentation And Reclustering) groups reads according to the set of transcripts to which they are mapped and finds maximum likelihood estimates using a joint Poisson model for each optimal set of segments of transcripts. The method uses nearly all mapped reads, including those mapped to multiple genes. With an efficient transcriptome indexing based on modified suffix arrays, EMSAR minimizes the use of CPU time and memory while achieving accuracy comparable to the best existing methods. EMSAR is a method for quantifying transcripts from RNA-seq data with high accuracy and low computational cost. EMSAR is available at https://github.com/parklab/emsar.
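A toy sketch of the expectation-maximization idea commonly used for ambiguously mapped reads; EMSAR's actual estimator is a joint Poisson model over transcript segments, which this does not reproduce. The read compatibility sets below are invented for illustration.

```python
import numpy as np

# Each read maps to a set of transcripts; abundances must explain all reads.
n_tx = 3
reads = [{0}, {0, 1}, {0, 1}, {1, 2}, {2}, {2}, {0, 2}]  # compatibility sets

theta = np.full(n_tx, 1.0 / n_tx)           # initial abundance estimate
for _ in range(100):
    counts = np.zeros(n_tx)
    for compat in reads:                    # E-step: split each multi-mapped
        idx = list(compat)                  # read among its compatible
        w = theta[idx] / theta[idx].sum()   # transcripts, proportionally to
        counts[idx] += w                    # the current abundances
    theta = counts / counts.sum()           # M-step: renormalize the counts

print(np.round(theta, 3))                   # converged abundance estimates
```

Grouping reads by their compatibility set, as EMSAR does, means the E-step cost scales with the number of distinct sets rather than the number of reads, which is one source of its low computational cost.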
NASA Astrophysics Data System (ADS)
Ljungberg, Mathias P.
2017-12-01
A method is presented for describing vibrational effects in x-ray absorption spectroscopy and resonant inelastic x-ray scattering (RIXS) using a combination of the classical Franck-Condon (FC) approximation and classical trajectories run on the core-excited state. The formulation of RIXS is an extension of the semiclassical Kramers-Heisenberg formalism of Ljungberg et al. [Phys. Rev. B 82, 245115 (2010), 10.1103/PhysRevB.82.245115] to the resonant case, retaining approximately the same computational cost. To overcome difficulties with connecting the absorption and emission processes in RIXS, the classical FC approximation is used for the absorption, which is seen to work well provided that a zero-point-energy correction is included. In the case of core-excited states with dissociative character, the method is capable of closely reproducing the main features for one-dimensional test systems, compared to the quantum-mechanical formulation. Due to the good accuracy combined with the relatively low computational cost, the method has great potential of being used for complex systems with many degrees of freedom, such as liquids and surface adsorbates.
Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid
Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz
2017-01-01
Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems, such as low security overhead, tolerance of lossy networks, time-criticality and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high-frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing reductions of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead. PMID:28736582
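A minimal sketch of the one-way key chain that TESLA-style protocols, including inf-TESLA's dual chains, are built on: keys are generated by repeated hashing and disclosed in reverse order, so a receiver can verify any disclosed key against the initial commitment. Chain length and payload are illustrative assumptions.

```python
import hashlib
import hmac

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Sender: build the chain by repeated hashing; the most-hashed element K_0
# is the public commitment. Keys are used as K_1, K_2, ... and disclosed
# only after the packets they authenticated have been received.
n = 1000
chain = [H(b"random-seed")]
for _ in range(n):
    chain.append(H(chain[-1]))
chain.reverse()              # chain[0] = K_0 (commitment), chain[i] = K_i
commitment = chain[0]

def verify_key(disclosed, i, commitment):
    """Receiver check: hashing K_i exactly i times must yield K_0."""
    k = disclosed
    for _ in range(i):
        k = H(k)
    return k == commitment

k_5 = chain[5]
assert verify_key(k_5, 5, commitment)
# Once K_5 is authentic, buffered packets MACed with K_5 can be checked.
tag = hmac.new(k_5, b"synchrophasor frame", hashlib.sha256).digest()
```

A single chain eventually runs out or forces long verification walks; maintaining two offset chains, as in the Dual Offset Key Chains mechanism, is how inf-TESLA keeps streams of unbounded length cheap to verify.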
Methodology for cost analysis of film-based and filmless portable chest systems
NASA Astrophysics Data System (ADS)
Melson, David L.; Gauvain, Karen M.; Beardslee, Brian M.; Kraitsik, Michael J.; Burton, Larry; Blaine, G. James; Brink, Gary S.
1996-05-01
Many studies analyzing the costs of film-based and filmless radiology have focused on multi- modality, hospital-wide solutions. Yet due to the enormous cost of converting an entire large radiology department or hospital to a filmless environment all at once, institutions often choose to eliminate film one area at a time. Narrowing the focus of cost-analysis may be useful in making such decisions. This presentation will outline a methodology for analyzing the cost per exam of film-based and filmless solutions for providing portable chest exams to Intensive Care Units (ICUs). The methodology, unlike most in the literature, is based on parallel data collection from existing filmless and film-based ICUs, and is currently being utilized at our institution. Direct costs, taken from the perspective of the hospital, for portable computed radiography chest exams in one filmless and two film-based ICUs are identified. The major cost components are labor, equipment, materials, and storage. Methods for gathering and analyzing each of the cost components are discussed, including FTE-based and time-based labor analysis, incorporation of equipment depreciation, lease, and maintenance costs, and estimation of materials costs. Extrapolation of data from three ICUs to model hypothetical, hospital-wide film-based and filmless ICU imaging systems is described. Performance of sensitivity analysis on the filmless model to assess the impact of anticipated reductions in specific labor, equipment, and archiving costs is detailed. A number of indirect costs, which are not explicitly included in the analysis, are identified and discussed.
Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application
NASA Astrophysics Data System (ADS)
Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.
2013-12-01
The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni time-series analysis of aerosol absorption optical depth (388 nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and the local system to avoid data transfer delays. Time series of 3, 6, 12, and 24 months were analyzed on both the Cloud and the local system, and the processing times for the analyses were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of computing, storage, data requests, and data transfer in/out. The Cloud computing cost is calculated based on an hourly rate, and the storage cost is calculated based on a rate per Gigabyte per month. Incoming data transfer is free, while outgoing data transfer is charged per Gigabyte. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating costs. The results showed that the Cloud platform had 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
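The fee structure described above (per-hour compute, per-GB-month storage, free ingress, per-GB egress) reduces to a short cost function. The sketch below mirrors that structure with placeholder rates; the numbers are assumptions for illustration, not actual Amazon prices.

```python
def monthly_cloud_cost(instance_hours, storage_gb, egress_gb,
                       hourly_rate=0.10, storage_rate=0.03, egress_rate=0.09):
    """Sum the billable components described in the abstract.
    All rates are illustrative placeholders, not actual AWS prices."""
    compute = instance_hours * hourly_rate   # billed per instance-hour
    storage = storage_gb * storage_rate      # billed per GB-month
    transfer = egress_gb * egress_rate       # inbound transfer is free
    return compute + storage + transfer

# example: one instance running all month, 500 GB stored, 50 GB served out
print(monthly_cloud_cost(instance_hours=730, storage_gb=500, egress_gb=50))
```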
Algorithm For Optimal Control Of Large Structures
NASA Technical Reports Server (NTRS)
Salama, Moktar A.; Garba, John A.; Utku, Senol
1989-01-01
Cost of computation appears competitive with other methods. Problem to compute optimal control of forced response of structure with n degrees of freedom identified in terms of smaller number, r, of vibrational modes. Article begins with Hamilton-Jacobi formulation of mechanics and use of quadratic cost functional. Complexity reduced by alternative approach in which quadratic cost functional expressed in terms of control variables only. Leads to iterative solution of second-order time-integral matrix Volterra equation of second kind containing optimal control vector. Cost of algorithm, measured in terms of number of computations required, is of order of, or less than, cost of prior algorithms applied to similar problems.
Periodic component analysis as a spatial filter for SSVEP-based brain-computer interface.
Kiran Kumar, G R; Reddy, M Ramasubba
2018-06-08
Traditional spatial filters used for steady-state visual evoked potential (SSVEP) extraction, such as minimum energy combination (MEC), require the estimation of the background electroencephalogram (EEG) noise components. Even though this leads to improved performance in low signal-to-noise ratio (SNR) conditions, it makes such algorithms slow compared to standard detection methods like canonical correlation analysis (CCA) due to the additional computational cost. In this paper, periodic component analysis (πCA) is presented as an alternative spatial filtering approach that extracts the SSVEP component effectively without extensive modelling of the noise. πCA can separate out components corresponding to a given frequency of interest from the background EEG by capturing the temporal information, and does not generalize SSVEP based on rigid templates. Data from ten test subjects were used to evaluate the proposed method, and the results demonstrate that periodic component analysis acts as a reliable spatial filter for SSVEP extraction. Statistical tests were performed to validate the results. The experiments show that πCA provides a significant improvement in accuracy over standard CCA in low SNR conditions, and detection accuracy on par with that of MEC at a lower computational cost. Hence πCA is a reliable and efficient alternative detection algorithm for SSVEP-based brain-computer interfaces (BCI). Copyright © 2018. Published by Elsevier B.V.
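One common formulation of πCA reduces to a generalized symmetric eigenproblem: find the spatial filter that minimizes the power of x(t+P) − x(t) relative to the total signal power, where P is the stimulus period in samples. The sketch below follows that assumed formulation with illustrative names; it is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def pica_filter(X: np.ndarray, period: int) -> np.ndarray:
    """Periodic component analysis, one common formulation: minimize the
    power of x(t+P) - x(t) relative to total power via a generalized
    eigenproblem. X: channels x samples EEG block; period: samples."""
    X = X - X.mean(axis=1, keepdims=True)
    D = X[:, period:] - X[:, :-period]      # deviation from P-periodicity
    Cd = D @ D.T / D.shape[1]               # lag-difference covariance
    C0 = X @ X.T / X.shape[1]               # ordinary covariance
    evals, evecs = eigh(Cd, C0)             # generalized symmetric eigenproblem
    return evecs[:, 0]                      # smallest ratio = most periodic

# usage sketch: project the EEG onto the most periodic component
# y = pica_filter(eeg_block, period_samples) @ eeg_block
```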
Efficient scatter model for simulation of ultrasound images from computed tomography data
NASA Astrophysics Data System (ADS)
D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.
2015-12-01
Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is a growing interest in the use of this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run either on notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. This simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering-map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe some quality and performance metrics to validate these results, where a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state-of-the-art, showing negligible differences in distribution.
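As a rough illustration of the simplified model named above (multiplicative noise followed by PSF convolution), the sketch below uses a Rayleigh-distributed multiplicative field and a separable Gaussian as a stand-in PSF; both choices are assumptions for illustration, not the paper's tailored PSFs.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scatter_map(echogenicity: np.ndarray, sigma_psf=(1.0, 3.0), seed=0):
    """Simplified speckle model in the spirit of the abstract:
    multiplicative random noise shaped by convolution with a point
    spread function (here a Gaussian PSF as an assumed stand-in)."""
    rng = np.random.default_rng(seed)
    noise = rng.rayleigh(scale=1.0, size=echogenicity.shape)  # multiplicative field
    speckle = echogenicity * noise
    return gaussian_filter(speckle, sigma=sigma_psf)          # PSF convolution
```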
Performance limits and trade-offs in entropy-driven biochemical computers.
Chu, Dominique
2018-04-14
It is now widely accepted that biochemical reaction networks can perform computations. Examples are kinetic proofreading, gene regulation, and signalling networks. For many of these systems it was found that their computational performance is limited by a trade-off between the metabolic cost, the speed and the accuracy of the computation. In order to gain insight into the origins of these trade-offs, we consider entropy-driven computers as a model of biochemical computation. Using tools from stochastic thermodynamics, we show that entropy-driven computation is subject to a trade-off between accuracy and metabolic cost, but does not involve time trade-offs. Time trade-offs appear when it is taken into account that the result of the computation needs to be measured in order to be known. We argue that this measurement process, although usually ignored, is a major contributor to the cost of biochemical computation. Copyright © 2018 Elsevier Ltd. All rights reserved.
Application of H-matrices method to the calculation of the stress field in a viscoelastic medium
NASA Astrophysics Data System (ADS)
Ohtani, M.; Hirahara, K.
2017-12-01
In SW Japan, the Philippine Sea plate subducts from the south, and large earthquakes of around M (magnitude) 8 repeatedly occur at the plate boundary along the Nankai Trough; these are called the Nankai/Tonankai earthquakes. Near the rupture area of these earthquakes, active volcanoes such as Sakurajima are aligned in the Kyushu region of SW Japan. Volcanoes such as Mt. Fuji are also distributed in the Tokai-Kanto region of SE Japan. The eruption of Mt. Fuji in 1707, called the Hoei eruption, occurred 49 days after one of the series of Nankai/Tonankai earthquakes, the 1707 Hoei earthquake (M8.4). This suggests that the stress field due to an earthquake sometimes helps volcanoes to erupt. When we consider the stress change due to an earthquake, the effect of viscoelastic deformation of the crust is important. FEM is commonly used for modeling such inelastic effects. However, it requires a high computational cost of O(N³), where N is the number of discretized cells of the inelastic medium. Recently, a new method based on BIEM was proposed by Barbot and Fialko (2010). In their method, calculation of the stress field due to the inelastic strain is replaced by solving the inhomogeneous Navier's equation with equivalent body forces representing the inelastic strain. Then, using the stress-strain Green's function in an elastic medium, we can take the inelastic effect into account. In this study, we employ their method to evaluate the stress change at the active volcanoes around the Nankai/Tonankai earthquakes. Their method requires a computational cost and memory storage of O(N²). We reduce the computational amount and memory by applying the fast H-matrices method. With the H-matrices method, a dense matrix is divided into a hierarchical structure of submatrices, and each submatrix is approximated by a low-rank matrix. When we divide the viscoelastic medium into N = 8,640 or 69,120 uniform cuboid cells and apply the H-matrices method, the required storage for the stress-strain Green's function matrices is reduced to 0.17 or 0.05 times that of the uncompressed original matrices, with sufficient accuracy. Using this method, we show the time development of the stress change at the volcanoes around the Nankai/Tonankai earthquakes, assuming a simple viscous structure.
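The storage savings quoted above come from approximating admissible (far-field) sub-blocks of the dense Green's-function matrix by low-rank factors. A minimal sketch of that idea follows, using a truncated SVD as the compressor (H-matrix codes typically use cheaper schemes such as adaptive cross approximation); names and tolerances are illustrative.

```python
import numpy as np

def compress_block(block: np.ndarray, tol: float = 1e-6):
    """Approximate one far-field sub-block of a dense Green's-function
    matrix by a rank-k outer product, keeping singular values above
    tol * s[0]. Storage drops from m*n entries to k*(m+n)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k], Vt[:k, :]   # factors of the rank-k approximation

# applying the compressed block to a strain vector x costs O(k*(m+n)):
# Uk, Vk = compress_block(block); y = Uk @ (Vk @ x)
```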
NASA Astrophysics Data System (ADS)
Salvati, Paola; Bianchi, Cinzia; Hussin, Haydar; Guzzetti, Fausto
2013-04-01
Landslide and flood events in Italy cause widespread and severe damage to buildings and infrastructure, and frequently result in the loss of human life. Cost estimates of past natural disasters generally refer to the amount of public money used for the restoration of the direct damage, and most commonly do not account for all disaster impacts. Other cost components, including indirect losses, are difficult to quantify and, among these, the cost of human lives. The value of a specific human life can be identified with the value of a statistical life (VSL), defined as the value that an individual places on a marginal change in their likelihood of death. This is different from the value of an actual life. Based on information on fatal car accidents in Italy, we evaluate the cost that society suffers for the loss of life due to landslide and flood events. Using a catalogue of fatal landslide and flood events, for which information about the gender and age of the fatalities is known, we determine the cost that society suffers for the loss of their lives. For this purpose, we calculate the economic value in terms of the total income that the working-age population involved in the fatal events would have earned over the course of their lives. For the computation, we use the per-capita income, calculated as the ratio between GDP and population in Italy for each year since 1980. Problems occur for children and retired people, whom we decided not to include in our estimates.
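The computation described, summing per-capita income over the years each working-age victim would have worked, can be stated compactly. The sketch below assumes a working-age window of 18 to a retirement age of 65; both bounds are illustrative assumptions, not values from the paper.

```python
def societal_loss(fatality_ages, income_per_year, retirement_age=65):
    """Total income the working-age victims would have earned until
    retirement. fatality_ages: ages at death; children and retirees
    are excluded, as in the abstract; the 18/65 bounds are assumed."""
    return sum(income_per_year * (retirement_age - age)
               for age in fatality_ages
               if 18 <= age < retirement_age)   # working-age only

# e.g., three fatalities aged 30, 45 and 70 at a per-capita income of 25,000:
# societal_loss([30, 45, 70], 25_000) -> only the first two contribute
```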
Yakami, Masahiro; Ishizu, Koichi; Kubo, Takeshi; Okada, Tomohisa; Togashi, Kaori
2011-04-01
Thin-slice CT data, useful for clinical diagnosis and research, is now widely available but is typically discarded in many institutions after a short period of time due to data storage capacity limitations. We designed and built a low-cost, high-capacity Digital Imaging and Communications in Medicine (DICOM) storage system able to store thin-slice image data for years, using off-the-shelf consumer hardware components such as a Macintosh computer, a Windows PC, and network-attached storage units. "Ordinary" hierarchical file systems, instead of a centralized data management system such as a relational database, were adopted to manage patient DICOM files by arranging them in directories, enabling quick and easy access to the DICOM files of each study by following the directory trees with Windows Explorer via study date and patient ID. The software used for this system was the open-source OsiriX and additional programs we developed ourselves, both of which were freely available via the Internet. The initial cost of this system was about $3,600, with an incremental storage cost of about $900 per terabyte (TB). This system has been running since 7 Feb 2008, with the data stored increasing at a rate of about 1.3 TB per month. Total data stored was 21.3 TB on 23 June 2009. The maintenance workload was found to be about 30 to 60 min once every 2 weeks. In conclusion, this newly developed DICOM storage system is useful for research due to its cost-effectiveness, enormous capacity, high scalability, sufficient reliability, and easy data access.
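A directory tree keyed on study date and patient ID, as described above, can be populated with a few lines of Python. The sketch below assumes the pydicom package for header parsing; the paths and copy policy are illustrative, not the authors' programs.

```python
import shutil
from pathlib import Path
import pydicom

def file_dicom(src: Path, archive_root: Path) -> Path:
    """File one DICOM object into an <root>/<StudyDate>/<PatientID>/ tree,
    mirroring the study-date/patient-ID browsing described above."""
    ds = pydicom.dcmread(src, stop_before_pixels=True)  # header is enough
    dest_dir = archive_root / str(ds.StudyDate) / str(ds.PatientID)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)   # keep the original until the copy is verified
    return dest
```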
[Cost analysis for navigation in knee endoprosthetics].
Cerha, O; Kirschner, S; Günther, K-P; Lützner, J
2009-12-01
Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has been established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis according to the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of the computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year an additional operating time of 14 mins and a 10 year depreciation of the investment costs, the incremental expenses amount to
Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing
NASA Astrophysics Data System (ADS)
Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey
Recent advances in cloud computing have made it possible to access large-scale computational resources completely on demand in a rapid and efficient manner. When combined with high-fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
Health and cost impact of air pollution from biomass burning over the United States
NASA Astrophysics Data System (ADS)
Eslami, E.; Sadeghi, B.; Choi, Y.
2017-12-01
Effective assessment of the health and cost effects of air pollution associated with wildfire events is critical for supporting sustainable management and policy analysis to reduce environmental damage. Since biomass burning events result in higher ozone, PM2.5, and NOx concentrations in urban regions due to long-range transport, preliminary results indicated that wildfire events cause a considerable increase in incidence estimates and costs. This study aims to evaluate the health and cost impact of biomass burning events over the continental United States using combined air quality and health impact modeling. To meet this goal, comprehensive air quality modeling scenarios containing biomass burning emissions were run using the Community Multiscale Air Quality (CMAQ) modeling system from 2011 to 2014 with a spatial resolution of 12 km. The modeling period includes the fire seasons between April and October over the course of four years. Using modeled pollutant concentrations, the USEPA's GIS-based computer program Environmental Benefits Mapping and Analysis Program-Community Edition (BenMAP-CE) provides an inclusive picture of the health and cost impacts caused by changes in gaseous and particulate air pollution due to fire events. The basis of BenMAP-CE is a damage-function approach that estimates the health impact of an applied change in air quality by comparing a biomass burning scenario (one that includes wildfire events) with a baseline scenario (without biomass emissions). This approach considers several factors, including population, exposure to the pollutants, adverse health effects of a particular pollutant, and economic costs. Hence, this study shows how biomass burning across the U.S. influences people's health in different months, seasons, and regions. In addition, the cost impact of the wildfire events during the study period has been estimated at both national and regional levels. The results of this study demonstrate that BenMAP-CE can be successfully utilized to obtain the health and cost impacts of biomass burning events.
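Damage-function analyses of this kind commonly use a log-linear concentration-response form; the sketch below shows that form with placeholder inputs. The β coefficient and the baseline incidence rate are study-specific epidemiological inputs, assumed here purely for illustration.

```python
import math

def excess_cases(delta_conc, baseline_incidence, population, beta):
    """Log-linear concentration-response function of the form commonly
    used in BenMAP-style analyses: excess health events attributable to
    a concentration change delta_conc. beta and baseline_incidence are
    study-specific inputs, shown here as placeholders."""
    return baseline_incidence * population * (1.0 - math.exp(-beta * delta_conc))

# monetized impact = excess_cases * unit value (e.g., VSL or cost of illness)
```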
ERIC Educational Resources Information Center
Lintz, Larry M.; And Others
This study investigated the feasibility of a low cost computer-aided instruction/computer-managed instruction (CAI/CMI) system. Air Force instructors and training supervisors were surveyed to determine the potential payoffs of various CAI and CMI functions. Results indicated that a wide range of capabilities had potential for resident technical…
NASA Technical Reports Server (NTRS)
Paluzzi, Peter; Miller, Rosalind; Kurihara, West; Eskey, Megan
1998-01-01
Over the past several months, major industry vendors have made a business case for the network computer as a win-win solution toward lowering total cost of ownership. This report provides results from Phase I of the Ames Research Center network computer evaluation project. It identifies factors to be considered for determining cost of ownership; further, it examines where, when, and how network computer technology might fit in NASA's desktop computing architecture.
Computer programs for estimating civil aircraft economics
NASA Technical Reports Server (NTRS)
Maddalon, D. V.; Molloy, J. K.; Neubawer, M. J.
1980-01-01
Computer programs for calculating airline direct operating cost, indirect operating cost, and return on investment were developed to provide a means for determining commercial aircraft life cycle cost and economic performance. A representative wide body subsonic jet aircraft was evaluated to illustrate use of the programs.
Anzai, Yoshimi; Heilbrun, Marta E; Haas, Derek; Boi, Luca; Moshre, Kirk; Minoshima, Satoshi; Kaplan, Robert; Lee, Vivian S
2017-02-01
The lack of understanding of the real costs (not charges) of delivering healthcare services poses tremendous challenges in the containment of healthcare costs. In this study, we applied an established cost accounting method, time-driven activity-based costing (TDABC), to assess the costs of performing an abdomen and pelvis computed tomography (AP CT) in an academic radiology department and identified opportunities for improved efficiency in the delivery of this service. The study was exempt from institutional review board approval. TDABC utilizes process mapping tools from industrial engineering and activity-based costing. The process map outlines every step of discrete activity and the duration of use of clinical resources, personnel, and equipment. By multiplying the cost per unit of capacity by the required task time for each step, and summing each component cost, the overall cost of an AP CT is determined for patients in three settings: inpatient (IP), outpatient (OP), and emergency department (ED). The component costs to deliver an AP CT study were as follows: radiologist interpretation, 40.1%; other personnel (scheduler, technologist, nurse, pharmacist, and transporter), 39.6%; materials, 13.9%; and space and equipment, 6.4%. The cost of performing CT was 13% higher for ED patients and 31% higher for inpatients, as compared to that for outpatients. The difference in cost was mostly due to non-radiologist personnel costs. Approximately 80% of the direct costs of AP CT to the academic medical center are related to labor. Potential opportunities to reduce the costs include increasing the efficiency of CT utilization, substituting lower-cost resources when appropriate, and streamlining the ordering system to clarify medical necessity and clinical indications. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
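TDABC as described reduces to multiplying each step's task time by the per-minute cost of the capacity it consumes and summing. The sketch below makes that explicit; all resource names, minutes, and rates are invented for illustration and are not the study's figures.

```python
def tdabc_cost(process_map, capacity_cost_rates):
    """Time-driven activity-based costing: sum over the process map of
    task minutes times the per-minute cost of the resource used."""
    return sum(minutes * capacity_cost_rates[resource]
               for resource, minutes in process_map)

# illustrative process map for an outpatient AP CT (all numbers hypothetical)
steps = [("scheduler", 5), ("technologist", 20), ("scanner", 15),
         ("radiologist", 10)]
rates = {"scheduler": 0.5, "technologist": 0.9, "scanner": 1.6,
         "radiologist": 4.0}   # dollars per minute of capacity, hypothetical
total = tdabc_cost(steps, rates)
```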
10 CFR Appendix I to Part 504 - Procedures for the Computation of the Real Cost of Capital
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Procedures for the Computation of the Real Cost of Capital I Appendix I to Part 504 Energy DEPARTMENT OF ENERGY (CONTINUED) ALTERNATE FUELS EXISTING POWERPLANTS Pt. 504, App. I Appendix I to Part 504—Procedures for the Computation of the Real Cost of Capital (a) The firm's real after-tax weighted average...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-03-22
A grid-connected Integrated Community Energy System (ICES) with a coal-burning power plant located on the University of Minnesota campus is planned. The cost benefit analysis performed for this ICES, the cost accounting methods used, and a computer simulation of the operation of the power plant are described. (LCL)
[Process-oriented cost calculation in interventional radiology. A case study].
Mahnken, A H; Bruners, P; Günther, R W; Rasche, C
2012-01-01
Currently used costing methods such as cost centre accounting do not sufficiently reflect the process-based resource utilization in medicine. The goal of this study was to establish a process-oriented cost assessment of percutaneous radiofrequency (RF) ablation of liver and lung metastases. In each of 15 patients a detailed task analysis of the primary process of hepatic and pulmonary RF ablation was performed. Based on these data a dedicated cost calculation model was developed for each primary process. The costs of each process were computed and compared with the revenue for in-patients according to the German diagnosis-related groups (DRG) system 2010. The RF ablation of liver metastases in patients without relevant comorbidities and a low patient complexity level results in a loss of EUR 588.44, whereas the treatment of patients with a higher complexity level yields an acceptable profit. The treatment of pulmonary metastases is profitable even in cases of additional expenses due to complications. Process-oriented costing provides relevant information that is needed for understanding the economic impact of treatment decisions. It is well suited as a starting point for economically driven process optimization and reengineering. Under the terms of the German DRG 2010 system percutaneous RF ablation of lung metastases is economically reasonable, while RF ablation of liver metastases in cases of low patient complexity levels does not cover the costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Joseph
2017-04-20
Mapping permeability distributions in geothermal reservoirs is essential for reducing the cost of geothermal development. To avoid the cost and sampling bias of measuring permeability directly through drilling, we require remote methods of imaging permeability, such as geophysics. Electrical resistivity (or its inverse, conductivity) is one of the most sensitive geophysical properties known to reflect long-range fluid interconnection and thus the likelihood of permeability. Perhaps the most widely applied geophysical method for imaging subsurface resistivity is magnetotellurics (MT), due to its relatively great penetration depth. A primary goal of this project is to confirm, through ground truthing at existing geothermal systems, that MT resistivity structure interpreted integratively is capable of revealing permeable fluid pathways into geothermal systems.
Efficient Control Law Simulation for Multiple Mobile Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driessen, B.J.; Feddema, J.T.; Kotulski, J.D.
1998-10-06
In this paper we consider the problem of simulating simple control laws involving large numbers of mobile robots. Such simulation can be computationally prohibitive if the number of robots is large enough, say 1 million, due to the O(N²) cost of each time step. This work therefore uses hierarchical tree-based methods for calculating the control law. These tree-based approaches have O(N log N) cost per time step, thus allowing for efficient simulation involving a large number of robots. For concreteness, a decentralized control law which involves only the distance and bearing to the closest neighbor robot will be considered. The time to calculate the control law for each robot at each time step is demonstrated to be O(log N).
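The same O(N log N) behavior can be reproduced with an off-the-shelf spatial index: a k-d tree answers each robot's closest-neighbor query in O(log N). The sketch below uses SciPy's cKDTree; the repulsive velocity law is an illustrative stand-in for the paper's distance-and-bearing control law, not its actual form.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_control(positions: np.ndarray, gain: float = 0.1):
    """O(N log N) evaluation of a control law that depends only on each
    robot's closest neighbor, using a k-d tree instead of an O(N^2)
    pairwise scan. positions: (N, 2) array; the law itself is illustrative."""
    tree = cKDTree(positions)
    dists, idx = tree.query(positions, k=2)        # k=2: self plus nearest other
    nearest = positions[idx[:, 1]]
    away = positions - nearest                     # vector away from the neighbor
    dist = np.maximum(dists[:, 1:2], 1e-9)
    return gain * away / dist                      # unit-direction repulsive velocity
```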
Location estimation in wireless sensor networks using spring-relaxation technique.
Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M
2010-01-01
Accurate and low-cost autonomous self-localization is a critical requirement for various applications of a large-scale distributed wireless sensor network (WSN). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a certain location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
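In spring relaxation, each ranged link is treated as a spring whose rest length is the measured distance; nodes are iteratively nudged along the net spring force until the layout settles. A minimal centralized sketch under assumed 2-D coordinates follows; all names, the step size, and the iteration count are illustrative.

```python
import numpy as np

def spring_relaxation(pos, anchors, edges, measured, iters=200, step=0.1):
    """Iteratively move nodes as if each range measurement were a spring
    at its rest length. pos: (N, 2) initial guesses; anchors: indices of
    nodes with known positions; edges: (i, j) neighbor pairs; measured:
    dict mapping (i, j) -> ranged distance. Names are illustrative."""
    pos = pos.astype(float).copy()
    for _ in range(iters):
        force = np.zeros_like(pos)
        for (i, j) in edges:
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-12
            stretch = dist - measured[(i, j)]   # spring extension (+) or compression (-)
            f = step * stretch * d / dist       # pull or push along the edge
            force[i] += f
            force[j] -= f
        force[anchors] = 0.0                    # known-position nodes stay fixed
        pos += force
    return pos
```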
A Cost-Benefit Study of Doing Astrophysics On The Cloud: Production of Image Mosaics
NASA Astrophysics Data System (ADS)
Berriman, G. B.; Good, J. C.; Deelman, E.; Singh, G.; Livny, M.
2009-09-01
Utility grids such as the Amazon EC2 and Amazon S3 clouds offer computational and storage resources that can be used on demand for a fee by compute- and data-intensive applications. The cost of running an application on such a cloud depends on the compute, storage and communication resources it will provision and consume. Different execution plans of the same application may result in significantly different costs. We studied via simulation the cost-performance trade-offs of different execution and resource provisioning plans by creating, under the Amazon cloud fee structure, mosaics with the Montage image mosaic engine, a widely used data- and compute-intensive application. Specifically, we studied the cost of building mosaics of 2MASS data that have sizes of 1, 2 and 4 square degrees, and a 2MASS all-sky mosaic. These are examples of mosaics commonly generated by astronomers. We also studied these trade-offs in the context of the storage and communication fees of Amazon S3 when used for long-term application data archiving. Our results show that by provisioning the right amount of storage and compute resources, cost can be significantly reduced with no significant impact on application performance.
Advanced vehicles: Costs, energy use, and macroeconomic impacts
NASA Astrophysics Data System (ADS)
Wang, Guihua
Advanced vehicles and alternative fuels could play an important role in reducing oil use and changing the structure of the economy. We developed the Costs for Advanced Vehicles and Energy (CAVE) model to investigate a vehicle portfolio scenario in California during 2010-2030. We then employed a computable general equilibrium model to estimate the macroeconomic impacts of the advanced vehicle scenario on the economy of California. Results indicate that, due to slow fleet turnover, conventional vehicles are expected to continue to dominate the on-road fleet and gasoline will remain the major transportation fuel over the next two decades. However, alternative fuels could play an increasingly important role in gasoline displacement. Advanced vehicle costs are expected to decrease dramatically with production volume and technological progress; e.g., incremental costs for fuel cell vehicles and hydrogen could break even with gasoline savings in 2028. Overall, the vehicle portfolio scenario is estimated to have a slightly negative influence on California's economy, because advanced vehicles are very costly and, therefore, the resulting gasoline savings generally cannot offset the high incremental expenditure on vehicles and alternative fuels. Sensitivity analysis shows that an increase in gasoline price or a drop in alternative fuel prices could offset a portion of the negative impact.
NASA Technical Reports Server (NTRS)
Janz, R. F.
1974-01-01
The systems cost/performance model was implemented as a digital computer program to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses. The computer program is described along with the operating environment in which it was written and checked, the program specifications such as discussions of logic and computational flow, the different subsystem models involved in the design of the spacecraft, and routines involved in the nondesign area such as costing and scheduling of the design. Preliminary results for the DSCS-II design are also included.
Cost Optimization Model for Business Applications in Virtualized Grid Environments
NASA Astrophysics Data System (ADS)
Strebel, Jörg
The advent of Grid computing gives enterprises an ever-increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed-integer optimization model which can be used to minimize the IT expenditures of an enterprise and help in deciding when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.
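To make the mixed-integer idea concrete, here is a toy sourcing model in Python with the PuLP library: one binary variable per application decides between in-house hosting and the Grid, subject to an in-house capacity limit. All cost figures and the constraint are invented for illustration and are far simpler than the paper's model.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

# Toy sourcing decision: run each application in-house (fixed cost) or on
# the Grid (usage-based cost). All figures are hypothetical.
apps = ["crm", "erp", "bi"]
inhouse_cost = {"crm": 120, "erp": 300, "bi": 90}   # cost per period, invented
grid_cost = {"crm": 150, "erp": 220, "bi": 70}      # cost per period, invented
capacity = 2                                        # in-house slots available

prob = LpProblem("it_sourcing", LpMinimize)
x = {a: LpVariable(f"inhouse_{a}", cat="Binary") for a in apps}
prob += lpSum(inhouse_cost[a] * x[a] + grid_cost[a] * (1 - x[a]) for a in apps)
prob += lpSum(x[a] for a in apps) <= capacity       # limited on-premises capacity
prob.solve()
plan = {a: ("in-house" if x[a].value() == 1 else "grid") for a in apps}
```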
NASA Astrophysics Data System (ADS)
Lee, Ching Hua; Gan, Chee Kwan
2017-07-01
Phonon-mediated thermal conductivity, which is of great technological relevance, arises fundamentally from anharmonic scattering in interatomic potentials. Despite its prevalence, accurate first-principles calculations of thermal conductivity remain challenging, primarily due to the high computational cost of anharmonic interatomic force constant (IFC) calculations. Meanwhile, the related anharmonic phenomenon of thermal expansion is much more tractable, being computable from the Grüneisen parameters associated with phonon frequency shifts due to crystal deformations. In this work, we propose an approach for computing the largest cubic IFCs from the Grüneisen parameter data. This allows an approximate determination of the thermal conductivity via a much less expensive route. The key insight is that although the Grüneisen parameters cannot possibly contain all the information on the cubic IFCs, being derivable from spatially uniform deformations, they can still unambiguously and accurately determine the largest and most physically relevant ones. By fitting the anisotropic Grüneisen parameter data along judiciously designed deformations, we can deduce (i.e., reverse-engineer) the dominant cubic IFCs and estimate three-phonon scattering amplitudes. We illustrate our approach by explicitly computing the largest cubic IFCs and thermal conductivity of graphene, especially for its out-of-plane (flexural) modes that exhibit anomalously large anharmonic shifts and thermal conductivity contributions. Our calculations on graphene not only exhibit reasonable agreement with established density-functional theory results, but they also present a pedagogical opportunity for introducing an elegant analytic treatment of the Grüneisen parameters of generic two-band models. Our approach can be readily extended to more complicated crystalline materials with nontrivial anharmonic lattice effects.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-03
... anchors, both as centers for digital literacy and as hubs for access to public computers. While their... expansion of computer labs, and facilitated deployment of new educational applications that would not have... computer fees to help defray the cost of computers or training fees to help cover the cost of training...
Re-Innovating Recycling for Turbulent Boundary Layer Simulations
NASA Astrophysics Data System (ADS)
Ruan, Joseph; Blanquart, Guillaume
2017-11-01
Historically, turbulent boundary layers along a flat plate have been expensive to simulate numerically, in part due to the difficulty of initializing the inflow with "realistic" turbulence, but also due to boundary layer growth. The former has been resolved in several ways, primarily by dedicating a region of at least 10 boundary layer thicknesses in width to rescale and recycle the flow, or by extending the domain far enough downstream to allow a laminar flow to develop into turbulence. Both of these methods are relatively costly. We propose a new method to remove the need for an inflow region, thus reducing computational costs significantly. Leveraging the scale similarity of the mean flow profiles, we introduce a coordinate transformation so that the boundary layer problem can be solved as a parallel flow problem with additional source terms. The solutions in the new coordinate system are statistically homogeneous in the downstream direction, and so the problem can be solved with periodic boundary conditions. The present study shows the stability of this method, its implementation, and its validation for a few laminar and turbulent boundary layer cases.
Johnson, Michelle J; Ramachandran, Brinda; Paranjape, Ruta P; Kosasih, Judith B
2006-01-01
Rising healthcare costs combined with an increase in the number of people living with disabilities due to stroke have created a need for affordable stroke therapy that can be administered in both home and clinical environments. Studies show that robot- and computer-assisted devices are promising tools for rehabilitating persons with impairments and disabilities due to stroke. Studies have also shown that highly motivating therapy produces neuromotor relearning that aids the rehabilitative process. Combining these concepts, this paper discusses TheraDrive, a simple but novel robotic system for more motivating stroke therapy. We conducted two feasibility studies, which are discussed in this paper. Findings demonstrate the ability of the system to grade therapy and the sensitivity of its metrics to the level of motor function in the impaired arm. In addition, findings confirm the ability of the system to administer fun therapy leading to improved motor performance on steering tasks. However, further work is needed to improve the system's ability to increase motor function in the impaired arm.
IUWare and Computing Tools: Indiana University's Approach to Low-Cost Software.
ERIC Educational Resources Information Center
Sheehan, Mark C.; Williams, James G.
1987-01-01
Describes strategies for providing low-cost microcomputer-based software for classroom use on college campuses. Highlights include descriptions of the software (IUWare and Computing Tools); computing center support; license policies; documentation; promotion; distribution; staff, faculty, and user training; problems; and future plans. (LRW)
The Next Generation of Personal Computers.
ERIC Educational Resources Information Center
Crecine, John P.
1986-01-01
Discusses factors converging to create high-capacity, low-cost nature of next generation of microcomputers: a coherent vision of what graphics workstation and future computing environment should be like; hardware developments leading to greater storage capacity at lower costs; and development of software and expertise to exploit computing power…
An Autonomous Underwater Recorder Based on a Single Board Computer
Caldas-Morgan, Manuel; Alvarez-Rosario, Alexander; Rodrigues Padovese, Linilson
2015-01-01
As industrial activities continue to grow on the Brazilian coast, underwater sound measurements are becoming of great scientific importance, as they are essential to evaluate the impact of these activities on local ecosystems. In this context, the use of commercial underwater recorders is not always the most feasible alternative, due to their high cost and lack of flexibility. Design and construction of more affordable alternatives from scratch can become complex because it requires profound knowledge in areas such as electronics and low-level programming. With the aim of providing a solution, a successful model of a highly flexible, low-cost alternative to commercial recorders was built based on a Raspberry Pi single board computer. A properly working prototype was assembled and demonstrated adequate performance levels in all tested situations. The prototype was equipped with a power management module which was thoroughly evaluated. It is estimated that it will allow for great battery savings on long-term scheduled recordings. The underwater recording device was successfully deployed at selected locations along the Brazilian coast, where it adequately recorded animal and man-made acoustic events, among others. Although its power consumption may not be as efficient as that of commercial and/or micro-processed solutions, the advantage offered by the proposed device is its high customizability, lower development time and, inherently, its cost. PMID:26076479
Cut Costs with Thin Client Computing.
ERIC Educational Resources Information Center
Hartley, Patrick H.
2001-01-01
Discusses how school districts can considerably increase the number of administrative computers in their districts without a corresponding increase in costs by using the "Thin Client" component of the Total Cost of Ownership (TCO) model. TCO and Thin Client are described, including software and hardware components. An example of a…
12 CFR 1402.21 - Categories of requesters-fees.
Code of Federal Regulations, 2013 CFR
2013-01-01
... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...
12 CFR 1402.21 - Categories of requesters-fees.
Code of Federal Regulations, 2014 CFR
2014-01-01
... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...
12 CFR 1402.21 - Categories of requesters-fees.
Code of Federal Regulations, 2012 CFR
2012-01-01
... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...
12 CFR 1402.21 - Categories of requesters-fees.
Code of Federal Regulations, 2010 CFR
2010-01-01
... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...
12 CFR 1402.21 - Categories of requesters-fees.
Code of Federal Regulations, 2011 CFR
2011-01-01
... searches made by computer, the Farm Credit System Insurance Corporation will determine the hourly cost of... the cost of search (including the operator time and the cost of operating the computer to process a... 1402.21 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION RELEASING INFORMATION Fees for...
NASA Astrophysics Data System (ADS)
Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.
2013-09-01
Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers such as the Runge-Kutta family of numerical integrators, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup over traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of outperforming the state of practice in terms of computational cost and accuracy.
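The core loop, fitting the integrand in a Chebyshev basis at the sample nodes, integrating the fit analytically, and updating the path, can be sketched in a few lines for a scalar ODE. The code below is a pedagogical sketch built on NumPy's Chebyshev module, not the vector-matrix MCPI library described in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def mcpi(f, x0, t0, t1, n_nodes=64, iters=30):
    """Minimal Chebyshev-Picard iteration for a scalar ODE x' = f(t, x):
    repeatedly fit the integrand at Chebyshev nodes and integrate the
    fit analytically. Pedagogical sketch only."""
    tau = np.cos(np.pi * np.arange(n_nodes + 1) / n_nodes)   # CGL nodes in [-1, 1]
    t = 0.5 * (t1 - t0) * (tau + 1.0) + t0                   # map to [t0, t1]
    x = np.full_like(t, float(x0))                           # initial path guess
    for _ in range(iters):
        g = 0.5 * (t1 - t0) * f(t, x)       # chain rule from the time mapping
        coef = C.chebfit(tau, g, deg=n_nodes)
        integ = C.chebint(coef)             # analytic antiderivative
        x = x0 + C.chebval(tau, integ) - C.chebval(-1.0, integ)
    return t, x

# example: x' = -x, x(0) = 1 on [0, 2]; compare x against np.exp(-t)
t, x = mcpi(lambda t, x: -x, 1.0, 0.0, 2.0)
```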
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2014-05-01
Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
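At its core, the SDP step described above is a backward Bellman recursion over discretized storage states: for each state, minimize the immediate plus expected future cost over end-of-stage storages and inflow scenarios. The sketch below shows that tabular recursion in simplified form (one reservoir, fixed scenario probabilities); it is a stand-in for, and does not reproduce, the paper's GA-based sub-problem solver with head-dependent pumping costs.

```python
import numpy as np

def sdp_backward(storages, inflows, probs, immediate_cost, horizon):
    """Tabular backward recursion over discretized storage states.
    immediate_cost(s, s_next, q) returns the one-stage cost of moving
    from storage s to s_next under inflow q (infeasible -> np.inf).
    Simplified one-reservoir stand-in; all names are illustrative."""
    n = len(storages)
    future = np.zeros(n)                        # terminal cost = 0
    policy = []
    for _ in range(horizon):
        new_future = np.zeros(n)
        stage_policy = np.zeros(n, dtype=int)
        for i, s in enumerate(storages):
            # expected one-stage cost over inflow scenarios + future cost,
            # minimized over the end-of-stage storage choice
            exp_cost = [sum(p * immediate_cost(s, s2, q)
                            for p, q in zip(probs, inflows)) + future[j]
                        for j, s2 in enumerate(storages)]
            stage_policy[i] = int(np.argmin(exp_cost))
            new_future[i] = exp_cost[stage_policy[i]]
        future = new_future
        policy.append(stage_policy)
    return future, policy[::-1]                 # costs-to-go and stage policies
```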
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS
NASA Astrophysics Data System (ADS)
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.
2017-06-01
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with a varying number of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also an improvement in computational efficiency for the radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by tetrahedrons with far fewer meshes compared to a voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS.
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L; Bolch, Wesley E
2017-06-21
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with a varying number of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also an improvement in computational efficiency for the radiation transport. Due to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented by tetrahedrons with far fewer meshes compared to a voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
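Particle tracking through a tetrahedral mesh ultimately rests on geometric primitives such as deciding which tetrahedron contains a point. A barycentric-coordinate version of that test is sketched below as an illustration; it is not PHITS source code, and the decomposition-map acceleration described above is not reproduced.

```python
import numpy as np

def in_tetrahedron(p, v0, v1, v2, v3, eps=1e-12):
    """Barycentric point-in-tetrahedron test: express p in the basis of
    the three edge vectors from v0; inside iff all four barycentric
    coordinates are non-negative. Illustrative helper only."""
    T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))   # 3x3 edge matrix
    lam = np.linalg.solve(T, np.asarray(p, float) - v0)  # coords for v1..v3
    lam0 = 1.0 - lam.sum()                               # coordinate for v0
    return bool(np.all(lam >= -eps) and lam0 >= -eps)

# example: the centroid of any non-degenerate tetrahedron is inside it
# in_tetrahedron((v0 + v1 + v2 + v3) / 4, v0, v1, v2, v3) -> True
```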
3D simulations of early blood vessel formation
NASA Astrophysics Data System (ADS)
Cavalli, F.; Gamba, A.; Naldi, G.; Semplice, M.; Valdembri, D.; Serini, G.
2007-08-01
Blood vessel networks form by spontaneous aggregation of individual cells migrating toward vascularization sites (vasculogenesis). A successful theoretical model of two-dimensional experimental vasculogenesis has recently been proposed, showing the relevance of percolation concepts and of cell cross-talk (chemotactic autocrine loop) to the understanding of this self-aggregation process. Here we study the natural 3D extension of the computational model proposed earlier, which is relevant for the investigation of the genuinely three-dimensional process of vasculogenesis in vertebrate embryos. The computational model is based on a multidimensional Burgers equation coupled with a reaction-diffusion equation for a chemotactic factor and a mass conservation law. The numerical approximation of the computational model is obtained by high-order relaxed schemes. Space and time discretization are performed using TVD and IMEX schemes, respectively. Due to the computational costs of realistic simulations, we have implemented the numerical algorithm on a cluster for parallel computation. Starting from initial conditions mimicking the experimentally observed ones, numerical simulations produce network-like structures qualitatively similar to those observed in the early stages of in vivo vasculogenesis. We develop the computation of critical percolative indices as a robust measure of the network geometry as a first step towards the comparison of computational and experimental data.
Model Order Reduction Algorithm for Estimating the Absorption Spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Beeumen, Roel; Williams-Young, David B.; Kasper, Joseph M.
The ab initio description of the spectral interior of the absorption spectrum poses both a theoretical and computational challenge for modern electronic structure theory. Due to the often spectrally dense character of this domain in the quantum propagator's eigenspectrum for medium-to-large sized systems, traditional approaches based on the partial diagonalization of the propagator often encounter oscillatory and stagnating convergence. Electronic structure methods which solve the molecular response problem through the solution of spectrally shifted linear systems, such as the complex polarization propagator, offer an alternative approach which is agnostic to the underlying spectral density or domain location. This generality comes at a seemingly high computational cost associated with solving a large linear system for each spectral shift in some discretization of the spectral domain of interest. In this work, we present a novel, adaptive solution to this high computational overhead based on model order reduction techniques via interpolation. Model order reduction reduces the computational complexity of mathematical models and is ubiquitous in the simulation of dynamical systems and control theory. The efficiency and effectiveness of the proposed algorithm in the ab initio prediction of X-ray absorption spectra is demonstrated using a test set of challenging water clusters which are spectrally dense in the neighborhood of the oxygen K-edge. On the basis of a single, user-defined tolerance we automatically determine the order of the reduced models and approximate the absorption spectrum up to the given tolerance. We also illustrate that, for the systems studied, the automatically determined model order increases logarithmically with the problem dimension, compared to a linear increase of the number of eigenvalues within the energy window. Furthermore, we observed that the computational cost of the proposed algorithm only scales quadratically with respect to the problem dimension.
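The flavor of interpolation-based model order reduction can be conveyed in a few lines: solve the shifted systems exactly at a handful of training shifts, orthonormalize the solutions into a projection basis, then sweep the dense frequency grid in the small reduced space. The sketch below is a simplified stand-in under those assumptions, not the paper's adaptive algorithm.

```python
import numpy as np

def reduced_response(A, b, train_shifts, eval_shifts):
    """Interpolation-based model order reduction sketch: build a basis
    from exact solves of (A - s I) x = b at a few training shifts, then
    evaluate the transfer function b^H (A - s I)^(-1) b over a dense
    grid of shifts in the reduced space. Simplified stand-in."""
    n = A.shape[0]
    snapshots = [np.linalg.solve(A - s * np.eye(n), b) for s in train_shifts]
    Q, _ = np.linalg.qr(np.column_stack(snapshots))    # orthonormal basis
    Ar = Q.conj().T @ A @ Q                            # reduced operator
    br = Q.conj().T @ b
    Ir = np.eye(Ar.shape[0])
    return np.array([br.conj() @ np.linalg.solve(Ar - s * Ir, br)
                     for s in eval_shifts])            # cheap dense sweep
```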
NASA Astrophysics Data System (ADS)
Clayton, R. W.; Kohler, M. D.; Massari, A.; Heaton, T. H.; Guy, R.; Chandy, M.; Bunn, J.; Strand, L.
2014-12-01
The CSN is now in its 3rd year of operation and has expanded to 400 stations in the Los Angeles region. The goal of the network is to produce a map of strong shaking immediately following a major earthquake as a proxy for damage and a guide for first responders. We have also instrumented a number of buildings with the goal of determining the state of health of these structures before and after they have been shaken. In one 15-story structure, our sensors, distributed two per floor, show body waves propagating in the structure after a moderate local earthquake (M4.4 in Encino, CA). Sensors in a 52-story structure, which we plan to instrument with two sensors per floor as well, show the modes of the building (see Figure) down to the fundamental mode at 5 sec due to a M5.1 earthquake in La Habra, CA. The CSN utilizes a number of technologies that will likely be important in building robust low-cost networks. These include: Distributed computing - the sensors themselves are smart sensors that perform the basic detection and size estimation in their onboard computers and send the results immediately (without packetization latency) to the central facility. Cloud computing - the central facility is housed in the cloud, which means it is more robust than a local site and has expandable computing resources, so that it can operate with minimal resources during quiet times but still exploit a very large computing facility during an earthquake. Low-cost/low-maintenance sensors - the MEMS sensors are capable of staying on scale to +/- 2g and can measure events in the Los Angeles Basin as low as magnitude 3.
A Proposal for Production Data Collection on a Hybrid Production Line in Cooperation with MES
NASA Astrophysics Data System (ADS)
Znamenák, Jaroslav; Križanová, Gabriela; Iringová, Miriam; Važan, Pavel
2016-12-01
Due to the increasingly competitive environment in the manufacturing sector, many industries need a computer-integrated engineering management system. The Manufacturing Execution System (MES) is a computer system designed to support product manufacturing with high quality, low cost and minimum lead time. MES is a type of middleware providing the information required to optimize production from the launch of a product order to its completion. There are many studies dealing with the advantages of the use of MES, but little research has been conducted on how to implement MES effectively. One solution to this issue is the use of key performance indicators (KPIs). KPIs are important to many strategic philosophies and practices for improving the production process. This paper describes a proposal for analyzing manufacturing system parameters with the use of KPIs.
Computer-Assisted Diagnosis of the Sleep Apnea-Hypopnea Syndrome: A Review
Alvarez-Estevez, Diego; Moret-Bonillo, Vicente
2015-01-01
Automatic diagnosis of the Sleep Apnea-Hypopnea Syndrome (SAHS) has become an important area of research due to the growing interest in the field of sleep medicine and the costs associated with its manual diagnosis. The growing number and heterogeneity of techniques, however, make it somewhat difficult to adequately follow recent developments. A literature review within the area of computer-assisted diagnosis of SAHS has been performed comprising the last 15 years of research in the field. Screening approaches, methods for the detection and classification of respiratory events, comprehensive diagnostic systems, and an outline of current commercial approaches are reviewed. An overview of the different methods is presented together with validation analysis and critical discussion of the current state of the art. PMID:26266052
Parallel scalability of Hartree-Fock calculations
NASA Astrophysics Data System (ADS)
Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.
2015-03-01
Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
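The density matrix purification step the authors analyze can be illustrated with McWeeny's classic cubic update, whose cost is dominated by dense matrix multiplies, which is exactly the operation the paper finds network-bandwidth bound at scale. This serial numpy sketch is only a stand-in for the distributed kernel; the iteration count is a placeholder.

```python
import numpy as np

def mcweeny_purification(P, n_iter=30):
    """Drive an approximate density matrix toward idempotency
    (P @ P == P) via McWeeny's cubic update P <- 3 P^2 - 2 P^3.
    Each iteration costs two dense matrix-matrix multiplies."""
    for _ in range(n_iter):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * (P2 @ P)
    return P
```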
Costs of cloud computing for a biometry department. A case study.
Knaus, J; Hieke, S; Binder, H; Schwarzer, G
2013-01-01
"Cloud" computing providers, such as the Amazon Web Services (AWS), offer stable and scalable computational resources based on hardware virtualization, with short, usually hourly, billing periods. The idea of pay-as-you-use seems appealing for biometry research units which have only limited access to university or corporate data center resources or grids. This case study compares the costs of an existing heterogeneous on-site hardware pool in a Medical Biometry and Statistics department to a comparable AWS offer. The "total cost of ownership", including all direct costs, is determined for the on-site hardware, and hourly prices are derived, based on actual system utilization during the year 2011. Indirect costs, which are difficult to quantify are not included in this comparison, but nevertheless some rough guidance from our experience is given. To indicate the scale of costs for a methodological research project, a simulation study of a permutation-based statistical approach is performed using AWS and on-site hardware. In the presented case, with a system utilization of 25-30 percent and 3-5-year amortization, on-site hardware can result in smaller costs, compared to hourly rental in the cloud dependent on the instance chosen. Renting cloud instances with sufficient main memory is a deciding factor in this comparison. Costs for on-site hardware may vary, depending on the specific infrastructure at a research unit, but have only moderate impact on the overall comparison and subsequent decision for obtaining affordable scientific computing resources. Overall utilization has a much stronger impact as it determines the actual computing hours needed per year. Taking this into ac count, cloud computing might still be a viable option for projects with limited maturity, or as a supplement for short peaks in demand.
Cost-Effectiveness and Cost-Utility of Internet-Based Computer Tailoring for Smoking Cessation
Evers, Silvia MAA; de Vries, Hein; Hoving, Ciska
2013-01-01
Background Although effective smoking cessation interventions exist, information is limited about their cost-effectiveness and cost-utility. Objective To assess the cost-effectiveness and cost-utility of an Internet-based multiple computer-tailored smoking cessation program and tailored counseling by practice nurses working in Dutch general practices compared with an Internet-based multiple computer-tailored program only and care as usual. Methods The economic evaluation was embedded in a randomized controlled trial, for which 91 practice nurses recruited 414 eligible smokers. Smokers were randomized to receive multiple tailoring and counseling (n=163), multiple tailoring only (n=132), or usual care (n=119). Self-reported cost and quality of life were assessed during a 12-month follow-up period. Prolonged abstinence and 24-hour and 7-day point prevalence abstinence were assessed at 12-month follow-up. The trial-based economic evaluation was conducted from a societal perspective. Uncertainty was accounted for by bootstrapping (1000 times) and sensitivity analyses. Results No significant differences were found between the intervention arms with regard to baseline characteristics or effects on abstinence, quality of life, and addiction level. However, participants in the multiple tailoring and counseling group reported significantly more annual health care–related costs than participants in the usual care group. Cost-effectiveness analysis, using prolonged abstinence as the outcome measure, showed that the multiple computer-tailored program alone had the highest probability of being cost-effective. Compared with usual care, in this group €5100 had to be paid for each additional abstinent participant. With regard to cost-utility analyses, using quality of life as the outcome measure, usual care was probably most efficient. Conclusions To our knowledge, this was the first study to determine the cost-effectiveness and cost-utility of an Internet-based smoking cessation program with and without counseling by a practice nurse. Although the Internet-based multiple computer-tailored program seemed to be the most cost-effective treatment, the cost-utility was probably highest for care as usual. However, to ease the interpretation of cost-effectiveness results, future research should aim at identifying an acceptable cutoff point for the willingness to pay per abstinent participant. PMID:23491820
Reliability and cost: A sensitivity analysis
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1991-01-01
In the design phase of a system, how a design engineer or manager chooses between a subsystem with .990 reliability and a more costly subsystem with .995 reliability is examined, along with the justification of the increased cost. High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. Neither the cost of the subsystem nor the expected cost due to subsystem failure should be considered separately; rather, the total of the two costs should be minimized, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure.
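The decision rule in the abstract can be made concrete in a few lines. The subsystem prices and failure cost below are invented for illustration; only the structure of the trade-off comes from the abstract.

```python
def expected_total_cost(subsystem_cost, reliability, failure_cost):
    """Subsystem cost plus expected cost due to subsystem failure,
    the total the paper argues should be minimized."""
    return subsystem_cost + (1.0 - reliability) * failure_cost

# Illustrative numbers only: when is the .995 option worth its premium?
failure_cost = 2_000_000  # cost incurred if the subsystem fails
option_a = expected_total_cost(10_000, 0.990, failure_cost)  # 30,000
option_b = expected_total_cost(18_000, 0.995, failure_cost)  # 28,000
print(option_a, option_b)  # here the 8,000 premium pays for itself
```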
Cloud computing for comparative genomics with windows azure platform.
Kim, Insik; Jung, Jae-Yoon; Deluca, Todd F; Nelson, Tristan H; Wall, Dennis P
2012-01-01
Cloud computing services have emerged as a cost-effective alternative to cluster systems as the number of genomes, and the computational power required to analyze them, has increased in recent years. Here we introduce the Microsoft Azure platform with detailed execution steps and a cost comparison with Amazon Web Services.
Wireless sensing and vibration control with increased redundancy and robustness design.
Li, Peng; Li, Luyu; Song, Gangbing; Yu, Yan
2014-11-01
Control systems with long distance sensor and actuator wiring have the problem of high system cost and increased sensor noise. Wireless sensor network (WSN)-based control systems are an alternative solution involving lower setup and maintenance costs and reduced sensor noise. However, WSN-based control systems also encounter problems such as possible data loss, irregular sampling periods (due to the uncertainty of the wireless channel), and the possibility of sensor breakdown (due to the increased complexity of the overall control system). In this paper, a wireless microcontroller-based control system is designed and implemented to wirelessly perform vibration control. The wireless microcontroller-based system is quite different from regular control systems due to its limited speed and computational power. Hardware, software, and control algorithm design are described in detail to demonstrate this prototype. Model and system state compensation is used in the wireless control system to solve the problems of data loss and sensor breakdown. A positive position feedback controller is used as the control law for the task of active vibration suppression. Both wired and wireless controllers are implemented. The results show that the WSN-based control system can be successfully used to suppress the vibration and produces resilient results in the presence of sensor failure.
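As a rough illustration of the control law named above, here is a minimal discrete-time positive position feedback (PPF) compensator: a damped second-order filter driven by the measured position, fed back positively as the control command. The filter frequency, damping, gain, and the semi-implicit Euler discretization are illustrative choices, not the paper's tuned design.

```python
class PositivePositionFeedback:
    """Minimal discrete-time PPF compensator for active vibration
    suppression:  eta'' + 2*zeta*wf*eta' + wf^2*eta = wf^2 * y."""

    def __init__(self, omega_f, zeta_f, gain, dt):
        self.wf, self.zf, self.g, self.dt = omega_f, zeta_f, gain, dt
        self.eta = 0.0      # filter coordinate
        self.eta_dot = 0.0  # filter velocity

    def update(self, y_measured):
        # Advance the filter one step (semi-implicit Euler).
        eta_ddot = (self.wf ** 2 * y_measured
                    - 2.0 * self.zf * self.wf * self.eta_dot
                    - self.wf ** 2 * self.eta)
        self.eta_dot += self.dt * eta_ddot
        self.eta += self.dt * self.eta_dot
        return self.g * self.eta  # positive feedback control command
```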
Mobile healthcare applications: system design review, critical issues and challenges.
Baig, Mirza Mansoor; GholamHosseini, Hamid; Connolly, Martin J
2015-03-01
Mobile phones are becoming increasingly important in the monitoring and delivery of healthcare interventions. They are often considered pocket computers, due to their advanced computing features, enhanced preferences and diverse capabilities. Their sophisticated sensors and complex software applications make mobile healthcare (m-health) applications more feasible and innovative. In a number of scenarios the user-friendliness, convenience and effectiveness of these systems have been acknowledged by both patients and healthcare providers. M-health technology employs advanced concepts and techniques from the multidisciplinary fields of electrical engineering, computer science, biomedical engineering and medicine, bringing the innovations of these fields to healthcare systems. This paper deals with two important aspects of current mobile phone based sensor applications in healthcare. Firstly, it critically reviews advanced applications such as vital sign monitoring, blood glucose monitoring and built-in camera-based smartphone sensor applications. Secondly, it investigates challenges and critical issues related to the use of smartphones in healthcare, including reliability, efficiency, mobile phone platform variability, cost effectiveness, energy usage, user interface, quality of medical data, and security and privacy. It was found that mobile based applications have been widely developed in recent years, with fast-growing deployment by healthcare professionals and patients. However, despite the advantages of smartphones in patient monitoring, education, and management, there are some critical issues and challenges related to security and privacy of data, acceptability, reliability and cost that need to be addressed.
A communication efficient and scalable distributed data mining for the astronomical data
NASA Astrophysics Data System (ADS)
Govada, A.; Sahay, S. K.
2016-07-01
In 2020, ∼60 PB of archived data will be accessible to astronomers, but analyzing such a vast amount of data will be a challenging task. This is basically due to the computational model in which data are downloaded from complex, geographically distributed archives to a central site and then analyzed on local systems. Because the data have to be downloaded to the central site, network bandwidth limitations become a hindrance to scientific discovery, and analyzing PB-scale data on local machines in a centralized manner is also challenging. The Virtual Observatory (VO) is a step towards addressing this problem; however, it does not provide a data mining model (Zhang et al., 2004). Adding a distributed data mining layer to the VO can be the solution: astronomers download knowledge instead of raw data, and can thereafter either reconstruct the data from the downloaded knowledge or use the knowledge directly for further analysis. Therefore, in this paper, we present Distributed Load Balancing Principal Component Analysis for optimally distributing the computation among the available nodes to minimize the transmission cost and downloading cost for the end user. The experimental analysis is done with Fundamental Plane (FP) data, Gadotti data and the complex Mfeat data. In terms of transmission cost, our approach performs better than Qi et al. and Yue et al. The analysis shows that with the complex Mfeat data ∼90% of the downloading cost can be reduced for the end user with negligible loss in accuracy.
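To sketch why shipping "knowledge" beats shipping raw data, the generic distributed-PCA pattern below has each site transmit only its d x d second-moment statistics rather than its rows; the principal components are then recovered centrally. This is the standard aggregation scheme, not the paper's load-balancing algorithm.

```python
import numpy as np

def distributed_pca(local_datasets, n_components):
    """Communication-efficient PCA: each site contributes only its row
    count, column sums, and X^T X (all O(d^2)), never the raw data."""
    d = local_datasets[0].shape[1]
    n_total, sum_x, sum_xxt = 0, np.zeros(d), np.zeros((d, d))
    for X in local_datasets:          # per-site statistics, cheap to send
        n_total += X.shape[0]
        sum_x += X.sum(axis=0)
        sum_xxt += X.T @ X
    mean = sum_x / n_total
    cov = sum_xxt / n_total - np.outer(mean, mean)  # pooled covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    return eigvecs[:, ::-1][:, :n_components]       # top components
```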
Economic benefits of improved insulin stability in insulin pumps.
Weiss, Richard C; van Amerongen, Derek; Bazalo, Gary; Aagren, Mark; Bouchard, Jonathan R
2011-05-01
Insulin pump users discard unused medication and infusion sets according to labeling and manufacturer's instructions. The stability labeling for insulin aspart (rDNA origin) (Novolog) was increased from two days to six. The associated savings was modeled from the perspective of a hypothetical one-million-member health plan and the total United States population. The discarded insulin volume and the number of infusion sets used under a two-day stability scenario versus six were modeled. A mix of insulin pumps of various reservoir capacities with a range of daily insulin dosages was used. The average daily insulin dose was 65 units, ranging from 10 to 150 units. Costs of discarded insulin aspart (rDNA origin) were calculated using WAC (Average Wholesale Price minus 16.67%). The cost of pump supplies was computed for the two-day scenario assuming a complete infusion set change, including reservoirs, every two days. Under the six-day scenario complete infusion sets were discarded every six days, while cannulas at the insertion site were changed midway between complete changes. The AWP of the least expensive supplies was used to compute their costs. For the hypothetical health plan (1,182 pump users) the annual reduction in discarded insulin volume between scenarios was 19.8 million units. The corresponding cost reduction for the plan due to drug and supply savings was $3.4 million. From the U.S. population perspective, savings of over $1 billion were estimated. Using insulin that is stable for six days in pump reservoirs can yield substantial savings to health plans and other payers, including patients.
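The discard model reduces to simple reservoir arithmetic. The toy version below (invented function; simplifying assumptions, e.g., the reservoir is always filled to capacity) only illustrates why a longer wear time cuts waste; it is not the study's calculation.

```python
def annual_discarded_units(daily_dose, reservoir_units, wear_days):
    """Insulin discarded per year when a full reservoir is replaced
    every `wear_days` days. Simplified toy model for illustration."""
    used_per_fill = min(daily_dose * wear_days, reservoir_units)
    discarded_per_fill = reservoir_units - used_per_fill
    fills_per_year = 365 / wear_days
    return discarded_per_fill * fills_per_year

# Illustrative: the 65 units/day average dose in a 200-unit reservoir.
print(annual_discarded_units(65, 200, 2))  # 2-day stability: waste per year
print(annual_discarded_units(65, 200, 6))  # 6-day: the reservoir is emptied
```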
Estimating HIV-1 Fitness Characteristics from Cross-Sectional Genotype Data
Gopalakrishnan, Sathej; Montazeri, Hesam; Menz, Stephan; Beerenwinkel, Niko; Huisinga, Wilhelm
2014-01-01
Despite the success of highly active antiretroviral therapy (HAART) in the management of human immunodeficiency virus (HIV)-1 infection, virological failure due to drug resistance development remains a major challenge. Resistant mutants display reduced drug susceptibilities, but in the absence of drug, they generally have a lower fitness than the wild type, owing to a mutation-incurred cost. The interaction between these fitness costs and drug resistance dictates the appearance of mutants and influences viral suppression and therapeutic success. Assessing in vivo viral fitness is a challenging task and yet one that has significant clinical relevance. Here, we present a new computational modelling approach for estimating viral fitness that relies on common sparse cross-sectional clinical data, combining statistical approaches to learn drug-specific mutational pathways and resistance factors with viral dynamics models to represent the host-virus interaction and the actions of drugs mechanistically. We estimate in vivo fitness characteristics of mutant genotypes for two antiretroviral drugs, the reverse transcriptase inhibitor zidovudine (ZDV) and the protease inhibitor indinavir (IDV). Well-known features of HIV-1 fitness landscapes are recovered, both in the absence and presence of drugs. We quantify the complex interplay between fitness costs and resistance by computing selective advantages for different mutants. Our approach extends naturally to multiple drugs and we illustrate this by simulating a dual therapy with ZDV and IDV to assess therapy failure. The combined statistical and dynamical modelling approach may help in dissecting the effects of fitness costs and resistance with the ultimate aim of assisting the choice of salvage therapies after treatment failure. PMID:25375675
NASA Astrophysics Data System (ADS)
Chetty, S.; Field, L. A.
2013-12-01
The Arctic Ocean's continuing decrease of summer-time ice is related to rapidly diminishing multi-year ice due to the effects of climate change. Ice911 Research aims to develop environmentally respectful materials that, when deployed, will increase the albedo, enhancing the formation and/or preservation of multi-year ice. Small-scale deployments using various materials have been done in Canada, California's Sierra Nevada Mountains and a pond in Minnesota to test the albedo performance and environmental characteristics of these materials. SWIMS is a sophisticated autonomous sensor system being developed to measure albedo, weather, water temperature and other environmental parameters. The system employs low-cost, high-accuracy/precision sensors, high-resolution cameras, and an extreme-environment command and data handling computer system using satellite and terrestrial wireless communication. The entire system is solar powered with redundant battery backup on a floating buoy platform engineered for low-temperature (-40C) and high-wind conditions. The system also incorporates tilt sensors, sonar-based ice thickness sensors and a weather station. To keep costs low, each SWIMS unit measures incoming and reflected radiation from the four quadrants around the buoy. This allows data from four sets of sensors, cameras, a weather station, and a water temperature probe to be collected and transmitted by a single on-board solar-powered computer. This presentation covers the technical, logistical and cost challenges in designing, developing and deploying these stations in remote, extreme environments. [Image captions: the setting sun at the SWIMS station, captured by camera #3; a view captured by SWIMS camera #4.]
Hurricane Loss Estimation Models: Opportunities for Improving the State of the Art.
NASA Astrophysics Data System (ADS)
Watson, Charles C., Jr.; Johnson, Mark E.
2004-11-01
The results of hurricane loss models are used regularly for multibillion dollar decisions in the insurance and financial services industries. These models are proprietary, and this “black box” nature hinders analysis. The proprietary models produce a wide range of results, often producing loss costs that differ by a ratio of three to one or more. In a study for the state of North Carolina, 324 combinations of loss models were analyzed, based on a combination of nine wind models, four surface friction models, and nine damage models drawn from the published literature in insurance, engineering, and meteorology. These combinations were tested against reported losses from Hurricanes Hugo and Andrew as reported by a major insurance company, as well as storm total losses for additional storms. Annual loss costs were then computed using these 324 combinations of models for both North Carolina and Florida, and compared with publicly available proprietary model results in Florida. The wide range of resulting loss costs for open, scientifically defensible models that perform well against observed losses mirrors the wide range of loss costs computed by the proprietary models currently in use. This outcome may be discouraging for governmental and corporate decision makers relying on this data for policy and investment guidance (due to the high variability across model results), but it also provides guidance for the efforts of future investigations to improve loss models. Although hurricane loss models are true multidisciplinary efforts, involving meteorology, engineering, statistics, and actuarial sciences, the field of meteorology offers the most promising opportunities for improvement of the state of the art.
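The 324 combinations follow directly from the component counts (9 wind x 4 friction x 9 damage). A short sketch, with placeholder model names, of how such an ensemble is enumerated for testing against observed losses:

```python
from itertools import product

# Placeholder names standing in for the published component models.
wind_models = [f"wind_{i}" for i in range(9)]
friction_models = [f"friction_{i}" for i in range(4)]
damage_models = [f"damage_{i}" for i in range(9)]

combos = list(product(wind_models, friction_models, damage_models))
assert len(combos) == 324  # 9 * 4 * 9, matching the study
```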
NASA Astrophysics Data System (ADS)
Tiwari, Vaibhav
2018-07-01
The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated, using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR, and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, to the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single-detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
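A schematic reading of the reweighting estimator, with invented names: each injection drawn from the generic distribution `p_inj` is weighted by the ratio of the target population density `p_pop` to `p_inj`, and the weights of the found injections give the sensitive-volume estimate for that population without new injection runs.

```python
import numpy as np

def sensitive_volume(injections, found, p_pop, p_inj, v_injected):
    """Weighted Monte-Carlo estimate of population-averaged sensitive
    volume. `injections` are source-parameter draws from p_inj,
    `found` is a 0/1 flag per injection (passed the SNR threshold),
    and `v_injected` is the astrophysical volume the injections span.
    Schematic sketch of the letter's estimator, not its code."""
    weights = np.array([p_pop(x) / p_inj(x) for x in injections])
    recovered = weights * np.asarray(found, dtype=float)
    # V_sens ~ V_injected * (sum of weights over found) / N_total
    return v_injected * recovered.sum() / len(injections)
```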
NASA Astrophysics Data System (ADS)
Chidburee, P.; Mills, J. P.; Miller, P. E.; Fieber, K. D.
2016-06-01
Close-range photogrammetric techniques offer a potentially low-cost approach in terms of implementation and operation for initial assessment and monitoring of landslide processes over small areas. In particular, the Structure-from-Motion (SfM) pipeline is now extensively used to help overcome many constraints of traditional digital photogrammetry, offering increased user-friendliness to nonexperts, as well as lower costs. However, a landslide monitoring approach based on the SfM technique also presents some potential drawbacks due to the difficulty in managing and processing a large volume of data in real-time. This research addresses the aforementioned issues by attempting to combine a mobile device with cloud computing technology to develop a photogrammetric measurement solution as part of a monitoring system for landslide hazard analysis. The research presented here focusses on (i) the development of an Android mobile application; (ii) the implementation of SfM-based open-source software in the Amazon cloud computing web service, and (iii) performance assessment through a simulated environment using data collected at a recognized landslide test site in North Yorkshire, UK. Whilst the landslide monitoring mobile application is under development, this paper describes experiments carried out to ensure effective performance of the system in the future. Investigations presented here describe the initial assessment of a cloud-implemented approach, which is developed around the well-known VisualSFM algorithm. Results are compared to point clouds obtained from alternative SfM 3D reconstruction approaches considering a commercial software solution (Agisoft PhotoScan) and a web-based system (Autodesk 123D Catch). Investigations demonstrate that the cloud-based photogrammetric measurement system is capable of providing results of centimeter-level accuracy, evidencing its potential to provide an effective approach for quantifying and analyzing landslide hazard at a local-scale.
Multi-stage methodology to detect health insurance claim fraud.
Johnson, Marina Evrim; Nagarur, Nagen
2016-09-01
Healthcare costs in the US, as well as in other countries, increase rapidly due to demographic, economic, social, and legal changes. This increase in healthcare costs impacts both government and private health insurance systems. Fraudulent behaviors of healthcare providers and patients have become a serious burden to insurance systems by bringing unnecessary costs. Insurance companies thus develop methods to identify fraud. This paper proposes a new multistage methodology for insurance companies to detect fraud committed by providers and patients. The first three stages aim at detecting abnormalities among providers, services, and claim amounts. Stage four then integrates the information obtained in the previous three stages into an overall risk measure. Subsequently, a decision tree based method in stage five computes risk threshold values. The final decision stating whether the claim is fraudulent is made by comparing the risk value obtained in stage four with the risk threshold value from stage five. The research methodology performs well on real-world insurance data.
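The staged structure might be sketched as follows; the weighted-sum combination, the score names, and the weights are illustrative assumptions, since the paper does not publish its exact formulas.

```python
def overall_risk(provider_score, service_score, amount_score,
                 weights=(0.4, 0.3, 0.3)):
    """Stage four: combine the three abnormality scores (providers,
    services, claim amounts) into one risk measure. A weighted sum is
    shown purely for illustration."""
    scores = (provider_score, service_score, amount_score)
    return sum(w * s for w, s in zip(weights, scores))

def is_fraudulent(claim_scores, threshold):
    """Stage six: flag the claim if its combined risk exceeds the
    threshold produced by the decision-tree stage (stage five)."""
    return overall_risk(*claim_scores) > threshold

print(is_fraudulent((0.9, 0.7, 0.8), threshold=0.6))  # True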
Gurdita, Akshay; Vovko, Heather; Ungrin, Mark
2016-01-01
Basic equipment such as incubation and refrigeration systems plays a critical role in nearly all aspects of the traditional biological research laboratory. Their proper functioning is therefore essential to ensure reliable and repeatable experimental results. Despite this fact, in many academic laboratories little attention is paid to validating and monitoring their function, primarily due to the cost and/or technical complexity of available commercial solutions. We have therefore developed a simple and low-cost monitoring system that combines a "Raspberry Pi" single-board computer with USB-connected sensor interfaces to track and log parameters such as temperature and pressure, and send email alert messages as appropriate. The system is controlled by open-source software, and we have also generated scripts to automate software setup so that no background in programming is required to install and use it. We have applied it to investigate the behaviour of our own equipment, and present here the results along with the details of the monitoring system used to obtain them.
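A minimal sketch of such a monitor, assuming a local mail relay and with the sensor read stubbed out (the published system uses USB-connected sensor interfaces and automated setup scripts); addresses and thresholds are placeholders.

```python
import random
import smtplib
import time
from email.message import EmailMessage

ALERT_TO = "lab-manager@example.org"  # placeholder address

def read_temperature():
    # Placeholder: swap in the real USB-sensor read here.
    return random.gauss(37.0, 0.5)

def send_alert(temp_c):
    msg = EmailMessage()
    msg["Subject"] = f"Incubator alert: {temp_c:.1f} C"
    msg["From"], msg["To"] = "pi@example.org", ALERT_TO
    msg.set_content(f"Temperature out of range: {temp_c:.1f} C")
    with smtplib.SMTP("localhost") as s:  # assumes a local mail relay
        s.send_message(msg)

while True:
    t = read_temperature()
    if not 36.0 <= t <= 38.0:  # illustrative incubator set-point band
        send_alert(t)
    time.sleep(60)             # check (and could log) once a minute
```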
A Low-Cost and Energy-Efficient Multiprocessor System-on-Chip for UWB MAC Layer
NASA Astrophysics Data System (ADS)
Xiao, Hao; Isshiki, Tsuyoshi; Khan, Arif Ullah; Li, Dongju; Kunieda, Hiroaki; Nakase, Yuko; Kimura, Sadahiro
Ultra-wideband (UWB) technology has attracted much attention recently due to its high data rate and low emission power. Its media access control (MAC) protocol, WiMedia MAC, promises many facilities for high-speed and high-quality wireless communication. These benefits, however, involve a large computational load that challenges traditional uniprocessor-based implementations to provide the required performance, while the constrained cost and power budget makes commercial multiprocessor solutions unrealistic. In this paper, a low-cost and energy-efficient multiprocessor system-on-chip (MPSoC), which tackles at once the aspects of system design, software migration and hardware architecture, is presented for the implementation of the UWB MAC layer. Experimental results show that the proposed MPSoC, based on four simple RISC processors and a shared-memory infrastructure, achieves up to 45% performance improvement and 65% power saving, yet takes 15% less area than the uniprocessor implementation.
Samba: a real-time motion capture system using wireless camera sensor networks.
Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai
2014-03-20
There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.
Predicting commuter flows in spatial networks using a radiation model based on temporal ranges
NASA Astrophysics Data System (ADS)
Ren, Yihui; Ercsey-Ravasz, Mária; Wang, Pu; González, Marta C.; Toroczkai, Zoltán
2014-11-01
Understanding network flows such as commuter traffic in large transportation networks is an ongoing challenge due to the complex nature of the transportation infrastructure and human mobility. Here we show a first-principles based method for traffic prediction using a cost-based generalization of the radiation model for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data, we show that traffic can efficiently and accurately be computed from a range-limited, network betweenness type calculation. The model based on travel time costs captures the log-normal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared with real traffic. Because of its principled nature, this method can inform many applications related to human mobility driven flows in spatial networks, ranging from transportation, through urban planning to mitigation of the effects of catastrophic events.
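For reference, the standard radiation model that the paper generalizes (with the intervening population s_ij redefined through travel-time cost rather than straight-line distance) predicts the commuter flux from i to j as below; the example numbers are invented.

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Expected commuter flux from origin i to destination j under the
    radiation model: T_ij = T_i * m_i*n_j / ((m_i+s_ij)*(m_i+n_j+s_ij)).
    T_i: total commuters leaving i; m_i, n_j: origin and destination
    populations; s_ij: population inside the (cost) radius of i,
    excluding m_i and n_j."""
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

# Toy case: 1000 commuters leave i; equal-size towns, sparse middle.
print(radiation_flux(T_i=1000, m_i=5000, n_j=5000, s_ij=2000))  # ~298
```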
Smith, Tyler; Elson, Leah; Anderson, Christopher; Leone, William
2016-01-01
Despite technological advances in operative technique and component materials, the total knee arthroplasty (TKA) revision burden in the United States has remained static for the past decade. In light of an anticipated exponential increase in annual surgical volume, it is important to thoroughly understand contemporary challenges associated with technologically driven TKA. This descriptive literature review harvested 69 relevant publications to extrapolate patient trends, benefits, costs, and complications associated with computer-assisted surgery, patient-specific instrumentation, and intra-operative sensors. Due to additional charges, a steep learning curve, and questionable cost-effectiveness, widespread use of these systems has been limited. Intra-operative sensors are a relatively recent development, and have been shown to improve both soft-tissue balance and overall functional outcomes at a relatively low price and without disrupting operative workflow. The introduction of new technology into the operating suite should be considered carefully, especially with respect to combined clinical efficacy and cost.
USGS Telecommunications Responding to Change
Hott, James L.
1985-01-01
The telecommunications industry is undergoing tremendous change due to the court ordered breakup of the monopoly once enjoyed by American Telephone & Telegraph (AT&T). This action has resulted in a plethora of new services and products in all of the communications fields, including traditional voice and data. The new products are making extensive use of computer technology. At the same time, costs of telecommunications services have risen dramatically over the past three years. This article reviews some of the major actions that the Geological Survey has taken in response to these changes.
NASA Astrophysics Data System (ADS)
Wong, G.
The unparalleled cost and form factor advantages of NAND flash memory have driven 35 mm photographic film, floppy disks and one-inch hard drives to extinction. Due to its compelling price/performance characteristics, NAND flash memory is now expanding its reach into the once-exclusive domain of hard disk drives and DRAM in the form of Solid State Drives (SSDs). Driven by the proliferation of thin and light mobile devices and the need for near-instantaneous accessing and sharing of content through the cloud, SSDs are expected to become a permanent fixture in the computing infrastructure.
Evolving aerodynamic airfoils for wind turbines through a genetic algorithm
NASA Astrophysics Data System (ADS)
Hernández, J. J.; Gómez, E.; Grageda, J. I.; Couder, C.; Solís, A.; Hanotel, C. L.; Ledesma, JI
2017-01-01
Nowadays, genetic algorithms stand out for airfoil optimisation, due to the virtues of mutation and crossover techniques. In this work we propose a genetic algorithm with arithmetic crossover rules. The optimisation criteria are taken to be the maximisation of both aerodynamic efficiency and lift coefficient, while minimising the drag coefficient. The algorithm shows great improvements in computational cost, as well as high performance, obtaining airfoils optimised for Mexico City's specific wind conditions from generic wind turbine airfoils designed for higher Reynolds numbers in a few iterations.
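Arithmetic crossover, as named in the abstract, blends parent genomes by convex combination, so offspring parameters stay inside the hull of the parent designs. A minimal sketch with placeholder airfoil parameters; the authors' encoding and fitness evaluation are not shown.

```python
import random

def arithmetic_crossover(parent_a, parent_b):
    """Each child gene is a convex combination of the parents' genes,
    keeping offspring within the span of the parent designs."""
    alpha = random.random()
    child_a = [alpha * a + (1 - alpha) * b
               for a, b in zip(parent_a, parent_b)]
    child_b = [(1 - alpha) * a + alpha * b
               for a, b in zip(parent_a, parent_b)]
    return child_a, child_b

# Toy genomes standing in for airfoil shape parameters.
c1, c2 = arithmetic_crossover([0.12, 0.40, 2.5], [0.10, 0.35, 3.0])
```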
Report: NSF Instrumentation and Laboratory Improvement Grants in Chemistry
NASA Astrophysics Data System (ADS)
1997-01-01
The 1996 awards in chemistry under the Instrumentation and Laboratory Improvement Program (ILI) of the Division of Undergraduate Education (DUE) have been announced and are listed below. The ILI program provides matching funds in the range of $5,000 to $100,000 for purchasing equipment for laboratory improvement. Since the recipient institution must provide matching funds equaling or exceeding the NSF award, the supported projects range in cost from $10,000 to over $200,000. The 311 chemistry proposals requesting $13 million constituted 21% of the total number of proposals submitted to the ILI program. A total of $3.9 million was awarded in support of 110 projects in chemistry. The instruments requested most frequently were high-field NMRs, GC/MS instruments, computers for data analysis, and FT-IRs; next most commonly requested were UV-vis spectrophotometers, followed by HPLCs, lasers, computers for molecular modeling, AAs, and GCs. In addition, one award was made this year in chemistry within the Leadership in Laboratory Development category. The next deadline for submission of ILI proposals is November 14, 1997. Guidelines for the preparation of proposals are found in the DUE Program Announcement (NSF 96-10), which may be obtained by calling (703) 306-1666 or by e-mail: undergrad@nsf.gov. Other information about DUE programs and activities and abstracts of the funded proposals can be found on the DUE Home Page at http://www.ehr.nsf.gov/EHR/DUE/start.htm. We thank Sandra D. Nelson, Science Education Analyst in DUE, for assistance in data gathering.
The Next Generation of Lab and Classroom Computing - The Silver Lining
2016-12-01
A virtual desktop infrastructure (VDI) solution, as well as the computing solutions at three universities, was selected as the basis for comparison. Keywords: virtual desktop infrastructure, VDI, hardware cost, software cost, manpower, availability, cloud computing, private cloud, bring your own device, BYOD, thin client.
1987-12-01
7. Commercial VI Production. A completed VI production, purchased off-the-shelf, i.e., from the stocks of a vendor.
8. Computer-Generated Graphics. The production of graphics through an electronic medium based on a computer or computer techniques.
9. Contract VI Production. A VI ... displays, presentations, and exhibits prepared manually, by machine, or by computer.
16. Indirect Costs. An item of cost (or the aggregate thereof) that is ...
A Computational Model for Predicting Gas Breakdown
NASA Astrophysics Data System (ADS)
Gill, Zachary
2017-10-01
Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regard to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.
Cloud computing for comparative genomics
2010-01-01
Background Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. Results We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. Conclusions The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems. PMID:20482786
Low-cost data analysis systems for processing multispectral scanner data
NASA Technical Reports Server (NTRS)
Whitely, S. L.
1976-01-01
The basic hardware and software requirements are described for four low-cost analysis systems for computer-generated land use maps. The data analysis systems consist of an image display system, a small digital computer, and an output recording device. The software is described together with some of the display and recording devices, and typical costs are cited. Computer requirements are given, and two approaches are described for converting black-and-white film and electrostatic printer output to inexpensive color output products. Examples of output products are shown.
Benchmarking undedicated cloud computing providers for analysis of genomic datasets.
Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W
2014-01-01
A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.
Evaluating Thin Client Computers for Use by the Polish Army
2006-06-01
[Figure 15: Annual Electricity Cost and Savings for 5 to 100 Users (source: Thin Client Computing)] ... 50 percent in hard costs in the first year of thin client network deployment. However, the greatest savings come from the reduction in soft costs ... resources from both the classrooms and home. The thin client solution increased the reliability of the IT infrastructure and resulted in cost savings.
A Low Cost Micro-Computer Based Local Area Network for Medical Office and Medical Center Automation
Epstein, Mel H.; Epstein, Lynn H.; Emerson, Ron G.
1984-01-01
A low-cost microcomputer-based local area network for medical office automation is described which makes use of an array of multiple and different personal computers interconnected by a local area network. Each computer on the network functions as a fully potent workstation for data entry and report generation. The network allows each workstation complete access to the entire database. Additionally, designated computers may serve as access ports for remote terminals. Through “Gateways” the network may serve as a front end for a large mainframe, or may interface with another network. The system provides for the medical office environment the expandability and flexibility of a multi-terminal mainframe system at a far lower cost without sacrifice of performance.
NASA Technical Reports Server (NTRS)
1983-01-01
An assessment was made of the impact of developments in computational fluid dynamics (CFD) on the traditional role of aerospace ground test facilities over the next fifteen years. With the improvements in CFD and more powerful scientific computers projected over this period, it is expected that the flow over a complete aircraft could be computed at a unit cost three orders of magnitude lower than presently possible. Over the same period improvements in ground test facilities will progress by application of computational techniques, including CFD, to data acquisition, facility operational efficiency, and simulation of the flight envelope; however, no dramatic change in unit cost is expected, as greater efficiency will be countered by higher energy and labor costs.
ERIC Educational Resources Information Center
Lourey, Eugene D., Comp.
The Minnesota Computer Aided Library System (MCALS) provides a basis of unification for library service program development in Minnesota for eventual linkage to the national information network. A prototype plan for communications functions is illustrated. A cost/benefits analysis was made to show the cost/effectiveness potential for MCALS. System…
Data Bases at a State Institution--Costs, Uses and Needs. AIR Forum Paper 1978.
ERIC Educational Resources Information Center
McLaughlin, Gerald W.
The cost-benefit of administrative data at a state college is placed in perspective relative to the institutional involvement in computer use. The costs of computer operations, personnel, and peripheral equipment expenses related to instruction are analyzed. Data bases and systems support institutional activities, such as registration, and aid…
Computer assisted yarding cost analysis.
Ronald W. Mifflin
1980-01-01
Programs for a programable calculator and a desk-top computer are provided for quickly determining yarding cost and comparing the economics of alternative yarding systems. The programs emphasize the importance of the relationship between production rate and machine rate, which is the hourly cost of owning and operating yarding equipment. In addition to generating the...
7 CFR 993.159 - Payments for services performed with respect to reserve tonnage prunes.
Code of Federal Regulations, 2012 CFR
2012-01-01
... overhead costs, which include those for supervision, indirect labor, fuel, power and water, taxes and... tonnage prunes. The Committee will compute the average industry cost for holding reserve pool prunes by... choose to exclude the high and low data in computing an industry average. The industry average costs may...
7 CFR 993.159 - Payments for services performed with respect to reserve tonnage prunes.
Code of Federal Regulations, 2013 CFR
2013-01-01
... overhead costs, which include those for supervision, indirect labor, fuel, power and water, taxes and... tonnage prunes. The Committee will compute the average industry cost for holding reserve pool prunes by... choose to exclude the high and low data in computing an industry average. The industry average costs may...
7 CFR 993.159 - Payments for services performed with respect to reserve tonnage prunes.
Code of Federal Regulations, 2014 CFR
2014-01-01
... overhead costs, which include those for supervision, indirect labor, fuel, power and water, taxes and... tonnage prunes. The Committee will compute the average industry cost for holding reserve pool prunes by... choose to exclude the high and low data in computing an industry average. The industry average costs may...
The economic burden of occupational non-melanoma skin cancer due to solar radiation.
Mofidi, Amirabbas; Tompa, Emile; Spencer, James; Kalcevich, Christina; Peters, Cheryl E; Kim, Joanne; Song, Chaojie; Mortazavi, Seyed Bagher; Demers, Paul A
2018-06-01
Solar ultraviolet (UV) radiation is the second most prevalent carcinogenic exposure in Canada and is similarly important in other countries with large Caucasian populations. The objective of this article was to estimate the economic burden associated with newly diagnosed non-melanoma skin cancers (NMSCs) attributable to occupational solar radiation exposure. Key cost categories considered were direct costs (healthcare costs, out-of-pocket costs (OOPCs), and informal caregiver costs); indirect costs (productivity/output costs and home production costs); and intangible costs (monetary value of the loss of health-related quality of life (HRQoL)). To generate the burden estimates, we used secondary data from multiple sources applied to computational methods developed from an extensive review of the literature. An estimated 2,846 (5.3%) of the 53,696 newly diagnosed cases of basal cell carcinoma (BCC) and 1,710 (9.2%) of the 18,549 newly diagnosed cases of squamous cell carcinoma (SCC) in 2011 in Canada were attributable to occupational solar radiation exposure. The combined total for direct and indirect costs of occupational NMSC cases is $28.9 million ($15.9 million for BCC and $13.0 million for SCC), and for intangible costs is $5.7 million ($0.6 million for BCC and $5.1 million for SCC). On a per-case basis, the total costs are $5,670 for BCC and $10,555 for SCC. The higher per-case cost for SCC is largely a result of a lower survival rate, and hence higher indirect and intangible costs. Our estimates can be used to raise awareness of occupational solar UV exposure as an important causal factor in NMSCs and can highlight the importance of occupational BCC and SCC among other occupational cancers.
Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements
Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.
2016-01-01
Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037
Addition and Removal Energies via the In-Medium Similarity Renormalization Group Method
NASA Astrophysics Data System (ADS)
Yuan, Fei
The in-medium similarity renormalization group (IM-SRG) is an ab initio many-body method suitable for systems with moderate numbers of particles due to its polynomial scaling in computational cost. The formalism is highly flexible and admits a variety of modifications that extend its utility beyond the original goal of computing ground state energies of closed-shell systems. In this work, we present an extension of IM-SRG through quasidegenerate perturbation theory (QDPT) to compute addition and removal energies (single particle energies) near the Fermi level at low computational cost. This expands the range of systems that can be studied from closed-shell ones to nearby systems that differ by one particle. The method is applied to circular quantum dot systems and nuclei, and compared against other methods including equations-of-motion (EOM) IM-SRG and EOM coupled-cluster (CC) theory. The results are in good agreement for most cases. As part of this work, we present an open-source implementation of our flexible and easy-to-use J-scheme framework as well as the HF, IM-SRG, and QDPT codes built upon this framework. We include an overview of the overall structure, the implementation details, and strategies for maintaining high code quality and efficiency. Lastly, we also present a graphical application for manipulation of angular momentum coupling coefficients through a diagrammatic notation for angular momenta (Jucys diagrams). The tool enables rapid derivations of equations involving angular momentum coupling--such as in J-scheme--and significantly reduces the risk of human errors.
A Lumped Computational Model for Sodium Sulfur Battery Analysis
NASA Astrophysics Data System (ADS)
Wu, Fan
Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design process and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy and charge throughout the battery has been developed. The computation is coupled with Faraday's law, and solutions for the species concentrations, electrical potential and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.
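To make the time-marching structure of such a lumped model concrete, the sketch below advances a single control volume in time, using Faraday's law to convert the applied current into species consumption. All parameter values, and the linear open-circuit-voltage function, are illustrative assumptions rather than sodium-sulfur data.

import numpy as np

F = 96485.0  # Faraday constant, C/mol

def simulate_lumped_cell(i_app=10.0, n_e=2, n_s0=50.0, r_int=0.01,
                         e_ocv=lambda x: 2.08 - 0.1 * (1 - x),
                         dt=1.0, t_end=3600.0):
    """Time-march a single-control-volume cell model.

    Faraday's law converts the applied current into a rate of consumption
    of the active species; terminal voltage is the open-circuit voltage
    (a function of state of charge) minus the ohmic drop. All numbers are
    illustrative, not sodium-sulfur data.
    """
    n_s = n_s0                             # moles of active species
    t, hist = 0.0, []
    while t < t_end and n_s > 0.0:
        n_s -= i_app / (n_e * F) * dt      # Faraday's law: dN/dt = -I/(nF)
        soc = max(n_s / n_s0, 0.0)         # state of charge in [0, 1]
        v = e_ocv(soc) - i_app * r_int     # terminal voltage with ohmic loss
        hist.append((t, soc, v))
        t += dt
    return np.array(hist)

out = simulate_lumped_cell()
print(out[-1])  # final time, state of charge, voltage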
Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU
Xia, Yong; Wang, Kuanquan; Zhang, Henggui
2015-01-01
Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This poses a major challenge for traditional CPU-based computing resources, which either cannot meet the demands of whole-heart computation or are not easily available due to their expense. GPUs as a parallel computing environment therefore provide an alternative for solving the large-scale computational problems of whole heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, a multicellular tissue model was split into two components: the single cell model (a system of ordinary differential equations) and the diffusion term of the monodomain model (a partial differential equation). This decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup as compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole heart simulations. PMID:26581957
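The ODE/PDE decoupling described above is easy to illustrate in serial form. The sketch below splits each time step into a per-cell kinetics update (FitzHugh-Nagumo standing in for the sheep atrial cell model, which the abstract does not specify) and an explicit diffusion update on a 1D fiber; on a GPU each of the two steps would map naturally to one kernel with one thread per cell.

import numpy as np

def step_cell(v, w, i_stim, dt):
    """ODE part: FitzHugh-Nagumo kinetics per cell (one GPU thread per cell)."""
    dv = v - v**3 / 3 - w + i_stim
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return v + dt * dv, w + dt * dw

def step_diffusion(v, d, dx, dt):
    """PDE part: explicit finite-difference diffusion of the monodomain
    model (np.roll gives a periodic boundary, for brevity)."""
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
    return v + dt * d * lap

n, dx, dt, d = 200, 0.1, 0.01, 0.1
v = -1.2 * np.ones(n); w = np.zeros(n)
stim = np.zeros(n); stim[:5] = 0.5            # stimulate one end of the fiber
for step in range(20000):
    v, w = step_cell(v, w, stim if step < 500 else 0.0, dt)
    v = step_diffusion(v, d, dx, dt)
print(v.max(), v.argmax())  # amplitude and location of the excitation wave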
42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).
Code of Federal Regulations, 2012 CFR
2012-10-01
..., COMPETITIVE MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Risk Basis § 417.588 Computation... 42 Public Health 3 2012-10-01 2012-10-01 false Computation of adjusted average per capita cost (AAPCC). 417.588 Section 417.588 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF...
42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).
Code of Federal Regulations, 2011 CFR
2011-10-01
... MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Risk Basis § 417.588 Computation of... 42 Public Health 3 2011-10-01 2011-10-01 false Computation of adjusted average per capita cost (AAPCC). 417.588 Section 417.588 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF...
42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).
Code of Federal Regulations, 2010 CFR
2010-10-01
... MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Risk Basis § 417.588 Computation of... 42 Public Health 3 2010-10-01 2010-10-01 false Computation of adjusted average per capita cost (AAPCC). 417.588 Section 417.588 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF...
hPIN/hTAN: Low-Cost e-Banking Secure against Untrusted Computers
NASA Astrophysics Data System (ADS)
Li, Shujun; Sadeghi, Ahmad-Reza; Schmitz, Roland
We propose hPIN/hTAN, a low-cost token-based e-banking protection scheme for the setting in which the adversary has full control over the user's computer. Compared with existing hardware-based solutions, hPIN/hTAN requires neither a second trusted channel, nor a secure keypad, nor a computationally expensive encryption module.
Estimation of Local Bone Loads for the Volume of Interest.
Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun
2016-07-01
Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models due to the excessive computational costs of full FE models. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models has a severe limitation for reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method of estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for three VOIs in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude produces the greatest absolute changes in the SED distribution (reflecting its larger perturbation), whereas angular deviation perpendicular to the HCF produces the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations.
Virtualization and cloud computing in dentistry.
Chow, Frank; Muftu, Ali; Shorter, Richard
2014-01-01
The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management), since only one physical computer needs to be purchased and kept running. This virtualization platform is the basis for cloud computing, and it has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As demands on computers continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article provides some useful information on current uses of cloud computing.
Bringing computational models of bone regeneration to the clinic.
Carlier, Aurélie; Geris, Liesbet; Lammens, Johan; Van Oosterwyck, Hans
2015-01-01
Although the field of bone regeneration has experienced great advancements in the last decades, integrating all the relevant, patient-specific information into a personalized diagnosis and optimal treatment remains a challenging task due to the large number of variables that affect bone regeneration. Computational models have the potential to cope with this complexity and to improve the fundamental understanding of the bone regeneration processes, as well as to predict and optimize patient-specific treatment strategies. However, the current use of computational models in daily orthopedic practice is very limited or nonexistent. We have identified three key hurdles that limit the translation of computational models of bone regeneration from bench to bedside. First, there is a clear mismatch between the scope of existing models and that of the clinically required models. Second, most computational models are confronted with limited quantitative information of insufficient quality, hampering the determination of patient-specific parameter values. Third, current computational models are only corroborated with animal models, whereas a thorough (retrospective and prospective) assessment of a computational model will be crucial to convince health care providers of its capabilities. These challenges must be addressed so that computational models of bone regeneration can reach their true potential, resulting in the advancement of individualized care and reduction of the associated health care costs. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Shiju, S.; Sumitra, S.
2017-12-01
In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for that function in the global RKHS which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, in which the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that the single-stage representation helps transfer knowledge between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
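In symbols (our notation; the abstract gives no formulas), the single-cost-function idea can be sketched as follows: with $\mathcal{H}_k$ the RKHS for the kernel-learning boundary and $\mathcal{H}_d$ the RKHS for the data boundary, one works in the direct sum and fits both components with one regularized empirical risk,

\[
\mathcal{H} = \mathcal{H}_k \oplus \mathcal{H}_d, \qquad f = f_k + f_d, \quad f_k \in \mathcal{H}_k,\ f_d \in \mathcal{H}_d,
\]
\[
\min_{f \in \mathcal{H}} \; \sum_{i=1}^{N} L\big(y_i, f(x_i)\big) + \lambda \,\lVert f \rVert_{\mathcal{H}}^{2},
\]

whereas the two-stage scheme minimizes two separate cost functions for $f_k$ and $f_d$.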
On-line Bayesian model updating for structural health monitoring
NASA Astrophysics Data System (ADS)
Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo
2018-03-01
Fatigue-induced cracking is a dangerous failure mechanism affecting mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks which can jeopardise the structure. Real-time damage detection may fail to identify cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel, efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows accounting for relevant sources of uncertainty. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly finite element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecision in the values of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
NASA Astrophysics Data System (ADS)
O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.
2017-07-01
The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.
NASA Astrophysics Data System (ADS)
Lee, Jong-Chul; Lee, Won-Ho; Kim, Woun-Jea
2015-09-01
The design and development procedures of SF6 gas circuit breakers are still largely based on trial and error through testing, even though development costs rise every year. Computation alone cannot replace testing satisfactorily because not all of the real processes are taken into account. However, knowledge of the arc behavior and prediction of the thermal flow inside the interrupters by numerical simulation are more useful than experiments alone, due to the difficulty of obtaining physical quantities experimentally and the reduction of computational costs in recent years. In this paper, in order to gain further insight into the interruption process of an SF6 self-blast interrupter, which is based on a combination of thermal expansion and the arc rotation principle, gas flow simulations with CFD-arc modeling are performed over the whole switching process, including the high-current period, the pre-current-zero period, and the current-zero period. From this work, the pressure rise and the ramp of the pressure inside the chamber before current zero, as well as the post-arc current after current zero, provide good criteria for predicting the short-line fault interruption performance of interrupters.
NASA Astrophysics Data System (ADS)
Han, Keesook J.; Hodge, Matthew; Ross, Virginia W.
2011-06-01
For monitoring network traffic, there is an enormous cost in collecting, storing, and analyzing network traffic datasets. Data-mining-based network traffic analysis is attracting growing interest in the cyber security community, but is computationally expensive for finding correlations between attributes in massive network traffic datasets. To lower the cost and reduce computational complexity, it is desirable to perform feasible statistical processing on effective reduced datasets instead of on the original full datasets. Because of the dynamic behavior of network traffic, traffic traces exhibit mixtures of heavy-tailed statistical distributions or overdispersion. Heavy-tailed network traffic characterization and visualization are important and essential tasks for measuring network performance for quality of service. However, heavy-tailed distributions are limited in their ability to characterize real-time network traffic due to the difficulty of parameter estimation. The Entropy-Based Heavy Tailed Distribution Transformation (EHTDT) was developed to convert the heavy-tailed distribution into a transformed distribution that admits a linear approximation. The EHTDT linearization has the advantage of being amenable to characterizing and aggregating overdispersion of network traffic in real time. Results of applying the EHTDT for innovative visual analytics to real network traffic data are presented.
Specialized computer architectures for computational aerodynamics
NASA Technical Reports Server (NTRS)
Stevenson, D. K.
1978-01-01
In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relatively high cost of performing these computations on commercially available general purpose computers, a cost high in both dollar expenditure and elapsed time. Today's computing technology will support a program designed to create specialized computing facilities dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.
Ding, Yan; Fei, Yang; Xu, Biao; Yang, Jun; Yan, Weirong; Diwan, Vinod K; Sauerborn, Rainer; Dong, Hengjin
2015-07-25
Studies into the costs of syndromic surveillance systems are rare, especially for estimating the direct costs involved in implementing and maintaining these systems. An Integrated Surveillance System in rural China (ISSC project), with the aim of providing an early warning system for outbreaks, was implemented; village clinics were the main surveillance units. Village doctors expressed their willingness to join in the surveillance if a proper subsidy was provided. This study aims to measure the costs of data collection by village clinics to provide a reference regarding the subsidy level required for village clinics to participate in data collection. We conducted a cross-sectional survey with a village clinic questionnaire and a staff questionnaire using a purposive sampling strategy. We tracked reported events using the ISSC internal database. Cost data included staff time, and the annual depreciation and opportunity costs of computers. We measured the village doctors' time costs for data collection by multiplying the number of full time employment equivalents devoted to the surveillance by the village doctors' annual salaries and benefits, which equaled their net incomes. We estimated the depreciation and opportunity costs of computers by calculating the equivalent annual computer cost and then allocating this to the surveillance based on the percentage usage. The estimated total annual cost of collecting data was 1,423 Chinese Renminbi (RMB) in 2012 (P25 = 857, P75 = 3284), including 1,250 RMB (P25 = 656, P75 = 3000) staff time costs and 134 RMB (P25 = 101, P75 = 335) depreciation and opportunity costs of computers. The total costs of collecting data from the village clinics for the syndromic surveillance system was calculated to be low compared with the individual net income in County A.
Processor Would Find Best Paths On Map
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P.
1990-01-01
Proposed very-large-scale integrated (VLSI) circuit image-data processor finds path of least cost from specified origin to any destination on map. Cost of traversal assigned to each picture element of map. Path of least cost from originating picture element to every other picture element computed as path that preserves as much as possible of signal transmitted by originating picture element. Dedicated microprocessor at each picture element stores cost of traversal and performs its share of computations of paths of least cost. Least-cost-path problem occurs in research, military maneuvers, and in planning routes of vehicles.
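In software terms, the quantity the proposed processor computes is the classic single-source least-cost path over a grid; a serial sketch with Dijkstra's algorithm is below (the VLSI design instead updates all picture elements in parallel, one dedicated processor each). The terrain array and its costs are made-up illustrative values.

import heapq

def least_cost_paths(cost, origin):
    """Dijkstra's algorithm over a grid of per-pixel traversal costs.

    cost[r][c] is the cost of entering pixel (r, c); returns the minimum
    total cost from the origin to every pixel. This is a serial software
    analogue of the article's hardware, not the hardware itself.
    """
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[origin[0]][origin[1]] = 0.0
    heap = [(0.0, origin)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r][c]:
            continue                       # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

terrain = [[1, 1, 5], [9, 1, 5], [1, 1, 1]]
print(least_cost_paths(terrain, (0, 0)))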
Code of Federal Regulations, 2010 CFR
2010-04-01
... adjustments due to changes in the cost of purchased power or energy. 175.12 Section 175.12 Indians BUREAU OF... adjustments due to changes in the cost of purchased power or energy. Except for adjustments to rates due to changes in the cost of purchased power or energy, the Area Director shall adjust electric power rates...
Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources
NASA Astrophysics Data System (ADS)
Evans, D.; Fisk, I.; Holzman, B.; Melo, A.; Metson, S.; Pordes, R.; Sheldon, P.; Tiradani, A.
2011-12-01
Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and we conclude that it is most cost effective to purchase dedicated resources for the "base-line" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage occur.
Hybrid reduced order modeling for assembly calculations
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...
2015-08-14
While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits their full effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
Tensor Factorization for Low-Rank Tensor Completion.
Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao
2018-03-01
Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and thus cannot efficiently handle tensor data, which are naturally large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our method over the state of the art, including the TNN and matricization methods.
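A minimal sketch of the factorization idea, under our own assumptions about the details: the completed tensor is maintained as a slice-wise product of two smaller tensors in the Fourier domain along the third mode (the t-product), the factors are updated by alternating least squares, and the observed entries are re-imposed each sweep. This illustrates the technique on synthetic data; it is not the authors' released code.

import numpy as np

def tctf(m, mask, rank=5, iters=100):
    """Low-rank 3-way tensor completion by factorization (sketch).

    The estimate is kept as X * Y under the t-product, i.e. a slice-wise
    matrix product in the Fourier domain along mode 3. Alternating least
    squares updates X and Y; observed entries of m are re-imposed in the
    spatial domain each sweep.
    """
    n1, n2, n3 = m.shape
    c = mask * m                                  # current completed tensor
    rng = np.random.default_rng(0)
    x = rng.standard_normal((n1, rank, n3))
    y = rng.standard_normal((rank, n2, n3))
    for _ in range(iters):
        cf = np.fft.fft(c, axis=2)
        xf = np.fft.fft(x, axis=2)
        yf = np.fft.fft(y, axis=2)
        for k in range(n3):                       # per-frequency-slice ALS
            xf[:, :, k] = cf[:, :, k] @ np.linalg.pinv(yf[:, :, k])
            yf[:, :, k] = np.linalg.pinv(xf[:, :, k]) @ cf[:, :, k]
        zf = np.einsum('irk,rjk->ijk', xf, yf)
        z = np.real(np.fft.ifft(zf, axis=2))
        c = mask * m + (1 - mask) * z             # keep observed entries fixed
        x = np.real(np.fft.ifft(xf, axis=2))
        y = np.real(np.fft.ifft(yf, axis=2))
    return c

truth = (np.random.rand(30, 3) @ np.random.rand(3, 30))[:, :, None] * np.ones(5)
mask = (np.random.rand(*truth.shape) < 0.5).astype(float)
est = tctf(truth, mask)
print(np.linalg.norm((est - truth) * (1 - mask)) / np.linalg.norm(truth * (1 - mask)))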
3D Computer aided treatment planning in endodontics.
van der Meer, Wicher J; Vissink, Arjan; Ng, Yuan Ling; Gulabivala, Kishor
2016-02-01
Obliteration of the root canal system due to accelerated dentinogenesis and dystrophic calcification can challenge the achievement of root canal treatment goals. This paper describes the application of 3D digital mapping technology for predictable navigation of obliterated canal systems during root canal treatment, to avoid iatrogenic damage to the root. Digital endodontic treatment planning for anterior teeth with severely obliterated root canal systems was accomplished with the aid of computer software, based on cone beam computed tomography (CBCT) scans and intra-oral scans of the dentition. On the basis of these scans, endodontic guides were created for the planned treatment through digital design and rapid prototyping fabrication. The custom-made guides allowed for uncomplicated and predictable canal location and management. The method of digitally designing and rapid prototyping endodontic guides allows for reliable and predictable location of root canals in teeth with calcifically metamorphosed root canal systems. The endodontic directional guide facilitates difficult endodontic treatments at little additional cost. Copyright © 2016. Published by Elsevier Ltd.
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD) uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
The financial and health burden of diabetic ambulatory care sensitive hospitalisations in Mexico.
Lugo-Palacios, David G; Cairns, John
2016-01-01
To estimate the financial and health burden of diabetic ambulatory care sensitive hospitalisations (ACSH) in Mexico during 2001-2011. We identified ACSH due to diabetic complications in general hospitals run by local health ministries and estimated their financial cost using diagnosis-related groups. The health burden estimation assumes that patients would not have experienced complications if they had received appropriate primary care, and computes the associated Disability-Adjusted Life Years (DALYs). The financial cost of diabetic ACSH increased by 125% in real terms, and their health burden in 2010 accounted for 4.2% of total DALYs associated with diabetes in Mexico. Avoiding preventable hospitalisations could free resources within the health system for other health purposes. In addition, patients with ACSH suffer preventable losses of health that should be considered when assessing the performance of any primary care intervention.
D'Agostino, Fabio; Vellone, Ercole; Tontini, Francesco; Zega, Maurizio; Alvaro, Rosaria
2012-01-01
The aim of a nursing data set is to provide useful information for assessing the level of care and the state of health of the population. Currently, both in Italy and in other countries, this data is incomplete due to the lack of structured nursing documentation, making it indispensable to develop a Nursing Minimum Data Set (NMDS) using standard nursing language to evaluate care, costs and health requirements. The aim of the project described here is to create a computer system using standard nursing terms, with dedicated software which will aid the decision-making process and provide the relative documentation. This will make it possible to monitor nursing activity and costs and their impact on patients' health; adequate training and involvement of nursing staff will play a fundamental role.
Using MODIS Terra 250 m Imagery to Map Concentrations of Total Suspended Matter in Coastal Waters
NASA Technical Reports Server (NTRS)
Miller, Richard L.; McKee, Brent A.
2004-01-01
High concentrations of suspended particulate matter in coastal waters directly affect or govern numerous water column and benthic processes. The concentration of suspended sediments derived from bottom sediment resuspension or discharge of sediment-laden rivers is highly variable over a wide range of time and space scales. Although there has been considerable effort to use remotely sensed images to provide synoptic maps of suspended particulate matter, there are limited routine applications of this technology due in part to the low spatial resolution, long revisit period, or cost of most remotely sensed data. In contrast, near-daily coverage of medium-resolution data is available from the MODIS Terra instrument without charge from several data distribution gateways. Equally important, several display and processing programs are available that operate on low-cost computers.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-05
... provides an updated cost/benefit analysis providing an assessment of the benefits attained by HUD through... the scope of the existing computer matching program to now include the updated cost/ benefit analysis... change, and find a continued favorable examination of benefit/cost results; and (2) All parties certify...
26 CFR 1.179-5 - Time and manner of making election.
Code of Federal Regulations, 2010 CFR
2010-04-01
... desktop computer costing $1,500. On Taxpayer's 2003 Federal tax return filed on April 15, 2004, Taxpayer elected to expense under section 179 the full cost of the laptop computer and the full cost of the desktop... provided by the Internal Revenue Code, the regulations under the Code, or other guidance published in the...
Costs, needs must be balanced when buying computer systems.
Krantz, G M; Doyle, J J; Stone, S G
1989-06-01
A healthcare institution must carefully examine its internal needs and external requirements before selecting an information system. The system's costs must be carefully weighed because significant computer cost overruns can cripple overall hospital finances. A New Jersey hospital carefully studied these issues and determined that a contract with a regional data center was its best option.
Hohenforst-Schmidt, Wolfgang; Linsmeier, Bernd; Zarogoulidis, Paul; Freitag, Lutz; Darwiche, Kaid; Browning, Robert; Turner, J Francis; Huang, Haidong; Li, Qiang; Vogl, Thomas; Zarogoulidis, Konstantinos; Brachmann, Johannes; Rittger, Harald
2015-01-01
Tracheomalacia or tracheobronchomalacia (TM or TBM) is a common problem, especially for elderly patients who are often unfit for surgical techniques. Several surgical or minimally invasive techniques have already been described. Stenting is one option, but long-term stenting is generally accompanied by a high complication rate. Stent removal is more difficult for self-expandable nitinol stents, and for metallic stents in general, than for silicone stents. The main disadvantages of silicone stents in comparison to uncovered metallic stents are migration and plugging. We compared the operation time, and in particular the duration of a sufficient Dumon stent fixation, with different techniques in a patient with severe posttracheotomy TM and strongly reduced mobility of the vocal cords due to Parkinson's disease. The combined approach of simultaneous Dumon stenting and endoluminal transtracheal externalized suture under cone-beam computed tomography guidance with the Berci needle was by far the fastest approach compared to a (not performed) surgical intervention, or even purely endoluminal suturing through the rigid bronchoscope. The endoluminal transtracheal externalized suture took between 5 and 9 minutes with the Berci needle; the purely endoluminal approach needed 51 minutes. The alternative of tracheobronchoplasty was refused by the patient; in general, 180 minutes is calculated for this surgical approach. The costs of the different approaches are expected to vary widely, given that in Germany 1 minute in an operation room costs on average approximately 50-60€ inclusive of taxes; in our own hospital (tertiary level), it is nearly 30€ per minute in an operation room for a surgical approach. Calculating an additional 15 minutes for patient preparation and transfer to the wake-up room, and therefore a total duration inside the investigation room of 30 minutes, the cost per flexible bronchoscopy is on average less than 6€ per minute. Although Dumon stenting requires a set-up with more expensive anesthesiology support, and takes longer than a flexible investigation (estimated at 1 hour in an operation room), even without counting the costs of materials and specialized staff, the surgical approach would consume at least 3,000€ more than a minimally invasive approach performed with the Berci needle. This difference is due to the longer time of the surgical intervention, which is calculated at approximately 180 minutes, in comparison to the 60 minutes in the operation suite achieved with the non-surgical approach.
Filters for Improvement of Multiscale Data from Atomistic Simulations
Gardner, David J.; Reynolds, Daniel R.
2017-01-05
Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data yields dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
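The sketch below illustrates spectral filtering of a white-noise-corrupted series: estimate the noise floor from the high-frequency tail of the spectrum and zero the coefficients below a universal threshold. This is one standard automatic recipe; the paper's own rule for choosing the filtering level may differ.

import numpy as np

def spectral_filter(samples):
    """Denoise a time series in the Fourier domain.

    The noise level is estimated from the top quarter of the spectrum
    (assumed noise-dominated for additive white noise) and coefficients
    below a universal threshold are zeroed.
    """
    n = len(samples)
    coeffs = np.fft.rfft(samples)
    tail = coeffs[3 * len(coeffs) // 4:]           # high-frequency tail
    sigma = np.sqrt(np.mean(np.abs(tail) ** 2))    # noise-floor estimate
    thresh = sigma * np.sqrt(2.0 * np.log(n))      # universal threshold
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return np.fft.irfft(coeffs, n=n)

t = np.linspace(0.0, 10.0, 1024)
clean = np.sin(t) + 0.3 * np.sin(5.0 * t)
noisy = clean + 0.5 * np.random.default_rng(1).standard_normal(t.size)
print(np.std(noisy - clean), np.std(spectral_filter(noisy) - clean))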
Optimized distributed computing environment for mask data preparation
NASA Astrophysics Data System (ADS)
Ahn, Byoung-Sup; Bang, Ju-Mi; Ji, Min-Kyu; Kang, Sun; Jang, Sung-Hoon; Choi, Yo-Han; Ki, Won-Tai; Choi, Seong-Woon; Han, Woo-Sung
2005-11-01
As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) increases severely and OPC is applied beyond critical layers to non-critical layers. The transformation of designed pattern data by the OPC operation introduces complexity, which causes runtime overheads in subsequent steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing to reduce the runtime of mask data preparation rather than exploiting the design hierarchy. Distributed computing uses a cluster of computers connected to a local network system. However, two things limit the benefit of distributed computing in MDP. First, running every MDP job sequentially with the maximum number of available CPUs is not efficient compared to executing MDP jobs in parallel, due to the input data characteristics. Second, the runtime improvement gained per added CPU is not sufficient, since the scalability of fracturing tools is limited. In this paper, we discuss an optimum load-balancing environment that increases the utilization of a distributed computing system by assigning an appropriate number of CPUs to each input design data set. We also describe distributed processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.
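As an illustration of the load-balancing idea, the sketch below greedily assigns jobs (layers with estimated fracturing costs) to the currently least-loaded host, largest job first. The job names and cost estimates are made up, and this generic longest-processing-time heuristic stands in for whatever scheduler the paper actually uses.

import heapq

def balance(jobs, n_hosts):
    """Greedy longest-processing-time assignment of jobs to hosts.

    jobs: {name: estimated cost}; larger jobs are placed first on the
    currently least-loaded host.
    """
    hosts = [(0.0, i, []) for i in range(n_hosts)]
    heapq.heapify(hosts)
    for name, cost in sorted(jobs.items(), key=lambda kv: -kv[1]):
        load, i, assigned = heapq.heappop(hosts)   # least-loaded host
        assigned.append(name)
        heapq.heappush(hosts, (load + cost, i, assigned))
    return hosts

jobs = {"metal1": 90.0, "via1": 40.0, "poly": 70.0, "active": 30.0, "metal2": 85.0}
for load, i, assigned in sorted(balance(jobs, 2)):
    print(f"host {i}: load={load}, jobs={assigned}")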
20 CFR 226.13 - Cost-of-living increase in employee vested dual benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... RAILROAD RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee... increase is based on the cost-of-living increases in social security benefits during the period from...
Craniofacial Reconstruction by a Cost-Efficient Template-Based Process Using 3D Printing
Beiglboeck, Fabian; Honigmann, Philipp; Jaquiéry, Claude; Thieringer, Florian
2017-01-01
Summary: Craniofacial defects often result in aesthetic and functional deficits, which affect the patient's psyche and wellbeing. Patient-specific implants remain the optimal solution, but their use is limited or impractical due to their high costs. This article describes a fast and cost-efficient workflow for in-house manufactured patient-specific implants for craniofacial reconstruction and cranioplasty. As a proof of concept, we present a case of reconstruction of a craniofacial defect with involvement of the supraorbital rim. The hybrid manufacturing process combines additive manufacturing with silicone molding and an intraoperative, manual fabrication process. A computer-aided design template is 3D printed from thermoplastics by a fused deposition modeling 3D printer and then silicone molded manually. After sterilization of the patient-specific mold, it is used intraoperatively to produce an implant from polymethylmethacrylate. Because these two processes are straightforward, the procedure can be kept very simple and no advanced equipment is needed, resulting in minimal financial expenses. The whole fabrication of the mold is performed within approximately 2 hours, depending on the template's size and volume. This reliable technique is easy to adopt and suitable for every health facility, especially those with limited financial resources in less privileged countries, enabling many more patients to benefit from patient-specific treatment. PMID:29263977
Transmembrane helices containing a charged arginine are thermodynamically stable.
Ulmschneider, Martin B; Ulmschneider, Jakob P; Freites, J Alfredo; von Heijne, Gunnar; Tobias, Douglas J; White, Stephen H
2017-10-01
Hydrophobic amino acids are abundant in transmembrane (TM) helices of membrane proteins. Charged residues are sparse, apparently due to the unfavorable energetic cost of partitioning charges into nonpolar phases. Nevertheless, conserved arginine residues within TM helices regulate vital functions, such as ion channel voltage gating and integrin receptor inactivation. The energetic cost of arginine in various positions along hydrophobic helices has been controversial. Potential of mean force (PMF) calculations from atomistic molecular dynamics simulations predict very large energetic penalties, while in vitro experiments with Sec61 translocons indicate much smaller penalties, even for arginine in the center of hydrophobic TM helices. Resolution of this conflict has proved difficult, because the in vitro assay utilizes the complex Sec61 translocon, while the PMF calculations rely on the choice of simulation system and reaction coordinate. Here we present the results of computational and experimental studies that permit direct comparison with the Sec61 translocon results. We find that the Sec61 translocon mediates less efficient membrane insertion of Arg-containing TM helices compared with our computational and experimental bilayer-insertion results. In the simulations, a combination of arginine snorkeling, bilayer deformation, and peptide tilting is sufficient to lower the penalty of Arg insertion to an extent such that a hydrophobic TM helix with a central Arg residue readily inserts into a model membrane. Less favorable insertion by the translocon may be due to the decreased fluidity of the endoplasmic reticulum (ER) membrane compared with pure palmitoyloleoyl-phosphocholine (POPC). Nevertheless, our results provide an explanation for the differences between PMF- and experiment-based penalties for Arg burial.
A Fast and Flexible Method for Meta-Map Building for ICP-Based SLAM
NASA Astrophysics Data System (ADS)
Kurian, A.; Morin, K. W.
2016-06-01
Recent developments in LiDAR sensors make mobile mapping fast and cost effective. These sensors generate a large amount of data, which in turn improves the coverage and detail of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. With good GNSS coverage, building a map is a well-addressed problem; but in an indoor environment GNSS reception is limited, and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and a map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We have used the 3D point cloud generated by Leica Geosystems' Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy which does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. In order to speed up the correspondence-finding step, a dual kd-tree and circular buffer architecture is proposed. We have shown that the proposed method can run in real time and has excellent navigation accuracy characteristics.
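Two of the ingredients above are easy to sketch generically: grid-based sub-sampling before ICP and kd-tree nearest-neighbour correspondence search. The sketch below uses a plain voxel-centroid filter and a single scipy cKDTree on random data; the paper's sub-sampling is feature-aware and its dual-tree/circular-buffer design is more elaborate, so this is only a baseline illustration.

import numpy as np
from scipy.spatial import cKDTree

def voxel_downsample(points, voxel=0.05):
    """Keep one representative point (the centroid) per occupied voxel.

    A common sub-sampling step before ICP; unlike the paper's strategy,
    this simple grid filter is not feature-aware.
    """
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, 3))
    counts = np.bincount(inverse).astype(float)
    np.add.at(sums, inverse, points)               # accumulate per voxel
    return sums / counts[:, None]

def correspondences(source, target_tree, max_dist=0.2):
    """Nearest-neighbour matching against a prebuilt kd-tree (the
    expensive step the paper accelerates; a single cKDTree stands in)."""
    d, idx = target_tree.query(source, distance_upper_bound=max_dist)
    ok = np.isfinite(d)                            # unmatched points get inf
    return source[ok], idx[ok]

rng = np.random.default_rng(0)
target = rng.random((5000, 3))
source = target[:1000] + 0.01
tree = cKDTree(voxel_downsample(target))
src, matches = correspondences(voxel_downsample(source), tree)
print(len(src), "correspondences")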
DEP : a computer program for evaluating lumber drying costs and investments
Stewart Holmes; George B. Harpole; Edward Bilek
1983-01-01
The DEP computer program is a modified discounted cash flow computer program designed for economic analysis of wood drying processes. Wood drying processes differ from other processes because of the large amounts of working capital required to finance inventories, and because of the relatively large share of costs charged to inventory...
Thermodynamic cost of computation, algorithmic complexity and the information metric
NASA Technical Reports Server (NTRS)
Zurek, W. H.
1989-01-01
Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.
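Two standard statements make the connection concrete (our rendering, not quoted from the abstract): Landauer's bound on erasing a single bit, and the observation that a record $x$ can first be compressed to roughly its algorithmic complexity $K(x)$ bits, so the unavoidable thermodynamic cost of resetting it scales with $K(x)$ rather than with its raw length:

\[
W_{\mathrm{bit}} \;\ge\; k_B T \ln 2, \qquad
W_{\mathrm{erase}}(x) \;\gtrsim\; k_B T \ln 2 \cdot K(x).
\]

This is the sense in which Maxwell's demon is limited: the work extracted using a measurement record is offset by the cost of erasing that record.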
Weight and cost estimating relationships for heavy lift airships
NASA Technical Reports Server (NTRS)
Gray, D. W.
1979-01-01
Weight and cost estimating relationships, including additional parameters that influence the cost and performance of heavy-lift airships (HLA), are discussed. Inputs to a closed-loop computer program, consisting of useful load, forward speed, lift module positive or negative thrust, and rotors and propellers, are examined. Detail is given to the HLA cost and weight program (HLACW), which computes component weights, vehicle size, buoyancy lift, rotor and propeller thrust, and engine horsepower. This program solves the problem of interrelating the different aerostat, rotor, engine, and propeller sizes. Six sets of 'default parameters' are left for the operator to change during each computer run, enabling slight data manipulation without altering the program.
ERIC Educational Resources Information Center
Chamberlain, Ed
A cost benefit study was conducted to determine the effectiveness of a computer assisted instruction/computer management system (CAI/CMS) as an alternative to conventional methods of teaching reading within Chapter 1 and DPPF funded programs of the Columbus (Ohio) Public Schools. The Chapter 1 funded Compensatory Language Experiences and Reading…
Application of the System Identification Technique to Goal-Directed Saccades.
1984-07-30
1983 to May 31, 1984 by the AFOSR under Grant No. AFOSR-83-0187. 1. Salaries & Wages: $7,257. 2. Employee Benefits: $486. 3. Indirect Costs: $1,177. 4. Equipment: $2,127 (DEC VT100 terminal, computer terminal table and chair, computer interface). 5. Travel: $672. 6. Miscellaneous Expenses: $281 (computer costs, telephone, xeroxing, report costs). Total: $12,000.
NASA Technical Reports Server (NTRS)
1973-01-01
An improved method for estimating aircraft weight and cost using a unique and fundamental approach was developed. The results of this study were integrated into a comprehensive digital computer program, which is intended for use at the preliminary design stage of aircraft development. The program provides a means of computing absolute values for weight and cost, and enables the user to perform trade studies with a sensitivity to detail design and overall structural arrangement. Both batch and interactive graphics modes of program operation are available.
Wavelet Algorithms for Illumination Computations
NASA Astrophysics Data System (ADS)
Schroder, Peter
One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
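For reference (standard notation, not quoted from the dissertation), the radiosity problem is the Fredholm integral equation of the second kind

\[
B(x) \;=\; E(x) \;+\; \rho(x) \int_{S} \frac{\cos\theta_x \,\cos\theta_y}{\pi\,\lVert x-y\rVert^{2}}\, V(x,y)\, B(y)\;\mathrm{d}A_y,
\]

where $B$ is radiosity, $E$ emission, $\rho$ the diffuse reflectance, and $V$ the binary visibility term. Hierarchical and wavelet methods exploit the smoothness of this kernel away from its singularity to represent the Galerkin matrix sparsely, which is the source of the $O(k^2 + n)$ cost quoted above.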
New reflective symmetry design capability in the JPL-IDEAS Structure Optimization Program
NASA Technical Reports Server (NTRS)
Strain, D.; Levy, R.
1986-01-01
The JPL-IDEAS antenna structure analysis and design optimization computer program was modified to process half structure models of symmetric structures subjected to arbitrary external static loads, synthesize the performance, and optimize the design of the full structure. Significant savings in computation time and cost (more than 50%) were achieved compared to the cost of full model computer runs. The addition of the new reflective symmetry analysis design capabilities to the IDEAS program allows processing of structure models whose size would otherwise prevent automated design optimization. The new program produced synthesized full model iterative design results identical to those of actual full model program executions at substantially reduced cost, time, and computer storage.
NASA Technical Reports Server (NTRS)
Lansing, F. L.; Strain, D. M.; Chai, V. W.; Higgins, S.
1979-01-01
The Energy Consumption Computer Program was developed to simulate building heating and cooling loads and compute thermal and electric energy consumption and cost. This article reports on new algorithms and modifications made in an effort to widen the areas of application. The program structure was rewritten accordingly to refine and advance the building model and to further reduce the processing time and cost. The program is noted for its very low cost and ease of use compared to other available codes. The accuracy of computations is not sacrificed, however, since the results are expected to lie within ±10% of actual energy meter readings.
Development of a small-scale computer cluster
NASA Astrophysics Data System (ADS)
Wilhelm, Jay; Smith, Justin T.; Smith, James E.
2008-04-01
An increase in demand for computing power in academia has necessitated the need for high-performance machines. The computing power of a single processor has been steadily increasing, but lags behind the demand for fast simulations. Since a single processor has hard limits to its performance, a cluster of computers, with the proper software, can multiply the performance of a single computer. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but they are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers designed for clusters meet high-availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components, multiplying the performance of a single desktop machine while minimizing occupied space and remaining cost effective.
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2013 CFR
2013-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2014 CFR
2014-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2010 CFR
2010-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
14 CFR 152.319 - Monitoring and reporting of program performance.
Code of Federal Regulations, 2012 CFR
2012-01-01
... established for the period, made, if applicable, on a quantitative basis related to cost data for computation... established for the period, made, if applicable, on a quantitative basis related to costs for computation of...
A perspective on future directions in aerospace propulsion system simulation
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Szuch, John R.; Gaugler, Raymond E.; Wood, Jerry R.
1989-01-01
The design and development of aircraft engines is a lengthy and costly process using today's methodology. This is due, in large measure, to the fact that present methods rely heavily on experimental testing to verify the operability, performance, and structural integrity of components and systems. The potential exists for achieving significant speedups in the propulsion development process through increased use of computational techniques for simulation, analysis, and optimization. This paper outlines the concept and technology requirements for a Numerical Propulsion Simulation System (NPSS) that would provide capabilities to do interactive, multidisciplinary simulations of complete propulsion systems. By combining high performance computing hardware and software with state-of-the-art propulsion system models, the NPSS will permit the rapid calculation, assessment, and optimization of subcomponent, component, and system performance, durability, reliability, and weight before committing to building hardware.
Probabilistic Analysis of Gas Turbine Field Performance
NASA Technical Reports Server (NTRS)
Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.
2002-01-01
A gas turbine thermodynamic cycle was computationally simulated and probabilistically evaluated in view of several uncertainties in the performance parameters, which are indices of gas turbine health. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design, enhance performance, increase system availability, and make the system cost effective. The analysis leads to the selection of the appropriate measurements to be used in gas turbine health determination and to the identification of the most critical measurements and parameters. Probabilistic analysis aims at unifying and improving the control and health monitoring of gas turbine aero-engines by increasing the quality and quantity of information available about the engine's health and performance.
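The abstract does not reproduce its cycle model or parameter values, so the following is only a generic sketch of how cumulative distribution functions and sensitivity factors emerge from Monte Carlo sampling of thermodynamic random variables; the response model and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical uncertain performance parameters (not the paper's values)
eta_comp = rng.normal(0.86, 0.01, n)   # compressor efficiency
eta_turb = rng.normal(0.90, 0.01, n)   # turbine efficiency
t4 = rng.normal(1400.0, 15.0, n)       # turbine inlet temperature [K]

# Placeholder linear response model for overall thermal efficiency
eta_th = 0.02 + 0.25 * eta_comp + 0.30 * eta_turb + 1.0e-4 * (t4 - 1400.0)

# Empirical CDF at a probe value, then simple correlation-based
# sensitivity factors identifying the most critical design variables
print("P(eta_th < 0.48) =", np.mean(eta_th < 0.48))
for name, x in [("eta_comp", eta_comp), ("eta_turb", eta_turb), ("T4", t4)]:
    print(f"sensitivity({name}) = {np.corrcoef(x, eta_th)[0, 1]:.2f}")
```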
Learning to assign binary weights to binary descriptor
NASA Astrophysics Data System (ADS)
Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun
2016-10-01
Constructing robust binary local feature descriptors is receiving increasing interest due to their binary nature, which enables fast processing while requiring significantly less memory than their floating-point competitors. To bridge the performance gap between binary and floating-point descriptors without increasing the computational cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, on the premise that each bit may contribute differently to distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, a binary approximation of the float weights is computed using an efficient alternating greedy strategy, which significantly improves discriminative power while preserving the fast matching advantage. Extensive experimental results on two challenging datasets (the Brown dataset and the Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method.
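The learned weights themselves are the paper's contribution and are not reproduced here; the sketch below only illustrates why binary (0/1) weights preserve the fast-matching advantage: a weight mask folds directly into the XOR-popcount Hamming distance. The descriptor width, bit layout, and mask are hypothetical:

```python
def weighted_hamming(a: int, b: int, mask: int) -> int:
    """Hamming distance between binary descriptors a and b, counting
    only the bit positions selected by the binary weight mask."""
    return bin((a ^ b) & mask).count("1")

# 16-bit toy descriptors; the mask keeps the 8 bits assumed most
# discriminative (in practice the mask would be learned, not hand-picked)
d1 = 0b1011001110001111
d2 = 0b1011101010011011
mask = 0b1111000011110000
print(weighted_hamming(d1, d2, mask))  # -> 1
```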
A Development Architecture for Serious Games Using BCI (Brain Computer Interface) Sensors
Sung, Yunsick; Cho, Kyungeun; Um, Kyhyun
2012-01-01
Games that use brainwaves, acquired via brain–computer interface (BCI) devices, to improve brain functions are known as BCI serious games. Due to the difficulty of developing BCI serious games, various BCI engines and authoring tools are required, and these reduce development time and cost. However, it is also desirable to reduce the amount of technical knowledge of brain functions and BCI devices needed by game developers. Moreover, a systematic BCI serious game development process is required. In this paper, we present a methodology for the development of BCI serious games. We describe the architecture, authoring tools, and development process of the proposed methodology, and apply it to the development of a game for patients with mild cognitive impairment as an example. This application demonstrates that BCI serious games can be developed on the basis of expert-verified theories. PMID:23202227
Iwamoto, Masami; Nakahira, Yuko; Kimpara, Hideyuki
2015-01-01
Active safety devices such as automatic emergency braking (AEB) and precrash seat belts have the potential to further reduce the number of fatalities due to automotive accidents. However, their effectiveness should be investigated through more accurate estimation of their interaction with human bodies. Computational human body models are suitable for such investigation, especially when considering the effects of muscular tone on occupant motions and injury outcomes. However, the conventional modeling approaches, such as multibody models and detailed finite element (FE) models, have advantages and disadvantages in computational cost and in injury prediction considering muscular tone effects. The objective of this study is to develop and validate a human body FE model with whole-body muscles, which can be used for detailed investigation of the interaction between human bodies and vehicular structures, including some safety devices, precrash and during a crash, with relatively low computational costs. In this study, we developed a human body FE model called THUMS (Total HUman Model for Safety) with the body size of a 50th percentile adult male (AM50) and a sitting posture. The model has anatomical structures of bones, ligaments, muscles, brain, and internal organs. The total number of elements is 281,260, which permits relatively low computational costs. Deformable material models were assigned to all body parts. The muscle-tendon complexes were modeled by truss elements with a Hill-type muscle material and seat belt elements with a tension-only material. The THUMS was validated against 35 series of cadaver or volunteer test data on frontal, lateral, and rear impacts. Model validations for 15 series of cadaver test data associated with frontal impacts are presented in this article. The THUMS with a vehicle sled model was applied to investigate the effects of muscle activation on occupant kinematics and injury outcomes in specific frontal impact situations with AEB. In the validations using 5 series of cadaver test data, force-time curves predicted by the THUMS were quantitatively evaluated using CORrelation and Analysis (CORA), showing good or acceptable agreement with cadaver test data in most cases. The investigation of muscular effects showed that muscle activation levels and timing had significant effects on occupant kinematics and injury outcomes. Although further studies on accident injury reconstruction are needed, the THUMS has the potential to predict occupant kinematics and injury outcomes considering muscular tone effects with relatively low computational costs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a) Required...
26 CFR 1.611-2 - Rules applicable to mines, oil and gas wells, and other natural deposits.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Rules applicable to mines, oil and gas wells, and other natural deposits. (a) Computation of cost depletion of mines, oil and gas wells, and other natural deposits. (1) The basis upon which cost depletion... for the taxable year, the cost depletion for that year shall be computed by dividing such amount by...
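As a hedged illustration of the computation this snippet describes (a paraphrase of the usual cost-depletion rule, not a quotation of the regulation):

```latex
\text{depletion unit} \;=\; \frac{\text{adjusted basis of the property}}
{\text{units remaining at year end} \;+\; \text{units sold in the year}},
\qquad
\text{cost depletion} \;=\; \text{depletion unit} \times \text{units sold}
```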
Guastello, Stephen J; Gorin, Hillary; Huschen, Samuel; Peters, Natalie E; Fabisch, Megan; Poston, Kirsten
2012-10-01
It has become well established in laboratory experiments that switching tasks, perhaps due to interruptions at work, incurs costs in the response time to complete the next task. Conditions are also known that exaggerate or lessen these switching costs. Although switching costs can contribute to fatigue, task switching can also be an adaptive response to fatigue. The present study introduces a new research paradigm for studying the emergence of voluntary task switching regimes, the self-organizing processes therein, and the possibly conflicting roles of switching costs and minimum entropy. Fifty-four undergraduates performed 7 different computer-based cognitive tasks, producing sets of 49 responses, under instructional conditions either requiring task quotas or not. The sequences of task choices were analyzed using orbital decomposition to extract pattern types and lengths, which were then classified and compared with regard to Shannon entropy, topological entropy, number of task switches involved, and overall performance. Results indicated that similar but different patterns were generated under the two instructional conditions, and better performance was associated with lower topological entropy. Both entropy metrics were associated with the amount of voluntary task switching. Future research should explore conditions affecting the trade-off between switching costs and entropy, levels of automaticity between task elements, and the role of voluntary switching regimes in fatigue.
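Of the two entropy metrics named above, the Shannon entropy of a choice sequence is the simpler to state; a minimal sketch follows (the 10-trial sequence is invented, and orbital decomposition and topological entropy involve machinery not shown here):

```python
from collections import Counter
from math import log2

def shannon_entropy(sequence):
    """Shannon entropy (bits) of the symbol distribution in a sequence."""
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in Counter(sequence).values())

# A short excerpt of a hypothetical participant's task-choice sequence
choices = [1, 1, 3, 2, 2, 2, 7, 5, 5, 4]
print(round(shannon_entropy(choices), 3))
```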
Hulten, Edward; Goehler, Alexander; Bittencourt, Marcio Sommer; Bamberg, Fabian; Schlett, Christopher L; Truong, Quynh A; Nichols, John; Nasir, Khurram; Rogers, Ian S; Gazelle, Scott G; Nagurney, John T; Hoffmann, Udo; Blankstein, Ron
2013-09-01
Coronary computed tomographic angiography (cCTA) allows rapid, noninvasive exclusion of obstructive coronary artery disease (CAD). However, concern exists whether implementation of cCTA in the assessment of patients presenting to the emergency department with acute chest pain will lead to increased downstream testing and costs compared with alternative strategies. Our aim was to compare observed actual costs of usual care (UC) with projected costs of a strategy including early cCTA in the evaluation of patients with acute chest pain in the Rule Out Myocardial Infarction Using Computer Assisted Tomography I (ROMICAT I) study. We compared cost and hospital length of stay of UC observed among 368 patients enrolled in the ROMICAT I study with projected costs of management based on cCTA. Costs of UC were determined by an electronic cost accounting system. Notably, UC was not influenced by cCTA results because patients and caregivers were blinded to the cCTA results. Costs after early implementation of cCTA were estimated assuming changes in management based on cCTA findings of the presence and severity of CAD. Sensitivity analysis was used to test the influence of key variables on both outcomes and costs. We determined that in comparison with UC, cCTA-guided triage, whereby patients with no CAD are discharged, could reduce total hospital costs by 23% (P<0.001). However, when the prevalence of obstructive CAD increases, index hospitalization cost increases such that when the prevalence of ≥ 50% stenosis is >28% to 33%, the use of cCTA becomes more costly than UC. cCTA may be a cost-saving tool in acute chest pain populations that have a prevalence of potentially obstructive CAD <30%. However, increased cost would be anticipated in populations with higher prevalence of disease.
Role of Computational Fluid Dynamics and Wind Tunnels in Aeronautics R and D
NASA Technical Reports Server (NTRS)
Malik, Murjeeb R.; Bushnell, Dennis M.
2012-01-01
The purpose of this report is to investigate the status and future projections for the question of supplantation of wind tunnels by computation in design and to intuit the potential impact of computation approaches on wind-tunnel utilization all with an eye toward reducing the infrastructure cost at aeronautics R&D centers. Wind tunnels have been closing for myriad reasons, and such closings have reduced infrastructure costs. Further cost reductions are desired, and the work herein attempts to project which wind-tunnel capabilities can be replaced in the future and, if possible, the timing of such. If the possibility exists to project when a facility could be closed, then maintenance and other associated costs could be rescheduled accordingly (i.e., before the fact) to obtain an even greater infrastructure cost reduction.
Computing at the speed limit (supercomputers)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernhard, R.
1982-07-01
The author discusses how unheralded efforts in the United States, mainly in universities, have removed major stumbling blocks to building cost-effective superfast computers for scientific and engineering applications within five years. These computers would have sustained speeds of billions of floating-point operations per second (flops), whereas with the fastest machines today the top sustained speed is only 25 million flops, with bursts to 160 megaflops. Cost-effective superfast machines can be built because of advances in very large-scale integration and the special software needed to program the new machines. VLSI greatly reduces the cost per unit of computing power. The development of such computers would come at an opportune time. Although the US leads the world in large-scale computer technology, its supremacy is now threatened, not surprisingly, by the Japanese. Publicized reports indicate that the Japanese government is funding a cooperative effort by commercial computer manufacturers to develop superfast computers, about 1000 times faster than modern supercomputers. The US computer industry, by contrast, has balked at attempting to boost computer power so sharply because of the uncertain market for the machines and the failure of similar projects in the past to show significant results.
Dong, Hengjin; Buxton, Martin
2006-01-01
The objective of this study is to apply a Markov model to compare the cost-effectiveness of total knee replacement (TKR) using computer-assisted surgery (CAS) with that of TKR using a conventional manual method, in the absence of formal clinical trial evidence. A structured search was carried out to identify evidence relating to the clinical outcome, cost, and effectiveness of TKR. Nine Markov states were identified based on the progress of the disease after TKR. Effectiveness was expressed in quality-adjusted life years (QALYs). The simulation was carried out initially for 120 monthly cycles, starting with 1,000 TKRs. A discount rate of 3.5 percent was used for both cost and effectiveness in the incremental cost-effectiveness analysis. A probabilistic sensitivity analysis was then carried out using a Monte Carlo approach with 10,000 iterations. Computer-assisted TKR was a long-term cost-effective technology, although the QALYs gained were small. After the first 2 years, computer-assisted TKR was dominant: it was both cheaper and produced more QALYs. The incremental cost-effectiveness ratio (ICER) was sensitive to the "effect of CAS," to the CAS extra cost, and to the utility of the state "Normal health after primary TKR," but it was not sensitive to the utilities of the other Markov states. Both probabilistic and deterministic analyses produced similar cumulative serious or minor complication rates and complex or simple revision rates. They also produced similar ICERs. Compared with conventional TKR, computer-assisted TKR is a cost-saving technology in the long term and may offer small additional QALYs. The "effect of CAS" is to reduce revision rates and complications through more accurate and precise alignment; although the conclusions from the model, even when allowing for a full probabilistic analysis of uncertainty, are clear, the "effect of CAS" on the rate of revisions awaits long-term clinical evidence.
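The study's nine-state model and its published transition data are not reproduced here; the sketch below only shows the mechanics of a monthly-cycle Markov cost-utility comparison with 3.5% annual discounting and an ICER. The three states, transition probabilities, costs, and utilities are all invented for illustration:

```python
import numpy as np

def run_markov(P, costs, utils, n0, cycles=120, annual_disc=0.035):
    """Discounted total cost and QALYs for a cohort Markov model
    with monthly cycles; P is a row-stochastic transition matrix."""
    monthly_disc = (1 + annual_disc) ** (1 / 12) - 1
    x = np.asarray(n0, dtype=float)
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        w = 1.0 / (1 + monthly_disc) ** t
        total_cost += w * (x @ costs)
        total_qaly += w * (x @ utils) / 12   # utilities are annual weights
        x = x @ P
    return total_cost, total_qaly

# Hypothetical 3-state model: well after TKR, revision, dead
P_conv = np.array([[0.995, 0.004, 0.001],
                   [0.900, 0.098, 0.002],
                   [0.000, 0.000, 1.000]])
P_cas = P_conv.copy()
P_cas[0] = [0.997, 0.002, 0.001]           # CAS assumed to reduce revisions
costs = np.array([20.0, 900.0, 0.0])       # cost per cycle in each state
utils = np.array([0.80, 0.50, 0.0])        # QALY weight per year

c0, q0 = run_markov(P_conv, costs, utils, [1000, 0, 0])
c1, q1 = run_markov(P_cas, costs + [5.0, 0.0, 0.0], utils, [1000, 0, 0])
print(f"ICER = {(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
```

Here `costs + [5.0, 0.0, 0.0]` crudely models an extra per-cycle cost of CAS; in the actual study the CAS premium attaches to the surgery itself.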
Performance analysis of a new radial-axial flux machine with SMC cores and ferrite magnets
NASA Astrophysics Data System (ADS)
Liu, Chengcheng; Wang, Youhua; Lei, Gang; Guo, Youguang; Zhu, Jianguo
2017-05-01
Soft magnetic composite (SMC) is a popular material for designing new 3D-flux electrical machines, as it offers isotropic magnetic characteristics, low eddy current loss, and greater design flexibility than electrical steel. The axial flux machine (AFM) with stator tooth tips extended in both the radial and circumferential directions is a good example and has been investigated in recent years. Building on the 3D-flux AFM and the radial flux machine, this paper proposes a new radial-axial flux machine (RAFM) with SMC cores and ferrite magnets, which achieves very high torque density even though low-cost, low-energy ferrite magnets are used. Moreover, the overall cost of the RAFM is quite low, since the SMC cores reduce manufacturing cost and the ferrite magnets reduce material cost. The 3D finite element method (FEM) is used to calculate the magnetic flux density distribution and the electromagnetic parameters. For the core loss calculation, a rotational core loss computation method is used, based on experimental results from a previous 3D magnetic property tester.
[Measures to reduce lighting-related energy use and costs at hospital nursing stations].
Su, Chiu-Ching; Chen, Chen-Hui; Chen, Shu-Hwa; Ping, Tsui-Chu
2011-06-01
Hospitals have long been expected to deliver medical services in an environment that is comfortable and bright. This expectation keeps hospital energy demand stubbornly high and energy costs spiraling due to escalating utility fees. Hospitals must identify appropriate strategies to control electricity usage in order to control operating costs effectively. This paper proposes several electricity saving measures that both support government policies aimed at reducing global warming and help reduce energy consumption at the authors' hospital. The authors held educational seminars, established a website teaching energy saving methods, maximized facility and equipment use effectiveness (e.g., adjusting lamp placements, power switch and computer saving modes), posted signs promoting electricity saving, and established a regularized energy saving review mechanism. After implementation, average nursing staff energy saving knowledge had risen from 71.8% to 100% and total nursing station electricity costs fell from NT$16,456 to NT$10,208 per month, representing an effective monthly savings of 37.9% (NT$6,248). This project demonstrated the ability of a program designed to slightly modify nursing staff behavior to achieve effective and meaningful results in reducing overall electricity use.
Quasi-static earthquake cycle simulation based on nonlinear viscoelastic finite element analyses
NASA Astrophysics Data System (ADS)
Agata, R.; Ichimura, T.; Hyodo, M.; Barbot, S.; Hori, T.
2017-12-01
To explain earthquake generation processes, simulation methods for earthquake cycles have been studied. For such simulations, the combination of the rate- and state-dependent friction law on the fault plane and the boundary integral method based on Green's functions in an elastic half space is widely used (e.g. Hori 2009; Barbot et al. 2012). In this approach, the stress change around the fault plane due to crustal deformation can be computed analytically, while the effects of complex physics such as mantle rheology and gravity are generally not taken into account. To consider such effects, we seek to develop an earthquake cycle simulation combining crustal deformation computation based on the finite element (FE) method with the rate- and state-dependent friction law. Since the drawback of this approach is the computational cost of obtaining numerical solutions, we adopt a recently developed fast and scalable FE solver (Ichimura et al. 2016), which assumes the use of supercomputers, to solve the problem within a realistic time. As in the previous approach, we solve the governing equations consisting of the rate- and state-dependent friction law. In solving the equations, we compute stress changes along the fault plane due to crustal deformation using FE simulation, instead of computing them by superimposing slip response functions as in the previous approach. In the stress change computation, we take into account nonlinear viscoelastic deformation in the asthenosphere. In the presentation, we will show simulation results for a normative three-dimensional problem, in which a circular velocity-weakening area is set within a square fault plane. The results with and without nonlinear viscosity in the asthenosphere will be compared. We also plan to apply the developed code to simulate the post-earthquake deformation of a megathrust earthquake, such as the 2011 Tohoku earthquake. Acknowledgment: The results were obtained using the K computer at RIKEN (Proposal number hp160221).
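For reference, a common form of the rate- and state-dependent friction law (the Dieterich aging law, stated generically rather than as the authors' exact formulation) is:

```latex
\mu(V,\theta) \;=\; \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c},
\qquad
\frac{\mathrm{d}\theta}{\mathrm{d}t} \;=\; 1 - \frac{V\theta}{D_c}
```

Here V is the slip velocity, θ the state variable, and D_c the characteristic slip distance; a - b < 0 characterizes velocity-weakening regions such as the circular patch in the normative problem.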
Vivekanandhan, Sapthagirivasan; Subramaniam, Janarthanam; Mariamichael, Anburajan
2016-10-01
Hip fractures due to osteoporosis are increasing progressively across the globe. It is also difficult for fractured patients to undergo dual-energy X-ray absorptiometry scans, due to the complicated protocol and the associated cost. The use of computed tomography for fracture treatment has become common in clinical practice. It would be helpful for orthopaedic clinicians if they could get additional information related to bone strength for better treatment planning. The aim of our study was to develop an automated system to segment the femoral neck region, extract cortical and trabecular bone parameters, and assess bone strength using an isotropic volume construction from clinical computed tomography images. Right hip computed tomography and right femur dual-energy X-ray absorptiometry measurements were taken from 50 South Indian females aged 30-80 years. Each computed tomography image volume was reconstructed to form isotropic volumes. An automated system incorporating active contour models was used to segment the neck region. A minimum distance boundary method was applied to isolate the cortical and trabecular bone components. The trabecular bone was enhanced and segmented using a trabecular enrichment approach. The cortical and trabecular bone features were extracted and statistically compared with dual-energy X-ray absorptiometry measured femoral neck bone mineral density. The extracted bone measures demonstrated a significant correlation with neck bone mineral density (r > 0.7, p < 0.001). The inclusion of cortical measures, along with the trabecular measures extracted after the isotropic volume construction and trabecular enrichment procedures, resulted in better estimation of bone strength. The findings suggest that the proposed system, using clinical computed tomography images scanned at low dose, could eventually be helpful in osteoporosis diagnosis and treatment planning. © IMechE 2016.
Fixed-point image orthorectification algorithms for reduced computational cost
NASA Astrophysics Data System (ADS)
French, Joseph Clinton
Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to the floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection using fixed-point arithmetic, which removes the floating point operations and reduces the processing time by operating only on integers. The second modification replaces the division inherent in projection with multiplication by the inverse. Computing the inverse exactly would require iteration, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
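The dissertation's actual formulation is not reproduced here; the following minimal sketch (Q16 fixed point, a hypothetical nominal depth z0) only illustrates the two modifications named above, integer-only arithmetic and replacement of a division by multiplication with a polynomially approximated reciprocal:

```python
SHIFT = 16
ONE = 1 << SHIFT            # 1.0 in Q16 fixed point

def fmul(a: int, b: int) -> int:
    """Multiply two Q16 fixed-point numbers."""
    return (a * b) >> SHIFT

def recip(z_fx: int, z0_fx: int, inv_z0_fx: int) -> int:
    """Approximate ONE/z via a quadratic expansion about z0:
    1/z ~ (1/z0) * (1 - d + d^2), with d = (z - z0) / z0."""
    d = fmul(z_fx - z0_fx, inv_z0_fx)
    return fmul(inv_z0_fx, ONE - d + fmul(d, d))

# Example: the perspective divide x/z without any division instruction
x_fx = 3 << SHIFT                    # x = 3.0
z_fx = int(1.9 * ONE)                # z = 1.9, near nominal z0 = 2.0
z0_fx = 2 << SHIFT
inv_z0_fx = ONE // 2                 # precomputed 1/z0 = 0.5
print(fmul(x_fx, recip(z_fx, z0_fx, inv_z0_fx)) / ONE)  # ~1.5788 vs 1.5789
```

The further the true z drifts from the nominal z0, the worse the approximation, which is consistent with the error-versus-speed trade-off reported above.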
ERIC Educational Resources Information Center
Lai, Kwok-Wing
Designed to examine the application and cost-effectiveness of computer-assisted instruction (CAI) for secondary education in developing countries, this document is divided into eight chapters. A general introduction defines the research problem, describes the research methodology, and provides definitions of key terms used throughout the paper.…
Molecular dynamics simulations through GPU video games technologies
Loukatou, Styliani; Papageorgiou, Louis; Fakourelis, Paraskevas; Filntisi, Arianna; Polychronidou, Eleftheria; Bassis, Ioannis; Megalooikonomou, Vasileios; Makałowski, Wojciech; Vlachakis, Dimitrios; Kossida, Sophia
2016-01-01
Bioinformatics is the scientific field that focuses on the application of computer technology to the management of biological information. Over the years, bioinformatics applications have been used to store, process and integrate biological and genetic information, using a wide range of methodologies. One of the most widely used techniques for understanding the physical movements of atoms and molecules is molecular dynamics (MD). MD is an in silico method for simulating the physical motions of atoms and molecules under certain conditions. It has become a strategically important technique and now plays a key role in many areas of the exact sciences, such as chemistry, biology, physics and medicine. Due to their complexity, MD calculations can require enormous amounts of computer memory and time, and their execution has therefore been a major problem. Despite the huge computational cost, molecular dynamics has been implemented using traditional computers with a central processing unit (CPU). Graphics processing unit (GPU) computing technology was first designed with the goal of improving video games, by rapidly creating and displaying images in a frame buffer such as a screen. The hybrid GPU-CPU implementation, combined with parallel computing, is a novel technology for performing a wide range of calculations. GPUs have been proposed and used to accelerate many scientific computations including MD simulations. Herein, we describe the new methodologies developed initially for video games and how they are now applied in MD simulations. PMID:27525251
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2013-01-01
Virtual machine (VM) technologies, especially those offered via Cloud platforms, present new dimensions with respect to performance and cost in executing parallel discrete event simulation (PDES) applications. Due to the introduction of overall cost as a metric, the choice of the highest-end computing configuration is no longer the most economical one. Moreover, runtime dynamics unique to VM platforms introduce new performance characteristics, and the variety of possible VM configurations gives rise to a range of choices for hosting a PDES run. Here, an empirical study of these issues is undertaken to guide an understanding of the dynamics, trends and trade-offs in executing PDES on VM/Cloud platforms. Performance results and cost measures are obtained from actual execution of a range of scenarios in two PDES benchmark applications on the Amazon Cloud offerings and on a high-end VM host machine. The data reveals interesting insights into the new VM-PDES dynamics that come into play and also leads to counter-intuitive guidelines with respect to choosing the best and second-best configurations when overall cost of execution is considered. In particular, it is found that choosing the highest-end VM configuration guarantees neither the best runtime nor the least cost. Interestingly, choosing a (suitably scaled) low-end VM configuration provides the least overall cost without adversely affecting the total runtime.
Novel Highly Parallel and Systolic Architectures Using Quantum Dot-Based Hardware
NASA Technical Reports Server (NTRS)
Fijany, Amir; Toomarian, Benny N.; Spotnitz, Matthew
1997-01-01
VLSI technology has made possible the integration of a massive number of components (processors, memory, etc.) into a single chip. In VLSI design, memory and processing power are relatively cheap, and the main emphasis of the design is on reducing the overall interconnection complexity, since data routing costs dominate the power, time, and area required to implement a computation. Communication is costly because wires occupy the most space on a circuit and can also degrade clock time. In fact, much of the complexity (and hence the cost) of VLSI design results from minimization of data routing. The main difficulty in VLSI routing is that crossing of the lines carrying data, instructions, control, etc. is not possible in a plane. Thus, in order to meet this constraint, VLSI design aims at keeping the architecture highly regular, with local and short interconnections. As a result, while the high level of integration has opened the way for massively parallel computation, practical and full exploitation of such a capability in many applications of interest has been hindered by the constraints on the interconnection pattern. More precisely, the use of only localized communication significantly simplifies the design of the interconnection architecture, but at the expense of a somewhat restricted class of applications. For example, there are currently commercially available products integrating hundreds of simple processor elements within a single chip. However, the lack of an adequate interconnection pattern among these processing elements makes them inefficient for exploiting a large degree of parallelism in many applications.
Multi-ray medical ultrasound simulation without explicit speckle modelling.
Tuzer, Mert; Yazıcı, Abdulkadir; Türkay, Rüştü; Boyman, Michael; Acar, Burak
2018-05-04
To develop a medical ultrasound (US) simulation method using T1-weighted magnetic resonance images (MRI) as the input that offers a compromise between low-cost ray-based and high-cost realistic wave-based simulations. The proposed method uses a novel multi-ray image formation approach with a virtual phased array transducer probe. A domain model is built from the input MR images. Multiple virtual acoustic rays emerge from each element of the linear transducer array. Reflected and transmitted acoustic energy at discrete points along each ray is computed independently. Simulated US images are computed by fusing the reflected energy along multiple rays from multiple transducers, while taking into account the phase delays due to differences in distances to the transducers. A preliminary implementation using GPUs is presented. Preliminary results show that the multi-ray approach is capable of automatically generating viewpoint-dependent, realistic US images with an inherent Rician-distributed speckle pattern. The proposed simulator can reproduce shadowing artefacts and demonstrates frequency dependence apt for practical training purposes. We have also presented preliminary results toward the use of the method for real-time simulations. The proposed method offers low-cost, near-real-time, wave-like simulation of realistic US images from input MR data. It can further be improved to cover pathological findings using an improved domain model, without any algorithmic updates. Such a domain model would require lesion segmentation or manual embedding of virtual pathologies for training purposes.
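As a point of reference for the per-ray energy bookkeeping, the standard plane-wave relations at an interface between media of acoustic impedances Z1 and Z2 are as follows (a textbook relation; the paper's MRI-derived domain model may differ in detail):

```latex
R \;=\; \frac{Z_2 - Z_1}{Z_2 + Z_1},
\qquad
I_r \;=\; R^2\, I_i,
\qquad
I_t \;=\; \bigl(1 - R^2\bigr)\, I_i
```

so the reflected intensity recorded along a ray, and the intensity transmitted to deeper interfaces, can be accumulated independently per ray before the multi-transducer fusion step.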
Efficient Data Assimilation Algorithms for Bathymetry Applications
NASA Astrophysics Data System (ADS)
Ghorbanidehno, H.; Kokkinaki, A.; Lee, J. H.; Farthing, M.; Hesser, T.; Kitanidis, P. K.; Darve, E. F.
2016-12-01
Information on the evolving state of the nearshore zone bathymetry is crucial to shoreline management, recreational safety, and naval operations. The high cost and complex logistics of using ship-based surveys for bathymetry estimation have encouraged the use of remote sensing monitoring. Data assimilation methods combine monitoring data and models of nearshore dynamics to estimate the unknown bathymetry and the corresponding uncertainties. Existing applications have been limited to the basic Kalman Filter (KF) and the Ensemble Kalman Filter (EnKF). The former can only be applied to low-dimensional problems due to its computational cost; the latter often suffers from ensemble collapse and uncertainty underestimation. This work explores the use of different variants of the Kalman Filter for bathymetry applications. In particular, we compare the performance of the EnKF to the Unscented Kalman Filter and the Hierarchical Kalman Filter, both of which are KF variants for non-linear problems. The objective is to identify which method can better handle the nonlinearities of nearshore physics, while also having a reasonable computational cost. We present two applications; first, the bathymetry of a synthetic one-dimensional cross section normal to the shore is estimated from wave speed measurements. Second, real remote measurements with unknown error statistics are used and compared to in situ bathymetric survey data collected at the USACE Field Research Facility in Duck, NC. We evaluate the information content of different data sets and explore the impact of measurement error and nonlinearities.
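As a generic illustration of the ensemble machinery being compared (a stochastic EnKF analysis step; the authors' implementations and the unscented and hierarchical variants are not shown), a sketch might look like this:

```python
import numpy as np

def enkf_update(ens, obs, obs_op, obs_var, rng):
    """Stochastic EnKF analysis step.
    ens: (n_state, n_ens) forecast ensemble (e.g., gridded depths)
    obs: (n_obs,) measurements (e.g., wave speeds)
    obs_op: maps one state vector to its predicted observations
    obs_var: scalar observation-error variance"""
    n_obs, n_ens = obs.size, ens.shape[1]
    pred = np.column_stack([obs_op(ens[:, i]) for i in range(n_ens)])
    X = ens - ens.mean(axis=1, keepdims=True)
    Y = pred - pred.mean(axis=1, keepdims=True)
    Pxy = X @ Y.T / (n_ens - 1)                  # state-obs covariance
    Pyy = Y @ Y.T / (n_ens - 1) + obs_var * np.eye(n_obs)
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
    # Perturbed observations keep the analysis spread statistically consistent
    d = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_ens))
    return ens + K @ (d - pred)
```

The ensemble collapse mentioned above shows up here as the columns of `ens` becoming nearly identical, which drives `Pxy` toward zero and makes the filter ignore new data.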
Does consideration of GHG reductions change local decision making? A Case Study in Chile
NASA Astrophysics Data System (ADS)
Cifuentes, L. A.; Blumel, G.
2003-12-01
While local air pollution has been a public concern in developing countries for some time, climate change is looked upon as a non-urgent, developed world problem. In this work we present a case study of the interaction of measures to abate air pollution and measures to mitigate GHG emissions in Santiago, Chile, with the purpose of determining whether the consideration of reductions in GHG affects the decisions taken to mitigate local air pollution. The emissions reductions of both GHG and local air pollutants were estimated from emission factors (some derived locally) and changes in activity levels. Health benefits due to air pollution abatement were computed using figures derived previously for the cost benefit analysis of Santiago's Decontamination Plan, transferred to the different cities taking into consideration local demographic and income data. The Santiago estimates were obtained using the damage function approach, based on local epidemiological studies and on local health and demographic data. Unit social values for the effects were estimated locally (for cost of treatment and lost productivity values) or extrapolated from US values (mainly for WTP values) using the ratio of per-capita income and an income elasticity of 1. The average benefits of emission abatement (in 1997 US$ per ton) are 1,800 (1,200-2,300) for NOx, 3,000 (2,100-3,900) for SO2, 31,900 (21,900-41,900) for PM, and 630 (430-830) for resuspended dust. Economic benefits due to carbon reduction were considered at 3.5, 10, and 20 US$/tCO2. Marginal abatement cost curves were constructed considering private and net costs (private cost less the potential sales of carbon credits). Due to the bottom-up approach to constructing the marginal cost curve, many abatement measures (like congestion tolls and CNG instead of diesel buses), amounting to an 8% reduction of PM2.5 concentration, exhibit a negative private cost. If the health benefits are considered in the decision, a maximum reduction of 22% in PM2.5 levels is obtained. Although many measures have associated reductions in GHG, due to the relatively low price considered for carbon reductions, this number does not increase when the potential benefits of CO2 sales are considered. Therefore, consideration of the CO2 benefits did not change the decision for any of the 36 measures analyzed. This confirms that the main driver for air pollution policy is likely to continue to be local concerns, like public health issues.
Accurate chemical master equation solution using multi-finite buffers
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-06-29
Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks, in which reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know whether all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers reducing the state space by $O(n!)$, exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework for aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks: a 6-node toggle switch, an 11-node phage-lambda epigenetic circuit, and a 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.
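For orientation, the dCME being solved has the standard form below (a generic statement of the chemical master equation, not the ACME paper's notation):

```latex
\frac{\mathrm{d}P(\mathbf{x},t)}{\mathrm{d}t}
\;=\; \sum_{k}\Bigl[\, a_k(\mathbf{x}-\mathbf{s}_k)\,P(\mathbf{x}-\mathbf{s}_k,t)
\;-\; a_k(\mathbf{x})\,P(\mathbf{x},t) \Bigr]
```

where x is the vector of molecular copy numbers, a_k the propensity of reaction k, and s_k its stoichiometry vector; the space of reachable x is what the multi-finite buffers truncate with quantified error.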
Sánchez-Álvarez, David; Rodríguez-Pérez, Francisco-Javier
2018-01-01
In this paper, we present a work based on the distribution of computational load among the homogeneous nodes and the Hub/Sink of Wireless Sensor Networks (WSNs). The main contribution of the paper is an early decision support framework helping WSN designers make decisions about computational load distribution for those WSNs where power consumption is a key issue (by "framework" we mean a support tool for making decisions, in which executive judgment can be included alongside the WSN designer's set of mathematical tools; this work shows the need to include load distribution as an integral component of the WSN system when making early decisions regarding energy consumption). The framework takes advantage of the idea that balancing computational load between sensor nodes and the Hub/Sink can improve energy consumption for the whole WSN, or at least for its battery-powered nodes. The approach is not trivial, and it takes into account related issues such as the required data distribution and the connectivity and availability of nodes and Hub/Sink due to their connectivity features and duty cycle. For a practical demonstration, the proposed framework is applied to an agriculture case study, a sector very relevant in our region. In this kind of rural context, distances, the low budgets implied by vegetable selling prices, and the lack of continuous power supplies may make sensing solutions viable or inviable for farmers. The proposed framework systematizes the required complex calculations, taking into account the most relevant variables regarding power consumption, and spares WSN designers full, partial, or prototype implementations and measurements of candidate computational load distributions for a specific WSN. PMID:29570645
NASA Astrophysics Data System (ADS)
Jeng, Albert; Chang, Li-Chung; Chen, Sheng-Hui
There are many protocols proposed for protecting the privacy and security of Radio Frequency Identification (RFID) systems. A number of these protocols are designed to protect the long-term security of RFID systems using symmetric key or public key cryptosystems. Others are designed to protect user anonymity and privacy. In practice, the use of RFID technology often has a short lifespan, such as commodity checkout, supply chain management and so on. Furthermore, designing a long-term security architecture to protect the security and privacy of RFID tag information requires thorough consideration from many different aspects. However, any security enhancement to RFID technology will raise its cost, which may be detrimental to its widespread deployment. Due to the severe constraints on RFID tag resources (e.g., power source, computing power, communication bandwidth) and the open-air communication nature of RFID usage, securing a typical RFID system is a great challenge. For example, computationally heavy public key and symmetric key cryptographic algorithms (e.g., RSA and AES) may be unsuitable or overkill for protecting RFID security or privacy. These factors motivate us to research an efficient and cost effective solution for RFID security and privacy protection. In this paper, we propose a new effective generic binary tree based key agreement protocol (called BKAP) and its variations, and show how it can be applied to secure low cost and resource constrained RFID systems. BKAP is not a general purpose key agreement protocol; rather, it is a special purpose protocol to protect privacy, untraceability and anonymity in a single closed RFID system domain.
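BKAP's own message flows are not reproduced here; the sketch below only illustrates the general binary-tree key idea (a GGM-style PRF tree, assumed purely for illustration) in which each tag holds the key of one leaf and any key on the path can be derived top-down with a few cheap hash operations:

```python
import hashlib

def derive_child(parent_key: bytes, bit: int) -> bytes:
    """Derive a left (bit=0) or right (bit=1) child key in a binary key tree."""
    return hashlib.sha256(parent_key + bytes([bit])).digest()

def leaf_key(root_key: bytes, leaf_index: int, depth: int) -> bytes:
    """Walk root-to-leaf, consuming one bit of the leaf index per level."""
    key = root_key
    for level in reversed(range(depth)):
        key = derive_child(key, (leaf_index >> level) & 1)
    return key

root = b"\x00" * 32                              # master key at the back-end
tag_key = leaf_key(root, leaf_index=5, depth=4)  # key for tag 5 of 16
```

The appeal for constrained tags is that key storage and derivation cost grow only logarithmically with the number of tags, and a hash primitive is far cheaper for a tag than public-key operations such as RSA.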
The Application of Quantity Discounts in Army Procurements (Field Test).
1980-04-01
Work Directive (PWD). d. The amended PWD is forwarded to the Procurement and Production (PP) control where quantity increments and delivery schedules are...counts on 97 Army Stock Fund small purchases (less than $10,000) and received cost effective discounts on 46 or 47.4% of...discount but the computed annualized cost for the QD increment was larger than the computed annualized cost for the EOQ, this was not a cost effective...
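The comparison in this snippet presumably rests on the standard annualized-cost formula used alongside the economic order quantity (EOQ); in textbook form (not quoted from the report):

```latex
TC(Q) \;=\; \frac{D}{Q}\,S \;+\; \frac{Q}{2}\,H \;+\; P\,D
```

where D is annual demand, Q the order quantity, S the cost per order, H the annual per-unit holding cost, and P the unit price. A quantity discount lowers P but forces a larger Q (and hence a larger holding term), so the discount is cost effective only when TC at the discount increment falls below TC at the EOQ.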
The Ruggedized STD Bus Microcomputer - A low cost computer suitable for Space Shuttle experiments
NASA Technical Reports Server (NTRS)
Budney, T. J.; Stone, R. W.
1982-01-01
Previous space flight computers have been costly in terms of both hardware and software. The Ruggedized STD Bus Microcomputer is based on the commercial Mostek/Pro-Log STD Bus. Ruggedized PC cards can be based on commercial cards from more than 60 manufacturers, reducing hardware cost and design time. Software costs are minimized by using standard 8-bit microprocessors and by debugging code using commercial versions of the ruggedized flight boards while the flight hardware is being fabricated.
An Investment Case to Prevent the Reintroduction of Malaria in Sri Lanka
Shretta, Rima; Baral, Ranju; Avanceña, Anton L. V.; Fox, Katie; Dannoruwa, Asoka Premasiri; Jayanetti, Ravindra; Jeyakumaran, Arumainayagam; Hasantha, Rasike; Peris, Lalanthika; Premaratne, Risintha
2017-01-01
Sri Lanka has made remarkable gains in reducing the burden of malaria, recording no locally transmitted malaria cases since November 2012 and zero deaths since 2007. The country was recently certified as malaria free by World Health Organization in September 2016. Sri Lanka, however, continues to face a risk of resurgence due to persistent receptivity and vulnerability to malaria transmission. Maintaining the gains will require continued financing to the malaria program to maintain the activities aimed at preventing reintroduction. This article presents an investment case for malaria in Sri Lanka by estimating the costs and benefits of sustaining investments to prevent the reintroduction of the disease. An ingredient-based approach was used to estimate the cost of the existing program. The cost of potential resurgence was estimated using a hypothetical scenario in which resurgence assumed to occur, if all prevention of reintroduction activities were halted. These estimates were used to compute a benefit–cost ratio and a return on investment. The total economic cost of the malaria program in 2014 was estimated at U.S. dollars (USD) 0.57 per capita per year with a financial cost of USD0.37 per capita. The cost of potential malaria resurgence was, however, much higher estimated at 13 times the cost of maintaining existing activities or 21 times based on financial costs alone. This evidence suggests a substantial return on investment providing a compelling argument for advocacy for continued prioritization of funding for the prevention of reintroduction of malaria in Sri Lanka. PMID:28115673
Cyborg beast: a low-cost 3d-printed prosthetic hand for children with upper-limb differences.
Zuniga, Jorge; Katsavelis, Dimitrios; Peck, Jean; Stollberg, John; Petrykowski, Marc; Carson, Adam; Fernandez, Cristina
2015-01-20
There is an increasing number of children with traumatic and congenital hand amputations or reductions. Children's prosthetic needs are complex due to their small size, constant growth, and psychosocial development. Families' financial resources play a crucial role in the prescription of prostheses for their children, especially when private insurance and public funding are insufficient. Electric-powered (i.e., myoelectric) and body-powered (i.e., mechanical) devices have been developed to accommodate children's needs, but the cost of maintenance and replacement represents an obstacle for many families. Due to the complexity and high cost of these prosthetic hands, they are not accessible to children from low-income, uninsured families or to children from developing countries. Advancements in computer-aided design (CAD) programs, additive manufacturing, and image editing software offer the possibility of designing, printing, and fitting prosthetic hand devices at a distance and at very low cost. The purpose of this preliminary investigation was to describe a low-cost three-dimensional (3D)-printed prosthetic hand for children with upper-limb reductions and to propose a prosthesis fitting methodology that can be performed at a distance. No significant mean differences were found between the anthropometric and range of motion measurements taken directly from the upper limbs of subjects and those extracted from photographs. The Bland and Altman plots show no major bias and narrow limits of agreement for lengths and widths, and small bias and wider limits of agreement for the range of motion measurements. The main finding of the survey was that our prosthetic device may have significant potential to positively impact quality of life and daily usage, and can be incorporated into several activities at home and in school. This investigation describes a low-cost 3D-printed prosthetic hand for children and proposes a distance fitting procedure. The Cyborg Beast prosthetic hand and the proposed distance-fitting procedures may represent a low-cost alternative for children in developing countries and those who have limited access to health care providers. Further studies should examine the functionality, validity, durability, benefits, and rejection rate of this type of low-cost 3D-printed prosthetic device.
Hanly, Paul; Skally, Mairead; Fenlon, Helen; Sharp, Linda
2012-10-01
The European Code Against Cancer recommends individuals aged ≥ 50 should participate in colorectal cancer screening. CT-colonography (CTC) is one of several screening tests available. We systematically reviewed evidence on, and identified key factors influencing, cost-effectiveness of CTC screening. PubMed, Medline, and the Cochrane library were searched for cost-effectiveness or cost-utility analyses of CTC-based screening, published in English, January 1999 to July 2010. Data was abstracted on setting, model type and horizon, screening scenario(s), comparator(s), participants, uptake, CTC performance and cost, effectiveness, ICERs, and whether extra-colonic findings and medical complications were considered. Sixteen studies were identified from the United States (n = 11), Canada (n = 2), and France, Italy, and the United Kingdom (1 each). Markov state-transition (n = 14) or microsimulation (n = 2) models were used. Eleven considered direct medical costs only; five included indirect costs. Fourteen compared CTC with no screening; fourteen compared CTC with colonoscopy-based screening; fewer compared CTC with sigmoidoscopy (8) or fecal tests (4). Outcomes assessed were life-years gained/saved (13), QALYs (2), or both (1). Three considered extra-colonic findings; seven considered complications. CTC appeared cost-effective versus no screening and, in general, flexible sigmoidoscopy and fecal occult blood testing. Results were mixed comparing CTC to colonoscopy. Parameters most influencing cost-effectiveness included: CTC costs, screening uptake, threshold for polyp referral, and extra-colonic findings. Evidence on cost-effectiveness of CTC screening is heterogeneous, due largely to between-study differences in comparators and parameter values. Future studies should: compare CTC with currently favored tests, especially fecal immunochemical tests; consider extra-colonic findings; and conduct comprehensive sensitivity analyses.
Dupuis, S; Fecci, J-L; Noyer, P; Lecarpentier, E; Chollet-Xémard, C; Margenet, A; Marty, J; Combes, X
2009-01-01
To assess the economic impact of introducing a bar-code based pharmacy stock replenishment system in a prehospital emergency medical unit. Observational before-and-after study. A computer system using specific software and bar-code technology was introduced in the prehospital emergency medical unit (SMUR). Overall activity and pharmacy-related costs were recorded annually during two periods: a first 2-year period before the computer system's introduction, and a second period covering the 4 years following its installation. Overall clinical activity increased by 10% between the two periods, whereas pharmacy-related costs continuously decreased after the pharmacy management computer system came into use. Pharmacy stock management was easier after introduction of the new stock replenishment system. The mean pharmacy-related cost of one patient's management was 13 Euros before and 9 Euros after introduction of the system. The overall cost savings during the studied period was calculated to reach 134,000 Euros. The introduction of a specific pharmacy management computer system thus enabled substantial cost savings in a prehospital emergency medical unit.
Parallel algorithm for multiscale atomistic/continuum simulations using LAMMPS
NASA Astrophysics Data System (ADS)
Pavia, F.; Curtin, W. A.
2015-07-01
Deformation and fracture processes in engineering materials often require simultaneous descriptions over a range of length and time scales, with each scale using a different computational technique. Here we present a high-performance parallel 3D computing framework for executing large multiscale studies that couple an atomic domain, modeled using molecular dynamics, and a continuum domain, modeled using explicit finite elements. We use the robust Coupled Atomistic/Discrete-Dislocation (CADD) displacement-coupling method, but without the transfer of dislocations between atoms and continuum. The main purpose of the work is to provide a multiscale implementation within an existing large-scale parallel molecular dynamics code (LAMMPS) that enables use of all the tools associated with this popular open-source code, while extending CADD-type coupling to 3D. Validation of the implementation includes the demonstration of (i) stability in finite-temperature dynamics using Langevin dynamics, (ii) elimination of wave reflections due to large dynamic events occurring in the MD region, and (iii) the absence of spurious forces acting on dislocations due to the MD/FE coupling, for dislocations further than 10 Å from the coupling boundary. A first non-trivial example application of dislocation glide and bowing around obstacles is shown, for dislocation lengths of ∼50 nm, using fewer than 1,000,000 atoms but reproducing results of extremely large atomistic simulations at much lower computational cost.
Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Wang, Peng; Plimpton, Steven J
The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines: 1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, 2) minimizing the amount of code that must be ported for efficient acceleration, 3) utilizing the available processing power from both many-core CPUs and accelerators, and 4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potok, Thomas E; Schuman, Catherine D; Young, Steven R
Current deep learning models use highly optimized convolutional neural networks (CNNs) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This procedure is followed by a 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the Kronecker product of these 1-D decompositions, which closely approximates the original matrix. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation in the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
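As a rough illustration of the alternating-directions idea, the sketch below builds 1-D correlation matrices for three coordinate directions, truncates their eigen-decompositions (EOFs), and assembles a square-root factor of the 3-D correlation matrix as a Kronecker product. The Gaussian correlation shape, grid sizes, length scales, and mode counts are illustrative assumptions, and the low-resolution/spline-interpolation step of the paper is omitted.

```python
import numpy as np

def corr_1d(n, length_scale):
    """Gaussian correlation matrix for n equally spaced grid points."""
    d = np.arange(n)[:, None] - np.arange(n)[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def eof_factor(C, k):
    """Leading-k scaled EOFs E such that C is approximately E @ E.T."""
    w, v = np.linalg.eigh(C)
    top = np.argsort(w)[::-1][:k]
    return v[:, top] * np.sqrt(w[top])

# 1-D correlation matrices in the three coordinate directions
Cx, Cy, Cz = corr_1d(20, 3.0), corr_1d(15, 3.0), corr_1d(10, 2.0)
Ex, Ey, Ez = eof_factor(Cx, 5), eof_factor(Cy, 5), eof_factor(Cz, 4)

# Kronecker product of the 1-D factors gives a low-rank square root of
# Cx (x) Cy (x) Cz, avoiding direct decomposition of the full 3-D matrix
E3d = np.kron(np.kron(Ex, Ey), Ez)            # shape (3000, 100)
C3d_approx = E3d @ E3d.T

C3d_exact = np.kron(np.kron(Cx, Cy), Cz)
print(np.abs(C3d_approx - C3d_exact).max())   # small truncation error
```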
Kinetic particle simulation of discharge and wall erosion of a Hall thruster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Shinatora; Komurasaki, Kimiya; Arakawa, Yoshihiro
2013-06-15
The primary lifetime-limiting factor of Hall thrusters is wall erosion caused by ion-induced sputtering, which is governed largely by the dielectric wall sheath and pre-sheath. However, so far only fluid or hybrid simulation models have been applied to wall erosion and lifetime studies, and these cannot treat this non-quasi-neutral, non-equilibrium region directly. Thus, in this study, a 2D fully kinetic particle-in-cell model is presented for Hall thruster discharge and lifetime simulation. Because fully kinetic lifetime simulation had not previously been achieved, owing to its high computational cost, a semi-implicit field solver and the technique of mass-ratio manipulation were employed to accelerate the computation. Other artificial manipulations, such as permittivity or geometry scaling, were not used, in order to avoid unrecoverable changes to the physics. Additionally, a new physics-recovering model for the mass ratio is presented for better preservation of electron mobility in the weakly magnetically confined plasma region. The validity of the presented model was examined by various parametric studies, and the thrust performance and wall erosion rate of a laboratory-model magnetic-layer-type Hall thruster were modeled for different operating conditions. The simulation results successfully reproduced the measurement results with typically less than 10% discrepancy without tuning any numerical parameters. It is also shown that the computational cost was reduced to a level at which fully kinetic Hall thruster lifetime simulation is feasible.
NASA Astrophysics Data System (ADS)
Ahangaran, Daryoush Kaveh; Yasrebi, Amir Bijan; Wetherelt, Andy; Foster, Patrick
2012-10-01
The application of fully automated truck dispatching systems plays a major role in decreasing transportation costs, which often represent the majority of the costs of open pit mining. Consequently, truck dispatching systems have become fundamentally important in most of the world's open pit mines. Recent experience indicates that a truck dispatching system, by decreasing trucks' travelling times and the associated waiting times of their shovels, can considerably improve the rate of production. Computer-based truck dispatching systems built on algorithms and advanced, accurate software are examples of these innovations. Developing a dispatching algorithm appropriate to a specific mine's conditions is one of the most important activities in connection with computer-based dispatching in open pit mines. In this paper the evolution of programming and dispatching control algorithms and of automation is discussed. Furthermore, since the transportation fleets of most mines use trucks with different capacities, innovative methods, operational optimisation techniques and the most suitable approaches for developing a real-time dispatching algorithm are selected through research on mathematical programming methods. Finally, a real-time dispatching model compatible with the requirements of trucks with different capacities is developed using two techniques: flow networks and integer programming.
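As a minimal stand-in for the dispatching formulation sketched above (the paper's model combines flow networks and integer programming and accounts for heterogeneous truck capacities), the following solves a one-shot truck-to-shovel assignment; the cost values are purely illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: predicted travel time plus expected waiting time (minutes)
# if truck i is dispatched to shovel j; values are illustrative
cost = np.array([[12.0,  7.5, 15.0],
                 [ 9.0, 11.0,  6.5],
                 [14.0,  8.0, 10.0]])

trucks, shovels = linear_sum_assignment(cost)   # minimizes total time
for i, j in zip(trucks, shovels):
    print(f"dispatch truck {i} -> shovel {j} ({cost[i, j]:.1f} min)")
print("total time:", cost[trucks, shovels].sum())
```

A real-time system would re-solve such a model whenever a truck becomes free, with capacity-dependent weights and path flows in place of this static matrix.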
[Coil Embolization of Bronchial Artery Aneurysm;Report of a Case].
Hagiwara, Kenichi; Moriya, Hiroshi; Sato, Yoshiyuki
2018-05-01
An 82-year-old male was admitted due to mild chest discomfort. Enhanced computed tomography showed a large bronchial artery aneurysm (BAA) of 26×27 mm at the left hilus. To avoid rupture of the BAA, coil embolization alone was performed. There has been no enlargement of the BAA over the 4 years since. In general, coil embolization alone should be indicated for a BAA with a stalk, because thoracic endovascular aortic repair (TEVAR) is off-label in this setting and offers poor cost performance. TEVAR would be considered a last resort only in the case of a BAA that enlarges even after coil embolization.
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm-independent upper bounds on system performance are derived for several problems that are important to scientific computation.
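Hockney's characterization of data movement underlies the first model; a minimal sketch of the standard two-parameter Hockney form (not the paper's generalization) is shown below with illustrative parameter values.

```python
def hockney_time(n_words, r_inf, n_half):
    """Transfer time t(n) = (n + n_half) / r_inf, where r_inf is the
    asymptotic bandwidth (words/s) and n_half is the message length
    at which half of r_inf is achieved."""
    return (n_words + n_half) / r_inf

# effective bandwidth n / t(n) approaches r_inf only for long messages
for n in (10, 100, 1_000, 10_000):
    t = hockney_time(n, r_inf=1e8, n_half=500)
    print(f"n = {n:6d} words: {n / t / 1e8:.2%} of peak bandwidth")
```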
Numerical and experimental analyses of lighting columns in terms of passive safety
NASA Astrophysics Data System (ADS)
Jedliński, Tomasz Ireneusz; Buśkiewicz, Jacek
2018-01-01
Modern lighting columns have a very beneficial influence on road safety. Currently, columns are designed to keep the driver safe in the event of a car collision. The following work compares experimental results of vehicle impact on a lighting column with FEM simulations performed using the Ansys LS-DYNA program. Due to the high cost of experiments and the time-consuming research process, computer software is a very useful tool in the development of pole structures, which are designed to absorb the kinetic energy of the vehicle in a precisely prescribed way.
Measuring watershed runoff capability with ERTS data. [Washita River Basin, Oklahoma
NASA Technical Reports Server (NTRS)
Blanchard, B. J.
1974-01-01
Parameters of most equations used to predict runoff from an ungaged area are based on characteristics of the watershed and are subject to the biases of a hydrologist. Digital multispectral scanner (MSS) data from ERTS were reduced with the aid of computer programs and a Dicomed display. Multivariate analyses of the MSS data indicate that discrimination between watersheds with different runoff capabilities is possible using ERTS data. Differences between two visible bands of MSS data can be used to evaluate the parameters more accurately than present subjective methods, thus reducing construction costs due to overdesign of flood detention structures.
Verification of a Finite Element Model for Pyrolyzing Ablative Materials
NASA Technical Reports Server (NTRS)
Risch, Timothy K.
2017-01-01
Ablating thermal protection system (TPS) materials have been used in many reentering spacecraft and in other applications such as rocket nozzle linings, fire protection materials, and countermeasures for directed energy weapons. The introduction of the finite element model to the analysis of ablation has arguably resulted in improved computational capabilities due to the flexibility and extended applicability of the method, especially to complex geometries. Commercial finite element codes often provide enhanced capability compared to custom, specially written programs, based on versatility, usability, pre- and post-processing, grid generation, total life-cycle costs, and speed.
Applying Statistical Models and Parametric Distance Measures for Music Similarity Search
NASA Astrophysics Data System (ADS)
Lukashevich, Hanna; Dittmar, Christian; Bastuck, Christoph
Automatically deriving similarity relations between music pieces is an inherent field of music information retrieval research. Due to the nearly unrestricted amount of musical data, real-world similarity search algorithms have to be highly efficient and scalable. One possible solution is to represent each music excerpt with a statistical model (e.g., a Gaussian mixture model) and thus to reduce the computational cost by applying parametric distance measures between the models. In this paper we discuss combinations of different parametric modelling techniques and distance measures and weigh the benefits of each against the others.
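For single-Gaussian models the parametric distance has a closed form; the sketch below computes a symmetrized Kullback-Leibler divergence between two Gaussians fitted to hypothetical feature matrices (for full Gaussian mixture models the divergence generally has to be approximated).

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """Closed-form KL(N0 || N1) between two multivariate Gaussians."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    _, ld0 = np.linalg.slogdet(S0)
    _, ld1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k + ld1 - ld0)

def symmetric_kl(mu0, S0, mu1, S1):
    """Symmetrized KL, a common parametric distance in similarity search."""
    return kl_gauss(mu0, S0, mu1, S1) + kl_gauss(mu1, S1, mu0, S0)

# two "songs" represented by hypothetical 12-dimensional timbre features
rng = np.random.default_rng(0)
f1 = rng.normal(0.0, 1.0, size=(500, 12))
f2 = rng.normal(0.5, 1.3, size=(500, 12))
print(symmetric_kl(f1.mean(0), np.cov(f1.T), f2.mean(0), np.cov(f2.T)))
```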
[Physically-based model of pesticide application for risk assessment of agricultural workers].
Rubino, F M; Mandic-Rajcevic, S; Vianello, G; Brambilla, G; Colosio, C
2012-01-01
Due to their unavoidable toxicity to non-target organisms, including man, the use of Plant Protection Products requires a thorough risk assessment to rationally advise farmers on safe use procedures and protective equipment. Most information on active substances and formulations, such as dermal absorption rates and exposure limits, is available in the large body of regulatory data. Physically-based computational models can be used to forecast risk in real-life conditions (preventive assessment via 'exposure profiles'), to drive the cost-effective use of products and equipment, and to understand the sources of unexpected exposure.
Communication Avoiding and Overlapping for Numerical Linear Algebra
2012-05-08
As linear algebra problems move to future exascale systems, communication cost must be avoided or overlapped, since the cost of communication will continue to grow relative to the cost of computation. Communication-avoiding 2.5D algorithms improve scalability by reducing communication. With exascale computing as the long-term goal, the community needs to develop techniques for avoiding and overlapping communication.
Code of Federal Regulations, 2012 CFR
2012-01-01
12 CFR Part 1026 (Regulation Z, Truth in Lending), Appendix K: Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions. Banks and Banking, Bureau of Consumer Financial Protection (2012 edition).
Code of Federal Regulations, 2013 CFR
2013-01-01
12 CFR Part 1026 (Regulation Z, Truth in Lending), Appendix K: Total Annual Loan Cost Rate Computations for Reverse Mortgage Transactions. Banks and Banking, Bureau of Consumer Financial Protection (2013 edition).
McCrone, Paul; Sharpe, Michael; Chalder, Trudie; Knapp, Martin; Johnson, Anthony L.; Goldsmith, Kimberley A.; White, Peter D.
2012-01-01
Background The PACE trial compared the effectiveness of adding adaptive pacing therapy (APT), cognitive behaviour therapy (CBT), or graded exercise therapy (GET), to specialist medical care (SMC) for patients with chronic fatigue syndrome. This paper reports the relative cost-effectiveness of these treatments in terms of quality adjusted life years (QALYs) and improvements in fatigue and physical function. Methods Resource use was measured and costs calculated. Healthcare and societal costs (healthcare plus lost production and unpaid informal care) were combined with QALYs gained, and changes in fatigue and disability; incremental cost-effectiveness ratios (ICERs) were computed. Results SMC patients had significantly lower healthcare costs than those receiving APT, CBT and GET. If society is willing to value a QALY at £30,000 there is a 62.7% likelihood that CBT is the most cost-effective therapy, a 26.8% likelihood that GET is most cost effective, 2.6% that APT is most cost-effective and 7.9% that SMC alone is most cost-effective. Compared to SMC alone, the incremental healthcare cost per QALY was £18,374 for CBT, £23,615 for GET and £55,235 for APT. From a societal perspective CBT has a 59.5% likelihood of being the most cost-effective, GET 34.8%, APT 0.2% and SMC alone 5.5%. CBT and GET dominated SMC, while APT had a cost per QALY of £127,047. ICERs using reductions in fatigue and disability as outcomes largely mirrored these findings. Conclusions Comparing the four treatments using a health care perspective, CBT had the greatest probability of being the most cost-effective followed by GET. APT had a lower probability of being the most cost-effective option than SMC alone. The relative cost-effectiveness was even greater from a societal perspective as additional cost savings due to reduced need for informal care were likely. PMID:22870204
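For readers unfamiliar with the metrics reported above, the sketch below computes an incremental cost-effectiveness ratio and a net monetary benefit at a willingness-to-pay threshold; the numbers are illustrative, not trial data.

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost per QALY gained versus a reference treatment."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

def net_monetary_benefit(cost, qaly, wtp=30_000):
    """NMB = wtp * QALYs - cost; at a given willingness-to-pay (wtp),
    the option with the highest NMB is preferred."""
    return wtp * qaly - cost

# illustrative figures only (not the PACE trial estimates)
print(icer(cost_new=2800, qaly_new=0.62, cost_ref=1500, qaly_ref=0.55))
print(net_monetary_benefit(cost=2800, qaly=0.62),
      net_monetary_benefit(cost=1500, qaly=0.55))
```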
Effect of costing methods on unit cost of hospital medical services.
Riewpaiboon, Arthorn; Malaroje, Saranya; Kongsawatt, Sukalaya
2007-04-01
To explore the variance of unit costs of hospital medical services due to the different costing methods employed in the analysis. Retrospective and descriptive study at Kaengkhoi District Hospital, Saraburi Province, Thailand, in the fiscal year 2002. The process started with a calculation of unit costs of medical services as a base case. After that, the unit costs were re-calculated using various other methods. Finally, the variations between the results obtained from the various methods and the base case were computed and compared. The total annualized capital cost of buildings and capital items calculated by the accounting-based approach (averaging the capital purchase prices throughout their useful life) was 13.02% lower than that calculated by the economic-based approach (a combination of depreciation cost and interest on the undepreciated portion over the useful life). A change of discount rate from 3% to 6% results in a 4.76% increase in the hospital's total annualized capital cost. When the useful life of durable goods was changed from 5 to 10 years, the total annualized capital cost of the hospital decreased by 17.28% from that of the base case. Regarding alternative criteria for indirect cost allocation, unit costs of medical services changed by a range of -6.99% to +4.05%. We also explored the effect on the unit costs of medical services in one department: results from various costing methods, including departmental allocation methods, ranged between -85% and +32% of the base case. Based on the variation analysis, the economic-based approach was suitable for capital cost calculation. For the useful life of capital items, an appropriate duration should be studied and standardized. Regarding allocation criteria, single-output criteria might be more efficient than combined-output and more complicated ones. Among the departmental allocation methods, the micro-costing method was the most suitable at the time of the study. These different costing methods should be standardized and developed into guidelines, since they could affect implementation of the national health insurance scheme and health financing management.
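A sketch of the two capital-costing approaches contrasted above, assuming illustrative figures: straight-line accounting annualization versus an equivalent annual cost computed with the capital recovery factor, one standard way of combining depreciation with interest on the undepreciated balance.

```python
def accounting_annual_cost(price, useful_life):
    """Accounting-based approach: average the purchase price over life."""
    return price / useful_life

def economic_annual_cost(price, useful_life, rate):
    """Economic-based approach via the capital recovery factor:
    EAC = price * r / (1 - (1 + r) ** -n)."""
    return price * rate / (1 - (1 + rate) ** -useful_life)

price, life = 100_000, 5
print(accounting_annual_cost(price, life))        # 20000.0
print(economic_annual_cost(price, life, 0.03))    # ~21835
print(economic_annual_cost(price, life, 0.06))    # ~23740: rises with rate
```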
New horizons in cardiac innervation imaging: introduction of novel 18F-labeled PET tracers.
Kobayashi, Ryohei; Chen, Xinyu; Werner, Rudolf A; Lapa, Constantin; Javadi, Mehrbod S; Higuchi, Takahiro
2017-12-01
Cardiac sympathetic nervous activity can be uniquely visualized by non-invasive radionuclide imaging techniques, whose application has grown rapidly with the widespread adoption of nuclear cardiology in the last few years. The norepinephrine analogue 123I-meta-iodobenzylguanidine (123I-MIBG) is a single photon emission computed tomography (SPECT) tracer for the clinical implementation of sympathetic nervous imaging for both diagnosis and prognosis of heart failure. Meanwhile, positron emission tomography (PET) imaging has become increasingly attractive because of its higher spatial and temporal resolution compared to SPECT, which allows regional functional and dynamic kinetic analysis. Nevertheless, wider use of cardiac sympathetic nervous PET imaging is still limited, mainly due to the demand for costly on-site cyclotrons, which are required for the production of conventional 11C-labeled (radiological half-life, 20 min) PET tracers. Most recently, more promising 18F-labeled (half-life, 110 min) PET radiopharmaceuticals targeting the sympathetic nervous system have been introduced. These tracers optimize PET imaging and, by using delivery networks, cost less to produce. In this article, the latest advances in sympathetic nervous imaging using 18F-labeled radiotracers, along with their possible applications, are reviewed.
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However, in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In such cases an interval model updating procedure shows its superiority in the aspect of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by savings in computational cost. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and against a set of experimentally tested steel plates.
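The overestimation referred to above stems from the dependency problem of naive interval arithmetic; the toy sketch below shows it for the expression x - x, whose true range is {0}.

```python
# intervals represented as (lower, upper) tuples
def i_sub(x, y):
    """Naive interval subtraction: treats x and y as independent."""
    return (x[0] - y[1], x[1] - y[0])

x = (1.0, 2.0)
print(i_sub(x, x))   # (-1.0, 1.0), although x - x is exactly 0
```

Interval response surface models aim to keep such dependency-driven widening from inflating the estimated parameter intervals.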
NASA Astrophysics Data System (ADS)
Sim, Sung-Han; Spencer, Billie F., Jr.; Park, Jongwoong; Jung, Hyungjo
2012-04-01
Wireless Smart Sensor Networks (WSSNs) facilitate a new paradigm for structural identification and monitoring of civil infrastructure. Conventional monitoring systems based on wired sensors and centralized data acquisition and processing have proven challenging and costly due to cabling and to expensive equipment and maintenance. WSSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks, in which centralized data acquisition and processing is common practice, WSSNs require decentralized computing algorithms to reduce data transmission due to the limitations associated with wireless communication. Thus, several system identification methods have been implemented to process sensor data and extract essential information, including the Natural Excitation Technique with the Eigensystem Realization Algorithm, Frequency Domain Decomposition (FDD), and the Random Decrement Technique (RDT); however, Stochastic Subspace Identification (SSI) has not been fully utilized in WSSNs, although it has strong potential to enhance system identification. This study presents decentralized system identification using SSI in WSSNs. The approach is implemented on MEMSIC's Imote2 sensor platform and experimentally verified using a 5-story shear building model.
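As a rough illustration of what SSI extracts from output-only data, the sketch below implements a minimal covariance-driven SSI on a toy two-mode signal; the block-row count, model order, and test signal are illustrative, and the decentralized WSSN implementation of the study is not represented.

```python
import numpy as np

def ssi_cov(y, dt, order, p):
    """Minimal covariance-driven SSI: y is (n_samples, n_channels)."""
    N, l = y.shape
    # output covariances R_i = E[y_{k+i} y_k^T] for i = 1..2p-1
    R = [(y[i:].T @ y[:N - i]) / (N - i) for i in range(1, 2 * p)]
    # block Hankel matrix of the covariances
    H = np.block([[R[i + j] for j in range(p)] for i in range(p)])
    U, s, _ = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])        # observability matrix
    A = np.linalg.pinv(O[:-l]) @ O[l:]           # shift-invariance of O
    lam = np.log(np.linalg.eigvals(A)) / dt      # continuous-time poles
    return np.sort(np.unique(np.round(np.abs(lam) / (2 * np.pi), 2)))

dt = 0.01
t = np.arange(0, 60, dt)
y = np.column_stack([np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 4.0 * t),
                     0.8 * np.sin(2 * np.pi * 1.5 * t + 0.3) + np.sin(2 * np.pi * 4.0 * t)])
y += 0.05 * np.random.default_rng(1).normal(size=y.shape)
print(ssi_cov(y, dt, order=4, p=20))   # should recover about 1.5 Hz and 4.0 Hz
```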
Hughes, Richard E; Nelson, Nancy A
2009-05-01
A mathematical model was developed for estimating the net present value (NPV) of the cash flow resulting from an investment in an intervention to prevent occupational low back pain (LBP). It combines biomechanics, epidemiology, and finance to give an integrated tool for a firm to use to estimate the investment worthiness of an intervention based on a biomechanical analysis of working postures and hand loads. The model can be used by an ergonomist to estimate the investment worthiness of a proposed intervention. The analysis would begin with a biomechanical evaluation of the current job design and post-intervention job. Economic factors such as hourly labor cost, overhead, workers' compensation costs of LBP claims, and discount rate are combined with the biomechanical analysis to estimate the investment worthiness of the proposed intervention. While this model is limited to low back pain, the simulation framework could be applied to other musculoskeletal disorders. The model uses Monte Carlo simulation to compute the statistical distribution of NPV, and it uses a discrete event simulation paradigm based on four states: (1) working and no history of lost time due to LBP, (2) working and history of lost time due to LBP, (3) lost time due to LBP, and (4) leave job. Probabilities of transitions are based on an extensive review of the epidemiologic review of the low back pain literature. An example is presented.
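A minimal sketch of this kind of simulation, assuming hypothetical monthly transition probabilities among the four states and illustrative cost figures (the paper's epidemiology-derived probabilities and biomechanical inputs are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical monthly transition probabilities between the four states:
# 0 working, no LBP history; 1 working, history; 2 lost time; 3 left job
P_BASE = np.array([[0.985, 0.000, 0.010, 0.005],
                   [0.000, 0.975, 0.020, 0.005],
                   [0.000, 0.600, 0.390, 0.010],
                   [0.000, 0.000, 0.000, 1.000]])

def lbp_costs(P, months=60, n_sims=2_000, claim_cost=4_000.0, rate=0.05):
    """Discounted LBP claim cost per worker, by Monte Carlo simulation."""
    disc = (1 + rate) ** (-np.arange(months) / 12)   # monthly discounting
    out = np.empty(n_sims)
    for s in range(n_sims):
        state, cost = 0, 0.0
        for m in range(months):
            state = rng.choice(4, p=P[state])
            if state == 2:                            # lost time this month
                cost += claim_cost * disc[m]
        out[s] = cost
    return out

# intervention assumed (hypothetically) to halve LBP incidence
P_POST = P_BASE.copy()
P_POST[0] = [0.9925, 0.000, 0.005, 0.0025]
P_POST[1] = [0.000, 0.985, 0.010, 0.005]
savings = lbp_costs(P_BASE).mean() - lbp_costs(P_POST).mean()
print(savings - 1_000.0)  # expected NPV after a 1000-per-worker intervention
```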
Radiation Tolerant, FPGA-Based SmallSat Computer System
NASA Technical Reports Server (NTRS)
LaMeres, Brock J.; Crum, Gary A.; Martinez, Andres; Petro, Andrew
2015-01-01
The Radiation Tolerant, FPGA-based SmallSat Computer System (RadSat) computing platform exploits a commercial off-the-shelf (COTS) Field Programmable Gate Array (FPGA) with real-time partial reconfiguration to provide increased performance, power efficiency and radiation tolerance at a fraction of the cost of existing radiation hardened computing solutions. This technology is ideal for small spacecraft that require state-of-the-art on-board processing in harsh radiation environments but where using radiation hardened processors is cost prohibitive.
Software for Tracking Costs of Mars Projects
NASA Technical Reports Server (NTRS)
Wong, Alvin; Warfield, Keith
2003-01-01
The Mars Cost Tracking Model is a computer program that administers a system set up for tracking the costs of future NASA projects that pertain to Mars. Previously, no such tracking system existed, and documentation was written in a variety of formats and scattered in various places. It was difficult to justify costs or even track the history of costs of a spacecraft mission to Mars. The present software enables users to maintain all cost-model definitions, documentation, and justifications of cost estimates in one computer system that is accessible via the Internet. The software provides sign-off safeguards to ensure the reliability of information entered into the system. This system may eventually be used to track the costs of projects other than only those that pertain to Mars.
PACE 2: Pricing and Cost Estimating Handbook
NASA Technical Reports Server (NTRS)
Stewart, R. D.; Shepherd, T.
1977-01-01
An automatic data processing system to be used for the preparation of industrial engineering type manhour and material cost estimates has been established. This computer system has evolved into a highly versatile and highly flexible tool which significantly reduces computation time, eliminates computational errors, and reduces typing and reproduction time for estimators and pricers since all mathematical and clerical functions are automatic once basic inputs are derived.
ERIC Educational Resources Information Center
Vanderheiden, Gregg C.; Lee, Charles C.
Many low-cost and no-cost modifications to computers would greatly increase the number of disabled individuals who could use standard computers without requiring custom modifications, and would increase the ability to attach special input and output systems. The purpose of the Guidelines is to provide an awareness of these access problems and a…
Dietz, Dennis C.
2014-01-01
A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
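A sketch of the schedule-evaluation step such an optimizer needs, assuming gamma-distributed service times matched to the stated mean and variance and an illustrative show probability; the paper's exact cost formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(7)

def schedule_cost(times, mean=15.0, var=25.0, p_show=0.9,
                  c_wait=1.0, c_idle=0.5, n_sims=20_000):
    """Monte Carlo expected cost of a single-server appointment schedule
    with no-shows; service times ~ gamma matched to (mean, var)."""
    shape, scale = mean ** 2 / var, var / mean
    total = 0.0
    for _ in range(n_sims):
        free, cost = 0.0, 0.0          # time the server next becomes free
        for t in times:
            if rng.random() > p_show:  # customer fails to show
                continue
            s = rng.gamma(shape, scale)
            if free > t:
                cost += c_wait * (free - t)    # customer waits
                free += s
            else:
                cost += c_idle * (t - free)    # server idles
                free = t + s
        total += cost
    return total / n_sims

print(schedule_cost([0, 15, 30, 45, 60, 75]))  # evenly spaced 15-min slots
```

The paper's time-dependent no-show probabilities could be represented by passing a per-slot probability instead of the single p_show used here.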
SAMICS marketing and distribution model
NASA Technical Reports Server (NTRS)
1978-01-01
SAMICS (Solar Array Manufacturing Industry Costing Standards) was formulated as a computer simulation model. Given a proper description of the manufacturing technology as input, this model computes the manufacturing price of solar arrays for a broad range of production levels. This report presents a model for computing the associated marketing and distribution costs, the end point of the model being the loading dock of the final manufacturer.
Accelerating epistasis analysis in human genetics with consumer graphics hardware.
Sinnott-Armstrong, Nicholas A; Greene, Casey S; Cancare, Fabio; Moore, Jason H
2009-07-24
Human geneticists are now capable of measuring more than one million DNA sequence variations from across the human genome. The new challenge is to develop computationally feasible methods capable of analyzing these data for associations with common human disease, particularly in the context of epistasis. Epistasis describes the situation where multiple genes interact in a complex non-linear manner to determine an individual's disease risk and is thought to be ubiquitous for common diseases. Multifactor Dimensionality Reduction (MDR) is an algorithm capable of detecting epistasis. An exhaustive analysis with MDR is often computationally expensive, particularly for high order interactions. This challenge has previously been met with parallel computation and expensive hardware. The option we examine here exploits commodity hardware designed for computer graphics. In modern computers Graphics Processing Units (GPUs) have more memory bandwidth and computational capability than Central Processing Units (CPUs) and are well suited to this problem. Advances in the video game industry have led to an economy of scale creating a situation where these powerful components are readily available at very low cost. Here we implement and evaluate the performance of the MDR algorithm on GPUs. Of primary interest are the time required for an epistasis analysis and the price to performance ratio of available solutions. We found that using MDR on GPUs consistently increased performance per machine over both a feature rich Java software package and a C++ cluster implementation. The performance of a GPU workstation running a GPU implementation reduces computation time by a factor of 160 compared to an 8-core workstation running the Java implementation on CPUs. This GPU workstation performs similarly to 150 cores running an optimized C++ implementation on a Beowulf cluster. Furthermore this GPU system provides extremely cost effective performance while leaving the CPU available for other tasks. The GPU workstation containing three GPUs costs $2000 while obtaining similar performance on a Beowulf cluster requires 150 CPU cores which, including the added infrastructure and support cost of the cluster system, cost approximately $82,500. Graphics hardware based computing provides a cost effective means to perform genetic analysis of epistasis using MDR on large datasets without the infrastructure of a computing cluster.
Rustemeyer, Jan; Melenberg, Alex; Sari-Rieger, Aynur
2014-12-01
This study aims to evaluate the additional costs incurred by using a computer-aided design/computer-aided manufacturing (CAD/CAM) technique for reconstructing maxillofacial defects by analyzing typical cases. The medical charts of 11 consecutive patients who were subjected to the CAD/CAM technique were considered, and invoices from the companies providing the CAD/CAM devices were reviewed for every case. The number of devices used was significantly correlated with cost (r = 0.880; p < 0.001). Significant differences in mean costs were found between cases in which prebent reconstruction plates were used (€3346.00 ± €29.00) and cases in which they were not (€2534.22 ± €264.48; p < 0.001). Significant differences were also obtained between the costs of two, three and four devices, even when ignoring the cost of reconstruction plates. Additional fees provided by statutory health insurance covered a mean of 171.5% ± 25.6% of the cost of the CAD/CAM devices. Since the additional fees provide financial compensation, we believe that the CAD/CAM technique is suited for wide application and not restricted to complex cases. Where additional fees/funds are not available, the CAD/CAM technique might be unprofitable, so the decision whether or not to use it remains a case-to-case decision with respect to cost versus benefit.
Finite element simulation of articular contact mechanics with quadratic tetrahedral elements.
Maas, Steve A; Ellis, Benjamin J; Rawlins, David S; Weiss, Jeffrey A
2016-03-21
Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics.
Northrop, Paul W. C.; Pathak, Manan; Rife, Derek; ...
2015-03-09
Lithium-ion batteries are an important technology to facilitate efficient energy storage and enable a shift from petroleum-based energy to more environmentally benign sources. Such systems can be utilized most efficiently if a good understanding of performance can be achieved for a range of operating conditions. Mathematical models can be useful to predict battery behavior to allow for optimization of design and control. An analytical solution is ideally preferred to solve the equations of a mathematical model, as it eliminates the error that arises when using numerical techniques and is usually computationally cheap. An analytical solution provides insight into the behavior of the system and also explicitly shows the effects of different parameters on the behavior. However, most engineering models, including the majority of battery models, cannot be solved analytically due to non-linearities in the equations and state-dependent transport and kinetic parameters. The numerical method used to solve the system of equations describing a battery operation can have a significant impact on the computational cost of the simulation. In this paper, a reformulation of the porous-electrode pseudo-three-dimensional (P3D) model, which significantly reduces the computational cost of lithium-ion battery simulation while maintaining high accuracy, is discussed. This reformulation enables the use of the P3D model in applications that would otherwise be too computationally expensive to justify its use, such as online control, optimization, and parameter estimation. Furthermore, the P3D model has proven to be robust enough to allow for the inclusion of additional physical phenomena as understanding improves. In this study, the reformulated model is used to incorporate more complicated physical phenomena, including thermal effects.
Perspective: Ab initio force field methods derived from quantum mechanics
NASA Astrophysics Data System (ADS)
Xu, Peng; Guidez, Emilie B.; Bertoni, Colleen; Gordon, Mark S.
2018-03-01
It is often desirable to accurately and efficiently model the behavior of large molecular systems in the condensed phase (thousands to tens of thousands of atoms) over long time scales (from nanoseconds to milliseconds). In these cases, ab initio methods are difficult due to the increasing computational cost with the number of electrons. A more computationally attractive alternative is to perform the simulations at the atomic level using a parameterized function to model the electronic energy. Many empirical force fields have been developed for this purpose. However, the functions that are used to model interatomic and intermolecular interactions contain many fitted parameters obtained from selected model systems, and such classical force fields cannot properly simulate important electronic effects. Furthermore, while such force fields are computationally affordable, they are not reliable when applied to systems that differ significantly from those used in their parameterization. They also cannot provide the information necessary to analyze the interactions that occur in the system, making the systematic improvement of the functional forms that are used difficult. Ab initio force field methods aim to combine the merits of both types of methods. The ideal ab initio force fields are built on first principles and require no fitted parameters. Ab initio force field methods surveyed in this perspective are based on fragmentation approaches and intermolecular perturbation theory. This perspective summarizes their theoretical foundation, key components in their formulation, and discusses key aspects of these methods such as accuracy and formal computational cost. The ab initio force fields considered here were developed for different targets, and this perspective also aims to provide a balanced presentation of their strengths and shortcomings. Finally, this perspective suggests some future directions for this actively developing area.
Computational Modeling as a Design Tool in Microelectronics Manufacturing
NASA Technical Reports Server (NTRS)
Meyyappan, Meyya; Arnold, James O. (Technical Monitor)
1997-01-01
Plans to introduce pilot lines or fabs for 300 mm processing are in progress. IC technology is simultaneously moving towards 0.25/0.18 micron. The convergence of these two trends places unprecedented, stringent demands on processes and equipment. More than ever, computational modeling is called upon to play a complementary role in equipment and process design. The pace of hardware/process development needs a matching pace in software development: an aggressive move towards developing "virtual reactors" is desirable and essential to reduce design cycles and costs. This goal has three elements: a reactor-scale model, a feature-level model, and a database of physical/chemical properties. With these elements coupled, the complete model should function as a design aid in a CAD environment. This talk aims to describe the various elements. At the reactor level, continuum, DSMC (or particle) and hybrid models will be discussed and compared using examples of plasma and thermal process simulations. In microtopography evolution, approaches such as level set methods compete with conventional geometric models. Regardless of the approach, the reliance on empiricism is to be eliminated through coupling to the reactor model and computational surface science. This coupling poses challenging issues of orders-of-magnitude variation in length and time scales. Finally, database development has fallen behind; the current situation is rapidly aggravated by the ever newer chemistries emerging to meet process metrics. The virtual reactor would be a useless concept without an accompanying reliable database that consists of: thermal reaction pathways and rate constants, electron-molecule cross sections, thermochemical properties, transport properties, and finally, surface data on the interaction of radicals, atoms and ions with various surfaces. Large-scale computational chemistry efforts are critical, as experiments alone cannot meet database needs due to the difficulties associated with such controlled experiments and their costs.
Comparative study of manufacturing condyle implant using rapid prototyping and CNC machining
NASA Astrophysics Data System (ADS)
Bojanampati, S.; Karthikeyan, R.; Islam, MD; Venugopal, S.
2018-04-01
Injuries to the cranio-maxillofacial area caused by road traffic accidents (RTAs), falls from height, birth defects, metabolic disorders and tumors affect a rising number of patients in the United Arab Emirates (UAE) and require maxillofacial surgery. Mandibular reconstruction poses a specific challenge in both functionality and aesthetics, and involves replacement of the damaged bone by a custom-made implant. Due to material, design cycle time and manufacturing process time, such implants are in many instances not affordable to patients. In this paper, the feasibility of designing and manufacturing a low-cost, custom-made condyle implant is assessed using two different approaches: rapid prototyping and three-axis computer numerically controlled (CNC) machining. Two candidate rapid prototyping techniques are considered, namely fused deposition modeling (FDM) and three-dimensional printing followed by sand casting. The feasibility of the proposed manufacturing processes is evaluated based on manufacturing time, cost, quality, and reliability.
NASA Astrophysics Data System (ADS)
Kovalevsky, Louis; Langley, Robin S.; Caro, Stephane
2016-05-01
Due to the high cost of experimental EMI measurements, significant attention has focused on numerical simulation. Classical methods such as the Method of Moments or Finite-Difference Time-Domain are not well suited to this type of problem, as they require a fine discretisation of space and fail to take uncertainties into account. In this paper, the authors show that Statistical Energy Analysis (SEA) is well suited to this type of application. SEA is a statistical approach employed to solve high-frequency problems in electromagnetically reverberant cavities at reduced computational cost. The key aspects of this approach are (i) to consider an ensemble of systems that share the same gross parameters, and (ii) to avoid solving Maxwell's equations inside the cavity by using the power balance principle. The output is an estimate of the field magnitude distribution in each cavity. The method is applied to a typical aircraft structure.
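A minimal sketch of the power balance principle for two coupled cavities: at steady state, the power injected into each cavity balances its dissipated power plus the net power exchanged with its neighbor, giving a small linear system for the cavity energies. The loss factors, frequency, and assumed reciprocity of the coupling loss factors are illustrative.

```python
import numpy as np

omega = 2 * np.pi * 1e9               # analysis frequency (rad/s)
eta1, eta2 = 1e-3, 2e-3               # damping loss factors of cavities 1, 2
eta12 = eta21 = 5e-4                  # coupling loss factors (assumed equal)

# power balance: P_in = omega * L @ E, with E the vector of cavity energies
L = np.array([[eta1 + eta12, -eta21],
              [-eta12,        eta2 + eta21]])
P_in = np.array([1.0, 0.0])           # 1 W injected into cavity 1 only

E = np.linalg.solve(omega * L, P_in)  # ensemble-mean energies (J)
print(E)                              # field magnitude estimates follow from E
```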
Pricing strategy in a dual-channel and remanufacturing supply chain system
NASA Astrophysics Data System (ADS)
Jiang, Chengzhi; Xu, Feng; Sheng, Zhaohan
2010-07-01
This article addresses pricing strategy problems in a supply chain system where the manufacturer sells original and remanufactured products via an indirect retailer channel and a direct Internet channel. Due to the complexity of that system, agent technologies, which provide a new way of analysing complex systems, are used for modelling. Meanwhile, in order to reduce the computational load of the search for optimal prices and profits, a learning search algorithm is designed and implemented within the multi-agent supply chain model. The simulation results show that the proposed model can find optimal prices of original and remanufactured products in both channels, which lead to optimal profits for the manufacturer and the retailer. It is also found that optimal profits are increased by introducing the direct channel and remanufacturing. Furthermore, the effects of customer preference, direct channel cost and remanufactured unit cost on optimal prices and profits are examined.