Sample records for implementation matrix spent

  1. Rice Protein Matrix Enhances Circulating Levels of Xanthohumol Following Acute Oral Intake of Spent Hops in Humans.

    PubMed

    O'Connor, Annalouise; Konda, Veera; Reed, Ralph L; Christensen, J Mark; Stevens, Jan F; Contractor, Nikhat

    2018-03-01

    Xanthohumol (XN), a prenylated flavonoid found in hops, exhibits anti-inflammatory and antioxidant properties. However, poor bioavailability may limit therapeutic applications. As food components are known to modulate polyphenol absorption, the objective is to determine whether a protein matrix could enhance the bioavailability of XN after oral consumption in humans. This is a randomized, double-blind, crossover study in healthy participants (n = 6) evaluating XN and its major metabolites (isoxanthohumol [IX], 6- and 8-prenylnaringenin [6-PN, 8-PN]) for 6 h following consumption of 12.4 mg of XN delivered via a spent hops-rice protein matrix preparation or a control spent hops preparation. Plasma XN and metabolites are measured by LC-MS/MS. Cmax, Tmax, and area-under-the-curve (AUC) values are determined. The circulating XN and metabolite responses to the two treatments were not bioequivalent. Plasma concentrations of XN and XN + metabolites (AUC) are greater with consumption of the spent hops-rice protein matrix preparation. Compared to a standard spent hops powder, a protein-rich spent hops matrix demonstrates enhanced plasma levels of XN and metabolites following acute oral intake. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
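    The pharmacokinetic endpoints named above are simple to compute from a concentration-time profile. A minimal sketch with entirely hypothetical sampling times and plasma concentrations (not the study's data), showing non-compartmental Cmax, Tmax, and a linear-trapezoidal AUC:

```python
# Minimal sketch with hypothetical numbers (not the study's data):
# non-compartmental Cmax, Tmax and linear-trapezoidal AUC(0-6 h).
import numpy as np

t_h = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])   # sampling times, h (assumed)
conc = np.array([0.0, 1.8, 2.6, 2.1, 1.2, 0.6])  # plasma XN, ng/mL (assumed)

cmax = conc.max()
tmax = t_h[conc.argmax()]
auc = float(np.sum(np.diff(t_h) * (conc[:-1] + conc[1:]) / 2.0))  # trapezoids
print(f"Cmax = {cmax} ng/mL at Tmax = {tmax} h; AUC(0-6 h) = {auc:.2f} ng*h/mL")
```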

  2. Analysis of Transportation Options for Commercial Spent Fuel in the U.S.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinina, Elena; Busch, Ingrid Karin

    The U.S. Department of Energy (DOE) is laying the groundwork for implementing interim storage and associated transportation of spent nuclear fuel (SNF) and high-level waste…

  3. An experimental study on Sodalite and SAP matrices for immobilization of spent chloride salt waste

    NASA Astrophysics Data System (ADS)

    Giacobbo, Francesca; Da Ros, Mirko; Macerata, Elena; Mariani, Mario; Giola, Marco; De Angelis, Giorgio; Capone, Mauro; Fedeli, Carlo

    2018-02-01

    Within the framework of Generation IV reactors, there is renewed interest in pyro-processing of spent nuclear fuel. Molten chloride salt waste arising from the recovery of uranium and plutonium through pyro-processing is one of the problematic wastes for direct application of vitrification or ceramization. In this work, Sodalite and SAP have been evaluated and compared as potential matrices for confinement of spent chloride salt waste coming from pyro-processing. To this aim, Sodalite and SAP were synthesized both in pure form and mixed with different glass matrices, i.e. commercially available glass frit and borosilicate glass. The confining matrices were loaded with mixed chloride salts to study their retention capacities with respect to the elements of interest. The matrices were characterized and leached for contact times up to 150 days at room temperature and at 90 °C. SEM analyses were also performed in order to compare the matrix surface before and after leaching. Leaching results are discussed and compared, in terms of normalized releases, with similar results reported in the literature. According to this comparative study, the SAP matrix with glass frit binder proved to be the best of the matrices studied with respect to retention of both matrix and spent fuel elements.

  4. An optimized and low-cost FPGA-based DNA sequence alignment--a step towards personal genomics.

    PubMed

    Shah, Hurmat Ali; Hasan, Laiq; Ahmad, Nasir

    2013-01-01

    DNA sequence alignment is a cardinal process in computational biology, but it is also computationally expensive when performed on traditional platforms such as CPUs. Of the many off-the-shelf platforms explored for speeding up the computation, the FPGA stands out as the best candidate due to its performance per dollar spent and performance per watt. These two advantages make the FPGA an appropriate choice for realizing the aim of personal genomics. Previous FPGA implementations of DNA sequence alignment did not take into consideration the price of the device on which the optimization was performed. This paper presents optimizations over a previous FPGA implementation that increase the overall speed-up achieved while reducing the cost of the platform. The optimizations are: (1) the array of processing elements is made to run on changes in input value rather than on a clock, eliminating the need for tight clock synchronization; (2) the implementation is unrestrained by the size of the sequences to be aligned; (3) the waiting time required for the sequences to load to the FPGA is reduced to the minimum possible; and (4) an efficient method is devised for storing the output matrix that makes it possible to save the diagonal elements for use in the next pass, in parallel with the computation of the output matrix. Implemented on a Spartan3 FPGA, this design achieved a 20-fold performance improvement in terms of CUPS (cell updates per second) over a GPP implementation.
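    The pass-to-pass reuse in optimization (4) is easiest to see in software. Below is a minimal sketch (an illustration, not the paper's FPGA design): Smith-Waterman scored over the reference in vertical strips, carrying the boundary column between passes so sequences longer than the processing-element array can still be aligned; all scoring values are assumed.

```python
# Minimal sketch (illustrative, not the paper's FPGA code): Smith-Waterman
# computed in vertical strips, carrying the boundary column between passes so
# the alignment is unrestrained by sequence length. Scoring values assumed.
import numpy as np

MATCH, MISMATCH, GAP = 2, -1, -1  # assumed linear-gap scoring scheme

def sw_strips(query: str, ref: str, strip: int = 4) -> int:
    best = 0
    prev_col = np.zeros(len(query) + 1, dtype=int)  # boundary saved between passes
    for s in range(0, len(ref), strip):
        block = ref[s:s + strip]
        H = np.zeros((len(query) + 1, len(block) + 1), dtype=int)
        H[:, 0] = prev_col  # reuse the saved column, mirroring pass-to-pass reuse
        for i in range(1, len(query) + 1):
            for j in range(1, len(block) + 1):
                diag = H[i - 1, j - 1] + (MATCH if query[i - 1] == block[j - 1] else MISMATCH)
                H[i, j] = max(0, diag, H[i - 1, j] + GAP, H[i, j - 1] + GAP)
        best = max(best, int(H.max()))
        prev_col = H[:, -1].copy()
    return best

print(sw_strips("GATTACA", "GCATGCT"))  # toy sequences
```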

  5. Process for immobilizing plutonium into vitreous ceramic waste forms

    DOEpatents

    Feng, Xiangdong; Einziger, Robert E.

    1997-01-01

    Disclosed is a method for converting spent nuclear fuel and surplus plutonium into a vitreous ceramic final waste form wherein spent nuclear fuel is bound in a crystalline matrix which is in turn bound within glass.

  6. Process for immobilizing plutonium into vitreous ceramic waste forms

    DOEpatents

    Feng, X.; Einziger, R.E.

    1997-08-12

    Disclosed is a method for converting spent nuclear fuel and surplus plutonium into a vitreous ceramic final waste form wherein spent nuclear fuel is bound in a crystalline matrix which is in turn bound within glass.

  7. Process for immobilizing plutonium into vitreous ceramic waste forms

    DOEpatents

    Feng, X.; Einziger, R.E.

    1997-01-28

    Disclosed is a method for converting spent nuclear fuel and surplus plutonium into a vitreous ceramic final waste form wherein spent nuclear fuel is bound in a crystalline matrix which is in turn bound within glass.

  8. 76 FR 2243 - List of Approved Spent Fuel Storage Casks: NUHOMS ® HD System Revision 1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-13

    ... the requirements of reconstituted fuel assemblies; add requirements to qualify metal matrix composite neutron absorbers with integral aluminum cladding; clarify the...

  9. Due diligence in the characterization of matrix effects in a total IL-13 Singulex™ method.

    PubMed

    Fraser, Stephanie; Soderstrom, Catherine

    2014-04-01

    After obtaining her PhD in Cellular and Molecular Biology from the University of Nevada, Reno, Stephanie has spent the last 15 years in the field of bioanalysis. She has held positions in academia, biotech, contract research and large pharma, where she has managed ligand binding assay (discovery to Phase IIb clinical) and flow cytometry (preclinical) laboratories as well as taken the lead on implementing new/emergent technologies. Currently Stephanie leads Pfizer's Regulated Bioanalysis Ligand Binding Assay group, focusing on early clinical biomarker support. Interleukin (IL)-13, a Th2 cytokine, drives a range of physiological responses associated with the induction of allergic airway diseases and inflammatory bowel diseases. Analysis of IL-13 as a biomarker has provided insight into its role in disease mechanisms and progression. Serum IL-13 concentrations are often too low to be measured by standard enzyme-linked immunosorbent assay techniques, necessitating the implementation of a highly sensitive assay. Previously, the validation of a Singulex™ Erenna® assay for the quantitation of IL-13 was reported. Herein we describe refinement of this validation: defining the impact of matrix interference on the lower limit of quantification, adding spiked matrix QC samples, and extending endogenous IL-13 stability. A fit-for-purpose validation was conducted and the assay was used to support a Phase II clinical trial.

  10. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    PubMed

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences at the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore, we find that UNI is usually faster than EVD.
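    The EXPM approach can be sketched compactly. The following is a minimal illustration (not the authors' R code), using Van Loan's block-matrix identity to evaluate the integral of matrix exponentials that gives the expected time spent in a state, conditioned on the chain's end-points; the rate matrix values are toy assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation) of the EXPM idea:
# expected time spent in state i on [0, t], conditioned on start a and end b,
# via Van Loan's identity: expm([[Q, E], [0, Q]] * t) has upper-right block
# equal to the integral of e^{Q(t-s)} E e^{Qs} ds over [0, t].
import numpy as np
from scipy.linalg import expm

def expected_time_in_state(Q, t, a, b, i):
    n = Q.shape[0]
    E = np.zeros((n, n))
    E[i, i] = 1.0                      # indicator "reward" for occupying state i
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = Q * t
    M[:n, n:] = E * t
    M[n:, n:] = Q * t
    integral = expm(M)[:n, n:]         # integral of e^{Q(t-s)} E e^{Qs} ds
    P = expm(Q * t)                    # transition probabilities over [0, t]
    return integral[a, b] / P[a, b]    # condition on the observed end-points

# Toy 2-state rate matrix (rows sum to zero); values are illustrative.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
print(expected_time_in_state(Q, t=1.0, a=0, b=1, i=0))
```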

  11. Evaluation of Li3N accumulation in a fused LiCl/Li salt matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eberle, C.S.

    1998-09-01

    Pyrochemical conditioning of spent nuclear fuel for the purpose of final disposal is currently being demonstrated at Argonne National Laboratory (ANL), and ongoing research in this area includes the demonstration of this process on spent oxide fuel. In conjunction with this research, a pilot scale of the preprocessing stage is being designed by ANL-West to demonstrate the in situ hot cell capability of the chemical reduction process. An impurity evaluation was completed for a Li/LiCl salt matrix in the presence of spent light water reactor uranium oxide fuel. A simple analysis was performed in which the sources of impurities in the salt matrix were only from the cell atmosphere. Only reactions with the lithium were considered. The levels of impurities were shown to be highly sensitive to system conditions. A predominance diagram for the Li-O-N system was constructed for the device, and the general oxidation, nitridation, and combined reactions were calculated as a function of oxygen and nitrogen partial pressure. These calculations and hot cell atmosphere data were used to determine the total number and type of impurities expected in the salt matrix, and the mass rate for the device was determined.

  12. Minor actinide transmutation in thorium and uranium matrices in heavy water moderated reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatti, Zaki; Hyland, B.; Edwards, G.W.R.

    2013-07-01

    The irradiation of Th-232 breeds fewer of the problematic minor actinides (Np, Am, Cm) than the irradiation of U-238. This characteristic makes thorium an attractive potential matrix for the transmutation of these minor actinides, as these species can be transmuted without the creation of new actinides as is the case with a uranium fuel matrix. Minor actinides are the main contributors to long term decay heat and radiotoxicity of spent fuel, so reducing their concentration can greatly increase the capacity of a long term deep geological repository. Mixing minor actinides with thorium, three times more common in the Earth's crust than natural uranium, has the additional advantage of improving the sustainability of the fuel cycle. In this work, lattice cell calculations have been performed to determine the results of transmuting minor actinides from light water reactor spent fuel in a thorium matrix. 15-year-cooled group-extracted transuranic elements (Np, Pu, Am, Cm) from light water reactor (LWR) spent fuel were used as the fissile component in a thorium-based fuel in a heavy water moderated reactor (HWR). The minor actinide (MA) transmutation rates, spent fuel activity, decay heat and radiotoxicity are compared with those obtained when the MA were mixed instead with natural uranium and taken to the same burnup. Each bundle contained a central pin containing a burnable neutron absorber whose initial concentration was adjusted to have the same reactivity response (in units of the delayed neutron fraction β) for coolant voiding as standard NU fuel. (authors)

  13. A Sparse Self-Consistent Field Algorithm and Its Parallel Implementation: Application to Density-Functional-Based Tight Binding.

    PubMed

    Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias

    2014-06-10

    We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to exploit efficiently modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate size systems (1000-100 000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion.
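    The sparsity exploited by local molecular orbitals can be illustrated in a few lines. A minimal sketch (an assumption-laden illustration, not the authors' implementation): orbital coefficients below a cutoff are zeroed, and a density-like matrix P = C C^T is then built with sparse algebra, the cutoff playing the role of the controlled truncation described above.

```python
# Minimal sketch (assumption, not the authors' code): exploit locality by
# zeroing small molecular-orbital coefficients before building P = C C^T
# with sparse linear algebra.
import numpy as np
import scipy.sparse as sp

def sparse_density(C_occ: np.ndarray, cutoff: float = 1e-6) -> sp.csr_matrix:
    C = np.where(np.abs(C_occ) < cutoff, 0.0, C_occ)  # controlled truncation
    C = sp.csr_matrix(C)
    return (C @ C.T).tocsr()  # sparse-sparse product stays sparse for local orbitals

# Toy example with random near-local orbitals (illustrative only).
rng = np.random.default_rng(0)
C_occ = rng.normal(size=(100, 10)) * (rng.random((100, 10)) < 0.05)
P = sparse_density(C_occ)
print(P.nnz, "nonzeros of", P.shape[0] * P.shape[1])
```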

  14. Fabrication of simulated DUPIC fuel

    NASA Astrophysics Data System (ADS)

    Kang, Kweon Ho; Song, Ki Chan; Park, Hee Sung; Moon, Je Sun; Yang, Myung Seung

    2000-12-01

    Simulated DUPIC fuel provides a convenient way to investigate DUPIC fuel properties and behavior, such as thermal conductivity, thermal expansion, fission gas release and leaching, without the complications of handling radioactive materials. Several pellets simulating the composition and microstructure of DUPIC fuel were fabricated by resintering powder that had been treated through the OREOX process from simulated spent PWR fuel pellets, themselves prepared from a mixture of UO2 and stable forms of constituent nuclides. The key issues for producing simulated pellets that replicate the phases and microstructure of irradiated fuel are achieving a submicrometre dispersion during mixing and diffusional homogeneity during sintering. This study describes the powder treatment, OREOX, compaction and sintering used to fabricate simulated DUPIC fuel from the simulated spent PWR fuel. The homogeneity of additives in the powder was observed after attrition milling. The microstructure of the simulated spent PWR fuel agrees well with other studies. The leading structural features observed are as follows: rare earth and other oxides dissolved in the UO2 matrix, small metallic precipitates distributed throughout the matrix, and a perovskite phase finely dispersed on grain boundaries.

  15. Improving efficiency and safety in external beam radiation therapy treatment delivery using a Kaizen approach.

    PubMed

    Kapur, Ajay; Adair, Nilda; O'Brien, Mildred; Naparstek, Nikoleta; Cangelosi, Thomas; Zuvic, Petrina; Joseph, Sherin; Meier, Jason; Bloom, Beatrice; Potters, Louis

    Modern external beam radiation therapy treatment delivery processes potentially increase the number of tasks to be performed by therapists and thus opportunities for errors, yet the need to treat a large number of patients daily requires a balanced allocation of time per treatment slot. The goal of this work was to streamline the underlying workflow in such time-interval-constrained processes to enhance both execution efficiency and active safety surveillance using a Kaizen approach. A Kaizen project was initiated by mapping the workflow within each treatment slot for 3 Varian TrueBeam linear accelerators. More than 90 steps were identified, and average execution times for each were measured. The time-consuming steps were stratified into a 2 × 2 matrix arranged by potential workflow improvement versus the level of corrective effort required. A work plan was created to launch initiatives with high potential for workflow improvement but modest effort to implement. Time spent on safety surveillance and average durations of treatment slots were used to assess corresponding workflow improvements. Three initiatives were implemented to mitigate unnecessary therapist motion, overprocessing of data, and wait time for data transfer defects, respectively. A fourth initiative was implemented to make the division of labor by treating therapists, as well as peer review, more explicit. The average duration of treatment slots was reduced by 6.7% in the 9 months following implementation of the initiatives (P = .001). A reduction of 21% in duration of treatment slots was observed on 1 of the machines (P < .001). Time spent on safety reviews remained the same (20% of the allocated interval), but the peer review component increased. The Kaizen approach has the potential to improve operational efficiency and safety with quick turnaround in radiation therapy practice by addressing non-value-adding steps characteristic of individual department workflows. Higher-effort opportunities are identified to guide continual downstream quality improvements. Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  16. First Industrial Tests of a Matrix Monitor Correction for the Differential Die-away Technique of Historical Waste Drums

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, Rodolphe; Passard, Christian; Perot, Bertrand

    2015-07-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA NC La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the coming years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (LMN) of CEA Cadarache has studied a matrix effect correction method based on a drum monitor, namely a 3He proportional counter located inside the measurement cavity. After feasibility studies performed with LMN's PROMETHEE 6 laboratory measurement cell and with MCNPX simulations, this paper presents the first experimental tests performed on the industrial ACC (hulls and nozzles compaction facility) measurement system. A calculation vs. experiment benchmark has been carried out by performing dedicated calibration measurements with a representative drum and U-235 samples. The comparison between calculation and experiment shows a satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes. (authors)

  17. Biotreatment of refinery spent-sulfidic caustic using an enrichment culture immobilized in a novel support matrix.

    PubMed

    Conner, J A; Beitle, R R; Duncan, K; Kolhatkar, R; Sublette, K L

    2000-01-01

    Sodium hydroxide solutions are used in petroleum refining to remove hydrogen sulfide (H2S) and mercaptans from various hydrocarbon streams. The resulting sulfide-laden waste stream is called spent-sulfidic caustic. An aerobic enrichment culture was previously developed using a gas mixture of H2S and methyl-mercaptan (MeSH) as the sole energy source. This culture has now been immobilized in a novel support matrix, DuPont BIO-SEP beads, and is used to bio-treat a refinery spent-sulfidic caustic containing both inorganic sulfide and mercaptans in a continuous flow, fluidized-bed column bioreactor. Complete oxidation of both inorganic and organic sulfur to sulfate was observed with no breakthrough of H2S and < 2 ppmv of MeSH produced in the bioreactor outlet gas. Excessive buildup of sulfate (> 12 g/L) in the bioreactor medium resulted in an upset condition evidenced by excessive MeSH breakthrough. Therefore, bioreactor performance was limited by the steady-state sulfate concentration. Further improvement in volumetric productivity of a bioreactor system based on this enrichment culture will be dependent on maintenance of sulfate concentrations below inhibitory levels.

  18. 75 FR 25120 - List of Approved Spent Fuel Storage Casks: NUHOMS® HD System Revision 1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-07

    ...-235, clarify the requirements of reconstituted fuel assemblies, add requirements to qualify metal matrix composite neutron absorbers with integral aluminum cladding, delete use of nitrogen for draining...

  19. Dry transfer system for spent fuel: Project report, A system designed to achieve the dry transfer of bare spent fuel between two casks. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, D.M.; Guerra, G.; Neider, T.

    1995-12-01

    This report describes the system developed by EPRI/DOE for the dry transfer of spent fuel assemblies outside the reactor spent fuel pool. The system is designed to allow spent fuel assemblies to be removed from a spent fuel pool in a small cask, transported to the transfer facility, and transferred to a larger cask, either for off-site transportation or on-site storage. With design modifications, this design is capable of transferring single spent fuel assemblies from dry storage casks to transportation casks or vice versa. One incentive for the development of this design is that utilities with limited lifting capacity or other physical or regulatory constraints are limited in their ability to utilize the current, more efficient transportation and storage cask designs. In addition, DOE, in planning to develop and implement the multi-purpose canister (MPC) system for the Civilian Radioactive Waste Management System, included the concept of an on-site dry transfer system to support the implementation of the MPC system at reactors with limitations that preclude the handling of the MPC system transfer casks. This Dry Transfer System can also be used at reactors with decommissioned spent fuel pools and fuel in dry storage in non-MPC systems to transfer fuel into transportation casks. It can also be used at off-reactor-site interim storage facilities for the same purpose.

  20. Estimating the time for dissolution of spent fuel exposed to unlimited water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leider, H.R.; Nguyen, S.N.; Stout, R.B.

    1991-12-01

    The release of radionuclides from spent fuel cannot be precisely predicted at this point because a satisfactory dissolution model based on specific chemical processes is not yet available. However, preliminary results on the dissolution rate of UO2 and spent fuel as a function of temperature and water composition have recently been reported. This information, together with data on the fragment size distribution of spent fuel, is used to estimate the dissolution response of spent fuel in excess flowing water within the framework of a simple model. In this model, the reaction/dissolution front advances linearly with time and geometry is preserved. The model also estimates the dissolution rate of the bulk of the fission products and higher actinides, which are uniformly distributed in the UO2 matrix and are presumed to dissolve congruently. We have used an actually observed fuel fragment distribution to calculate the time for total dissolution of spent fuel. A worst-case estimate was also made using the initial (maximum) rate of dissolution to predict the total dissolution time. The time for total dissolution of centimeter-size particles is estimated to be 5.5 × 10^4 years at 25 °C.
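    The linear-front model reduces to simple arithmetic. A back-of-the-envelope sketch (all parameter values below are illustrative assumptions, not the report's data): a fragment of half-thickness r dissolves in t = r / v, where the front velocity v follows from a surface dissolution rate and the UO2 density.

```python
# Back-of-the-envelope sketch; the rate constant is an assumption, not the
# report's measurement. Linear front: t_total = half_thickness / velocity.
RATE_G_M2_DAY = 0.004   # assumed surface dissolution rate, g m^-2 day^-1
DENSITY_G_CM3 = 10.96   # UO2 theoretical density

v_cm_per_yr = (RATE_G_M2_DAY * 365.25) / (DENSITY_G_CM3 * 1e4)  # g/m^2/yr -> cm/yr
r_cm = 0.5              # half-thickness of a centimeter-size fragment
print(f"front velocity ~ {v_cm_per_yr:.2e} cm/yr, "
      f"total dissolution ~ {r_cm / v_cm_per_yr:.1e} yr")
```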

  21. Determination of mercury distribution inside spent compact fluorescent lamps by atomic absorption spectrometry.

    PubMed

    Rey-Raap, Natalia; Gallardo, Antonio

    2012-05-01

    In this study, spent compact fluorescent lamps were characterized to determine the distribution of mercury. The procedure used in this research allowed mercury to be extracted from the vapor phase, the phosphor powder, and the glass matrix. Mercury concentration in the three phases was determined by cold vapor atomic absorption spectrometry. Median values obtained in the study showed that a compact fluorescent lamp contained 24.52 ± 0.4 ppb of mercury in the vapor phase, 204.16 ± 8.9 ppb in the phosphor powder, and 18.74 ± 0.5 ppb in the glass matrix. There are differences in mercury concentration between lamps, since the year of manufacture and the hours of operation affect both mercury content and its distribution. Of the mercury introduced into a compact fluorescent lamp, 85.76% becomes a component of the phosphor powder, while more than 13.66% diffuses through the glass matrix. By washing off all phosphor powder attached to the glass surface, it is possible to classify the glass as non-hazardous waste. Copyright © 2011 Elsevier Ltd. All rights reserved.

  22. Development of an Engineered Product Storage Concept for the UREX+1 Combined Transuranic/Lanthanide Product Streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dr. Sean M. McDeavitt; Thomas J. Downar; Dr. Temitope A. Taiwo

    2009-03-01

    The U.S. Department of Energy is developing next-generation processing methods to recycle uranium and transuranic (TRU) isotopes from spent nuclear fuel. The objective of the 3-year project described in this report was to develop near-term options for storing TRU oxides isolated through the uranium extraction (UREX+) process. More specifically, a Zircaloy matrix cermet was developed as a storage form for transuranics, with the understanding that the cermet also has the ability to serve as an inert matrix fuel form for TRU burning after intermediate storage. The goals of this research project were: 1) to develop the processing steps required to transform the effluent TRU nitrate solutions and the spent Zircaloy cladding into a zirconium matrix cermet storage form; and 2) to evaluate the impact of phenomena that govern durability of the storage form, material processing, and TRU utilization in fast reactor fuel. This report represents a compilation of the results generated under this program. The information is presented as a brief technical narrative in the following sections, with appended papers, presentations and academic theses providing a detailed review of the project's accomplishments.

  23. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Bo-Young; Choi, Daewoong; Park, Se Hwan

    The Korea Atomic Energy Research Institute (KAERI) has been developing the design and deployment methodology of a Laser-Induced Breakdown Spectroscopy (LIBS) instrument for safeguards application within the argon hot cell environment at the Advanced spent fuel Conditioning Process Facility (ACPF), a facility being refurbished for the laboratory-scale demonstration of an advanced spent fuel conditioning process. LIBS is an analysis technology used to measure the emission spectra of excited elements in the local plasma of a target material induced by a laser. The spectra measured by LIBS are analyzed to verify the quality and quantity of the specific element in the target matrix. Recently LIBS has been recognized as a promising technology for safeguards purposes in terms of several advantages, including simple sample preparation and in-situ analysis capability. In particular, a feasibility study of LIBS to remotely monitor the nuclear material in a high radiation environment has been carried out to support IAEA safeguards implementation. Fiber-Optic LIBS (FO-LIBS) deployment was proposed by Applied Photonics Ltd because the use of fiber optics benefits LIBS applications by delivering the laser energy to the target and by collecting the plasma light. The design of the FO-LIBS instrument for the measurement of actinides in the spent fuel and high-temperature molten salt at ACPF was developed in cooperation with Applied Photonics Ltd. FO-LIBS has advantages as follows: the detectable plasma light wavelength range is not limited by the optical properties of the thick lead-glass shield window, and the potential risk of laser damage to the lead-glass shield window need not be considered. The remote LIBS instrument was installed at ACPF and the feasibility study for monitoring actinide elements such as uranium, plutonium, and curium in process materials has been carried out. (authors)

  24. Time Spent on Dedicated Patient Care and Documentation Tasks Before and After the Introduction of a Structured and Standardized Electronic Health Record.

    PubMed

    Joukes, Erik; Abu-Hanna, Ameen; Cornet, Ronald; de Keizer, Nicolette F

    2018-01-01

    Physicians spend around 35% of their time documenting patient data. They are concerned that adopting a structured and standardized electronic health record (EHR) will lead to more time documenting and less time for patient care, especially during consultations. This study measures the effect of the introduction of a structured and standardized EHR on documentation time and time for dedicated patient care during outpatient consultations. We measured physicians' time spent on four task categories during outpatient consultations: documentation, patient care, peer communication, and other activities. Physicians covered various specialties from two university hospitals that jointly implemented a structured and standardized EHR. Before implementation, one hospital used a legacy EHR and the other primarily paper-based records. The same physicians were observed 2 to 6 months before and 6 to 8 months after implementation. We analyzed consultation duration and the percentage of time spent on each task category. Differences in time distribution before and after implementation were tested using multilevel linear regression. We observed 24 physicians (162 hours, 439 consultations). We found no significant difference in consultation duration or number of consultations per hour. In the legacy-EHR center, we found the implementation associated with a significant decrease in time spent on dedicated patient care (-8.5%). In contrast, in the previously paper-based center, we found a significant increase in dedicated time spent on documentation (8.3%) and a decrease in time on combined patient care and documentation (-4.6%). The effect on dedicated documentation time significantly differed between centers. Implementation of a structured and standardized EHR was associated with an 8.5% decrease in time for dedicated patient care during consultations in one center and an 8.3% increase in dedicated documentation time in the other. These results are in line with physicians' concerns that the introduction of a structured and standardized EHR might lead to more documentation burden and less time for dedicated patient care. Schattauer GmbH Stuttgart.

  25. Mechanical Characterization of Thermomechanical Matrix Residual Stresses Incurred During MMC Processing

    NASA Technical Reports Server (NTRS)

    Castelli, Michael G.

    1998-01-01

    In recent years, much effort has been spent examining the residual stress-strain states of advanced composites. Such examinations are motivated by a number of significant concerns that affect composite development, processing, and analysis. The room-temperature residual stress states incurred in many advanced composite systems are often quite large and can introduce damage even prior to the first external mechanical loading of the material. These stresses, which are induced during the cooldown following high-temperature consolidation, result from the coefficient of thermal expansion mismatch between the fiber and matrix. Experimental techniques commonly used to evaluate composite internal residual stress states are non-mechanical in nature and generally include forms of x-ray and neutron diffraction. Such approaches are usually complex, involving a number of assumptions and limitations associated with a wide range of issues, including the depth of penetration, the volume of material being assessed, and erroneous effects associated with oriented grains. Furthermore, and more important to the present research, these techniques can assess only "single time" stress in the composite. That is, little, if any, information is obtained that addresses the time-dependent point at which internal stresses begin to accumulate, the manner in which the accumulation occurs, and the presiding relationships between thermoelastic, thermoplastic, and thermoviscous behaviors. To address these critical issues, researchers at the NASA Lewis Research Center developed and implemented an innovative mechanical test technique to examine in real time, the time-dependent thermomechanical stress behavior of a matrix alloy as it went through a consolidation cycle.

  26. Zirconia-magnesia inert matrix fuel and waste form: Synthesis, characterization and chemical performance in an advanced fuel cycle

    NASA Astrophysics Data System (ADS)

    Holliday, Kiel Steven

    There is a significant buildup in plutonium stockpiles throughout the world, because of spent nuclear fuel and the dismantling of weapons. The radiotoxicity of this material and proliferation risk has led to a desire for destroying excess plutonium. To do this effectively, it must be fissioned in a reactor as part of a uranium free fuel to eliminate the generation of more plutonium. This requires an inert matrix to volumetrically dilute the fissile plutonium. Zirconia-magnesia dual phase ceramic has been demonstrated to be a favorable material for this task. It is neutron transparent, zirconia is chemically robust, magnesia has good thermal conductivity and the ceramic has been calculated to conform to current economic and safety standards. This dissertation contributes to the knowledge of zirconia-magnesia as an inert matrix fuel to establish behavior of the material containing a fissile component. First, the zirconia-magnesia inert matrix is synthesized in a dual phase ceramic containing a fissile component and a burnable poison. The chemical constitution of the ceramic is then determined. Next, the material performance is assessed under conditions relevant to an advanced fuel cycle. Reactor conditions were assessed with high temperature, high pressure water. Various acid solutions were used in an effort to dissolve the material for reprocessing. The ceramic was also tested as a waste form under environmental conditions, should it go directly to a repository as a spent fuel. The applicability of zirconia-magnesia as an inert matrix fuel and waste form was tested and found to be a promising material for such applications.

  27. Optical implementation of systolic array processing

    NASA Technical Reports Server (NTRS)

    Caulfield, H. J.; Rhodes, W. T.; Foster, M. J.; Horvitz, S.

    1981-01-01

    Algorithms for matrix vector multiplication are implemented using acousto-optic cells for multiplication and input data transfer and using charge coupled devices detector arrays for accumulation and output of the results. No two dimensional matrix mask is required; matrix changes are implemented electronically. A system for multiplying a 50 component nonnegative real vector by a 50 by 50 nonnegative real matrix is described. Modifications for bipolar real and complex valued processing are possible, as are extensions to matrix-matrix multiplication and multiplication of a vector by multiple matrices.
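    The scheme is easy to emulate numerically. A minimal sketch (an illustration of the accumulation principle only, not the optical hardware): one vector component enters per time step, a column of the matrix modulates it, and the detector array integrates the partial products, so no two-dimensional matrix mask is ever stored.

```python
# Minimal sketch (illustrative, not the acousto-optic hardware): systolic
# matrix-vector multiplication by per-step accumulation of column products.
import numpy as np

def systolic_matvec(A: np.ndarray, x: np.ndarray) -> np.ndarray:
    acc = np.zeros(A.shape[0])     # plays the role of the CCD accumulator array
    for j, xj in enumerate(x):     # one vector component per "clock" step
        acc += A[:, j] * xj        # a column of A modulates the light; detector integrates
    return acc

A = np.arange(9, dtype=float).reshape(3, 3)
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(systolic_matvec(A, x), A @ x)
```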

  28. Diffusion of radiogenic helium in natural uranium oxides

    NASA Astrophysics Data System (ADS)

    Roudil, Danièle; Bonhoure, Jessica; Pik, Raphaël; Cuney, Michel; Jégou, Christophe; Gauthier-Lafaye, F.

    2008-08-01

    The issue of nuclear waste management - and especially spent fuel disposal - demands further research on the long-term behavior of helium and its impact on physical changes in UO2 and (U,Pu)O2 matrices subjected to self-irradiation. Helium produced by radioactive decay of the actinides concentrates in the grains or is trapped at the grain boundaries. Various scenarios can be considered, and can have a significant effect on the radionuclide source terms that will be accessible to water after the canisters have been breached. Helium production and matrix damage are generally simulated by external irradiation or with actinide-doped materials. Here, a natural uranium oxide sample was studied to acquire data on the behavior of radiogenic helium and its diffusion under self-irradiation in spent fuel. The sample, from the Pen Ar Ran deposit in the Vendée region of France and dated at 320 ± 9 Ma, was selected for its simple geological history, making it a suitable natural analog of spent fuel under repository conditions during the initial period in a closed system not subject to mass transfer with the surrounding environment. Helium outgassing, measured by mass spectrometry to determine the He diffusion coefficients through the ore, shows that: (i) a maximum of 5% (2.1% on average) of the helium produced during the last 320 Ma in this natural analog was conserved; (ii) about 33% of the residual helium is occluded in the matrix and vacancy defects (about 10^-5 mol g^-1) and 67% in bubbles that were analyzed by HRTEM. A similar distribution has been observed in spent fuel and in (U0.9,Pu0.1)O2. The results obtained for the natural Pen Ar Ran sample can be applied by analogy to spent fuel, especially in terms of the apparent solubility limit and the formation, characteristics and behavior of the helium bubbles.

  29. An analysis of the technical status of high level radioactive waste and spent fuel management systems

    NASA Technical Reports Server (NTRS)

    English, T.; Miller, C.; Bullard, E.; Campbell, R.; Chockie, A.; Divita, E.; Douthitt, C.; Edelson, E.; Lees, L.

    1977-01-01

    The technical status of the old U.S. mainline program for high level radioactive nuclear waste management, and of the newly developing program for disposal of unreprocessed spent fuel, was assessed. The method of long-term containment for both of these waste forms is considered to be deep geologic isolation in bedded salt. Each major component of both waste management systems is analyzed in terms of its scientific feasibility, technical achievability and engineering achievability. The resulting matrix leads to a systematic identification of major unresolved technical or scientific questions and/or gaps in these programs.

  30. Bio-dissolution of Ni, V and Mo from spent petroleum catalyst using iron oxidizing bacteria.

    PubMed

    Pradhan, Debabrata; Kim, Dong J; Roychaudhury, Gautam; Lee, Seoung W

    2010-01-01

    Bioleaching studies of spent petroleum catalyst containing Ni, V and Mo were carried out using iron oxidizing bacteria. Various leaching parameters, such as Fe(II) concentration, pulp density, pH, temperature and particle size, were studied to evaluate their effects on the leaching efficiency as well as the kinetics of dissolution. The leaching percentages of Ni and V were higher than that of Mo. The leaching process followed a diffusion-controlled model, and the product layer was observed to be impervious due to formation of ammonium jarosite, (NH4)Fe3(SO4)2(OH)6. Apart from this, the lower leaching efficiency of Mo was due to a hydrophobic coating of elemental sulfur over the Mo matrix in the spent catalyst. The diffusivities of the attacking species for Ni, V and Mo were also calculated.
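    For kinetics of this kind, the standard check is the shrinking-core expression for product-layer diffusion control, 1 - 3(1-x)^(2/3) + 2(1-x) = kt, which should be linear in time. A minimal sketch with entirely hypothetical conversion data (the model form is the textbook one, not the study's fit):

```python
# Minimal sketch (assumed model form, hypothetical data): linearize leaching
# conversion x(t) with the shrinking-core diffusion-control function g(x).
import numpy as np

def diffusion_control_g(x: np.ndarray) -> np.ndarray:
    return 1.0 - 3.0 * (1.0 - x) ** (2.0 / 3.0) + 2.0 * (1.0 - x)

t = np.array([6.0, 12.0, 24.0, 48.0])     # times, hours (assumed)
x = np.array([0.18, 0.31, 0.49, 0.70])    # fraction leached (assumed)
k, _ = np.polyfit(t, diffusion_control_g(x), 1)  # slope ~ apparent rate constant
print(f"apparent rate constant k ~ {k:.2e} per hour")
```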

  31. Evaluating a scalable model for implementing electronic health records in resource-limited settings.

    PubMed

    Were, Martin C; Emenyonu, Nneka; Achieng, Marion; Shen, Changyu; Ssali, John; Masaba, John P M; Tierney, William M

    2010-01-01

    Current models for implementing electronic health records (EHRs) in resource-limited settings may not be scalable because they fail to address human-resource and cost constraints. This paper describes an implementation model which relies on shared responsibility between local sites and an external three-pronged support infrastructure consisting of: (1) a national technical expertise center, (2) an implementer's community, and (3) a developer's community. This model was used to implement an open-source EHR in three Ugandan HIV-clinics. Pre-post time-motion study at one site revealed that Primary Care Providers spent a third less time in direct and indirect care of patients (p<0.001) and 40% more time on personal activities (p=0.09) after EHRs implementation. Time spent by previously enrolled patients with non-clinician staff fell by half (p=0.004) and with pharmacy by 63% (p<0.001). Surveyed providers were highly satisfied with the EHRs and its support infrastructure. This model offers a viable approach for broadly implementing EHRs in resource-limited settings.

  32. Safeguards-by-Design: Guidance for Independent Spent Fuel Dry Storage Installations (ISFSI)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trond Bjornard; Philip C. Durst

    2012-05-01

    This document summarizes the requirements and best practices for implementing international nuclear safeguards at independent spent fuel storage installations (ISFSIs), also known as Away-from-Reactor (AFR) storage facilities. These installations may provide wet or dry storage of spent fuel, although the safeguards guidance herein focuses on dry storage facilities. In principle, the safeguards guidance applies to both wet and dry storage. The reason for focusing on dry independent spent fuel storage installations is that these are among the fastest-growing nuclear installations worldwide. Independent spent fuel storage installations are typically outside of the safeguards nuclear material balance area (MBA) of the reactor. They may be located on the reactor site, but are generally considered by the International Atomic Energy Agency (IAEA) and the State Regulator/SSAC to be a separate facility. The need for this guidance is becoming increasingly urgent as more and more nuclear power plants move their spent fuel from resident spent fuel ponds to independent spent fuel storage installations. The safeguards requirements and best practices described herein are also relevant to the design and construction of regional independent spent fuel storage installations that nuclear power plant operators are starting to consider in the absence of a national long-term geological spent fuel repository. The following document has been prepared in support of two of the three foundational pillars for implementing Safeguards-by-Design (SBD). These are: i) defining the relevant safeguards requirements, and ii) defining the best practices for meeting the requirements. This document was prepared with the design of the latest independent dry spent fuel storage installations in mind and was prepared specifically as an aid for designers of commercial nuclear facilities to help them understand the relevant international requirements that follow from a country's safeguards agreement with the IAEA. If these requirements are understood at the earliest stages of facility design, it will help eliminate the costly retrofits of facilities that have occurred in the past to accommodate nuclear safeguards, and will help the IAEA implement nuclear safeguards worldwide, especially in countries building their first nuclear facilities. It is also hoped that this guidance document will promote discussion between the IAEA, State Regulator/SSAC, Project Design Team, and Facility Owner/Operator at an early stage to ensure that new ISFSIs will be effectively and efficiently safeguarded. This is intended to be a living document, since the international nuclear safeguards requirements may be subject to revision over time. More importantly, the practices by which the requirements are met are continuously modernized by the IAEA and facility operators for greater efficiency and cost effectiveness. As these improvements are made, it is recommended that this guidance document be updated and revised accordingly.

  33. Reducing Actinide Production Using Inert Matrix Fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deinert, Mark

    2017-08-23

    The environmental and geopolitical problems that surround nuclear power stem largely from the long-lived transuranic isotopes of Am, Cm, Np and Pu that are contained in spent nuclear fuel. New methods for transmuting these elements into more benign forms are needed. Current research efforts focus largely on the development of fast burner reactors, because it has been shown that they could dramatically reduce the accumulation of transuranics. However, despite five decades of effort, fast reactors have yet to achieve industrial viability. A critical limitation to this, and other such strategies, is that they require a type of spent fuel reprocessing that can efficiently separate all of the transuranics from the fission products with which they are mixed. Unfortunately, the technology for doing this on an industrial scale is still in development. In this project, we explore a strategy for transmutation that can be deployed using existing, current generation reactors and reprocessing systems. We show that use of an inert matrix fuel to recycle transuranics in a conventional pressurized water reactor could reduce overall production of these materials by an amount that is similar to what is achievable using proposed fast reactor cycles. Furthermore, we show that these transuranic reductions can be achieved even if the fission products are carried into the inert matrix fuel along with the transuranics, bypassing the critical separations hurdle described above. The implications of these findings are significant, because they imply that inert matrix fuel could be made directly from the material streams produced by the commercially available PUREX process. Zirconium dioxide would be an ideal choice of inert matrix in this context because it is known to form a stable solid solution with both fission products and transuranics.

  34. Inert matrix fuel neutronic, thermal-hydraulic, and transient behavior in a light water reactor

    NASA Astrophysics Data System (ADS)

    Carmack, W. J.; Todosow, M.; Meyer, M. K.; Pasamehmetoglu, K. O.

    2006-06-01

    Currently, commercial power reactors in the United States operate on a once-through or open cycle, with the spent nuclear fuel eventually destined for long-term storage in a geologic repository. Since the fissile and transuranic (TRU) elements in the spent nuclear fuel present a proliferation risk, limit the repository capacity, and are the major contributors to the long-term toxicity and dose from the repository, methods and systems are needed to reduce the amount of TRU that will eventually require long-term storage. An option for reducing the amount, and modifying the isotopic composition, of TRU requiring geological disposal is 'burning' the TRU in commercial light water reactors (LWRs) and/or fast reactors. Fuel forms under consideration for TRU destruction in LWRs include mixed-oxide (MOX), advanced mixed-oxide, and inert matrix fuels. Fertile-free inert matrix fuel (IMF) has been proposed for use in many forms and studied by several researchers. IMF offers several advantages relative to MOX; principally, it provides a means for reducing the TRU in the fuel cycle by burning the fissile isotopes and transmuting the minor actinides while producing no new TRU elements from fertile isotopes. This paper presents and discusses the results of four-bundle neutronic, thermal-hydraulic, and transient analyses of proposed inert matrix materials in comparison with the results of similar analyses for reference UOX fuel bundles. The results of this work are to be used for screening purposes to identify the general feasibility of utilizing specific inert matrix fuel compositions in existing and future light water reactors. Compositions identified as feasible using the results of these analyses still require further detailed neutronic, thermal-hydraulic, and transient analysis coupled with rigorous experimental testing and qualification.

  35. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms

    NASA Astrophysics Data System (ADS)

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.

    2016-07-01

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.

  36. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms.

    PubMed

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R

    2016-07-07

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
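    The MPO/MPS language lends itself to a compact illustration. Below is a minimal sketch (a generic tensor-network operation, not the authors' DMRG code): applying an MPO to an MPS site by site yields another MPS whose bond dimensions are the products of the originals; the index conventions are assumptions.

```python
# Minimal sketch (generic, not the authors' code): apply an MPO to an MPS.
# MPS site tensors: (left bond, physical, right bond).
# MPO site tensors: (left bond, physical-out, physical-in, right bond).
import numpy as np

def apply_mpo(mpo: list, mps: list) -> list:
    out = []
    for W, M in zip(mpo, mps):
        T = np.einsum('lpqr,aqb->lapbr', W, M)  # contract the physical-in leg
        l, a, p, b, r = T.shape
        out.append(T.reshape(l * a, p, b * r))  # fuse the paired bond legs
    return out

# Toy check: a bond-dimension-1 identity MPO leaves a random 3-site MPS unchanged.
d = 2
I = np.eye(d).reshape(1, d, d, 1)
mps = [np.random.rand(1, d, 3), np.random.rand(3, d, 3), np.random.rand(3, d, 1)]
res = apply_mpo([I, I, I], mps)
assert all(np.allclose(t, m) for t, m in zip(res, mps))
```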

  17. Nd and Sm isotopic composition of spent nuclear fuels from three material test reactors

    DOE PAGES

    Sharp, Nicholas; Ticknor, Brian W.; Bronikowski, Michael; ...

    2016-11-17

    Rare earth elements such as neodymium and samarium are ideal for probing the neutron environment that spent nuclear fuels are exposed to in nuclear reactors. Their large number of stable isotopes can provide distinct isotopic signatures for differentiating the source material in nuclear forensic investigations. The rare-earth elements were isolated from the high-activity fuel matrix via ion exchange chromatography in a shielded cell. The individual elements were then separated using cation exchange chromatography. Finally, the neodymium and samarium aliquots were analyzed via MC–ICP–MS, yielding isotopic compositions with a precision of 0.01–0.3%.

  18. Nd and Sm isotopic composition of spent nuclear fuels from three material test reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharp, Nicholas; Ticknor, Brian W.; Bronikowski, Michael

    Rare earth elements such as neodymium and samarium are ideal for probing the neutron environment that spent nuclear fuels are exposed to in nuclear reactors. Their large number of stable isotopes can provide distinct isotopic signatures for differentiating the source material in nuclear forensic investigations. The rare-earth elements were isolated from the high-activity fuel matrix via ion exchange chromatography in a shielded cell. The individual elements were then separated using cation exchange chromatography. Finally, the neodymium and samarium aliquots were analyzed via MC–ICP–MS, yielding isotopic compositions with a precision of 0.01–0.3%.

  19. Some Methods for Evaluating Program Implementation.

    ERIC Educational Resources Information Center

    Hardy, Roy A.

    An approach to evaluating program implementation is described. This approach includes the development of a project description which includes a structure matrix, sampling from the structure matrix, and preparing an implementation evaluation plan. The implementation evaluation plan should include: (1) verification of implementation of planned…

  20. Report: EPA Has Not Fully Implemented a National Emergency Response Equipment Tracking System

    EPA Pesticide Factsheets

    Report #11-P-0616, September 13, 2011. Although EPA spent $2.8 million as of October 2010 to develop and implement an EMP emergency equipment tracking module, EPA has not fully implemented the module, and the module suffers from operational issues.

  1. Implementing Ten-Minute Tickers in Secondary Physical Education Classes

    ERIC Educational Resources Information Center

    Lynott, Francis J., III; Hushman, Glenn; Dixon, Jonette; McCarthy, Andrea

    2013-01-01

    In the late 1980s and early 1990s, the time spent in moderate-to-vigorous physical activity (MVPA) during physical education class time started to be measured and questioned (Bar-Or, 1987; Lacey & LaMaster, 1990; McGing, 1989; Simons-Morton, Taylor, Snider, & Huang, 1993). Researchers suggested that the amount of time students spent in…

  2. Post-Doctoral Fellowship for Merton S. Krause. Final Report.

    ERIC Educational Resources Information Center

    Jackson, Philip W.

    The final quarter of Krause's fellowship year was spent in completing his interviews with political socialization researchers in the eastern United States and his work on methodological problems. Krause also completed a long essay on the nature and implications of the "matrix perspective" for research planning, pursued his study of measurement…

  3. Impact of Computerized Provider Order Entry on Pharmacist Productivity

    PubMed Central

    Hatfield, Mark D.; Cox, Rodney; Mhatre, Shivani K.; Flowers, W. Perry

    2014-01-01

    Abstract Purpose: To examine the impact of computerized provider order entry (CPOE) implementation on average time spent on medication order entry and the number of order actions processed. Methods: An observational time and motion study was conducted from March 1 to March 17, 2011. Two similar community hospital pharmacies were compared: one without CPOE implementation and the other with CPOE implementation. Pharmacists in the central pharmacy department of both hospitals were observed in blocks of 1 hour, with 24 hours of observation in each facility. Time spent by pharmacists on distributive, administrative, clinical, and miscellaneous activities associated with order entry was recorded using time and motion instrument documentation. Information on medication order actions and order entry/verifications was obtained using the pharmacy network system. Results: The mean ± SD time spent by pharmacists per hour in the CPOE pharmacy was significantly less than the non-CPOE pharmacy for distributive activities (43.37 ± 7.75 vs 48.07 ± 8.61) and significantly greater than the non-CPOE pharmacy for administrative (8.58 ± 5.59 vs 5.72 ± 6.99) and clinical (7.38 ± 4.27 vs 4.22 ± 3.26) activities. The CPOE pharmacy was associated with a significantly higher number of order actions per hour (191.00 ± 82.52 vs 111.63 ± 25.66) and significantly less time spent (in minutes per hour) on order entry and order verification combined (28.30 ± 9.25 vs 36.56 ± 9.14) than the non-CPOE pharmacy. Conclusion: The implementation of CPOE enabled pharmacists to allocate more time to clinical and administrative functions and increased the number of order actions processed per hour, thus enhancing workflow efficiency and productivity of the pharmacy department. PMID:24958959

  4. Application of Compton-suppressed self-induced XRF to spent nuclear fuel measurement

    NASA Astrophysics Data System (ADS)

    Park, Se-Hwan; Jo, Kwang Ho; Lee, Seung Kyu; Seo, Hee; Lee, Chaehun; Won, Byung-Hee; Ahn, Seong-Kyu; Ku, Jeong-Hoe

    2017-11-01

    Self-induced X-ray fluorescence (XRF) is a technique by which the plutonium (Pu) content in spent nuclear fuel can be directly quantified. In the present work, this method successfully measured the plutonium/uranium (Pu/U) peak ratio of a pressurized water reactor (PWR) spent nuclear fuel at the Korea Atomic Energy Research Institute (KAERI) post-irradiation examination facility (PIEF). In order to reduce the Compton background in the low-energy X-ray region, a Compton suppression system was additionally implemented. With this system, the spectrum's background level was reduced by a factor of approximately 2. This work shows that Compton-suppressed self-induced XRF can be effectively applied to Pu accounting in spent nuclear fuel.

  5. Connecting Professional Practice and Technology at the Bedside

    PubMed Central

    Gomes, Melissa; Hash, Pamela; Orsolini, Liana; Watkins, Aimee; Mazzoccoli, Andrea

    2016-01-01

    The purpose of this research is to determine the effects of implementing an electronic health record on medical-surgical registered nurses' time spent in direct professional patient-centered nursing activities, attitudes and beliefs related to implementation, and changes in level of nursing engagement after deployment of the electronic health record. Patient-centered activities were categorized using Watson's Caritas Processes and the Relationship-Based Care Delivery System. Methods included use of an Attitudes and Beliefs Assessment Questionnaire, a Nursing Engagement Questionnaire, and Rapid Modeling Corporation's personal digital assistants for time and motion data collection. There was a significant difference in normative belief between nurses with less than 15 years' experience and nurses with more than 15 years' experience (t(21) = 2.7, P = .01). While nurses spent less time at the nurses' station, less time charting, and significantly more time in patients' rooms and in purposeful interactions, time spent in relationship-based caring behaviors actually decreased in most categories. Nurses' engagement scores did not significantly increase. These results serve to inform healthcare organizations about potential factors related to electronic health record deployment that create shifts in nursing time spent across care categories, and can be used to further explore patient-centered care practices. PMID:27496045

  6. Nuclear Forensics Attributing the Source of Spent Fuel Used in an RDD Event

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Mark Robert

    2005-05-01

    An RDD attack is a threat the U.S. needs to prepare against. If such an event occurs, the ability to quickly identify the source of the radiological material used in the RDD would aid investigators in identifying the perpetrators. Spent fuel is one of the most dangerous possible radiological sources for an RDD. In this work, a forensics methodology was developed and implemented to attribute spent fuel to a source reactor. The specific attributes determined are the spent fuel burnup, age from discharge, reactor type, and initial fuel enrichment. It is shown that, by analyzing the post-event material, these attributes can be determined with enough accuracy to be useful to investigators. The burnup can be found within 5% accuracy, enrichment within 2% accuracy, and age within 10% accuracy. Reactor type can be determined if specific nuclides are measured. The methodology developed was implemented in a code called NEMASYS. NEMASYS is easy to use, and its basic functions take a minimal amount of time to learn. It processes data within a few minutes and provides detailed information about the results and conclusions.

  7. Solving modal equations of motion with initial conditions using MSC/NASTRAN DMAP. Part 1: Implementing exact mode superposition

    NASA Technical Reports Server (NTRS)

    Abdallah, Ayman A.; Barnett, Alan R.; Ibrahim, Omar M.; Manella, Richard T.

    1993-01-01

    Within the MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) module TRD1, solving physical (coupled) or modal (uncoupled) transient equations of motion is performed using the Newmark-Beta or mode superposition algorithms, respectively. For equations of motion with initial conditions, only the Newmark-Beta integration routine has been available in MSC/NASTRAN solution sequences for solving physical systems and in custom DMAP sequences or alters for solving modal systems. In some cases, one difficulty with using the Newmark-Beta method is that the process of selecting suitable integration time steps for obtaining acceptable results is lengthy. In addition, when very small step sizes are required, a large amount of time can be spent integrating the equations of motion. For certain aerospace applications, a significant time savings can be realized when the equations of motion are solved using an exact integration routine instead of the Newmark-Beta numerical algorithm. In order to solve modal equations of motion with initial conditions and take advantage of efficiencies gained when using uncoupled solution algorithms (like that within TRD1), an exact mode superposition method using MSC/NASTRAN DMAP has been developed and successfully implemented as an enhancement to an existing coupled loads methodology at the NASA Lewis Research Center.

  8. Automated drug dispensing systems in the intensive care unit: a financial analysis.

    PubMed

    Chapuis, Claire; Bedouch, Pierrick; Detavernier, Maxime; Durand, Michel; Francony, Gilles; Lavagne, Pierre; Foroni, Luc; Albaladejo, Pierre; Allenet, Benoit; Payen, Jean-Francois

    2015-09-09

    To evaluate the economic impact of automated drug dispensing systems (ADS) in surgical intensive care units (ICUs), a financial analysis was conducted in three adult ICUs of one university hospital, where an ADS was implemented in each unit to replace the traditional floor stock system. Costs were estimated before and after implementation of the ADS on the basis of floor stock inventories, expired drugs, and time spent by nurses and pharmacy technicians on medication-related work activities. The financial analysis included operating cash flows, investment cash flows, global cash flow and net present value. After ADS implementation, nurses spent less time on medication-related activities, with an average of 14.7 hours saved per day across the 33 beds. Pharmacy technicians spent more time on floor-stock activities, with an average of 3.5 additional hours per day across the three ICUs. The cost of drug storage was reduced by €44,298 and the cost of expired drugs by €14,772 per year across the three ICUs. Five years after the initial investment, the global cash flow was €148,229 and the net present value of the project was positive at €510,404. The financial modeling of the ADS implementation in three ICUs showed a high return on investment for the hospital. Medication-related costs and nursing time dedicated to medications are reduced with ADS.

  9. The SBIRT program matrix: a conceptual framework for program implementation and evaluation.

    PubMed

    Del Boca, Frances K; McRee, Bonnie; Vendetti, Janice; Damon, Donna

    2017-02-01

    Screening, Brief Intervention and Referral to Treatment (SBIRT) is a comprehensive, integrated, public health approach to the delivery of services to those at risk for the adverse consequences of alcohol and other drug use, and for those with probable substance use disorders. Research on successful SBIRT implementation has lagged behind studies of efficacy and effectiveness. This paper (1) outlines a conceptual framework, the SBIRT Program Matrix, to guide implementation research and program evaluation and (2) specifies potential implementation outcomes. Overview and narrative description of the SBIRT Program Matrix. The SBIRT Program Matrix has five components, each of which includes multiple elements: SBIRT services; performance sites; provider attributes; patient/client populations; and management structure and activities. Implementation outcomes include program adoption, acceptability, appropriateness, feasibility, fidelity, costs, penetration, sustainability, service provision and grant compliance. The Screening, Brief Intervention and Referral to Treatment Program Matrix provides a template for identifying, classifying and organizing the naturally occurring commonalities and variations within and across SBIRT programs, and for investigating which variables are associated with implementation success and, ultimately, with treatment outcomes and other impacts. © 2017 Society for the Study of Addiction.

  10. Next Generation Safeguards Initiative research to determine the Pu mass in spent fuel assemblies: Purpose, approach, constraints, implementation, and calibration

    NASA Astrophysics Data System (ADS)

    Tobin, S. J.; Menlove, H. O.; Swinhoe, M. T.; Schear, M. A.

    2011-10-01

    The Next Generation Safeguards Initiative (NGSI) of the U.S. Department of Energy has funded a multi-lab/multi-university collaboration to quantify the plutonium mass in spent nuclear fuel assemblies and to detect the diversion of pins from them. The goal of this research effort is to quantify the capability of various non-destructive assay (NDA) technologies as well as to train a future generation of safeguards practitioners. This research is "technology driven" in the sense that we will quantify the capabilities of a wide range of safeguards technologies of interest to regulators and policy makers; a key benefit to this approach is that the techniques are being tested in a unified manner. When the results of the Monte Carlo modeling are evaluated and integrated, practical constraints are part of defining the potential context in which a given technology might be applied. This paper organizes the commercial spent fuel safeguard needs into four facility types in order to identify any constraints on the NDA system design. These four facility types are the following: future reprocessing plants, current reprocessing plants, once-through spent fuel repositories, and any other sites that store individual spent fuel assemblies (reactor sites are the most common facility type in this category). Dry storage is not of interest since individual assemblies are not accessible. This paper will overview the purpose and approach of the NGSI spent fuel effort and describe the constraints inherent in commercial fuel facilities. It will conclude by discussing implementation and calibration of measurement systems. This report will also provide some motivation for considering a couple of other safeguards concepts (base measurement and fingerprinting) that might meet the safeguards need but not require the determination of plutonium mass.

  11. First Industrial Tests of a Drum Monitor Matrix Correction for the Fissile Mass Measurement in Large Volume Historic Metallic Residues with the Differential Die-away Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antoni, R.; Passard, C.; Perot, B.

    2015-07-01

    The fissile mass in radioactive waste drums filled with compacted metallic residues (spent fuel hulls and nozzles) produced at the AREVA La Hague reprocessing plant is measured by neutron interrogation with the Differential Die-away measurement Technique (DDT). In the coming years, old hulls and nozzles mixed with ion-exchange resins will be measured. The ion-exchange resins increase neutron moderation in the matrix compared to the waste measured in the current process. In this context, the Nuclear Measurement Laboratory (NML) of CEA Cadarache has studied a matrix-effect correction method based on a drum monitor (a ³He proportional counter inside the measurement cavity). A previous study performed with the NML R&D measurement cell PROMETHEE 6 showed the feasibility of the method and the capability of MCNP simulations to correctly reproduce experimental data and to assess the performance of the proposed correction. The next step of the study focused on assessing the performance of the method on the industrial station using numerical simulation. A correlation between the prompt calibration coefficient of the ²³⁹Pu signal and the drum monitor signal was established using the MCNPX computer code and a fractional factorial experimental design composed of matrix parameters representative of the variation range of historical waste. Calculations showed that the method allows the fissile mass to be assayed with an uncertainty within a factor of 2, whereas the uncorrected matrix effect ranges over two decades. In this paper, we present and discuss the first experimental tests on the industrial ACC measurement system. A calculation-versus-experiment benchmark was achieved by performing dedicated calibration measurements with a representative drum and ²³⁵U samples. The preliminary comparison between calculation and experiment shows satisfactory agreement for the drum monitor. The final objective of this work is to confirm the reliability of the modeling approach and the industrial feasibility of the method, which will be implemented on the industrial station for the measurement of historical wastes. (authors)

  12. Implementation of a Matrix Organizational Structure: A Case Study.

    ERIC Educational Resources Information Center

    Whorton, David M.

    The implementation of a matrix structure as an alternative to the traditional collegial/bureaucratic form at a college of education in a medium-size state university is described. Matrix organizational structures are differentiated from hierarchical bureaucratic structures by dividing the organization's tasks into functional units across which an…

  13. Development, implementation, and test results on integrated optics switching matrix

    NASA Technical Reports Server (NTRS)

    Rutz, E.

    1982-01-01

    A small integrated optics switching matrix was developed, implemented, and tested, and indicates high performance. The matrix serves as a model for the design of larger switching matrices. The larger integrated optics switching matrix should form the integral part of a switching center with high data-rate throughput of up to 300 megabits per second. The switching matrix technique can accomplish the design goals of low crosstalk and low distortion. About 50 illustrations help explain and depict the many phases of the integrated optics switching matrix. Many equations used to explain and calculate the experimental data are also included.

  14. An efficient matrix product operator representation of the quantum chemical Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, Sebastian, E-mail: sebastian.keller@phys.chem.ethz.ch; Reiher, Markus, E-mail: markus.reiher@phys.chem.ethz.ch; Dolfi, Michele, E-mail: dolfim@phys.ethz.ch

    We describe how to efficiently construct the quantum chemical Hamiltonian operator in matrix product form. We present its implementation as a density matrix renormalization group (DMRG) algorithm for quantum chemical applications. Existing implementations of DMRG for quantum chemistry are based on the traditional formulation of the method, which was developed from the point of view of Hilbert space decimation and attained higher performance compared to straightforward implementations of matrix product based DMRG. The latter variationally optimizes a class of ansatz states known as matrix product states, where operators are correspondingly represented as matrix product operators (MPOs). The MPO construction scheme presented here eliminates the previous performance disadvantages while retaining the additional flexibility provided by a matrix product approach; for example, the specification of expectation values becomes an input parameter. In this way, MPOs for different symmetries — abelian and non-abelian — and different relativistic and non-relativistic models may be solved by an otherwise unmodified program.

  15. Development of Techniques for Spent Fuel Assay – Differential Dieaway Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swinhoe, Martyn Thomas; Goodsell, Alison; Ianakiev, Kiril Dimitrov

    This report summarizes the work done under a DNDO R&D funded project on the development of the differential dieaway method to measure plutonium in spent fuel. Large amounts of plutonium are contained in spent fuel assemblies, and currently there is no way to make a quantitative non-destructive assay. This has led NA24, under the Next Generation Safeguards Initiative (NGSI), to establish a multi-year program to investigate, develop and implement measurement techniques for spent fuel. The techniques being experimentally tested by the existing NGSI project do not include any pulsed neutron active techniques. The present work covers the active neutron differential dieaway technique and has advanced the state of knowledge of this technique as well as produced a design for a practical active neutron interrogation instrument for spent fuel. Monte Carlo results from the NGSI effort show that much higher accuracy (1-2%) for the Pu content in spent fuel assemblies can be obtained with active neutron interrogation techniques than with passive techniques, and this would allow their use for nuclear material accountancy independently of any information from the operator. The main purpose of this work was to develop an active neutron interrogation technique for spent nuclear fuel.

  16. Sulfuric acid baking and leaching of spent Co-Mo/Al2O3 catalyst.

    PubMed

    Kim, Hong-In; Park, Kyung-Ho; Mishra, Devabrata

    2009-07-30

    Dissolution of metals from a pre-oxidized refinery plant spent Co-Mo/Al(2)O(3) catalyst has been attempted through low-temperature (200-450 degrees C) sulfuric acid baking followed by a mild leaching process. Direct sulfuric acid leaching of the same sample resulted in poor Al and Mo recoveries, whereas leaching after sulfuric acid baking significantly improved the recoveries of these two metals. The pre-oxidized spent catalyst, obtained from a Korean refinery plant, was found to contain 40% Al, 9.92% Mo, 2.28% Co, 2.5% C and trace amounts of other elements such as Fe, Ni, S and P. XRD results indicated the host matrix to be poorly crystalline gamma-Al(2)O(3). The effect of various baking parameters such as catalyst-to-acid ratio, baking temperature and baking time on the percentage dissolution of metals has been studied. It was observed that metal dissolution increases with baking temperature up to 300 degrees C, then decreases with further increase in the baking temperature. Under optimum baking conditions, more than 90% of the Co and Mo and 93% of the Al could be dissolved from the spent catalyst with the following leaching conditions: H(2)SO(4)=2% (v/v), temperature=95 degrees C, time=60 min and pulp density=5%.

  17. Assessing the Role of Online Technologies in Project-Based Learning

    ERIC Educational Resources Information Center

    Ravitz, Jason; Blazevski, Juliane

    2014-01-01

    This study examines the relationships between teacher-reported use of online resources, and preparedness, implementation challenges, and time spent implementing project- or problem-based learning, or approaches that are similar to what we call "PBL" in general. Variables were measured using self-reports from those who teach in reform…

  18. Reciprocal Peer Coaching: A Critical Contributor to Implementing Individual Leadership Plans

    ERIC Educational Resources Information Center

    Goldman, Ellen; Wesner, Marilyn; Karnchanomai, Ornpawee

    2013-01-01

    Billions of dollars are spent annually on programs to develop organizational leaders, yet the effectiveness of these programs is poorly understood. Scholars advise that value is enhanced by the development of individual leadership plans at program completion, followed by implementation experience with subsequent coaching and reflection. The…

  19. A program for thai rubber tappers to improve the cost of occupational health and safety.

    PubMed

    Arphorn, Sara; Chaonasuan, Porntip; Pruktharathikul, Vichai; Singhakajen, Vajira; Chaikittiporn, Chalermchai

    2010-01-01

    The purposes of this research were to determine the costs of occupational health and safety and of work-related health problems, accidents, injuries and illnesses among rubber tappers, by implementing a program in which rubber tappers were provided training on self-care in order to reduce and prevent work-related accidents, injuries and illnesses. Data on costs for healthcare and for the prevention and treatment of work-related accidents, injuries and illnesses were collected by interview using a questionnaire. The findings revealed no relationship between the amount spent on healthcare and prevention of work-related accidents, injuries and illnesses and the amount spent on their treatment. The proportion of injured subjects after the program implementation was significantly less than that before the program implementation (p<0.001). The level of pain after the program implementation was significantly less than that before (p<0.05). The treatment costs incurred after the program implementation were significantly less than those incurred before (p<0.001). It was demonstrated that this program raised the health awareness of rubber tappers and strongly empowered leadership in health promotion for the community.

  20. Chemical Reactivity Testing for the National Spent Nuclear Fuel Program. Quality Assurance Project Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newsom, H.C.

    This quality assurance project plan (QAPjP) summarizes requirements used by the Lockheed Martin Energy Systems, Incorporated (LMES) Development Division at Y-12 for conducting chemical reactivity testing of Department of Energy (DOE) owned spent nuclear fuel, sponsored by the National Spent Nuclear Fuel Program (NSNFP). The requirements are based on the NSNFP Statement of Work PRO-007 (Statement of Work for Laboratory Determination of Uranium Hydride Oxidation Reaction Kinetics). This QAPjP will, for the most part, utilize the quality assurance program at Y-12, QA-101PD, revision 1, and existing implementing procedures in meeting the NSNFP Statement of Work PRO-007 requirements; exceptions will be noted.

  1. FPGA-based coprocessor for matrix algorithms implementation

    NASA Astrophysics Data System (ADS)

    Amira, Abbes; Bensaali, Faycal

    2003-03-01

    Matrix algorithms are important in many types of applications, including image and signal processing. These areas require enormous computing power. A close examination of the algorithms used in these and related applications reveals that many of the fundamental actions involve matrix operations such as matrix multiplication, which has complexity O(N^3) on a sequential computer and O(N^3/p) on a parallel system with p processors. This paper presents an investigation into the design and implementation of different matrix algorithms, such as matrix operations, matrix transforms and matrix decompositions, using an FPGA-based environment. Solutions for the problem of processing large matrices have been proposed. The proposed system architectures are scalable and modular, and require less area and time complexity, with reduced latency, when compared with existing structures.
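
    To make the complexity claim concrete, here is a minimal sketch (in Python rather than on an FPGA, and not the paper's implementation) of the O(N^3) sequential kernel that such coprocessors accelerate:

    ```python
    import numpy as np

    def matmul_naive(A: np.ndarray, B: np.ndarray) -> np.ndarray:
        """Schoolbook matrix multiplication: three nested loops, O(N^3) work."""
        n, m = A.shape
        m2, p = B.shape
        assert m == m2, "inner dimensions must match"
        C = np.zeros((n, p))
        for i in range(n):
            for j in range(p):
                s = 0.0
                for k in range(m):
                    s += A[i, k] * B[k, j]
                C[i, j] = s
        return C
    ```

    A mesh of p processing elements ideally divides this work down to O(N^3/p), which is the scaling regime the paper targets on reconfigurable hardware.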

  2. Numerical implementation of the S-matrix algorithm for modeling of relief diffraction gratings

    NASA Astrophysics Data System (ADS)

    Yaremchuk, Iryna; Tamulevičius, Tomas; Fitio, Volodymyr; Gražulevičiūte, Ieva; Bobitski, Yaroslav; Tamulevičius, Sigitas

    2013-11-01

    A new numerical implementation is developed to calculate the diffraction efficiency of relief diffraction gratings. In the new formulation, vectors containing the expansion coefficients of electric and magnetic fields on boundaries of the grating layer are expressed by additional constants. An S-matrix algorithm has been systematically described in detail and adapted to a simple matrix form. This implementation is suitable for the study of optical characteristics of periodic structures by using modern object-oriented programming languages and different standard mathematical software. The modeling program has been developed on the basis of this numerical implementation and tested by comparison with other commercially available programs and experimental data. Numerical examples are given to show the usefulness of the new implementation.
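
    The core of any S-matrix algorithm is the numerically stable composition of per-layer scattering matrices. The paper's exact formulation is not reproduced here; the sketch below shows the widely used Redheffer star product for combining two scattering matrices under the common block convention S = [[S11, S12], [S21, S22]] (conventions vary between authors, so signs and block ordering should be checked against your own formulation):

    ```python
    import numpy as np

    def redheffer_star(SA, SB):
        """Combine the scattering matrices SA and SB of two stacked layers.

        Each argument is a tuple (S11, S12, S21, S22) of equal-size square
        blocks; the result is the scattering matrix of the combined stack.
        """
        S11A, S12A, S21A, S22A = SA
        S11B, S12B, S21B, S22B = SB
        n = S11A.shape[0]
        I = np.eye(n)
        FA = np.linalg.solve(I - S11B @ S22A, I)  # (I - S11B S22A)^-1
        FB = np.linalg.solve(I - S22A @ S11B, I)  # (I - S22A S11B)^-1
        S11 = S11A + S12A @ FA @ S11B @ S21A
        S12 = S12A @ FA @ S12B
        S21 = S21B @ FB @ S21A
        S22 = S22B + S21B @ FB @ S22A @ S12B
        return (S11, S12, S21, S22)
    ```

    Composing layer matrices with this product, rather than multiplying transfer matrices, is what keeps the recursion stable for deep gratings with evanescent orders.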

  3. Value management program: performance, quantification, and presentation of imaging value-added actions.

    PubMed

    Patel, Samir

    2015-03-01

    Health care is in a state of transition, shifting from volume-based success to value-based success. Hospital executives and referring physicians often do not understand the total value a radiology group provides. A template for easy, cost-effective implementation in clinical practice, by which most radiology groups could demonstrate the value they provide to their clients (patients, physicians, health care executives), has not been well described. A value management program was developed to document all of the value-added activities performed by on-site radiologists, quantify them in terms of time spent on each activity (investment), and present the benefits to internal and external stakeholders (outcomes). The radiology value-added matrix is the platform from which value-added activities are categorized and synthesized into a template for defining investments and outcomes. The value management program was first implemented systemwide in 2013. Across all serviced locations, 9,931.75 hours were invested. An annual executive summary report template demonstrating outcomes is given to clients. The mean and median individual value-added hours per radiologist were 134.52 and 113.33, respectively. If this program were extrapolated to the entire field of radiology (approximately 30,000 radiologists), it would have resulted in 10,641,161 uncompensated value-added hours documented in 2013, with an estimated economic value of $2.21 billion. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  4. Recovery Act: Increasing the Public's Understanding of What Funds Are Being Spent on and What Outcomes Are Expected. Report to the Republican Leader, U.S. Senate. GAO-10-581

    ERIC Educational Resources Information Center

    US Government Accountability Office, 2010

    2010-01-01

    A hallmark of efforts to implement the $862 billion American Recovery and Reinvestment Act of 2009 (Recovery Act) is to be transparent and accountable about what the money is being spent on and what is being achieved. To help achieve these goals, recipients are to report every 3 months on their award activities and expected outcomes, among other…

  5. Comparison Between Navy and Army Implementation of SIOH and Recommendations for Navy Implementation

    DTIC Science & Technology

    2015-12-01

    … Figure 3. NAVFAC Matrix Organization and Relationship … Figure 4. Matrix Roles … DFAS: Defense Finance and Accounting Service; DOD: Department of Defense; DOH: Departmental Overhead; EFA: Engineering Field Activity; EFD: … Naval Facilities Engineering Command (NAVFAC). (2015). Concept of operations. Washington, DC: Author. … NAVFAC is organized both as a tiered organization and as a matrix

  6. Utilization of TRISO Fuel with LWR Spent Fuel in Fusion-Fission Hybrid Reactor System

    NASA Astrophysics Data System (ADS)

    Acır, Adem; Altunok, Taner

    2010-10-01

    High-temperature reactors (HTRs) use high-performance particulate TRISO fuel with ceramic multi-layer coatings because of its high burn-up capability and very good neutronic performance. For the same reasons, TRISO fuel is considered here in a D-T fusion-driven hybrid reactor. In this study, TRISO fuel particles are embedded in a body-centered cubic (BCC) arrangement in a graphite matrix with a volume fraction of 68%. The neutronic effect of TRISO-coated LWR spent fuel in the fuel rods of the hybrid reactor on fuel performance has been investigated for Flibe, Flinabe and Li20Sn80 coolants. The reactor operation time with the different first-wall neutron loads is 24 months. Neutron transport calculations are performed using the XSDRNPM/SCALE 5 codes with a 238-group cross-section library. The effects on tritium breeding ratio (TBR), energy multiplication (M), fissile fuel breeding and average burn-up values are comparatively investigated. It is shown that high burn-up can be achieved with TRISO fuel in the hybrid reactor.

  7. Enzymatic hydrolysis of beer brewers' spent grain and the influence of pretreatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beldman, G.; Hennekam, J.; Voragen, A.G.J.

    1987-01-01

    The enzymatic saccharification of plant material has been shown to be of interest in various fields, such as the production of fruit juices and the utilization of biomass. A combination of cellulase, pectinase, and hemicellulases is usually used because of the chemical composition of the matrix of plant cell walls. For apples, beet pulp, and potato fiber, almost complete hydrolysis of polysaccharides is obtained by combining cellulase and pectinase. For nonparenchymatic tissue, the situation is somewhat different: pectin is a minor component and the hemicellulose content is much higher. Enzyme action is restricted by the lignin barrier and by the high crystallinity of cellulose in this material. For such materials, mechanical, thermal, or chemical pretreatments are necessary to achieve efficient hydrolysis. This communication describes various enzymatic treatments and chemical and physical pretreatments, using brewers' spent grain as substrate. Spent grain is the residue of malt and grain which remains in the mash-kettle after the liquefied and saccharified starch has been removed by filtration. (15 refs.)

  8. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    PubMed

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of hybrid GPU/central processing unit (CPU) and full GPU implementations of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and of traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
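
    As a concrete illustration of the recursion being accelerated, here is a minimal dense NumPy sketch of the SP2 iteration (serial CPU code, not the authors' CUDA/CUBLAS implementation); Gershgorin circles are one common way to estimate the spectral bounds:

    ```python
    import numpy as np

    def sp2_density_matrix(H, n_occ, tol=1e-10, max_iter=100):
        """Build the density matrix from a symmetric Hamiltonian H via SP2.

        Repeatedly applies X -> X^2 or X -> 2X - X^2, chosen so that
        trace(X) is driven toward the occupation number n_occ; the
        eigenvalues are purified toward 0 (virtual) or 1 (occupied).
        """
        n = H.shape[0]
        # Gershgorin estimates of the spectral bounds of H
        r = np.sum(np.abs(H), axis=1) - np.abs(np.diag(H))
        e_min = np.min(np.diag(H) - r)
        e_max = np.max(np.diag(H) + r)
        # Map the spectrum of H linearly (and reversed) into [0, 1]
        X = (e_max * np.eye(n) - H) / (e_max - e_min)
        for _ in range(max_iter):
            X2 = X @ X  # the generalized matrix-matrix multiply dominating the cost
            if np.trace(X) > n_occ:
                X_next = X2              # shrinks eigenvalues toward 0
            else:
                X_next = 2.0 * X - X2    # pushes eigenvalues toward 1
            if np.linalg.norm(X_next - X) < tol:  # converged to an idempotent X
                return X_next
            X = X_next
        return X
    ```

    Since every step is a matrix-matrix product, the whole iteration maps directly onto DGEMM/SGEMM calls, which is why it ports so cleanly to GPUs.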

  9. Utilization of chemically treated municipal solid waste (spent coffee bean powder) as reinforcement in cellulose matrix for packaging applications.

    PubMed

    Thiagamani, Senthil Muthu Kumar; Nagarajan, Rajini; Jawaid, Mohammad; Anumakonda, Varadarajulu; Siengchin, Suchart

    2017-11-01

    As the annual production of solid waste in the form of spent coffee bean powder (SCBP) is over 6 million tons, its utilization in the generation of green energy, in waste water treatment and as a filler in biocomposites is desirable. The objective of this article is to analyze the possibilities for valorizing spent coffee bean powder as a filler in a cellulose matrix. The cellulose matrix was dissolved in a relatively safe aqueous solution mixture (8% LiOH and 15% urea) precooled to -12.5°C. SCBP was added to the cellulose solution at 5-25 wt%, and composite films were prepared by a regeneration method using ethyl alcohol as a coagulant. Some SCBP was treated with aqueous 5% NaOH, and composite films were also prepared using the alkali-treated SCBP as filler. The composite films were uniform and brown in color. The cellulose/SCBP films, without and with alkali-treated SCBP, were characterized by FTIR, XRD, optical and polarized optical microscopy, thermogravimetric analysis (TGA) and tensile tests. The maximum tensile strength of the composite films with alkali-treated SCBP varied between 106 and 149 MPa and increased with SCBP content when compared to the composites with untreated SCBP. The thermal stability of the composites was higher at elevated temperatures when alkali-treated SCBP was used. Based on the improved tensile properties and photo resistivity, the cellulose/SCBP composite films with alkali-treated SCBP may be considered for packaging and wrapping of flowers and vegetables. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Determination of mercury distribution inside spent compact fluorescent lamps by atomic absorption spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rey-Raap, Natalia; Gallardo, Antonio, E-mail: gallardo@emc.uji.es

    Highlights: • New treatments for CFLs are required considering the aim of Directive 202/96/CE. • It is shown that most of the mercury introduced into a CFL is in the phosphor powder. • Experimental conditions for microwave-assisted sample digestion followed by AAS measurements are described. • By washing the glass it is possible to reduce the concentration below legal limits. - Abstract: In this study, spent compact fluorescent lamps were characterized to determine the distribution of mercury. The procedure used in this research allowed mercury to be extracted from the vapor phase, the phosphor powder, and the glass matrix. Mercury concentration in the three phases was determined by the method known as cold vapor atomic absorption spectrometry. Median values obtained in the study showed that a compact fluorescent lamp contained 24.52 ± 0.4 ppb of mercury in the vapor phase, 204.16 ± 8.9 ppb of mercury in the phosphor powder, and 18.74 ± 0.5 ppb of mercury in the glass matrix. There are differences in mercury concentration between lamps, since the year of manufacture and the hours of operation affect both mercury content and its distribution. 85.76% of the mercury introduced into a compact fluorescent lamp becomes a component of the phosphor powder, while more than 13.66% is diffused through the glass matrix. By washing and eliminating all phosphor powder attached to the glass surface, it is possible to classify the glass as a non-hazardous waste.

  11. Exploiting Multiple Levels of Parallelism in Sparse Matrix-Matrix Multiplication

    DOE PAGES

    Azad, Ariful; Ballard, Grey; Buluc, Aydin; ...

    2016-11-08

    Sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. The scaling of existing parallel implementations of SpGEMM is heavily bound by communication. Even though 3D (or 2.5D) algorithms have been proposed and theoretically analyzed in the flat MPI model on Erdös-Rényi matrices, those algorithms had not been implemented in practice and their complexities had not been analyzed for the general case. In this work, we present the first implementation of the 3D SpGEMM formulation that exploits multiple (intranode and internode) levels of parallelism, achieving significant speedups over the state-of-the-art publicly available codes at all levels of concurrency. We extensively evaluate our implementation and identify bottlenecks that should be subject to further research.
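
    For readers unfamiliar with the primitive, a minimal single-node example of SpGEMM using SciPy's compressed sparse row (CSR) format; this illustrates the operation being parallelized, not the authors' 3D distributed implementation:

    ```python
    import scipy.sparse as sp

    # Two random sparse matrices in CSR format
    A = sp.random(10000, 10000, density=1e-3, format="csr", random_state=0)
    B = sp.random(10000, 10000, density=1e-3, format="csr", random_state=1)

    C = A @ B  # sparse-times-sparse product; the output is itself sparse
    print(C.nnz, "nonzeros in the product")
    ```

    The difficulty the paper addresses is that, when A, B and C are distributed across thousands of nodes, moving the sparse operands dominates the runtime, which motivates the communication-avoiding 3D decomposition.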

  12. The effect of implementing an automated oxygen control on oxygen saturation in preterm infants.

    PubMed

    Van Zanten, H A; Kuypers, K L A M; Stenson, B J; Bachman, T E; Pauws, S C; Te Pas, A B

    2017-09-01

    To evaluate the effect of implementing automated oxygen control as routine care in maintaining oxygen saturation (SpO2) within target range in preterm infants. Infants <30 weeks gestation in Leiden University Medical Centre before and after the implementation of automated oxygen control were compared. The percentage of time spent with SpO2 within and outside the target range (90-95%) was calculated. SpO2 values were collected every minute and included for analysis when infants received extra oxygen. In a period of 9 months, 42 preterm infants (21 manual, 21 automated) were studied. In the automated period, the median (IQR) time spent with SpO2 within target range increased (manual vs automated: 48.4 (41.5-56.4)% vs 61.9 (48.5-72.3)%; p<0.01) and time SpO2 >95% decreased (41.9 (30.6-49.4)% vs 19.3 (11.5-24.5)%; p<0.001). The time SpO2 <90% increased (8.6 (7.2-11.7)% vs 15.1 (14.0-21.1)%; p<0.0001), while SpO2 <80% was similar (1.1 (0.4-1.7)% vs 0.9 (0.5-2.1)%; ns). During oxygen therapy, preterm infants spent more time within the SpO2 target range after implementation of automated oxygen control, with a significant reduction in hyperoxaemia, but not hypoxaemia. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
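
    The headline outcome is a simple per-infant summary statistic. A minimal sketch of how such percentages could be computed from minute-by-minute SpO2 samples (illustrative only; this is not the study's analysis code):

    ```python
    import numpy as np

    def spo2_summary(spo2, lo=90.0, hi=95.0):
        """Percent of minute-by-minute SpO2 samples within, above and below [lo, hi]."""
        s = np.asarray(spo2, dtype=float)
        within = 100.0 * np.mean((s >= lo) & (s <= hi))
        above = 100.0 * np.mean(s > hi)
        below = 100.0 * np.mean(s < lo)
        return within, above, below

    # Toy data: 92 and 94 and 91 in range, 97 and 96 above, 89 below
    print(spo2_summary([92, 97, 94, 89, 91, 96]))  # -> (50.0, 33.33..., 16.66...)
    ```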

  13. Project Energise: Using participatory approaches and real time computer prompts to reduce occupational sitting and increase work time physical activity in office workers.

    PubMed

    Gilson, Nicholas D; Ng, Norman; Pavey, Toby G; Ryde, Gemma C; Straker, Leon; Brown, Wendy J

    2016-11-01

    This efficacy study assessed the added impact real time computer prompts had on a participatory approach to reduce occupational sedentary exposure and increase physical activity. Quasi-experimental. 57 Australian office workers (mean [SD]: age=47 [11] years; BMI=28 [5] kg/m2; 46 men) generated a menu of 20 occupational 'sit less and move more' strategies through participatory workshops, and were then tasked with implementing strategies for five months (July-November 2014). During implementation, a sub-sample of workers (n=24) used a chair sensor/software package (Sitting Pad) that gave real time prompts to interrupt desk sitting. Baseline and intervention sedentary behaviour and physical activity (GENEActiv accelerometer; mean work time percentages), and minutes spent sitting at desks (Sitting Pad; mean total time and longest bout) were compared between non-prompt and prompt workers using a two-way ANOVA. Workers spent close to three quarters of their work time sedentary, mostly sitting at desks (mean [SD]: total desk sitting time=371 [71] min/day; longest bout spent desk sitting=104 [43] min/day). Intervention effects were four times greater in workers who used real time computer prompts (8% decrease in work time sedentary behaviour and increase in light intensity physical activity; p<0.01). Respective mean differences between baseline and intervention total time spent sitting at desks, and the longest bout spent desk sitting, were 23 and 32 min/day lower in prompt than in non-prompt workers (p<0.01). In this sample of office workers, real time computer prompts facilitated the impact of a participatory approach on reductions in occupational sedentary exposure, and increases in physical activity. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  14. Matrix multiplication on the Intel Touchstone Delta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huss-Lederman, S.; Jacobson, E.M.; Tsao, A.

    1993-12-31

    Matrix multiplication is a key primitive in block matrix algorithms such as those found in LAPACK. We present results from our study of matrix multiplication algorithms on the Intel Touchstone Delta, a distributed memory message-passing architecture with a two-dimensional mesh topology. We obtain an implementation that uses communication primitives highly suited to the Delta and exploits the single node assembly-coded matrix multiplication. Our algorithm is completely general, able to deal with arbitrary mesh aspect ratios and matrix dimensions, and has achieved parallel efficiency of 86% with overall peak performance in excess of 8 Gflops on 256 nodes for an 8800 × 8800 matrix. We describe our algorithm design and implementation, and present performance results that demonstrate scalability and robust behavior over varying mesh topologies.
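
    The block-distributed structure that such mesh algorithms exploit can be sketched serially: each (i, j) block below would live on one node of the 2D mesh, and the outer k-loop corresponds to the broadcast/communication phases. This is an illustration of the decomposition, not the Delta code:

    ```python
    import numpy as np

    def blocked_matmul(A, B, bs):
        """Multiply square matrices by bs x bs blocks, SUMMA-style ordering."""
        n = A.shape[0]
        assert n % bs == 0, "block size must divide the matrix dimension"
        C = np.zeros((n, n))
        for k in range(0, n, bs):          # one "broadcast step" per block row/column
            for i in range(0, n, bs):      # node owning block row i
                for j in range(0, n, bs):  # node owning block column j
                    C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
        return C

    A = np.random.rand(8, 8)
    B = np.random.rand(8, 8)
    assert np.allclose(blocked_matmul(A, B, 4), A @ B)
    ```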

  15. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
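
    The expensive operation being compressed can be illustrated in one dimension. Below is a minimal sketch of direct space-varying convolution, where every output sample has its own impulse response; the paper's matrix source coding instead approximately factors this dense operator into sparse transforms (the kernel model here is a made-up example, not the paper's stray-light kernel):

    ```python
    import numpy as np

    def space_varying_conv(x, kernel_at):
        """Direct space-varying convolution in 1-D.

        kernel_at(i) returns the (odd-length) impulse response centered on
        output sample i; the cost is O(N * K) for kernels of length K.
        """
        n = len(x)
        y = np.zeros(n)
        for i in range(n):
            h = kernel_at(i)
            half = len(h) // 2
            for t, h_t in enumerate(h):
                j = i + t - half
                if 0 <= j < n:
                    y[i] += h_t * x[j]
        return y

    # Toy kernels that slowly widen across the field (slow spatial variation)
    def kernel_at(i):
        width = 1 + (i // 32)          # widens every 32 samples
        h = np.ones(2 * width + 1)
        return h / h.sum()             # normalized blur

    y = space_varying_conv(np.random.rand(128), kernel_at)
    ```

    Because the kernel changes with position, the FFT shortcut for space-invariant convolution does not apply, which is why compressing the operator itself pays off.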

  16. Modeling of Progressive Damage in Fiber-Reinforced Ceramic Matrix Composites

    DTIC Science & Technology

    1996-03-01

    … ALAN V. LAIR, Committee Member, Professor and Department Head, Department of Mathematics and Statistics … DAVID D. ROBERTSON, Committee Member … other committee members, Prof. Peter Torvik, Prof. Alan Lair, and, representing the dean, Prof. Kirk Mathews, for their support and time spent in … Sorensen B.F. and Talreja R. "Effects of Nonuniformity of Fiber Distribution on …," Journal of Composites Technology and Research, in press (1996).

  17. Variational optimization algorithms for uniform matrix product states

    NASA Astrophysics Data System (ADS)

    Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.

    2018-01-01

    We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.
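
    For orientation, a uniform matrix product state in the thermodynamic limit is specified by a single site tensor A repeated on every site of the infinite lattice; schematically (standard notation, a sketch rather than the paper's own equations),

    $$|\Psi(A)\rangle = \sum_{\{s\}} \cdots A^{s_{n-1}} A^{s_n} A^{s_{n+1}} \cdots \, |\ldots s_{n-1}\, s_n\, s_{n+1} \ldots\rangle,$$

    where each A^{s} is a D × D matrix. VUMPS optimizes this single tensor directly in the translation-invariant form, rather than growing and sweeping a finite chain as in conventional DMRG.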

  18. Comparison of two matrix data structures for advanced CSM testbed applications

    NASA Technical Reports Server (NTRS)

    Regelbrugge, M. E.; Brogan, F. A.; Nour-Omid, B.; Rankin, C. C.; Wright, M. A.

    1989-01-01

    The first section describes data storage schemes presently used by the Computational Structural Mechanics (CSM) testbed sparse matrix facilities and similar skyline (profile) matrix facilities. The second section contains a discussion of certain features required for the implementation of particular advanced CSM algorithms, and how these features might be incorporated into the data storage schemes described previously. The third section presents recommendations, based on the discussions of the prior sections, for directing future CSM testbed development to provide necessary matrix facilities for advanced algorithm implementation and use. The objective is to lend insight into the matrix structures discussed and to help explain the process of evaluating alternative matrix data structures and utilities for subsequent use in the CSM testbed.
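
    A minimal sketch of the skyline (profile) idea mentioned above, for a symmetric matrix: each column is stored from its first nonzero row down to the diagonal, with a pointer array marking where each column starts. This illustrates the storage scheme only, not the testbed's actual facilities:

    ```python
    import numpy as np

    def to_skyline(A):
        """Pack the upper triangle of a symmetric matrix into skyline form."""
        n = A.shape[0]
        vals, ptr = [], [0]
        for j in range(n):
            nz = np.nonzero(A[:j + 1, j])[0]
            first = int(nz[0]) if nz.size else j   # top of column j's "skyline"
            vals.extend(A[first:j + 1, j])         # zeros inside the profile are kept
            ptr.append(len(vals))
        return np.array(vals), np.array(ptr)

    def skyline_get(vals, ptr, i, j):
        """Read entry (i, j) back out of the packed arrays."""
        if i > j:
            i, j = j, i                            # symmetry: use the upper triangle
        height = ptr[j + 1] - ptr[j]
        first = j - height + 1
        return 0.0 if i < first else vals[ptr[j] + (i - first)]
    ```

    Keeping the zeros inside each column's profile is deliberate: factorization fill-in stays within the profile, so the packed layout survives a Cholesky-style decomposition, which is the property that distinguishes skyline from general sparse storage.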

  19. Installing and implementing a computer-based patient record system in sub-Saharan Africa: the Mosoriot Medical Record System.

    PubMed

    Rotich, Joseph K; Hannan, Terry J; Smith, Faye E; Bii, John; Odero, Wilson W; Vu, Nguyen; Mamlin, Burke W; Mamlin, Joseph J; Einterz, Robert M; Tierney, William M

    2003-01-01

    The authors implemented an electronic medical record system in a rural Kenyan health center. Visit data are recorded on a paper encounter form, eliminating duplicate documentation in multiple clinic logbooks. Data are entered into an MS-Access database supported by redundant power systems. The system was initiated in February 2001, and 10,000 visit records were entered for 6,190 patients in six months. The authors present a summary of the clinics visited, diagnoses made, drugs prescribed, and tests performed. After system implementation, patient visits were 22% shorter. They spent 58% less time with providers (p < 0.001) and 38% less time waiting (p = 0.06). Clinic personnel spent 50% less time interacting with patients, two thirds less time interacting with each other, and more time in personal activities. This simple electronic medical record system has bridged the "digital divide." Financial and technical sustainability by Kenyans will be key to its future use and development.

  20. Modelling the radiolytic corrosion of α-doped UO2 and spent nuclear fuel

    NASA Astrophysics Data System (ADS)

    Liu, Nazhen; Qin, Zack; Noël, James J.; Shoesmith, David W.

    2017-10-01

    A model previously developed to predict the corrosion rate of spent fuel (UO2) inside a failed waste container has been adapted to simulate the rates measured on a wide range of α-doped UO2 and spent fuel specimens. This simulation confirms the validity of the model and demonstrates that the steady-state corrosion rate is controlled by the radiolytic production of H2O2 (which has been shown to be the primary oxidant driving fuel corrosion), irrespective of the reactivity of the UO2 matrix. The model was then used to determine the consequences of corrosion inside a failed container resealed by steel corrosion products. The possible accumulation of O2, produced by H2O2 decomposition, was found to accelerate the corrosion rate in a closed system. However, the simultaneous accumulation of radiolytic H2, which is activated as a reductant on the noble metal (ε) particles in the spent fuel, rapidly overcame this acceleration, leading to the eventual suppression of the corrosion rate to insignificant values. Calculations also showed that, while the radiation dose rate, the H2O2 decomposition ratio, and the surface coverage of ε particles all influenced the short-term corrosion rate, the radiolytically produced H2 was the overwhelming influence in reducing the rate to a negligible level (i.e., <10^-20 mol m^-2 s^-1).

  1. The Strength of Ethical Matrixes as a Tool for Normative Analysis Related to Technological Choices: The Case of Geological Disposal for Radioactive Waste.

    PubMed

    Kermisch, Céline; Depaus, Christophe

    2018-02-01

    The ethical matrix is a participatory tool designed to structure ethical reflection about the design, introduction, development or use of technologies. Its collective implementation, in the context of participatory decision-making, has shown its potential usefulness. By contrast, its implementation by a single researcher has not been thoroughly analyzed. The aim of this paper is precisely to assess the strength of ethical matrixes implemented by a single researcher as a tool for conceptual normative analysis related to technological choices. To this end, the ethical matrix framework is applied to the management of high-level radioactive waste, more specifically to retrievable and non-retrievable geological disposal. The results of this analysis show that the usefulness of ethical matrixes is twofold and that they provide a valuable input for further decision-making. Indeed, by using ethical matrixes, implicit ethically relevant issues were revealed: issues of equity associated with health impacts, and differences between close and remote future generations regarding ethical impacts. Moreover, the ethical matrix framework was helpful in synthesizing and systematically comparing the ethical impacts of the technologies under scrutiny, and hence in highlighting potential ethical conflicts.

  2. Pebble bed modular reactor safeguards: developing new approaches and implementing safeguards by design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beyer, Brian David; Beddingfield, David H; Durst, Philip

    2010-01-01

    The design of the Pebble Bed Modular Reactor (PBMR) does not fit the IAEA safeguards approaches for light water reactors (LWR), on-load refueled reactors (OLR, i.e., CANDU), or Other (prismatic HTGR), because the fuel is in a bulk form rather than discrete items. Because the nuclear fuel is a collection of nuclear material inserted in tennis-ball-sized spheres containing structural and moderating material, and a PBMR core will contain a bulk load on the order of 500,000 spheres, it could be classified as a 'Bulk-Fuel Reactor.' Hence, the IAEA should develop unique safeguards criteria. A multi-lab DOE study found that an optimized blend of (i) developing techniques to verify the plutonium content in spent fuel pebbles, (ii) improving burn-up computer codes for PBMR spent fuel to provide a better understanding of the core and spent fuel makeup, and (iii) utilizing bulk verification techniques for PBMR spent fuel storage bins should be combined with the historic IAEA and South African approaches of containment and surveillance to verify and maintain continuity of knowledge of PBMR fuel. For all of these techniques to work, the design of the reactor will need to accommodate safeguards and material accountancy measures to a far greater extent than has thus far been the case. The implementation of Safeguards-by-Design as the PBMR design progresses provides an approach to meet these safeguards and accountancy needs.

  3. Solute transport in a single fracture involving an arbitrary length decay chain with rock matrix comprising different geological layers.

    PubMed

    Mahmoudzadeh, Batoul; Liu, Longcheng; Moreno, Luis; Neretnieks, Ivars

    2014-08-01

    A model is developed to describe solute transport and retention in fractured rocks. It accounts for advection along the fracture, molecular diffusion from the fracture into a rock matrix composed of several geological layers, adsorption on the fracture surface, adsorption in the rock matrix layers, and radioactive decay chains. The analytical solution, obtained for the Laplace-transformed concentration at the outlet of the flowing channel, can conveniently be transformed back to the time domain by use of the de Hoog algorithm. This allows one to readily include it in a fracture network model or a channel network model to predict nuclide transport through channels in heterogeneous fractured media consisting of an arbitrary number of rock units with piecewise constant properties. More importantly, the simulations made in this study suggest that it is necessary to account for decay chains, and for a rock matrix comprising at least two different geological layers where justified, in safety and performance assessments of repositories for spent nuclear fuel. Copyright © 2014 Elsevier B.V. All rights reserved.
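    The back-transformation step lends itself to a compact numerical sketch. The snippet below inverts a simple closed-form Laplace-domain function with the de Hoog algorithm as implemented in mpmath; the transfer function is a textbook placeholder with a known inverse, not the paper's multi-layer outlet solution:

```python
from mpmath import mp, invertlaplace, exp, sqrt, pi

mp.dps = 15

# Placeholder Laplace-domain function F(s) = exp(-sqrt(s)), whose exact
# inverse is f(t) = exp(-1/(4t)) / (2*sqrt(pi)*t**1.5) -- handy for checking
# the de Hoog inversion before applying it to a real outlet concentration.
F = lambda s: exp(-sqrt(s))
f_exact = lambda t: exp(-1 / (4 * t)) / (2 * sqrt(pi) * t ** mp.mpf(1.5))

for t in (0.5, 1.0, 2.0):
    print(t, invertlaplace(F, t, method='dehoog'), f_exact(t))
```

    The numerical and analytic columns agree to many digits, which is the practical reason de Hoog-type inversion is trusted inside larger network-transport codes.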

  4. The effect of fission-energy Xe ion irradiation on the structural integrity and dissolution of the CeO2 matrix

    NASA Astrophysics Data System (ADS)

    Popel, A. J.; Le Solliec, S.; Lampronti, G. I.; Day, J.; Petrov, P. K.; Farnan, I.

    2017-02-01

    This work considers the effect of fission fragment damage on the structural integrity and dissolution of the CeO2 matrix in water, as a simulant for the UO2 matrix of spent nuclear fuel. For this purpose, thin films of CeO2 on Si substrates were produced and irradiated by 92 MeV 129Xe23+ ions to a fluence of 4.8 × 1015 ions/cm2 to simulate fission damage that occurs within nuclear fuels along with bulk CeO2 samples. The irradiated and unirradiated samples were characterised and a static batch dissolution experiment was conducted to study the effect of the induced irradiation damage on dissolution of the CeO2 matrix. Complex restructuring took place in the irradiated films and the irradiated samples showed an increase in the amount of dissolved cerium, as compared to the corresponding unirradiated samples. Secondary phases were also observed on the surface of the irradiated CeO2 films after the dissolution experiment.

  5. Financial and workflow analysis of radiology reporting processes in the planning phase of implementation of a speech recognition system

    NASA Astrophysics Data System (ADS)

    Whang, Tom; Ratib, Osman M.; Umamoto, Kathleen; Grant, Edward G.; McCoy, Michael J.

    2002-05-01

    The goal of this study is to determine the financial value and workflow improvements achievable by replacing traditional transcription services with a speech recognition system in a large, university hospital setting. Workflow metrics were measured at two hospitals, one of which exclusively uses a transcription service (UCLA Medical Center), and the other which exclusively uses speech recognition (West Los Angeles VA Hospital). Workflow metrics include time spent per report (the sum of time spent interpreting, dictating, reviewing, and editing), transcription turnaround, and total report turnaround. Compared to traditional transcription, speech recognition resulted in radiologists spending 13-32% more time per report, but it also resulted in reduction of report turnaround time by 22-62% and reduction of marginal cost per report by 94%. The model developed here helps justify the introduction of a speech recognition system by showing that the benefits of reduced operating costs and decreased turnaround time outweigh the cost of increased time spent per report. Whether the ultimate goal is to achieve a financial objective or to improve operational efficiency, it is important to conduct a thorough analysis of workflow before implementation.
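    The trade-off the authors quantify can be framed as a one-line break-even calculation. All inputs below are hypothetical placeholders for illustration, not figures from the study:

```python
# Break-even sketch: extra radiologist dictation time vs. saved transcription cost.
# Hypothetical inputs -- substitute a site's own measurements.
reports_per_day = 120
extra_minutes_per_report = 1.5        # added radiologist time with speech recognition
radiologist_cost_per_minute = 4.0     # fully loaded cost, $/min
transcription_cost_per_report = 5.0   # outsourced transcription, $/report
sr_marginal_cost_per_report = 0.3     # amortized licences/support, $/report

extra_labor = reports_per_day * extra_minutes_per_report * radiologist_cost_per_minute
savings = reports_per_day * (transcription_cost_per_report - sr_marginal_cost_per_report)
print(f"daily net benefit: ${savings - extra_labor:,.0f}")
```

    With these illustrative numbers the transcription savings narrowly outweigh the added physician time, mirroring the qualitative conclusion of the study.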

  6. Improvement of vegetables elemental quality by espresso coffee residues.

    PubMed

    Cruz, Rebeca; Morais, Simone; Mendes, Eulália; Pereira, José A; Baptista, Paula; Casal, Susana

    2014-04-01

    Spent coffee grounds (SCG) are usually disposed of as common garbage, without specific reuse strategies implemented so far. Due to their recognised richness in bioactive compounds, the effect of SCG on lettuce's macro- and micro-elements was assessed to define its effectiveness for agro-industrial reuse. A greenhouse pot experiment was conducted with different amounts of fresh and composted spent coffee, and potassium, magnesium, phosphorus, calcium, sodium, iron, manganese, zinc and copper were analysed. A progressive decrease in all lettuce mineral elements was observed with increasing amounts of fresh spent coffee, except for potassium. In contrast, an increase in lettuce's essential macro-elements was observed when low amounts of composted spent coffee were applied (5%, v/v), increasing potassium content by 40%, manganese by 30%, magnesium by 20%, and sodium by 10%, all of nutritional relevance. This practical approach offers an alternative reuse for this by-product, extendable to other crops, providing value-added vegetable products. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. The density matrix renormalization group algorithm on kilo-processor architectures: Implementation and trade-offs

    NASA Astrophysics Data System (ADS)

    Nemes, Csaba; Barcza, Gergely; Nagy, Zoltán; Legeza, Örs; Szolgay, Péter

    2014-06-01

    In the numerical analysis of strongly correlated quantum lattice models, one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamiltonian. As the most time-consuming step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU-GPU implementation is presented, which exploits the power of both CPU and GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix-vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.

  8. Work efficiency improvement of >90% after implementation of an annual inpatient blood products administration consent form

    PubMed Central

    Lindsay, Holly; Bhar, Saleh; Bonifant, Challice; Sartain, Sarah; Whittle, Sarah B.; Lee-Kim, Youngna; Shah, Mona D.

    2018-01-01

    Paediatric haematology, oncology and bone marrow transplant (BMT) patients frequently require transfusion of blood products. Our institution required that a new transfusion consent be obtained every admission. The objectives of this project were to revise the inpatient blood products consent form to be valid for 1 year, decrease provider time spent consenting from 15 to <5 min per admission, and reduce provider frustration with the consent process. Over 6 months, we determined the average number of hospitalisations requiring transfusions in a random sampling of haematology/oncology/BMT inpatients. We surveyed nurses and providers regarding frustration levels and contact required regarding consents. Four and 12 months after implementation of the annual consent, providers and nurses were resurveyed, and new inpatient cohorts were assessed. Comparison of preintervention and postintervention time data allowed calculation of provider time reduction, a surrogate measure of improved work efficiency. Prior to the annual consent, >33 hours were spent over 6 months obtaining consent on 40 patients, with >19 hours spent obtaining consent when no transfusions were administered during the admission. Twelve months after annual consent implementation, 97.5% (39/40) of analysed patients had a completed annual blood products transfusion consent, and provider work efficiency had improved by 94.6% (>30 hours). Although several surveyed variables improved following annual consent implementation, provider frustration with the consent process remained 6 out of a maximum score of 10, the same level as prior to the intervention. Development of an annual inpatient blood products consent form decreased provider time from 15 to <1 min per admission, decreased consenting numbers and increased work efficiency by >90%. PMID:29333497

  10. Transport and geotechnical properties of porous media with applications to retorted oil shale. Volume 4. Appendix D. Temperature and toe erosion effects on spent oil shale embankment stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, N.Y.; Wu, T.H.

    1986-01-01

    To evaluate the engineering properties of spent shale at elevated temperatures, high-temperature triaxial cells were designed and manufactured. The cells were then used in a test program designed to provide the physical and engineering properties of spent shale (TOSCO-II) at elevated temperatures. A series of consolidated drained triaxial tests were conducted at high temperatures. The Duncan-Chang hyperbolic model was adopted to simulate the laboratory stress-strain behavior of spent shale at various temperatures; this model provides a very good fit to the laboratory stress-strain-volumetric strain characteristics of spent shale at various temperatures. The parameters of this model were then formulated as functions of temperature, and the Duncan-Chang model was implemented in a finite element analysis computer code for predicting the stress-deformation behavior of large spent shale embankments. The modified Bishop method was also used in analyzing the stability of spent shale embankments. The stability of three different spent shale embankments at three different temperatures was investigated in the study. Additionally, the stability of embankments with different degrees of toe erosion was studied. Results of this study indicated that (1) the stress-strain-strength properties of soils are affected by temperature variation; (2) the stress-strain-strength behavior of spent shale can be simulated by the Duncan-Chang hyperbolic model; (3) the factor of safety of an embankment slope decreases with rising temperature; (4) embankment deformation increases with rising temperature; and (5) toe erosion induced by floods causes the embankment slope to become less stable. It is strongly recommended to extend this study to investigate the effect of internal seepage on the stability of large spent shale embankments. 68 refs., 53 figs., 16 tabs.
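    The Duncan-Chang hyperbolic model referenced above has a compact closed form, q = ε / (1/E_i + ε/q_ult). A minimal sketch follows, with placeholder parameter values and a purely illustrative linear temperature adjustment; the study's actual temperature formulation is not reproduced here:

```python
import numpy as np

def duncan_chang_stress(strain, E_i, q_ult):
    """Deviatoric stress q = eps / (1/E_i + eps/q_ult): a hyperbola that
    rises with initial slope E_i and saturates toward the asymptote q_ult."""
    return strain / (1.0 / E_i + strain / q_ult)

# Hypothetical temperature dependence: stiffness and strength decay linearly.
def params_at_T(T_celsius, E0=50e3, q0=400.0, dE=-100.0, dq=-0.5):
    return E0 + dE * T_celsius, q0 + dq * T_celsius

eps = np.linspace(0.0, 0.05, 6)
for T in (20, 100, 200):
    E_i, q_ult = params_at_T(T)
    print(T, np.round(duncan_chang_stress(eps, E_i, q_ult), 1))
```

    Softening both parameters with temperature reproduces the study's qualitative finding that the factor of safety falls as the embankment heats up.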

  11. Time to audit.

    PubMed

    Smyth, L G; Martin, Z; Hall, B; Collins, D; Mealy, K

    2012-09-01

    Public and political pressure is increasing on doctors, and in particular surgeons, to demonstrate competence assurance. While surgical audit is an integral part of surgical practice, its implementation and delivery at a national level in Ireland is poorly developed. Limits to successful audit systems relate to lack of funding and administrative support. In Wexford General Hospital, we have a comprehensive audit system based on the Lothian Surgical Audit system. We wished to analyse the amount of time required by the Consultant, NCHDs and clerical staff on one surgical team to run a successful audit system. Data were collected over a calendar month. This included time spent coding and typing endoscopy procedures, coding and typing operative procedures, and typing and signing discharge letters. The total amount of time spent to run the audit system for one Consultant surgeon for one calendar month was 5,168 min, or 86.1 h. Greater than 50% of this time related to work performed by administrative staff. Only the intern and administrative staff spent more than 5% of their working week attending to work related to the audit. An integrated comprehensive audit system requires very little time input from Consultant surgeons. Greater than 90% of the workload in running the audit was performed by the junior house doctors and administrative staff. The main financial implications for national audit implementation would relate to software and administrative staff recruitment. Implementation of the European Working Time Directive in Ireland may limit the time available for NCHDs to participate in clinical audit.

  12. Stress Testing of an Artificial Pancreas System With Pizza and Exercise Leads to Improvements in the System's Fuzzy Logic Controller.

    PubMed

    Mauseth, Richard; Lord, Sandra M; Hirsch, Irl B; Kircher, Robert C; Matheson, Don P; Greenbaum, Carla J

    2015-09-14

    Under controlled conditions, the Dose Safety artificial pancreas (AP) system controller, which utilizes "fuzzy logic" (FL) methodology to calculate and deliver appropriate insulin dosages based on changes in blood glucose, successfully managed glycemic excursions. The aim of this study was to show whether stressing the system with pizza (high carbohydrate/high fat) meals and exercise would reveal deficits in the performance of the Dose Safety FL controller (FLC) and lead to improvements in the dosing matrix. Ten subjects with type 1 diabetes (T1D) were enrolled and participated in 30 studies (17 meal, 13 exercise) using 2 versions of the FLC. After conducting 13 studies with the first version (FLC v2.0), interim results were evaluated and the FLC insulin-dosing matrix was modified to create a new controller version (FLC v2.1) that was validated through regression testing using v2.0 CGM datasets prior to its use in clinical studies. The subsequent 17 studies were performed using FLC v2.1. Use of FLC v2.1 vs FLC v2.0 in the pizza meal tests showed improvements in mean blood glucose (205 mg/dL vs 232 mg/dL, P = .04). FLC v2.1 versus FLC v2.0 in exercise tests showed improvements in mean blood glucose (146 mg/dL vs 201 mg/dL, P = .004), percentage time spent >180 mg/dL (19.3% vs 46.7%, P = .001), and percentage time spent 70-180 mg/dL (80.0% vs 53.3%, P = .002). Stress testing the AP system revealed deficits in the FLC performance, which led to adjustments to the dosing matrix followed by improved FLC performance when retested. © 2015 Diabetes Technology Society.
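    A dosing matrix of the general kind described maps glucose and rate-of-change bands to an insulin adjustment. The toy lookup below is invented purely for illustration: the bands, values, and units bear no relation to the Dose Safety controller's actual matrix, and no clinical use is implied:

```python
import numpy as np

# Rows: glucose bands (mg/dL); columns: rate-of-change bands (mg/dL/min).
glucose_bands = [(0, 80), (80, 140), (140, 200), (200, 400)]
trend_bands = [(-10, -1), (-1, 1), (1, 10)]

# Invented dose matrix (units/hr basal adjustment) -- illustrative only.
dose_matrix = np.array([
    [0.0, 0.0, 0.0],   # low glucose: never dose
    [0.0, 0.2, 0.5],
    [0.3, 0.8, 1.2],
    [0.8, 1.5, 2.0],   # high and rising: most aggressive row
])

def band_index(value, bands):
    for i, (lo, hi) in enumerate(bands):
        if lo <= value < hi:
            return i
    raise ValueError("value outside table")

def dose(glucose, trend):
    return dose_matrix[band_index(glucose, glucose_bands),
                       band_index(trend, trend_bands)]

print(dose(185, 2.5))   # -> 1.2 units/hr in this toy table
```

    "Stress testing" in the study's sense amounts to finding meal/exercise trajectories for which entries of such a matrix are too timid or too aggressive, then revising those entries, as was done between FLC v2.0 and v2.1.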

  13. A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing

    Large-scale matrix inversion plays an important role in many applications; however, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on FPGA. To exploit the computational potential of the FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera StratixII EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a speedup factor of 2.6 and a maximum power-performance ratio of 41 can be achieved compared to a Pentium Dual CPU with double SSE threads.
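    The abstract does not spell out the underlying algorithm, so as a point of reference, here is a serial Gauss-Jordan inversion, a common basis for fine-grained PE-array designs since every row update within a sweep is independent. This is an assumption about the general technique, not the paper's exact scheme:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination with partial pivoting.
    Within each sweep k, every row update is independent -- the property
    a linear array of PEs exploits for fine-grained parallelism."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented [A | I]
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))       # partial pivot
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]
        for i in range(n):                        # independent row updates
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, n:]

A = np.random.default_rng(1).standard_normal((6, 6))
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(6)))   # True
```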

  14. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists of finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods for solving these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods of performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY 2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
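    A bare-bones Lanczos sketch makes the central point concrete: the matrix appears only through the product A @ q, so the sparse matrix-vector multiply is the step worth optimizing. This is the standard textbook iteration without reorthogonalization, not the paper's supercomputer implementation:

```python
import numpy as np
import scipy.sparse as sp

def lanczos_extreme_eigs(A, k=40, seed=0):
    """k-step Lanczos: builds a tridiagonal T whose extreme eigenvalues
    approximate those of symmetric A. The only use of A is 'A @ q'."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k + 1)
    q = np.random.default_rng(seed).standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j]                      # dominant cost: sparse mat-vec
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j] * Q[:, j - 1]
        beta[j + 1] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:k], 1) + np.diag(beta[1:k], -1)
    return np.linalg.eigvalsh(T)

A = sp.random(2000, 2000, density=1e-3, random_state=0)
A = (A + A.T) / 2                            # symmetrize the test matrix
print(lanczos_extreme_eigs(A)[[0, -1]])      # smallest/largest Ritz values
```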

  15. Savannah River Site Spent Nuclear Fuel Management Final Environmental Impact Statement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    N /A

    The proposed DOE action considered in this environmental impact statement (EIS) is to implement appropriate processes for the safe and efficient management of spent nuclear fuel and targets at the Savannah River Site (SRS) in Aiken County, South Carolina, including placing these materials in forms suitable for ultimate disposition. Options to treat, package, and store this material are discussed. The material included in this EIS consists of approximately 68 metric tons heavy metal (MTHM) of spent nuclear fuel: 20 MTHM of aluminum-based spent nuclear fuel at SRS, as much as 28 MTHM of aluminum-clad spent nuclear fuel from foreign and domestic research reactors to be shipped to SRS through 2035, and 20 MTHM of stainless-steel or zirconium-clad spent nuclear fuel and some americium/curium targets stored at SRS. Alternatives considered in this EIS encompass a range of new packaging, new processing, and conventional processing technologies, as well as the No Action Alternative. A preferred alternative is identified in which DOE would prepare about 97% by volume (about 60% by mass) of the aluminum-based fuel for disposition using a melt-and-dilute treatment process. The remaining 3% by volume (about 40% by mass) would be managed using chemical separation. Impacts are assessed primarily in the areas of water resources, air resources, public and worker health, waste management, socioeconomics, and cumulative impacts.

  16. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size; parallelization is therefore needed to speed up a calculation that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on the GPU (graphics processing unit).
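    For reference, here is the simplest form of the parallelization being discussed: a row-partitioned sparse matrix-vector product in which each worker owns a contiguous block of rows. This is a naive 1D split; a hypergraph partitioner would instead choose the partition to minimize communication volume. A generic sketch, not the paper's CUDA code:

```python
import numpy as np
import scipy.sparse as sp
from concurrent.futures import ThreadPoolExecutor

def spmv_block(A_csr, x, lo, hi):
    """Each worker computes its own slice y[lo:hi] of the product."""
    return A_csr[lo:hi] @ x

# Rectangular (non-square, non-symmetric) matrix: the case graph
# partitioning cannot handle but hypergraph partitioning can.
A = sp.random(10_000, 8_000, density=1e-3, format='csr', random_state=0)
x = np.random.default_rng(0).standard_normal(A.shape[1])

P = 4                                        # number of partitions/workers
cuts = np.linspace(0, A.shape[0], P + 1, dtype=int)
with ThreadPoolExecutor(P) as pool:
    parts = pool.map(lambda i: spmv_block(A, x, cuts[i], cuts[i + 1]), range(P))
y = np.concatenate(list(parts))

print(np.allclose(y, A @ x))                 # matches the serial product
```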

  17. Extending Spent Fuel Storage until Transport for Reprocessing or Disposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsen, Brett; Chiguer, Mustapha; Grahn, Per

    Spent fuel (SF) must be stored until an end point such as reprocessing or geologic disposal is implemented. Selection and implementation of an end point for SF depends upon future funding, legislation, licensing and other factors that cannot be predicted with certainty. Past presumptions related to the availability of an end point have often been wrong and resulted in missed opportunities for properly informing spent fuel management policies and strategies. For example, dry cask storage systems were originally conceived to free up needed space in reactor spent fuel pools and also to provide spent fuel storage of up to 20 years until reprocessing and/or deep geological disposal became available. Hundreds of dry cask storage systems are now employed throughout the world and will be relied upon well beyond the originally envisioned design life. Given present and projected rates for the use of nuclear power, coupled with projections for SF reprocessing and disposal capacities, one concludes that SF storage will be prolonged, potentially for several decades. The US Nuclear Regulatory Commission has recently considered 300 years of storage to be appropriate for the characterization and prediction of ageing effects and ageing management issues associated with extending SF storage and subsequent transport. This paper encourages addressing the uncertainty associated with the duration of SF storage by design, rather than by default. It suggests ways that this uncertainty may be considered in design, licensing, policy, and strategy decisions, and proposes a framework for safely extending spent fuel storage until SF can be transported for reprocessing or disposal, regardless of how long that may be. The paper, however, is not intended to either encourage or facilitate needlessly extending spent fuel storage durations. Its intent is to ensure a design and safety basis with sufficient margin to accommodate the full range of potential future scenarios. Although the focus is primarily on storage of SF from commercial operation, the principles described are equally applicable to SF from research and production reactors as well as high-level radioactive waste.

  18. Development of spent fuel reprocessing process based on selective sulfurization: Study on the Pu, Np and Am sulfurization

    NASA Astrophysics Data System (ADS)

    Kirishima, Akira; Amano, Yuuki; Nihei, Toshifumi; Mitsugashira, Toshiaki; Sato, Nobuaki

    2010-03-01

    For the recovery of fissile materials from spent nuclear fuel, we have proposed a novel reprocessing process based on the selective sulfurization of fission products (FPs). The key concept of this process is the utilization of a unique chemical property of carbon disulfide (CS2): it acts as a reductant for U3O8 but as a sulfurizing agent for minor actinides and lanthanides. Sulfurized FPs and minor actinides (MA) are highly soluble in dilute nitric acid, while UO2 and PuO2 are hardly soluble; therefore, FPs and MA can be removed from the uranium and plutonium matrix by selective dissolution. As a feasibility study of this new concept, the sulfurization behaviours of U, Pu, Np, Am and Eu are investigated in this paper by thermodynamic calculation, phase analysis of chemical analogue elements, and tracer experiments.

  19. States' implementation of the Section 510 abstinence education program, FY 1999.

    PubMed

    Sonfield, A; Gold, R B

    2001-01-01

    As part of its reworking of the nation's welfare system in 1996, Congress enacted a major new abstinence education initiative (Section 510 of Title V of the Social Security Act), projected to spend $87.5 million in federal, state and local funds per year for five years. The new program is designed to emphasize abstinence from sexual activity outside of marriage, at any age, rather than premarital abstinence for adolescents, which was typical of earlier efforts. The actual message and impact of the program, however, depend on how it is implemented. Program coordinators in all 50 states, the District of Columbia and Puerto Rico were surveyed concerning implementation of the Section 510 abstinence education program in FY 1999. The questionnaire asked about expenditures and activities performed, about policies established for a variety of specific situations, and about how the term "sexual activity" is defined and which specific components of the federal definition of "abstinence education" are emphasized. Forty-five jurisdictions spent a total of $69 million through the Section 510 program in FY 1999. Of this total, $33 million was spent through public entities, $28 million through private entities and $7 million (in 22 jurisdictions) through faith-based entities. Almost all jurisdictions reported funding school-related activities, with 38 reporting in-school instruction and presentations. Twenty-eight jurisdictions prohibited organizations from providing information about contraception (aside from failure rates), even at a client's request, while only six jurisdictions prohibited information about sexually transmitted diseases. Few reported having a policy or rendering guidance about providing services addressing sexual abuse, sexual orientation or existing pregnancy and parenthood. Only six respondents said they defined "sexual activity" for purposes of the program, and 16 reported focusing on specific portions of the federal definition of "abstinence education." More than one in 10 Section 510 dollars was spent through faith-based entities. Programs commonly conducted in-school activities, particularly instruction and presentations, not only through public entities but also through private and faith-based entities. Most jurisdictions prohibited the provision of information about contraception, about providers of contraceptive services, or about both topics, even in response to a direct question and when using other sources of funding. Most also left definitions of "abstinence" and "sexual activity" as local decisions, thus not clearly articulating what the program is designed to encourage clients to abstain from.

  20. Spent coffee ground extract suppresses ultraviolet B-induced photoaging in hairless mice.

    PubMed

    Choi, Hyeon-Son; Park, Eu Ddeum; Park, Yooheon; Suh, Hyung Joo

    2015-12-01

    The aim of this study was to evaluate the effect of spent coffee ground (SCG) ethanol extract on UVB-induced skin aging in hairless mice. An ethanol extract of SCG (ESCG) was prepared using the residue remaining after extraction of oil from roasted SCG. High performance liquid chromatography (HPLC) analysis showed that the content of caffeine (41.58 ± 0.54 μg/mg) was higher than that of chlorogenic acid isomers (~9.17 μg/mg) in ESCG. ESCG significantly decreased UVB-induced intracellular reactive oxygen species in HaCaT cells. UVB-induced wrinkle formation in mouse dorsal skin was effectively reduced by ESCG administration; a high dose of ESCG (5 g/L) reduced the wrinkle area by 30% compared with the UVB-treated control (UVBC). This result correlated with the ESCG-mediated decrease in epidermis thickness (25%). In addition, ESCG administration significantly reduced transdermal water loss (20%) and erythema formation (35%) resulting from UVB exposure. Collagen type I (COL-1) levels in dorsal skin were effectively recovered by ESCG administration. These results were supported by down-regulation of the collagen-degrading matrix metalloproteinases 2 (MMP2) and 9 (MMP9). Our results indicate that ESCG protects mouse skin from UVB-induced photoaging by suppressing the expression of matrix metalloproteinases, and suggest that ESCG may be an anti-photoaging agent. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Emergency Physician Task Switching Increases With the Introduction of a Commercial Electronic Health Record.

    PubMed

    Benda, Natalie C; Meadors, Margaret L; Hettinger, A Zachary; Ratwani, Raj M

    2016-06-01

    We evaluate how the transition from a homegrown electronic health record to a commercial one affects emergency physician work activities, from initial introduction to long-term use. We completed a quasi-experimental study across 3 periods during the transition from a homegrown system to a commercially available electronic health record with computerized provider order entry. Observation periods consisted of pre-implementation, 1 month before the implementation of the commercial electronic health record; "go-live," 1 week after implementation; and post-implementation, 3 to 4 months after use began. Fourteen physicians were observed in each period (N=42) with a minute-by-minute observation template to record emergency physician time allocation across 5 task-based categories (computer, verbal communication, patient room, paper [chart/laboratory results], and other). The average number of tasks physicians engaged in per minute was also analyzed as an indicator of task switching. From pre- to post-implementation, there were no significant differences in the amount of time spent on the various task categories. There were changes in time allocation from pre-implementation to go-live and from go-live to post-implementation, characterized by a significant increase in time spent on computer tasks during go-live relative to the other periods. Critically, the number of tasks physicians engaged in per minute increased from 1.7 during pre-implementation to 1.9 during post-implementation (difference 0.19 tasks per minute; 95% confidence interval 0.039 to 0.35). The increase in the number of tasks physicians engaged in per minute post-implementation indicates that physicians switched tasks more frequently. Frequent task switching behavior raises patient safety concerns. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  2. Biotechnological possibilities for waste tyre-rubber treatment.

    PubMed

    Holst, O; Stenberg, B; Christiansson, M

    1998-01-01

    Every year large amounts of spent rubber material, mainly from rubber tyres, are discarded. Of the total annual global production of rubber material, which amounts to 16-17 million tonnes, approximately 65% is used for the production of tyres. About 250 million spent car tyres are generated yearly in the USA alone. This huge amount of waste rubber material is an environmental problem of great concern. Various ways to remediate the problem have been proposed, among them road fillings and combustion in kilns. Spent tyres, however, comprise valuable material that could be recycled if a proper technique can be developed. One way of recycling old tyres is to blend ground spent rubber with virgin material followed by vulcanization. The main obstacle to this recycling is poor adhesion between the crumb and the matrix of virgin rubber material, due to limited formation of interfacial sulphur crosslinks. Micro-organisms able to break sulphur-sulphur and sulphur-carbon bonds can be used to devulcanize waste rubber in order to make polymer chains on the surface more flexible and facilitate increased binding upon vulcanization. Several species belonging to both Bacteria and Archaea have this ability. Mainly sulphur-oxidizing species, such as different species of the genus Thiobacillus and thermoacidophiles of the order Sulfolobales, have been studied in this context. The present paper gives a background to the problem and an overview of the biotechnological possibilities for addressing waste rubber as an environmental problem, focusing on microbial desulphurization.

  3. Development of sustainable dye adsorption system using nutraceutical industrial fennel seed spent-studies using Congo red dye.

    PubMed

    Taqui, Syed Noeman; Yahya, Rosiyah; Hassan, Aziz; Nayak, Nayan; Syed, Akheel Ahmed

    2017-07-03

    Fennel seed spent (FSS), an inexpensive nutraceutical industrial spent, has been used as an efficient biosorbent for the removal of Congo red (CR) from aqueous media. Results show that pH 2-4 and 30°C were ideal for maximum adsorption. Based on regression fitting of the data, it was determined that the Sips isotherm (R² = 0.994, χ² = 0.5) adequately described the mechanism of adsorption, suggesting that the adsorption occurs homogeneously with favorable interaction between layers. Thermodynamic analysis showed that the adsorption is favorable (negative values of ΔG°) and endothermic (ΔH° = 12-20 kJ mol⁻¹) for initial dye concentrations of 25, 50, and 100 ppm. The low ΔH° value indicates that the adsorption is a physical process involving weak chemical interactions like hydrogen bonds and van der Waals interactions. The kinetics revealed that the adsorption process showed pseudo-second-order tendencies, with equal influence of intraparticle and film diffusion. The scanning electron microscopy images of FSS show a highly fibrous matrix with a hierarchical porous structure. Fourier transform infrared spectroscopy analysis of the spent confirmed the presence of cellulosic and lignocellulosic matter, giving it both hydrophilic and hydrophobic properties. The investigations indicate that FSS is a cost-effective and efficient biosorbent for the remediation of toxic CR dye.
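    The Sips isotherm mentioned above combines Langmuir and Freundlich behavior, q_e = q_m (K C)^n / (1 + (K C)^n). A small fitting sketch with synthetic data follows; the values are invented for illustration, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def sips(C, q_m, K, n):
    """Sips isotherm: Langmuir-like saturation with a Freundlich exponent n."""
    x = (K * C) ** n
    return q_m * x / (1 + x)

# Synthetic equilibrium data (q in mg/g vs C in mg/L) -- illustrative only.
C = np.array([5, 10, 25, 50, 100, 200], float)
q_obs = sips(C, 90.0, 0.03, 1.2) + np.random.default_rng(0).normal(0, 2, C.size)

popt, _ = curve_fit(sips, C, q_obs, p0=(80, 0.05, 1.0))
print("fitted q_m, K, n =", np.round(popt, 3))
```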

  4. Sparse matrix-vector multiplication on network-on-chip

    NASA Astrophysics Data System (ADS)

    Sun, C.-C.; Götze, J.; Jheng, H.-Y.; Ruan, S.-J.

    2010-12-01

    In this paper, we present an idea for performing matrix-vector multiplication using a Network-on-Chip (NoC) architecture. In traditional IC design, on-chip communications have been designed with dedicated point-to-point interconnections, so regular local data transfer is the central concept of many parallel implementations. However, in the parallel implementation of sparse matrix-vector multiplication (SMVM), which is the main step of all iterative algorithms for solving systems of linear equations, the required data transfers depend on the sparsity structure of the matrix and can be extremely irregular. Using the NoC architecture makes it possible to deal with an arbitrary structure of data transfers, i.e., with the irregular structure of sparse matrices. So far, we have implemented the proposed SMVM-NoC architecture in sizes 4×4 and 5×5 with IEEE 754 single-precision floating point on FPGA.
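    To make the irregularity concrete: in the standard CSR kernel below, the column indices touched in each row depend entirely on the sparsity pattern, so the traffic cannot be laid out as fixed point-to-point routes. This is a generic sketch of the operation, not the authors' NoC implementation:

```python
import numpy as np

def spmv_csr(data, indices, indptr, x):
    """y = A @ x for A in CSR form. The gather x[indices[k]] is
    data-dependent -- exactly the irregular traffic a NoC can absorb."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 3x3 example: A = [[4,0,1],[0,2,0],[3,0,5]]
data = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr = np.array([0, 2, 3, 5])
print(spmv_csr(data, indices, indptr, np.ones(3)))   # -> [5. 2. 8.]
```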

  5. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  6. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  7. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
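    A binary (GF(2)) toy version of the column-filtering idea in these patents can be stated in a few lines: candidate columns that would break the column-wise linear-independence requirement are filtered out before populating the check matrix. Restricting to distinct odd-weight columns is one well-known filter that yields distance 4 (SECDED), since the sum of any three odd-weight vectors is again odd-weight and hence nonzero. The GF(q) generality and the circuit-logic minimization step of the patents are not reproduced here:

```python
import itertools
import numpy as np

def secded_check_matrix(r, n):
    """Build an r x n binary check matrix with minimum distance 4.

    Filter: keep only nonzero, distinct, odd-weight candidate columns.
    Any single column is nonzero; any two distinct columns cannot sum
    to zero; any three odd-weight columns sum to an odd-weight (nonzero)
    vector -- so every 3 columns are linearly independent and d >= 4.
    """
    cols = [v for v in itertools.product([0, 1], repeat=r) if sum(v) % 2 == 1]
    if len(cols) < n:
        raise ValueError("r too small to supply n admissible columns")
    return np.array(cols[:n]).T

H = secded_check_matrix(r=8, n=72)   # the classic (72,64) SECDED shape
print(H.shape)                       # -> (8, 72)
```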

  8. Organizational context associated with time spent evaluating language and cognitive-communicative impairments in skilled nursing facilities: Survey results within an implementation science framework.

    PubMed

    Douglas, Natalie F

    2016-01-01

    The Consolidated Framework for Implementation Research (CFIR) was developed to merge research and practice in healthcare by accounting for the many elements that influence evidence-based treatment implementation. These include characteristics of the individuals involved, features of the treatment itself, and aspects of the organizational culture where the treatment is being provided. The purpose of this study was to apply the CFIR to a measurement of current practice patterns of speech-language pathologists (SLPs) working in the skilled nursing facility (SNF) environment. In an effort to inform future evidence-based practice implementation interventions, research questions addressed current practice patterns, clinician treatment use and preferences, and perceptions of the organizational context including leadership, resources, and other staff. Surveys were mailed to each SLP working in a SNF in the state of Michigan. Participants (N=77, 19% response rate) completed a survey mapping on to CFIR components impacting evidence-based practice implementation. Quantitative descriptive and nonparametric correlational analyses were completed. Use of evidence-based treatments by SLPs in SNFs was highly variable. Negative correlations between treating speech and voice disorders and treating swallowing disorders (rs=-.35, p<.01), evaluating language and cognitive-communicative disorders and treating swallowing disorders (rs=-.30, p<.01), treating language and cognitive-communicative disorders and treating swallowing disorders (rs=-.67, p<.01), and evaluating swallowing disorders and treating language and cognitive-communicative disorders (rs=-.37, p<.01) were noted. A positive correlation between the SLPs' perception of organizational context and time spent evaluating language and other cognitive-communicative disorders (rs=.27, p<.05) was also present. Associative data suggest that the more an SLP in the SNF evaluates and treats swallowing disorders, the less he or she will evaluate speech, voice, language or other cognitive-communicative disorders. Further, SLPs in this sample spent more time evaluating language and cognitive-communicative impairments if they perceived their organizational context in a more positive way. The CFIR may guide treatment and implementation research to increase the uptake of evidence-based practices for SLPs working in the SNF setting. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Characterisation of corn extrudates with the addition of brewers' spent grain as a raw material for the production of functional batters.

    PubMed

    Żelaziński, Tomasz; Ekielski, Adam; Siwek, Adam; Dardziński, Leszek

    2017-01-01

    Novel food batters, recommended for various products, are at present manufactured by extrusion. Thanks to this, it is possible to seek out and process new raw materials whose processing has so far been considered impossible or economically unviable. The purpose of this work was therefore to investigate extrudates produced from corn and brewers' spent grain compounds that are subsequently used as raw material for food batter production. The work presents the findings of research on the extrusion of corn mixes with varying levels of brewers' spent grains, up to a maximum of 30%. Tests were conducted using a co-rotating twin-screw extruder equipped with a single-outlet matrix with a diameter of 2.5 mm. The products obtained were subjected to analysis of their parameters (apparent density, strength parameters, abrasiveness index), and the granulation of a single fraction was checked. The sample with the highest percentage content was subjected to a detailed analysis of particle shape using vision software. It was found that an increase in the content of brewers' spent grains resulted in increased hardness of the products obtained. During the tests it was observed that the increasing hardness of the measured samples runs opposite to their abrasion resistance. The maximum decrease in the abrasion parameters was seen for extrudates with 30% spent grain addition and was 1.4%, while the minimum decrease, for extrudates with 10% brewers' grain content, amounted to 0.85%. This may indicate the high brittleness of such products, particularly on the outer surface. It was also observed that lower grindability was recorded for samples produced by extrusion at a temperature of 140°C; on the other hand, the higher grindability obtained at 120°C may facilitate the grinding of such products, which may be particularly important in the production of food batter. Brewers' spent grains used as an addition to corn groats contribute to substantial changes in the extrudates obtained. It is also possible to produce compact extrudates with a brewers' spent grain content of 30%. After grinding, extrudates with higher brewers' spent grain content are distinguished by more rounded grains. The packing index of the samples indicates increased accuracy of covering products with such batter, which indicates an advantage of food batters containing brewers' spent grains.

  10. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for optical systolic processors are described, with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure are described, together with the realization of matrix-vector, matrix-matrix, and triple-matrix products on such architectures. Parallel algorithms for direct and indirect solutions of systems of linear algebraic equations and their implementation on optical systolic processors are detailed, with attention to the pipelining and flow of data and operations. Parallel algorithms, and their optical realization, for LU and QR matrix decomposition are specifically detailed; these represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

  11. Multi-stakeholder policy modeling for collection and recycling of spent portable battery waste.

    PubMed

    Gupta, Vimal Kumar; Kaushal, Rajendra Kumar; Shukla, Sheo Prasad

    2018-06-01

    Policies have been structured for the collection and recycling of spent portable battery waste within a framework of stakeholders (recycling council body, producer, recycler and consumer), especially for those battery units that are discarded worldwide because of their high cost of recycling. The applicability of stakeholders' policies in their coalition framework has been reviewed and critically analyzed using the Shapley value of cooperative game theory models. Coalition models for 'manufacturer and recycler' indicated the dominating role of manufacturers over recyclers, with waste management highly influenced by producer responsibility; the take-back policy, by contrast, enables a dominant role for recyclers in the management and yields maximum benefit to both recyclers and consumers. The polluter-pays principle has been implemented in formulating policies for the key stakeholders of battery products, 'manufacturers' as well as 'consumers', by the introduction of penalties to encourage their willingness to join the Environment, Health and Safety program. Results indicated that the policies of the framework have the potential to be implemented with a marginal rise in battery price of 12% to 14.3% over a recycling cost range of US$2000 to US$5000 per tonne. The policy of the stakeholders' framework presented in the study could be an important aid to achieving high collection and recycling rates for spent portable batteries.
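    The Shapley value used in the analysis averages each player's marginal contribution over all orderings of the coalition. A minimal sketch follows; the three-player characteristic function is invented for illustration and is not the paper's calibrated game:

```python
from itertools import permutations

def shapley(players, v):
    """Exact Shapley values: average each player's marginal contribution
    over every ordering in which the grand coalition can form."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: val / len(perms) for p, val in phi.items()}

# Hypothetical game: manufacturer (M), recycler (R), consumer (C).
value = {frozenset(): 0, frozenset('M'): 2, frozenset('R'): 1, frozenset('C'): 0,
         frozenset('MR'): 6, frozenset('MC'): 3, frozenset('RC'): 2,
         frozenset('MRC'): 9}
print(shapley('MRC', lambda s: value[s]))   # shares sum to v(MRC) = 9
```

    In this toy game the manufacturer's share exceeds the recycler's, echoing the dominance pattern the paper reports for the 'manufacturer and recycler' coalition.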

  12. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
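    The multinomial-weighting trick the abstract describes fits in a dozen lines. The sketch below bootstraps Pearson's correlation with matrix products only (no resampling loop); the data are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(42)
x, y = rng.standard_normal((2, 50))     # sample of size n = 50
n, B = x.size, 10_000

# Multinomial formulation: one row of resampling counts per replication.
W = rng.multinomial(n, np.full(n, 1 / n), size=B)   # B x n weight matrix

# Weighted sample moments via a few matrix products.
sx, sy = W @ x / n, W @ y / n
sxx, syy, sxy = W @ x**2 / n, W @ y**2 / n, W @ (x * y) / n
r = (sxy - sx * sy) / np.sqrt((sxx - sx**2) * (syy - sy**2))

print(r.mean(), np.percentile(r, [2.5, 97.5]))      # bootstrap CI for r
```

    Because every replication reduces to rows of W hitting the same data vectors, the whole bootstrap is a handful of BLAS calls, which is where the reported speedups over loop-based implementations come from.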

  13. EURATOM safeguards efforts in the development of spent fuel verification methods by non-destructive assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matloch, L.; Vaccaro, S.; Couland, M.

    The back end of the nuclear fuel cycle continues to develop. The European Commission, particularly the Nuclear Safeguards Directorate of the Directorate General for Energy, implements Euratom safeguards and needs to adapt to this situation. The verification methods for spent nuclear fuel, which EURATOM inspectors can use, require continuous improvement. Whereas the Euratom on-site laboratories provide accurate verification results for fuel undergoing reprocessing, the situation is different for spent fuel which is destined for final storage. In particular, new needs arise from the increasing number of cask loadings for interim dry storage and the advanced plans for the construction of encapsulation plants and geological repositories. Various scenarios present verification challenges. In this context, EURATOM Safeguards, often in cooperation with other stakeholders, is committed to further improvement of NDA methods for spent fuel verification. In this effort EURATOM plays various roles, ranging from definition of inspection needs to direct participation in development of measurement systems, including support of research in the framework of international agreements and via the EC Support Program to the IAEA. This paper presents recent progress in selected NDA methods. These methods have been conceived to satisfy different spent fuel verification needs, ranging from attribute testing to pin-level partial defect verification. (authors)

  14. A review of the processes and lab-scale techniques for the treatment of spent rechargeable NiMH batteries

    NASA Astrophysics Data System (ADS)

    Innocenzi, Valentina; Ippolito, Nicolò Maria; De Michelis, Ida; Prisciandaro, Marina; Medici, Franco; Vegliò, Francesco

    2017-09-01

    The purpose of this work is to describe and review the current status of recycling technologies for spent NiMH batteries. In the first part of the work, the structure and characterization of NiMH accumulators are introduced, followed by a description of the main scientific studies and industrial processes. Various recycling routes, including physical, pyrometallurgical and hydrometallurgical ones, are discussed. The hydrometallurgical methods for the recovery of base metals and rare earths have mainly been developed at laboratory and pilot scale. The operating industrial methods are pyrometallurgical and are efficient only for the recovery of certain components of spent batteries: fractions rich in nickel and other materials are recovered, while the rare earths are lost in the slag and must be further refined by a hydrometallurgical process. Considering the current legislation regarding the disposal of spent batteries and the preservation of raw materials, laboratory-scale implementations and plant optimization studies should be conducted in order to overcome the industrial problems of scaling up the hydrometallurgical processes.

  15. Management aspects of Gemini's base facility operations project

    NASA Astrophysics Data System (ADS)

    Arriagada, Gustavo; Nitta, Atsuko; Adamson, A. J.; Nunez, Arturo; Serio, Andrew; Cordova, Martin

    2016-08-01

    Gemini's Base Facilities Operations (BFO) Project provided the capabilities to perform routine nighttime operations without anyone on the summit. The expected benefits were to achieve cost savings and to enable the future development of remote operations. The project was executed using a tailored version of the Prince2 project management methodology. It was schedule-driven, and managing it demanded flexibility and creativity to produce what was needed within the constraints present at the time: the time available to implement BFO at Gemini North (GN) was two years; the project had to be done in a matrix-resource environment, with only three resources assigned exclusively to BFO; the implementation of new capabilities had to be done without disrupting operations; and we needed to succeed in introducing the new operational model, in which Telescope and Instrumentation Operators (Science Operations Specialists - SOS) rely on technology to assess summit conditions. To meet schedule we created a large number of concurrent smaller projects called Work Packages (WP). To be reassured that we would successfully implement BFO, we initially spent a good portion of time and effort collecting and learning about users' needs. This was done through close interaction with SOSs, Observers, Engineers and Technicians. Once we had a clear understanding of the requirements, we took the approach of implementing the "bare minimum" necessary technology that would meet them and that would be maintainable in the long term. Another key element was the introduction of the "gradual descent" concept, in which we increasingly provided tools to the SOSs and Observers to prevent them from going outside the control room during nighttime operations, giving them the opportunity to familiarize themselves with the new tools over a time span of several months. Also, by using these tools at an early stage, Engineers and Technicians had more time for debugging, problem fixing, and training in systems usage and servicing.

  16. Direct measurement of 235U in spent fuel rods with Gamma-ray mirrors

    NASA Astrophysics Data System (ADS)

    Ruz, J.; Brejnholt, N. F.; Alameda, J. B.; Decker, T. A.; Descalle, M. A.; Fernandez-Perea, M.; Hill, R. M.; Kisner, R. A.; Melin, A. M.; Patton, B. W.; Soufli, R.; Ziock, K.; Pivovaroff, M. J.

    2015-03-01

    Direct measurement of plutonium and uranium X-rays and gamma-rays is a highly desirable non-destructive analysis method for use in fuel reprocessing environments. The high background and intense radiation from spent fuel make direct measurements difficult to implement, since the relatively low activity of uranium and plutonium is masked by the high activity of fission products. To overcome this problem, we make use of a grazing-incidence optic to selectively reflect the Kα and Kβ fluorescence of Special Nuclear Materials (SNM) into a high-purity position-sensitive germanium detector and obtain their relative ratios.

  17. XDATA

    DTIC Science & Technology

    2017-05-01

    Parallelizing PINT: The main focus of our research into the parallelization of the PINT algorithm has been to find appropriately scalable matrix math algorithms...leading eigenvector of the adjacency matrix of the pairwise affinity graph. We reviewed the matrix math implementation currently being used in PINT and...the new versions support a feature called matrix.distributed, which is some level of support for distributed matrix math; however, our code is not
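
    The fragment above centers on computing the leading eigenvector of a pairwise affinity matrix. As a minimal illustration (this is not the PINT implementation, which the record does not show), power iteration finds that eigenvector using nothing but repeated matrix-vector products, the operation a distributed matrix math backend would parallelize:

```python
import numpy as np

def leading_eigenvector(A, iters=200, tol=1e-10):
    """Power iteration for the leading eigenvector of a symmetric,
    non-negative affinity/adjacency matrix A."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)        # deterministic start vector
    lam = 0.0
    for _ in range(iters):
        w = A @ v                      # one matrix-vector product per step
        lam_new = np.linalg.norm(w)
        v = w / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, v

# toy pairwise affinity graph
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
lam, v = leading_eigenvector(A)
```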

  18. Writer's Workshop: Implementing Units of Study, Findings from a Teacher Study Group, and Student Success in Writing

    ERIC Educational Resources Information Center

    Chaney, Sandra Lynne

    2011-01-01

    Background: An elementary teacher study group supports each other in a year-long journey as they learn how to work through writer's workshop curriculum in order to implement Units of Study by Lucy Calkins at a K-6 school. Time spent in writing instruction has been largely neglected, and a teacher-researcher wants to document the support found from…

  19. Army Status of Recommendations on Officers’ Professional Military Education

    DTIC Science & Technology

    1991-03-21

    give oral reports, and prepare and participate in case studies, exercises, reviews, analyses, and other forms of active learning. Student performance is...officials intend to retain a small-group/active-learning mode of instruction. Senior School: Implemented. Characterization: This school's ratio remains at 3.7...p. 169.) Intermediate School: Implemented. Characterization: The school defines active learning as time spent by students primarily in the classroom

  20. Addicted to discovery: Does the quest for new knowledge hinder practice improvement?

    PubMed

    Perl, Harold I

    2011-06-01

    Despite the billions of dollars spent on health-focused research and the hundreds of billions spent on delivering health services each year, relatively little money and effort are directed toward investigating how best to connect the two. This results in missed opportunities to assure that research findings inform and improve quality across healthcare in general and for addiction prevention and treatment in particular. There is an asymmetrical focus that favors the identification of new interventions and neglects the implementation of science-based knowledge in actual practice. The consequences of that neglect are severe: significantly diminished progress in research on how to implement treatments that could improve the lives of persons with addiction problems, their families, and the rest of society. While the advancement of knowledge regarding effective implementation is lagging, it is clear that existing systemic incentives in the conduct of science inhibit rather than facilitate widespread adoption of evidence-based practices. This commentary proposes three interrelated strategies for improving the implementation process. First, develop scientific tools to understand implementation better, by expanding investigations on the science of implementation and broadening approaches to the design and execution of research. Second, nurture and support a collaborative implementation workforce comprised of scientists and on-the-ground practitioners, with an explicit focus on enhancing appropriate incentives for both. Third, pay closer attention to crafting research that seeks answers that are most relevant to clinicians' actual needs, primarily by ensuring that the anticipated users of the evidence-based practice are full partners in developing the questions right from the start. Published by Elsevier Ltd.

  1. Chemical reactivity testing for the National Spent Nuclear Fuel Program. Revision 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koester, L.W.

    This quality assurance project plan (QAPjP) summarizes requirements used by the Lockheed Martin Energy Systems, Incorporated (LMES) Development Division at Y-12 for conducting chemical reactivity testing of Department of Energy (DOE) owned spent nuclear fuel, sponsored by the National Spent Nuclear Fuel Program (NSNFP). The requirements are based on the NSNFP Statement of Work PRO-007 (Statement of Work for Laboratory Determination of Uranium Hydride Oxidation Reaction Kinetics). This QAPjP will, for the most part, utilize the quality assurance program at Y-12, Y60-101PD, Quality Program Description, and existing implementing procedures in meeting the NSNFP Statement of Work PRO-007 requirements; exceptions will be noted. The project consists of conducting three separate series of related experiments: ''Passivation of Uranium Hydride Powder With Oxygen and Water'', ''Passivation of Uranium Hydride Powder with Surface Characterization'', and ''Electrochemical Measure of Uranium Hydride Corrosion Rate''.

  2. Radioactive Waste Management, its Global Implication on Societies, and Political Impact

    NASA Astrophysics Data System (ADS)

    Matsui, Kazuaki

    2009-05-01

    The reprocessing plant in Rokkasho, Japan was under commissioning at the end of 2008 and will soon start to reprocess about 800 Mt of spent fuel per annum that has been stored at nuclear power plant sites in Japan. Fission products, together with minor actinides separated from uranium and plutonium in the spent fuel, contain almost all of its radioactivity and will be vitrified in a glass matrix that then fills the canisters. The canisters with high-level radioactive waste (HLW) are so hot, both thermally and radiologically, that they have to be cooled for decades before being brought to any destination. Where is the final destination for HLW in Japan, located as it is on the rim of the Pacific Ocean with volcanoes? Although geological formations in Japan are less static and more active than in other parts of the planet, experts concluded after intensive studies and research that there is a wide variety of geological formations, even in Japan, that can host HLW for very long times of more than a million years. An organization to implement the HLW disposal program was then set up and began campaigning for volunteers to accept a survey of geological suitability for HLW disposal. Some local governments wanted to apply but were shut down by local and neighboring governments and residents. This development is not peculiar to Japan but is, generally speaking, more or less common to all programs dealing with radioactive waste. This is why radioactive waste management is no longer a science and technology issue but a socio-political one. That does not mean further R&D on geological disposal is unnecessary, but rather that each of us should face much more sincerely the societal and political issues caused by the development of science and technology. A second topic might be how effective partitioning and transmutation technology may be in reducing the burden of waste disposal and denaturing the waste toxicity. A third might be the proposal of international nuclear fuel centers that supply nuclear fuel to the nuclear power plants in a region and take back spent fuel, which would be reprocessed to recover the useful energy resources of uranium and plutonium. This may also help with the non-proliferation issues raised by world nuclear development beyond the renaissance.

  3. Implementation of an Automated Road Maintenance Machine (ARMM)

    DOT National Transportation Integrated Search

    1999-08-01

    Crack sealing is a hazardous, costly, and labor-intensive operation. In North America, approximately $200 million is spent each year on crack sealing. Prompted by concerns of safety and cost, the University of Texas at Austin, in cooperation with the...

  4. Blockwise conjugate gradient methods for image reconstruction in volumetric CT.

    PubMed

    Qiu, W; Titley-Peloquin, D; Soleimani, M

    2012-11-01

    Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ||b - Ax||_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not widely been used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer standards. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
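
    A minimal sketch of the blockwise idea in Python/scipy (names hypothetical; a random sparse matrix stands in for the CBCT system matrix): the weighting matrix is held as row blocks, and LSQR only ever touches one block at a time through a LinearOperator.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import LinearOperator, lsqr

def blockwise_operator(blocks):
    """Wrap row-blocks A = [A_0; A_1; ...] (scipy sparse) as a
    LinearOperator so LSQR processes one block at a time."""
    m = sum(B.shape[0] for B in blocks)
    n = blocks[0].shape[1]
    offsets = np.cumsum([0] + [B.shape[0] for B in blocks])

    def matvec(x):                      # y = A x, block by block
        y = np.empty(m)
        for B, lo, hi in zip(blocks, offsets[:-1], offsets[1:]):
            y[lo:hi] = B @ x
        return y

    def rmatvec(y):                     # x = A^T y, accumulated over blocks
        x = np.zeros(n)
        for B, lo, hi in zip(blocks, offsets[:-1], offsets[1:]):
            x += B.T @ y[lo:hi]
        return x

    return LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)

# toy system: four row blocks of a random sparse matrix (the row
# interleaving A[i::4] permutes rows, so b is built from the same operator)
rng = np.random.default_rng(0)
A = sparse.random(400, 100, density=0.05, random_state=rng, format="csr")
blocks = [A[i::4] for i in range(4)]
op = blockwise_operator(blocks)
x_true = rng.standard_normal(100)
b = op.matvec(x_true)
x = lsqr(op, b)[0]                      # recovers x_true (full column rank)
```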

  5. Cucheb: A GPU implementation of the filtered Lanczos procedure

    NASA Astrophysics Data System (ADS)

    Aurentz, Jared L.; Kalantzis, Vassilis; Saad, Yousef

    2017-11-01

    This paper describes the software package Cucheb, a GPU implementation of the filtered Lanczos procedure for the solution of large sparse symmetric eigenvalue problems. The filtered Lanczos procedure uses a carefully chosen polynomial spectral transformation to accelerate convergence of the Lanczos method when computing eigenvalues within a desired interval. This method has proven particularly effective for eigenvalue problems that arise in electronic structure calculations and density functional theory. We compare our implementation against an equivalent CPU implementation and show that using the GPU can reduce the computation time by more than a factor of 10. Program Summary Program title: Cucheb Program Files doi:http://dx.doi.org/10.17632/rjr9tzchmh.1 Licensing provisions: MIT Programming language: CUDA C/C++ Nature of problem: Electronic structure calculations require the computation of all eigenvalue-eigenvector pairs of a symmetric matrix that lie inside a user-defined real interval. Solution method: To compute all the eigenvalues within a given interval a polynomial spectral transformation is constructed that maps the desired eigenvalues of the original matrix to the exterior of the spectrum of the transformed matrix. The Lanczos method is then used to compute the desired eigenvectors of the transformed matrix, which are then used to recover the desired eigenvalues of the original matrix. The bulk of the operations are executed in parallel using a graphics processing unit (GPU). Runtime: Variable, depending on the number of eigenvalues sought and the size and sparsity of the matrix. Additional comments: Cucheb is compatible with CUDA Toolkit v7.0 or greater.
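
    A rough sketch of the filtered-Lanczos idea in Python/scipy rather than CUDA (all names are hypothetical; Cucheb's actual filter construction is not reproduced here): a Jackson-damped Chebyshev expansion of the indicator function of the target interval is applied to A as a LinearOperator, and Lanczos (scipy's eigsh) is run on the filtered operator so the wanted eigenpairs surface first.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import LinearOperator, eigsh

def interval_filter(A, lo, hi, lmin, lmax, deg=80):
    """Chebyshev approximation (one common form of Jackson damping) of
    the indicator of [lo, hi], applied to A as a LinearOperator."""
    c, e = (lmax + lmin) / 2, (lmax - lmin) / 2   # map spectrum to [-1, 1]
    a1 = np.arccos(np.clip((lo - c) / e, -1, 1))
    a2 = np.arccos(np.clip((hi - c) / e, -1, 1))
    k = np.arange(1, deg + 1)
    mu = np.empty(deg + 1)
    mu[0] = (a1 - a2) / np.pi                     # expansion coefficients
    mu[1:] = 2 * (np.sin(k * a1) - np.sin(k * a2)) / (np.pi * k)
    alpha = np.pi / (deg + 2)                     # Jackson damping factors
    g = ((deg - k + 2) * np.cos(k * alpha)
         + np.sin(k * alpha) / np.tan(alpha)) / (deg + 2)
    coef = mu.copy()
    coef[1:] *= g

    def matvec(v):
        # three-term Chebyshev recurrence on the scaled operator
        t0 = v
        t1 = (A @ v - c * v) / e
        acc = coef[0] * t0 + coef[1] * t1
        for _ in range(2, deg + 1):
            t0, t1 = t1, 2 * ((A @ t1 - c * t1) / e) - t0
            acc += coef[_] * t1
        return acc

    return LinearOperator(A.shape, matvec=matvec, dtype=np.float64)

# usage: Lanczos on the filtered operator pulls eigenvectors whose
# eigenvalues of A lie in [4, 5] to the top of the spectrum
n = 500
A = sparse.diags(np.linspace(0, 10, n))           # toy spectrum in [0, 10]
F = interval_filter(A, lo=4.0, hi=5.0, lmin=0.0, lmax=10.0)
vals, vecs = eigsh(F, k=20, which="LA")
ritz = np.array([v @ (A @ v) for v in vecs.T])    # Rayleigh quotients on A
print(ritz[(ritz >= 4.0) & (ritz <= 5.0)])        # eigenvalues inside [4, 5]
```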

  6. Implementing a disease management intervention for depression in primary care: a random work sampling study.

    PubMed

    Harpole, Linda H; Stechuchak, Karen M; Saur, Carol D; Steffens, David C; Unützer, Jürgen; Oddone, Eugene

    2003-01-01

    We describe the daily work activities of 13 Depression Clinical Specialists (DCSs) at 7 national sites who served as care managers in an effective multisite randomized trial of a disease management model for depression in primary care. DCSs carried portable random-reminder beepers for a total of 147 consecutive workdays and recorded 4,030 work activities. Patient care activity comprised the largest percentage of the workday, 49.4% (95% confidence interval [CI], 42.0 to 56.7%), followed by research-related activity, 18.3 % (95% CI, 14.7 to 21.9%), administrative work, 17.9% (95% CI, 12.2 to 23.7%), personal time, 9.4% (95% CI, 5.4 to 13.4%), and time in transit, 5.1% (95% CI, 2.8 to 7.4%). The DCSs delivered 19.2% (95% CI, 14.4 to 24.1%) of direct patient care by telephone. The DCSs spent a significant portion of the day alone 48.7% (95% CI, 43.3 to 54.1%), followed by time spent with patients, 37.5% (95% CI, 31.6 to 43.3%). Less than 10% (7.8%) (95% CI, 5.1 to 10.6%) of their time was spent with local study staff. Less than 4% of their time was spent with other health care providers. Our results demonstrate that the DCSs' time was primarily devoted to clinical care, a significant portion of which was delivered by telephone. They functioned independently, making efficient use of the limited amount of time that they interacted with other health care providers. This information will be helpful to those who may wish to implement this disease management strategy.

  7. A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction

    DOE PAGES

    Kumar, B.; Huang, C. -H.; Sadayappan, P.; ...

    1995-01-01

    In this article, we present a program generation strategy of Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier Transforms and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated to high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
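
    For readers unfamiliar with the algorithm being formulated, a plain recursive Strassen multiply is sketched below in Python (this is the textbook recursion, not the paper's non-recursive tensor-product formulation): seven recursive products replace the eight of the classical block algorithm.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Recursive Strassen multiply for 2^n x 2^n matrices (sketch)."""
    n = A.shape[0]
    if n <= cutoff:                      # fall back to BLAS on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128); B = np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)
```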

  8. Processing and microstructural characterisation of a UO2-based ceramic for disposal studies on spent AGR fuel

    NASA Astrophysics Data System (ADS)

    Hiezl, Z.; Hambley, D. I.; Padovani, C.; Lee, W. E.

    2015-01-01

    Preparation and characterisation of a Simulated Spent Nuclear Fuel (SIMFuel), which replicates the chemical state and microstructure of Spent Nuclear Fuel (SNF) discharged from a UK Advanced Gas-cooled Reactor (AGR) after a cooling time of 100 years, is described. Given the relatively small differences in radionuclide inventory expected over longer time periods, the SIMFuel studied in this work is expected to be representative of spent fuel after significantly longer periods (e.g. 1000 years) as well. Thirteen stable elements were added to depleted UO2 and sintered to simulate the composition of fuel pellets after burn-ups of 25 and 43 GWd/tU; as a reference, pure UO2 pellets were also investigated. The fission product distribution was calculated using the FISPIN code provided by the UK National Nuclear Laboratory. SIMFuel pellets were up to 92% dense, and during sintering in an H2 atmosphere, Mo-Ru-Rh-Pd metallic precipitates and a grey phase ((Ba,Sr)(Zr,RE)O3 oxide precipitates) formed within the UO2 matrix. These secondary phases are present in real PWR and AGR SNF. The metallic precipitates are generally spherical with submicron particle size (0.8 ± 0.7 μm). Spherical oxide precipitates in SIMFuel measured up to 30 μm in diameter, but no data were available in the public domain to compare this to AGR SNF. The grain size of actual AGR SNF (∼3-30 μm) is larger than that measured in AGR SIMFuel (∼2-5 μm).

  9. In-Field Performance Testing of the Fork Detector for Quantitative Spent Fuel Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauld, Ian C.; Hu, Jianwei; De Baere, P.

    Expanding spent fuel dry storage activities worldwide are increasing demands on safeguards authorities that perform inspections. The European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) require measurements to verify declarations when spent fuel is transferred to difficult-to-access locations, such as dry storage casks and the repositories planned in Finland and Sweden. EURATOM makes routine use of the Fork detector to obtain gross gamma and total neutron measurements during spent fuel inspections. Data analysis is performed by modules in the integrated Review and Analysis Program (iRAP) software, developed jointly by EURATOM and the IAEA. Under the framework of the US Department of Energy–EURATOM cooperation agreement, a module for automated Fork detector data analysis has been developed by Oak Ridge National Laboratory (ORNL) using the ORIGEN code from the SCALE code system and implemented in iRAP. EURATOM and ORNL recently performed measurements on 30 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel (Clab), operated by the Swedish Nuclear Fuel and Waste Management Company (SKB). The measured assemblies represent a broad range of fuel characteristics. Neutron count rates for 15 measured pressurized water reactor assemblies are predicted with an average relative standard deviation of 4.6%, and gamma signals are predicted on average within 2.6% of the measurement. The 15 measured boiling water reactor assemblies exhibit slightly larger deviations of 5.2% for the gamma signals and 5.7% for the neutron count rates, compared to measurements. These findings suggest that with improved analysis of the measurement data, existing instruments can provide increased verification of operator declarations of the spent fuel and thereby also provide greater ability to confirm integrity of an assembly. These results support the application of the Fork detector as a fully quantitative spent fuel verification technique.

  10. Theory and implementation of H-matrix based iterative and direct solvers for Helmholtz and elastodynamic oscillatory kernels

    NASA Astrophysics Data System (ADS)

    Chaillat, Stéphanie; Desiderio, Luca; Ciarlet, Patrick

    2017-12-01

    In this work, we study the accuracy and efficiency of hierarchical matrix (H-matrix) based fast methods for solving dense linear systems arising from the discretization of the 3D elastodynamic Green's tensors. It is well known in the literature that standard H-matrix based methods, although very efficient tools for asymptotically smooth kernels, are not optimal for oscillatory kernels. H2-matrix and directional approaches have been proposed to overcome this problem. However the implementation of such methods is much more involved than the standard H-matrix representation. The central questions we address are twofold. (i) What is the frequency-range in which the H-matrix format is an efficient representation for 3D elastodynamic problems? (ii) What can be expected of such an approach to model problems in mechanical engineering? We show that even though the method is not optimal (in the sense that more involved representations can lead to faster algorithms) an efficient solver can be easily developed. The capabilities of the method are illustrated on numerical examples using the Boundary Element Method.
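
    The core H-matrix ingredient, low-rank approximation of admissible (well-separated) blocks, can be illustrated in a few lines of Python (a toy scalar oscillatory kernel stands in for the elastodynamic Green's tensor). The numerical rank of an off-diagonal block grows with the wavenumber, which is exactly why the standard H-matrix format degrades for oscillatory kernels:

```python
import numpy as np

def lowrank_block(K, tol=1e-6):
    """Truncated-SVD compression of a matrix block, the basic building
    block of an H-matrix admissible-block approximation."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    r = int(np.searchsorted(-s, -tol * s[0]))   # rank at relative tolerance
    return U[:, :r] * s[:r], Vt[:r], r

# two well-separated point clusters; kernel exp(i k r)/r (a scalar toy
# form of an oscillatory kernel; k = 0 gives a Laplace-like smooth kernel)
rng = np.random.default_rng(1)
X = rng.random((200, 3))
Y = rng.random((200, 3)) + np.array([5.0, 0.0, 0.0])
R = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
for k in (0.0, 20.0):
    K = np.exp(1j * k * R) / R
    _, _, r = lowrank_block(K, tol=1e-6)
    print(f"wavenumber {k}: numerical rank {r} of a 200x200 block")
```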

  11. Report: Improvements Needed to Ensure Grant Funds for U.S.-Mexico Border Water Infrastructure Program Are Spent More Timely

    EPA Pesticide Factsheets

    Report #08-P-0121, March 31, 2008. From 2005 to 2007, EPA took actions to implement timeframes for Border Program projects, reduce the scope of projects, and reduce unliquidated obligations of projects.

  12. Matrix management in hospitals: testing theories of matrix structure and development.

    PubMed

    Burns, L R

    1989-09-01

    A study of 315 hospitals with matrix management programs was used to test several hypotheses concerning matrix management advanced by earlier theorists. The study verifies that matrix management involves several distinctive elements that can be scaled to form increasingly complex types of lateral coordinative devices. The scalability of these elements is evident only cross-sectionally. The results show that matrix complexity is not an outcome of program age, nor does matrix complexity at the time of implementation appear to influence program survival. Matrix complexity, finally, is not determined by the organization's task diversity and uncertainty. The results suggest several modifications in prevailing theories of matrix organization.

  13. Administrative work consumes one-sixth of U.S. physicians' working hours and lowers their career satisfaction.

    PubMed

    Woolhandler, Steffie; Himmelstein, David U

    2014-01-01

    Doctors often complain about the burden of administrative work, but few studies have quantified how much time clinicians devote to administrative tasks. We quantified the time U.S. physicians spent on administrative tasks, and its relationship to their career satisfaction, based on a nationally representative survey of 4,720 U.S. physicians working 20 or more hours per week in direct patient care. The average doctor spent 8.7 hours per week (16.6% of working hours) on administration. Psychiatrists spent the highest proportion of their time on administration (20.3%), followed by internists (17.3%) and family/general practitioners (17.3%). Pediatricians spent the least amount of time, 6.7 hours per week or 14.1 percent of professional time. Doctors in large practices, those in practices owned by a hospital, and those with financial incentives to reduce services spent more time on administration. More extensive use of electronic medical records was associated with a greater administrative burden. Doctors spending more time on administration had lower career satisfaction, even after controlling for income and other factors. Current trends in U.S. health policy--a shift to employment in large practices, the implementation of electronic medical records, and the increasing prevalence of financial risk sharing--are likely to increase doctors' paperwork burdens and may decrease their career satisfaction.

  14. Spent lead-acid battery recycling in China - A review and sustainable analyses on mass flow of lead.

    PubMed

    Sun, Zhi; Cao, Hongbin; Zhang, Xihua; Lin, Xiao; Zheng, Wenwen; Cao, Guoqing; Sun, Yong; Zhang, Yi

    2017-06-01

    Lead is classified as one of the top heavy-metal pollutants in China. The corresponding environmental issues, especially in the management of spent lead-acid batteries, have already caused significant public awareness and concern. This research gives a brief overview of the recycling situation based on an investigation of the lead industry in China and of the development of technologies for spent lead-acid batteries. The main principles and research focuses of different technologies, including pyrometallurgy, hydrometallurgy and greener technologies, are summarized and compared. Subsequently, the circulability of lead is calculated based on entire life-cycle analyses of the lead-acid battery. By considering different recycling schemes, the recycling situation of spent lead-acid batteries in China can be understood semi-quantitatively. According to this research, if proper management of spent lead-acid batteries is implemented under the current lead industry situation in China, 30% of primary lead production could be shut down while lead production still ensures consecutive life-cycle operation of lead-acid batteries. This research provides a methodology from the viewpoint of lead circulability over the whole life cycle of a specific product and aims to contribute more quantitative guidelines for the efficient organization of the lead industry in China. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Effects of guest feeding programs on captive giraffe behavior.

    PubMed

    Orban, David A; Siegford, Janice M; Snider, Richard J

    2016-01-01

    Zoological institutions develop human-animal interaction opportunities for visitors to advance missions of conservation, education, and recreation; however, the animal welfare implications largely have yet to be evaluated. This behavioral study was the first to quantify impacts of guest feeding programs on captive giraffe behavior and welfare, by documenting giraffe time budgets that included both normal and stereotypic behaviors. Thirty giraffes from nine zoos (six zoos with varying guest feeding programs and three without) were observed using both instantaneous scan sampling and continuous behavioral sampling techniques. All data were collected during summer 2012 and analyzed using linear mixed models. The degree of individual giraffe participation in guest feeding programs was positively associated with increased time spent idle and marginally associated with reduced time spent ruminating. Time spent participating in guest feeding programs had no effect on performance of stereotypic behaviors. When time spent eating routine diets was combined with time spent participating in guest feeding programs, individuals that spent more time engaged in total feeding behaviors tended to perform less oral stereotypic behavior such as object-licking and tongue-rolling. By extending foraging time and complexity, guest feeding programs have the potential to act as environmental enrichment and alleviate unfulfilled foraging motivations that may underlie oral stereotypic behaviors observed in many captive giraffes. However, management strategies may need to be adjusted to mitigate idleness and other program consequences. Further studies, especially pre-and-post-program implementation comparisons, are needed to better understand the influence of human-animal interactions on zoo animal behavior and welfare. © 2016 Wiley Periodicals, Inc.

  16. An Analysis of the Use of Social Software and Its Impact on Organizational Processes

    NASA Astrophysics Data System (ADS)

    Pascual-Miguel, Félix; Chaparro-Peláez, Julián; Hernández-García, Ángel

    This article proposes a study of the implementation rate of the most relevant 2.0 tools and technologies in Spanish enterprises, and of their impact on 12 important aspects of business processes. In order to characterize the degree of implementation and the perceived improvements in the processes, two indexes, the Implementation Index and the Impact Rate, have been created and displayed in a matrix called the "2.0 Success Matrix". Data have been analyzed from a survey of directors and executives of large companies and small and medium businesses.

  17. An Efficient Scheme for Updating Sparse Cholesky Factors

    NASA Technical Reports Server (NTRS)

    Raghavan, Padma

    2002-01-01

    Raghavan had earlier developed the software package DSCPACK, which can be used for solving sparse linear systems where the coefficient matrix is symmetric and positive definite (this project was not funded by NASA but by agencies such as NSF). DSCPACK-S is the serial code and DSCPACK-P is a parallel implementation suitable for multiprocessors or networks of workstations with message passing using MPI. The main algorithm used is the Cholesky factorization of a sparse symmetric positive definite matrix A = LL(T). The code can also compute the factorization A = LDL(T). The complexity of the software arises from several factors relating to the sparsity of the matrix A. A sparse N x N matrix A typically has fewer than cN nonzeroes, where c is a small constant; if the matrix were dense, it would have O(N^2) nonzeroes. The most complicated part of such sparse Cholesky factorization relates to fill-in, i.e., zeroes in the original matrix that become nonzeroes in the factor L. An efficient implementation depends to a large extent on complex data structures and on techniques from graph theory to reduce, identify, and manage fill. DSCPACK is based on an efficient multifrontal implementation with fill-managing algorithms and implementation arising from earlier research by Raghavan and others. Sparse Cholesky factorization is typically a four-step process: (1) ordering to compute a fill-reducing numbering, (2) symbolic factorization to determine the nonzero structure of L, (3) numeric factorization to compute L, and (4) triangular solution to solve Ly = b and L(T)x = y. The first two steps are symbolic and are performed using the graph of the matrix. The numeric factorization step is of dominant cost, and there are several schemes for improving performance by exploiting the nested and dense structure of groups of columns in the factor. The latter are aimed at better utilization of the cache-memory hierarchy on modern processors to prevent cache misses and provide execution rates (operations/second) that are close to the peak rates for dense matrix computations. Currently, DSCPACK is being used in an application at NASA directed by J. Newman and M. James. We propose the implementation of efficient schemes for updating the LL(T) or LDL(T) factors computed in DSCPACK-S to meet the computational requirements of their project. A brief description is provided in the next section.
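
    The four-step process described above can be mimicked with scipy (a sketch only: scipy has no sparse Cholesky, so SuperLU with a pinned column order stands in for the LL(T) factorization, and RCM stands in for DSCPACK's fill-reducing orderings):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu
from scipy.sparse.csgraph import reverse_cuthill_mckee

# random symmetric, diagonally dominant (hence SPD) test matrix
rng = np.random.default_rng(0)
S = sparse.random(500, 500, density=0.01, random_state=rng)
A = (S + S.T).tolil()
A.setdiag(12.0)                        # strictly dominant diagonal
A = A.tocsr()
b = np.ones(500)

# step 1: fill-reducing ordering
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
Ap = A[perm][:, perm].tocsc()

# steps 2-4 via SuperLU, keeping our column order ("NATURAL") so the
# effect of the ordering on fill-in is visible in nnz(L)
for name, M in [("natural order", A.tocsc()), ("RCM order", Ap)]:
    lu = splu(M, permc_spec="NATURAL")
    print(f"{name}: nnz(A) = {M.nnz}, nnz(L) = {lu.L.nnz}  (fill-in)")

x = splu(Ap, permc_spec="NATURAL").solve(b[perm])  # triangular solves inside
```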

  18. Delirium monitoring and patient outcomes in a general intensive care unit.

    PubMed

    Andrews, Lois; Silva, Susan G; Kaplan, Susan; Zimbro, Kathie

    2015-01-01

    Use of an evidence-based tool for routine assessment for delirium by bedside nurses in the intensive care unit is recommended. However, little is known about patient outcomes after implementation of such a tool. To evaluate the implementation and effects of the Confusion Assessment Method for the Intensive Care Unit as a bedside assessment for delirium in a general intensive care unit in a tertiary care hospital. Charts of patients admitted to the unit during a 3-month period before implementation of the assessment tool and 1 year after implementation were reviewed retrospectively. Patient outcomes were incidence of delirium diagnosis, duration of mechanical ventilation, length of stay in the intensive care unit, and time spent in restraints. The 2 groups of patients did not differ in demographics, clinical characteristics, or predisposing factors. The groups also did not differ significantly in delirium diagnosis, duration of mechanical ventilation, length of stay in the intensive care unit, or time spent in restraints. Barriers to use of the tool included nurses' lack of confidence in performing the assessment, concerns about use of the tool in patients receiving mechanical ventilation, and lack of interdisciplinary response to findings obtained with the tool. No change in patient outcomes or diagnosis of delirium occurred 1 year after implementation of the Confusion Assessment Method for the Intensive Care Unit. Lessons learned and barriers to adoption and use, however, were identified. ©2015 American Association of Critical-Care Nurses.

  19. Direct disposal of spent fuel: developing solutions tailored to Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawamura, Hideki; McKinley, Ian G

    2013-07-01

    With the past Government policy of 100% reprocessing in Japan now open to discussion, options for direct disposal of spent fuel (SF) are being considered in Japan. The need to move rapidly ahead in developing spent fuel management concepts is closely related to the ongoing debate on the future of nuclear power in Japan and the desire to understand the true costs of the entire life cycle of different options. Different scenarios for future nuclear power - and associated decisions on the extent of reprocessing - will give rise to quite different inventories of SF with different disposal challenges. Although much work has been carried out on spent fuel disposal within other national programmes, the potential for mining the international knowledge base is limited by the boundary conditions for disposal in Japan. Indeed, with a volunteer approach to siting, no major salt deposits and few undisturbed sediments, high tectonic activity, relatively corrosive groundwater and no deserts, it is evident that a tailored solution is needed. Nevertheless, valuable lessons can be learned from projects carried out worldwide, if focus is placed on basic principles rather than implementation details. (authors)

  20. Measuring the Daily Activity of Lying Down, Sitting, Standing and Stepping of Obese Children Using the ActivPALTM Activity Monitor.

    PubMed

    Wafa, Sharifah Wajihah; Aziz, Nur Nadzirah; Shahril, Mohd Razif; Halib, Hasmiza; Rahim, Marhasiyah; Janssen, Xanne

    2017-04-01

    This study describes the patterns of objectively measured sitting, standing and stepping in obese children using the activPALTM and highlights possible differences in sedentary levels and patterns during weekdays and weekends. Sixty-five obese children, aged 9-11 years, were recruited from primary schools in Terengganu, Malaysia. Sitting, standing and stepping were objectively measured using an activPALTM accelerometer over a period of 4-7 days. Obese children spent an average of 69.6% of their day sitting/lying, 19.1% standing and 11.3% stepping. Weekdays and weekends differed significantly in total time spent sitting/lying, standing, stepping, step count, number of sedentary bouts and length of sedentary bouts (p < 0.05, respectively). Obese children spent a large proportion of their time sedentarily, and they spent more time sedentarily during weekends compared with weekdays. This study on sedentary behaviour patterns presents valuable information for designing and implementing strategies to decrease sedentary time among obese children, particularly during weekends. © The Author [2016]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. 10 CFR 51.61 - Environmental report-independent spent fuel storage installation (ISFSI) or monitored retrievable...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... NUCLEAR REGULATORY COMMISSION (CONTINUED) ENVIRONMENTAL PROTECTION REGULATIONS FOR DOMESTIC LICENSING AND RELATED REGULATORY FUNCTIONS National Environmental Policy Act-Regulations Implementing Section 102(2... Control Desk, Director, Office of Nuclear Material Safety and Safeguards, a separate document entitled...

  2. Participants' evaluation of a group-based organisational assessment tool in Danish general practice: the Maturity Matrix.

    PubMed

    Buch, Martin Sandberg; Edwards, Adrian; Eriksson, Tina

    2009-01-01

    The Maturity Matrix is a group-based formative self-evaluation tool aimed at assessing the degree of organisational development in general practice and providing a starting point for local quality improvement. Earlier studies of the Maturity Matrix have shown that participants find the method a useful way of assessing their practice's organisational development. However, little is known about participants' views on the resulting efforts to implement intended changes. Aim: to explore users' perspectives on the Maturity Matrix method, the facilitation process, and drivers and barriers for implementation of intended changes. Methods: observation of two facilitated practice meetings, 17 semi-structured interviews with participating general practitioners (GPs) or their staff, and mapping of reasons for continuing or quitting the project. Setting: general practices in Denmark. Main outcomes: successful change was associated with a clearly identified anchor person within the practice, a shared and regular meeting structure, and an external facilitator who provides support and counselling during the implementation process. Failure to implement change was associated with a high patient-related workload, staff or GP turnover (which seemed to affect small practices more), no clearly identified anchor person or anchor persons who did not do anything, no continuous support from an external facilitator, and no formal commitment to working with agreed changes. Future attempts to improve the impact of the Maturity Matrix, and of similar tools for quality improvement, could include: (a) attention to matters of variation caused by practice size, (b) systematic counselling on barriers to implementation and support to structure the change processes, (c) a commitment from participants that goes beyond participation in two-yearly assessments, and (d) an anchor person for each identified goal who takes on the responsibility for improvement in practice.

  3. FPGA implementation of sparse matrix algorithm for information retrieval

    NASA Astrophysics Data System (ADS)

    Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio

    2005-06-01

    Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing can accommodate frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned to the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage space requirements and query-processing time than the other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining more attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves substantial efficiency gains over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target the Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
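
    The core of the approach, query processing as a sparse matrix-vector product over a CSR term-document matrix, is easy to sketch in software (Python/scipy here; the paper's contribution is the FPGA parallelization of this same kernel):

```python
import numpy as np
from scipy.sparse import csr_matrix

# toy term-document matrix in CSR: rows = documents, cols = vocabulary terms
docs = ["spent fuel storage", "sparse matrix retrieval",
        "sparse matrix vector multiplication", "fuel matrix"]
vocab = {w: i for i, w in
         enumerate(sorted({w for d in docs for w in d.split()}))}
rows, cols, vals = [], [], []
for r, d in enumerate(docs):
    for w in d.split():
        rows.append(r); cols.append(vocab[w]); vals.append(1.0)
D = csr_matrix((vals, (rows, cols)), shape=(len(docs), len(vocab)))

# query processing = one sparse matrix-vector product; scores rank
# documents by number of matching query terms
q = np.zeros(len(vocab))
q[vocab["sparse"]] = 1
q[vocab["matrix"]] = 1
scores = D @ q
print(sorted(zip(scores, docs), reverse=True))
```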

  4. Automated flow quantification in valvular heart disease based on backscattered Doppler power analysis: implementation on matrix-array ultrasound imaging systems.

    PubMed

    Buck, Thomas; Hwang, Shawn M; Plicht, Björn; Mucci, Ronald A; Hunold, Peter; Erbel, Raimund; Levine, Robert A

    2008-06-01

    Cardiac ultrasound imaging systems are limited in the noninvasive quantification of valvular regurgitation due to indirect measurements and inaccurate hemodynamic assumptions. We recently demonstrated that the principle of integrating backscattered acoustic Doppler power times velocity can be used for flow quantification in valvular regurgitation directly at the vena contracta of a regurgitant flow jet. We now aimed to implement automated Doppler power flow analysis software on a standard cardiac ultrasound system utilizing novel matrix-array transducer technology, with a detailed description of the system requirements, components and software contributing to the system. This system, based on a 3.5 MHz matrix-array cardiac ultrasound scanner (Sonos 5500, Philips Medical Systems), was validated by means of comprehensive experimental signal generator trials, in vitro flow phantom trials and in vivo testing in 48 patients with mitral regurgitation of different severity and etiology, using magnetic resonance imaging (MRI) for reference. All measurements displayed good correlation to the reference values, indicating successful implementation of automated Doppler power flow analysis on a matrix-array ultrasound imaging system. Systematic underestimation of effective regurgitant orifice areas >0.65 cm(2) and volumes >40 ml was found due to the currently limited Doppler beam width, which could be readily overcome by the use of new-generation 2D matrix-array technology. Automated flow quantification in valvular heart disease based on backscattered Doppler power can thus be fully implemented on board routinely used matrix-array ultrasound imaging systems. Such automated analysis of valvular regurgitant flow, performed directly, noninvasively, and user-independently, overcomes the practical limitations of current techniques.

  5. Method for treating materials for solidification

    DOEpatents

    Jantzen, Carol M.; Pickett, John B.; Martin, Hollis L.

    1995-01-01

    A method for treating materials such as wastes for solidification to form a solid, substantially nonleachable product. Addition of reactive silica rather than ordinary silica to the material when bringing the initial molar ratio of its silica constituent to a desired ratio within a preselected range increases the solubility and retention of the materials in the solidified matrix. Materials include hazardous, radioactive, mixed, and heavy metal species. Amounts of other constituents of the material, in addition to its silica content are also added so that the molar ratio of each of these constituents is within the preselected ranges for the final solidified product. The mixture is then solidified by cement solidification or vitrification. The method can be used to treat a variety of wastes, including but not limited to spent filter aids from waste water treatment, waste sludges, combinations of spent filter aids and waste sludges, combinations of supernate and waste sludges, incinerator ash, incinerator offgas blowdown, combinations of incinerator ash and offgas blowdown, cementitious wastes and contaminated soils.

  6. An efficient matrix-matrix multiplication based antisymmetric tensor contraction engine for general order coupled cluster.

    PubMed

    Hanrath, Michael; Engels-Putzka, Anna

    2010-08-14

    In this paper, we present an efficient implementation of general tensor contractions, which is part of a new coupled-cluster program. The tensor contractions used to evaluate the residuals in each coupled-cluster iteration are particularly important for the performance of the program. We developed a generic procedure which carries out contractions of two tensors irrespective of their explicit structure. It can handle coupled-cluster-type expressions of arbitrary excitation level. To make the contraction efficient without losing flexibility, we use a three-step procedure. First, the data contained in the tensors are rearranged into matrices, then a matrix-matrix multiplication is performed, and finally the result is transformed back into a tensor. The current implementation is significantly more efficient than previous ones capable of treating arbitrarily high excitations.
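
    The three-step contraction procedure described above (often called transpose-transpose-GEMM-transpose) can be sketched generically with numpy; the function below is illustrative, not the authors' coupled-cluster engine:

```python
import numpy as np

def contract_ttgt(T1, T2, axes1, axes2):
    """Contract two tensors by the three-step scheme: (1) permute/reshape
    each tensor into a matrix, (2) one matrix-matrix multiplication,
    (3) reshape the result back into a tensor."""
    free1 = [i for i in range(T1.ndim) if i not in axes1]
    free2 = [i for i in range(T2.ndim) if i not in axes2]
    m = int(np.prod([T1.shape[i] for i in free1]))
    k = int(np.prod([T1.shape[i] for i in axes1]))
    n = int(np.prod([T2.shape[i] for i in free2]))
    M1 = T1.transpose(free1 + list(axes1)).reshape(m, k)   # step 1
    M2 = T2.transpose(list(axes2) + free2).reshape(k, n)
    C = M1 @ M2                                            # step 2 (GEMM)
    return C.reshape([T1.shape[i] for i in free1] +
                     [T2.shape[i] for i in free2])         # step 3

# e.g. a CCSD-like contraction t[a,b,i,j] = sum_{c,d} W[a,b,c,d] t2[c,d,i,j]
W = np.random.rand(4, 4, 4, 4)
t2 = np.random.rand(4, 4, 6, 6)
out = contract_ttgt(W, t2, axes1=(2, 3), axes2=(0, 1))
assert np.allclose(out, np.einsum("abcd,cdij->abij", W, t2))
```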

  7. Effective implementation of wavelet Galerkin method

    NASA Astrophysics Data System (ADS)

    Finěk, Václav; Šimunková, Martina

    2012-11-01

    It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets, and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time-consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we introduce our implementation of the wavelet Galerkin method for the Poisson equation -Δu = f on a hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identified the nonzero elements of the stiffness matrix corresponding to the above problem, and we perform the matrix-vector multiplication only with these nonzero elements.

  8. Finite and spectral cell method for wave propagation in heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Joulaian, Meysam; Duczek, Sascha; Gabbert, Ulrich; Düster, Alexander

    2014-09-01

    In the current paper we present a fast, reliable technique for simulating wave propagation in complex structures made of heterogeneous materials. The proposed approach, the spectral cell method, is a combination of the finite cell method and the spectral element method that significantly lowers preprocessing and computational expenditure. The spectral cell method takes advantage of explicit time-integration schemes coupled with a diagonal mass matrix to reduce the time spent on solving the equation system. By employing a fictitious domain approach, this method also helps to eliminate some of the difficulties associated with mesh generation. Besides introducing a proper, specific mass lumping technique, we also study the performance of the low-order and high-order versions of this approach based on several numerical examples. Our results show that the high-order version of the spectral cell method requires less memory storage and less CPU time than the other possible versions when combined with explicit time-integration algorithms. Moreover, as the implementation of the proposed method in available finite element programs is straightforward, these properties turn the method into a viable tool for practical applications such as structural health monitoring [1-3], quantitative ultrasound applications [4], or the active control of vibrations and noise [5, 6].
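
    The key performance ingredient named above, an explicit time integrator combined with a diagonal (lumped) mass matrix so that no linear system is solved per step, looks as follows in a one-dimensional toy sketch (Python; names and discretization are illustrative, not the paper's formulation):

```python
import numpy as np
from scipy import sparse

def explicit_wave(K, m_diag, f, u0, v0, dt, steps):
    """Central-difference stepping for M u'' + K u = f with diagonal M:
    inverting the mass matrix is an elementwise division."""
    u_prev = u0 - dt * v0                      # fictitious previous step
    u = u0.copy()
    inv_m = 1.0 / m_diag
    for _ in range(steps):
        acc = inv_m * (f - K @ u)              # M^{-1} applied pointwise
        u_next = 2 * u - u_prev + dt**2 * acc
        u_prev, u = u, u_next
    return u

# toy 1D bar: standard stiffness, row-sum lumped mass
n = 100
K = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr() * n
m_diag = np.full(n, 1.0 / n)
u0 = np.exp(-200 * (np.linspace(0, 1, n) - 0.5) ** 2)   # Gaussian pulse
u = explicit_wave(K, m_diag, np.zeros(n), u0, np.zeros(n),
                  dt=1e-4, steps=500)
```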

  9. Implementation of a fast digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    NASA Technical Reports Server (NTRS)

    Habiby, Sarry F.; Collins, Stuart A., Jr.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.

  10. Implementation of a fast digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic.

    PubMed

    Habiby, S F; Collins, S A

    1987-11-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. A Hughes liquid crystal light valve, the residue arithmetic representation, and a holographic optical memory are used to construct position coded optical look-up tables. All operations are performed in effectively one light valve response time with a potential for a high information density.

  11. Mathematical investigation of one-way transform matrix options.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, James Arlin

    2006-01-01

    One-way transforms have been used in weapon systems processors since the mid- to late-1970s in order to help recognize insertion of correct pre-arm information while maintaining abnormal-environment safety. Level-One, Level-Two, and Level-Three transforms have been designed. The Level-One and Level-Two transforms have been implemented in weapon systems, and both of these transforms are equivalent to matrix multiplication applied to the inserted information. The Level-Two transform, utilizing a 6 x 6 matrix, provided the basis for the ''System 2'' interface definition for Unique-Signal digital communication between aircraft and attached weapons. The investigation described in this report was carried out to find out if there were other size matrices that would be equivalent to the 6 x 6 Level-Two matrix. One reason for the investigation was to find out whether or not other dimensions were possible, and if so, to derive implementation options. Another important reason was to more fully explore the potential for inadvertent inversion. The results were that additional implementation methods were discovered, but no inversion weaknesses were revealed.

  12. Implementation of a digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    NASA Technical Reports Server (NTRS)

    Habiby, Sarry F.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
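
    The residue arithmetic representation used in these designs is easy to demonstrate in software (a Python sketch; the optical system implements the same digit-wise tables holographically): each operand becomes a tuple of residues modulo pairwise coprime bases, multiplication acts independently on each residue digit, and the Chinese remainder theorem maps the result back.

```python
from math import prod

BASES = (5, 7, 8, 9)                  # pairwise coprime, dynamic range 2520

def to_rns(x):
    """Encode an integer as its residues modulo each base."""
    return tuple(x % m for m in BASES)

def mul_rns(a, b):
    """Digit-wise multiplication: each residue channel is independent,
    which is what makes per-digit look-up tables possible."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, BASES))

def from_rns(r):
    """Chinese remainder reconstruction back to an ordinary integer."""
    M = prod(BASES)
    x = 0
    for ri, m in zip(r, BASES):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return x % M

a, b = 34, 61
assert from_rns(mul_rns(to_rns(a), to_rns(b))) == (a * b) % prod(BASES)
```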

  13. Branch Campus Leadership: Like Running a Three-Ring Circus?

    ERIC Educational Resources Information Center

    Gillie Gossom, J.; Deckert Pelton, M.

    2011-01-01

    Members of National Association of Branch Campus Administrators (NABCA) have spent three years crafting a survey instrument for assessing the leadership abilities and skills of branch administrators. In pursuit of the goal to investigate four leadership dimensions: diagnosing, implementing, visioning, and entrepreneurial, a pilot survey was…

  14. Identifying Trustworthiness Deficit in Legacy Systems Using the NFR Approach

    DTIC Science & Technology

    2014-01-01

    trustworthy environment. These adaptations can be stated in terms of design modifications and/or implementation mechanisms (for example, wrappers) that will...extensions to the VHSIC Hardware Description Language (VHDL-AMS). He has spent the last 10 years leading research in high-performance embedded computing

  15. 29 CFR 2204.107 - Allowable fees and expenses.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH REVIEW COMMISSION IMPLEMENTATION OF THE EQUAL ACCESS TO JUSTICE ACT IN PROCEEDINGS BEFORE THE OCCUPATIONAL SAFETY AND HEALTH REVIEW... of the applicant; (4) The time reasonably spent in light of the difficulty or complexity of the...

  16. An A-76 Survival Guide

    DTIC Science & Technology

    2002-04-09

    THE SAVINGS BUGABOO: ...fail to capture some important costs, particularly initial investment costs to conduct the competition and implement the contract or MEO (e.g. ...would be well spent, as current employees are aware that the government continues to look at outsourcing more functions.

  17. Implementation of hierarchical clustering using k-mer sparse matrix to analyze MERS-CoV genetic relationship

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Ulul, E. D.; Hura, H. F. A.; Siswantining, T.

    2017-07-01

    Hierarchical clustering is one of the effective methods for creating a phylogenetic tree based on the distance matrix between DNA (deoxyribonucleic acid) sequences. One of the well-known methods to calculate the distance matrix is the k-mer method, which is generally more efficient than other distance matrix calculation techniques. The k-mer method starts by creating a k-mer sparse matrix, followed by creating k-mer singular value vectors; the last step is computing the distances among the vectors. In this paper, we analyze MERS-CoV (Middle East Respiratory Syndrome Coronavirus) DNA sequences by implementing hierarchical clustering using a k-mer sparse matrix in order to perform phylogenetic analysis. Our results show that the ancestor of the analyzed MERS-CoV strains comes from Egypt. Moreover, we found that a MERS-CoV infection occurring in one country does not necessarily originate from that same country. This suggests that the process of MERS-CoV mutation might not be influenced by geographical factors alone.
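
    A compact sketch of the pipeline described above, using Python/scipy with toy sequences in place of MERS-CoV genomes (the distance step here is plain Euclidean distance on k-mer counts rather than the paper's singular-value variant):

```python
import numpy as np
from itertools import product
from scipy.sparse import csr_matrix
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

def kmer_matrix(seqs, k=3):
    """Sparse matrix of k-mer counts: one row per sequence, one column
    per possible DNA k-mer."""
    kmers = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    rows, cols, vals = [], [], []
    for r, s in enumerate(seqs):
        counts = {}
        for j in range(len(s) - k + 1):
            counts[s[j:j + k]] = counts.get(s[j:j + k], 0) + 1
        for km, c in counts.items():
            if km in kmers:
                rows.append(r); cols.append(kmers[km]); vals.append(c)
    return csr_matrix((vals, (rows, cols)), shape=(len(seqs), len(kmers)))

# toy sequences standing in for MERS-CoV genome fragments
seqs = ["ACGTACGTGGTT", "ACGTACGTGGTA", "TTTTGGGGCCCC", "TTTTGGGGCCGA"]
X = kmer_matrix(seqs, k=3).toarray()
Z = linkage(pdist(X, metric="euclidean"), method="average")  # UPGMA-style
print(Z)   # the linkage matrix encodes the hierarchical (phylogenetic) tree
```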

  18. An Interview with Matthew P. Greving, PhD. Interview by Vicki Glaser.

    PubMed

    Greving, Matthew P

    2011-10-01

    Matthew P. Greving is Chief Scientific Officer at Nextval Inc., a company founded in early 2010 that has developed a discovery platform called MassInsight™. He received his PhD in Biochemistry from Arizona State University, and prior to that he spent nearly 7 years working as a software engineer. This experience in solving complex computational problems fueled his interest in developing technologies and algorithms related to the acquisition and analysis of high-dimensional biochemical data. To address the existing problems associated with label-based microarray readouts, he began work on a technique for label-free mass spectrometry (MS) microarray readout compatible with both matrix-assisted laser desorption/ionization (MALDI) and matrix-free nanostructure-initiator mass spectrometry (NIMS). This is the core of Nextval's MassInsight technology, which utilizes picoliter noncontact deposition of high-density arrays on mass-readout substrates along with computational algorithms for high-dimensional data processing and reduction.

  19. [A cost-benefit analysis of a Mexican food-support program].

    PubMed

    Ventura-Alfaro, Carmelita E; Gutiérrez-Reyes, Juan P; Bertozzi-Kenefick, Stefano M; Caldés-Gómez, Natalia

    2011-06-01

    Objective: To present an estimate of a Mexican food-support program's (FSP) cost-transfer ratio (CTR) from start-up (2003) to May 2005. Methods: The program's activities were listed by constructing a time-allocation matrix to ascertain how much time the personnel involved spent on each of the program's activities. Another cost matrix was also constructed and completed with information from the program's accountancy records. The program's total cost, activity costs and the value of the FSP transfers given were thus estimated. Results: Food-delivery CTR for 2003, 2004 and 2005 was 0.150, 0.218 and 0.230, respectively; cash CTR was 0.132 in 2004 and 0.105 in 2005. Conclusion: Comparing CTR values according to transfer type is a good way to promote discussion of this topic; however, the decision to make a transfer does not depend exclusively on efficiency but on the effectiveness of both mechanisms.

  20. Multi-GPU implementation of a VMAT treatment plan optimization algorithm.

    PubMed

    Tian, Zhen; Peng, Fei; Folkerts, Michael; Tan, Jun; Jia, Xun; Jiang, Steve B

    2015-06-01

    Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix in cases of, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. Barzilai and Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H&N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single GPU implementation strategies, i.e., truncating DDC matrix (S1), repeatedly transferring DDC matrix between CPU and GPU (S2), and porting computations involving DDC matrix to CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. The authors' multi-GPU implementation can finish the optimization process within ∼ 1 min for the H&N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23-46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of these VMAT cases that the authors have tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be in an order of several minutes. The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. 
The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
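
    To make the data layout concrete, the following sketch mimics the described split of a sparse DDC matrix (COO on the CPU, per-angle CSR blocks on the GPUs) in NumPy/SciPy rather than CUDA; it is not the authors' code, and the matrix sizes and the least-squares pricing objective are hypothetical.

    ```python
    import numpy as np
    import scipy.sparse as sp

    # Hypothetical sizes: 4 beam-angle groups of 1000 beamlets, 20000 voxels.
    n_vox, n_grp, n_blt = 20000, 4, 1000
    ddc_coo = sp.random(n_vox, n_grp * n_blt, density=1e-3,
                        format="coo", random_state=0)  # DDC in COO on the "CPU"

    # Split by beam angle into CSR column blocks, one per (simulated) GPU.
    ddc_csr = ddc_coo.tocsr()
    blocks = [ddc_csr[:, g * n_blt:(g + 1) * n_blt] for g in range(n_grp)]

    # Pricing step: each "GPU" prices only its own beamlets. As a made-up
    # objective, use least squares against a prescribed dose d, so the price
    # of beamlet b in a group is the gradient component (block^T r)_b.
    rng = np.random.default_rng(0)
    d = rng.random(n_vox)               # prescribed dose (hypothetical)
    r = np.zeros(n_vox) - d             # residual of the current (zero) dose
    prices = [blk.T @ r for blk in blocks]      # independent per angle group
    best = [int(np.argmin(p)) for p in prices]  # most promising beamlets
    ```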

  1. 3S (Safeguards, Security, Safety) based pyroprocessing facility safety evaluation plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ku, J.H.; Choung, W.M.; You, G.S.

    The big advantage of pyroprocessing for the management of spent fuels over conventional reprocessing technologies lies in its proliferation resistance, since pure plutonium cannot be separated from the spent fuel. The extracted materials can be directly used as metal fuel in a fast reactor, and pyroprocessing drastically reduces the volume and heat load of the spent fuel. KAERI has implemented the SBD (Safeguards-By-Design) concept in nuclear fuel cycle facilities. The goal of SBD is to integrate international safeguards into the entire facility design process from the very beginning of the design phase. This paper presents a safety evaluation plan using a conceptual design of a reference pyroprocessing facility, in which the 3S (Safeguards, Security, Safety)-By-Design (3SBD) concept is integrated from the early conceptual design phase. The purpose of this paper is to establish an advanced pyroprocessing hot cell facility design concept based on 3SBD for the successful realization of pyroprocessing technology with enhanced safety and proliferation resistance.

  2. The Creation of a CPU Timer for High Fidelity Programs

    NASA Technical Reports Server (NTRS)

    Dick, Aidan A.

    2011-01-01

    Using the C and C++ programming languages, a tool was developed that measures the efficiency of a program by recording the amount of CPU time that various functions consume. By inserting the tool between lines of code in the program, one can receive a detailed report of the absolute and relative time consumption associated with each section. After adapting the generic tool for MAVERIC, a high-fidelity launch vehicle simulation program, the components of a frequently used function called "derivatives ( )" were measured. Out of the 34 sub-functions in "derivatives ( )", it was found that the top 8 sub-functions made up 83.1% of the total time spent. To decrease the overall run time of MAVERIC, a change was implemented in the sub-function "Event_Controller ( )". Reformatting "Event_Controller ( )" led to a 36.9% decrease in the total CPU time spent by that sub-function, and a 3.2% decrease in the total CPU time spent by the overarching function "derivatives ( )".
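
    A minimal stand-in for such a tool, in Python rather than the C/C++ described (the section name and report format are invented): a context manager accumulates CPU time per named section via time.process_time() and prints absolute and relative consumption.

    ```python
    import time
    from collections import defaultdict
    from contextlib import contextmanager

    cpu_totals = defaultdict(float)

    @contextmanager
    def timed(section):
        """Accumulate CPU time (not wall time) spent in a named section."""
        start = time.process_time()
        try:
            yield
        finally:
            cpu_totals[section] += time.process_time() - start

    # Usage: wrap suspect regions of the program, then print a report.
    with timed("event_controller"):         # hypothetical section name
        sum(i * i for i in range(10**6))    # stand-in for real work

    total = sum(cpu_totals.values())
    for name, t in sorted(cpu_totals.items(), key=lambda kv: -kv[1]):
        print(f"{name:>20s}: {t:8.4f} s ({100 * t / total:5.1f}%)")
    ```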

  3. A review on methods of regeneration of spent pickling solutions from steel processing.

    PubMed

    Regel-Rosocka, Magdalena

    2010-05-15

    The review presents various techniques for the regeneration of spent pickling solutions, including methods with acid recovery, such as diffusion dialysis, electrodialysis, membrane electrolysis and membrane distillation, evaporation, precipitation and spray roasting, as well as those with acid and metal recovery: ion exchange, retardation, crystallization, and solvent and membrane extraction. Advantages and disadvantages of the techniques are presented, discussed and confronted with the best available techniques (BAT) requirements. Most of the methods presented meet the BAT requirements. The best available techniques are electrodialysis, diffusion dialysis and crystallization; however, in practice, spray roasting and retardation/ion exchange are applied most frequently for spent pickling solution regeneration. Solvent extraction, non-dispersive solvent extraction and membrane distillation should be regarded as "waiting for their chance", because they are well investigated and developed. The environmental and economic benefits of the methods presented in the review depend on the cost of chemicals and wastewater treatment, legislative regulations, and the cost of modernization of existing technologies or implementation of new ones. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  4. Implementation of Laminate Theory Into Strain Rate Dependent Micromechanics Analysis of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2000-01-01

    A research program is in progress to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to impact loads. Previously, strain rate dependent inelastic constitutive equations developed to model the polymer matrix were implemented into a mechanics of materials based micromechanics method. In the current work, the computation of the effective inelastic strain in the micromechanics model was modified to fully incorporate the Poisson effect. The micromechanics equations were also combined with classical laminate theory to enable the analysis of symmetric multilayered laminates subject to in-plane loading. A quasi-incremental trapezoidal integration method was implemented to integrate the constitutive equations within the laminate theory. Verification studies were conducted on an AS4/PEEK composite with a variety of laminate configurations and strain rates. The predicted results compared well with experimentally obtained values.
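
    The quasi-incremental trapezoidal idea can be pictured with a one-dimensional stand-in (not the paper's constitutive equations; the modulus E and the power-law inelastic rate f below are invented): the inelastic strain rate is averaged between the start of the step and an explicit predictor of its end.

    ```python
    import numpy as np

    def trapezoidal_step(sigma, eps_in, deps, dt, E, f):
        """One trapezoidal step for sigma_dot = E*(eps_dot - eps_in_dot),
        with the inelastic strain rate given by eps_in_dot = f(sigma)."""
        rate0 = f(sigma)                               # rate at start of step
        sigma_pred = sigma + E * (deps - rate0 * dt)   # explicit predictor
        rate_avg = 0.5 * (rate0 + f(sigma_pred))       # trapezoidal average
        sigma_new = sigma + E * (deps - rate_avg * dt)
        return sigma_new, eps_in + rate_avg * dt

    # Hypothetical power-law creep rate, constant applied strain rate.
    f = lambda s: 1e-6 * np.sign(s) * abs(s) ** 2
    sigma, eps_in = 0.0, 0.0
    for _ in range(1000):
        sigma, eps_in = trapezoidal_step(sigma, eps_in, deps=1e-4, dt=1e-3,
                                         E=4000.0, f=f)
    ```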

  5. Assessing Skills from Placement to Completion.

    ERIC Educational Resources Information Center

    Armstrong, Judy

    A system to provide objective measures of institutional effectiveness was implemented at the Roswell branch of Eastern New Mexico University to determine whether the college was accountable to students, staff, and taxpayers; to improve the curriculum and programs; and to prepare for accreditation review in 1991. A task force spent the summer of…

  6. Invest in Financial Literacy

    ERIC Educational Resources Information Center

    Bush, Sarah B.; McGatha, Maggie B.; Bay-Williams, Jennifer M.

    2012-01-01

    The current state of the economy elevates the need to build awareness of financial markets and personal finance among the nation's young people through implementing a financial literacy curriculum in schools. A limited amount of time spent on financial literacy can have a positive effect on students' budgeting skills. This knowledge will only add…

  7. Improving Time Management for the Working Student.

    ERIC Educational Resources Information Center

    Anderson, Tim; Lott, Rod; Wieczorek, Linda

    This action research project implemented and evaluated a program for increasing time spent on homework. The project was intended to improve academic achievement among five employed high school students taking geometry and physical science who were also employed more than 15 hours per week. The problem of lower academic achievement due to…

  8. 77 FR 26314 - National Environmental Policy Act: Implementing Procedures; Addition to Categorical Exclusions...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-03

    ... leasing and funding for single- family homesites on Indian land, including associated improvements and...) reducing the resources spent analyzing proposals which generally do not have potentially significant... reviews of actions associated with single-family homes by preparing EAs; the addition of a categorical...

  9. Gamma and fast neutron radiation monitoring inside spent reactor fuel assemblies

    NASA Astrophysics Data System (ADS)

    Lakosi, L.; Tam Nguyen, C.

    2007-09-01

    Gamma and neutron signatures of spent reactor fuel were monitored by small-size silicon diode and track etch detectors, respectively, in a nuclear power plant (NPP). These signatures, reflecting gross gamma intensity and the 242,244Cm content, contain information on the burn-up (BU) and cooling time (CT) of the fuel. The small size of the detectors allows close access to inside parts of the assemblies out of reach of other methods. A commercial Si diode was encapsulated in a cylindrical steel case and was used for gross γ monitoring. CR-39 detectors were used for neutron measurements. Irradiation exposures at the NPP were implemented in the central dosimetric channel of spent fuel assemblies (SFAs) stored in borated water. Gross γ and neutron axial profiles were taken by scanning with the aid of a long steel guide tube, lowered down to the spent fuel pond by crane and fitted to the headpiece of the fuel assemblies. Gamma measurements were performed using a long cable introduced in this tube, with the Si diode at the end. A long steel wire was also led through the guide tube, to which a chain of 15 sample holder capsules was attached, each containing a track detector. Gamma dose rates of 0.1-10 kGy h⁻¹ and neutron fluxes in the range of (0.25-26) × 10⁴ cm⁻² s⁻¹ were recorded. The results are in good correlation with those of a calculation of the spent fuel neutron yield.

  10. Advancing the Fork detector for quantitative spent nuclear fuel verification

    DOE PAGES

    Vaccaro, S.; Gauld, I. C.; Hu, J.; ...

    2018-01-31

    The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This study describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. Finally, the results are summarized; sources and magnitudes of uncertainties are identified; and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.

  11. Advancing the Fork detector for quantitative spent nuclear fuel verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaccaro, S.; Gauld, I. C.; Hu, J.

    The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This study describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. Finally, the results are summarized; sources and magnitudes of uncertainties are identified; and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.

  12. Advancing the Fork detector for quantitative spent nuclear fuel verification

    NASA Astrophysics Data System (ADS)

    Vaccaro, S.; Gauld, I. C.; Hu, J.; De Baere, P.; Peterson, J.; Schwalbach, P.; Smejkal, A.; Tomanin, A.; Sjöland, A.; Tobin, S.; Wiarda, D.

    2018-04-01

    The Fork detector is widely used by the safeguards inspectorate of the European Atomic Energy Community (EURATOM) and the International Atomic Energy Agency (IAEA) to verify spent nuclear fuel. Fork measurements are routinely performed for safeguards prior to dry storage cask loading. Additionally, spent fuel verification will be required at the facilities where encapsulation is performed for acceptance in the final repositories planned in Sweden and Finland. The use of the Fork detector as a quantitative instrument has not been prevalent due to the complexity of correlating the measured neutron and gamma ray signals with fuel inventories and operator declarations. A spent fuel data analysis module based on the ORIGEN burnup code was recently implemented to provide automated real-time analysis of Fork detector data. This module allows quantitative predictions of expected neutron count rates and gamma units as measured by the Fork detectors using safeguards declarations and available reactor operating data. This paper describes field testing of the Fork data analysis module using data acquired from 339 assemblies measured during routine dry cask loading inspection campaigns in Europe. Assemblies include both uranium oxide and mixed-oxide fuel assemblies. More recent measurements of 50 spent fuel assemblies at the Swedish Central Interim Storage Facility for Spent Nuclear Fuel are also analyzed. An evaluation of uncertainties in the Fork measurement data is performed to quantify the ability of the data analysis module to verify operator declarations and to develop quantitative go/no-go criteria for safeguards verification measurements during cask loading or encapsulation operations. The goal of this approach is to provide safeguards inspectors with reliable real-time data analysis tools to rapidly identify discrepancies in operator declarations and to detect potential partial defects in spent fuel assemblies with improved reliability and minimal false positive alarms. The results are summarized; sources and magnitudes of uncertainties are identified; and the impact of analysis uncertainties on the ability to confirm operator declarations is quantified.

  13. Explaining and Controlling for the Psychometric Properties of Computer-Generated Figural Matrix Items

    ERIC Educational Resources Information Center

    Freund, Philipp Alexander; Hofer, Stefan; Holling, Heinz

    2008-01-01

    Figural matrix items are a popular task type for assessing general intelligence (Spearman's g). Items of this kind can be constructed rationally, allowing the implementation of computerized generation algorithms. In this study, the influence of different task parameters on the degree of difficulty in matrix items was investigated. A sample of N =…

  14. Factors Influencing Acceptability and Perceived Impacts of a Mandatory ePortfolio Implemented by an Occupational Therapy Regulatory Organization.

    PubMed

    Vachon, Brigitte; Foucault, Marie-Lyse; Giguère, Charles-Édouard; Rochette, Annie; Thomas, Aliki; Morel, Martine

    2018-01-01

    The use of ePortfolios has been implemented in several regulatory organizations to encourage clinicians' engagement in continuing professional development (CPD). However, their use has achieved mixed success, and multiple personal and contextual factors can influence their impacts on practice change. The aim of this study was to identify which factors influence the acceptability and perceived impacts of an ePortfolio implemented by an occupational therapy regulatory organization in one Canadian province. A cross-sectional online survey design was used. The survey was sent to registered occupational therapists in Quebec. Multiple regression analyses were conducted to identify factors influencing acceptability and outcomes: ease of use, satisfaction, impact on implementation of the CPD plan, and competence improvement. The survey was fully completed by 546 participants. Factors significantly influencing the ePortfolio acceptability and perceived impacts were attitude toward and familiarity with the portfolio, confidence in reflective skills, engagement in the CPD plan, and desire for feedback. Time spent completing the ePortfolio and the fact of completing it in teams were negatively associated with the outcomes. Shaping more favorable user attitudes, helping users recognize and experience the tool's benefits for their practice, and fostering confidence in their reflective skills are important factors that can be addressed to improve ePortfolio acceptability and outcomes. Contextual factors, such as time spent completing the ePortfolio and completing it in teams, seem to reflect greater difficulty with using the tool. Study findings can contribute to improving ePortfolio implementation in the CPD context.

  15. Positive Matrix Factorization Model for environmental data analyses

    EPA Pesticide Factsheets

    Positive Matrix Factorization is a receptor model developed by EPA to provide scientific support for current ambient air quality standards and implement those standards by identifying and quantifying the relative contributions of air pollution sources.
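
    For intuition, an analogous nonnegative factorization can be run with scikit-learn's NMF; this is a simplification, since EPA's PMF additionally weights residuals by per-sample uncertainties, which plain NMF does not. The sample and species counts below are made up.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(1)
    X = rng.random((200, 15))        # 200 samples x 15 chemical species

    model = NMF(n_components=4, init="nndsvd", max_iter=500, random_state=0)
    G = model.fit_transform(X)       # source contributions per sample
    F = model.components_            # source profiles (species signatures)
    print("reconstruction error:", model.reconstruction_err_)
    ```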

  16. Neutronics calculations on the impact of burnable poisons to safety and non-proliferation aspects of inert matrix fuel

    NASA Astrophysics Data System (ADS)

    Pistner, C.; Liebert, W.; Fujara, F.

    2006-06-01

    Inert matrix fuels (IMF) with plutonium may play a significant role in the disposal of stockpiles of separated plutonium of military or civilian origin. For reasons of reactivity control of such fuels, burnable poisons (BP) will have to be used. The impact of different possible BP candidates (B, Eu, Er and Gd) on the achievable burnup as well as on safety and non-proliferation aspects of IMF is analyzed. To this end, cell burnup calculations have been performed and burnup-dependent reactivity coefficients (boron worth, fuel temperature and moderator void coefficient) were calculated. All BP candidates were analyzed for one initial BP concentration and a range of different initial plutonium concentrations (0.4-1.0 g cm⁻³) for reactor-grade plutonium isotopic composition as well as for weapon-grade plutonium. For the two most promising BP candidates (Er and Gd), a range of different BP concentrations was investigated to study the impact of BP concentration on fuel burnup. A set of reference fuels was identified to compare the performance of uranium fuels, MOX and IMF with respect to (1) the fraction of initial plutonium being burned, (2) the remaining absolute plutonium concentration in the spent fuel and (3) the shift in the isotopic composition of the remaining plutonium leading to differences in the heat and neutron rate produced. In the case of IMF, the remaining Pu in spent fuel is unattractive for a would-be proliferator. This underlines the attractiveness of an IMF approach for disposal of Pu from a non-proliferation perspective.

  17. Factors influencing nursing time spent on administration of medication in an Australian residential aged care home.

    PubMed

    Qian, Siyu; Yu, Ping; Hailey, David M; Wang, Ning

    2016-04-01

    To examine nursing time spent on administration of medications in a residential aged care (RAC) home, and to determine factors that influence the time to medicate a resident. Information on nursing time spent on medication administration is useful for planning and implementation of nursing resources. Nurses were observed over 12 morning medication rounds using a time-motion observational method and field notes, at two high-care units in an Australian RAC home. Nurses spent between 2.5 and 4.5 hours in a medication round. Administration of medication averaged 200 seconds per resident. Four factors had significant impact on medication time: number of types of medication, number of tablets taken by a resident, methods used by a nurse to prepare tablets and methods to provide tablets. Administration of medication consumed a substantial, though variable amount of time in the RAC home. Nursing managers need to consider the factors that influenced the nursing time required for the administration of medication in their estimation of nursing workload and required resources. To ensure safe medication administration for older people, managers should regularly assess the changes in the factors influencing nursing time on the administration of medication when estimating nursing workload and required resources. © 2015 John Wiley & Sons Ltd.

  18. Architecture studies and system demonstrations for optical parallel processor for AI and NI

    NASA Astrophysics Data System (ADS)

    Lee, Sing H.

    1988-03-01

    In solving deterministic AI problems, the data search for matching the arguments of a PROLOG expression causes a serious bottleneck when implemented sequentially by electronic systems. To overcome this bottleneck we have developed the concepts for an optical expert system based on a matrix-algebraic formulation, which will be suitable for parallel optical implementation. The optical AI system based on the matrix-algebraic formulation will offer distinct advantages for parallel search, adult learning, etc.

  19. V2.2 L2AS Detailed Release Description April 15, 2002

    Atmospheric Science Data Center

    2013-03-14

    ... 'optically thick atmosphere' algorithm. Implement new experimental aerosol retrieval algorithm over homogeneous surface types. ... Change values: cloud_mask_decision_matrix(1,1): .true. -> .false. cloud_mask_decision_matrix(2,1): .true. -> .false. ...

  20. A New Pipelined Systolic Array-Based Architecture for Matrix Inversion in FPGAs with Kalman Filter Case Study

    NASA Astrophysics Data System (ADS)

    Bigdeli, Abbas; Biglari-Abhari, Morteza; Salcic, Zoran; Tin Lai, Yat

    2006-12-01

    A new pipelined systolic array-based (PSA) architecture for matrix inversion is proposed. The pipelined systolic array (PSA) architecture is suitable for FPGA implementations as it efficiently uses the available resources of an FPGA. It is scalable for different matrix sizes and as such allows parameterisation that makes it suitable for customisation to application-specific needs. This new architecture has an advantage of [InlineEquation not available: see fulltext.] processing element complexity, compared to the [InlineEquation not available: see fulltext.] in other systolic array structures, where the size of the input matrix is given by [InlineEquation not available: see fulltext.]. The use of the PSA architecture for a Kalman filter, which requires different structures for different numbers of states, is illustrated as an implementation example. The resulting precision error is analysed and shown to be negligible.
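
    For orientation, the step the architecture accelerates can be written as a plain NumPy reference (a numerical sketch only; the paper's contribution is the FPGA structure, not this formula, and the dimensions are hypothetical): the Kalman measurement update hinges on inverting the innovation covariance.

    ```python
    import numpy as np

    def kalman_update(x, P, z, H, R):
        """Measurement update; inverting S is the matrix inversion that a
        systolic array would perform in hardware."""
        S = H @ P @ H.T + R               # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain (matrix inversion)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Hypothetical 4-state, 2-measurement filter.
    x, P = np.zeros(4), np.eye(4)
    H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])
    x, P = kalman_update(x, P, z=np.array([0.3, -0.1]), H=H,
                         R=0.01 * np.eye(2))
    ```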

  1. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    To date, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e., triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
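
    A toy serial version of the idea (the sparsity pattern and test matrix are hypothetical; the n small least-squares problems are solved in a loop here, where the CM-5 would solve them independently in parallel):

    ```python
    import numpy as np
    import scipy.sparse as sp

    def approximate_inverse(A, pattern):
        """Build M ~ A^{-1} column by column: column m_j minimizes
        ||A m_j - e_j||_2 over a fixed sparsity pattern. The n problems
        are independent, which is the source of the parallelism."""
        n = A.shape[0]
        A = sp.csc_matrix(A)
        rows, cols, vals = [], [], []
        for j in range(n):
            J = pattern(j, n)                 # allowed nonzeros of column j
            Asub = A[:, J].toarray()          # n x |J| dense subproblem
            e_j = np.zeros(n)
            e_j[j] = 1.0
            m, *_ = np.linalg.lstsq(Asub, e_j, rcond=None)
            rows += J
            cols += [j] * len(J)
            vals += list(m)
        return sp.csc_matrix((vals, (rows, cols)), shape=(n, n))

    # Hypothetical tridiagonal pattern for M; 1-D Laplacian as test matrix.
    tri = lambda j, n: [i for i in (j - 1, j, j + 1) if 0 <= i < n]
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(50, 50), format="csr")
    M = approximate_inverse(A, tri)   # apply as a preconditioner via M @ r
    ```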

  2. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the Kronecker product of the 1-D decompositions, which closely approximates the original matrix. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
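
    The central algebraic fact can be checked in a few lines of NumPy (a schematic with invented 1-D grids and a Gaussian correlation; the paper's EOF truncation and spline interpolation to the high-resolution grid are omitted): when the correlation factors along each direction, the 3-D correlation matrix is a Kronecker product of three small 1-D matrices, and its eigendecomposition follows from the 1-D ones.

    ```python
    import numpy as np

    def corr_1d(coords, length_scale):
        """Gaussian 1-D correlation matrix (illustrative choice)."""
        d = coords[:, None] - coords[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    nx, ny, nz = 12, 10, 8
    Cx = corr_1d(np.arange(nx, dtype=float), 3.0)
    Cy = corr_1d(np.arange(ny, dtype=float), 3.0)
    Cz = corr_1d(np.arange(nz, dtype=float), 2.0)

    # Full 3-D correlation matrix via the Kronecker product.
    C = np.kron(Cx, np.kron(Cy, Cz))

    # Decompose only the small 1-D matrices; combine the results.
    wx, Ux = np.linalg.eigh(Cx)
    wy, Uy = np.linalg.eigh(Cy)
    wz, Uz = np.linalg.eigh(Cz)
    U = np.kron(Ux, np.kron(Uy, Uz))    # eigenvectors of C
    w = np.kron(wx, np.kron(wy, wz))    # eigenvalues of C
    assert np.allclose(U @ np.diag(w) @ U.T, C)
    ```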

  3. SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Z; Shi, F; Jia, X

    Purpose: GPU has been employed to speed up VMAT optimizations from hours to minutes. However, its limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam angle intervals and/or small beamlet size. We propose multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of beam angles and is only responsible for calculation related to those angles. Broadcast and parallel reduction schemes are adopted for inter-GPU data transfer. MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries during optimization; (S2) transferring the DDC matrix part by part to the GPU during optimization whenever needed; (S3) moving DDC-matrix-related calculation onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality is worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as in our method. However, the computation time is longer, namely 4 minutes and 30 minutes, respectively. Conclusion: Our multi-GPU-based VMAT optimization can effectively solve the limited memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.

  4. Influence of phosphate glass recrystallization on the stability of a waste matrix to leaching

    NASA Astrophysics Data System (ADS)

    Yudintsev, S. V.; Pervukhina, A. M.; Mokhov, A. V.; Malkovsky, V. I.; Stefanovsky, S. V.

    2017-04-01

    In Russia, highly radioactive liquid wastes from recycling of spent fuel of nuclear reactors are solidified into Na-Al-P glass for underground storage. The properties of the matrix including the radionuclide fixation will change with time due to crystallization. This is supported by the results of study of the interaction between glassy matrices, products of their crystallization, and water. The concentration of Cs in a solution at the contact of a recrystallized sample increased by three orders of magnitude in comparison with an experiment with glass. This difference is nearly one order of magnitude for Sr, Ce, and Nd (simulators of actinides) and U due to their incorporation into phases with low solubility in water. Based on data on the compositional change of solutions after passing through filters of various diameters, it is concluded that Cs occurs in the dissolved state in runs with a glass and recrystallized matrix. At the same time, Sr, lanthanides, and U occur in the dissolved state and in the composition of colloids in runs with glass, and mostly in colloid particles after contact with the recrystallized sample. These results should be regarded for substantiation of safety for geological waste storage.

  5. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGES

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...

    2015-07-14

    Sparse matrix-vector multiplication (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large-scale computations. In this paper, our target systems are high-end multi-core architectures, and we use a message passing interface + open multiprocessing (MPI + OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of the distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
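
    The symmetry exploitation at the heart of the comparison can be sketched in SciPy (a serial stand-in for the distributed MPI + OpenMP kernel; no communication overlap is shown): storing only the upper triangle roughly halves the stored nonzeros, at the cost of a second, transposed pass.

    ```python
    import numpy as np
    import scipy.sparse as sp

    def sym_spmv(U, x):
        """y = A @ x for symmetric A stored as its upper triangle U
        (diagonal included)."""
        y = U @ x               # stored upper-triangle contributions
        y += U.T @ x            # mirrored lower-triangle contributions
        y -= U.diagonal() * x   # the diagonal was applied twice
        return y

    A = sp.random(500, 500, density=0.01, format="csr", random_state=0)
    A = A + A.T                          # make a symmetric test matrix
    U = sp.triu(A, format="csr")         # keep the upper triangle only
    x = np.ones(500)
    assert np.allclose(sym_spmv(U, x), A @ x)
    ```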

  6. Spent Nuclear Fuel Project Configuration Management Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reilly, M.A.

    This document is a rewrite of the draft "C" that was agreed to "in principle" by SNF Project level 2 managers on EDT 609835, dated March 1995 (not released). The implementation process philosophy was changed in keeping with the ongoing reengineering of the WHC Controlled Manuals to achieve configuration management within the SNF Project.

  7. Project Portfolio Management for Academic Libraries: A Gentle Introduction

    ERIC Educational Resources Information Center

    Vinopal, Jennifer

    2012-01-01

    In highly dynamic, service-oriented environments like academic libraries, much staff time is spent on initiatives to implement new products and services to meet users' evolving needs. Yet even in an environment where a sound project management process is applied, if we're not properly planning, managing, and controlling the organization's work in…

  8. Use of fellowships

    NASA Technical Reports Server (NTRS)

    Gierasch, Peter J.

    1990-01-01

    The effective use of Space Grant Program fellowships are critical in meeting program objectives. In the first year of operation, the 21 colleges/consortia will expend from 30-40 percent of their grants for fellowships; program policy will allow up to 50 percent to be spent for fellowships. Thus, fellowship policy must be carefully implemented and monitored.

  9. The Dark Side of the Sun.

    ERIC Educational Resources Information Center

    Fry, Tom

    2002-01-01

    Describes easy-to-implement strategies parents can use to ensure their children's safety in the sun and avoid skin cancer, which is the most prevalent form of cancer in United States. Suggestions include: limit the amount of time spent in the sun, wear protective clothing, use sunscreening agents, and have knowledge of skin cancer and its…

  10. Women in History--Judy Heumann: Giving Voice and Creating Change

    ERIC Educational Resources Information Center

    Hall, Sarah A.

    2008-01-01

    This article profiles Judy Heumann, who has spent her life as an advocate for the rights of people with disabilities. She advocates for the full appropriate implementation of the Individuals with Disabilities Education Act and other related antidiscrimination legislation. Her ultimate goal is for people with disabilities "not to be seen as…

  11. 10 CFR Appendix D to Subpart D of... - Classes of Actions that Normally Require EISs

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... [Reserved] D7 Contracts, policies, and marketing and allocation plans for electric power D8 Import or export... operational change D10 Treatment, storage, and disposal facilities for high-level waste and spent nuclear fuel... Contracts, Policies, and Marketing and Allocation Plans for Electric Power Establishment and implementation...

  12. Surviving the Implementation of a New Science Curriculum

    ERIC Educational Resources Information Center

    Lowe, Beverly; Appleton, Ken

    2015-01-01

    Queensland schools are currently teaching with the first National Curriculum for Australia. This new curriculum was one of a number of political responses to address the recurring low scores in literacy, mathematics, and science that continue to hold Australia in poor international rankings. Teachers have spent 2 years getting to know the new…

  13. Once Upon a Time: A Grimm Approach to Character Education

    ERIC Educational Resources Information Center

    Bryan, Laura

    2005-01-01

    Many school districts have implemented "packaged" programs designed to teach character education. Millions of dollars have been spent on these programs, yet society continues to produce more "characters" than students "with character." This article describes a shift from the "programmatic" mindset to a solution that is not packaged or purchased.…

  14. 40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... B C Appendix C to Part 191 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... SPENT NUCLEAR FUEL, HIGH-LEVEL AND TRANSURANIC RADIOACTIVE WASTES Pt. 191, App. C Appendix C to Part 191... establish appropriate markers and records, consistent with § 191.14(c). The Agency assumes that, as long as...

  15. 40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... B C Appendix C to Part 191 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... SPENT NUCLEAR FUEL, HIGH-LEVEL AND TRANSURANIC RADIOACTIVE WASTES Pt. 191, App. C Appendix C to Part 191... establish appropriate markers and records, consistent with § 191.14(c). The Agency assumes that, as long as...

  16. Augmenting and updating NASA spacelink electronic information system

    NASA Technical Reports Server (NTRS)

    Blake, Jean A.

    1989-01-01

    The development of Spacelink during its gestation, birth, infancy, and childhood are described. In addition to compiling and developing more material for implementation in Spacelink, Summer 1989 was spent scanning the insignias of the various manned missions into Spacelink. Material for the above was extracted from existing NASA publications, documents and photographs.

  17. Implementation of Rocking Chair Therapy for Veterans in Residential Substance Use Disorder Treatment.

    PubMed

    Cross, Rene' L; White, Justin; Engelsher, Jaclyn; O'Connor, Stephen S

    Substance use disorder (SUD) and mental health diagnoses are contributing factors to Veteran homelessness. The aim was to assess the acceptance and feasibility of rocking chair therapy as a self-implemented intervention for mood and substance cravings. For homeless Veterans in SUD treatment, how does adding vestibular stimulation by use of a rocking chair, compared with treatment as usual, affect levels of anxiety and substance cravings? Two significant findings were observed. First, a greater number of minutes spent rocking was associated with significantly greater scores on the Expectancy scale of the Alcohol Craving Questionnaire (ACQ; p = .05), suggesting that participants experiencing higher urges and desires to drink rocked to self-soothe. Second, a significant association was observed between a greater number of minutes spent rocking and lower scores on the ACQ Purposefulness subscale (p = .03), indicating that greater time rocking was associated with fewer urges and desires connected with the intent and plan to drink. Vestibular stimulation by rocking in a rocking chair may increase the ability to self-regulate mood and substance cravings, thereby potentially reducing the risk of relapse and recurrent chronic homelessness.

  18. Depot effect of bioactive components in experimental membrane filtrations

    NASA Astrophysics Data System (ADS)

    Mitev, D.; Peshev, D.; Peev, G.; Peeva, L.

    2017-01-01

    Depot effects were found to be accompanying phenomena of membrane separation processes. Accumulation of target species in the membrane matrix during feasibility tests can hamper proper conclusions or compromise the filtration results. Therefore, we investigated the effects of delayed membrane release of chlorogenic acid and caffeine, considered key compounds of interest in the recovery treatment of spent coffee products. Permeate fluxes and key component release were studied over the course of 24 hours via nanofiltration of pure solvent, both immediately after the mock solution filtration and after an idle period. Conclusions are drawn and recommendations given for proper analysis of experimental data on membrane screening.

  19. Solidification of spent ion exchange resins into the SIAL matrix at the Dukovany NPP, Czech Republic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tatransky, Peter; Prazska, Milena; Harvan, David

    2013-07-01

    Based on the decision of the State Office for Nuclear Safety, the Dukovany NPP has been obliged to secure efficient capacities for the disposal of spent ion exchange resins. Therefore, in September 2010, under a contract with the supplier company AMEC Nuclear Slovakia s.r.o., pumping and treatment of ion exchange resins began from storage tank 0TW30B02, situated in the auxiliary building. The SIAL® technology, developed by AMEC Nuclear Slovakia, has been used for the solidification purposes. This technology allows on-site treatment of various special radioactive waste streams (resins, sludge, sludge/resins and borates) at room temperature. The SIAL® matrix and technology were licensed by the Czech State Office for Nuclear Safety in 2007. On-site treatment and solidification of spent ion exchange resins at the Dukovany NPP involves removal of the resin from the tank using a remotely operated manipulator, resin transportation, resin separation from free water, resin filling into 200 dm³ drums, and solidification into the SIAL® matrix in 200 dm³ drums using the FIZA S 200 facility. The final product is checked for compressive strength, leachability, radionuclide composition, dose rate, solids and total weight. After meeting the requirements for final disposal and consolidation, the drums are transported for final disposal to the Repository at the Dukovany site. During the 3-month trial operation in 2010, and the normal operation in 2011 and 2012, 189 tons of dewatered resins were treated into 1960 drums, with a total activity higher than 920 GBq. At the end of the trial run (2010), 22 tons of dewatered resins were treated into 235 drums. During standard operation, approximately 91 tons in 960 drums (2011) and 76 tons in 765 drums (2012) were treated. The weights of resins in the drums were in the range of 89-106 kg, and the compressive strength limit (10 MPa) was already achieved 24 hours after fixation. The final measured strength values ranged from 19.0-34.7 MPa, and the real leachability values ranged from 0.03-0.65%, far below the 4% limit value. The collective effective dose of all workers in 2012 was 7.7 mSv (12.6 mSv in 2011, 6.2 mSv in 2010). The average individual effective dose in 2012 was 0.55 mSv (14 workers), and the maximal individual effective dose was 2.25 mSv. This approach allows fast, safe and cost-effective immobilization and transformation of dangerous radioactive waste such as sludge and resins into a solid form suitable for long-term storage or disposal. (authors)

  20. LS-DYNA Implementation of Polymer Matrix Composite Model Under High Strain Rate Impact

    NASA Technical Reports Server (NTRS)

    Zheng, Xia-Hua; Goldberg, Robert K.; Binienda, Wieslaw K.; Roberts, Gary D.

    2003-01-01

    A recently developed constitutive model is implemented in LS-DYNA as a user-defined material model (UMAT) to characterize the nonlinear strain rate dependent behavior of polymers. By utilizing this model within a micromechanics technique based on a laminate analogy, an algorithm to analyze the strain rate dependent, nonlinear deformation of a fiber-reinforced polymer matrix composite is then developed as a UMAT to simulate the response of these composites under high strain rate impact. The models are designed for shell elements in order to ensure computational efficiency. Experimental and numerical stress-strain curves are compared for two representative polymers and a representative polymer matrix composite, with the analytical model predicting the experimental response reasonably well.

  1. Time-motion studies of internal medicine residents' duty hours: a systematic review and meta-analysis.

    PubMed

    Leafloor, Cameron W; Lochnan, Heather A; Code, Catherine; Keely, Erin J; Rothwell, Deanna M; Forster, Alan J; Huang, Allen R

    2015-01-01

    Since the mid-1980s, medical residents' long duty hours have been under scrutiny as a factor affecting patient safety and the work environment for the residents. After several mandated changes in duty hours, it is important to understand how residents spend their time before proposing and implementing future changes. Time-motion methodology may provide reliable information on what residents do while on duty. The purpose of this study is to review all available literature pertaining to time-motion studies of internal medicine residents while on a medicine service and to understand how much of their time is apportioned to various categories of tasks, and also to determine the effects of the Accreditation Council for Graduate Medical Education (ACGME)-mandated duty hour changes on resident workflow in North America. Electronic bibliographic databases were searched for articles in English between 1941 and April 2013 reporting time-motion studies of internal medicine residents rotating through a general medicine service. Eight articles were included. Residents spent 41.8% of time in patient care activities, 18.1% communicating, 13.8% in educational activities, 19.7% in personal/other, and 6.6% in transit. North American data showed the following changes after the implementation of the ACGME 2003 duty hours standard: patient care activities from 41.8% to 40.8%, communication activities from 19.0% to 22.3%, educational activities from 17.7% to 11.6%, and personal/other activities from 21.5% to 17.1%. There was a paucity of time-motion data. There was great variability in the operational definitions of task categories reported in the studies. Implementation of the ACGME duty hour standards did not have a significant effect on the percentage of time spent in particular tasks. There are conflicting reports on how duty hour changes have affected patient safety. A low proportion of time spent in educational activities deserves further study and may point to a review of the educational models used.

  2. Mathematical foundations of the GraphBLAS

    DOE PAGES

    Kepner, Jeremy; Aaltonen, Peter; Bader, David; ...

    2016-12-01

    The GraphBLAS standard (GraphBlas.org) is being developed to bring the potential of matrix-based graph algorithms to the broadest possible audience. Mathematically, the GraphBLAS defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This study provides an introduction to the mathematics of the GraphBLAS. Graphs represent connections between vertices with edges. Matrices can represent a wide range of graphs using adjacency matrices or incidence matrices. Adjacency matrices are often easier to analyze while incidence matrices are often better for representing data. Fortunately, the two are easily connected by matrix multiplication. A key feature of matrix mathematics is that a very small number of matrix operations can be used to manipulate a very wide range of graphs. This composability of a small number of operations is the foundation of the GraphBLAS. A standard such as the GraphBLAS can only be effective if it has low performance overhead. Finally, performance measurements of prototype GraphBLAS implementations indicate that the overhead is low.
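
    A small illustration of the composability claim (plain NumPy over an integer adjacency matrix, not a GraphBLAS implementation): breadth-first search written as repeated matrix-vector products over a Boolean-like semiring.

    ```python
    import numpy as np

    def bfs_levels(A, source):
        """BFS level of every vertex, where A[i, j] = 1 means an edge
        i -> j. Each step is one 'semiring' matrix-vector product (OR of
        ANDs), emulated with integer arithmetic and a comparison."""
        n = A.shape[0]
        levels = np.full(n, -1)
        frontier = np.zeros(n, dtype=np.int64)
        frontier[source] = 1
        level = 0
        while frontier.any():
            levels[frontier > 0] = level
            reached = A.T @ frontier          # vertices touched by frontier
            frontier = ((reached > 0) & (levels == -1)).astype(np.int64)
            level += 1
        return levels

    A = np.array([[0, 1, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])
    print(bfs_levels(A, source=0))   # -> [0 1 1 2]
    ```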

  3. Unifying time evolution and optimization with matrix product states

    NASA Astrophysics Data System (ADS)

    Haegeman, Jutho; Lubich, Christian; Oseledets, Ivan; Vandereycken, Bart; Verstraete, Frank

    2016-10-01

    We show that the time-dependent variational principle provides a unifying framework for time-evolution methods and optimization methods in the context of matrix product states. In particular, we introduce a new integration scheme for studying time evolution, which can cope with arbitrary Hamiltonians, including those with long-range interactions. Rather than a Suzuki-Trotter splitting of the Hamiltonian, which is the idea behind the adaptive time-dependent density matrix renormalization group method or time-evolving block decimation, our method is based on splitting the projector onto the matrix product state tangent space as it appears in the Dirac-Frenkel time-dependent variational principle. We discuss how the resulting algorithm resembles the density matrix renormalization group (DMRG) algorithm for finding ground states so closely that it can be implemented by changing just a few lines of code and it inherits the same stability and efficiency. In particular, our method is compatible with any Hamiltonian for which ground-state DMRG can be implemented efficiently. In fact, DMRG is obtained as a special case of our scheme for imaginary time evolution with infinite time step.

  4. Gamma-ray mirror technology for NDA of spent fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Descalle, M. A.; Ruz-Armendariz, J.; Decker, T.

    Direct measurements of gamma rays emitted by fissile material have been proposed as an alternative to measurements of the gamma rays from fission products. From a safeguards applications perspective, direct detection of uranium (U) and plutonium (Pu) K-shell fluorescence emission lines and specific lines from some of their isotopes could lead to improved shipper-receiver difference or input accountability at the start of Pu reprocessing. However, these measurements are difficult to implement when the spent fuel is in the line-of-sight of the detector, as the detector is exposed to high rates dominated by fission product emissions. To overcome the combination of high rates and high background, grazing incidence multilayer mirrors have been proposed as a solution to selectively reflect U and Pu hard X-rays and soft gamma rays in the 90 to 420 keV energy range into a high-purity germanium (HPGe) detector shielded from the direct line-of-sight of spent fuel. Several groups demonstrated that K-shell fluorescence lines of U and Pu in spent fuel could be detected with Ge detectors. In the field of hard X-ray optics, the performance of multilayer-coated reflective optics was demonstrated up to 645 keV at the European Synchrotron Radiation Facility. Initial measurements conducted at Oak Ridge National Laboratory with sealed sources and scoping experiments conducted at the ORNL Irradiated Fuels Examination Laboratory (IFEL) with spent nuclear fuel further demonstrated the pass-band properties of multilayer mirrors for reflecting specific emission lines into 1D and 2D HPGe detectors, respectively.

  5. Clay-based matrices incorporating radioactive silts: A case study of sediments from spent fuel pool

    NASA Astrophysics Data System (ADS)

    Antonenko, Mikhail; Myshkin, Vyacheslav; Grigoriev, Alexander; Chubreev, Dmitry

    2018-03-01

    Radioactive silt sediments from uranium reactors may be effectively and safely incorporated into ceramic compounds. The purpose of the paper is to determine the influence of composition and preparation conditions on the physicochemical and mechanical properties of clay-based matrices containing radioactive silt. Clay matrices were prepared from four minerals taken from Siberian regions, namely kaolin, loam, bentonite and red clay, and they incorporated radioactive silt sediments collected from the spent fuel pool of a uranium-graphite reactor. The rate of 137Cs leaching from the matrices of different compositions was studied. The results of the studies allowed determining the optimal compositions and preparation conditions of the matrices. It has been shown that red clay from the "Zykovskaya" quarry (Krasnoyarsk region, Russia) is preferable for use as a matrix for incorporating the silt sediments, compared to kaolin, loam and bentonite, due to its maximum tensile strength values and minimal change in ultimate compressive strength after irradiation, freezing and water exposure. Nevertheless, the 137Cs leaching rate of all studied composites did not exceed 10⁻³ g/(cm²·day).

  6. New Factorization Techniques and Fast Serial and Parrallel Algorithms for Operational Space Control of Robot Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Djouani, Karim; Fried, George; Pontnau, Jean

    1997-01-01

    In this paper, a new factorization technique is presented for computing the inverse of the mass matrix and the operational space mass matrix, as they arise in the implementation of the operational space control scheme.
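
    For reference, the quantities involved can be written directly in NumPy (explicit inverses for clarity only; the paper's factorization computes these quantities without forming them, and M, J, and the dimensions below are invented):

    ```python
    import numpy as np

    def operational_space_mass(M, J):
        """Operational space mass matrix: Lambda = (J M^-1 J^T)^-1,
        with M the joint-space mass matrix and J the task Jacobian."""
        Minv = np.linalg.inv(M)
        return np.linalg.inv(J @ Minv @ J.T)

    rng = np.random.default_rng(0)
    Q = rng.random((7, 7))
    M = Q @ Q.T + 7 * np.eye(7)   # made-up SPD mass matrix, 7-DOF arm
    J = rng.random((6, 7))        # made-up 6-D task Jacobian
    Lam = operational_space_mass(M, J)
    ```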

  7. The Progress of US Hospitals in Addressing Community Health Needs.

    PubMed

    Cramer, Geri Rosen; Singh, Simone R; Flaherty, Stephen; Young, Gary J

    2017-02-01

    To identify how US tax-exempt hospitals are progressing in regard to community health needs assessment (CHNA) implementation following the Patient Protection and Affordable Care Act. We analyzed data on more than 1500 tax-exempt hospitals in 2013 to assess patterns in CHNA implementation and to determine whether a hospital's institutional and community characteristics are associated with greater progress. Our findings show wide variation among hospitals in CHNA implementation. Hospitals operating as part of a health system as well as hospitals participating in a Medicare accountable care organization showed greater progress in CHNA implementation whereas hospitals serving a greater proportion of uninsured showed less progress. We also found that hospitals reporting the highest level of CHNA implementation progress spent more on community health improvement. Hospitals widely embraced the regulations to perform a CHNA. Less is known about how hospitals are moving forward to improve population health through the implementation of programs to meet identified community needs.

  8. Implementing an Integrated Commitment Management System at the Savannah River Site Tank Farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, A.

    1999-06-16

    Recently, the Savannah River Site Tank Farms have been transitioning from pre-1990 Authorization Basis requirements to new 5480.22/.23 requirements. Implementation of the new Authorization Basis has resulted in more detailed requirements, a completely new set of implementing procedures, and the expectation of even more disciplined operations. Key to the success of this implementation has been the development of an Integrated Commitment Management System (ICMS) by Westinghouse Safety Management Solutions. The ICMS has two elements: the Authorization Commitment Matrix (ACM) and a Procedure Consistency Review methodology. The Authorization Commitment Matrix is a linking database which ties requirements and implementing documents together. The associated Procedure Consistency Review process ensures that the procedures to be credited in the ACM do in fact correctly and completely meet all intended commitments. This Integrated Commitment Management System helps Westinghouse Safety Management Solutions and the facility operations and engineering organizations take ownership of the implementation of the requirements that have been developed.

  9. TH-EF-207A-03: Photon Counting Implementation Challenges Using An Electron Multiplying Charged-Coupled Device Based Micro-CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podgorsak, A; Bednarek, D; Rudin, S

    2016-06-15

    Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charged-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence. The average matrix was used for the offset correction while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection and 360 projections were acquired for each object. The system was used to scan various objects followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications in clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.
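
    A minimal sketch of the dark-field calibration step described above. The per-pixel rule mean + k·sigma is an assumption for illustration; the abstract only states that the average matrix supplies the offset correction and the variance matrix sets the individual thresholds:

        import numpy as np

        def build_threshold_map(dark_frames, k=3.0):
            """dark_frames: (n_frames, rows, cols) array acquired with no x rays."""
            offset = dark_frames.mean(axis=0)        # average matrix: offset correction
            sigma = dark_frames.std(axis=0)          # square root of the variance matrix
            return offset, offset + k * sigma        # per-pixel counting thresholds

        def count_events(frame, threshold):
            """Binary photon-counting decision for every pixel of one frame."""
            return (frame > threshold).astype(np.uint16)

        # Synthetic example: 60 dark fields of a 64 x 64 detector, then one
        # projection formed by summing 300 photon-counting frames.
        rng = np.random.default_rng(1)
        darks = rng.normal(100.0, 5.0, size=(60, 64, 64))
        offset, thresh = build_threshold_map(darks)
        frames = rng.normal(100.0, 5.0, size=(300, 64, 64))
        projection = sum(count_events(f, thresh) for f in frames)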

  10. Systematically evaluating the impact of diagnosis-related groups (DRGs) on health care delivery: a matrix of ethical implications.

    PubMed

    Fourie, Carina; Biller-Andorno, Nikola; Wild, Verina

    2014-04-01

    Swiss hospitals were required to implement a prospective payment system for reimbursement using a diagnosis-related groups (DRGs) classification system by the beginning of 2012. Reforms to a health care system should be assessed for their impact, including their impact on ethically relevant factors. Over a number of years and in a number of countries, questions have been raised in the literature about the ethical implications of the implementation of DRGs. However, despite this, researchers have not attempted to identify the major ethical issues associated with DRGs systematically. To address this gap in the literature, we have developed a matrix for identifying the ethical implications of the implementation of DRGs. It was developed using a literature review, and empirical studies on DRGs, as well as a review and analysis of existing ethics frameworks. The matrix consists of the ethically relevant parameters of health care systems on which DRGs are likely to have an impact; the ethical values underlying these parameters; and examples of specific research questions associated with DRGs to illustrate how the matrix can be applied. While the matrix has been developed in light of the Swiss health care reform, it could be used as a basis for identifying the ethical implications of DRG-based systems worldwide and for highlighting the ethical implications of other kinds of provider payment systems (PPS). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Showcasing Chemical Engineering Principles through the Production of Biodiesel from Spent Coffee Grounds

    ERIC Educational Resources Information Center

    Bendall, Sophie; Birdsall-Wilson, Max; Jenkins, Rhodri; Chew, Y. M. John; Chuck, Christopher J.

    2015-01-01

    Chemical engineering is rarely encountered before higher-level education in the U.S. or in Europe, leaving prospective students unaware of what an applied chemistry or chemical engineering degree entails. In this lab experiment, we report the implementation of a three-day course to showcase chemical engineering principles for 16-17 year olds…

  12. 77 FR 76871 - Approval and Promulgation of Implementation Plans; State of Colorado; Regional Haze State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-31

    ... nitrogen oxides. xiii. The initials NPS mean or refer to National Park Service. xiv. The initials PM 2.5..., nitrogen deposition, and mercury emissions and deposition. The State spent considerable time and conducted sequential and extended hearings to develop a plan which seeks to balance a number of variables beyond those...

  13. Designing a Blended Course: Using ADDIE to Guide Instructional Design

    ERIC Educational Resources Information Center

    Shibley, Ike; Amaral, Katie E.; Shank, John D.; Shibley, Lisa R.

    2011-01-01

    The ADDIE (analysis, design, development, implementation, and evaluation) model was applied to help redesign a General Chemistry course to improve student success in the course. A team of six professionals spent 18 months and over 1,000 man-hours in the redesign. The resultant course is a blend of online and face-to-face instruction that utilizes…

  14. I've Seen the Future, and It's Surprisingly Cheap!

    ERIC Educational Resources Information Center

    Reynolds, Veronica

    2011-01-01

    In these difficult economic times, everyone is on the lookout for savings. This author, an adult services librarian at New City Library in New York, has spent the last two years implementing small-scale open source and freeware replacements where proprietary or paper solutions once ruled. While none of these projects is individually revolutionary,…

  15. "I Spent 1-1/2 Hours Sifting through One Large Box...": Diaries as Information Behavior of the Archives User: Lessons Learned.

    ERIC Educational Resources Information Center

    Toms, Elaine G.; Duff, Wendy

    2002-01-01

    This article describes how diaries were implemented in a study of the use of archives and archival finding aids by history graduate students. The issues concerning diary use as a data collection technique are discussed as well as the different types of diaries. (Author)

  16. The Wheels on the Bot Go Round and Round: Robotics Curriculum in Pre-Kindergarten

    ERIC Educational Resources Information Center

    Sullivan, Amanda; Kazakoff, Elizabeth R.; Bers, Marina Umaschi

    2013-01-01

    This paper qualitatively examines the implementation of an intensive weeklong robotics curriculum in three Pre-Kindergarten classrooms (N = 37) at an early childhood STEM (science, technology, engineering, and math) focused magnet school in the Harlem area of New York City. Children at the school spent one week participating in computer…

  17. Positive Outcomes Increase over Time with the Implementation of a Semiflipped Teaching Model

    ERIC Educational Resources Information Center

    Gorres-Martens, Brittany K.; Segovia, Angela R.; Pfefer, Mark T.

    2016-01-01

    The flipped teaching model can engage students in the learning process and improve learning outcomes. The purpose of the present study was to assess the outcomes of a semiflipped teaching model over time. Neurophysiology students spent the majority of class time listening to traditional didactic lectures, but they also listened to 5 online…

  18. Transforming a Business Statistics Course with Just-in-Time Teaching

    ERIC Educational Resources Information Center

    Bangs, Joann

    2012-01-01

    This paper describes changing the way a business statistics course is taught through the use of just-in-time teaching methods. Implementing this method allowed for more time in the class to be spent focused on problem solving, resulting in students being able to handle more difficult problems. Students' perceptions of the just-in-time assignments…

  19. The Influence of Learning Management Technology to Student's Learning Outcome

    ERIC Educational Resources Information Center

    Adi Sucipto, Taufiq Lilo; Efendi, Agus; Hanif, Husni Nadya; Budiyanto, Cucuk

    2017-01-01

    The study examines the influence of learning management systems to the implementation of flipped classroom model in a vocational school in Indonesia. The flipped classroom is a relatively new educational model that inverts students' time to study on lectures and time spent on homework. Despite studies have been conducted on the model, few…

  20. The Role of Children's Books in Classroom Discourse and Pedagogy

    ERIC Educational Resources Information Center

    Bailey, Deborah

    2014-01-01

    This qualitative case study explored the factors that contributed to how typical book reading practices and the time spent on book reading were implemented during the school day. Previous research has shown the importance of reading books to young children and that providing access to books has led to reading growth and progress. However, little…

  1. First Human Brain Imaging by the jPET-D4 Prototype With a Pre-Computed System Matrix

    NASA Astrophysics Data System (ADS)

    Yamaya, Taiga; Yoshida, Eiji; Obi, Takashi; Ito, Hiroshi; Yoshikawa, Kyosan; Murayama, Hideo

    2008-10-01

    The jPET-D4 is a novel brain PET scanner which aims to achieve not only high spatial resolution but also high scanner sensitivity by using 4-layer depth-of-interaction (DOI) information. The dimensions of a system matrix for the jPET-D4 are 3.3 billion (lines-of-response) times 5 million (image elements) when a standard field-of-view (FOV) of 25 cm diameter is sampled with (1.5 mm)³ voxels. The size of the system matrix is estimated as 117 petabytes (PB) at 8 bytes per element. An on-the-fly calculation is usually used to deal with such a huge system matrix. However, the calculation time inevitably grows as the accuracy of the system modeling improves. In this work, we implemented an alternative approach based on pre-calculation of the system matrix. A histogram-based 3D OS-EM algorithm was implemented on a desktop workstation with 32 GB of memory installed. The 117 PB system matrix was compressed to fit within the limited amount of computer memory by (1) eliminating zero elements, (2) applying the DOI compression (DOIC) method, and (3) applying rotational symmetry and an axial shift property of the crystal arrangement. Spanning, which degrades axial resolution, was not applied. The system modeling and the DOIC method, which had been validated in 2D image reconstruction, were expanded into a 3D implementation. In particular, a new system model including the DOIC transformation was introduced to suppress resolution loss caused by the DOIC method. Experimental results showed that the jPET-D4 has almost uniform spatial resolution of better than 3 mm over the FOV. Finally, the first human brain images were obtained with the jPET-D4.

  2. Coffee Grounds to Multifunctional Quantum Dots: Extreme Nanoenhancers of Polymer Biocomposites.

    PubMed

    Xu, Huan; Xie, Lan; Li, Jinlai; Hakkarainen, Minna

    2017-08-23

    Central to the design and execution of nanocomposite strategies is the invention of polymer-affinitive and multifunctional nanoreinforcements amenable to economically viable processing. Here, a microwave-assisted approach enabled gram-scale fabrication of polymer-affinitive luminescent quantum dots (QDs) from spent coffee grounds. The ultrasmall dimensions (approaching 20 nm), coupled with a richness of diverse oxygen functional groups, conferred the zero-dimensional QDs with proper exfoliation and uniform dispersion in a poly(l-lactic acid) (PLLA) matrix. The unique optical properties of the QDs were inherited by the PLLA nanocomposites, giving intensive luminescence and high visible transparency, as well as a nearly 100% UV-blocking ratio in the full-UV region at only 0.5 wt % QDs. The strong anchoring of PLLA chains at the nanoscale surfaces of the QDs facilitated PLLA crystallization, which was accompanied by substantial improvements in thermomechanical and tensile properties. With 1 wt % QDs, for example, the storage modulus at 100 °C and the tensile strength increased by over 2500% and 69% compared to those of pure PLLA (4 and 57.3 MPa), respectively. The QD-enabled energy-dissipating and flexibility-imparting mechanisms upon tensile deformation, including the generation of numerous shear bands, crazing, and nanofibrillation, gave an unusual combination of elasticity and extensibility for the PLLA nanocomposites. This paves the way to biowaste-derived nanodots with high affinity to polymers for elegant implementation of distinct light management and extreme nanoreinforcement in an ecofriendly manner.

  3. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Zhen, E-mail: Zhen.Tian@UTSouthwestern.edu, E-mail: Xun.Jia@UTSouthwestern.edu, E-mail: Steve.Jiang@UTSouthwestern.edu; Folkerts, Michael; Tan, Jun

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row (CSR) format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is then used to validate the authors' method. The authors also compare their multi-GPU implementation with three different single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H and N patient cases and three prostate cases are used to demonstrate the advantages of the authors' method. Results: The authors' multi-GPU implementation can finish the optimization process within ∼1 min for the H and N patient case. S1 leads to an inferior plan quality although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23–46 s. Conversely, to obtain clinically comparable or acceptable plans for all six of the VMAT cases tested in this paper, the optimization time needed in a commercial TPS system on CPU was found to be on the order of several minutes. Conclusions: The results demonstrate that the multi-GPU implementation of the authors' column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors' study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
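
    A sketch of the DDC storage scheme described in the Methods, using SciPy: keep the sparse matrix in COO format on the CPU, split it row-wise into four beam-angle groups, and convert each piece to CSR (the form each GPU would receive). The sizes, the random contents, and the assumption that beamlets are ordered by angle are all invented for illustration:

        import numpy as np
        import scipy.sparse as sp

        n_beamlets, n_voxels, nnz = 8000, 50000, 200000
        rng = np.random.default_rng(2)
        rows = rng.integers(0, n_beamlets, size=nnz)
        cols = rng.integers(0, n_voxels, size=nnz)
        vals = rng.random(nnz)

        # Sparse DDC matrix kept in COO format on the CPU.
        ddc = sp.coo_matrix((vals, (rows, cols)), shape=(n_beamlets, n_voxels))

        # Assuming beamlets are ordered by beam angle, split row-wise into four
        # angle groups and convert each to CSR, one block per GPU.
        bounds = np.linspace(0, n_beamlets, 5, dtype=int)
        csr = ddc.tocsr()
        per_gpu = [csr[bounds[i]:bounds[i + 1]] for i in range(4)]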

  4. An Implementation Model for Integrated Learning Systems.

    ERIC Educational Resources Information Center

    Mills, Steven C.; Ragan, Tillman R.

    This paper describes the development, validation, and research application of the Computer-Delivered Instruction Configuration Matrix (CDICM), an instrument for evaluating the implementation of Integrated Learning Systems (ILS). The CDICM consists of a 15-item checklist, describing the major components of implementation of ILS technology, to be…

  5. The impact of interoperability of electronic health records on ambulatory physician practices: a discrete-event simulation study.

    PubMed

    Zhou, Yuan; Ancker, Jessica S; Upadhye, Mandar; McGeorge, Nicolette M; Guarrera, Theresa K; Hegde, Sudeep; Crane, Peter W; Fairbanks, Rollin J; Bisantz, Ann M; Kaushal, Rainu; Lin, Li

    2013-01-01

    The effect of health information technology (HIT) on efficiency and workload among clinical and nonclinical staff has been debated, with conflicting evidence about whether electronic health records (EHRs) increase or decrease effort. No study to date, however, has examined the effect of interoperability quantitatively using discrete event simulation techniques. To estimate the impact of EHR systems with various levels of interoperability on day-to-day tasks and operations of ambulatory physician offices. Interviews and observations were used to collect workflow data from 12 adult primary and specialty practices. A discrete event simulation model was constructed to represent patient flows and clinical and administrative tasks of physicians and staff members. High levels of EHR interoperability were associated with reduced time spent by providers on four tasks: preparing lab reports, requesting lab orders, prescribing medications, and writing referrals. The implementation of an EHR was associated with less time spent by administrators but more time spent by physicians, compared with time spent at paper-based practices. In addition, the presence of EHRs and of interoperability did not significantly affect the time usage of registered nurses or the total visit time and waiting time of patients. This study suggests that the impact of using HIT on clinical and nonclinical staff work efficiency varies; overall, however, it appears to improve time efficiency more for administrators than for physicians and nurses.

  6. BWR Spent Nuclear Fuel Integrity Research and Development Survey for UKABWR Spent Fuel Interim Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevard, Bruce Balkcom; Mertyurek, Ugur; Belles, Randy

    The objective of this report is to identify issues and support documentation and identify and detail existing research on spent fuel dry storage; provide information to support potential R&D for the UKABWR (United Kingdom Advanced Boiling Water Reactor) Spent Fuel Interim Storage (SFIS) Pre-Construction Safety Report; and support development of answers to questions developed by the regulator. Where there are gaps or insufficient data, Oak Ridge National Laboratory (ORNL) has summarized the research planned to provide the necessary data along with the schedule for the research, if known. Spent nuclear fuel (SNF) from nuclear power plants has historically been stored on site (wet) in spent fuel pools pending ultimate disposition. Nuclear power users (countries, utilities, vendors) are developing a suite of options and set of supporting analyses that will enable future informed choices about how best to manage these materials. As part of that effort, they are beginning to lay the groundwork for implementing longer-term interim storage of the SNF and the Greater Than Class C (GTCC) waste (dry). Deploying dry storage will require a number of technical issues to be addressed. For the past 4-5 years, ORNL has been supporting the U.S. Department of Energy (DOE) in identifying these key technical issues, managing the collection of data to be used in issue resolution, and identifying gaps in the needed data. During this effort, ORNL subject matter experts (SMEs) have become expert in understanding what information is publicly available and what gaps in data remain. To ensure the safety of the spent fuel under normal and frequent conditions of wet and subsequent dry storage, intact fuel must be shown to: 1. maintain fuel cladding integrity; 2. maintain its geometry for cooling, shielding, and subcriticality; and 3. maintain retrievability; damaged fuel with pinhole or hairline cracks must be shown not to degrade further. Where PWR (pressurized water reactor) information is utilized or referenced, justification has been provided as to why the data can be utilized for BWR fuel.

  7. Hydrogen suppresses UO 2 corrosion

    NASA Astrophysics Data System (ADS)

    Carbol, Paul; Fors, Patrik; Gouder, Thomas; Spahiu, Kastriot

    2009-08-01

    Release of long-lived radionuclides such as plutonium and caesium from spent nuclear fuel in deep geological repositories will depend mainly on the dissolution rate of the UO2 fuel matrix. This dissolution rate will, in turn, depend on the redox conditions at the fuel surface. Under oxidative conditions UO2 will be oxidised to the 1000 times more soluble UO2.67. This may occur in a repository as the reducing deep groundwater becomes locally oxidative at the fuel surface under the effect of α-radiolysis, the process by which α-particles emitted from the fuel split water molecules. On the other hand, the groundwater corrodes canister iron, generating large amounts of hydrogen. The role of molecular hydrogen as reductant in a deep bedrock repository is questioned. Here we show evidence of a surface-catalysed reaction, taking place in the H2-UO2-H2O system, where molecular hydrogen is able to reduce oxidants originating from α-radiolysis. In our experiment the UO2 surface remained stoichiometric, proving that the expected oxidation of UO2.00 to UO2.67 due to radiolytic oxidants was absent. As a consequence, the dissolution of UO2 stopped when equilibrium was reached between the solid phase and U4+ species in the aqueous phase. The steady-state concentration of uranium in solution was determined to be 9 × 10⁻¹² M, about 30 times lower than previously reported for reducing conditions. Our findings show that fuel dissolution is suppressed by H2. Consequently, radiotoxic nuclides in spent nuclear fuel will remain immobilised in the UO2 matrix. A mechanism for the surface-catalysed reaction between molecular hydrogen and radiolytic oxidants is proposed.

  8. A broadband 8-18GHz 4-input 4-output Butler matrix

    NASA Astrophysics Data System (ADS)

    Milner, Leigh; Parker, Michael

    2007-01-01

    Butler matrices can be used in antenna beam-forming networks to provide a linear phase distribution across the elements of an array. The development of an 8 to 18 GHz micro-strip implementation of a 4-input 4-output Butler matrix is described. The designed Butler matrix uses March hybrids, Schiffman phase shifters and wire-bond crossovers integrated on a single 60 mm × 70 mm alumina substrate.
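
    For intuition, an ideal N×N Butler matrix acts (up to port ordering and fixed phase offsets) like a unitary spatial DFT: exciting one input port produces equal-amplitude outputs with a constant phase step between adjacent elements. A hedged NumPy sketch of that idealization, not of the micro-strip design above:

        import numpy as np

        N = 4
        k, n = np.meshgrid(np.arange(N), np.arange(N))
        B = np.exp(-2j * np.pi * k * n / N) / np.sqrt(N)   # unitary DFT matrix

        # Exciting input port 1: equal output amplitudes with a constant
        # phase step of -2*pi/N (mod 2*pi) across the array.
        out = B @ np.eye(N)[:, 1]
        print(np.abs(out))      # equal amplitudes
        print(np.angle(out))    # linear phase distribution (wrapped)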

  9. spammpack, Version 2013-06-18

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-01-17

    This library is an implementation of the Sparse Approximate Matrix Multiplication (SpAMM) algorithm. It provides a matrix data type and an approximate matrix product which exhibits linear-scaling computational complexity for matrices with decay. The product error and the performance of the multiply can be tuned by choosing an appropriate tolerance. The library can be compiled for serial execution or for parallel execution on shared memory systems with an OpenMP capable compiler.
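
    The core SpAMM idea can be sketched in a few lines: recurse over quadtree blocks and skip any block product whose norm bound ‖A‖F·‖B‖F falls below the tolerance. A toy dense-array version for illustration (assuming square, power-of-two matrices), not the library's actual interface:

        import numpy as np

        def spamm(A, B, tol, leaf=32):
            """Toy SpAMM: skip quadrant products with small norm bounds."""
            n = A.shape[0]                     # square, power-of-two matrices assumed
            if np.linalg.norm(A) * np.linalg.norm(B) <= tol:
                return np.zeros((n, n))        # product provably small: skip it
            if n <= leaf:
                return A @ B                   # dense multiply at the leaves
            h = n // 2
            C = np.zeros((n, n))
            for i in (0, 1):
                for j in (0, 1):
                    for k in (0, 1):
                        C[i*h:(i+1)*h, j*h:(j+1)*h] += spamm(
                            A[i*h:(i+1)*h, k*h:(k+1)*h],
                            B[k*h:(k+1)*h, j*h:(j+1)*h], tol, leaf)
            return C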

  10. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
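
    A small GF(2) sketch of the decimation-matrix construction: the companion matrix C of the feedback polynomial advances the LFSR state one step, so C raised to the (n*k)th power (mod 2) jumps the state n*k steps at once. The polynomial x⁴ + x + 1 is an arbitrary primitive example, not the patent's choice:

        import numpy as np

        def companion_gf2(taps, deg):
            """Companion matrix over GF(2); taps list the exponents with coefficient 1."""
            C = np.zeros((deg, deg), dtype=np.uint8)
            C[1:, :-1] = np.eye(deg - 1, dtype=np.uint8)   # shift structure
            for t in taps:
                C[t, -1] = 1                               # feedback column
            return C

        def matpow_gf2(M, e):
            """Square-and-multiply matrix exponentiation modulo 2."""
            R = np.eye(M.shape[0], dtype=np.uint8)
            while e:
                if e & 1:
                    R = (R @ M) & 1
                M = (M @ M) & 1
                e >>= 1
            return R

        C = companion_gf2(taps=[0, 1], deg=4)   # x^4 + x + 1 (primitive)
        D = matpow_gf2(C, 8)                    # decimation matrix for n*k = 8
        state = np.array([1, 0, 0, 0], dtype=np.uint8)
        jumped = (D @ state) & 1                # state advanced 8 steps at once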

  11. A Parallel Framework with Block Matrices of a Discrete Fourier Transform for Vector-Valued Discrete-Time Signals.

    PubMed

    Soto-Quiros, Pablo

    2015-01-01

    This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to the analysis, design, and implementation of parallel algorithms on multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to minimize the high execution time. Additionally, speedup increases as the number of logical processors and the length of the signal increase.

  12. Leadership's Role in Support of Online Academic Programs: Implementing an Administrative Support Matrix

    PubMed Central

    Barefield, Amanda C.; Meyer, John D.

    2013-01-01

    The proliferation of online education programs creates a myriad of challenges for those charged with implementation and delivery of these programs. Although creating and sustaining quality education is a shared responsibility of faculty, staff, and academic leaders, this article focuses on the pivotal role of leadership in securing the necessary resources, developing the organizational structures, and influencing organizational culture. The vital foundation for a successful outcome when implementing online education programs is the role of leadership in providing adequate and appropriate support. Abundant literature extols the roles of leadership in project management; however, there is a dearth of models or systematic methods for leaders to follow regarding how to implement and sustain online programs. Research conducted by the authors culminated in the development of an Administrative Support Matrix, thus addressing the current gap in the literature. PMID:23346030

  13. Performance assessment of self-interrogation neutron resonance densitometry for spent nuclear fuel assay

    NASA Astrophysics Data System (ADS)

    Hu, Jianwei; Tobin, Stephen J.; LaFleur, Adrienne M.; Menlove, Howard O.; Swinhoe, Martyn T.

    2013-11-01

    Self-Interrogation Neutron Resonance Densitometry (SINRD) is one of several nondestructive assay (NDA) techniques being integrated into systems to measure spent fuel as part of the Next Generation Safeguards Initiative (NGSI) Spent Fuel Project. The NGSI Spent Fuel Project is sponsored by the US Department of Energy's National Nuclear Security Administration to measure plutonium in, and detect diversion of fuel pins from, spent nuclear fuel assemblies. SINRD shows promising capability in determining the 239Pu and 235U content in spent fuel. SINRD is a relatively low-cost and lightweight instrument, and it is easy to implement in the field. The technique makes use of the passive neutron source existing in a spent fuel assembly, and it uses ratios between the count rates collected in fission chambers that are covered with different absorbing materials. These ratios are correlated to key attributes of the spent fuel assembly, such as the total mass of 239Pu and 235U. Using count rate ratios instead of absolute count rates makes SINRD less vulnerable to systematic uncertainties. Building upon the previous research, this work focuses on the underlying physics of the SINRD technique: quantifying the individual impacts on the count rate ratios of a few important nuclides using the perturbation method; examining new correlations between count rate ratio and mass quantities based on the results of the perturbation study; quantifying the impacts on the energy windows of the filtering materials that cover the fission chambers by tallying the neutron spectra before and after the neutrons go through the filters; and identifying the most important nuclides that cause cooling-time variations in the count rate ratios. The results of these studies show that 235U content has a major impact on the SINRD signal in addition to the 239Pu content. Plutonium-241 and 241Am are the two main nuclides responsible for the variation in the count rate ratio with cooling time. In short, this work provides insights into some of the main factors that affect the performance of SINRD, and it should help improve the hardware design and the algorithm used to interpret the signal for the SINRD technique. In addition, the modeling and simulation techniques used in this work can be easily adopted for analysis of other NDA systems, especially when complex systems like spent nuclear fuel are involved. These studies were conducted at Los Alamos National Laboratory.

  14. Compressed sensing of hyperspectral images based on scrambled block Hadamard ensemble

    NASA Astrophysics Data System (ADS)

    Wang, Li; Feng, Yan

    2016-11-01

    A fast measurement matrix based on scrambled block Hadamard ensemble for compressed sensing (CS) of hyperspectral images (HSI) is investigated. The proposed measurement matrix offers several attractive features. First, the proposed measurement matrix possesses Gaussian behavior, which illustrates that the matrix is universal and requires a near-optimal number of samples for exact reconstruction. In addition, it could be easily implemented in the optical domain due to its integer-valued elements. More importantly, the measurement matrix only needs small memory for storage in the sampling process. Experimental results on HSIs reveal that the reconstruction performance of the proposed measurement matrix is comparable or better than Gaussian matrix and Bernoulli matrix using different reconstruction algorithms while consuming less computational time. The proposed matrix could be used in CS of HSI, which would save the storage memory on board, improve the sampling efficiency, and ameliorate the reconstruction quality.
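
    A sketch of how a scrambled block Hadamard ensemble is commonly constructed: randomly permute the signal entries, apply a block-diagonal Hadamard operator, and keep a random subset of rows. Sizes are toy values, and the dense matrices stand in for what would be a fast transform in practice:

        import numpy as np
        from scipy.linalg import hadamard

        def sbhe(m, n, block=32, seed=0):
            """m x n scrambled block Hadamard measurement matrix (dense toy version)."""
            rng = np.random.default_rng(seed)
            H = hadamard(block) / np.sqrt(block)          # orthonormal Hadamard block
            W = np.kron(np.eye(n // block), H)            # block-diagonal operator
            Wr = W[rng.choice(n, size=m, replace=False)]  # random row selection
            perm = rng.permutation(n)                     # column scrambling
            Phi = np.empty_like(Wr)
            Phi[:, perm] = Wr                             # Phi @ x == Wr @ x[perm]
            return Phi

        Phi = sbhe(m=128, n=1024)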

  15. PAH Spectroscopy: Past, Present and Future

    NASA Technical Reports Server (NTRS)

    Mattioda, Andrew

    2016-01-01

    Since their discovery in the 1970's, astronomers, astrophysicists and astrochemists have been intrigued by the nearly ubiquitous unidentified infrared emission (UIR) bands. In the 1980's, investigators determined the most probable source of these emissions was a family of molecules known as Polycyclic Aromatic Hydrocarbons, or simply PAHs. In order to better understand these interstellar IR features and utilize them as chemical probes of the cosmos, laboratory spectroscopists have spent the last three decades investigating the spectroscopy of PAHs under astrophysically relevant conditions. This presentation will discuss the similarities and differences in the spectroscopic properties of PAHs as one goes from the far- to mid- to near-infrared wavelength regions, and probe the changes observed in PAH spectra as they go from neutral to ionized molecules suspended in an inert gas matrix, to PAHs in a water ice matrix and as a thin film. In selected instances, the experimental results will be compared to theoretical values. The presentation will conclude with a discussion on the future directions of PAH spectroscopy.

  16. The impact of a Critical Care Information System (CCIS) on time spent charting and in direct patient care by staff in the ICU: a review of the literature.

    PubMed

    Mador, Rebecca L; Shaw, Nicola T

    2009-07-01

    The introduction of a Critical Care Information System (CCIS) into an intensive care unit (ICU) is purported to reduce the time health care providers (HCP) spend on documentation and increase the time available for direct patient care. However, there is a paucity of rigorous empirical research that has investigated these assertions. Moreover, those studies that have sought to elucidate the relationship between the introduction of a CCIS and the time spent by staff on in/direct patient care activities have published contradictory findings. The objective of this literature review is to establish the impact of a CCIS on time spent documenting and in direct patient care by staff in the ICU. Five electronic databases were searched including PubMed Central, EMBASE, CINAHL, IEEE Xplore, and the Cochrane Database of Systematic Reviews. Reference lists of all published papers were hand searched, and citations reviewed to identify extra papers. We included studies that were empirical articles, published in English, and provided original data on the impact of a CCIS on time spent documenting and in direct patient care by staff in the ICU. In total, 12 articles met the inclusion criteria. Workflow analysis (66%) and time-and-motion analysis (25%) were the most common forms of data collection. Three (25%) studies found an increase in time spent charting, five (42%) found no difference, and four (33%) studies reported a decrease. Results on the impact of a CCIS on direct patient care were similarly inconclusive. Due to the discrepant findings and several key methodological issues, the impact of a CCIS on time spent charting and in direct patient care remains unclear. This review highlights the need for an increase in rigorous empirical research in this area and provides recommendations for the design and implementation of future studies.

  17. A parallel algorithm for Hamiltonian matrix construction in electron-molecule collision calculations: MPI-SCATCI

    NASA Astrophysics Data System (ADS)

    Al-Refaie, Ahmed F.; Tennyson, Jonathan

    2017-12-01

    Construction and diagonalization of the Hamiltonian matrix is the rate-limiting step in most low-energy electron-molecule collision calculations. Tennyson (1996) implemented a novel algorithm for Hamiltonian construction which took advantage of the structure of the wavefunction in such calculations. This algorithm is re-engineered to make use of modern computer architectures and the use of appropriate diagonalizers is considered. Test calculations demonstrate that significant speed-ups can be gained using multiple CPUs. This opens the way to calculations which consider higher collision energies, larger molecules and/or more target states. The methodology, which is implemented as part of the UK molecular R-matrix codes (UKRMol and UKRMol+), can also be used for studies of bound molecular Rydberg states, photoionization and positron-molecule collisions.

  18. ppcor: An R Package for a Fast Calculation to Semi-partial Correlation Coefficients.

    PubMed

    Kim, Seongho

    2015-11-01

    Lack of a general matrix formula hampers implementation of the semi-partial correlation, also known as part correlation, at higher orders. This is because the higher-order semi-partial correlation calculation using a recursive formula requires an enormous number of recursive calculations to obtain the correlation coefficients. To resolve this difficulty, we derive a general matrix formula of the semi-partial correlation for fast computation. The semi-partial correlations are then implemented in the R package ppcor along with the partial correlation. Owing to the general matrix formulas, users can readily calculate the coefficients of both partial and semi-partial correlations without computational burden. The package ppcor further provides users with the statistical significance level along with the corresponding test statistic.
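
    For the partial-correlation half of the package, the matrix route that avoids recursion is standard: invert the covariance matrix to get the precision matrix P, then pcor(i,j) = -P(i,j)/√(P(i,i)·P(j,j)). A minimal sketch (the paper's analogous closed form for semi-partial correlations is not reproduced here):

        import numpy as np

        def pcor(X):
            """X: (n_samples, n_vars). Partial correlation of each pair given the rest."""
            P = np.linalg.inv(np.cov(X, rowvar=False))    # precision matrix
            d = np.sqrt(np.diag(P))
            R = -P / np.outer(d, d)
            np.fill_diagonal(R, 1.0)
            return R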

  19. Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.

    2003-02-01

    Multi-conjugate adaptive optics (MCAO) systems with 10⁴-10⁵ degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10⁴ actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
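
    A generic preconditioned conjugate gradient skeleton, with the preconditioner left as an abstract callable; in the paper it is a multigrid cycle with a layer-oriented block symmetric Gauss-Seidel smoother. A sketch, not the authors' implementation:

        import numpy as np

        def pcg(apply_A, b, M_inv, tol=1e-8, maxiter=200):
            """Preconditioned CG; apply_A and M_inv are matrix-free callables."""
            x = np.zeros_like(b)
            r = b - apply_A(x)
            z = M_inv(r)                       # e.g., one multigrid cycle
            p = z.copy()
            rz = r @ z
            for _ in range(maxiter):
                Ap = apply_A(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                    break
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x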

  20. On the effective implementation of a boundary element code on graphics processing units using an out-of-core LU algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Azevedo, Ed F; Nintcheu Fata, Sylvain

    2012-01-01

    A collocation boundary element code for solving the three-dimensional Laplace equation, publicly available from http://www.intetec.org, has been adapted to run on an Nvidia Tesla general purpose graphics processing unit (GPU). Global matrix assembly and LU factorization of the resulting dense matrix were performed on the GPU. Out-of-core techniques were used to solve problems larger than available GPU memory. The code achieved over eight times speedup in matrix assembly and about 56 Gflops/sec in the LU factorization using only 512 Mbytes of GPU memory. Details of the GPU implementation and comparisons with the standard sequential algorithm are included to illustrate the performance of the GPU code.

  1. Singular value decomposition utilizing parallel algorithms on graphical processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotas, Charlotte W; Barhen, Jacob

    2011-01-01

    One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, C_x = (1/K) Σ_k X(k)X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining U, Σ, and V such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^HA, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is based on a two-step algorithm which bidiagonalizes the matrix using Householder transformations, and then diagonalizes the intermediate bidiagonal matrix through implicit QR shifts. This is similar to that implemented for real matrices by Lahabar and Narayanan ("Singular Value Decomposition on GPU using CUDA", IEEE International Parallel Distributed Processing Symposium 2009). The implementation is done in a hybrid manner, with the bidiagonalization stage done using the GPU while the diagonalization stage is done using the CPU, with the GPU used to update the U and V matrices. The second algorithm is based on a one-sided Jacobi scheme utilizing a sequence of pair-wise column orthogonalizations such that A is replaced by AV until the resulting matrix is sufficiently orthogonal (that is, equal to UΣ). V is obtained from the sequence of orthogonalizations, while Σ can be found from the square root of the diagonal elements of A^HA and, once Σ is known, U can be found by column scaling the resulting matrix. These implementations utilize CUDA Fortran and NVIDIA's CUBLAS library. The primary goal of this study is to quantify the comparative performance of these two techniques against themselves and other standard implementations (for example, MATLAB). Considering that there is significant overhead associated with transferring data to the GPU and with synchronization between the GPU and the host CPU, it is also important to understand when it is worthwhile to use the GPU in terms of the matrix size and the number of concurrent SVDs to be calculated.
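
    The second algorithm above is the classic one-sided Jacobi scheme. A compact sketch for the real case (the paper works with complex matrices): rotate column pairs until all pairs are orthogonal; column norms then give the singular values, the normalized columns give U, and the accumulated rotations give V. Assumes full column rank:

        import numpy as np

        def jacobi_svd(A, tol=1e-12, sweeps=30):
            """One-sided Jacobi SVD of a real matrix with full column rank."""
            U = np.array(A, dtype=float)
            n = U.shape[1]
            V = np.eye(n)
            for _ in range(sweeps):
                done = True
                for p in range(n - 1):
                    for q in range(p + 1, n):
                        apq = U[:, p] @ U[:, q]
                        app = U[:, p] @ U[:, p]
                        aqq = U[:, q] @ U[:, q]
                        if abs(apq) > tol * np.sqrt(app * aqq):
                            done = False
                            tau = (aqq - app) / (2.0 * apq)
                            t = (1.0 if tau >= 0 else -1.0) / (abs(tau) + np.hypot(1.0, tau))
                            c = 1.0 / np.hypot(1.0, t)
                            s = c * t
                            for M in (U, V):           # rotate columns p and q
                                tmp = M[:, p].copy()
                                M[:, p] = c * tmp - s * M[:, q]
                                M[:, q] = s * tmp + c * M[:, q]
                if done:
                    break
            sigma = np.linalg.norm(U, axis=0)          # singular values
            return U / sigma, sigma, V                 # A ~= U' diag(sigma) V^T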

  2. Hypercube matrix computation task

    NASA Technical Reports Server (NTRS)

    Calalo, Ruel H.; Imbriale, William A.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Lyons, James R.; Manshadi, Farzin; Patterson, Jean E.

    1988-01-01

    A major objective of the Hypercube Matrix Computation effort at the Jet Propulsion Laboratory (JPL) is to investigate the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Three scattering analysis codes are being implemented and assessed on a JPL/California Institute of Technology (Caltech) Mark 3 Hypercube. The codes, which utilize different underlying algorithms, give a means of evaluating the general applicability of this parallel architecture. The three analysis codes being implemented are a frequency domain method of moments code, a time domain finite difference code, and a frequency domain finite elements code. These analysis capabilities are being integrated into an electromagnetics interactive analysis workstation which can serve as a design tool for the construction of antennas and other radiating or scattering structures. The first two years of work on the Hypercube Matrix Computation effort is summarized. It includes both new developments and results as well as work previously reported in the Hypercube Matrix Computation Task: Final Report for 1986 to 1987 (JPL Publication 87-18).

  3. Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation

    NASA Astrophysics Data System (ADS)

    Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep

    2011-05-01

    This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field programmable gate array (FPGA) device in a modular approach which simplifies and eases hardware updates and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full-rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations, which indicate a 2 dB improvement in terms of SNR compared to LS estimation. Moreover, a complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation.
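
    The inversion-free least-squares idea can be illustrated with a QR factorization of the pilot matrix: solve min ‖Y − HX‖ by back-substitution rather than forming (XXᴴ)⁻¹. A hedged sketch with invented dimensions and pilots, not the paper's hardware factorization:

        import numpy as np

        rng = np.random.default_rng(3)
        n_tx, n_rx, T = 4, 4, 16               # invented dimensions
        X = rng.standard_normal((n_tx, T)) + 1j * rng.standard_normal((n_tx, T))
        H_true = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))
        noise = 0.01 * (rng.standard_normal((n_rx, T)) + 1j * rng.standard_normal((n_rx, T)))
        Y = H_true @ X + noise                 # received pilot observations

        # Y^T = X^T H^T: QR-factor the pilots and back-substitute;
        # no (X X^H)^-1 is ever formed.
        Q, R = np.linalg.qr(X.T)               # X^T = Q R
        H_ls = np.linalg.solve(R, Q.conj().T @ Y.T).T

        print(np.allclose(H_ls, H_true, atol=0.1))   # True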

  4. Design and experimental verification for optical module of optical vector-matrix multiplier.

    PubMed

    Zhu, Weiwei; Zhang, Lei; Lu, Yangyang; Zhou, Ping; Yang, Lin

    2013-06-20

    Optical computing is a new method to implement signal processing functions. The multiplication between a vector and a matrix is an important arithmetic algorithm in the signal processing domain. The optical vector-matrix multiplier (OVMM) is an optoelectronic system to carry out this operation, which consists of an electronic module and an optical module. In this paper, we propose an optical module for OVMM. To eliminate the cross talk and make full use of the optical elements, an elaborately designed structure that involves spherical lenses and cylindrical lenses is utilized in this optical system. The optical design software package ZEMAX is used to optimize the parameters and simulate the whole system. Finally, experimental data is obtained through experiments to evaluate the overall performance of the system. The results of both simulation and experiment indicate that the system constructed can implement the multiplication between a matrix with dimensions of 16 by 16 and a vector with a dimension of 16 successfully.

  5. Spatiotemporal matrix image formation for programmable ultrasound scanners

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean

    2018-02-01

    As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be obtained in a straightforward fashion for different types of probes and sequences with a small risk of error during implementation. In this work, we argue that as the computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulations software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality when compared against standard delay-and-sum approaches in phantoms and in vivo, and show that this approach can be used to form images even when using non-conventional probe designs for which adapted image formation algorithms are not readily available.
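
    Once a forward matrix A mapping image pixels to RF samples is available, image formation reduces to a linear inverse problem. A sketch with a random sparse stand-in for the simulated spatiotemporal matrix and a stock damped least-squares solver; the authors' actual matrix comes from validated ultrasound simulation software:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import lsqr

        n_rf, n_pix = 20000, 4096              # RF samples, 64 x 64 image
        A = sp.random(n_rf, n_pix, density=1e-3, format='csr', random_state=4)

        x_true = np.zeros(n_pix)
        x_true[[100, 2000, 3500]] = 1.0        # three point reflectors
        y = A @ x_true                         # simulated RF data

        x_hat = lsqr(A, y, damp=1e-3)[0]       # damped least-squares inversion
        img = x_hat.reshape(64, 64)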

  6. Lanczos algorithm with matrix product states for dynamical correlation functions

    NASA Astrophysics Data System (ADS)

    Dargel, P. E.; Wöllert, A.; Honecker, A.; McCulloch, I. P.; Schollwöck, U.; Pruschke, T.

    2012-05-01

    The density-matrix renormalization group (DMRG) algorithm can be adapted to the calculation of dynamical correlation functions in various ways which all represent compromises between computational efficiency and physical accuracy. In this paper we reconsider the oldest approach based on a suitable Lanczos-generated approximate basis and implement it using matrix product states (MPS) for the representation of the basis states. The direct use of matrix product states combined with an ex post reorthogonalization method allows us to avoid several shortcomings of the original approach, namely the multitargeting and the approximate representation of the Hamiltonian inherent in earlier Lanczos-method implementations in the DMRG framework, and to deal with the ghost problem of Lanczos methods, leading to a much better convergence of the spectral weights and poles. We present results for the dynamic spin structure factor of the spin-1/2 antiferromagnetic Heisenberg chain. A comparison to Bethe ansatz results in the thermodynamic limit reveals that the MPS-based Lanczos approach is much more accurate than earlier approaches at minor additional numerical cost.
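
    The underlying Lanczos recursion, shown with dense NumPy vectors standing in for the MPS representation: starting from a vector v0 relevant to the correlation function (e.g., an operator applied to the ground state), build an orthogonal Krylov basis in which H is tridiagonal; its eigenpairs give the poles and spectral weights. The paper's ex post reorthogonalization, which tames the ghost problem, is omitted:

        import numpy as np

        def lanczos(H, v0, m=50):
            """m-step Lanczos: returns diagonal, off-diagonal, and basis vectors."""
            n = len(v0)
            V = np.zeros((n, m))
            alpha, beta = np.zeros(m), np.zeros(m - 1)
            V[:, 0] = v0 / np.linalg.norm(v0)
            w = H @ V[:, 0]
            alpha[0] = V[:, 0] @ w
            w = w - alpha[0] * V[:, 0]
            for j in range(1, m):
                beta[j - 1] = np.linalg.norm(w)
                if beta[j - 1] < 1e-12:                  # invariant subspace found
                    return alpha[:j], beta[:j - 1], V[:, :j]
                V[:, j] = w / beta[j - 1]
                w = H @ V[:, j] - beta[j - 1] * V[:, j - 1]
                alpha[j] = V[:, j] @ w
                w = w - alpha[j] * V[:, j]
            return alpha, beta, V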

  7. Algorithm for optimizing bipolar interconnection weights with applications in associative memories and multitarget classification.

    PubMed

    Chang, S; Wong, K W; Zhang, W; Zhang, Y

    1999-08-10

    An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.
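
    For context, the baseline bipolar Hopfield weight matrix that such optimization algorithms improve upon is the Hebbian outer-product rule, with the nonnegativity bias mentioned above applied afterwards for the optical system. A hedged sketch, not the paper's optimization algorithm:

        import numpy as np

        patterns = np.array([[ 1, -1,  1, -1,  1,  1],
                             [-1, -1,  1,  1, -1,  1]], dtype=float)  # bipolar memories

        W = sum(np.outer(p, p) for p in patterns)
        np.fill_diagonal(W, 0.0)            # no self-connections

        W_optical = W - W.min()             # bias to a nonnegative weight matrix;
                                            # a threshold subchannel would remove
                                            # the constant bias term in real time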

  8. Algorithm for Optimizing Bipolar Interconnection Weights with Applications in Associative Memories and Multitarget Classification

    NASA Astrophysics Data System (ADS)

    Chang, Shengjiang; Wong, Kwok-Wo; Zhang, Wenwei; Zhang, Yanxin

    1999-08-01

    An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.

  9. The semantic architecture of the World-Wide Molecular Matrix (WWMM)

    PubMed Central

    2011-01-01

    The World-Wide Molecular Matrix (WWMM) is a ten year project to create a peer-to-peer (P2P) system for the publication and collection of chemical objects, including over 250,000 molecules. It has now been instantiated in a number of repositories which include data encoded in Chemical Markup Language (CML) and linked by URIs and RDF. The technical specification and implementation is now complete. We discuss the types of architecture required to implement nodes in the WWMM and consider the social issues involved in adoption. PMID:21999475

  10. The semantic architecture of the World-Wide Molecular Matrix (WWMM).

    PubMed

    Murray-Rust, Peter; Adams, Sam E; Downing, Jim; Townsend, Joe A; Zhang, Yong

    2011-10-14

    The World-Wide Molecular Matrix (WWMM) is a ten year project to create a peer-to-peer (P2P) system for the publication and collection of chemical objects, including over 250,000 molecules. It has now been instantiated in a number of repositories which include data encoded in Chemical Markup Language (CML) and linked by URIs and RDF. The technical specification and implementation is now complete. We discuss the types of architecture required to implement nodes in the WWMM and consider the social issues involved in adoption.

  11. Preliminary results in implementing a model of the world economy on the CYBER 205: A case of large sparse nonsymmetric linear equations

    NASA Technical Reports Server (NTRS)

    Szyld, D. B.

    1984-01-01

    A brief description of the Model of the World Economy implemented at the Institute for Economic Analysis is presented, together with our experience in converting the software to vector code. For each time period, the model is reduced to a linear system of over 2000 variables. The matrix of coefficients has a bordered block diagonal structure, and we show how some of the matrix operations can be carried out on all diagonal blocks at once.

  12. SAR Polarimetry

    NASA Technical Reports Server (NTRS)

    vanZyl, Jakob J.

    2012-01-01

    Radar Scattering includes: Surface Characteristics, Geometric Properties, Dielectric Properties, Rough Surface Scattering, Geometrical Optics and Small Perturbation Method Solutions, Integral Equation Method, Magellan Image of Pancake Domes on Venus, Dickinson Impact Crater on Venus (Magellan), Lakes on Titan (Cassini Radar, Longitudinal Dunes on Titan (Cassini Radar), Rough Surface Scattering: Effect of Dielectric Constant, Vegetation Scattering, Effect of Soil Moisture. Polarimetric Radar includes: Principles of Polarimetry: Field Descriptions, Wave Polarizations: Geometrical Representations, Definition of Ellipse Orientation Angles, Scatter as Polarization Transformer, Scattering Matrix, Coordinate Systems, Scattering Matrix, Covariance Matrix, Pauli Basis and Coherency Matrix, Polarization Synthesis, Polarimeter Implementation.

  13. Applying transpose matrix on advanced encryption standard (AES) for database content

    NASA Astrophysics Data System (ADS)

    Manurung, E. B. P.; Sitompul, O. S.; Suherman

    2018-03-01

    Advanced Encryption Standard (AES) is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST); it has been adopted by the U.S. government and is now used worldwide. This paper reports the impact of integrating a transpose matrix into AES. The transpose matrix is applied to AES as a first-stage ciphertext modification for text-based database security, so that confidentiality improves. The matrix also increases the avalanche effect of the cryptographic algorithm by 4% on average.
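
    The avalanche effect the abstract quantifies is commonly measured by flipping one plaintext bit and counting how many ciphertext bits change (ideally about half). A sketch using the pycryptodome AES in ECB mode; the paper's transpose-matrix modification itself is not reproduced here:

        import os
        from Crypto.Cipher import AES       # pycryptodome

        key = os.urandom(16)
        cipher = AES.new(key, AES.MODE_ECB)

        pt = bytearray(os.urandom(16))
        c1 = cipher.encrypt(bytes(pt))
        pt[0] ^= 0x01                       # flip a single plaintext bit
        c2 = cipher.encrypt(bytes(pt))

        changed = sum(bin(a ^ b).count('1') for a, b in zip(c1, c2))
        print(f"{changed}/128 ciphertext bits changed")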

  14. Comparison Of Models Of Metal-Matrix Composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.; Johnson, W. S.; Naik, R. A.

    1994-01-01

    Report presents a comparative review of four mathematical models of the micromechanical behavior of fiber/metal-matrix composite materials. The models differ in various details; all are based on the properties of the fiber and matrix constituent materials, all involve square arrays of continuous, parallel fibers, and all assume complete bonding between constituents. Computer programs implementing the models were used to predict properties and stress-vs.-strain behaviors of unidirectional and cross-ply laminated composites made of boron fibers in aluminum matrices and silicon carbide fibers in titanium matrices. Stresses in the fiber and matrix constituent materials were also predicted.

  15. Cost analysis for the implementation of a medication review with follow-up service in Spain.

    PubMed

    Noain, Aranzazu; Garcia-Cardenas, Victoria; Gastelurrutia, Miguel Angel; Malet-Larrea, Amaia; Martinez-Martinez, Fernando; Sabater-Hernandez, Daniel; Benrimoj, Shalom I

    2017-08-01

    Background Medication review with follow-up (MRF) is a professional pharmacy service proven to be cost-effective. Its broader implementation is limited, mainly due to the lack of evidence-based implementation programs that include economic and financial analysis. Objective To analyse the costs and estimate the price of providing and implementing MRF. Setting Community pharmacy in Spain. Method Elderly patients using polypharmacy received a community pharmacist-led MRF for 6 months. The cost analysis was based on the time-driven activity-based costing model and included the provider costs, initial investment costs and maintenance expenses. The service price was estimated using the labour costs, costs associated with service provision, potential number of patients receiving the service and mark-up. Main outcome measures Costs and potential price of MRF. Results A mean time of 404.4 (SD 232.2) was spent on service provision and was extrapolated to annual costs. Service provider cost per patient ranged from €196 (SD 90.5) to €310 (SD 164.4). The mean initial investment per pharmacy was €4,594 and the mean annual maintenance costs €3,068. The largest items contributing to cost were initial staff training, continuing education and renting of the patient counselling area. The potential service price ranged from €237 to €628 per patient a year. Conclusion Time spent by the service provider accounted for 75-95% of the final cost, followed by initial investment costs and maintenance costs. Remuneration for professional pharmacy services provision must cover service costs and appropriate profit, allowing for their long-term sustainability.

  16. Application of SOJA and InforMatrix in practice: interactive web and workshop tools.

    PubMed

    Brenninkmeijer, Rob; Janknegt, Robert

    2007-10-01

    System of Objectified Judgement Analysis (SOJA) and InforMatrix are decision-matrix techniques designed to support a rational selection of drugs. Both SOJA and InforMatrix can be considered as strategic tools in the practical implementation of rational pharmacotherapy. In order to apply the matrix techniques to drug selection, strategic navigation through essential information (with the aim of reaching consensus in pharmacotherapy) is required. The consensus has to be reached in an interactive, communicative, collegial manner, within a professional environment. This environment is realised in the form of interactive applications in workshops and on the internet. Such interactive applications are illustrated and discussed in this article.

  17. Matrix preconditioning: a robust operation for optical linear algebra processors.

    PubMed

    Ghosh, A; Paparao, P

    1987-07-15

    Analog electrooptical processors are best suited for applications demanding high computational throughput with tolerance for inaccuracies. Matrix preconditioning is one such application. Matrix preconditioning is a preprocessing step for reducing the condition number of a matrix and is used extensively with gradient algorithms for increasing the rate of convergence and improving the accuracy of the solution. In this paper, we describe a simple parallel algorithm for matrix preconditioning, which can be implemented efficiently on a pipelined optical linear algebra processor. From the results of our numerical experiments we show that the efficacy of the preconditioning algorithm is affected very little by the errors of the optical system.
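
    The simplest instance of the idea is diagonal (Jacobi) preconditioning, which suits an analog processor's tolerance for modest accuracy. A small NumPy sketch of symmetric diagonal scaling, which typically shrinks the condition number of a badly scaled symmetric positive definite matrix (the test matrix is ours):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        A = rng.standard_normal((n, n))
        A = A @ A.T + np.diag(10.0 ** rng.uniform(0, 4, n))  # SPD, badly scaled

        d = 1.0 / np.sqrt(np.diag(A))
        M = d[:, None] * A * d[None, :]          # D^(-1/2) A D^(-1/2)

        print("cond(A) =", np.linalg.cond(A))    # large
        print("cond(M) =", np.linalg.cond(M))    # typically far smaller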

  18. Educational Reform Implementation: A Co-Constructed Process. Research Report 5.

    ERIC Educational Resources Information Center

    Datnow, Amanda; Hubbard, Lea; Mehan, Hugh

    This research report argues for viewing the complex, often messy process of school reform implementation as a "conditional matrix" coupled with qualitative research. As illustration, two studies (of six reform efforts in one county and of implementation of an untracking program in Kentucky) are reported. Preliminary analysis reveals that…

  19. Establishing a Baseline: Community Benefit Spending by Not-for-Profit Hospitals Prior to Implementation of the Affordable Care Act

    PubMed Central

    Tung, Greg J.; Lindrooth, Richard C.; Johnson, Emily K.; Hardy, Rose; Castrucci, Brian C.

    2017-01-01

    Context: Community Benefit spending by not-for-profit hospitals has served as a critical, formalized part of the nation's safety net for almost 50 years. This has occurred mostly through charity care. This article examines how not-for-profit hospitals spent Community Benefit dollars prior to full implementation of the Affordable Care Act (ACA). Methods: Using data from 2009 to 2012 hospital tax and other governmental filings, we constructed national, hospital-referral-region, and facility-level estimates of Community Benefit spending. Data were collected in 2015 and analyzed in 2015 and 2016. Data were matched at the facility level for a non-profit hospital's IRS tax filings (Form 990, Schedule H) and CMS Hospital Cost Report Information System and Provider of Service data sets. Results: During 2009, hospitals spent about 8% of total operating expenses on Community Benefit. This increased to between 8.3% and 8.5% in 2012. The majority of spending (>80%) went toward charity care, unreimbursed Medicaid, and subsidized health services, with approximately 6% going toward both community health improvement and health professionals' education. By 2012, national spending on Community Benefit likely exceeded $60 billion. The largest hospital systems spent the vast majority of the nation's Community Benefit; the top 25% of systems spent more than 80 cents of every Community Benefit dollar. Discussion: Community Benefit spending has remained relatively steady as a proportion of total operating expenses and so has increased over time—although charity care remains the major focus of Community Benefit spending overall. Implications: More than $60 billion was spent on Community Benefit prior to implementation of the ACA. New reporting and spending requirements from the IRS, alongside changes by the ACA, are changing incentives for hospitals in how they spend Community Benefit dollars. In the short term, and especially the long term, hospital systems would do well to partner with public health, other social services, and even competing hospitals to invest in population-based activities. The mandated community health needs assessment process is a logical home for these sorts of collaborations. Relatively modest investments can improve the baseline level of health in their communities and make it easier to improve population health. Aside from a population health justification for a partnership model, a business case is necessary for widespread adoption of this approach. Because of their authorities, responsibilities, and centuries of expertise in community health, public health agencies are in a position to help hospitals form concrete, sustainable collaborations for the improvement of population health. Conclusion: The ACA will likely change the delivery of uncompensated and charity care in the United States in the years to come. How hospitals choose to spend those dollars may be influenced greatly by the financial and political environments, as well as the strength of community partnerships. PMID:27997478

  20. Establishing a Baseline: Community Benefit Spending by Not-for-Profit Hospitals Prior to Implementation of the Affordable Care Act.

    PubMed

    Leider, Jonathon P; Tung, Greg J; Lindrooth, Richard C; Johnson, Emily K; Hardy, Rose; Castrucci, Brian C

    Community Benefit spending by not-for-profit hospitals has served as a critical, formalized part of the nation's safety net for almost 50 years. This has occurred mostly through charity care. This article examines how not-for-profit hospitals spent Community Benefit dollars prior to full implementation of the Affordable Care Act (ACA). Using data from 2009 to 2012 hospital tax and other governmental filings, we constructed national, hospital-referral-region, and facility-level estimates of Community Benefit spending. Data were collected in 2015 and analyzed in 2015 and 2016. Data were matched at the facility level for a non-profit hospital's IRS tax filings (Form 990, Schedule H) and CMS Hospital Cost Report Information System and Provider of Service data sets. During 2009, hospitals spent about 8% of total operating expenses on Community Benefit. This increased to between 8.3% and 8.5% in 2012. The majority of spending (>80%) went toward charity care, unreimbursed Medicaid, and subsidized health services, with approximately 6% going toward both community health improvement and health professionals' education. By 2012, national spending on Community Benefit likely exceeded $60 billion. The largest hospital systems spent the vast majority of the nation's Community Benefit; the top 25% of systems spent more than 80 cents of every Community Benefit dollar. Community Benefit spending has remained relatively steady as a proportion of total operating expenses and so has increased over time-although charity care remains the major focus of Community Benefit spending overall. More than $60 billion was spent on Community Benefit prior to implementation of the ACA. New reporting and spending requirements from the IRS, alongside changes by the ACA, are changing incentives for hospitals in how they spend Community Benefit dollars. In the short term, and especially the long term, hospital systems would do well to partner with public health, other social services, and even competing hospitals to invest in population-based activities. The mandated community health needs assessment process is a logical home for these sorts of collaborations. Relatively modest investments can improve the baseline level of health in their communities and make it easier to improve population health. Aside from a population health justification for a partnership model, a business case is necessary for widespread adoption of this approach. Because of their authorities, responsibilities, and centuries of expertise in community health, public health agencies are in a position to help hospitals form concrete, sustainable collaborations for the improvement of population health. The ACA will likely change the delivery of uncompensated and charity care in the United States in the years to come. How hospitals choose to spend those dollars may be influenced greatly by the financial and political environments, as well as the strength of community partnerships.

  1. Computing Fiber/Matrix Interfacial Effects In SiC/RBSN

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Hopkins, Dale A.

    1996-01-01

    Computational study conducted to demonstrate use of boundary-element method in analyzing effects of fiber/matrix interface on elastic and thermal behaviors of representative laminated composite materials. In study, boundary-element method implemented by Boundary Element Solution Technology - Composite Modeling System (BEST-CMS) computer program.

  2. 45 CFR Appendix A to Subpart C of... - Security Standards: Matrix

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... C of Part 164 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS... Protected Health Information Pt. 164, Subpt. C, App. A Appendix A to Subpart C of Part 164—Security Standards: Matrix Standards Sections Implementation Specifications (R)=Required, (A)=Addressable...

  3. 45 CFR Appendix A to Subpart C of... - Security Standards: Matrix

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... C of Part 164 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS... Protected Health Information Pt. 164, Subpt. C, App. A Appendix A to Subpart C of Part 164—Security Standards: Matrix Standards Sections Implementation Specifications (R)=Required, (A)=Addressable...

  4. Association Between Physical Activity Intensity and Physical Capacity Among Individuals Awaiting Bariatric Surgery.

    PubMed

    Rioux, Brittany V; Sénéchal, Martin; Kwok, Karen; Fox, Jill; Gamey, Dean; Bharti, Neha; Vergis, Ashley; Hardy, Krista; Bouchard, Danielle R

    2017-05-01

    Physical activity is a routine component of the lifestyle modification program implemented prior to bariatric surgery, and one of the goals is to improve patients' physical capacity. However, the physical activity intensity recommended to meet that goal is unknown. This study aimed to assess the association between time spent at different physical activity intensities and physical capacity in patients awaiting bariatric surgery. A total of 39 women and 13 men were recruited. The primary outcome was physical capacity measured using six objective tests: 6-min walk, chair stand, sit and reach, unipodal balance (eyes open and eyes closed), and hand grip strength tests. The primary exposure variable was physical activity intensity (i.e., sedentary, light, moderate, and vigorous) measured by accelerometers. The average body mass index was 46.3 ± 5.4 kg/m². Only 6% of total time was spent at moderate to vigorous intensity, while 71% of the time was spent sedentary. When adjusted for body mass index, age, and sex, four of the six physical capacity tests were significantly associated with moderate intensity physical activity, β(SE): 6-min walk 9.7 (2.7), chair stand 0.3 (0.1), balance (eyes open) 1.8 (0.7), and hand grip strength 1.2 (0.4); only the 6-min walk was associated with sedentary activity, 1.7 (0.7). These results suggest that physical capacity is associated with time spent at moderate intensity in individuals awaiting bariatric surgery. The next step is to study if an increase in time spent at moderate intensity will translate to improvements in physical capacity.

  5. A note on implementation of decaying product correlation structures for quasi-least squares.

    PubMed

    Shults, Justine; Guerra, Matthew W

    2014-08-30

    This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable. Copyright © 2014 John Wiley & Sons, Ltd.
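
    To make the structure concrete: with n repeated measurements, the decaying product structure uses only n - 1 parameters, the adjacent-occasion correlations, and longer lags decay as their product. A short NumPy sketch, assuming this Markov-type (first-order antedependence) parameterization; the function name is ours:

        import numpy as np

        def decaying_product_corr(alphas):
            # corr(Y_j, Y_k) = alpha_j * alpha_{j+1} * ... * alpha_{k-1},
            # so n occasions are described by n - 1 adjacent correlations.
            n = len(alphas) + 1
            R = np.eye(n)
            for j in range(n):
                for k in range(j + 1, n):
                    R[j, k] = R[k, j] = np.prod(alphas[j:k])
            return R

        R = decaying_product_corr([0.8, 0.6, 0.9])
        assert np.all(np.linalg.eigvalsh(R) > 0)   # feasible (positive definite)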

  6. Economic cost for implementation of the U.S. Food and Drug Administration's Code of Federal Regulations Title 21, Part 1271 in an egg donor program.

    PubMed

    Baker, Valerie L; Gvakharia, Marina O; Rone, Heather M; Manalad, James R; Adamson, G David

    2008-09-01

    To assess the economic cost of implementing the U.S. Food and Drug Administration's Code of Federal Regulations Title 21, Part 1271 for infectious screening of egg donors in our practice during the first year. Physicians and employees of our practice were surveyed to ascertain the scope of duties and the number of hours spent to implement the regulations. The economic cost to the practice and the cost of additional laboratories were calculated. Private practice. Egg donors and recipient couples who underwent treatment in our center from May 25, 2005 (the day regulations became effective) to May 25, 2006; and physicians, administrators, and staff who were employed by the practice during this time frame. Using a questionnaire, structured interviews were conducted for all physicians and employees of our practice. The information regarding number of hours was provided to our chief financial officer, who calculated the cost to the practice. The cost that recipient couples paid for laboratory tests that would not otherwise be required to meet American Society for Reproductive Medicine guidelines and the cost of an external audit were also added to the overall practice costs to determine a total cost associated with the regulations in the first year. List of activities associated with implementation of the regulations, personnel hours involved to implement the regulations, and economic cost to the practice and to recipient couples. The total number of personnel hours spent by our practice in preparation for implementation of the regulations was 623.3 hours. In the first year, 675.2 additional hours were required to implement the regulations for 40 donors who cycled during this time. The economic cost to the practice for both preparation and implementation of the regulations was $219,838, and the cost of additional laboratory work borne by the recipient couples was $15,880. Thus, the total cost was calculated to be $235,718 at 1 year after implementation of the regulations. Implementation of the FDA 21 CFR, Part 1271 was associated with a very high economic cost, even if the costs incurred by the government to develop and implement the regulation are excluded.

  7. The multifacet graphically contracted function method. I. Formulation and implementation

    NASA Astrophysics Data System (ADS)

    Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R.

    2014-08-01

    The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N²n⁴) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.

  8. The multifacet graphically contracted function method. I. Formulation and implementation.

    PubMed

    Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R

    2014-08-14

    The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N²n⁴) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.

  9. Efficient parallel linear scaling construction of the density matrix for Born-Oppenheimer molecular dynamics.

    PubMed

    Mniszewski, S M; Cawkwell, M J; Wall, M E; Mohd-Yusof, J; Bock, N; Germann, T C; Niklasson, A M N

    2015-10-13

    We present an algorithm for the calculation of the density matrix that for insulators scales linearly with system size and parallelizes efficiently on multicore, shared memory platforms with small and controllable numerical errors. The algorithm is based on an implementation of the second-order spectral projection (SP2) algorithm [ Niklasson, A. M. N. Phys. Rev. B 2002 , 66 , 155115 ] in sparse matrix algebra with the ELLPACK-R data format. We illustrate the performance of the algorithm within self-consistent tight binding theory by total energy calculations of gas phase poly(ethylene) molecules and periodic liquid water systems containing up to 15,000 atoms on up to 16 CPU cores. We consider algorithm-specific performance aspects, such as local vs nonlocal memory access and the degree of matrix sparsity. Comparisons to sparse matrix algebra implementations using off-the-shelf libraries on multicore CPUs, graphics processing units (GPUs), and the Intel many integrated core (MIC) architecture are also presented. The accuracy and stability of the algorithm are illustrated with long duration Born-Oppenheimer molecular dynamics simulations of 1000 water molecules and a 303 atom Trp cage protein solvated by 2682 water molecules.
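
    For orientation, the SP2 recursion itself is short: map the Hamiltonian's spectrum into [0, 1], then repeatedly apply X^2 or 2X - X^2, whichever drives the trace toward the occupation number, until the matrix is idempotent. A dense NumPy sketch; the paper's contribution is running the same recursion in thresholded sparse (ELLPACK-R) algebra:

        import numpy as np

        def sp2_density_matrix(H, n_occ, tol=1e-10, max_iter=100):
            # Map the spectrum of H into [0, 1] (exact bounds used here for
            # brevity; Gershgorin estimates suffice in practice).
            e_min, e_max = np.linalg.eigvalsh(H)[[0, -1]]
            X = (e_max * np.eye(len(H)) - H) / (e_max - e_min)
            for _ in range(max_iter):
                X2 = X @ X
                t, t2 = np.trace(X), np.trace(X2)
                if abs(t - t2) < tol and abs(t - n_occ) < tol:
                    break                    # idempotent with correct occupation
                if abs(t2 - n_occ) < abs(2 * t - t2 - n_occ):
                    X = X2                   # squaring lowers the trace
                else:
                    X = 2 * X - X2           # 2X - X^2 raises the trace
            return X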

  10. UB Matrix Implementation for Inelastic Neutron Scattering Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumsden, Mark D; Robertson, Lee; Yethiraj, Mohana

    The UB matrix approach has been extended to handle inelastic neutron scattering experiments with differing k_i and k_f. We have considered the typical goniometer employed on triple-axis and time-of-flight spectrometers. Expressions are derived to allow for calculation of the UB matrix and for converting from observables to Q-energy space. In addition, we have developed appropriate modes for calculation of angles for a specified Q-energy position.

  11. Optical character recognition with feature extraction and associative memory matrix

    NASA Astrophysics Data System (ADS)

    Sasaki, Osami; Shibahara, Akihito; Suzuki, Takamasa

    1998-06-01

    A method is proposed in which handwritten characters are recognized using feature extraction and an associative memory matrix. In feature extraction, simple processes such as shifting and superimposing patterns are executed. A memory matrix is generated with singular value decomposition and by modifying small singular values. The method is optically implemented with two liquid crystal displays. Experimental results for the recognition of 25 handwritten alphabet characters clearly show the effectiveness of the method.
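
    One common way to build such a memory matrix is M = Y X^+, where the columns of X are stored input patterns, the columns of Y the desired outputs, and the pseudoinverse X^+ comes from an SVD in which small singular values are suppressed, much as the abstract describes. A hedged NumPy sketch (names ours):

        import numpy as np

        def memory_matrix(X, Y, floor=1e-3):
            # M @ x_i is approximately y_i for the stored pairs; singular
            # values below floor * s_max are zeroed ("modified") so recall
            # does not amplify noise along weakly represented directions.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s_inv = np.where(s > floor * s[0], 1.0 / s, 0.0)
            return Y @ (Vt.T * s_inv) @ U.T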

  12. Investigation and Implementation of Matrix Permanent Algorithms for Identity Resolution

    DTIC Science & Technology

    2014-12-01

    calculation of the permanent of a matrix whose dimension is a function of target count [21]. However, the optimal approach for computing the permanent is...presently unclear. The primary objective of this project was to determine the optimal computing strategy(-ies) for the matrix permanent in tactical and...solving various combinatorial problems (see [16] for details and applications to a wide variety of problems) and thus can be applied to compute a
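
    For reference, the classical exact method is Ryser's inclusion-exclusion formula; its exponential cost is precisely why the choice of computing strategy is nontrivial. A plain Python sketch (the Gray-code refinement that saves a factor of n is omitted):

        from itertools import combinations
        import math

        def permanent_ryser(A):
            # per(A) = (-1)^n * sum over nonempty column subsets S of
            # (-1)^|S| * prod_i sum_{j in S} a_ij;  O(2^n * n^2) as written.
            n = len(A)
            total = 0.0
            for r in range(1, n + 1):
                for cols in combinations(range(n), r):
                    prod = math.prod(sum(row[j] for j in cols) for row in A)
                    total += (-1) ** r * prod
            return (-1) ** n * total

        assert permanent_ryser([[1, 1], [1, 1]]) == 2.0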

  13. How tough is bone? Application of elastic-plastic fracture mechanics to bone.

    PubMed

    Yan, Jiahau; Mecholsky, John J; Clifton, Kari B

    2007-02-01

    Bone, with a hierarchical structure that spans from the nano-scale to the macro-scale and a composite design composed of nano-sized mineral crystals embedded in an organic matrix, has been shown to have several toughening mechanisms that increase its toughness. These mechanisms can stop, slow, or deflect crack propagation and cause bone to have a moderate amount of apparent plastic deformation before fracture. In addition, bone contains a high volumetric percentage of organics and water that makes it behave nonlinearly before fracture. Many researchers have used strength or the critical stress intensity factor (fracture toughness) to characterize the mechanical properties of bone. However, these parameters do not account for the energy spent in plastic deformation before bone fracture. To accurately describe the mechanical characteristics of bone, we applied elastic-plastic fracture mechanics to study bone's fracture toughness. The J integral, a parameter that estimates the energies consumed in both the elastic and plastic deformations, was used to quantify the total energy spent before bone fracture. Twenty cortical bone specimens were cut from the mid-diaphysis of bovine femurs. Ten of them were prepared to undergo transverse fracture and the other 10 were prepared to undergo longitudinal fracture. The specimens were prepared following the apparatus suggested in ASTM E1820 and tested in distilled water at 37 degrees C. The average J integral of the transverse-fractured specimens was found to be 6.6 kPa m, which is 187% greater than that of the longitudinal-fractured specimens (2.3 kPa m). The energy spent in the plastic deformation of the longitudinal-fractured and transverse-fractured bovine specimens was found to be 3.6-4.1 times the energy spent in the elastic deformation. This study shows that the toughness of bone estimated using the J integral is much greater than the toughness measured using the critical stress intensity factor. We suggest that the J integral method is a better technique for estimating the toughness of bone.
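
    For context, the single-specimen estimate standardized in ASTM E1820 splits J into exactly these two parts (K is the stress intensity factor, nu Poisson's ratio, E Young's modulus, A_pl the plastic area under the load-displacement record, B the specimen thickness, b_0 the initial ligament length, and eta a geometry-dependent factor):

        J = J_{el} + J_{pl} = \frac{K^{2}(1 - \nu^{2})}{E} + \frac{\eta \, A_{pl}}{B \, b_{0}}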

  14. Measuring the success of electronic medical record implementation using electronic and survey data.

    PubMed Central

    Keshavjee, K.; Troyan, S.; Holbrook, A. M.; VanderMolen, D.

    2001-01-01

    Computerization of physician practices is increasing. Stakeholders are demanding demonstrated value for their Electronic Medical Record (EMR) implementations. We developed survey tools to measure medical office processes, including administrative and physician tasks pre- and post-EMR implementation. We included variables that were expected to improve with EMR implementation and those that were not expected to improve, as controls. We measured the same processes pre-EMR, at six months and 18 months post-EMR. Time required for most administrative tasks decreased within six months of EMR implementation. Staff time spent on charting increased with time, in keeping with our anecdotal observations that nurses were given more responsibility for charting in many offices. Physician time to chart increased initially by 50%, but went down to original levels by 18 months. However, this may be due to the drop-out of those physicians who had a difficult time charting electronically. PMID:11825201

  15. Finite Element Implementation of Mechanochemical Phenomena in Neutral Deformable Porous Media Under Finite Deformation

    PubMed Central

    Ateshian, Gerard A.; Albro, Michael B.; Maas, Steve; Weiss, Jeffrey A.

    2011-01-01

    Biological soft tissues and cells may be subjected to mechanical as well as chemical (osmotic) loading under their natural physiological environment or various experimental conditions. The interaction of mechanical and chemical effects may be very significant under some of these conditions, yet the highly nonlinear nature of the set of governing equations describing these mechanisms poses a challenge for the modeling of such phenomena. This study formulated and implemented a finite element algorithm for analyzing mechanochemical events in neutral deformable porous media under finite deformation. The algorithm employed the framework of mixture theory to model the porous permeable solid matrix and interstitial fluid, where the fluid consists of a mixture of solvent and solute. A special emphasis was placed on solute-solid matrix interactions, such as solute exclusion from a fraction of the matrix pore space (solubility) and frictional momentum exchange that produces solute hindrance and pumping under certain dynamic loading conditions. The finite element formulation implemented full coupling of mechanical and chemical effects, providing a framework where material properties and response functions may depend on solid matrix strain as well as solute concentration. The implementation was validated using selected canonical problems for which analytical or alternative numerical solutions exist. This finite element code includes a number of unique features that enhance the modeling of mechanochemical phenomena in biological tissues. The code is available in the public domain, open source finite element program FEBio (http://mrl.sci.utah.edu/software). PMID:21950898

  16. Numericware i: Identical by State Matrix Calculator

    PubMed Central

    Kim, Bongsong; Beavis, William D

    2017-01-01

    We introduce software, Numericware i, to compute an identical by state (IBS) matrix based on genotypic data. Calculating an IBS matrix with a large dataset requires large computer memory and takes lengthy processing time. Numericware i addresses these challenges with 2 algorithmic methods: multithreading and forward chopping. The multithreading allows computational routines to concurrently run on multiple central processing unit (CPU) processors. The forward chopping addresses memory limitation by dividing a dataset into appropriately sized subsets. Numericware i allows calculation of the IBS matrix for a large genotypic dataset using a laptop or a desktop computer. For comparison with different software, we calculated genetic relationship matrices using Numericware i, SPAGeDi, and TASSEL with the same genotypic dataset. Numericware i calculates IBS coefficients between 0 and 2, whereas SPAGeDi and TASSEL produce different ranges of values including negative values. The Pearson correlation coefficient between the matrices from Numericware i and TASSEL was high at 0.9972, whereas SPAGeDi showed low correlation with Numericware i (0.0505) and TASSEL (0.0587). With a high-dimensional dataset of 500 entities by 10,000,000 SNPs, Numericware i spent 382 minutes using 19 CPU threads and 64 GB memory by dividing the dataset into 3 pieces, whereas SPAGeDi and TASSEL failed with the same dataset. Numericware i is freely available for Windows and Linux under CC-BY 4.0 license at https://figshare.com/s/f100f33a8857131eb2db. PMID:28469375
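
    The IBS coefficient on the 0-2 scale has a simple closed form for 0/1/2-coded genotypes: each SNP contributes 2 - |g_a - g_b| shared alleles. The sketch below accumulates the pairwise sums chunk by chunk over SNPs, a simplified stand-in for the paper's forward chopping (the function name is ours):

        import numpy as np

        def ibs_matrix(G, chunk=10_000):
            # G: individuals x SNPs, genotypes coded 0/1/2. D accumulates
            # the sum over SNPs of |g_a - g_b| for every pair (a, b).
            n, m = G.shape
            D = np.zeros((n, n))
            for s in range(0, m, chunk):
                blk = G[:, s:s + chunk]
                I = [(blk == g).astype(np.float64) for g in (0, 1, 2)]
                D += I[0] @ I[1].T + I[1] @ I[0].T          # pairs differing by 1
                D += I[1] @ I[2].T + I[2] @ I[1].T
                D += 2.0 * (I[0] @ I[2].T + I[2] @ I[0].T)  # pairs differing by 2
            return 2.0 - D / m                              # IBS in [0, 2]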

  17. Wrapper-based selection of genetic features in genome-wide association studies through fast matrix operations

    PubMed Central

    2012-01-01

    Background Through the wealth of information contained within them, genome-wide association studies (GWAS) have the potential to provide researchers with a systematic means of associating genetic variants with a wide variety of disease phenotypes. Due to the limitations of approaches that have analyzed single variants one at a time, it has been proposed that the genetic basis of these disorders could be determined through detailed analysis of the genetic variants themselves and in conjunction with one another. The construction of models that account for these subsets of variants requires methodologies that generate predictions based on the total risk of a particular group of polymorphisms. However, due to the excessive number of variants, constructing these types of models has so far been computationally infeasible. Results We have implemented an algorithm, known as greedy RLS, that we use to perform the first known wrapper-based feature selection on the genome-wide level. The running time of greedy RLS grows linearly in the number of training examples, the number of features in the original data set, and the number of selected features. This speed is achieved through computational short-cuts based on matrix calculus. Since the memory consumption in present-day computers can form an even tighter bottleneck than running time, we also developed a space efficient variation of greedy RLS which trades running time for memory. These approaches are then compared to traditional wrapper-based feature selection implementations based on support vector machines (SVM) to reveal the relative speed-up and to assess the feasibility of the new algorithm. As a proof of concept, we apply greedy RLS to the Hypertension – UK National Blood Service WTCCC dataset and select the most predictive variants using 3-fold external cross-validation in less than 26 minutes on a high-end desktop. On this dataset, we also show that greedy RLS has a better classification performance on independent test data than a classifier trained using features selected by a statistical p-value-based filter, which is currently the most popular approach for constructing predictive models in GWAS. Conclusions Greedy RLS is the first known implementation of a machine learning based method with the capability to conduct a wrapper-based feature selection on an entire GWAS containing several thousand examples and over 400,000 variants. In our experiments, greedy RLS selected a highly predictive subset of genetic variants in a fraction of the time spent by wrapper-based selection methods used together with SVM classifiers. The proposed algorithms are freely available as part of the RLScore software library at http://users.utu.fi/aatapa/RLScore/. PMID:22551170
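
    A naive version of the wrapper loop makes the scaling argument concrete: greedy forward selection under a ridge (regularized least squares) objective, refitting from scratch for every candidate. The paper's greedy RLS reaches the same selections with incremental matrix-calculus updates, which is what makes a full GWAS feasible; the sketch below is only usable on small problems (names ours):

        import numpy as np

        def greedy_ridge_selection(X, y, k, lam=1.0):
            # At each round, add the feature whose inclusion most lowers the
            # penalized squared error; the full refit per candidate is the
            # cost greedy RLS avoids with its incremental updates.
            n, d = X.shape
            selected = []
            for _ in range(k):
                best, best_err = None, np.inf
                for j in range(d):
                    if j in selected:
                        continue
                    Z = X[:, selected + [j]]
                    w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
                    err = np.sum((y - Z @ w) ** 2)
                    if err < best_err:
                        best, best_err = j, err
                selected.append(best)
            return selected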

  18. Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deveci, Mehmet; Trott, Christian Robert; Rajamanickam, Sivasankaran

    Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.

  19. Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deveci, Mehmet; Rajamanickam, Sivasankaran; Trott, Christian Robert

    Sparse Matrix-Matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.

  20. Employment Interventions for Individuals with ASD: The Relative Efficacy of Supported Employment with or without Prior Project Search Training

    ERIC Educational Resources Information Center

    Schall, Carol M.; Wehman, Paul; Brooke, Valerie; Graham, Carolyn; McDonough, Jennifer; Brooke, Alissa; Ham, Whitney; Rounds, Rachael; Lau, Stephanie; Allen, Jaclyn

    2015-01-01

    This paper presents findings from a retrospective observational records review study that compares the outcomes associated with implementation of supported employment (SE) with and without prior Project SEARCH with ASD Supports (PS-ASD) on wages earned, time spent in intervention, and job retention. Results suggest that SE resulted in competitive…

  1. A Summative Evaluation of a Middle School Summer Math Program

    ERIC Educational Resources Information Center

    Nelson, Brian W.

    2014-01-01

    By some estimates, students lose an average of 2.6 months of learning during summer break, roughly one quarter of the time spent in school. To combat this problem, the school under study implemented a summer math program that was thematically linked to the Boston Red Sox baseball team. Hundreds of students have participated in the program, but the…

  2. Technology Programs...for All or for Some?

    NASA Astrophysics Data System (ADS)

    Giancola, Susan P.

    2001-12-01

    The 1990s have been a decade of great spending and great introspection, particularly when it comes to educational allocations. Citizens, corporations, and public officials are becoming increasingly inquisitive about where their money is going and if the dollars spent are making a difference. For 5 years, the multimillion-dollar Delaware Technology Innovation Challenge project has implemented Lightspan™ educational software in the classrooms and homes of elementary school students. Program goals are to increase parent involvement, generate more time for learning, and improve student achievement. On the surface, the program seems to have met its goals. Parents report being more involved in their child's education. Students and parents describe the time spent on the software at home as not replacing traditional homework, but rather television watching. And student achievement in both reading and mathematics has increased at rates higher than would be expected. However, a closer examination of evaluation results reveals the program has worked best for lower achieving students; students who scored below the 50th percentile in fall testing had much greater achievement gains than their higher scoring peers. This paper investigates whether evaluation findings are reflective of the program's implementation or rather reveal a limitation of the technology.

  3. An Integrated Forensics Approach To Fingerprint PCB Sources In Sediments Using RSC And ACF

    EPA Science Inventory

    Determining the original source of contamination to a heterogeneous matrix such as sediment is a requirement for both clean-up and compliance programs. Identifying the source of sediment contaminants in industrial settings is a pre-requisite to implementing any proposed se...

  4. 82 FR 38764 - Wassenaar Arrangement 2016 Plenary Agreements Implementation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2017-08-15

    ... 'ceramic-"matrix",' so as to control carbon fiber reinforced SiC matrix composites (C-SiC). These... Machines and Tow/Fiber Placement machines were accurately delineated at 1 inch, which is used in industry... manufacturing process. The formerly used phrase "incorporating particles, whiskers or fibers" did not...

  5. Fast Geostatistical Inversion using Randomized Matrix Decompositions and Sketchings for Heterogeneous Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Le, E. B.; Vesselinov, V. V.

    2015-12-01

    We present a fast, scalable, and highly implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, and provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-square solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
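
    Of the randomized tools mentioned, the range-finder-based randomized SVD is the workhorse: sketch the column space with a Gaussian test matrix, orthogonalize, and reduce the problem to a small dense SVD. A generic NumPy sketch in the Halko-Martinsson-Tropp style; none of the QLGA- or MADS-specific plumbing is shown:

        import numpy as np

        def randomized_svd(A, rank, oversample=10, power_iters=2):
            rng = np.random.default_rng(0)
            # Gaussian sketch of the range of A.
            Omega = rng.standard_normal((A.shape[1], rank + oversample))
            Y = A @ Omega
            for _ in range(power_iters):      # power iterations sharpen the
                Y = A @ (A.T @ Y)             # decay of the singular values
            Q, _ = np.linalg.qr(Y)
            # Small dense SVD of the projected matrix, lifted back by Q.
            U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]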

  6. Numerical methods on some structured matrix algebra problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1996-06-01

    This proposal concerned the design, analysis, and implementation of serial and parallel algorithms for certain structured matrix algebra problems. It emphasized large order problems and so focused on methods that can be implemented efficiently on distributed-memory MIMD multiprocessors. Such machines supply the computing power and extensive memory demanded by the large order problems. We proposed to examine three classes of matrix algebra problems: the symmetric and nonsymmetric eigenvalue problems (especially the tridiagonal cases) and the solution of linear systems with specially structured coefficient matrices. As all of these are of practical interest, a major goal of this work was to translate our research in linear algebra into useful tools for use by the computational scientists interested in these and related applications. Thus, in addition to software specific to the linear algebra problems, we proposed to produce a programming paradigm and library to aid in the design and implementation of programs for distributed-memory MIMD computers. We now report on our progress on each of the problems and on the programming tools.

  7. Parallel halftoning technique using dot diffusion optimization

    NASA Astrophysics Data System (ADS)

    Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara

    2017-05-01

    In this paper, a novel approach to halftoning is proposed and implemented for images obtained by the Dot Diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm: it generates new versions of the class matrix that have no baron and near-baron positions, in order to minimize inconsistencies during the distribution of the error. The proposed class matrices have different properties, each designed for one of two applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a PC running Linux. Experimental results have shown that the novel framework generates good-quality halftone images and inverse-halftone images. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented in real-time processing.
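
    To see why barons matter, consider a bare-bones dot diffusion pass: pixels are binarized in the order given by the class matrix, and each pixel's quantization error may only be pushed to neighbours with a higher class value, i.e. ones not yet processed. A cell with no such neighbour (a baron) simply loses its error, the inconsistency the optimized class matrices are designed to avoid. A minimal sketch with an arbitrary, purely illustrative 4x4 class matrix:

        import numpy as np

        CLASS = np.array([[ 0,  8,  2, 10],
                          [12,  4, 14,  6],
                          [ 3, 11,  1,  9],
                          [15,  7, 13,  5]])   # illustrative only

        def dot_diffusion(img):
            f = img.astype(np.float64)          # grey levels in [0, 1]
            out = np.zeros_like(f)
            h, w = f.shape
            th, tw = CLASS.shape
            for pos in np.argsort(CLASS, axis=None):    # ascending class value
                ti, tj = divmod(pos, tw)
                for i in range(ti, h, th):
                    for j in range(tj, w, tw):
                        out[i, j] = 1.0 if f[i, j] >= 0.5 else 0.0
                        err = f[i, j] - out[i, j]
                        # Unprocessed neighbours have a higher class value;
                        # orthogonal neighbours get weight 2, diagonal 1.
                        nbrs = [(i + di, j + dj, 1.0 if di and dj else 2.0)
                                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                                if (di or dj) and 0 <= i + di < h and 0 <= j + dj < w
                                and CLASS[(i + di) % th, (j + dj) % tw] > CLASS[ti, tj]]
                        total = sum(wt for _, _, wt in nbrs)
                        for ni, nj, wt in nbrs:          # empty for barons
                            f[ni, nj] += err * wt / total
            return out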

  8. Incompressible SPH (ISPH) with fast Poisson solver on a GPU

    NASA Astrophysics Data System (ADS)

    Chow, Alex D.; Rogers, Benedict D.; Lind, Steven J.; Stansby, Peter K.

    2018-05-01

    This paper presents a fast incompressible SPH (ISPH) solver implemented to run entirely on a graphics processing unit (GPU), capable of simulating several million particles in three dimensions on a single GPU. The ISPH algorithm is implemented by converting the highly optimised open-source weakly-compressible SPH (WCSPH) code DualSPHysics to run ISPH on the GPU, combining it with the open-source linear algebra library ViennaCL for fast solutions of the pressure Poisson equation (PPE). Several challenges are addressed with this research: constructing a PPE matrix every timestep on the GPU for moving particles, optimising the limited GPU memory, and exploiting fast matrix solvers. The ISPH pressure projection algorithm is implemented as 4 separate stages, each with a particle sweep, including an algorithm for the population of the PPE matrix suitable for the GPU, and mixed precision storage methods. An accurate and robust ISPH boundary condition ideal for parallel processing is also established by adapting an existing WCSPH boundary condition for ISPH. A variety of validation cases are presented: an impulsively started plate, incompressible flow around a moving square in a box, and dambreaks (2-D and 3-D), which demonstrate the accuracy, flexibility, and speed of the methodology. Fragmentation of the free surface is shown to influence the performance of matrix preconditioners and therefore the PPE matrix solution time. The Jacobi preconditioner demonstrates robustness and reliability in the presence of fragmented flows. For a dambreak simulation, the GPU demonstrates speed-ups of up to 10-18 times over single-threaded and 1.1-4.5 times over 16-threaded CPU run times.
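
    The robust combination the abstract settles on, conjugate gradient with a Jacobi (diagonal) preconditioner, is easy to reproduce on a stand-in system; below, a 1-D Laplacian plays the role of the particle-based PPE matrix (a SciPy sketch, names ours):

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg, LinearOperator

        n = 1000
        A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')
        b = np.ones(n)

        # Jacobi preconditioner: multiply by the inverse of diag(A).
        inv_diag = 1.0 / A.diagonal()
        M = LinearOperator((n, n), matvec=lambda x: inv_diag * x)

        x, info = cg(A, b, M=M)
        assert info == 0                  # 0 signals convergence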

  9. Novel mixed matrix membranes for sulfur removal and for fuel cell applications

    NASA Astrophysics Data System (ADS)

    Lin, Ligang; Wang, Andong; Zhang, Longhui; Dong, Meimei; Zhang, Yuzhong

    2012-12-01

    Sulfur removal is significant for fuels used as a hydrogen source for fuel cell applications and to avoid sulfur poisoning of the catalysts used therein. Novel mixed matrix membranes (MMMs) with well-defined transport channels are proposed for sulfur removal. MMMs are fabricated using polyimide (PI) as matrix material and Y zeolites as adsorptive functional materials. The influence of architecture conditions on the morphology transition from finger-like to sponge-like structure and the “short circuit” effect are investigated. The adsorption and regeneration behavior of MMMs is discussed; combining the detailed analysis of FT-IR, morphology, XPS, XRD and thermal properties of MMMs, the process-structure-function relationship is obtained. The results show that the functional zeolites are incorporated into a three-dimensional network and the adsorption capacity of MMMs reaches 8.6 and 9.5 mg S g⁻¹ for thiophene and dibenzothiophene species, respectively. The regeneration behavior suggests that the spent membranes can recover about 88% and 96% of the desulfurization capacity by solvent washing and thermal treating regeneration, respectively. The related discussion provides some general suggestions for promoting the novel application of MMMs in the separation of organic-organic mixtures, and a potential alternative for the production of sulfur-free hydrogen for fuel cell applications.

  10. Accounting and Accountability for Distributed and Grid Systems

    NASA Technical Reports Server (NTRS)

    Thigpen, William; McGinnis, Laura F.; Hacker, Thomas J.

    2001-01-01

    While the advent of distributed and grid computing systems will open new opportunities for scientific exploration, the reality of such implementations could prove to be a system administrator's nightmare. A lot of effort is being spent on identifying and resolving the obvious problems of security, scheduling, authentication and authorization. Lurking in the background, though, are the largely unaddressed issues of accountability and usage accounting: (1) mapping resource usage to resource users; (2) defining usage economies or methods for resource exchange; (3) describing implementation standards that minimize and compartmentalize the tasks required for a site to participate in a grid.

  11. Strain Rate Dependent Deformation and Strength Modeling of a Polymer Matrix Composite Utilizing a Micromechanics Approach. Degree awarded by Cincinnati Univ.

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    1999-01-01

    Potential gas turbine applications will expose polymer matrix composites to very high strain rate loading conditions, requiring an ability to understand and predict the material behavior under extreme conditions. Specifically, analytical methods designed for these applications must have the capability of properly capturing the strain rate sensitivities and nonlinearities that are present in the material response. The Ramaswamy-Stouffer constitutive equations, originally developed to analyze the viscoplastic deformation of metals, have been modified to simulate the nonlinear deformation response of ductile, crystalline polymers. The constitutive model is characterized and correlated for two representative ductile polymers, Fiberite 977-2 and PEEK, and the computed results correlate well with experimental values. The polymer constitutive equations are implemented in a mechanics of materials based composite micromechanics model to predict the nonlinear, rate dependent deformation response of a composite ply. Uniform stress and uniform strain assumptions are applied to compute the effective stresses of a composite unit cell from the applied strains. The micromechanics equations are successfully verified for two polymer matrix composites, IM7/977-2 and AS4/PEEK. The ultimate strength of a composite ply is predicted with the Hashin failure criteria that were implemented in the composite micromechanics model. The failure stresses of the two composite material systems are accurately predicted for a variety of fiber orientations and strain rates. The composite deformation model is implemented in LS-DYNA, a commercially available transient dynamic explicit finite element code. The matrix constitutive equations are converted into an incremental form, and the model is implemented into LS-DYNA through the use of a user defined material subroutine. The deformation response of a bulk polymer and a polymer matrix composite are predicted by finite element analyses. The results compare reasonably well to experimental values, with some discrepancies. The discrepancies are at least partially caused by the method used to integrate the rate equations in the polymer constitutive model.

  12. Implementation Challenges for Ceramic Matrix Composites in High Temperature Applications

    NASA Technical Reports Server (NTRS)

    Singh, Mrityunjay

    2004-01-01

    Ceramic matrix composites are leading candidate materials for a number of applications in aeronautics, space, energy, electronics, nuclear, and transportation industries. In the aeronautics and space exploration systems, these materials are being considered for applications in hot sections of jet engines such as the combustor liner, nozzle components, nose cones, leading edges of reentry vehicles and space propulsion components. Applications in the energy and environmental industries include radiant heater tubes, heat exchangers, heat recuperators, gas and diesel particulate filters (DPFs), and components for land based turbines for power generation. These materials are also being considered for use in the first wall and blanket components of fusion reactors. There are a number of critical issues and challenges related to successful implementation of composite materials. Fabrication of net and complex shape components with high density and tailorable matrix properties is quite expensive, and even then various desirable properties are not achievable. In this presentation, microstructure and thermomechanical properties of composites fabricated by two techniques (chemical vapor infiltration and melt infiltration), will be presented. In addition, critical need for robust joining and assembly technologies in successful implementation of these systems will be discussed. Other implementation issues will be discussed along with advantages and benefits of using these materials for various components in high temperature applications.

  13. Performance Comparison of a Matrix Solver on a Heterogeneous Network Using Two Implementations of MPI: MPICH and LAM

    NASA Technical Reports Server (NTRS)

    Phillips, Jennifer K.

    1995-01-01

    Two of the current and most popular implementations of the Message-Passing Standard, the Message Passing Interface (MPI), were contrasted: MPICH by Argonne National Laboratory, and LAM by the Ohio Supercomputer Center at Ohio State University. A parallel skyline matrix solver was adapted to be run in a heterogeneous environment using MPI. The Message-Passing Interface Forum was held in May 1994, which led to a specification of library functions that implement the message-passing model of parallel communication. LAM, which creates its own environment, is more robust in a highly heterogeneous network. MPICH uses the environment native to the machine architecture. While neither of these free-ware implementations provides the performance of native message-passing or vendors' implementations, MPICH begins to approach that performance on the SP-2. The machines used in this study were: IBM RS6000, 3 Sun4, SGI, and the IBM SP-2. Each machine is unique and a few machines required specific modifications during the installation. When installed correctly, both implementations worked well with only minor problems.
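
    The portability at stake is easy to demonstrate with a toy row-distributed matrix-vector product. The Python sketch below uses mpi4py, which runs unchanged over MPICH, Open MPI, or any other conforming MPI implementation; the decomposition is illustrative, not the paper's skyline solver:

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 8 * size
        A_rows = np.arange(n * n, dtype='d').reshape(n, n)[rank::size]
        x = np.ones(n, dtype='d')

        local = A_rows @ x                        # this rank's partial product
        gathered = comm.gather(local, root=0)     # collect on the root rank
        if rank == 0:
            print("assembled", sum(len(g) for g in gathered), "entries of y")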

  14. Evaluating the effectiveness of implementing quality management practices in the medical industry.

    PubMed

    Yeh, T-M; Lai, H-P

    2015-01-01

    To discuss the effectiveness of 30 quality management practices (QMP), including Strategic Management, Balanced ScoreCard, Knowledge Management, and Total Quality Management, in the medical industry. A V-shaped performance evaluation matrix is applied to identify the top ten practices that are important but not easy to use or implement. Quality Function Deployment (QFD) is then utilized to find key factors to improve the implementation of the top ten tools. The questionnaires were sent to the nursing staff and administrators in a hospital through e-mail and post. A total of 250 copies were distributed and 217 copies were valid. The importance, ease of use, and achievement (i.e., implementation level) of the 30 quality management practices were measured. Key factors for QMP implementation were sequenced in order of importance as top management involvement, inter-department communication and coordination, teamwork, hospital-wide participation, education and training, consultant professionalism, continuous internal auditing, computerized processes, and incentive compensation. Top management can implement the V-shaped performance matrix to determine whether quality management practices need improvement and, if so, utilize QFD to find the key factors for improvement.

  15. A solver for General Unilateral Polynomial Matrix Equation with Second-Order Matrices Over Prime Finite Fields

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-03-01

    The paper considers, for the first time from a practical point of view, the problem of finding solvents for arbitrary unilateral polynomial matrix equations with second-order matrices over prime finite fields: we implement a solver for this problem. The solver's algorithm has two steps: the first finds solvents having Jordan normal form (JNF); the second searches for solvents among the remaining matrices. The first step reduces to finding roots of ordinary polynomials over finite fields; the second is essentially an exhaustive search. The algorithms of the first step make essential use of the theory of polynomial matrices. We estimate the practical duration of computations using our software implementation and answer some questions of theoretical interest (for example, that one cannot construct a unilateral matrix polynomial over a finite field having an arbitrary predefined number of solvents).
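
    The exhaustive-search step is small enough to sketch directly for 2x2 matrices: enumerate all p^4 candidates and test the monic equation X^n + A_{n-1} X^{n-1} + ... + A_0 = 0 modulo p. A hedged NumPy sketch; the function name and example equation are ours:

        import itertools
        import numpy as np

        def solvents_2x2_mod_p(coeffs, p):
            # coeffs = [A_0, A_1, ..., A_{n-1}] as 2x2 integer arrays; returns
            # every X with X^n + A_{n-1} X^{n-1} + ... + A_0 = 0 (mod p).
            sols = []
            for entries in itertools.product(range(p), repeat=4):
                X = np.array(entries).reshape(2, 2)
                acc = np.eye(2, dtype=int)          # X^0
                val = np.zeros((2, 2), dtype=int)
                for A in coeffs:                    # accumulate A_k X^k
                    val = (val + A @ acc) % p
                    acc = (acc @ X) % p
                val = (val + acc) % p               # leading monic term X^n
                if not val.any():
                    sols.append(X)
            return sols

        # Example: X^2 - I = 0 over GF(3), i.e. all involutory 2x2 matrices
        # mod 3 (note -1 = 2 mod 3 in the constant coefficient).
        sols = solvents_2x2_mod_p([np.array([[2, 0], [0, 2]]),
                                   np.zeros((2, 2), dtype=int)], 3)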

  16. Determining heavy metals in spent compact fluorescent lamps (CFLs) and their waste management challenges: Some strategies for improving current conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taghipour, Hassan, E-mail: hteir@yahoo.com; Amjad, Zahra; Jafarabadi, Mohamad Asghari

    2014-07-15

    Highlights: • Heavy metals in spent compact fluorescent lamps (CFLs) determined. • Current waste management condition of CFLs in Iran assessed. • Currently, CFL waste is disposed of through the municipal waste stream in landfills. • We propose extended producer responsibility (EPR) for CFL waste management. - Abstract: From an environmental viewpoint, the most important advantage of compact fluorescent lamps (CFLs) is the reduction of greenhouse gas emissions, but their significant disadvantage is the disposal of spent lamps, which contain a few milligrams of toxic metals, especially mercury and lead. For successful implementation of any waste management plan, sufficient and accurate information on the quantities and composition of the generated waste and on current management conditions is a fundamental prerequisite. In this study, CFLs of 20 different brands marketed in Iran were selected, and their content of heavy metals, including mercury, lead, nickel, arsenic, and chromium, was determined by inductively coupled plasma (ICP) spectrometry. Two cities, Tehran and Tabriz, were selected for assessing the current waste management condition of CFLs. The study found that CFL waste generation in the country was about 159.80, 183.82, and 153.75 million lamps per year in 2010, 2011, and 2012, respectively; the waste generation rate in Iran was 2.05 lamps per person in 2012. The average amounts of mercury, lead, nickel, arsenic, and chromium were 0.417, 2.33, 0.064, 0.056, and 0.012 mg per lamp, respectively. Currently, CFL waste is disposed of through the municipal waste stream in landfills. To improve current conditions, and considering the successful experience of extended producer responsibility (EPR) in the management of other electronic waste, we propose an EPR program with an advance recycling fee (ARF) for collecting and then recycling CFLs. To encourage consumers to return spent CFLs at the end of the products' useful life, a proportion of the ARF (for example, 50%) could be refunded. In addition, the government and the Environmental Protection Agency should support and encourage CFL recycling companies, both technically and financially.

  17. Simple Approach to Renormalize the Cabibbo-Kobayashi-Maskawa Matrix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kniehl, Bernd A.; Sirlin, Alberto

    2006-12-01

    We present an on-shell scheme to renormalize the Cabibbo-Kobayashi-Maskawa (CKM) matrix. It is based on a novel procedure to separate the external-leg mixing corrections into gauge-independent self-mass and gauge-dependent wave function renormalization contributions, and to implement the on-shell renormalization of the former with nondiagonal mass counterterm matrices. Diagonalization of the complete mass matrix leads to an explicit CKM counterterm matrix, which automatically satisfies all the following important properties: it is gauge independent, preserves unitarity, and leads to renormalized amplitudes that are nonsingular in the limit in which any two fermions become mass degenerate.

  18. Using Strassen's algorithm to accelerate the solution of linear systems

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Lee, King; Simon, Horst D.

    1990-01-01

    Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
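
    For reference, the recursion at the heart of the method: seven half-size products in place of eight, applied down to a cutoff where conventional multiplication takes over. This minimal sketch assumes square power-of-two matrices; the CRAY implementation's handling of arbitrary shapes and its scratch-space economies are precisely what it omits.

    ```python
    import numpy as np

    def strassen(A, B, cutoff=64):
        """Strassen multiply for square power-of-two matrices (teaching sketch)."""
        n = A.shape[0]
        if n <= cutoff:
            return A @ B  # fall back to conventional multiply on small blocks
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        C = np.empty((n, n), dtype=A.dtype)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C
    ```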

  19. Measuring the impacts of seclusion on psychiatry inpatients and the effectiveness of a pilot single-session post-seclusion counselling intervention.

    PubMed

    Whitecross, Fiona; Seeary, Amy; Lee, Stuart

    2013-12-01

    Despite the accumulation of evidence demonstrating patients' accounts of trauma associated with seclusion, the use of evidence-based post-seclusion debriefing is not apparent in the published literature. This study aimed to identify the impacts seclusion has on an individual, using the Impact of Event Scale - Revised (IES-R), a standardized and widely used measure of trauma symptoms, and to measure the effectiveness of a post-seclusion counselling intervention in mitigating the experience of seclusion-related trauma and reducing time spent in seclusion. The study design involved a comparison of the seclusion-related trauma and time in seclusion experienced by consenting patients managed on the two inpatient wards of Alfred Psychiatry. To investigate the efficacy of post-seclusion counselling in reducing event-related trauma as well as the use of seclusion, a brief single-session intervention was piloted, comparing outcomes for patients treated on a ward implementing semistructured post-seclusion counselling and patients treated on a ward continuing with post-seclusion support as usual. A total of 31 patients consented to participate, with approximately 47% reporting trauma symptoms consistent with 'probable post-traumatic stress disorder' (IES-R total score >33), although there was no difference in trauma experience between groups. Significantly fewer hours were spent in seclusion by patients treated on the ward piloting the post-seclusion counselling intervention. The findings therefore highlight not only the potential for significant trauma stemming from a seclusion event, but also the capacity of interventions such as post-seclusion counselling to raise awareness of the need to minimize the time patients spend in seclusion. © 2013 Australian College of Mental Health Nurses Inc.

  20. Intermediate outcomes of a chronic disease self-management program for Spanish-speaking older adults in South Florida, 2008-2010.

    PubMed

    Melchior, Michael A; Seff, Laura R; Bastida, Elena; Albatineh, Ahmed N; Page, Timothy F; Palmer, Richard C

    2013-08-29

    The prevalence and negative health effects of chronic diseases are disproportionately high among Hispanics, the largest minority group in the United States. Self-management of chronic conditions by older adults is a public health priority. The objective of this study was to examine 6-week differences in self-efficacy, time spent performing physical activity, and perceived social and role activity limitations for participants in a chronic disease self-management program for Spanish-speaking older adults, Tomando Control de su Salud (TCDS). Through the Healthy Aging Regional Collaborative, 8 area agencies delivered 82 workshops in 62 locations throughout South Florida. Spanish-speaking participants were included in the analysis (N = 682) if they attended workshops from October 1, 2008, through December 31, 2010, were aged 55 years or older, had at least 1 chronic condition, and completed baseline and post-test surveys. Workshops consisted of six 2.5-hour sessions offered once per week for 6 weeks. A self-report survey was administered at baseline and again at the end of program instruction. To assess differences in outcomes, a repeated-measures general linear model was used, controlling for agency and baseline general health. All outcomes showed improvement at 6 weeks. Outcomes that improved significantly were self-efficacy to manage disease, perceived social and role activity limitations, time spent walking, and time spent performing other aerobic activities. Implementation of TCDS significantly improved 4 of 8 health promotion skills and behaviors of Spanish-speaking older adults in South Florida. A community-based implementation of TCDS has the potential to improve health outcomes for a diverse, Spanish-speaking, older adult population.

  1. Matrix Management: Is It Really Conflict Management.

    DTIC Science & Technology

    1976-11-01

    Defense Systems Management College, Fort Belvoir, VA. Report A036 516, November 1976. The report asks whether matrix management is really "conflict management"; as a result, sensitivity training for managers and their subordinates was conducted in order to implement the matrix concept. Cited references include Wilemon, "Conflict Management in Project Life Cycles," Sloan Management Review, Vol. 16, Spring 1975, pp. 31-50, and Thamhain, Hans J., and David L. …

  2. Implementing a free school-based fruit and vegetable programme: barriers and facilitators experienced by pupils, teachers and produce suppliers in the Boost study

    PubMed Central

    2014-01-01

    Background Multi-component interventions which combine educational and environmental strategies appear to be most effective in increasing fruit and vegetable (FV) intake in adolescents. However, multi-component interventions are complex to implement and are often poorly implemented. Identification of barriers and facilitators for implementation is warranted to improve future interventions. This study aimed to explore the implementation of two intervention components which addressed availability and accessibility of FV in the multi-component, school-based Boost study, which targeted FV intake among Danish 13-year-olds, and to identify barriers and facilitators for implementation among pupils, teachers and FV suppliers. Methods We conducted focus group interviews with 111 13-year-olds and 13 teachers, completed class observations at six schools, and conducted telephone interviews with all involved FV suppliers. Interviews were transcribed, coded and analysed using qualitative analytical procedures. Results FV suppliers affected the implementation of the FV programme at schools, and thereby pupils' intake, through their timing of delivery and through the quality, quantity and variety of the delivered FV. Teachers influenced the accessibility and appearance of FV by deciding if and when the pupils could eat FV and whether FV were cut up. Different aspects of time acted as barriers to teachers' implementation of the FV programme: time spent on having a FV break during lessons, time needed to prepare FV, and time spent on pupils' misbehaviour and inability to handle getting FV. Teachers' timing of cutting up and serving FV could become a barrier to pupils' FV intake due to enzymatic browning. The appearance of FV was important for pupils' intake, especially for girls; FV that did not appeal to the pupils, e.g. had turned brown after being cut up, were thrown around as part of a game by the pupils, especially boys. Girls appreciated the social dimension of eating FV together to a larger extent than boys. Conclusions Limited time and pupils' misbehaviour were barriers to teachers' implementation. Establishing FV delivery to schools as a new routine challenged FV suppliers' implementation. Food aesthetics were important for most pupils' FV intake, while the social dimension of eating FV together seemed more important to girls than boys. Trial registration Current Controlled Trials ISRCTN11666034. PMID:24512278

  3. Implementing a free school-based fruit and vegetable programme: barriers and facilitators experienced by pupils, teachers and produce suppliers in the Boost study.

    PubMed

    Aarestrup, Anne Kristine; Krølner, Rikke; Jørgensen, Thea Suldrup; Evans, Alexandra; Due, Pernille; Tjørnhøj-Thomsen, Tine

    2014-02-11

    Multi-component interventions which combine educational and environmental strategies appear to be most effective in increasing fruit and vegetable (FV) intake in adolescents. However, multi-component interventions are complex to implement and are often poorly implemented. Identification of barriers and facilitators for implementation is warranted to improve future interventions. This study aimed to explore the implementation of two intervention components which addressed availability and accessibility of FV in the multi-component, school-based Boost study, which targeted FV intake among Danish 13-year-olds, and to identify barriers and facilitators for implementation among pupils, teachers and FV suppliers. We conducted focus group interviews with 111 13-year-olds and 13 teachers, completed class observations at six schools, and conducted telephone interviews with all involved FV suppliers. Interviews were transcribed, coded and analysed using qualitative analytical procedures. FV suppliers affected the implementation of the FV programme at schools, and thereby pupils' intake, through their timing of delivery and through the quality, quantity and variety of the delivered FV. Teachers influenced the accessibility and appearance of FV by deciding if and when the pupils could eat FV and whether FV were cut up. Different aspects of time acted as barriers to teachers' implementation of the FV programme: time spent on having a FV break during lessons, time needed to prepare FV, and time spent on pupils' misbehaviour and inability to handle getting FV. Teachers' timing of cutting up and serving FV could become a barrier to pupils' FV intake due to enzymatic browning. The appearance of FV was important for pupils' intake, especially for girls; FV that did not appeal to the pupils, e.g. had turned brown after being cut up, were thrown around as part of a game by the pupils, especially boys. Girls appreciated the social dimension of eating FV together to a larger extent than boys. Limited time and pupils' misbehaviour were barriers to teachers' implementation. Establishing FV delivery to schools as a new routine challenged FV suppliers' implementation. Food aesthetics were important for most pupils' FV intake, while the social dimension of eating FV together seemed more important to girls than boys. Current Controlled Trials ISRCTN11666034.

  4. Exact solution of some linear matrix equations using algebraic methods

    NASA Technical Reports Server (NTRS)

    Djaferis, T. E.; Mitter, S. K.

    1979-01-01

    Algebraic methods are used to construct the exact solution P of the linear matrix equation PA + BP = -C, where A, B, and C are matrices with real entries. The emphasis is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution of the problem. The paper is divided into six sections, which include the proof of the basic lemma, the Liapunov equation, and the computer implementation of the rational, integer, and modular algorithms. Two numerical examples are given, and the entire calculation process is depicted.
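
    One finite algebraic procedure of the sort the paper advocates is vectorization: using the identity vec(XYZ) = (Z^T ⊗ X) vec(Y), the equation PA + BP = -C becomes the ordinary linear system (A^T ⊗ I + I ⊗ B) vec(P) = -vec(C). Below is a minimal numerical sketch of this route; the paper itself works in rational, integer, and modular arithmetic, which this floating-point version does not reproduce.

    ```python
    import numpy as np

    def solve_pa_plus_bp(A, B, C):
        """Solve P A + B P = -C via (A^T kron I + I kron B) vec(P) = -vec(C)."""
        m, n = A.shape[0], B.shape[0]                 # P is n x m
        K = np.kron(A.T, np.eye(n)) + np.kron(np.eye(m), B)
        p = np.linalg.solve(K, -C.flatten(order="F"))  # column-major vec
        return p.reshape((n, m), order="F")

    # Check on a case with known solution: choosing C = -(A + B) makes P = I exact.
    A = np.array([[1.0, 2.0], [0.0, 3.0]])
    B = np.array([[4.0, 1.0], [1.0, 5.0]])
    C = -(A + B)
    print(np.allclose(solve_pa_plus_bp(A, B, C), np.eye(2)))
    ```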

  5. Implementation of biological tissue Mueller matrix for polarization-sensitive optical coherence tomography based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Lin, Yongping; Zhang, Xiyang; He, Youwu; Cai, Jianyong; Li, Hui

    2018-02-01

    The Jones matrix and the Mueller matrix are the main tools for studying polarization devices. The Mueller matrix can also be used in biological tissue research to obtain complete tissue properties, but commercial optical coherence tomography systems do not provide the relevant analysis functions. Based on LabVIEW, a near-real-time display method for the Mueller matrix image of biological tissue is developed, which simultaneously gives the corresponding phase retardance image. A quarter-wave plate was placed at 45° in the sample arm. Experimental results from the two orthogonal channels show that the phase retardance based on the fixed incident-light-vector mode and the Mueller matrix based on the dynamic incident-light-vector mode can provide an effective analysis method for the existing system.

  6. Metal matrix composite micromechanics: In-situ behavior influence on composite properties

    NASA Technical Reports Server (NTRS)

    Murthy, P. L. N.; Hopkins, D. A.; Chamis, C. C.

    1989-01-01

    Recent efforts in computational mechanics methods for simulating the nonlinear behavior of metal matrix composites have culminated in the implementation of the Metal Matrix Composite Analyzer (METCAN) computer code. In METCAN material nonlinearity is treated at the constituent (fiber, matrix, and interphase) level where the current material model describes a time-temperature-stress dependency of the constituent properties in a material behavior space. The composite properties are synthesized from the constituent instantaneous properties by virtue of composite micromechanics and macromechanics models. The behavior of metal matrix composites depends on fabrication process variables, in situ fiber and matrix properties, bonding between the fiber and matrix, and/or the properties of an interphase between the fiber and matrix. Specifically, the influence of in situ matrix strength and the interphase degradation on the unidirectional composite stress-strain behavior is examined. These types of studies provide insight into micromechanical behavior that may be helpful in resolving discrepancies between experimentally observed composite behavior and predicted response.

  7. Computerized clinical documentation system in the pediatric intensive care unit

    PubMed Central

    2001-01-01

    Background To determine whether a computerized clinical documentation system (CDS): 1) decreased time spent charting and increased time spent in patient care; 2) decreased medication errors; 3) improved clinical decision making; 4) improved quality of documentation; and/or 5) improved shift-to-shift nursing continuity. Methods Before and after implementation of the CDS, a time study involving nursing care, medication delivery, and normalization of serum calcium and potassium values was performed. In addition, an evaluation of completeness of documentation and a clinician survey of shift-to-shift reporting were completed. This was a modified one-group, pretest-posttest design. Results With the CDS there were: improved legibility and completeness of documentation, data with better accessibility and accuracy, and no change in time spent in direct patient care or charting by nursing staff. Incidental observations from the study included improved management functions for our nurse manager; improved JCAHO documentation compliance; timely access to clinical data (labs, vitals, etc.); a decrease in time and resource use for audits; improved reimbursement because of the ability to reconstruct lost charts; limited human data entry by automatic data logging; and eliminated costs of printing forms. CDS cost was reasonable. Conclusions When compared to a paper chart, the CDS provided a more legible, complete, and accessible patient record without affecting time spent in direct patient care. The availability of the CDS improved shift-to-shift reporting. Other observations showed that the CDS improved management capabilities, helped physicians deliver care, improved reimbursement, limited data-entry errors, and reduced costs. PMID:11604105

  8. Value-added care: a paradigm shift in patient care delivery.

    PubMed

    Upenieks, Valda V; Akhavan, Jaleh; Kotlerman, Jenny

    2008-01-01

    Spiraling costs in health care have placed hospitals in a constant state of transition. As a result, nursing practice is influenced by numerous factors and has remained in a continuous state of flux; the multiple changes in nurse/patient ratios and in the blend of front-line nurses over the last two decades are examples of this transition. To reframe nursing practice into an economic equation that captures cost, quality, and service, a paradigm shift in thinking is needed in order to assess work redesign. Nursing productivity must be evaluated in terms of value-added care, a vision that goes beyond direct care activities and includes team collaboration, physician rounding, increased RN-to-aide communication, and patient centeredness, all of which are crucial to the nurse's role and the patient's well-being. The science of appropriate staffing depends on assessment and implementation of systematic changes, best illustrated through a "systems theory" framework: a throughput transformation is required to create process changes with input elements (the number of front-line nurses) in order to increase time spent in value-added care and decrease waste activities, with an improvement in efficiency, quality, and service. The purpose of this pilot study was twofold: (a) to gain an understanding of how much time RNs spent in value-added care, and (b) to determine whether increasing the combined level of RNs and unlicensed assistive personnel increased the amount of time spent in value-added care compared to time spent in necessary tasks and waste.

  9. The cost of routine Aedes aegypti control and of insecticide-treated curtain implementation.

    PubMed

    Baly, Alberto; Flessa, Steffen; Cote, Marilys; Thiramanus, Thirapong; Vanlerberghe, Veerle; Villegas, Elci; Jirarojwatana, Somchai; Van der Stuyft, Patrick

    2011-05-01

    Insecticide-treated curtains (ITCs) are promoted for controlling the Dengue vector Aedes aegypti. We assessed the cost of the routine Aedes control program (RACP) and the cost of ITC implementation through the RACP and health committees in Venezuela and through health volunteers in Thailand. The yearly cost of the RACP per household amounted to US$2.14 and $1.89, respectively. The ITC implementation cost over three times more, depending on the channel used. In Venezuela the RACP was the most efficient implementation-channel. It spent US$1.90 (95% confidence interval [CI]: 1.83; 1.97) per curtain distributed, of which 76.9% for the curtain itself. Implementation by health committees cost significantly (P = 0.02) more: US$2.32 (95% CI: 1.93; 2.61) of which 63% for the curtain. For ITC implementation to be at least as cost-effective as the RACP, at equal effectiveness and actual ITC prices, the attained curtain coverage and the adulticiding effect should last for 3 years.

  10. The Cost of Routine Aedes aegypti Control and of Insecticide-Treated Curtain Implementation

    PubMed Central

    Baly, Alberto; Flessa, Steffen; Cote, Marilys; Thiramanus, Thirapong; Vanlerberghe, Veerle; Villegas, Elci; Jirarojwatana, Somchai; Van der Stuyft, Patrick

    2011-01-01

    Insecticide-treated curtains (ITCs) are promoted for controlling the Dengue vector Aedes aegypti. We assessed the cost of the routine Aedes control program (RACP) and the cost of ITC implementation through the RACP and health committees in Venezuela and through health volunteers in Thailand. The yearly cost of the RACP per household amounted to US$2.14 and $1.89, respectively. The ITC implementation cost over three times more, depending on the channel used. In Venezuela the RACP was the most efficient implementation-channel. It spent US$1.90 (95% confidence interval [CI]: 1.83; 1.97) per curtain distributed, of which 76.9% for the curtain itself. Implementation by health committees cost significantly (P = 0.02) more: US$2.32 (95% CI: 1.93; 2.61) of which 63% for the curtain. For ITC implementation to be at least as cost-effective as the RACP, at equal effectiveness and actual ITC prices, the attained curtain coverage and the adulticiding effect should last for 3 years. PMID:21540384

  11. Dynamic leaching studies of 48 MWd/kgU UO2 commercial spent nuclear fuel under oxic conditions

    NASA Astrophysics Data System (ADS)

    Serrano-Purroy, D.; Casas, I.; González-Robles, E.; Glatz, J. P.; Wegen, D. H.; Clarens, F.; Giménez, J.; de Pablo, J.; Martínez-Esparza, A.

    2013-03-01

    The leaching of a high-burn-up spent nuclear fuel (48 MWd/kgU) has been studied in a carbonate-containing solution under oxic conditions using a Continuously Stirred Tank Flow-Through Reactor (CSTR). Two samples of the fuel were used: one prepared from the centre of the pellet (labelled CORE) and another from the pellet periphery, enriched in the so-called High Burn-Up Structure (HBS, labelled OUT). For uranium and the actinides, the results showed that U, Np, Am, and Cm gave very similar normalized dissolution rates, while Pu showed slower dissolution rates in both samples. In addition, dissolution rates were consistently two to four times lower for the OUT sample than for the CORE sample. Considering fission product release, the main results are that Y, Tc, La, and Nd dissolved very similarly to uranium, while Cs, Sr, Mo, and Rb showed up to 10 times higher dissolution rates; Rh, Ru, and Zr seemed to have lower dissolution rates than uranium. The lowest dissolution rates were found for the OUT sample. Three different contributions to uranium release were detected, modelled, and attributed to the oxidation layer, fines, and matrix dissolution.

  12. Comparing implementations of penalized weighted least-squares sinogram restoration.

    PubMed

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-11-01

    A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors' previous penalized-likelihood implementation. Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes.
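
    A skeleton of the iterative strategy is given below: a Jacobi-preconditioned conjugate gradient driven by a matrix-free operator, applied here to a toy one-dimensional PWLS denoising problem, minimize (y - x)'W(y - x) + beta * ||Dx||^2, whose normal equations are (W + beta D'D) x = W y. The operator, the diagonal preconditioner, and the first-difference penalty are illustrative assumptions; the authors' restoration acts on full sinograms with problem-specific sparsity and preconditioning.

    ```python
    import numpy as np

    def pcg(apply_H, b, diag_H, tol=1e-8, max_iter=200):
        """Jacobi-preconditioned CG for H x = b, H symmetric positive definite.
        apply_H computes H @ v without forming H; diag_H is the preconditioner."""
        x = np.zeros_like(b)
        r = b.copy()
        z = r / diag_H
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Hp = apply_H(p)
            alpha = rz / (p @ Hp)
            x += alpha * p
            r -= alpha * Hp
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            z = r / diag_H
            rz_next = r @ z
            p = z + (rz_next / rz) * p
            rz = rz_next
        return x

    # Toy PWLS problem on one detector row, D = first-difference operator.
    n, beta = 256, 5.0
    rng = np.random.default_rng(0)
    truth = np.cumsum(rng.normal(size=n))
    w = rng.uniform(0.5, 2.0, size=n)                # statistical weights (inverse variances)
    y = truth + rng.normal(scale=1 / np.sqrt(w))

    def apply_H(v):                                  # (W + beta * D'D) v, matrix-free
        Dv = np.diff(v)
        DtDv = np.concatenate(([-Dv[0]], Dv[:-1] - Dv[1:], [Dv[-1]]))
        return w * v + beta * DtDv

    diag_H = w + beta * np.concatenate(([1.0], 2.0 * np.ones(n - 2), [1.0]))
    x = pcg(apply_H, w * y, diag_H)                  # restored row
    ```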

  13. [Economic impact of an automated dispensing system in an intensive care unit].

    PubMed

    Kheniene, F; Bedouch, P; Durand, M; Marie, F; Brudieu, E; Tourlonnias, M-M; Bongi, P; Allenet, B; Calop, J

    2008-03-01

    Automated dispensing systems (ADS) allow a reduction of medication errors and an improvement of drug distribution in clinical wards. The objective of this study was to evaluate the economic impact of an ADS in an intensive care unit. A cost-benefit model was constructed from the hospital perspective. The system was evaluated before and after implementation of an ADS in a 12-bed cardiovascular intensive care unit of a French teaching hospital: (a) by measuring the nurse and pharmacy technician working time required for various tasks; (b) by measuring the cost of drug storage and the cost of expired drugs; and (c) by measuring the nurses' acceptability. After the ADS was installed, nursing personnel spent less time on medication-related activities (a mean of 1.9 hours/day of nursing time), while pharmacy technicians spent more time on floor-stock activities (a mean of 0.7 hour/day of technician time). Implementation reduced the cost of drug storage by 56% (14,742 euros) and the cost of expired drugs by 9,086 euros per year. Finally, a cost-benefit analysis including potential savings in working time showed a net benefit of 71,586 euros (14,317 euros/year). The ADS was given high marks by the nurses; 77% wanted to keep it on their unit. Implementation of an ADS is thus expected to generate direct savings for the hospital and a reallocation of working time, freeing nurses to interact with patients and pharmacy technicians to get involved on the ward.

  14. Involution symmetries and the PMNS matrix

    NASA Astrophysics Data System (ADS)

    Pal, Palash B.; Byakti, Pritibhajan

    2017-10-01

    C S Lam has suggested that the PMNS matrix (or at least some of its elements) can be predicted by embedding the residual symmetry of the leptonic mass terms into a bigger symmetry. We analyse the possibility that the residual symmetries consist of involution generators only and explore how Lam's idea can be implemented.

  15. Skeletal Muscle Regeneration in a Rat (Rattus norvegicus) Model with CorMatrix and Adipose Derived Stem Cells

    DTIC Science & Technology

    2015-07-16

    Does this outcome or training benefit the DoD/USAF? Yes. This study provided evidence that extracellular matrix made from swine small intestinal submucosa does… Isometric functional testing was implemented prior to euthanasia at 10 months to further evaluate healing at later time points.

  16. Solving rational matrix equations in the state space with applications to computer-aided control-system design

    NASA Technical Reports Server (NTRS)

    Packard, A. K.; Sastry, S. S.

    1986-01-01

    A method of solving a class of linear matrix equations over various rings is proposed, using results from linear geometric control theory. An algorithm, successfully implemented, is presented, along with non-trivial numerical examples. Applications of the method to the algebraic control system design methodology are discussed.

  17. Generating Multiple Imputations for Matrix Sampling Data Analyzed with Item Response Models.

    ERIC Educational Resources Information Center

    Thomas, Neal; Gan, Nianci

    1997-01-01

    Describes and assesses missing data methods currently used to analyze data from matrix sampling designs implemented by the National Assessment of Educational Progress. Several improved methods are developed, and these models are evaluated using an EM algorithm to obtain maximum likelihood estimates followed by multiple imputation of complete data…

  18. Heuristic Implementation of Dynamic Programming for Matrix Permutation Problems in Combinatorial Data Analysis

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Kohn, Hans-Friedrich; Stahl, Stephanie

    2008-01-01

    Dynamic programming methods for matrix permutation problems in combinatorial data analysis can produce globally-optimal solutions for matrices up to size 30x30, but are computationally infeasible for larger matrices because of enormous computer memory requirements. Branch-and-bound methods also guarantee globally-optimal solutions, but computation…
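
    The flavor of the dynamic programming approach, and its 2^n memory bottleneck, can be seen in a few lines: the optimal objective for each subset of objects is built from its one-smaller subsets, so only sets, not full orderings, are enumerated. The sketch below is generic for any additively decomposable permutation criterion; the increment function in the usage lines (maximizing the sum of above-diagonal entries of an asymmetric matrix, i.e. a linear ordering objective) is a hypothetical example, not one of the paper's criteria.

    ```python
    from itertools import combinations

    def dp_permute(n, delta):
        """Subset DP for matrix permutation: delta(j, placed) is the objective
        increment when object j is appended after frozenset `placed`.
        The table has 2^n entries, which is the memory limit the abstract notes."""
        best = {frozenset(): (0.0, None)}
        for size in range(1, n + 1):
            for subset in combinations(range(n), size):
                S = frozenset(subset)
                best[S] = min((best[S - {j}][0] + delta(j, S - {j}), j) for j in S)
        order, S = [], frozenset(range(n))
        while S:                       # walk back through the table
            j = best[S][1]
            order.append(j)
            S = S - {j}
        return best[frozenset(range(n))][0], order[::-1]

    # Hypothetical criterion: maximize the above-diagonal sum of an asymmetric matrix.
    A = [[0, 3, 1], [2, 0, 5], [4, 1, 0]]
    cost, order = dp_permute(3, lambda j, placed: -sum(A[i][j] for i in placed))
    print(order, -cost)
    ```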

  19. Performance of low-rank QR approximation of the finite element Biot-Savart law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D; Fasenfest, B

    2006-10-16

    In this paper we present a low-rank QR method for evaluating the discrete Biot-Savart law. Our goal is to develop an algorithm that is easily implemented on parallel computers. It is assumed that the known current density and the unknown magnetic field are both expressed in a finite element expansion, and we wish to compute the degrees of freedom (DOF) in the basis function expansion of the magnetic field. The matrix that maps the current DOF to the field DOF is full, but if the spatial domain is properly partitioned the matrix can be written as a block matrix, with the blocks representing distant interactions being low rank and having a compressed QR representation. While an octree partitioning of the matrix may be ideal, for ease of parallel implementation we employ a partitioning based on the number of processors. The rank of each block (i.e., the compression) is determined by the specific geometry and is computed dynamically. In this paper we provide the algorithmic details and present computational results for large-scale computations.
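
    The core kernel is easy to demonstrate with a rank-revealing QR: a far-field block is factored once, its numerical rank is read off the decay of the diagonal of R, and every subsequent block-times-vector product runs through the thin factors. A sketch under stated assumptions: scipy's pivoted QR stands in for whatever compressed QR representation the authors used, and the truncation rule is ours.

    ```python
    import numpy as np
    from scipy.linalg import qr

    def compress_block(K, tol=1e-8):
        """Pivoted-QR compression of a far-interaction block: K ~= Q @ S,
        with the rank chosen dynamically from the decay of |diag(R)|."""
        Q, R, piv = qr(K, mode="economic", pivoting=True)
        r = int(np.sum(np.abs(np.diag(R)) > tol * abs(R[0, 0])))
        S = np.empty_like(R[:r])
        S[:, piv] = R[:r]              # undo the column permutation
        return Q[:, :r], S

    # Applying the compressed block costs O((m + n) r) instead of O(m n).
    rng = np.random.default_rng(1)
    K = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 300))  # numerically low rank
    Q, S = compress_block(K)
    x = rng.normal(size=300)
    print(np.allclose(K @ x, Q @ (S @ x)))
    ```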

  20. Romania: Brand-New Engineering Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ken Allen; Lucian Biro; Nicolae Zamfir

    The HEU spent nuclear fuel transport from Romania was a pilot project in the framework of the Russian Research Reactor Fuel Return Program (RRRFR), being the first fully certified spent nuclear fuel shipment by air. The successful implementation of the Romanian shipment also introduced various new technologies into the program, later used by other participating countries. Until 2009, the RRRFR program repatriated to the Russian Federation HEU spent nuclear fuel of Russian origin from many countries, including Uzbekistan, the Czech Republic, Latvia, Hungary, Kazakhstan, and Bulgaria. The means of transport were varied: from the specialized TK-5 train for the carriage of Russian TUK-19 transport casks, to platform trains for 20-ft freight ISO containers carrying Czech Skoda VPVR/M casks; from river barge on the Danube, to vessels on the Mediterranean Sea and the Atlantic Ocean. Initially, in 2005, the transport plan for the HEU spent nuclear fuel from the National Institute for R&D in Nuclear Physics and Nuclear Engineering 'Horia Hulubei' in Magurele, Romania considered a similar scheme, using the specialized TK-5 train transiting Ukraine to the destination point in the Russian Federation, or, as an alternative, using the means and route of the spent nuclear fuel periodically shipped from the Bulgarian nuclear power plant Kozloduy (by barge on the Danube, and by train through Ukraine to the Russian Federation). Because an agreement with the transit country could not be reached in due time, in February 2007 the US, Russian, and Romanian project partners decided to adopt air shipment of the spent nuclear fuel as the prime option, eliminating the need for agreements with any transit countries. By this time the spent nuclear fuel inspections had been completed, proving the compliance of the burn-up parameters with the international requirements for air shipments of radioactive materials. The short air route, overflying no countries other than the country of origin and the country of destination, also contributed to this decision. Efficient project management and cooperation between the three countries (Russia, Romania, and the USA) made it possible, after two and a half years of preparation work, for the first fully certified spent nuclear fuel air shipment to take place on 29 June 2009, from the Romanian airport 'Henri Coanda' to the Russian airport 'Koltsovo' near Yekaterinburg. One day before that, after a record preparation period of three weeks, another HEU cargo was shipped by air from the Romanian Institute for Nuclear Research in Pitesti to Russia, containing fresh pellets and thereby making Romania the third HEU-free country in the RRRFR program.

  1. Did the No Child Left Behind Act Miss the Mark? Assessing the Potential Benefits from an Accountability System for Early Childhood Education

    ERIC Educational Resources Information Center

    Miller, Lawrence J.; Smith, Stephanie C.

    2011-01-01

    With growing evidence that human capital investment is more efficiently spent on younger children coupled with wide variation in preschool access across states, this article uses a neoliberal approach to examine the potential social costs and benefits that could accrue should the United States decide to implement a centralized preschool…

  2. Weaving Action Learning into the Fabric of Manufacturing: The Impact of Humble Inquiry and Structured Reflection in a Cross-Cultural Context

    ERIC Educational Resources Information Center

    Luckman, Elizabeth A.

    2017-01-01

    This account of practice examines the implementation of and reactions to action learning through the Lean methodology in a unique, cross-cultural context. I review my time spent as a Lean coach; engaging with, training, and using action learning with employees in a garment manufacturing facility located in Bali, Indonesia. This research addresses…

  3. Matrix thermalization

    NASA Astrophysics Data System (ADS)

    Craps, Ben; Evnin, Oleg; Nguyen, Kévin

    2017-02-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  4. Weighted graph based ordering techniques for preconditioned conjugate gradient methods

    NASA Technical Reports Server (NTRS)

    Clift, Simon S.; Tang, Wei-Pai

    1994-01-01

    We describe the basis of a matrix ordering heuristic for improving the incomplete factorization used in preconditioned conjugate gradient techniques applied to anisotropic PDEs. Several new matrix ordering techniques, derived from well-known algorithms in combinatorial graph theory, which attempt to implement this heuristic, are described. These ordering techniques are tested on a number of matrices arising from linear anisotropic PDEs and compared with other matrix ordering techniques. A variation of reverse Cuthill-McKee (RCM) is shown to generally improve the quality of incomplete factorization preconditioners.
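
    The baseline these techniques are measured against is easy to reproduce: reverse Cuthill-McKee is a stock library routine, and its effect on matrix bandwidth is immediate. The snippet below shows only that baseline, not the paper's weighted-graph heuristic, and the random test matrix is an assumption.

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    rng = np.random.default_rng(2)
    A = sprandom(50, 50, density=0.05, random_state=rng)
    A = (A + A.T).tocsr()
    A.data[:] = 1.0                          # keep only the symmetric sparsity pattern

    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    B = A[perm, :][:, perm]                  # symmetrically reordered matrix

    def bandwidth(M):
        r, c = M.nonzero()
        return int(np.abs(r - c).max())

    print(bandwidth(A), "->", bandwidth(B))  # RCM typically shrinks the bandwidth
    ```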

  5. Technical note: Avoiding the direct inversion of the numerator relationship matrix for genotyped animals in single-step genomic best linear unbiased prediction solved with the preconditioned conjugate gradient.

    PubMed

    Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I

    2017-01-01

    This paper evaluates an efficient implementation to multiply the inverse of the numerator relationship matrix for genotyped animals (A22^-1) by a vector (q). The computation is required for solving the mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG). The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of the numerator relationship matrix (A^-1) including genotyped animals and their ancestors. The elements of A^-1 were rapidly calculated with Henderson's rule and stored as sparse matrices in memory. The product A22^-1 q was implemented as a series of sparse matrix-vector multiplications. Diagonal elements of A22^-1, which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation of A22^-1 q was compared with explicit inversion of A22 with 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 s, 3 min, and 5 min, respectively, for setting up. Only <1 s was required for the multiplication in each PCG iteration for any of the data sets. When the equations in ssGBLUP are solved with the PCG algorithm, A22^-1 is no longer a limiting factor in the computations.
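
    The trick condenses to a few lines: if A^-1 restricted to genotyped animals and their ancestors is partitioned into blocks (subscript 1 for non-genotyped ancestors, 2 for genotyped animals), then A22^-1 v = Ai22 v - Ai21 (Ai11)^-1 Ai12 v, so one sparse factorization replaces the dense inverse. The sketch below assumes the four sparse blocks are already assembled; the names and the use of scipy's splu are ours, not the paper's code.

    ```python
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    def make_A22inv_times(Ai11, Ai12, Ai21, Ai22):
        """Return v -> A22^{-1} v using the sparse blocks Ai** of A^{-1},
        via the identity A22^{-1} = Ai22 - Ai21 Ai11^{-1} Ai12."""
        lu = splu(csc_matrix(Ai11))   # factor once, reuse at every PCG iteration
        def apply(v):
            return Ai22 @ v - Ai21 @ lu.solve(Ai12 @ v)
        return apply
    ```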

  6. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    PubMed Central

    Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha

    2018-01-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
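
    Computationally, what distinguishes MESS from SAR is the operator applied to the data: e^(alpha W) in place of (I - rho W)^-1, and the exponential is a rapidly convergent series needing only sparse products W @ y. A sketch of that one step; the row-standardized weight matrix and the truncation length are assumptions.

    ```python
    import numpy as np

    def mess_transform(W, alpha, y, terms=30):
        """Apply the MESS operator e^{alpha W} to y by a truncated Taylor series.
        W is a (row-standardized) spatial weight matrix; alpha < 0 corresponds
        to positive spatial dependence."""
        out = y.copy()
        term = y.copy()
        for k in range(1, terms):
            term = (alpha / k) * (W @ term)   # builds alpha^k / k! * W^k y incrementally
            out += term
        return out
    ```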

  7. The multifacet graphically contracted function method. I. Formulation and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely

    2014-08-14

    The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N²n⁴) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N₂ dissociation, cubic H₈ dissociation, the symmetric dissociation of H₂O, and the insertion of Be into H₂. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.

  8. Community participation in biofilm matrix assembly and function.

    PubMed

    Mitchell, Kaitlin F; Zarnowski, Robert; Sanchez, Hiram; Edward, Jessica A; Reinicke, Emily L; Nett, Jeniel E; Mitchell, Aaron P; Andes, David R

    2015-03-31

    Biofilms of the fungus Candida albicans produce extracellular matrix that confers such properties as adherence and drug resistance. Our prior studies indicate that the matrix is complex, with major polysaccharide constituents being α-mannan, β-1,6 glucan, and β-1,3 glucan. Here we implement genetic, biochemical, and pharmacological approaches to unravel the contributions of these three constituents to matrix structure and function. Interference with synthesis or export of any one polysaccharide constituent altered matrix concentrations of each of the other polysaccharides. Each of these was also required for matrix function, as assessed by assays for sequestration of the antifungal drug fluconazole. These results indicate that matrix biogenesis entails coordinated delivery of the individual matrix polysaccharides. To understand whether coordination occurs at the cellular level or the community level, we asked whether matrix-defective mutant strains could be coaxed to produce functional matrix through biofilm coculture. We observed that mixed biofilms inoculated with mutants containing a disruption in each polysaccharide pathway had restored mature matrix structure, composition, and biofilm drug resistance. Our results argue that functional matrix biogenesis is coordinated extracellularly and thus reflects the cooperative actions of the biofilm community.

  9. Community participation in biofilm matrix assembly and function

    PubMed Central

    Mitchell, Kaitlin F.; Zarnowski, Robert; Sanchez, Hiram; Edward, Jessica A.; Reinicke, Emily L.; Nett, Jeniel E.; Mitchell, Aaron P.; Andes, David R.

    2015-01-01

    Biofilms of the fungus Candida albicans produce extracellular matrix that confers such properties as adherence and drug resistance. Our prior studies indicate that the matrix is complex, with major polysaccharide constituents being α-mannan, β-1,6 glucan, and β-1,3 glucan. Here we implement genetic, biochemical, and pharmacological approaches to unravel the contributions of these three constituents to matrix structure and function. Interference with synthesis or export of any one polysaccharide constituent altered matrix concentrations of each of the other polysaccharides. Each of these was also required for matrix function, as assessed by assays for sequestration of the antifungal drug fluconazole. These results indicate that matrix biogenesis entails coordinated delivery of the individual matrix polysaccharides. To understand whether coordination occurs at the cellular level or the community level, we asked whether matrix-defective mutant strains could be coaxed to produce functional matrix through biofilm coculture. We observed that mixed biofilms inoculated with mutants containing a disruption in each polysaccharide pathway had restored mature matrix structure, composition, and biofilm drug resistance. Our results argue that functional matrix biogenesis is coordinated extracellularly and thus reflects the cooperative actions of the biofilm community. PMID:25770218

  10. A transition matrix approach to the Davenport gyro calibration scheme

    NASA Technical Reports Server (NTRS)

    Natanson, G. A.

    1998-01-01

    The in-flight gyro calibration scheme commonly used by NASA Goddard Space Flight Center (GSFC) attitude ground support teams closely follows the original version of the Davenport algorithm developed in the late seventies. Its basic idea is to minimize the least-squares differences between attitudes gyro-propagated over the course of a maneuver and those determined using post-maneuver sensor measurements. The paper recasts the scheme in a recursive form by combining the necessary partials into a rectangular matrix, which is propagated in exactly the same way as a Kalman filter's square transition matrix. The nontrivial structure of the propagation matrix arises from the fact that attitude errors are not included in the state vector, and therefore their derivatives with respect to the estimated gyro parameters do not appear in the transition matrix defined in the conventional way. In cases when the required accuracy can be achieved by a single iteration, representation of the Davenport gyro calibration scheme in recursive form allows one to discard each gyro measurement immediately after it is used to propagate the attitude and state transition matrix. Another advantage of the new approach is that it utilizes the same expression for the error sensitivity matrix as that used by the Kalman filter. As a result, the suggested modification of the Davenport algorithm makes it possible to reuse software modules implemented in the Kalman filter estimator, where both attitude errors and gyro calibration parameters are included in the state vector. The new approach has been implemented in the ground calibration utilities used to support the Tropical Rainfall Measuring Mission (TRMM). The paper analyzes some preliminary results of gyro calibration performed by the TRMM ground attitude support team. It is demonstrated that the effect of a second iteration on the estimated values of the calibration parameters is negligibly small, and therefore there is no need to store processed gyro data. This opens a promising opportunity for onboard implementation of the suggested recursive procedure by combining it with the Kalman filter used to obtain the necessary attitude solutions at the beginning and end of each maneuver.

  11. A space efficient flexible pivot selection approach to evaluate determinant and inverse of a matrix.

    PubMed

    Jafree, Hafsa Athar; Imtiaz, Muhammad; Inayatullah, Syed; Khan, Fozia Hanif; Nizami, Tajuddin

    2014-01-01

    This paper presents new, simple approaches for evaluating the determinant and inverse of a matrix. The choice of pivot has been kept arbitrary, which reduces the error when solving an ill-conditioned system. Computation of the determinant has been made more efficient by avoiding unnecessary data storage and by reducing the order of the matrix at each iteration, while the dictionary notation [1] has been incorporated for computing the matrix inverse, saving unnecessary calculations. These algorithms are highly classroom-oriented and easy for students to use and implement. By taking advantage of the flexibility in pivot selection, one may largely avoid the development of fractions. Unlike the matrix inversion methods of [2] and [3], the presented algorithms obviate the use of permutations and inverse permutations.
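
    The order-reduction idea is compact enough to show directly: pick any nonzero pivot a_ij, multiply the running determinant by (-1)^(i+j) a_ij, and replace the matrix by the one-order-smaller Schur complement, so no storage beyond the shrinking matrix is needed. A classroom-style sketch consistent with the paper's flexible pivoting, though the default pivot rule below is our own choice, not the authors' exact algorithm.

    ```python
    import numpy as np

    def det_reduce(A, pick_pivot=None):
        """Determinant by repeated order reduction about a freely chosen pivot.
        pick_pivot(A) may implement any rule (e.g. one that avoids fractions);
        the default picks the largest-magnitude entry."""
        A = np.array(A, dtype=float)
        det = 1.0
        while A.shape[0] > 1:
            i, j = pick_pivot(A) if pick_pivot else np.unravel_index(np.abs(A).argmax(), A.shape)
            p = A[i, j]
            if p == 0:
                return 0.0
            det *= p * (-1) ** (i + j)
            rows = [r for r in range(A.shape[0]) if r != i]
            cols = [c for c in range(A.shape[1]) if c != j]
            # Schur-complement update: eliminate row i and column j in one sweep.
            A = A[np.ix_(rows, cols)] - np.outer(A[rows, j], A[i, cols]) / p
        return det * A[0, 0]

    M = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]]
    print(det_reduce(M), np.linalg.det(np.array(M)))   # both ~ 18
    ```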

  12. Electronic implementation of associative memory based on neural network models

    NASA Technical Reports Server (NTRS)

    Moopenn, A.; Lambe, John; Thakoor, A. P.

    1987-01-01

    An electronic embodiment of a neural-network-based associative memory in the form of a binary connection matrix is described. The nature of false memory errors, their effect on the information storage capacity of binary connection matrix memories, and a novel technique to eliminate such errors with the help of asymmetrical extra connections are discussed. The stability of the matrix memory system incorporating a unique local inhibition scheme is analyzed in terms of local minimization of an energy function. The memory's stability, dynamic behavior, and recall capability are investigated using a 32-'neuron' electronic neural network memory with a 1024-element programmable binary connection matrix.
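
    The storage rule for such a binary connection matrix is one line: a connection is switched on wherever two units are co-active in some stored pattern, and recall thresholds the summed inputs through that matrix. The sketch below shows only this generic scheme; the hardware's asymmetric error-suppressing connections and local inhibition circuitry are not modeled, and all names are ours.

    ```python
    import numpy as np

    def store(patterns):
        """Binary connection matrix: T[i, j] = 1 if units i and j are ever co-active."""
        n = patterns.shape[1]
        T = np.zeros((n, n), dtype=np.uint8)
        for p in patterns:
            T |= np.outer(p, p).astype(np.uint8)
        np.fill_diagonal(T, 0)
        return T

    def recall(T, probe, k):
        """One synchronous update: the k units with the highest input sums fire."""
        s = T @ probe
        out = np.zeros_like(probe)
        out[np.argsort(s)[-k:]] = 1
        return out

    pats = np.zeros((3, 16), dtype=np.uint8)
    pats[0, [0, 3, 7, 12]] = 1                       # three sparse, disjoint patterns
    pats[1, [1, 4, 9, 13]] = 1
    pats[2, [2, 5, 8, 14]] = 1
    T = store(pats)
    cue = pats[0].copy(); cue[0] = 0                 # degrade one active bit
    print(np.array_equal(recall(T, cue, 4), pats[0]))  # the full pattern is recovered
    ```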

  13. Project - line interaction implementing projects in JPL's Matrix

    NASA Technical Reports Server (NTRS)

    Baroff, Lynn E.

    2006-01-01

    Can programmatic and line organizations really work interdependently, to accomplish their work as a community? Does the matrix produce a culture in which individuals take personal responsibility for both immediate mission success and long-term growth? What is the secret to making a matrix enterprise actually work? This paper will consider those questions, and propose that developing an effective project-line partnership demands primary attention to personal interactions among people. Many potential problems can be addressed by careful definition of roles, responsibilities, and work processes for both parts of the matrix -- and by deliberate and clear communication between project and line organizations and individuals.

  14. A generalization of random matrix theory and its application to statistical physics.

    PubMed

    Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H

    2017-02-01

    To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first determine, analytically and numerically, how auto-correlations affect the eigenvalue distribution of the correlation matrix. We then introduce ARRMT with a detailed procedure for how to implement the method. Finally, we illustrate the method using two examples, taken from inflation rates and from air pressure data for 95 US cities.
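
    The core of the procedure can be sketched briefly: fit an autoregressive model to each series, generate ensembles of mutually independent surrogates with the same auto-correlation, and use the eigenvalue spectrum of the surrogates' correlation matrices as the null against which the empirical eigenvalues are judged. The sketch below uses an AR(1) fit as a stand-in for the paper's general autoregressive modeling; the function name and defaults are ours.

    ```python
    import numpy as np

    def arrmt_null_spectrum(X, n_surrogates=100, seed=0):
        """Null eigenvalue distribution for an (N series x T samples) data matrix X:
        AR(1) surrogates preserve each series' lag-1 auto-correlation but share no
        cross-correlation, so eigenvalues outside this range flag real structure."""
        rng = np.random.default_rng(seed)
        N, T = X.shape
        phis = [np.corrcoef(x[:-1], x[1:])[0, 1] for x in X]  # lag-1 auto-correlations
        eigs = []
        for _ in range(n_surrogates):
            S = np.empty((N, T))
            for i, phi in enumerate(phis):
                e = rng.normal(size=T)
                s = np.empty(T); s[0] = e[0]
                for t in range(1, T):
                    s[t] = phi * s[t - 1] + e[t]   # AR(1) with the fitted coefficient
                S[i] = s
            eigs.extend(np.linalg.eigvalsh(np.corrcoef(S)))
        return np.array(eigs)
    ```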

  15. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive sensing (CS) is a novel scheme in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. Signal reconstruction techniques are computationally intensive and have sluggish performance, which makes them impractical for real-time processing applications. This paper presents novel architectures for the Orthogonal Matching Pursuit (OMP) algorithm, one of the popular CS reconstruction algorithms. We show implementation results for the proposed architectures on FPGA, ASIC, and a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%. For the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with varying signal lengths N and sparsity m. The algorithm is divided into three kernels, each parallelized to reduce execution time, while efficient reuse of the matrix operators allows us to reduce area; matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with a maximum sparsity of 8 using 64 measurements. The implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, and 18 μs with the thresholding method; the ASIC implementation reconstructs the signal in 13 μs; our custom many-core, operating at 1.18 GHz, takes 18.28 μs. Compared to previously published work on the same algorithm and matrix size, the proposed FPGA and ASIC implementations perform 1.3x and 1.8x faster, respectively, and the proposed many-core implementation performs 3000x faster than the CPU and 2000x faster than the GPU.
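
    For orientation, the OMP loop that all three platforms accelerate is short in software form; the division into correlation, least-squares, and residual-update stages below is our gloss on the paper's three kernels, which it does not name. The demonstration reuses the paper's problem size (N = 256, sparsity 8, 64 measurements); the Gaussian sensing matrix is an assumption.

    ```python
    import numpy as np

    def omp(Phi, y, m):
        """Orthogonal Matching Pursuit: greedily pick the column of Phi most
        correlated with the residual, then re-fit the support by least squares."""
        residual = y.copy()
        support = []
        for _ in range(m):
            j = int(np.argmax(np.abs(Phi.T @ residual)))              # correlation stage
            support.append(j)
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)  # least-squares stage
            residual = y - Phi[:, support] @ x_s                       # residual update
        x = np.zeros(Phi.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(4)
    Phi = rng.normal(size=(64, 256)) / np.sqrt(64)
    x0 = np.zeros(256)
    x0[rng.choice(256, 8, replace=False)] = rng.normal(size=8)
    # Recovery succeeds with high probability at this size.
    print(np.allclose(omp(Phi, Phi @ x0, 8), x0, atol=1e-6))
    ```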

  16. Exploiting Symmetry on Parallel Architectures.

    NASA Astrophysics Data System (ADS)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described; this code runs faster than previous serial programs, and it discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described, with applications proposed in learning, vision, pattern recognition, and statistics. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques and were used in the investigation of various physical phenomena.

  17. COMGEN - A PROGRAM FOR GENERATING FINITE ELEMENT MODELS OF COMPOSITE MATERIALS AT THE MICRO LEVEL (SGI IRIS VERSION)

    NASA Technical Reports Server (NTRS)

    Melis, M. E.

    1994-01-01

    A significant percentage of the time spent in a typical finite element analysis is taken up in the modeling and assignment of loads and constraints. This process not only requires the analyst to be well-versed in the art of finite element modeling, but also demands familiarity with some sort of preprocessing software in order to complete the task expediently. COMGEN (COmposite Model GENerator) is an interactive FORTRAN program which can be used to create a wide variety of finite element models of continuous fiber composite materials at the micro level. It quickly generates batch or "session files" to be submitted to the finite element pre- and post-processor program PATRAN (PDA Engineering, Costa Mesa, CA). In modeling a composite material, COMGEN assumes that its constituents can be represented by a "unit cell" of a fiber surrounded by matrix material. Two basic cell types are available. The first is a square packing arrangement where the fiber is positioned in the center of a square matrix cell. The second type, hexagonal packing, has the fiber centered in a hexagonal matrix cell. Different models can be created using combinations of square and hexagonal packing schemes. Variations include two- and three-dimensional cases, models with a fiber-matrix interface, and different constructions of unit cells. User inputs include the fiber diameter and the percent fiber volume of the composite to be analyzed. In addition, various mesh densities, boundary conditions, and loads can be assigned to the models within COMGEN. The PATRAN program then uses a COMGEN session file to generate finite element models and their associated loads, which can then be translated to virtually any finite element analysis code such as NASTRAN or MARC. COMGEN is written in FORTRAN 77 and has been implemented on DEC VAX series computers under VMS and SGI IRIS series workstations under IRIX. If the user has the PATRAN package available, the output can be graphically displayed. Without PATRAN, the output is tabular. The VAX VMS version is available on a 5.25 inch 360K MS-DOS format diskette (standard distribution media) or a 9-track 1600 BPI DEC VAX FILES-11 format magnetic tape, and it requires about 124K of main memory. The standard distribution media for the IRIS version is a .25 inch streaming magnetic tape cartridge in UNIX tar format. The memory requirement for the IRIS version is 627K. COMGEN was developed in 1990. DEC, VAX and VMS are trademarks of Digital Equipment Corporation. PATRAN is a registered trademark of PDA Engineering. SGI IRIS and IRIX are trademarks of Silicon Graphics, Inc. MS-DOS is a registered trademark of Microsoft Corporation. UNIX is a registered trademark of AT&T.
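
    For the square-packing unit cell, the fiber volume fraction ties the cell size to the fiber diameter: Vf = (pi/4)(d/s)^2 for a square cell of side s; hexagonal packing replaces the square's area by that of a regular hexagon. A small sketch of this geometry calculation (the function names are illustrative, not COMGEN's):

      import math

      def square_cell_side(fiber_diameter, fiber_volume_fraction):
          # Vf = (pi/4) d^2 / s^2  =>  s = d * sqrt(pi / (4 Vf))
          return fiber_diameter * math.sqrt(math.pi / (4.0 * fiber_volume_fraction))

      def hex_cell_side(fiber_diameter, fiber_volume_fraction):
          # A regular hexagon of side a has area (3*sqrt(3)/2) a^2.
          area_fiber = math.pi * fiber_diameter**2 / 4.0
          return math.sqrt(area_fiber / fiber_volume_fraction
                           / (3.0 * math.sqrt(3.0) / 2.0))

      # e.g. an 8-micron fiber at 60 percent fiber volume:
      print(square_cell_side(0.008, 0.60), hex_cell_side(0.008, 0.60))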

  18. On distribution reduction and algorithm implementation in inconsistent ordered information systems.

    PubMed

    Zhang, Yanqin

    2014-01-01

    As one part of our work on ordered information systems, distribution reduction is studied in inconsistent ordered information systems (OISs). Some important properties of distribution reduction are studied and discussed. The dominance matrix is restated for reduction acquisition in dominance-relation based information systems. A matrix algorithm for distribution reduction acquisition is presented step by step, and a program implementing the algorithm is developed. The approach provides an effective tool both for theoretical research and for practical applications of ordered information systems. For more detailed and valid illustration, cases are employed to explain and verify the algorithm and the program, which shows the effectiveness of the algorithm in complicated information systems.
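
    In dominance-relation based systems, the dominance matrix simply records which objects are at least as good as which others on every ordered attribute. A generic sketch of its construction (an illustration of the concept, not the paper's exact reduction algorithm):

      import numpy as np

      def dominance_matrix(X):
          """D[i, j] = 1 iff object i is at least as good as object j on all attributes."""
          X = np.asarray(X)
          # Compare every pair of rows attribute-wise: (n, n, m) -> all over axis 2.
          return np.all(X[:, None, :] >= X[None, :, :], axis=2).astype(int)

      X = [[3, 2, 2],
           [2, 2, 1],
           [1, 3, 2]]
      print(dominance_matrix(X))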

  19. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    NASA Technical Reports Server (NTRS)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

    We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems, and its memory and timing requirements are compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640, and the linear-scaling memory and CPU requirements of the CEM are demonstrated. We show that the CPU requirements of the CEM and CG-DMS are similar for calculations with comparable accuracy.
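
    The core of the CEM is expanding a matrix function in Chebyshev polynomials via the recurrence T_{k+1}(H) = 2H T_k(H) - T_{k-1}(H), so that only matrix products are needed (sparse ones, in linear-scaling codes). A dense NumPy sketch under assumed parameters (spectrum scaled to [-1, 1], Fermi function as the target; not the paper's tight-binding implementation):

      import numpy as np

      def chebyshev_coeffs(f, order):
          # Chebyshev coefficients of f on [-1, 1] from cosine-spaced nodes.
          k = np.arange(order)
          x = np.cos(np.pi * (k + 0.5) / order)
          fx = f(x)
          c = 2.0 / order * np.array(
              [np.sum(fx * np.cos(np.pi * j * (k + 0.5) / order))
               for j in range(order)])
          c[0] /= 2.0
          return c

      def matrix_function(H, f, order=200):
          # Evaluate f(H) by the recurrence; H must have spectrum in [-1, 1].
          c = chebyshev_coeffs(f, order)
          I = np.eye(H.shape[0])
          T_prev, T_cur = I, H
          F = c[0] * I + c[1] * H
          for j in range(2, order):
              T_prev, T_cur = T_cur, 2.0 * H @ T_cur - T_prev
              F += c[j] * T_cur
          return F

      H = np.diag(np.linspace(-0.9, 0.9, 6))            # toy "Hamiltonian"
      fermi = lambda x, mu=0.0, beta=20.0: 1.0 / (1.0 + np.exp(beta * (x - mu)))
      P = matrix_function(H, fermi)                     # approximate density matrix
      print(np.allclose(np.diag(P), fermi(np.diag(H)), atol=1e-8))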

  20. Effective correlator for RadioAstron project

    NASA Astrophysics Data System (ADS)

    Sergeev, Sergey

    This paper presents the implementation of a software FX correlator for Very Long Baseline Interferometry, adapted for the RadioAstron project. The correlator is implemented on heterogeneous computing systems using graphics accelerators, and it is shown that the graphics hardware is highly efficient for the interferometry task. The host processor of the heterogeneous computing system forms the data flow for the graphics accelerators, whose number corresponds to the number of frequency channels; for the RadioAstron project there are seven such channels. Each accelerator computes the correlation matrix for all baselines of a single frequency channel. The initial data are converted to floating-point format and corrected with the corresponding delay function, and the entire correlation matrix is computed simultaneously using a sliding Fourier transform. Thanks to the good match between the problem and the architecture of graphics accelerators, a single Kepler-platform processor achieves the performance that a four-node Intel cluster delivers on this task. The task scales successfully not only to a large number of graphics accelerators, but also to a large number of nodes with multiple accelerators.
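
    The kernel of an FX correlator is: Fourier-transform each station's stream in short windows (the "F" step), then for every baseline multiply one spectrum by the conjugate of the other and time-average (the "X" step). A minimal NumPy sketch of this step for a few stations (window length, station count, and test signal are illustrative assumptions):

      import numpy as np

      def fx_correlate(streams, nfft=256):
          """streams: (n_stations, n_samples). Returns (n_st, n_st, nfft) visibilities."""
          n_st, n_samp = streams.shape
          n_win = n_samp // nfft
          windows = streams[:, :n_win * nfft].reshape(n_st, n_win, nfft)
          spectra = np.fft.fft(windows, axis=2)            # the "F" step
          # The "X" step: V[a, b, f] = time average of S_a(f) * conj(S_b(f)).
          return np.einsum('awf,bwf->abf', spectra, spectra.conj()) / n_win

      rng = np.random.default_rng(0)
      sky = rng.standard_normal(4096)                      # common correlated signal
      streams = np.stack([sky + 0.5 * rng.standard_normal(4096) for _ in range(3)])
      print(fx_correlate(streams).shape)                   # (3, 3, 256)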

  1. A new implementation of the CMRH method for solving dense linear systems

    NASA Astrophysics Data System (ADS)

    Heyouni, M.; Sadok, H.

    2008-04-01

    The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only method with long-term recurrence which does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.

  2. Preparing data for analysis using microsoft Excel.

    PubMed

    Elliott, Alan C; Hynan, Linda S; Reisch, Joan S; Smith, Janet P

    2006-09-01

    A critical component essential to good research is the accurate and efficient collection and preparation of data for analysis. Most medical researchers have little or no training in data management, often causing not only excessive time spent cleaning data but also a risk that the data set contains collection or recording errors. The implementation of simple guidelines based on techniques used by professional data management teams will save researchers time and money and result in a data set better suited to answer research questions. Because Microsoft Excel is often used by researchers to collect data, specific techniques that can be implemented in Excel are presented.

  3. Implementing dense linear algebra algorithms using multitasking on the CRAY X-MP-4 (or approaching the gigaflop)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, J.J.; Hewitt, T.

    1985-08-01

    This note describes some experiments on simple, dense linear algebra algorithms. These experiments show that the CRAY X-MP is capable of small-grain multitasking arising from standard implementations of LU and Cholesky decomposition. The implementation described here provides the "fastest" execution rate for LU decomposition, 718 MFLOPS for a matrix of order 1000.

  4. Efficient multitasking of Choleski matrix factorization on CRAY supercomputers

    NASA Technical Reports Server (NTRS)

    Overman, Andrea L.; Poole, Eugene L.

    1991-01-01

    A Choleski method is described and used to solve linear systems of equations that arise in large scale structural analysis. The method uses a novel variable-band storage scheme and is structured to exploit fast local memory caches while minimizing data access delays between main memory and vector registers. Several parallel implementations of this method are described for the CRAY-2 and CRAY Y-MP computers demonstrating the use of microtasking and autotasking directives. A portable parallel language, FORCE, is used for comparison with the microtasked and autotasked implementations. Results are presented comparing the matrix factorization times for three representative structural analysis problems from runs made in both dedicated and multi-user modes on both computers. CPU and wall clock timings are given for the parallel implementations and are compared to single processor timings of the same algorithm.
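
    A variable-band Choleski solver is inseparable from its storage scheme. As a rough illustration of the banded idea (SciPy's fixed-band storage, not the paper's variable-band scheme), a symmetric positive-definite band matrix can be stored by diagonals, factored, and solved:

      import numpy as np
      from scipy.linalg import cholesky_banded, cho_solve_banded

      # Tridiagonal SPD matrix (2 on the diagonal, -1 off-diagonal), stored in
      # upper band form: row 0 holds the superdiagonal, row 1 the main diagonal.
      n = 6
      ab = np.zeros((2, n))
      ab[0, 1:] = -1.0
      ab[1, :] = 2.0

      c = cholesky_banded(ab)                # banded Cholesky factor
      b = np.ones(n)
      x = cho_solve_banded((c, False), b)    # solve A x = b in band form

      A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
           + np.diag(np.full(n - 1, -1.0), -1))
      print(np.allclose(A @ x, b))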

  5. Microwave-assisted Extraction of Rare Earth Elements from Petroleum Refining Catalysts and Ambient Fine Aerosols Prior to Inductively Coupled Plasma - Mass Spectrometry

    NASA Technical Reports Server (NTRS)

    Mittlefehldt, David W.; Kulkarni, Pranav; Chellam, Shankar

    2006-01-01

    In the absence of a certified reference material, a robust microwave-assisted acid digestion procedure followed by inductively coupled plasma - mass spectrometry (ICP-MS) was developed to quantify rare earth elements (REEs) in fluidized-bed catalytic cracking (FCC) catalysts and atmospheric fine particulate matter (PM2.5). High-temperature (200 C), high-pressure (200 psig) acid digestion (HNO3, HF, and H3BO3) with a 20-minute dwell time effectively solubilized REEs from six fresh catalysts, a spent catalyst, and PM2.5. This method was also employed to measure 27 non-REEs including Na, Mg, Al, Si, K, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, As, Se, Rb, Sr, Zr, Mo, Cd, Cs, Ba, Pb, and U. Complete extraction of several REEs (Y, La, Ce, Pr, Nd, Tb, Dy, and Er) required HF, indicating that they were closely associated with the aluminosilicate structure of the zeolite FCC catalysts. Internal standardization using 115In quantitatively corrected non-spectral interferences in the catalyst digestate matrix. Inter-laboratory comparison using ICP-optical emission spectroscopy (ICP-OES) and instrumental neutron activation analysis (INAA) demonstrated the applicability of the newly developed analytical method for accurate analysis of REEs in FCC catalysts. The method developed for FCC catalysts was also successfully implemented to measure trace to ultra-trace concentrations of La, Ce, Pr, Nd, Sm, Gd, Eu, and Dy in ambient PM2.5 in an industrial area of Houston, TX.

  6. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
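
    The batch projection idea is compact: if the kernel matrix factors as K = U L U^T, the training coordinates are X = L^{1/2} U^T, and a new sample with kernel vector k gets coordinates L^{-1/2} U^T k. A minimal NumPy sketch of these standard relations (the paper's incremental bookkeeping is omitted):

      import numpy as np

      def npt_coordinates(K):
          """Explicit RKHS coordinates of training samples from kernel matrix K."""
          lam, U = np.linalg.eigh(K)
          keep = lam > 1e-10                   # drop numerically null directions
          lam, U = lam[keep], U[:, keep]
          X = np.sqrt(lam)[:, None] * U.T      # columns are sample coordinates
          return X, U, lam

      def project_new(k_vec, U, lam):
          """Coordinates of a new sample from its kernel vector vs. training data."""
          return (U.T @ k_vec) / np.sqrt(lam)

      rng = np.random.default_rng(0)
      Z = rng.standard_normal((5, 3))
      K = Z @ Z.T                              # a valid (linear) kernel matrix
      X, U, lam = npt_coordinates(K)
      print(np.allclose(X.T @ X, K))           # inner products reproduce K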

  7. Experimental study on cesium immobilization in struvite structures.

    PubMed

    Wagh, Arun S; Sayenko, S Y; Shkuropatenko, V A; Tarasov, R V; Dykiy, M P; Svitlychniy, Y O; Virych, V D; Ulybkina, Е А

    2016-01-25

    Ceramicrete, a chemically bonded phosphate ceramic, was developed for nuclear waste immobilization and nuclear radiation shielding. Ceramicrete products are fabricated by an acid-base reaction between magnesium oxide and mono potassium phosphate that has a struvite-K mineral structure. In this study, we demonstrate that this crystalline structure is ideal for incorporating radioactive Cs into a Ceramicrete matrix. This is accomplished by partially replacing K by Cs in the struvite-K structure, thus forming struvite-(K, Cs) mineral. X-ray diffraction and thermo-gravimetric analyses are used to confirm such a replacement. The resulting product is non-leachable and stable at high temperatures, and hence it is an ideal matrix for immobilizing Cs found in high-activity nuclear waste streams. The product can also be used for immobilizing secondary waste streams generated during glass vitrification of spent fuel, or the method described in this article can be used as a pretreatment method during glass vitrification of high level radioactive waste streams. Furthermore, it suggests a method of producing safe commercial radioactive Cs sources. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Predicting thermo-mechanical behaviour of high minor actinide content composite oxide fuel in a dedicated transmutation facility

    NASA Astrophysics Data System (ADS)

    Lemehov, S. E.; Sobolev, V. P.; Verwerft, M.

    2011-09-01

    The European Facility for Industrial Transmutation (EFIT) of the minor actinides (MA), from LWR spent fuel is being developed in the integrated project EUROTRANS within the 6th Framework Program of EURATOM. Two composite uranium-free fuel systems, containing a large fraction of MA, are proposed as the main candidates: a CERCER with magnesia matrix hosting (Pu,MA)O2-x particles, and a CERMET with metallic molybdenum matrix. The long-term thermal and mechanical behaviour of the fuel under the expected EFIT operating conditions is one of the critical issues in the core design. To make a reliable prediction of long-term thermo-mechanical behaviour of the hottest fuel rods in the lead-cooled version of EFIT with thermal power of 400 MW, different fuel performance codes have been used. This study describes the main results of modelling the thermo-mechanical behaviour of the hottest CERCER fuel rods with the fuel performance code MACROS which indicate that the CERCER fuel residence time can safely reach at least 4-5 effective full power years.

  9. Attempt to model laboratory-scale diffusion and retardation data.

    PubMed

    Hölttä, P; Siitari-Kauppi, M; Hakanen, M; Tukiainen, V

    2001-02-01

    Different approaches for measuring the interaction between radionuclides and the rock matrix are needed to test the compatibility of experimental retardation parameters and the transport models used in assessing the safety of underground repositories for spent nuclear fuel. In this work, the retardation of sodium, calcium and strontium was studied on mica gneiss and on unaltered, moderately altered and strongly altered tonalite using the dynamic fracture column method. In-diffusion of calcium into rock cubes was determined to predict retardation in the columns. In-diffusion of calcium into moderately and strongly altered tonalite was interpreted using the numerical code FTRANS. The code was able to interpret the in-diffusion of weakly sorbing calcium into the saturated porous matrix. Elution curves of calcium for the moderately and strongly altered tonalite fracture columns were explained adequately using the FTRANS code and parameters obtained from the in-diffusion calculations. In this paper, mass distribution ratio values of sodium, calcium and strontium for intact rock are compared to values previously obtained for crushed rock from batch and crushed rock column experiments. Kd values obtained from fracture column experiments were one order of magnitude lower than Kd values from batch experiments.

  10. Principal Component Geostatistical Approach for large-dimensional inverse problems

    PubMed Central

    Kitanidis, P K; Lee, J

    2014-01-01

    The quasi-linear geostatistical approach is for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, for its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for the determination of the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m2n, though there are methods to reduce the computational cost. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free in terms of the Jacobian matrix and improves the scalability of the geostatistical inverse problem. For each iteration, it is required to perform K runs of the forward problem, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of implementation of the inverse procedure scales roughly linearly with m instead of m2 as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
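
    "Matrix-free" here means the Jacobian J of the forward model h(s) is never formed; products Jv are obtained from extra forward runs, for example by finite differences. A minimal sketch of that building block (a toy forward model; the geostatistical machinery around it is omitted):

      import numpy as np

      def jacobian_vector_product(h, s, v, eps=1e-6):
          """Approximate J(s) @ v with one extra forward-model run."""
          return (h(s + eps * v) - h(s)) / eps

      # Toy forward model: n = 2 observations of m = 3 unknowns.
      h = lambda s: np.array([np.sum(s**2), s[0] * s[1]])
      s = np.array([1.0, 2.0, 3.0])
      v = np.array([0.1, -0.2, 0.05])
      J = np.array([2 * s, [s[1], s[0], 0.0]])   # analytic Jacobian for comparison
      print(jacobian_vector_product(h, s, v), J @ v)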

  11. Review and Implementation of Technology for Solid Radioactive Waste Volume Reduction

    DTIC Science & Technology

    1999-10-15

    were shifted to Project 1.1 for spent nuclear fuel cask development to accelerate that project. Those funds should be repaid to Project 1.3 in the... transported between the shipyards such as Nerpa, and other intermediate storage sites such as Gremikha and Andreeva Bay. At these sites the largest...waste source and allow pretreatment unit operations using commercially available technologies of contaminant assaying, cutting/shearing, sorting

  12. Start-up and incremental practice expenses for behavior change interventions in primary care.

    PubMed

    Dodoo, Martey S; Krist, Alex H; Cifuentes, Maribel; Green, Larry A

    2008-11-01

    If behavior-change services are to be offered routinely in primary care practices, providers must be appropriately compensated. Estimating what is spent by practices in providing such services is a critical component of establishing appropriate payment and was the objective of this study. In-practice expenditure data were collected for ten different interventions, using a standardized instrument in 29 practices nested in ten practice-based research networks across the U.S. during 2006-2007. The data were analyzed using standard templates to create credible estimates of the expenses incurred for both the start-up period and the implementation phase of the interventions. Average monthly start-up expenses were $1860 per practice (SE=$455). Most start-up expenditures were for staff training. Average monthly incremental costs were $58 ($15 for provision of direct care [SE=$5]; $43 in overhead [SE=$17]) per patient participant. The bulk of the intervention expenditures was spent on the recruitment and screening of patient participants. Primary care practices must spend money to address their patients' unhealthy behaviors--at least $1860 to initiate systematic approaches and $58 monthly per participating patient to implement the approaches routinely. Until primary care payment systems incorporate these expenses, it is unlikely that these services will be readily available.

  13. Management of spent nuclear fuel on the Oak Ridge Reservation, Oak Ridge, Tennessee: Environmental assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-02-01

    On June 1, 1995, DOE issued a Record of Decision [60 Federal Register 28680] for the Department-wide management of spent nuclear fuel (SNF); regionalized storage of SNF by fuel type was selected as the preferred alternative. The proposed action evaluated in this environmental assessment is the management of SNF on the Oak Ridge Reservation (ORR) to implement this preferred alternative of regional storage. SNF would be retrieved from storage, transferred to a hot cell if segregation by fuel type and/or repackaging is required, loaded into casks, and shipped to off-site storage. The proposed action would also include construction and operation of a dry cask SNF storage facility on ORR, in case of inadequate SNF storage. Action is needed to enable DOE to continue operation of the High Flux Isotope Reactor, which generates SNF. This report addresses environmental impacts.

  14. Control of a laser inertial confinement fusion-fission power plant

    DOEpatents

    Moses, Edward I.; Latkowski, Jeffery F.; Kramer, Kevin J.

    2015-10-27

    A laser inertial-confinement fusion-fission energy power plant is described. The fusion-fission hybrid system uses inertial confinement fusion to produce neutrons from a fusion reaction of deuterium and tritium. The fusion neutrons drive a sub-critical blanket of fissile or fertile fuel. A coolant circulated through the fuel extracts heat from the fuel that is used to generate electricity. The inertial confinement fusion reaction can be implemented using central hot spot or fast ignition fusion, and direct or indirect drive. The fusion neutrons result in ultra-deep burn-up of the fuel in the fission blanket, thus enabling the burning of nuclear waste. Fuels include depleted uranium, natural uranium, enriched uranium, spent nuclear fuel, thorium, and weapons grade plutonium. LIFE engines can meet worldwide electricity needs in a safe and sustainable manner, while drastically shrinking the highly undesirable stockpiles of depleted uranium, spent nuclear fuel and excess weapons materials.

  15. Evaluation of Neutron Poison Materials for DOE SNF Disposal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinson, D.W.; Caskey, G.R. Jr.; Sindelar, R.L.

    1998-09-01

    Aluminum-based spent nuclear fuel (Al-SNF) from foreign and domestic research reactors is being consolidated at the Savannah River Site (SRS) for ultimate disposal in the Mined Geologic Disposal System (MGDS). Most of the aluminum-based fuel material contains highly enriched uranium (HEU) (more than 20 percent 235U), which challenges the preclusion of criticality events for disposal periods exceeding 10,000 years. Recent criticality analyses have shown that the addition of neutron-absorbing materials (poisons) is needed in waste packages containing DOE SNF canisters fully loaded with Al-SNF under flooded and degraded configurations to demonstrate compliance with the requirement that Keff remain less than 0.95. Compatibility of the poison matrix materials and the Al-SNF, including their relative degradation rates and solubilities, is important for maintaining criticality control. An assessment of the viability of poison and matrix materials has been conducted, and an experimental corrosion program has been initiated to provide data on degradation rates of poison and matrix materials and Al-SNF materials under repository-relevant vapor and aqueous environments. Initial testing includes Al6061, Type 316L stainless steel, and A516Gr55 in synthesized J-13 water vapor at 50 degrees C, 100 degrees C, and 200 degrees C and in condensate water vapor at 100 degrees C. Preliminary results are presented herein.

  16. Research on Spent Fuel Storage and Transportation in CRIEPI (Part 2 Concrete Cask Storage)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koji Shirai; Jyunichi Tani; Taku Arai

    2008-10-01

    Concrete cask storage has been implemented around the world. At a later stage of the storage period, the containment of the canister may deteriorate due to stress corrosion cracking (SCC) in a salty air environment. Stainless steels with high resistance to SCC have been tested and compared with normal stainless steel. Taking into account the limited time during which the environment has the humidity and temperature range conducive to SCC, the high-resistance stainless steels should survive without SCC damage. In addition, the adhesion of salt from the salty environment onto the canister surface is further limited by the canister temperature and by the angle of the canister surface relative to the salty air flow in the concrete cask. An optional countermeasure against SCC in a salty air environment has also been studied: devices consisting of various water trays to trap salt particles from the salty air were designed to be attached at the air inlet for natural cooling of the cask storage building, and their efficiency for trapping salt particles was evaluated. Inspection of the canister surface was carried out using an optical camera inserted from the air outlet through the annulus of a concrete cask that has stored real spent fuel for more than 15 years. The camera images revealed no gross degradation on the surface of the canister. The seismic response of a full-scale concrete cask with simulated spent fuel assemblies has also been demonstrated. The cask did not tip over, but moved laterally under the earthquake motion, and the stresses generated on the surface of the spent fuel assemblies during the earthquake remained within the elastic region.

  17. Fidelity to the housing first model and effectiveness of permanent supported housing programs in California.

    PubMed

    Gilmer, Todd P; Stefancic, Ana; Katz, Marian L; Sklar, Marisa; Tsemberis, Sam; Palinkas, Lawrence A

    2014-11-01

    Permanent supported housing programs are being implemented throughout the United States. This study examined the relationship between fidelity to the Housing First model and residential outcomes among clients of full service partnerships (FSPs) in California. This study had a mixed-methods design. Quantitative administrative and survey data were used to describe FSP practices and to examine the association between fidelity to Housing First and residential outcomes in the year before and after enrollment of 6,584 FSP clients in 86 programs. Focus groups at 20 FSPs provided qualitative data to enhance the understanding of these findings with actual accounts of housing-related experiences in high- and low-fidelity programs. Prior to enrollment, the mean days of homelessness were greater at high-fidelity than at low-fidelity FSPs (101 versus 46 days). After adjustment for individual characteristics, the analysis found that days spent homeless after enrollment declined by 87 at high-fidelity programs and by 34 at low-fidelity programs. After adjustment for days spent homeless before enrollment, days spent homeless after enrollment declined by 63 at high-fidelity programs and by 53 at low-fidelity programs. After enrollment, clients at high-fidelity programs spent more than 60 additional days in apartments than clients at low-fidelity programs. Differences were found between high- and low-fidelity FSPs in client choice in housing and in how much clients' goals were considered in housing placement. Programs with greater fidelity to the Housing First model enrolled clients with longer histories of homelessness and placed most of them in apartments.

  18. Generation of PHB from Spent Sulfite Liquor Using Halophilic Microorganisms.

    PubMed

    Weissgram, Michaela; Gstöttner, Janina; Lorantfy, Bettina; Tenhaken, Raimund; Herwig, Christoph; Weber, Hedda K

    2015-06-08

    Halophilic microorganisms thrive at elevated concentrations of sodium chloride up to saturation and are capable of growing on a wide variety of carbon sources such as various organic acids, hexose and also pentose sugars. Hence, the biotechnological application of these microorganisms can cover many aspects, such as the treatment of hypersaline waste streams of different origin. Because the high osmotic pressure of hypersaline environments reduces the risk of contamination, the capacity for cost-effective non-sterile cultivation makes extremely halophilic microorganisms potentially valuable organisms for biotechnological applications. In this contribution, a stepwise screening approach, employing design of experiments (DoE) on model media and subsequently using industrial waste as substrate, has been implemented to investigate the applicability of halophiles to generate PHB from the industrial waste stream spent sulfite liquor (SSL). The production of PHB on model media as well as on dilutions of industrial substrate in a complex medium was screened for by fluorescence microscopy using Nile Blue staining. Screening was used to investigate the ability of halophilic microorganisms to withstand the inhibiting substances of the waste stream without negatively affecting PHB production. It could be shown that neither single inhibiting substances nor a mixture thereof inhibited growth in the investigated range, thus leaving the question of the inhibiting mechanisms open. However, it could be demonstrated that some haloarchaea and halophilic bacteria are able to produce PHB when cultivated on 3.3% w/w dry matter spent sulfite liquor, whereas H. halophila was even able to thrive on 6.6% w/w dry matter spent sulfite liquor and still produce PHB.

  19. Implementing a low-starch biscuit-free diet in zoo gorillas: the impact on behavior.

    PubMed

    Less, E H; Bergl, R; Ball, R; Dennis, P M; Kuhar, C W; Lavin, S R; Raghanti, M A; Wensvoort, J; Willis, M A; Lukas, K E

    2014-01-01

    In the wild, western lowland gorillas travel long distances while foraging and consume a diet high in fiber and low in caloric density. In contrast, gorillas in zoos typically consume a diet that is low in fiber and calorically dense. Some items commonly used in captive gorilla diets contain high levels of starch and sugars, which are present at low levels in the natural diet of gorillas. Diet items high in simple carbohydrates are associated with obesity and heart disease in humans. Typical captive gorilla diets may also encourage undesirable behaviors. In response to these issues, we tested the behavioral impact of a diet that was biscuit-free, had low caloric density, and which was higher in volume at five institutions. We hypothesized that this diet change would reduce abnormal behaviors such as regurgitation and reingestion (R/R), decrease time spent inactive, and increase time spent feeding. The biscuit-free diet significantly reduced (and in the case of one zoo eliminated) R/R and may have reduced hair-plucking behavior. However, an increase in coprophagy was observed in many individuals following the diet change. The experimental diet caused a general increase in time the gorillas spent feeding, but this increase did not occur across all institutions and varied by individual. Interestingly, the overall time gorillas spent inactive actually increased with this diet change. Future research will examine these behavioral changes in a greater number of individuals to determine if the results remain consistent with these preliminary findings. Additionally, future research will examine the physiological impact of this diet change. © 2014 Wiley Periodicals, Inc.

  1. Massively parallel sparse matrix function calculations with NTPoly

    NASA Astrophysics Data System (ADS)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.

  2. Matrix effect and recovery terminology issues in regulated drug bioanalysis.

    PubMed

    Huang, Yong; Shi, Robert; Gee, Winnie; Bonderud, Richard

    2012-02-01

    Understanding the meaning of the terms used in the bioanalytical method validation guidance is essential for practitioners to implement best practice. However, terms that have several meanings or that have different interpretations exist within bioanalysis, and this may give rise to differing practices. In this perspective we discuss an important but often confusing term - 'matrix effect (ME)' - in regulated drug bioanalysis. The ME can be interpreted as either the ionization change or the measurement bias of the method caused by the nonanalyte matrix. The ME definition dilemma makes its evaluation challenging. The matrix factor is currently used as a standard method for evaluating ionization changes caused by the matrix in MS-based methods. Standard additions to pre-extraction samples have been suggested to evaluate the overall effects of a matrix from different sources on the analytical system, because they cover both ionization variation and extraction recovery variation. We also provide our personal views on the term 'recovery'.
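
    Numerically, the matrix factor comparison mentioned here is simple arithmetic: MF = analyte response in the presence of matrix divided by the response in neat solution, often normalized by the internal standard's MF. A small sketch of that calculation (the peak areas are illustrative numbers):

      def matrix_factor(peak_area_in_matrix, peak_area_in_solvent):
          """MF = 1 means no matrix effect; < 1 suppression; > 1 enhancement."""
          return peak_area_in_matrix / peak_area_in_solvent

      analyte_mf = matrix_factor(8.2e5, 1.0e6)   # 0.82 -> ion suppression
      is_mf = matrix_factor(4.1e5, 5.0e5)        # internal standard, also 0.82
      print(analyte_mf / is_mf)                  # IS-normalized MF = 1.0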

  3. Clustering Tree-structured Data on Manifold

    PubMed Central

    Lu, Na; Miao, Hongyu

    2016-01-01

    Tree-structured data usually contain both topological and geometrical information, and are necessarily considered on a manifold instead of a Euclidean space for appropriate data parameterization and analysis. In this study, we propose a novel tree-structured data parameterization, called the Topology-Attribute matrix (T-A matrix), so the data clustering task can be conducted on a matrix manifold. We incorporate the structure constraints embedded in the data into the non-negative matrix factorization method to determine meta-trees from the T-A matrix, and the signature vector of each single tree can then be extracted by meta-tree decomposition. The meta-tree space turns out to be a cone space, in which we explore the distance metric and implement the clustering algorithm based on concepts like the Fréchet mean. Finally, the T-A matrix based clustering (TAMBAC) framework is evaluated and compared using both simulated data and real retinal images to illustrate its efficiency and accuracy. PMID:26660696
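
    The meta-tree step rests on non-negative matrix factorization: a non-negative data matrix V (here, stacked T-A representations) is approximated as V = W H with W, H >= 0, the rows of H acting as basis "meta-trees" and the rows of W as signature weights. A generic scikit-learn sketch (TAMBAC's structure constraints are not included):

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      V = rng.random((20, 12))        # rows: trees; columns: flattened T-A entries

      model = NMF(n_components=3, init='nndsvda', max_iter=500, random_state=0)
      W = model.fit_transform(V)      # per-tree weights (signature vectors)
      H = model.components_           # basis "meta-trees"
      print(W.shape, H.shape, np.linalg.norm(V - W @ H))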

  4. Biological Matrix Effects in Quantitative Tandem Mass Spectrometry-Based Analytical Methods: Advancing Biomonitoring

    PubMed Central

    Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd

    2015-01-01

    The ability to quantify levels of target analytes in biological samples accurately and precisely, in biomonitoring, involves the use of highly sensitive and selective instrumentation such as tandem mass spectrometers and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as the chromatographic response of target analytes, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585

  5. Sparse Matrices in MATLAB: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Moler, Cleve; Schreiber, Robert

    1992-01-01

    The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
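
    The same design survives in modern numerical environments; SciPy's sparse matrices, for instance, also store only the nonzeros and compute with cost proportional to them (a SciPy stand-in for illustration, not the MATLAB implementation itself):

      import numpy as np
      from scipy import sparse

      A = sparse.random(1000, 1000, density=0.001, format='csc', random_state=0)
      x = np.ones(1000)
      y = A @ x                      # time proportional to the ~1000 nonzeros
      print(A.nnz, A.data.nbytes)    # storage proportional to the nonzero count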

  6. NASA's high-temperature engine materials program for civil aeronautics

    NASA Technical Reports Server (NTRS)

    Gray, Hugh R.; Ginty, Carol A.

    1992-01-01

    The Advanced High-Temperature Engine Materials Technology Program is described in terms of its research initiatives and its goal of developing propulsion systems for civil aeronautics with low levels of noise, pollution, and fuel consumption. The program emphasizes the analysis and implementation of structural materials such as polymer-matrix composites in fans, casings, and engine-control systems. Also investigated in the program are intermetallic- and metal-matrix composites for uses in compressors and turbine disks as well as ceramic-matrix composites for extremely high-temperature applications such as turbine vanes.

  7. Image Matrix Processor for Volumetric Computations Final Report CRADA No. TSB-1148-95

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberson, G. Patrick; Browne, Jolyon

    The development of an Image Matrix Processor (IMP) was proposed that would provide an economical means to perform rapid ray-tracing processes on volume "Giga Voxel" data sets. This was a multi-phased project. The objective of the first phase of the IMP project was to evaluate the practicality of implementing a workstation-based Image Matrix Processor for use in volumetric reconstruction and rendering using hardware simulation techniques. Additionally, ARACOR and LLNL worked together to identify and pursue further funding sources to complete a second phase of this project.

  8. Quantum Support Vector Machine for Big Data Classification

    NASA Astrophysics Data System (ADS)

    Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth

    2014-09-01

    Supervised machine learning is the classification of new data based on already classified training examples. In this work, we show that the support vector machine, an optimized binary classifier, can be implemented on a quantum computer, with complexity logarithmic in the size of the vectors and the number of training examples. In cases where classical sampling algorithms require polynomial time, an exponential speedup is obtained. At the core of this quantum big data algorithm is a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel) matrix.
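
    The classical counterpart of the core step is solving a linear system in the kernel matrix, as in the least-squares SVM formulation, where [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]. A dense NumPy sketch of that classical step, which the quantum algorithm accelerates through matrix exponentiation and inversion (toy data and a linear kernel are illustrative assumptions):

      import numpy as np

      def ls_svm_train(X, y, gamma=10.0):
          """Least-squares SVM: solve the bordered linear system in the kernel matrix."""
          n = len(y)
          K = X @ X.T                              # linear kernel, for simplicity
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = K + np.eye(n) / gamma
          rhs = np.concatenate([[0.0], y])
          b, *alpha = np.linalg.solve(A, rhs)
          return b, np.array(alpha)

      X = np.array([[1.0, 1.0], [1.2, 0.8], [-1.0, -1.1], [-0.9, -1.0]])
      y = np.array([1.0, 1.0, -1.0, -1.0])
      b, alpha = ls_svm_train(X, y)
      decision = X @ X.T @ alpha + b               # sign gives the predicted class
      print(np.sign(decision))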

  9. Thorium-based mixed oxide fuel in a pressurized water reactor: A feasibility analysis with MCNP

    NASA Astrophysics Data System (ADS)

    Tucker, Lucas Powelson

    This dissertation investigates techniques for spent fuel monitoring and assesses the feasibility of using a thorium-based mixed oxide (ThMOX) fuel in a conventional pressurized water reactor for plutonium disposition. Both non-paralyzing and paralyzing dead-time calculations were performed for the Portable Spectroscopic Fast Neutron Probe (N-Probe), which can be used for spent fuel interrogation. Also, a Canberra 3He neutron detector's dead-time was estimated using a combination of subcritical assembly measurements and MCNP simulations. Next, a multitude of fission products were identified as candidates for burnup and spent fuel analysis of irradiated mixed oxide fuel. The best isotopes for these applications were identified by investigating half-life, photon energy, fission yield, branching ratios, production modes, thermal neutron absorption cross section, and fuel matrix diffusivity. 132I and 97Nb were identified as good candidates for MOX fuel on-line burnup analysis. In the second, and most important, part of this work, the feasibility of utilizing ThMOX fuel in a pressurized water reactor (PWR) was first examined under steady-state, beginning-of-life conditions. Using a three-dimensional MCNP model of a Westinghouse-type 17x17 PWR, several fuel compositions and configurations of a one-third ThMOX core were compared to a 100% UO2 core. A blanket-type arrangement of 5.5 wt% PuO2 was determined to be the best candidate for further analysis. Next, the safety of the ThMOX configuration was evaluated through three cycles of burnup using the following metrics: axial and radial nuclear hot channel factors, moderator and fuel temperature coefficients, delayed neutron fraction, and shutdown margin. Additionally, the performance of the ThMOX configuration was assessed by tracking cycle length, plutonium destroyed, and fission product poison concentration.

  10. Methods for design and evaluation of integrated hardware-software systems for concurrent computation

    NASA Technical Reports Server (NTRS)

    Pratt, T. W.

    1985-01-01

    Research activities and publications are briefly summarized. The major tasks reviewed are: (1) VAX implementation of the PISCES parallel programming environment; (2) Apollo workstation network implementation of the PISCES environment; (3) FLEX implementation of the PISCES environment; (4) sparse matrix iterative solver in PISCES Fortran; (5) image processing application of PISCES; and (6) a formal model of concurrent computation being developed.

  11. Science education reform in an elementary school: An investigation of collaboration and inquiry in a school with an emphasis on language arts and fine arts

    NASA Astrophysics Data System (ADS)

    Martini, Mariana

    This investigation was framed within the science education reform, which proposes to change the way science is taught and promotes the implementation of inquiry-based teaching approaches. The implementation of inquiry science teaching represents a move away from traditional didactic teaching styles, a transition that requires change in the assumptions underlying the philosophy of traditional science instruction. Another theme in the reform literature is the establishment of collaboration between teachers and researchers or scientists as a way to implement reform practices. Situated within this reform climate, this research aimed to investigate science education at an elementary school with a history of implementing reform ideas in the areas of language arts and fine arts. I employed an ethnographic methodology to examine the nature of a teacher-researcher relationship in the context of the school's culture and teachers' practices. The findings indicate that change was not pervasive. Reform ideas were implemented only in the areas of language arts and fine arts. Situated within a district that promoted an accountability climate, the school disregarded science education and opposed the use of constructivist-based pedagogies, and did not have a strong science program. Since science was not tested, teachers spent little (if any) time teaching science. All participants firmly perceived the existence of several barriers to the implementation of inquiry: (a) lack of time: teachers spent excessive time to prepare students for tests, (b) nature of science teaching: materials and set preparation, (c) lack of content knowledge, (d) lack of pedagogical content knowledge, and (e) lack of opportunities to develop professional knowledge. In spite of the barriers, the school had two assets: an outdoor facility and two enthusiastic teachers who were lead science teachers, in spite of the their lack of content and pedagogical science knowledge. Collaboration between the researcher and each teacher was developmental. Defining who we are and how we approach the work ahead played an important part in the relationship. It took time to build trust and change the modus operandi from a cooperation to a collaboration project. Despite the constraints faced, collaboration had a positive effect on us.

  12. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms

    PubMed Central

    Diamond, Alan; Nowotny, Thomas; Schmuker, Michael

    2016-01-01

    Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and “neuromorphic algorithms” are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimize host-device communication when designing and implementing networks for efficient neuromorphic computing. PMID:26778950

  13. Effectiveness and Safety of the Awakening and Breathing Coordination, Delirium Monitoring/Management, and Early Exercise/Mobility (ABCDE) Bundle

    PubMed Central

    Balas, Michele C.; Vasilevskis, Eduard E.; Olsen, Keith M.; Schmid, Kendra K.; Shostrom, Valerie; Cohen, Marlene Z.; Peitz, Gregory; Gannon, David E.; Sisson, Joseph; Sullivan, James; Stothert, Joseph C.; Lazure, Julie; Nuss, Suzanne L.; Jawa, Randeep S.; Freihaut, Frank; Ely, E. Wesley; Burke, William J.

    2014-01-01

    Objective The debilitating and persistent effects of intensive care unit (ICU)-acquired delirium and weakness warrant testing of prevention strategies. The purpose of this study was to evaluate the effectiveness and safety of implementing the Awakening and Breathing Coordination, Delirium monitoring/management, and Early exercise/mobility (ABCDE) bundle into everyday practice. Design Eighteen-month, prospective, cohort, before-after study conducted between November 2010 and May 2012. Setting Five adult ICUs, one step-down unit, and one oncology/hematology special care unit located in a 624-bed tertiary medical center. Patients Two hundred ninety-six patients (146 pre- and 150 post-bundle implementation), age ≥ 19 years, managed by the institutions’ medical or surgical critical care service. Interventions ABCDE bundle. Measurements For mechanically ventilated patients (n = 187), we examined the association between bundle implementation and ventilator-free days. For all patients, we used regression models to quantify the relationship between ABCDE bundle implementation and the prevalence/duration of delirium and coma, early mobilization, mortality, time to discharge, and change in residence. Safety outcomes and bundle adherence were monitored. Main Results Patients in the post-implementation period spent three more days breathing without mechanical assistance than did those in the pre-implementation period (median [IQR], 24 [7 to 26] vs. 21 [0 to 25]; p = 0.04). After adjusting for age, sex, severity of illness, comorbidity, and mechanical ventilation status, patients managed with the ABCDE bundle experienced a near halving of the odds of delirium (odds ratio [OR], 0.55; 95% confidence interval [CI], 0.33–0.93; p = 0.03) and increased odds of mobilizing out of bed at least once during an ICU stay (OR, 2.11; 95% CI, 1.29–3.45; p = 0.003). No significant differences were noted in self-extubation or reintubation rates. Conclusions Critically ill patients managed with the ABCDE bundle spent three more days breathing without assistance, experienced less delirium, and were more likely to be mobilized during their ICU stay than patients treated with usual care. PMID:24394627

  14. Extending the range of real time density matrix renormalization group simulations

    NASA Astrophysics Data System (ADS)

    Kennes, D. M.; Karrasch, C.

    2016-03-01

    We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which allow one to access larger time scales. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to 'combine' the Schrödinger and Heisenberg time evolutions of arbitrary pure states |ψ⟩ and operators A in the evaluation of ⟨A⟩ψ(t) = ⟨ψ|A(t)|ψ⟩. This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics ⟨A⟩ρ(t) = Tr[ρA(t)] induced by an initial density matrix ρ is straightforward. In the context of linear response (ground state or finite temperature T > 0) correlation functions, one can extend the simulation time by a factor of two by 'exploiting time translation invariance', which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently introduced disentangler succeeds in reducing the effort of time-dependent simulations at T > 0. Finally, we advocate the Python programming language as an elegant option for beginners to set up a DMRG code.
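
    The identity behind 'combining' the two pictures is ⟨ψ|A(t)|ψ⟩ = ⟨ψ(t/2)|A(t/2)|ψ(t/2)⟩ with ψ(t/2) = e^{-iHt/2}ψ and A(t/2) = e^{iHt/2} A e^{-iHt/2}, so state and operator each need be evolved only to t/2. An exact-diagonalization check of this identity on a random Hermitian H (a toy stand-in for the MPS setting):

      import numpy as np
      from scipy.linalg import expm

      rng = np.random.default_rng(0)
      d = 16
      H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
      H = (H + H.conj().T) / 2                      # Hermitian "Hamiltonian"
      A = rng.standard_normal((d, d))
      A = (A + A.T) / 2                             # Hermitian observable
      psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
      psi /= np.linalg.norm(psi)

      t = 1.3
      # Schroedinger picture: evolve the state for the full time t.
      psi_t = expm(-1j * H * t) @ psi
      full = np.conj(psi_t) @ (A @ psi_t)
      # 'Combined' picture: state and operator each evolved only to t/2.
      psi_h = expm(-1j * H * t / 2) @ psi
      A_h = expm(1j * H * t / 2) @ A @ expm(-1j * H * t / 2)
      combined = np.conj(psi_h) @ (A_h @ psi_h)
      print(np.allclose(full, combined))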

  15. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation.

    PubMed

    Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S

    2011-12-01

    Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information as well as their inverses. SNP marker information was simulated for a panel of 40K SNPs, with the number of genotyped animals up to 30,000. Matrix multiplication in the computation of the genomic relationship was by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30,000 animals would take a few hours. Matrices required to implement a unified approach can be computed efficiently. Optimizations can be either by modifications of existing code or by the use of efficient automatic optimizations provided by open source or third-party libraries. © 2011 Blackwell Verlag GmbH.
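
    A common construction of the genomic relationship matrix (VanRaden's first method) from a genotype matrix M coded 0/1/2 is G = ZZ'/(2 Σ p_j(1 - p_j)), with Z the genotype matrix centered by twice the allele frequencies; the dense product ZZ' is exactly the multiplication the study benchmarks. A NumPy sketch (simulated genotypes, illustrative sizes):

      import numpy as np

      def genomic_relationship(M):
          """M: (n_animals, n_snps) genotypes coded 0/1/2 copies of an allele."""
          p = M.mean(axis=0) / 2.0               # observed allele frequencies
          Z = M - 2.0 * p                        # center by expected allele count
          return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

      rng = np.random.default_rng(0)
      M = rng.integers(0, 3, size=(100, 5000)).astype(float)
      G = genomic_relationship(M)
      print(G.shape, np.allclose(G, G.T))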

  16. Comparing implementations of penalized weighted least-squares sinogram restoration

    PubMed Central

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick

    2010-01-01

    Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration is still significant in the standard-dose regime since it can still outperform standard approaches and it allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) A direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix inversion into smaller coupled problems and exploited sparseness to minimize matrix operations. For the conjugate-gradient approach, the authors exploited sparseness and preconditioned the problem to speed up convergence. Results: All methods produced qualitatively and quantitatively similar images as measured by resolution-variance tradeoffs and difference images. Despite the acceleration strategies, the direct matrix-inversion approach was found to be uncompetitive with iterative approaches, with a computational burden higher by an order of magnitude or more. The iterative conjugate-gradient approach, however, does appear promising, with computation times half that of the authors’ previous penalized-likelihood implementation. Conclusions: Iterative conjugate-gradient based PWLS sinogram restoration with careful matrix optimizations has computational advantages over direct matrix PWLS inversion and over penalized-likelihood sinogram restoration and can be considered a good alternative in standard-dose regimes. PMID:21158306
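
    In the quadratic PWLS setting, the restored sinogram solves a sparse symmetric positive-definite system: minimizing (y - x)'W(y - x) + beta x'Rx gives (W + beta R)x = Wy, which is precisely what a (preconditioned) conjugate-gradient solver handles. A small SciPy sketch with a first-difference roughness penalty, as an illustrative stand-in for the authors' implementations:

      import numpy as np
      from scipy import sparse
      from scipy.sparse.linalg import cg

      n = 200
      rng = np.random.default_rng(0)
      y = np.sin(np.linspace(0, 3 * np.pi, n)) + 0.1 * rng.standard_normal(n)

      W = sparse.diags(np.full(n, 4.0))           # statistical weights (~1/variance)
      D = sparse.diags([np.full(n - 1, -1.0), np.full(n - 1, 1.0)], [0, 1],
                       shape=(n - 1, n))
      R = D.T @ D                                 # first-difference roughness penalty
      beta = 5.0

      x, info = cg(W + beta * R, W @ y)           # solve (W + beta R) x = W y
      print(info == 0)                            # 0 means CG converged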

  17. A 3/D finite element approach for metal matrix composites based on micromechanical models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svobodnik, A.J.; Boehm, H.J.; Rammerstorfer, F.G.

    Based on analytical considerations by Dvorak and Bahei-El-Din, a 3/D finite element material law has been developed for the elastic-plastic analysis of unidirectional fiber-reinforced metal matrix composites. The material law described in this paper has been implemented in the finite element code ABAQUS via the user subroutine UMAT. A constitutive law is described under the assumption that the fibers are linear-elastic and the matrix is of a von Mises type with a Prager-Ziegler kinematic hardening rule. The uniaxial effective stress-strain relationship of the matrix in the plastic range is approximated by a Ramberg-Osgood law, a linear hardening rule or a nonhardening rule. Initial yield surfaces of the matrix material and of the fiber-reinforced composite are compared to show the effect of reinforcement. Implementation of this material law in a finite element program is shown. Furthermore, the efficiency of substepping schemes and stress corrections for the numerical integration of the elastic-plastic stress-strain relations for anisotropic materials is investigated. The results of uniaxial monotonic tests of a boron/aluminum composite are compared to finite element analyses based on micromechanical considerations. Furthermore, a complete 3/D analysis of a tensile test specimen made of a silicon-carbide/aluminum MMC and the analysis of an MMC inlet inserted in a homogeneous material are shown. 12 refs.

  18. A review on management of spent lithium ion batteries and strategy for resource recycling of all components from them.

    PubMed

    Zhang, Wenxuan; Xu, Chengjian; He, Wenzhi; Li, Guangming; Huang, Juwen

    2018-02-01

    The wide use of lithium ion batteries (LIBs) has produced large numbers of discarded LIBs, which have become a worldwide problem. In view of the deleterious effects of spent LIBs on the environment and the valuable materials they contain that can be reused, much effort in many countries has been made to manage waste LIBs, and many technologies have been developed to recycle waste LIBs and eliminate environmental risks. As a review article, this paper introduces the situation of waste LIB management in some developed countries and in China, and reviews separation technologies for electrode components and refining technologies for LiCoO2 and graphite. Based on the analysis of these recycling technologies and the structural and compositional characteristics of the whole LIB, this paper presents a recycling strategy for all components of obsolete LIBs, including discharge, dismantling and classification, separation of electrode components, and refining of LiCoO2/graphite. This paper is intended to provide a valuable reference for the management, scientific research, and industrial implementation of spent LIB recycling, to recover all valuable components and reduce environmental pollution, so as to realize a win-win situation of economic and environmental benefits.

  19. Strategic Plan for Standards-Based Reform. Report of Progress.

    ERIC Educational Resources Information Center

    Hawaii State Dept. of Education, Honolulu. Office of the Superintendent.

    This report summarizes the expectations, mission, guiding principles, standards, assessments, and time line for standards implementation for the state of Hawai'i. Implementation was scheduled for completion by August 2000. A technical reference matrix tracks the development of the assessment and accountability system. The completion of specific…

  20. I-NERI-2007-004-K, DEVELOPMENT AND CHARACTERIZATION OF NEW HIGH-LEVEL WASTE FORMS FOR ACHIEVING WASTE MINIMIZATION FROM PYROPROCESSING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.M. Frank

    Work described in this report represents the final-year activities for the 3-year International Nuclear Energy Research Initiative (I-NERI) project: Development and Characterization of New High-Level Waste Forms for Achieving Waste Minimization from Pyroprocessing. Used electrorefiner salt that contained actinide chlorides and was highly loaded with surrogate fission products was processed into three candidate waste forms. The first, a high-loaded ceramic waste form, is a variant of the CWF produced during the treatment of Experimental Breeder Reactor-II used fuel at the Idaho National Laboratory (INL). The two other waste forms were developed by researchers at the Korea Atomic Energy Research Institute (KAERI). These materials are based on a silica-alumina-phosphate matrix and a zinc/titanium oxide matrix. The proposed waste forms, and the processes to fabricate them, were designed to immobilize spent electrorefiner chloride salts containing alkali, alkaline earth, lanthanide, and halide fission products that accumulate in the salt during the processing of used nuclear fuel. This aspect of the I-NERI project was to demonstrate 'hot cell' fabrication and characterization of the proposed waste forms. The report covers the processing of the spent electrorefiner salt and the fabrication of each of the three waste forms. Also described are the characterization of the waste forms and chemical durability testing of the material. While waste form fabrication and sample preparation for characterization must be accomplished in a radiological hot cell facility due to hazardous radioactivity levels, smaller quantities of each waste form were removed from the hot cell to perform various analyses. Characterization included density measurement, elemental analysis, x-ray diffraction, scanning electron microscopy and the Product Consistency Test, a leaching method that measures chemical durability. Favorable results from this demonstration project will provide additional options for fission product immobilization and waste management associated with the electrochemical/pyrometallurgical processing of used nuclear fuel.

  1. Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    NASA Astrophysics Data System (ADS)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind; Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

  2. Category-theoretic models of algebraic computer systems

    NASA Astrophysics Data System (ADS)

    Kovalyov, S. P.

    2016-01-01

    A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.

  3. A sparse matrix algorithm on the Boolean vector machine

    NASA Technical Reports Server (NTRS)

    Wagner, Robert A.; Patrick, Merrell L.

    1988-01-01

    VLSI technology is being used to implement a prototype Boolean Vector Machine (BVM), a large network of very small processors with equally small memories that operate in SIMD mode; these use bit-serial arithmetic and communicate via a cube-connected-cycles network. The BVM's bit-serial arithmetic and the small memories of individual processors are noted to compromise the system's effectiveness in large numerical problem applications. Attention is presently given to the implementation of a basic matrix-vector iteration algorithm for sparse matrices on the BVM, in order to generate over 1 billion useful floating-point operations/sec for this iteration algorithm. The algorithm is expressed in a novel language designated 'BVM'.

  4. Iterative-method performance evaluation for multiple vectors associated with a large-scale sparse matrix

    NASA Astrophysics Data System (ADS)

    Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo

    2016-07-01

    Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by operating on the matrix whose columns are the right-hand-side vectors. We implemented several iterative methods and compared their performance. The maximum performance on the SPARC64 VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence behavior of the linear systems, we introduced a control method that eliminates the calculation of already converged vectors.
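
    A minimal sketch of the idea, using a simple Jacobi iteration on a block of right-hand sides and a mask that drops columns once they converge (the solver choice and test matrix are assumptions, not the paper's methods):

        import numpy as np

        def jacobi_block(A, B, tol=1e-10, max_iter=5000):
            """Jacobi iteration for AX = B with a matrix of right-hand sides B."""
            D = np.diag(A)
            R = A - np.diagflat(D)
            X = np.zeros_like(B)
            active = np.ones(B.shape[1], dtype=bool)      # columns still iterating
            for _ in range(max_iter):
                X[:, active] = (B[:, active] - R @ X[:, active]) / D[:, None]
                residual = np.linalg.norm(A @ X - B, axis=0)
                active = residual > tol * np.linalg.norm(B, axis=0)
                if not active.any():                      # all columns converged
                    break
            return X

        A = np.array([[4.0, 1.0], [1.0, 3.0]])            # diagonally dominant test matrix
        B = np.array([[1.0, 2.0], [2.0, 0.5]])            # two right-hand sides at once
        X = jacobi_block(A, B)
        assert np.allclose(A @ X, B, atol=1e-8)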

  5. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert the output of one analysis into the input of the other, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
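
    The GSE assembly step can be pictured with a toy two-discipline example: local partial sensitivities feed a small linear system whose solution is the set of total derivatives. The numbers below are invented for illustration only:

        import numpy as np

        # Local (partial) sensitivities for y1 = f1(x, y2) and y2 = f2(x, y1),
        # e.g. obtained from analytic methods or finite differences:
        dy1_dx, dy1_dy2 = 0.5, 0.2
        dy2_dx, dy2_dy1 = 1.0, 0.3

        # GSE: (I - C) d[y1, y2]/dx = partials w.r.t. x, with C the coupling matrix.
        C = np.array([[0.0, dy1_dy2],
                      [dy2_dy1, 0.0]])
        rhs = np.array([dy1_dx, dy2_dx])
        total = np.linalg.solve(np.eye(2) - C, rhs)   # global (total) sensitivities
        print(total)                                  # d y1/dx and d y2/dx with coupling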

  6. VASP- VARIABLE DIMENSION AUTOMATIC SYNTHESIS PROGRAM

    NASA Technical Reports Server (NTRS)

    White, J. S.

    1994-01-01

    VASP is a variable dimension Fortran version of the Automatic Synthesis Program, ASP. The program is used to implement Kalman filtering and control theory. Basically, it consists of 31 subprograms for solving most modern control problems in linear, time-variant (or time-invariant) control systems. These subprograms include operations of matrix algebra, computation of the exponential of a matrix and its convolution integral, and the solution of the matrix Riccati equation. The user calls these subprograms by means of a FORTRAN main program, and so can easily obtain solutions to most general problems of extremization of a quadratic functional of the state of the linear dynamical system. Particularly, these problems include the synthesis of the Kalman filter gains and the optimal feedback gains for minimization of a quadratic performance index. VASP, as an outgrowth of the Automatic Synthesis Program, has the following improvements: more versatile programming language; more convenient input/output format; some new subprograms which consolidate certain groups of statements that are often repeated; and variable dimensioning. The pertinent difference between the two programs is that VASP has variable dimensioning and more efficient storage. The documentation for the VASP program contains a VASP dictionary and example problems. The dictionary contains a description of each subroutine and instructions on its use. The example problems include dynamic response, optimal control gain, solution of the sampled data matrix Riccati equation, matrix decomposition, and a pseudo-inverse of a matrix. This program is written in FORTRAN IV and has been implemented on the IBM 360. The VASP program was developed in 1971.
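
    A modern, hedged equivalent of one VASP use case, solving the steady-state matrix Riccati equation for an optimal feedback gain, can be sketched in a few lines of Python with SciPy (the plant matrices are arbitrary examples, and this is not the Fortran code itself):

        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [0.0, -0.5]])   # example plant dynamics (assumed)
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)                              # state weighting
        R = np.array([[1.0]])                      # control weighting

        P = solve_continuous_are(A, B, Q, R)       # steady-state Riccati solution
        K = np.linalg.solve(R, B.T @ P)            # optimal feedback gain, u = -K x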

  7. A new numerically stable implementation of the T-matrix method for electromagnetic scattering by spheroidal particles

    NASA Astrophysics Data System (ADS)

    Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.

    2013-07-01

    We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10^-10 relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.
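
    The failure mode the authors work around can be reproduced in miniature: when a computation contains a part that analytically sums to zero but is large in magnitude, finite-precision summation destroys the small remainder. A toy demonstration (not the EBCM integrals themselves):

        import numpy as np

        big = 1.0e16                             # stand-in for the analytically cancelling part
        terms = np.array([big, 1.0 / 3.0, -big])
        naive = terms.sum()                      # the 1/3 is absorbed and lost: result 0.0
        stable = 1.0 / 3.0                       # same quantity with the zero-sum part removed
        print(naive, stable)                     # 0.0 versus 0.3333333333333333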

  8. Real-Time Parameter Estimation Method Applied to a MIMO Process and its Comparison with an Offline Identification Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk

    2009-01-12

    An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time domain input/output data were utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed. Continuous-time system transfer function matrix parameters were estimated in real-time by the least-squares method. Simulation results of the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
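
    Real-time least-squares estimation of this kind is commonly implemented recursively; a minimal sketch for a first-order model y[k] = a*y[k-1] + b*u[k-1] follows (the model, gains, and noise level are assumptions, not the paper's process):

        import numpy as np

        rng = np.random.default_rng(0)
        theta_true = np.array([0.8, 0.5])             # "unknown" parameters (a, b)
        theta = np.zeros(2)                           # running estimate
        P = 1000.0 * np.eye(2)                        # estimate covariance
        lam = 1.0                                     # forgetting factor (1 = none)

        y_prev, u_prev = 0.0, 0.0
        for _ in range(500):
            u = rng.normal()                          # excitation input
            y = theta_true @ np.array([y_prev, u_prev]) + 0.01 * rng.normal()
            phi = np.array([y_prev, u_prev])          # regressor vector
            gain = P @ phi / (lam + phi @ P @ phi)
            theta = theta + gain * (y - phi @ theta)  # update with prediction error
            P = (P - np.outer(gain, phi @ P)) / lam
            y_prev, u_prev = y, u

        print(theta)                                  # should approach theta_true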

  9. Impact of Barcode Medication Administration Technology on How Nurses Spend Their Time On Clinical Care

    PubMed Central

    Poon, Eric G; Keohane, Carol; Featherstone, Erica; Hays, Brandon; Dervan, Andrew; Woolf, Seth; Hayes, Judy; Bane, Anne; Newmark, Lisa; Gandhi, Tejal K

    2006-01-01

    In a time-motion study conducted in a hospital that recently implemented barcode medication administration (BCMA) technology, we found that the BCMA system did not increase the amount of time nurses spend on medication administration activities, and did not compromise the amount of time nurses spent on direct care of patients. Our results should allay concerns regarding the impact of BCMA on nursing workflow. PMID:17238684

  10. Affects of Provider Type on Patient Satisfaction, Productivity and Cost Efficiency

    DTIC Science & Technology

    2006-04-25

    plus inflation. With the implementation of the prospective payment system, the MTF Commanders will need to examine ways to demonstrate effectiveness ...practitioners performed well when compared to physicians, the longer time spent with patients can reduce productivity and thereby reduce cost effectiveness ...are most cost effective in use of resources (Vincent, 2002). Cost per visit ratio is derived by dividing the variable cost of Provider Type 22

  11. Rethinking anaerobic As(III) oxidation in filters: Effect of indigenous nitrate respirers.

    PubMed

    Cui, Jinli; Du, Jingjing; Tian, Haixia; Chan, Tingshan; Jing, Chuanyong

    2018-04-01

    Microorganisms play a key role in the redox transformation of arsenic (As) in aquifers. In this study, the impact of indigenous bacteria, especially the prevailing nitrate respirers, on arsenite (As(III)) oxidation was explored during groundwater filtration using granular TiO 2 and subsequent spent TiO 2 anaerobic landfill. X-ray absorption near edge structure spectroscopy analysis showed As(III) oxidation (46% in 10 days) in the presence of nitrate in the simulated anaerobic landfills. Meanwhile, iron (Fe) species on the spent TiO 2 were dominated by amorphous ferric arsenate, ferrihydrite and goethite. The Fe phase showed no change during the anaerobic landfill incubation. Batch incubation experiments implied that the indigenous bacteria completely oxidized As(III) to arsenate (As(V)) in 10 days using nitrate as the terminal electron acceptor under anaerobic conditions. The bacterial community analysis indicated that various kinds of microbial species exist in groundwater matrix. Phylogenetic tree analysis revealed that Proteobacteria was the dominant phylum, with Hydrogenophaga (34%), Limnohabitans (16%), and Simplicispira (7%) as the major bacterial genera. The nitrate respirers especially from the Hydrogenophaga genus anaerobically oxidized As(III) using nitrate as an electron acceptor instead of oxygen. Our study implied that microbes can facilitate the groundwater As oxidation using nitrate on the adsorptive media. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Fate of 14C-acrylamide in roasted and ground coffee during storage.

    PubMed

    Baum, Matthias; Böhm, Nadine; Görlitz, Jessica; Lantz, Ingo; Merz, Karl Heinz; Ternité, Rüdiger; Eisenbrand, Gerhard

    2008-05-01

    Acrylamide (AA) is formed during the heating of carbohydrate-rich foods in the course of the Maillard reaction. AA has been classified as probably carcinogenic to humans. Storage experiments with roasted coffee have shown that AA levels decrease depending on storage time and temperature. In the present study the fate of AA lost during storage of roasted and ground (R&G) coffee was studied, using 14C-labeled AA as a radiotracer. Radiolabel was measured in the coffee brew, the filter residue, and the volatiles. In the brew, the total 14C-label decreased during storage of R&G coffee, while activity in the filter residue built up concomitantly. [2,3-14C]-AA (14C-AA) was the only 14C-related water-extractable low-molecular compound in the brew detected by radio-HPLC. No formation of volatile 14C-AA-related compounds was detected during storage and coffee brewing. Close to 90% of the radiolabel in the filter residue (spent R&G coffee, spent grounds) remained firmly bound to the matrix, largely resisting extraction by aqueous ammonia, ethyl acetate, chloroform, hexane, and a sequential polyenzymatic digest. Furanthiols, which are abundant as aroma components in roasted coffee, were not found to be involved in the formation of covalent AA adducts and thus do not contribute substantially to the decrease of AA during storage.

  13. Non-invasive preimplantation genetic screening using array comparative genomic hybridization on spent culture media: a proof-of-concept pilot study.

    PubMed

    Feichtinger, Michael; Vaccari, Enrico; Carli, Luca; Wallner, Elisabeth; Mädel, Ulrike; Figl, Katharina; Palini, Simone; Feichtinger, Wilfried

    2017-06-01

    The aim of this pilot study was to assess if array comparative genomic hybridization (aCGH), non-invasive preimplantation genetic screening (PGS) on blastocyst culture media is feasible. Therefore, aCGH analysis was carried out on 22 spent blastocyst culture media samples after polar body PGS because of advanced maternal age. All oocytes were fertilized by intracytoplasmic sperm injection and all embryos underwent assisted hatching. Concordance of polar body analysis and culture media genetic results was assessed. Thirteen out of 18 samples (72.2%) revealed general concordance of ploidy status (euploid or aneuploid). At least one chromosomal aberration was found concordant in 10 out of 15 embryos found to be aneuploid by both polar body and culture media analysis. Overall, 17 out of 35 (48.6%) single chromosomal aneuploidies were concordant between the culture media and polar body analysis. By analysing negative controls (oocytes with fertilization failure), notable maternal contamination was observed. Therefore, non-invasive PGS could serve as a second matrix after polar body or cleavage stage PGS; however, in euploid results, maternal contamination needs to be considered and results interpreted with caution. Copyright © 2017 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  14. Movements and bioenergetics of canvasbacks wintering in the upper Chesapeake Bay

    USGS Publications Warehouse

    Howerter, D.W.

    1990-01-01

    The movement patterns, range areas and energetics of canvasbacks (Aythya valisineria) wintering in the upper Chesapeake Bay, Maryland, were investigated. Eighty-seven juvenile female canvasbacks were radio-tracked between 30 December 1988 and 25 March 1989. Diurnal time and energy budgets were constructed for a time-of-day by season matrix for canvasbacks using riverine and main-bay habitats. Canvasbacks were very active at night, making regular and often lengthy crepuscular movements (mean = 11.7 km) from nearshore habitats during the day to offshore habitats at night. Movement patterns were similar for birds using habitats on the eastern and western shores of the Bay. Canvasbacks had extensive home ranges averaging 14,286 ha, and used an average of 1.97 core areas. Sleeping was the predominant diurnal behavior. Telemetry indicated that canvasbacks actively fed at night. Canvasbacks spent more time in active behaviors (e.g. swimming, alert) on the eastern shore than on the western shore. Similarly, canvasbacks were more active during daytime hours at locations where artificial feeding occurred. Behavioral patterns were only weakly correlated with weather patterns. Canvasbacks appeared to reduce energy expenditure in mid-winter by reducing the distances moved, reducing feeding activities and increasing the amount of time spent sleeping. This pattern was observed even though 1988-89 mid-winter weather conditions were very mild.

  15. Eco-sustainable systems based on poly(lactic acid), diatomite and coffee grounds extract for food packaging.

    PubMed

    Cacciotti, Ilaria; Mori, Stefano; Cherubini, Valeria; Nanni, Francesca

    2018-06-01

    In the food packaging sector many efforts have been (and are) devoted to the development of new materials in order to respond to an urgent market demand for green and eco-sustainable products. In particular, much attention is currently devoted both to the use of compostable and biobased polymers as an innovative and promising alternative to the currently used petrochemically derived polymers, and to the re-use of waste materials coming from agriculture and the food industry. In this work, multifunctional eco-sustainable systems, based on poly(lactic acid) (PLA) as the biopolymeric matrix, diatomaceous earth as a reinforcing filler and spent coffee grounds extract as an oxygen scavenger, were produced for the first time, in order to provide a simultaneous improvement of mechanical and gas barrier properties. The influence of the diatomite and the spent coffee grounds extract on the microstructural, mechanical and oxygen barrier properties of the produced films was investigated in depth by means of X-ray diffraction (XRD), infrared spectroscopy (FT-IR, ATR), scanning electron microscopy (SEM), uniaxial tensile tests, and O2 permeability measurements. An improvement of both mechanical and oxygen barrier properties was recorded for systems characterised by the co-presence of diatomite and coffee grounds extract, suggesting a possible synergic effect of the two additives. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Polymer and ceramic nanocomposites for aerospace applications

    NASA Astrophysics Data System (ADS)

    Rathod, Vivek T.; Kumar, Jayanth S.; Jain, Anjana

    2017-11-01

    This paper reviews the potential of polymer and ceramic matrix composites for aerospace/space vehicle applications. Special, unique and multifunctional properties arising from the dispersion of nanoparticles in ceramic and metal matrices are briefly discussed, followed by a classification of the resulting aerospace applications. The paper presents polymer matrix composites, which comprise the majority of aerospace applications in structures, coatings, tribology, structural health monitoring, electromagnetic shielding and shape memory applications. The capabilities of ceramic matrix nanocomposites to provide electromagnetic shielding for aircraft and tribological properties suited to space environments are discussed. The structural health monitoring capability of ceramic matrix nanocomposites is also discussed. The properties of the resulting nanocomposite materials are discussed together with their disadvantages, such as cost and processing difficulties. The paper concludes with a discussion of possible future perspectives and challenges in the implementation and further development of polymer and ceramic nanocomposite materials.

  17. FORTRAN Versions of Reformulated HFGMC Codes

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Aboudi, Jacob; Bednarcyk, Brett A.

    2006-01-01

    Several FORTRAN codes have been written to implement the reformulated version of the high-fidelity generalized method of cells (HFGMC). Various aspects of the HFGMC and its predecessors were described in several prior NASA Tech Briefs articles, the most recent being HFGMC Enhancement of MAC/GMC (LEW-17818-1), NASA Tech Briefs, Vol. 30, No. 3 (March 2006), page 34. The HFGMC is a mathematical model of micromechanics for simulating stress and strain responses of fiber/matrix and other composite materials. The HFGMC overcomes a major limitation of a prior version of the GMC by accounting for coupling of shear and normal stresses and thereby affords greater accuracy, albeit at a large computational cost. In the reformulation of the HFGMC, the issue of computational efficiency was addressed: as a result, codes that implement the reformulated HFGMC complete their calculations about 10 times as fast as do those that implement the HFGMC. The present FORTRAN implementations of the reformulated HFGMC were written to satisfy a need for compatibility with other FORTRAN programs used to analyze structures and composite materials. The FORTRAN implementations also afford capabilities, beyond those of the basic HFGMC, for modeling inelasticity, fiber/matrix debonding, and coupled thermal, mechanical, piezo, and electromagnetic effects.

  18. International Review of Frameworks for Impact Evaluation of Appliance Standards, Labeling, and Incentives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Nan; Romankiewicz, John; Vine, Edward

    2012-12-15

    In recent years, the number of energy efficiency policies implemented has grown very rapidly as energy security and climate change have become top policy issues for many governments around the world. Within the sphere of energy efficiency policy, governments (federal and local), electric utilities, and other types of businesses and institutions are implementing a wide variety of programs to spread energy efficiency practices in industry, buildings, transport, and electricity. As programs proliferate, there is an administrative and business imperative to evaluate the savings and processes of these programs to ensure that program funds spent are indeed leading to a more energy-efficient economy.

  19. Efficient Computation Of Manipulator Inertia Matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    An improved method for computation of the manipulator inertia matrix is developed, based on the concept of the spatial inertia of a composite rigid body. It is required for implementation of advanced dynamic-control schemes as well as for dynamic simulation of manipulator motion. The work is motivated by the increasing demand for fast algorithms to provide real-time control and simulation capability and, particularly, by the need for the faster-than-real-time simulation capability required in many anticipated space teleoperation applications.
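
    As a scaled-down illustration of joint-space inertia assembly (a Jacobian-based formulation for a planar two-link arm with tip point masses, not the composite-rigid-body algorithm of the brief itself; link data are assumptions):

        import numpy as np

        def inertia_matrix(q, l=(1.0, 1.0), m=(1.0, 1.0)):
            """Joint-space inertia M(q) = sum_i m_i * J_i^T J_i for tip point masses."""
            q1, q2 = q
            # Jacobian of the link-1 tip position w.r.t. (q1, q2):
            J1 = np.array([[-l[0] * np.sin(q1), 0.0],
                           [ l[0] * np.cos(q1), 0.0]])
            # Jacobian of the link-2 tip position:
            s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
            J2 = np.array([[-l[0] * np.sin(q1) - l[1] * s12, -l[1] * s12],
                           [ l[0] * np.cos(q1) + l[1] * c12,  l[1] * c12]])
            return m[0] * J1.T @ J1 + m[1] * J2.T @ J2

        M = inertia_matrix(np.array([0.3, 0.7]))
        print(M)   # symmetric, positive definite away from singular configurations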

  20. Implementation and Assessment of Advanced Analog Vector-Matrix Processor

    NASA Technical Reports Server (NTRS)

    Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector-matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector-matrix processor consists of a laser diode source, an acoustooptical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector-matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low cost, highly energy efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100, 10 MHz (200 Gops) processor.

  1. Background recovery via motion-based robust principal component analysis with matrix factorization

    NASA Astrophysics Data System (ADS)

    Pan, Peng; Wang, Yongli; Zhou, Mingyuan; Sun, Zhipeng; He, Guoping

    2018-03-01

    Background recovery is a key technique in video analysis, but it still suffers from many challenges, such as camouflage, lighting changes, and diverse types of image noise. Robust principal component analysis (RPCA), which aims to recover a low-rank matrix and a sparse matrix, is a general framework for background recovery. The nuclear norm is widely used as a convex surrogate for the rank function in RPCA, which requires computing the singular value decomposition (SVD), a task that is increasingly costly as matrix sizes and ranks increase. However, matrix factorization greatly reduces the dimension of the matrix for which the SVD must be computed. Motion information has been shown to improve low-rank matrix recovery in RPCA, but this method still finds it difficult to handle original video data sets because of its batch-mode formulation and implementation. Hence, in this paper, we propose a motion-assisted RPCA model with matrix factorization (FM-RPCA) for background recovery. Moreover, an efficient linear alternating direction method of multipliers with a matrix factorization (FL-ADM) algorithm is designed for solving the proposed FM-RPCA model. Experimental results illustrate that the method provides stable results and is more efficient than the current state-of-the-art algorithms.
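
    For background, the classic RPCA decomposition that the record builds on can be sketched with a short ADMM loop (singular value thresholding for the low-rank part, soft thresholding for the sparse part); the step-size heuristic and iteration count are common defaults, not the paper's settings:

        import numpy as np

        def shrink(X, tau):                      # soft-thresholding (sparse part)
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def svt(X, tau):                         # singular value thresholding (low rank)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(shrink(s, tau)) @ Vt

        def rpca(D, n_iter=200):
            m, n = D.shape
            lam = 1.0 / np.sqrt(max(m, n))
            mu = 0.25 * m * n / np.abs(D).sum()  # common step-size heuristic
            L = np.zeros_like(D)
            S = np.zeros_like(D)
            Y = np.zeros_like(D)
            for _ in range(n_iter):
                L = svt(D - S + Y / mu, 1.0 / mu)
                S = shrink(D - L + Y / mu, lam / mu)
                Y = Y + mu * (D - L - S)         # dual (multiplier) update
            return L, S                          # background (low rank) + motion (sparse)

        D = np.random.default_rng(0).normal(size=(60, 40))   # stand-in for video frames
        L, S = rpca(D)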

  2. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

    Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
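
    The storage argument can be sketched directly: applying the system matrix as a chain of three sparse factors costs only the factors' nonzeros. The matrices below are random stand-ins labeled with the roles named in the abstract:

        import numpy as np
        import scipy.sparse as sp

        rng = np.random.default_rng(0)
        n_pix, n_bins = 4096, 8192
        B_img  = sp.random(n_pix, n_pix, density=1e-3, random_state=0, format="csr")   # image blur
        P      = sp.random(n_bins, n_pix, density=1e-3, random_state=1, format="csr")  # geometric projector
        B_sino = sp.random(n_bins, n_bins, density=1e-3, random_state=2, format="csr") # detector blur

        x = rng.normal(size=n_pix)                 # image
        y = B_sino @ (P @ (B_img @ x))             # forward projection, factor by factor
        x_back = B_img.T @ (P.T @ (B_sino.T @ y))  # matched back projection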

  3. FPGA architecture and implementation of sparse matrix vector multiplication for the finite element method

    NASA Astrophysics Data System (ADS)

    Elkurdi, Yousef; Fernández, David; Souleimanov, Evgueni; Giannacopoulos, Dennis; Gross, Warren J.

    2008-04-01

    The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. The trends in floating-point performance are moving in favor of Field-Programmable Gate Arrays (FPGAs), hence interest has grown in the scientific community to exploit this technology. We present an architecture and implementation of an FPGA-based sparse matrix-vector multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. FEM matrices display specific sparsity patterns that can be exploited to improve the efficiency of hardware designs. Our architecture exploits FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements by relying on external SDRAM for data storage while utilizing the FPGA's computational resources in a stream-through systolic approach. The architecture is based on a pipelined linear array of processing elements (PEs) coupled with a hardware-oriented matrix striping algorithm and a partitioning scheme which enables it to process arbitrarily large matrices without changing the number of PEs in the architecture. Therefore, this architecture is only limited by the amount of external RAM available to the FPGA. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz, obtaining a peak performance of 1.76 GFLOPS. For the 8 GB/s of memory bandwidth typical of recent FPGA systems, this architecture can achieve 1.5 GFLOPS sustained performance. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solution techniques such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.

  4. Increased care demand and medical costs after falls in nursing homes: A Delphi study.

    PubMed

    Sterke, Carolyn Shanty; Panneman, Martien J; Erasmus, Vicki; Polinder, Suzanne; van Beeck, Ed F

    2018-04-21

    To estimate the increased care demand and medical costs caused by falls in nursing homes. There is compelling evidence that falls in nursing homes are preventable. However, proper implementation of evidence-based guidelines to prevent falls is often hindered by insufficient management support, staff time and funding. A three-round Delphi study. A panel of 41 experts, all working in nursing homes in the Netherlands, received three online questionnaires to estimate the extra hours of care needed during the first year after a fall. This was estimated for ten fall categories with different levels of injury severity, in three scenarios, that is, a best-case, a typical-case and a worst-case scenario. We calculated the costs of falls by multiplying the mean number of extra hours that the participants spent on the care of a resident after a fall by their hourly wages. In the case of a noninjurious fall, the extra time spent on the faller is on average almost 5 hr, which expressed in euros adds up to € 193. The extra staff time and costs of falls increased with increasing severity of injury. In the case of a fracture of the lower limb, the extra staff time increased to 132 hr, which expressed in euros is € 4,604. In the worst-case scenario of a fracture of the lower limb, the extra staff time increased to 284 hr, which expressed in euros is € 10,170. Falls in nursing homes result in a great deal of extra staff time spent on care, with extra costs varying between € 193 for a noninjurious fall and € 10,170 for serious falls. This study could aid decision-making on investing in appropriate implementation of falls prevention interventions in nursing homes. © 2018 John Wiley & Sons Ltd.

  5. Determining heavy metals in spent compact fluorescent lamps (CFLs) and their waste management challenges: some strategies for improving current conditions.

    PubMed

    Taghipour, Hassan; Amjad, Zahra; Jafarabadi, Mohamad Asghari; Gholampour, Akbar; Norouz, Prviz

    2014-07-01

    From an environmental viewpoint, the most important advantage of compact fluorescent lamps (CFLs) is the reduction of greenhouse gas emissions. Their significant disadvantage, however, is the disposal of spent lamps, because they contain a few milligrams of toxic metals, especially mercury and lead. For successful implementation of any waste management plan, the availability of sufficient and accurate information on the quantities and composition of the generated waste and on current management conditions is a fundamental prerequisite. In this study, CFLs were selected from among 20 different brands in Iran. The content of heavy metals including mercury, lead, nickel, arsenic and chromium was determined by inductively coupled plasma (ICP). Two cities, Tehran and Tabriz, were selected for assessing the current waste management condition of CFLs. The study found that the waste generation of CFLs in the country was about 159.80, 183.82 and 153.75 million lamps per year in 2010, 2011 and 2012, respectively. The waste generation rate of CFLs in Iran was determined to be 2.05 per person in 2012. The average amount of mercury, lead, nickel, arsenic and chromium was 0.417, 2.33, 0.064, 0.056 and 0.012 mg per lamp, respectively. Currently, waste CFLs are disposed of with the municipal waste stream in landfills. For improving the current conditions, we propose, drawing on the successful experience of extended producer responsibility (EPR) in other electronic-waste management, that an EPR program with an advanced recycling fee (ARF) be implemented for collecting and then recycling CFLs. To encourage consumers to take spent CFLs back at the end of the products' useful life, a proportion of the ARF (for example, 50%) can be refunded. In addition, the government and the Environmental Protection Agency should support and encourage CFL recycling companies both technically and financially in the first place. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Telemedicine optoelectronic biomedical data processing system

    NASA Astrophysics Data System (ADS)

    Prosolovska, Vita V.

    2010-08-01

    The telemedicine optoelectronic biomedical data processing system was created to share medical information for health-status monitoring and for a timely, rapid response to crises. The system includes the following main blocks: a bioprocessor, an analog-digital converter for biomedical images, an optoelectronic module for image processing, an optoelectronic module for parallel recording and storage of biomedical images, and a matrix screen for displaying biomedical images. The rated temporal characteristics of the blocks are determined by the triggering optoelectronic couples in the analog-digital converters and by the imaging time of the matrix screen. The element base for the hardware implementation of the developed matrix screen is integrated optoelectronic couples produced by selective epitaxy.

  7. IMPLEMENTATION OF THE SMOKE EMISSION DATA PROCESSOR AND SMOKE TOOL INPUT DATA PROCESSOR IN MODELS-3

    EPA Science Inventory

    The U.S. Environmental Protection Agency has implemented Version 1.3 of the SMOKE (Sparse Matrix Operator Kernel Emissions) processor for preparation of area, mobile, point, and biogenic source emission data within Version 4.1 of the Models-3 air quality modeling framework. The SMOK...

  8. Snapshot retinal imaging Mueller matrix polarimeter

    NASA Astrophysics Data System (ADS)

    Wang, Yifan; Kudenov, Michael; Kashani, Amir; Schwiegerling, Jim; Escuti, Michael

    2015-09-01

    Early diagnosis of glaucoma, which is a leading cause of visual impairment, is critical for successful treatment. It has been shown that imaging polarimetry has advantages in the early detection of structural changes in the retina. Here, we theoretically and experimentally present a snapshot Mueller matrix polarimeter fundus camera, which has the potential to record the polarization-altering characteristics of the retina with a single snapshot. It is made by incorporating polarization gratings into a fundus camera design. Complete Mueller matrix data sets can be obtained by analyzing the polarization fringes projected onto the image plane. In this paper, we describe the experimental implementation of the snapshot retinal imaging Mueller matrix polarimeter (SRIMMP), highlight issues related to calibration, and provide preliminary images acquired with the camera.

  9. Implementing the SU(2) Symmetry for the DMRG

    NASA Astrophysics Data System (ADS)

    Alvarez, Gonzalo

    2010-03-01

    In the Density Matrix Renormalization Group (DMRG) algorithm (White, 1992), Hamiltonian symmetries play an important role. Using symmetries, the matrix representation of the Hamiltonian can be blocked. Diagonalizing each matrix block is more efficient than diagonalizing the original matrix. This talk will explain how the DMRG++ code (arXiv:0902.3185; Computer Physics Communications 180 (2009) 1572-1578) has been extended to handle the non-local SU(2) symmetry in a model independent way. Improvements in CPU times compared to runs with only local symmetries will be discussed for typical tight-binding models of strongly correlated electronic systems. The computational bottleneck of the algorithm, and the use of shared memory parallelization, will also be addressed. Finally, a roadmap for future work on DMRG++ will be presented.
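
    The payoff of symmetry blocking can be seen in a toy example: when a Hamiltonian couples only states with equal quantum numbers, diagonalizing the blocks reproduces the full spectrum at lower cost (the matrix below is random and purely illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 12
        qnum = rng.integers(0, 3, size=n)           # conserved quantum number per basis state

        # Build a Hamiltonian that only couples states with equal quantum numbers:
        H = rng.normal(size=(n, n))
        H = (H + H.T) / 2
        H *= (qnum[:, None] == qnum[None, :])       # enforce the symmetry block structure

        eigvals = []
        for q in np.unique(qnum):                   # diagonalize block by block
            idx = np.where(qnum == q)[0]
            eigvals.extend(np.linalg.eigvalsh(H[np.ix_(idx, idx)]))

        assert np.allclose(np.sort(eigvals), np.linalg.eigvalsh(H))   # same spectrum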

  10. High-SNR spectrum measurement based on Hadamard encoding and sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Zhaoxin; Yue, Jiang; Han, Jing; Li, Long; Jin, Yong; Gao, Yuan; Li, Baoming

    2017-12-01

    The denoising capabilities of the H-matrix and the cyclic S-matrix based on sparse reconstruction, employed in the Pixel of Focal Plane Coded Visible Spectrometer for spectrum measurement, are investigated, where the spectrum is sparse in a known basis. In the measurement process, the digital micromirror device, which implements the Hadamard coding, plays an important role. In contrast with Hadamard transform spectrometry, this spectrometer, relying on shift invariance, may have the advantage of high efficiency. Simulations and experiments show that the nonlinear solution with sparse reconstruction has a better signal-to-noise ratio than the linear solution, and that the H-matrix outperforms the cyclic S-matrix whether the reconstruction method is nonlinear or linear.
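
    A sketch of the encoding step (invented sizes and signals; the record's sparse nonlinear solver is replaced here by the simple linear inverse to keep the example self-contained):

        import numpy as np
        from scipy.linalg import hadamard

        n = 64
        H = hadamard(n).astype(float)               # +/-1 Hadamard (H-matrix) encoding
        spectrum = np.zeros(n)
        spectrum[[5, 17, 40]] = [1.0, 0.5, 0.2]     # spectrum sparse in a known basis

        rng = np.random.default_rng(0)
        measured = H @ spectrum + 0.01 * rng.normal(size=n)   # multiplexed measurement
        recovered = H.T @ measured / n              # H^T H = n I, so this inverts H

        print(np.round(recovered[[5, 17, 40]], 3))  # close to the injected amplitudes

    The multiplexing advantage comes from measuring many spectral channels per exposure; the linear inverse above is the baseline that the sparse nonlinear reconstruction improves on.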

  11. Matrix elements of N-particle explicitly correlated Gaussian basis functions with complex exponential parameters

    NASA Astrophysics Data System (ADS)

    Bubin, Sergiy; Adamowicz, Ludwik

    2006-06-01

    In this work we present analytical expressions for Hamiltonian matrix elements with spherically symmetric, explicitly correlated Gaussian basis functions with complex exponential parameters for an arbitrary number of particles. The expressions are derived using the formalism of matrix differential calculus. In addition, we present expressions for the energy gradient that includes derivatives of the Hamiltonian integrals with respect to the exponential parameters. The gradient is used in the variational optimization of the parameters. All the expressions are presented in the matrix form suitable for both numerical implementation and theoretical analysis. The energy and gradient formulas have been programed and used to calculate ground and excited states of the He atom using an approach that does not involve the Born-Oppenheimer approximation.
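
    For orientation (notation assumed from standard references on explicitly correlated Gaussians, not quoted from the paper), the simplest matrix element of this family, the overlap of two spherically symmetric n-particle basis functions phi_k = exp(-r^T (A_k (x) I_3) r), has the closed form

        \langle \phi_k \mid \phi_l \rangle
            = \frac{\pi^{3n/2}}{\left[ \det\left( \overline{A_k} + A_l \right) \right]^{3/2}},

    where r collects the 3n particle coordinates, A_k is a symmetric n x n matrix of (possibly complex) exponential parameters, and the bar denotes the complex conjugation contributed by the bra.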

  12. Matrix elements of N-particle explicitly correlated Gaussian basis functions with complex exponential parameters.

    PubMed

    Bubin, Sergiy; Adamowicz, Ludwik

    2006-06-14

    In this work we present analytical expressions for Hamiltonian matrix elements with spherically symmetric, explicitly correlated Gaussian basis functions with complex exponential parameters for an arbitrary number of particles. The expressions are derived using the formalism of matrix differential calculus. In addition, we present expressions for the energy gradient that includes derivatives of the Hamiltonian integrals with respect to the exponential parameters. The gradient is used in the variational optimization of the parameters. All the expressions are presented in the matrix form suitable for both numerical implementation and theoretical analysis. The energy and gradient formulas have been programmed and used to calculate ground and excited states of the He atom using an approach that does not involve the Born-Oppenheimer approximation.

  13. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
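
    A hedged Python analogue of the simulation (the paper uses MATLAB's ode45; the matrix, gain, and linear activation below are assumptions): the network dX/dt = -gamma * A^T (A X - I) flows to the inverse of A:

        import numpy as np
        from scipy.integrate import solve_ivp

        A = np.array([[2.0, 1.0], [0.0, 3.0]])
        gamma = 50.0
        n = A.shape[0]

        def rhs(t, x_vec):                         # vectorized matrix ODE (the VDE)
            X = x_vec.reshape(n, n)
            dX = -gamma * A.T @ (A @ X - np.eye(n))
            return dX.ravel()

        sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(n * n), rtol=1e-8, atol=1e-10)
        X_final = sol.y[:, -1].reshape(n, n)
        assert np.allclose(X_final, np.linalg.inv(A), atol=1e-4)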

  14. Implementation of the SU(2) Hamiltonian Symmetry for the DMRG Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez, Gonzalo

    2012-01-01

    In the Density Matrix Renormalization Group (DMRG) algorithm (White, 1992, 1993), Hamiltonian symmetries play an important role. Using symmetries, the matrix representation of the Hamiltonian can be blocked. Diagonalizing each matrix block is more efficient than diagonalizing the original matrix. This paper explains how the DMRG++ code (Alvarez, 2009) has been extended to handle the non-local SU(2) symmetry in a model independent way. Improvements in CPU times compared to runs with only local symmetries are discussed for the one-orbital Hubbard model, and for a two-orbital Hubbard model for iron-based superconductors. The computational bottleneck of the algorithm and the use of shared memory parallelization are also addressed.

  15. Considerations on Visible Light Communication security by applying the Risk Matrix methodology for risk assessment

    PubMed Central

    Rabadan, Jose; Perez-Jimenez, Rafael

    2017-01-01

    Visible Light Communications (VLC) is a cutting-edge technology for data communication that is being considered for implementation in a wide range of applications such as inter-vehicle communication or Local Area Network (LAN) communication. As a novel technology, some aspects of the implementation of VLC have not been deeply considered or tested. Among these aspects, security and its implementation may become an obstacle to VLC's broad usage. In this article, we have used the well-known Risk Matrix methodology to determine the relative risk that several common attacks pose in a VLC network. Four examples: a War Driving, a Queensland-like Denial of Service, a Preshared Key Cracking, and an Evil Twin attack, illustrate the application of the methodology to a VLC implementation. The chosen attacks also cover the different areas delimited by the attack taxonomy used in this work. By defining and determining which attacks present a greater risk, the results of this work indicate the areas in which investment is needed to increase the safety of VLC networks. PMID:29186184

  16. 3D Progressive Damage Modeling for Laminated Composite Based on Crack Band Theory and Continuum Damage Mechanics

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Pineda, Evan J.; Ranatunga, Vipul; Smeltzer, Stanley S.

    2015-01-01

    A simple continuum damage mechanics (CDM) based 3D progressive damage analysis (PDA) tool for laminated composites was developed and implemented as a user-defined material subroutine to link with a commercially available explicit finite element code. This PDA tool uses linear lamina properties from standard tests, predicts damage initiation with an easy-to-implement Hashin-Rotem failure criterion, and, in the damage evolution phase, evaluates the degradation of material properties based on the crack band theory and traction-separation cohesive laws. It follows Matzenmiller et al.'s formulation to incorporate the degrading material properties into the damaged stiffness matrix. Since nonlinear shear and matrix stress-strain relations are not implemented, correction factors are used to slow the reduction of the damaged shear stiffness terms, reflecting the effect of these nonlinearities on the laminate strength predictions. This CDM-based PDA tool is implemented as a user-defined material (VUMAT) to link with the Abaqus/Explicit code. Strength predictions obtained using this VUMAT are correlated with test data for a set of notched specimens under tension and compression loads.
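
    A minimal sketch of a Hashin-Rotem style initiation check for a lamina under plane stress (the strength allowables below are placeholders, not values from the paper):

        import numpy as np

        def hashin_rotem(s11, s22, t12, Xt, Xc, Yt, Yc, S):
            """Return (fiber, matrix) failure indices; a value >= 1 signals initiation."""
            fiber = (s11 / Xt) ** 2 if s11 >= 0 else (s11 / Xc) ** 2
            if s22 >= 0:                                   # matrix tension mode
                matrix = (s22 / Yt) ** 2 + (t12 / S) ** 2
            else:                                          # matrix compression mode
                matrix = (s22 / Yc) ** 2 + (t12 / S) ** 2
            return fiber, matrix

        fiber_idx, matrix_idx = hashin_rotem(1200.0, 30.0, 40.0,
                                             Xt=1500.0, Xc=1200.0,
                                             Yt=50.0, Yc=200.0, S=70.0)
        print(fiber_idx, matrix_idx)    # 0.64 and about 0.69: no initiation yet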

  17. Considerations on Visible Light Communication security by applying the Risk Matrix methodology for risk assessment.

    PubMed

    Marin-Garcia, Ignacio; Chavez-Burbano, Patricia; Guerra, Victor; Rabadan, Jose; Perez-Jimenez, Rafael

    2017-01-01

    Visible Light Communications (VLC) is a cutting edge technology for data communication that is being considered to be implemented in a wide range of applications such as Inter-vehicle communication or Local Area Network (LAN) communication. As a novel technology, some aspects of the implementation of VLC have not been deeply considered or tested. Among these aspects, security and its implementation may become an obstacle for VLCs broad usage. In this article, we have used the well-known Risk Matrix methodology to determine the relative risk that several common attacks have in a VLC network. Four examples: a War Driving, a Queensland alike Denial of Service, a Preshared Key Cracking, and an Evil Twin attack, illustrate the utilization of the methodology over a VLC implementation. The used attacks also covered the different areas delimited by the attack taxonomy used in this work. By defining and determining which attacks present a greater risk, the results of this work provide a lead into which areas should be invested to increase the safety of VLC networks.

  18. Packaging Strategies for Criticality Safety for "Other" DOE Fuels in a Repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larry L Taylor

    2004-06-01

    Since 1998, there has been an ongoing effort to gain acceptance of U.S. Department of Energy (DOE)-owned spent nuclear fuel (SNF) in the national repository. To accomplish this goal, the fuel matrix was used as a discriminating feature to segregate fuels into nine distinct groups. From each of those groups, a characteristic fuel was selected and analyzed for criticality safety based on a proposed packaging strategy. This report identifies and quantifies the important criticality parameters for the canisterized fuels within each criticality group to: (1) demonstrate how the “other” fuels in the group are bounded by the baseline calculations or (2) allow identification of individual type fuels that might require special analysis and packaging.

  19. Composite neutron absorbing coatings for nuclear criticality control

    DOEpatents

    Wright, Richard N.; Swank, W. David; Mizia, Ronald E.

    2005-07-19

    Thermal neutron absorbing composite coating materials and methods of applying such coating materials to spent nuclear fuel storage systems are provided. A composite neutron absorbing coating applied to a substrate surface includes a neutron absorbing layer overlying at least a portion of the substrate surface, and a corrosion resistant top coat layer overlying at least a portion of the neutron absorbing layer. An optional bond coat layer can be formed on the substrate surface prior to forming the neutron absorbing layer. The neutron absorbing layer can include a neutron absorbing material, such as gadolinium oxide or gadolinium phosphate, dispersed in a metal alloy matrix. The coating layers may be formed by a plasma spray process or a high velocity oxygen fuel process.

  20. D-MATRIX: A web tool for constructing weight matrix of conserved DNA motifs

    PubMed Central

    Sen, Naresh; Mishra, Manoj; Khan, Feroz; Meena, Abha; Sharma, Ashok

    2009-01-01

    Despite considerable efforts to date, DNA motif prediction in whole genomes remains a challenge for researchers. Current genome-wide motif prediction tools require either a direct pattern sequence (for a single motif) or a weight matrix (for multiple motifs). Although motif pattern databases and tools for genome-level prediction exist, there is no tool for weight matrix construction. Considering this, we developed the D-MATRIX tool, which predicts different types of weight matrices based on a user-defined set of aligned motif sequences and a motif width. For retrieval of known motif sequences, users can access commonly used databases such as TFD, RegulonDB, DBTBS and Transfac. The D-MATRIX program uses a simple statistical approach for weight matrix construction; the resulting matrix can be converted into different file formats according to user requirements. It provides the possibility to identify conserved motifs in co-regulated genes or a whole genome. As an example, we successfully constructed the weight matrix of the LexA transcription factor binding site with the help of known sos-box cis-regulatory elements in the Deinococcus radiodurans genome. The algorithm is implemented in C# and wrapped in ASP.NET to maintain a user-friendly web interface. The D-MATRIX tool is accessible through the CIMAP domain network. Availability http://203.190.147.116/dmatrix/ PMID:19759861

  1. D-MATRIX: a web tool for constructing weight matrix of conserved DNA motifs.

    PubMed

    Sen, Naresh; Mishra, Manoj; Khan, Feroz; Meena, Abha; Sharma, Ashok

    2009-07-27

    Despite considerable efforts to date, DNA motif prediction in whole genomes remains a challenge for researchers. Current genome-wide motif prediction tools require either a direct pattern sequence (for a single motif) or a weight matrix (for multiple motifs). Although motif pattern databases and tools for genome-level prediction exist, there is no tool for weight matrix construction. Considering this, we developed the D-MATRIX tool, which predicts different types of weight matrices based on a user-defined set of aligned motif sequences and a motif width. For retrieval of known motif sequences, users can access commonly used databases such as TFD, RegulonDB, DBTBS and Transfac. The D-MATRIX program uses a simple statistical approach for weight matrix construction; the resulting matrix can be converted into different file formats according to user requirements. It provides the possibility to identify conserved motifs in co-regulated genes or a whole genome. As an example, we successfully constructed the weight matrix of the LexA transcription factor binding site with the help of known sos-box cis-regulatory elements in the Deinococcus radiodurans genome. The algorithm is implemented in C# and wrapped in ASP.NET to maintain a user-friendly web interface. The D-MATRIX tool is accessible through the CIMAP domain network. http://203.190.147.116/dmatrix/
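
    Since both records describe the same weight-matrix construction, here is a minimal Python sketch of the underlying computation: per-column base frequencies with a pseudocount, converted to log-odds scores against a uniform background (the input sequences are invented examples, not sos-box data):

        import math

        def weight_matrix(seqs, pseudocount=1.0, background=0.25):
            """Build a log-odds position weight matrix from aligned sequences."""
            width = len(seqs[0])
            assert all(len(s) == width for s in seqs), "sequences must be aligned"
            matrix = []
            for col in range(width):
                counts = {b: pseudocount for b in "ACGT"}
                for s in seqs:
                    counts[s[col]] += 1
                total = sum(counts.values())
                matrix.append({b: math.log2((counts[b] / total) / background)
                               for b in "ACGT"})
            return matrix

        for column in weight_matrix(["TACTGT", "TACAGT", "AACTGT"]):
            print({b: round(v, 2) for b, v in column.items()})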

  2. Generation of PHB from Spent Sulfite Liquor Using Halophilic Microorganisms

    PubMed Central

    Weissgram, Michaela; Gstöttner, Janina; Lorantfy, Bettina; Tenhaken, Raimund; Herwig, Christoph; Weber, Hedda K.

    2015-01-01

    Halophilic microorganisms thrive at elevated sodium chloride concentrations, up to saturation, and are capable of growing on a wide variety of carbon sources, such as various organic acids, hexose, and pentose sugars. The biotechnological application of these microorganisms can therefore cover many areas, such as the treatment of hypersaline waste streams of different origins. Because the high osmotic pressure of hypersaline environments reduces the risk of contamination, the capacity for cost-effective non-sterile cultivation makes extreme halophilic microorganisms potentially valuable for biotechnological applications. In this contribution, a stepwise screening approach, employing design of experiments (DoE) on model media and subsequently using industrial waste as substrate, was implemented to investigate the applicability of halophiles for generating PHB from the industrial waste stream spent sulfite liquor (SSL). PHB production on model media, as well as on dilutions of the industrial substrate in a complex medium, was screened by fluorescence microscopy using Nile Blue staining. The screening investigated the ability of halophilic microorganisms to withstand the inhibiting substances of the waste stream without negatively affecting PHB production. Neither single inhibiting substances nor a mixture thereof inhibited growth in the investigated range, leaving the question of the inhibiting mechanisms open. However, it could be demonstrated that some haloarchaea and halophilic bacteria produce PHB when cultivated on 3.3% w/w dry matter spent sulfite liquor, whereas H. halophila was able to thrive on 6.6% w/w dry matter spent sulfite liquor and still produce PHB. PMID:27682089

  3. A Hybrid Task Graph Scheduler for High Performance Image Processing Workflows.

    PubMed

    Blattner, Timothy; Keyrouz, Walid; Bhattacharyya, Shuvra S; Halem, Milton; Brady, Mary

    2017-12-01

    Designing applications for scalability is key to improving their performance in hybrid and cluster computing. Scheduling code to utilize parallelism is difficult, particularly when dealing with data dependencies, memory management, data motion, and processor occupancy. The Hybrid Task Graph Scheduler (HTGS) is an abstract execution model, framework, and API that increases programmer productivity when implementing hybrid workflows for multi-core and multi-GPU systems. HTGS manages dependencies between tasks, represents CPU and GPU memories independently, overlaps computations with disk I/O and memory transfers, keeps multiple GPUs occupied, and uses all available compute resources. Through these abstractions, data motion and memory are explicit; this makes data locality decisions more accessible. To demonstrate the HTGS application program interface (API), we present implementations of two example algorithms: (1) a matrix multiplication that shows how easily task graphs can be used; and (2) a hybrid implementation of microscopy image stitching that reduces code size by ≈43% compared to a manually coded hybrid workflow implementation and showcases the minimal overhead of task graphs in HTGS. Both HTGS-based implementations show good performance. In image stitching, the HTGS implementation achieves performance similar to the hybrid workflow implementation. Matrix multiplication with HTGS achieves 1.3× and 1.8× speedups over the multi-threaded OpenBLAS library for 16k × 16k and 32k × 32k matrices, respectively.
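
    The execution model (not the HTGS C++ API itself) can be sketched in a few lines of Python: each task owns a thread, consumes from an input queue, and forwards results, so stages overlap naturally:

        import threading, queue

        def task(fn, inq, outq):
            """Run fn on items from inq, forwarding results until a None arrives."""
            while True:
                item = inq.get()
                if item is None:              # poison pill shuts the pipeline down
                    outq.put(None)
                    break
                outq.put(fn(item))

        q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
        stages = [(lambda x: x * 2, q1, q2),  # stage 1: scale
                  (lambda x: x * x, q2, q3)]  # stage 2: square
        threads = [threading.Thread(target=task, args=s) for s in stages]
        for t in threads:
            t.start()
        for x in range(5):
            q1.put(x)
        q1.put(None)
        for t in threads:
            t.join()
        results = []
        while not q3.empty():
            item = q3.get()
            if item is not None:
                results.append(item)
        print(results)                        # [0, 4, 16, 36, 64]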

  4. Full-Scale Cask Testing and Public Acceptance of Spent Nuclear Fuel Shipments - 12254

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dilger, Fred; Halstead, Robert J.; Ballard, James D.

    Full-scale physical testing of spent fuel shipping casks has been proposed by the National Academy of Sciences (NAS) 2006 report on spent nuclear fuel transportation, and by the Presidential Blue Ribbon Commission (BRC) on America's Nuclear Future 2011 draft report. The U.S. Nuclear Regulatory Commission (NRC) in 2005 proposed full-scale testing of a rail cask, and considered 'regulatory limits' testing of both rail and truck casks (SRM SECY-05-0051). The recent U.S. Department of Energy (DOE) cancellation of the Yucca Mountain project, NRC evaluation of extended spent fuel storage (possibly beyond 60-120 years) before transportation, nuclear industry adoption of very large dual-purpose canisters for spent fuel storage and transport, and the deliberations of the BRC, will fundamentally change assumptions about the future spent fuel transportation system, and reopen the debate over shipping cask performance in severe accidents and acts of sabotage. This paper examines possible approaches to full-scale testing for enhancing public confidence in risk analyses, perception of risk, and acceptance of spent fuel shipments. The paper reviews the literature on public perception of spent nuclear fuel and nuclear waste transportation risks. We review and summarize opinion surveys sponsored by the State of Nevada over the past two decades, which show consistent patterns of concern among Nevada residents about health and safety impacts, and socioeconomic impacts such as reduced property values along likely transportation routes. We also review and summarize the large body of public opinion survey research on transportation concerns at regional and national levels. The paper reviews three past cask testing programs, the way in which these cask testing program results were portrayed in films and videos, and examines public and official responses to these three programs: the 1970's impact and fire testing of spent fuel truck casks at Sandia National Laboratories, the 1980's regulatory and demonstration testing of MAGNOX fuel flasks in the United Kingdom (the CEGB 'Operation Smash Hit' tests), and the 1980's regulatory drop and fire tests conducted on the TRUPACT II containers used for transuranic waste shipments to the Waste Isolation Pilot Plant in New Mexico. The primary focus of the paper is a detailed evaluation of the cask testing programs proposed by the NRC in its decision implementing staff recommendations based on the Package Performance Study, and by the State of Nevada recommendations based on previous work by Audin, Resnikoff, Dilger, Halstead, and Greiner. The NRC approach is based on demonstration impact testing (locomotive strike) of a large rail cask, either the TAD cask proposed by DOE for spent fuel shipments to Yucca Mountain, or a similar currently licensed dual-purpose cask. The NRC program might also be expanded to include fire testing of a legal-weight truck cask. The Nevada approach calls for a minimum of two tests: regulatory testing (impact, fire, puncture, immersion) of a rail cask, and extra-regulatory fire testing of a legal-weight truck cask, based on the cask performance modeling work by Greiner. The paper concludes with a discussion of key procedural elements - test costs and funding sources, development of testing protocols, selection of testing facilities, and test peer review - and various methods of communicating the test results to a broad range of stakeholder audiences. (authors)

  5. A web-based platform to support an evidence-based mental health intervention: lessons from the CBITS web site.

    PubMed

    Vona, Pamela; Wilmoth, Pete; Jaycox, Lisa H; McMillen, Janey S; Kataoka, Sheryl H; Wong, Marleen; DeRosier, Melissa E; Langley, Audra K; Kaufman, Joshua; Tang, Lingqi; Stein, Bradley D

    2014-11-01

    To explore the role of Web-based platforms in behavioral health, the study examined usage of a Web site for supporting training and implementation of an evidence-based intervention. Using data from an online registration survey and Google Analytics, the investigators examined user characteristics and Web site utilization. Site engagement was substantial across user groups. Visit duration differed by registrants' characteristics. Less experienced clinicians spent more time on the Web site. The training section accounted for most page views across user groups. Individuals previously trained in the Cognitive-Behavioral Intervention for Trauma in Schools intervention viewed more implementation assistance and online community pages than did other user groups. Web-based platforms have the potential to support training and implementation of evidence-based interventions for clinicians of varying levels of experience and may facilitate more rapid dissemination. Web-based platforms may be promising for trauma-related interventions, because training and implementation support should be readily available after a traumatic event.

  6. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in safely and efficiently shipping spent nuclear fuel and other radioactive materials. Additional business processes may be examined in this phase. The findings of these benchmarking efforts will help determine the organizational structure and requirements of the national transportation system. (authors)

  7. Computer assisted generation of the matrix elements between contracted wavefunctions in a Complete Active Space scheme

    NASA Astrophysics Data System (ADS)

    Angeli, C.; Cimiraglia, R.

    2005-02-01

    Starting from a CAS-SCF calculation, a sequence of contracted functions can be generated by applying strings of spin-traced replacement operators to the CAS-SCF solution. The laborious task of producing the Hamiltonian matrix elements between such functions can be substantially reduced by making use of a computer algebra system. An implementation employing the MuPAD system is presented and illustrated.

  8. High Strain Rate Deformation Modeling of a Polymer Matrix Composite. Part 1; Matrix Constitutive Equations

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Stouffer, Donald C.

    1998-01-01

    Recent applications have exposed polymer matrix composite materials to very high strain rate loading conditions, requiring an ability to understand and predict the material behavior under these extreme conditions. In this first paper of a two-part report, background information is presented, along with the constitutive equations that will be used to model the rate-dependent nonlinear deformation response of the polymer matrix. Strain rate dependent inelastic constitutive models originally developed to model the viscoplastic deformation of metals have been adapted to model the nonlinear viscoelastic deformation of polymers. The modified equations were correlated by analyzing the tensile/compressive response of both a 977-2 toughened epoxy matrix and a PEEK thermoplastic matrix over a variety of strain rates. For the cases examined, the modified constitutive equations do an adequate job of modeling the polymer deformation response. A second, follow-up paper will describe the implementation of the polymer deformation model into a composite micromechanical model, allowing the nonlinear, rate-dependent deformation response of polymer matrix composites to be modeled.
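
    A minimal uniaxial sketch in the spirit of the state-variable models described above (an inelastic strain rate driven by the ratio of an internal stress variable Z to the applied stress, with Z evolving toward a saturation value; these are not the report's exact equations, and all constants are illustrative placeholders, not fitted values):

        import math

        D0, n, Z0, Z1, q, E = 1.0e4, 0.8, 600.0, 2000.0, 300.0, 3.5e3  # MPa units

        def simulate(strain_rate, t_end, dt=1.0e-5):
            """Explicit integration of a uniaxial viscoplastic state-variable model."""
            eps, eps_in, Z = 0.0, 0.0, Z0
            t, sigma = 0.0, 0.0
            while t < t_end:
                sigma = E * (eps - eps_in)                 # elastic stress
                if sigma > 0.0:
                    edot_in = D0 * math.exp(-0.5 * (Z / sigma) ** (2 * n))
                else:
                    edot_in = 0.0
                Z += q * (Z1 - Z) * edot_in * dt           # hardening evolution
                eps_in += edot_in * dt
                eps += strain_rate * dt
                t += dt
            return eps, sigma

        # Stress at 5% strain for two strain rates: rate dependence appears directly
        for rate in (1.0e-2, 1.0):
            print(rate, simulate(strain_rate=rate, t_end=0.05 / rate))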

  9. Detection of nanomaterials in food and consumer products: bridging the gap from legislation to enforcement.

    PubMed

    Stamm, H; Gibson, N; Anklam, E

    2012-08-01

    This paper describes the requirements and resulting challenges for the implementation of current and upcoming European Union legislation referring to the use of nanomaterials in food, cosmetics and other consumer products. The European Commission has recently adopted a recommendation for the definition of nanomaterials. There is now an urgent need for appropriate and fit-for-purpose analytical methods in order to identify nanomaterials properly according to this definition and to assess whether or not a product contains nanomaterials. Considering the lack of such methods to date, this paper elaborates on the challenges of the legislative framework and the type of methods needed, not only to facilitate implementation of labelling requirements, but also to ensure the safety of products coming to the market. Considering the many challenges in the analytical process itself, such as interaction of nanoparticles with matrix constituents, potential agglomeration and aggregation due to matrix environment, broad variety of matrices, etc., there is a need for integrated analytical approaches, not only for sample preparation (e.g. separation from matrix), but also for the actual characterisation. Furthermore, there is an urgent need for quality assurance tools such as validated methods and (certified) reference materials, including materials containing nanoparticles in a realistic matrix (food products, cosmetics, etc.).

  10. Implementation of an Associative Flow Rule Including Hydrostatic Stress Effects Into the High Strain Rate Deformation Analysis of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.; Roberts, Gary D.; Gilat, Amos

    2003-01-01

    A previously developed analytical formulation has been modified in order to more accurately account for the effects of hydrostatic stresses on the nonlinear, strain rate dependent deformation of polymer matrix composites. State variable constitutive equations originally developed for metals have been modified in order to model the nonlinear, strain rate dependent deformation of polymeric materials. To account for the effects of hydrostatic stresses, which are significant in polymers, the classical J2 plasticity theory definitions of effective stress and effective inelastic strain, along with the equations used to compute the components of the inelastic strain rate tensor, are appropriately modified. To verify the revised formulation, the shear and tensile deformation of two representative polymers are computed across a wide range of strain rates. Results computed using the developed constitutive equations correlate well with experimental data. The polymer constitutive equations are implemented within a strength of materials based micromechanics method to predict the nonlinear, strain rate dependent deformation of polymer matrix composites. The composite mechanics are verified by analyzing the deformation of a representative polymer matrix composite for several fiber orientation angles across a variety of strain rates. The computed values compare well to experimentally obtained results.

  11. Approximate methods in gamma-ray skyshine calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faw, R.E.; Roseberry, M.L.; Shultis, J.K.

    1985-11-01

    Gamma-ray skyshine, an important component of the radiation field in the environment of a nuclear power plant, has recently been studied in relation to storage of spent fuel and nuclear waste. This paper reviews benchmark skyshine experiments and transport calculations against which computational procedures may be tested. The paper also addresses the applicability of simplified computational methods involving single-scattering approximations. One such method, suitable for microcomputer implementation, is described and results are compared with other work.
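
    A single-scattering skyshine method ultimately rests on point-kernel building blocks like the following sketch (source strength, attenuation coefficient, and the linear buildup model are illustrative placeholders, not values from the benchmark studies):

        import math

        def point_kernel_flux(S, mu, r, buildup_a=1.0):
            """Uncollided point-source flux times a simple linear buildup factor."""
            B = 1.0 + buildup_a * mu * r        # stand-in for scattered photons
            return S * B * math.exp(-mu * r) / (4.0 * math.pi * r * r)

        # e.g. S = 1e10 photons/s, mu = 8e-5 /cm (air, ~1 MeV), r = 100 m = 1e4 cm
        print(point_kernel_flux(S=1.0e10, mu=8.0e-5, r=1.0e4), "photons/cm^2/s")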

  12. Radar Data Processing Using a Distributed Computational System

    DTIC Science & Technology

    1992-06-01

    objects to processors must reduce Toc(N) (i.e., the time to compute on N nodes) [Ref. 28]. Time spent communicating can represent a degradation of ... de Sistemas e Computação, s/data. [9] Vilhena R. "Introdução aos Algoritmos para Processamento de Marcações e Distâncias", Escola Naval - Notas de Aula - Automação de Sistemas Navais, s/data. [10] Averbuch A., Itzikowitz S., and Kapon T. "Parallel Implementation of Multiple Model Tracking

  13. The Innovation Process and Command Consultation in the United States Army

    DTIC Science & Technology

    1980-06-01

    change may later "develop" a negative orientation to an innovation, and therefore be unwilling to implement it as a consequence of the barriers and ... respondents had diametrically opposite notions regarding the extent to which they believed the consultation movement had gained credibility at the higher ... reached the rank of Lt. Colonel. Eleven respondents were colonels. Normally, the acquisition of higher levels of rank is a function of time spent in

  14. Stochastic determination of matrix determinants

    NASA Astrophysics Data System (ADS)

    Dorn, Sebastian; Enßlin, Torsten A.

    2015-07-01

    Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations—matrices—acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.

  15. Stochastic determination of matrix determinants.

    PubMed

    Dorn, Sebastian; Ensslin, Torsten A

    2015-07-01

    Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, linear operations-matrices-acting on the data are often not accessible directly but are only represented indirectly in form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, there is no stochastic estimate for its determinant. We introduce a probing method for the logarithm of a determinant of a linear operator. Our method rests upon a reformulation of the log-determinant by an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
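
    A minimal numerical sketch of the probing idea common to both records: log det(A) = tr(log A), with the trace estimated by averaging z^T log(A) z over random ±1 vectors z. For clarity the sketch forms log(A) explicitly via an eigendecomposition; in the large-scale setting the paper targets, log(A)z would instead be applied implicitly (e.g., through the integral representation), never forming the matrix:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100
        B = rng.standard_normal((n, n))
        A = B @ B.T + n * np.eye(n)          # symmetric positive definite test matrix

        w, V = np.linalg.eigh(A)
        logA = (V * np.log(w)) @ V.T         # log(A), formed explicitly for the demo

        probes = rng.choice([-1.0, 1.0], size=(200, n))
        estimate = np.mean([z @ logA @ z for z in probes])

        print("probing estimate:", round(float(estimate), 2))
        print("exact log-det:   ", round(float(np.linalg.slogdet(A)[1]), 2))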

  16. 324 Building B-Cell Pressurized Water Reactor Spent Fuel Packaging & Shipment RL Readiness Assessment Final Report [SEC 1 Thru 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HUMPHREYS, D C

    A parallel readiness assessment (RA) was conducted by independent Fluor Hanford (FH) and U.S. Department of Energy, Richland Operations Office (RL) teams to verify that an adequate state of readiness had been achieved for activities associated with the packaging and shipping of pressurized water reactor fuel assemblies from B-Cell in the 324 Building to the interim storage area at the Canister Storage Building in the 200 Area. The RL review was conducted in parallel with the FH review in accordance with the Joint RL/FH Implementation Plan (Appendix B). The RL RA Team members were assigned a FH RA Team counterpart for the review. With this one-on-one approach, the RL RA Team was able to assess the FH Team's performance, competence, and adherence to the implementation plan and evaluate the level of facility readiness. The RL RA Team agrees with the FH determination that startup of the 324 Building B-Cell pressurized water reactor spent nuclear fuel packaging and shipping operations can safely proceed, pending completion of the identified pre-start items in the FH final report (see Appendix A), completion of the manageable list of open items included in the facility's declaration of readiness, and execution of the startup plan to operations.

  17. Effects of Resident Work Hour Limitations on Faculty Professional Lives

    PubMed Central

    Shanafelt, Tait D.; Nathens, Avery B.; Curtis, J. Randall

    2008-01-01

    Background The Accreditation Council for Graduate Medical Education resident work hour limitations were implemented in July, 2003. Effects on faculty are not well understood. Objective The objective of this study was to determine the effects of the resident work hour limitations on the professional lives of faculty physicians. Design and Participants Survey of faculty physicians at three teaching hospitals associated with university-based internal medicine and surgery residency programs in Seattle, Washington. Physicians who attended on Internal Medicine and Surgery in-patient services during the 10 mo after implementation of work hour limitations were eligible for participation (N = 366); 282 physicians (77%) returned surveys. Measurements Participants were asked about the effects of resident work hour limitations on aspects of their professional lives, including clinical work, research, teaching, and professional satisfaction. Results Most attending physicians reported that, because of work hour limitations, they spent more time on clinical work (52%), felt more responsibility for supervising patient care (65%), and spent less time on research or other academic pursuits (51%) and teaching residents (72%). Reported changes in work content were independently associated with the self-reported probability of leaving academic medicine in the next 3 y. Conclusions Resident work hour limitations have had large effects on the professional lives of faculty. These findings may have important implications for recruiting and retaining faculty at academic medical centers. PMID:18612748

  18. Development and early application of the Scottish Community Nursing Workload Measurement Tool.

    PubMed

    Grafen, May; Mackenzie, Fiona C

    2015-02-01

    This article describes the development and early application of the Scottish Community Nursing Workload Measurement Tool, part of a suite of tools aiming to ensure a consistent approach to measuring nursing workload across NHS Scotland. The tool, which enables community nurses to record and report their actual workload by collecting information on six categories of activity, is now being used by all NHS boards as part of a triangulated approach. Data being generated by the tool at national level include indications that approximately 50% of band 6 district nurses' time is spent in face-to-face and non-face-to-face contact and planned sessions with patients, and that over 60% of face-to-face contacts are at 'moderate' and 'complex' levels of intervention (2012 data). These data are providing hard evidence of key elements of community nursing activity and practice that will enable informed decisions about workforce planning to be taken forward locally and nationally. The article features an account of the early impact of the tool's implementation in an NHS board by an associate director of nursing. Positive effects from implementation include the generation of reliable data to inform planning decisions, identification of issues around nursing time spent on administrative tasks, clarification of school nursing roles, and information being fed back to teams on various aspects of performance.

  19. Electronic coupling matrix elements from charge constrained density functional theory calculations using a plane wave basis set

    NASA Astrophysics Data System (ADS)

    Oberhofer, Harald; Blumberger, Jochen

    2010-12-01

    We present a plane wave basis set implementation for the calculation of electronic coupling matrix elements of electron transfer reactions within the framework of constrained density functional theory (CDFT). Following the work of Wu and Van Voorhis [J. Chem. Phys. 125, 164105 (2006)], the diabatic wavefunctions are approximated by the Kohn-Sham determinants obtained from CDFT calculations, and the coupling matrix element calculated by an efficient integration scheme. Our results for intermolecular electron transfer in small systems agree very well with high-level ab initio calculations based on generalized Mulliken-Hush theory, and with previous local basis set CDFT calculations. The effect of thermal fluctuations on the coupling matrix element is demonstrated for intramolecular electron transfer in the tetrathiafulvalene-diquinone (Q-TTF-Q-) anion. Sampling the electronic coupling along density functional based molecular dynamics trajectories, we find that thermal fluctuations, in particular the slow bending motion of the molecule, can lead to changes in the instantaneous electron transfer rate by more than an order of magnitude. The thermal average, <|H_ab|^2>^(1/2) = 6.7 mH, is significantly higher than the value obtained for the minimum energy structure, |H_ab| = 3.8 mH. While CDFT in combination with generalized gradient approximation (GGA) functionals describes the intermolecular electron transfer in the studied systems well, exact exchange is required for Q-TTF-Q- in order to obtain coupling matrix elements in agreement with experiment (3.9 mH). The implementation presented opens up the possibility to compute electronic coupling matrix elements for extended systems where donor, acceptor, and the environment are treated at the quantum mechanical (QM) level.
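
    For comparison, the two-state generalized Mulliken-Hush estimate mentioned above reduces to a one-line formula relating the coupling to the adiabatic energy gap, the transition dipole, and the adiabatic dipole difference along the charge-transfer axis (the numbers below are illustrative, not values from the paper):

        import math

        def gmh_coupling(dE, mu_tr, dmu):
            """Two-state GMH coupling: H_ab = |mu_tr| dE / sqrt(dmu^2 + 4 mu_tr^2)."""
            return abs(mu_tr) * dE / math.sqrt(dmu ** 2 + 4.0 * mu_tr ** 2)

        # e.g. gap 0.02 Hartree, transition dipole 0.5 a.u., dipole difference 6.0 a.u.
        print(gmh_coupling(dE=0.02, mu_tr=0.5, dmu=6.0), "Hartree")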

  20. Efficient geostatistical inversion of transient groundwater flow using preconditioned nonlinear conjugate gradients

    NASA Astrophysics Data System (ADS)

    Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf

    2017-04-01

    In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10^4-10^7) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.
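
    The preconditioning trick can be illustrated with a linear stand-in for the groundwater solver: for the objective J(s) = (1/2)(s - s0)^T Q^{-1} (s - s0) + (1/2) r^T R^{-1} r with r = d - Hs, the preconditioned direction -Q grad J = -(s - s0) + Q H^T R^{-1} r involves only multiplications with Q, never its inverse (a dense toy problem, assuming a linear forward model H in place of the flow solver):

        import numpy as np

        rng = np.random.default_rng(2)
        n, m = 50, 10
        x = np.linspace(0.0, 1.0, n)
        Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # prior covariance
        H = rng.standard_normal((m, n)) / np.sqrt(n)         # stand-in forward model
        R_inv = np.eye(m)                                    # unit data-error covariance
        s_true = np.linalg.cholesky(Q + 1e-8 * np.eye(n)) @ rng.standard_normal(n)
        d = H @ s_true + 0.1 * rng.standard_normal(m)

        s0 = np.zeros(n)
        s = s0.copy()
        # Stable step from the spectral norm of the preconditioned Hessian I + Q H^T R^-1 H
        step = 1.0 / np.linalg.norm(np.eye(n) + Q @ H.T @ R_inv @ H, 2)
        for _ in range(500):                                 # preconditioned descent
            direction = -(s - s0) + Q @ (H.T @ (R_inv @ (d - H @ s)))
            s = s + step * direction

        print("relative data misfit:", np.linalg.norm(d - H @ s) / np.linalg.norm(d))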

  1. Determining the feasibility of robotic courier medication delivery in a hospital setting.

    PubMed

    Kirschling, Thomas E; Rough, Steve S; Ludwig, Brad C

    2009-10-01

    The feasibility of a robotic courier medication delivery system in a hospital setting was evaluated. Robotic couriers are self-guiding, self-propelling robots that navigate hallways and elevators to pull an attached or integrated cart to a desired destination. A robotic courier medication delivery system was pilot tested in two patient care units at a 471-bed tertiary care academic medical center. Average transit time for the existing manual medication delivery system's hourly hospitalwide deliveries was 32.6 minutes. Of this, 32.3% was spent at the patient care unit and 67.7% was spent pushing the cart or waiting at an elevator. The robotic courier medication delivery system traveled as fast as 1.65 ft/sec (52% of the manual system's speed) in the absence of barriers but moved at an average rate of 0.84 ft/sec (26% of the manual system's speed) during the study, primarily due to hallway obstacles. The robotic courier was utilized for 50% of the possible 1750 runs during the 125-day pilot due to technical or situational difficulties. Of the runs that were sent, a total of 79 runs failed, yielding an overall 91% success rate. During the final month of the pilot, the success rate reached 95.6%. Customer satisfaction with the traditional manual delivery system was high. Customer satisfaction with deliveries declined after implementation of the robotic courier medication distribution system. A robotic courier medication delivery system was implemented but was not expanded beyond the two pilot units. Challenges of implementation included ongoing education on how to properly move the robotic courier and keeping the hallway clear of obstacles.

  2. Reduced 30-day gastrostomy placement mortality following the introduction of a multidisciplinary nutrition support team: a cohort study.

    PubMed

    Hvas, C L; Farrer, K; Blackett, B; Lloyd, H; Paine, P; Lal, S

    2018-06-01

    Percutaneous endoscopic gastrostomy feeding allows patients with dysphagia to receive adequate nutritional support, although gastrostomy insertion is associated with mortality. A nutrition support team (NST) may improve a gastrostomy service. The present study aimed to evaluate the introduction of a NST for assessment and follow-up of patients referred for gastrostomy. We included adult inpatients referred for gastrostomy insertion consecutively between 1 October 2010 and 31 March 2013. During the first 6 months, a multidisciplinary NST assessment service was implemented. Patient characteristics, clinical condition, referral appropriateness and follow-up were documented prospectively. We compared the frequencies of appropriate referrals, 30-day mortality and mental capacity/consent assessment time spent between the 6 months implementation phase and 2 years following establishment of the assessment service ('established phase'). In total, 309 patients were referred for gastrostomy insertion and 199 (64%) gastrostomies placed. The percentage of appropriate referrals rose from 72% (61/85) during the implementation phase to 87% (194/224) during the established phase (P = 0.002). Thirty-day mortality reduced from 10% (5/52) to 2% (3/147) (P = 0.01), whereas time allocated to assessment of mental capacity and attainment of informed consent rose from mean 3 days (limits of normal variation 0-7) to mean 6 (0-13) days. The introduction of a NST to assess and select patients referred for gastrostomy placement was associated with a rise in the frequency of appropriate referrals and a decrease in 30-day mortality following gastrostomy insertion. Concomitantly, time spent on patient assessment and attainment of informed consent increased. © 2017 The British Dietetic Association Ltd.

  3. What goal is of most worth? The effects of the implementation of the Texas Assessment of Knowledge and Skills on elementary science teaching

    NASA Astrophysics Data System (ADS)

    Rodgers, Pamela England

    This qualitative, narrative study centered on the effects of the implementation of the science portion of the fifth grade Texas Assessment of Knowledge and Skills (TAKS) on the instruction of science at the elementary level, grades one through five. Fourteen teachers and five administrators were interviewed at two elementary schools (kindergarten through grade four) and one middle school (grades five and six). Classroom observations of each of the teachers were also conducted. The study focused on the effect of the implementation of the science TAKS on the amount of time spent on science as well as the instructional methods utilized in the elementary science classroom. Lower grade levels were found to have changed little in these areas unless strong administrative leadership---emphasizing curriculum alignment, providing adequate materials and facilities, and encouraging sustained, content-based professional development in science---was present in the school. At the fifth grade level, however, the amount of time spent on science had increased significantly, although the instructional methods utilized by the teachers were focused more often upon increasing ratings on the test rather than providing the research-based best practice methods of hands-on, inquiry-based science instruction. In addition, the study also explored the teachers' and administrators' perceptions of the state and local mandates concerning science instruction and preparation for the TAKS. Other topics that came to light during the course of the study included the teachers' views on accountability and the effects of the state assessments on children in their classrooms. It was found that most teachers readily accept accountability for themselves, but are opposed to one-shot high-stakes tests which they feel are damaging for their students emotionally and academically---adversely affecting their love of learning science.

  4. The effectiveness of state-level tobacco control interventions: a review of program implementation and behavioral outcomes.

    PubMed

    Siegel, Michael

    2002-01-01

    In 2001, nearly one billion dollars will be spent on statewide tobacco control programs, including those in California, Massachusetts, Arizona, and Oregon, funded by cigarette tax revenues, and the program in Florida, funded by the state's settlement with the tobacco industry. With such large expenditures, it is imperative to find out whether these programs are working. This paper reviews the effectiveness of the statewide tobacco control programs in California, Massachusetts, Arizona, Oregon, and Florida. It focuses on two aspects of process evaluation--the funding and implementation of the programs and the tobacco industry's response, and four elements of outcome evaluation--the programs' effects on cigarette consumption, adult and youth smoking prevalence, and protection of the public from secondhand smoke. The paper formulates general lessons learned from these existing programs and generates recommendations to improve and inform the development and implementation of these and future programs.

  5. An implicit numerical scheme for the simulation of internal viscous flows on unstructured grids

    NASA Technical Reports Server (NTRS)

    Jorgenson, Philip C. E.; Pletcher, Richard H.

    1994-01-01

    The Navier-Stokes equations are solved numerically for two-dimensional steady viscous laminar flows. The grids are generated based on the method of Delaunay triangulation. A finite-volume approach is used to discretize the conservation law form of the compressible flow equations written in terms of primitive variables. A preconditioning matrix is added to the equations so that low Mach number flows can be solved economically. The equations are time marched using either an implicit Gauss-Seidel iterative procedure or a solver based on a conjugate-gradient-like method. A four-color scheme is employed to vectorize the block Gauss-Seidel relaxation procedure. This increases the memory requirements minimally and decreases the computer time spent solving the resulting system of equations substantially. A factor of 7.6 speedup in the matrix solver is typical for the viscous equations. Numerical results are obtained for inviscid flow over a bump in a channel at subsonic and transonic conditions for validation with structured solvers. Viscous results are computed for developing flow in a channel, a symmetric sudden expansion, periodic tandem cylinders in a cross-flow, and a four-port valve. Comparisons are made with available results obtained by other investigators.
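
    The coloring idea can be shown in its simplest form: on a structured 5-point Laplacian, two colors already decouple the updates within each sweep (the paper needs four colors for its unstructured stencil), so all points of one color can be updated in parallel or vectorized. A minimal sketch, solving the Poisson problem ∇²u = f with zero boundary values:

        import numpy as np

        def red_black_gauss_seidel(u, f, h, sweeps=100):
            """Colored Gauss-Seidel: points of one color depend only on the other."""
            for _ in range(sweeps):
                for color in (0, 1):
                    for i in range(1, u.shape[0] - 1):
                        for j in range(1, u.shape[1] - 1):
                            if (i + j) % 2 == color:
                                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                                  u[i, j - 1] + u[i, j + 1] -
                                                  h * h * f[i, j])
            return u

        n = 17
        u = np.zeros((n, n))                    # boundary held at zero
        f = np.ones((n, n))                     # constant source term
        u = red_black_gauss_seidel(u, f, h=1.0 / (n - 1))
        print(round(u[n // 2, n // 2], 6))      # center value after relaxation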

  6. A Pragmatic Approach to Guide Implementation Evaluation Research: Strategy Mapping for Complex Interventions.

    PubMed

    Huynh, Alexis K; Hamilton, Alison B; Farmer, Melissa M; Bean-Mayberry, Bevanne; Stirman, Shannon Wiltsey; Moin, Tannaz; Finley, Erin P

    2018-01-01

    Greater specification of implementation strategies is a challenge for implementation science, but there is little guidance for delineating the use of multiple strategies involved in complex interventions. The Cardiovascular (CV) Toolkit project entails implementation of a toolkit designed to reduce CV risk by increasing women's engagement in appropriate services. The CV Toolkit project follows an enhanced version of Replicating Effective Programs (REP), an evidence-based implementation strategy, to implement the CV Toolkit across four phases: pre-conditions, pre-implementation, implementation, and maintenance and evolution. Our current objective is to describe a method for mapping implementation strategies used in real time as part of the CV Toolkit project. This method supports description of the timing and content of bundled strategies and provides a structured process for developing a plan for implementation evaluation. We conducted a process of strategy mapping to apply Proctor and colleagues' rubric for specification of implementation strategies, constructing a matrix in which we identified each implementation strategy, its conceptual group, and the corresponding REP phase(s) in which it occurs. For each strategy, we also specified the actors involved, actions undertaken, action targets, dose of the implementation strategy, and anticipated outcome addressed. We iteratively refined the matrix with the implementation team, including use of simulation to provide initial validation. Mapping revealed patterns in the timing of implementation strategies within REP phases. Most implementation strategies involving the development of stakeholder interrelationships and training and educating stakeholders were introduced during the pre-conditions or pre-implementation phases. Strategies introduced in the maintenance and evolution phase emphasized communication, re-examination, and audit and feedback. In addition to its value for producing valid and reliable process evaluation data, mapping implementation strategies has informed development of a pragmatic blueprint for implementation and longitudinal analyses and evaluation activities. We update recent recommendations on specification of implementation strategies by considering the implications for multi-strategy frameworks and propose an approach for mapping the use of implementation strategies within complex, multi-level interventions, in support of rigorous evaluation. Developing pragmatic tools to aid in operationalizing the conduct of implementation and evaluation activities is essential to enacting sound implementation research.
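
    The matrix itself is essentially tabular data; a minimal sketch of one row per strategy with Proctor-style specification fields follows (the rows are invented placeholders, not the CV Toolkit project's actual matrix):

        import csv, io

        FIELDS = ["strategy", "conceptual_group", "rep_phase", "actors",
                  "actions", "action_target", "dose", "expected_outcome"]

        rows = [
            {"strategy": "conduct educational outreach visits",
             "conceptual_group": "train and educate stakeholders",
             "rep_phase": "pre-implementation",
             "actors": "implementation team",
             "actions": "site visit and toolkit walkthrough",
             "action_target": "primary care staff",
             "dose": "1 visit per site",
             "expected_outcome": "adoption"},
            {"strategy": "audit and provide feedback",
             "conceptual_group": "use evaluative and iterative strategies",
             "rep_phase": "maintenance and evolution",
             "actors": "site champions",
             "actions": "quarterly review of engagement rates",
             "action_target": "clinic workflow",
             "dose": "quarterly",
             "expected_outcome": "sustainment"},
        ]

        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
        print(buf.getvalue())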

  7. The School Counseling Program Implementation Survey: Initial Instrument Development and Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Clemens, Elysia V.; Carey, John C.; Harrington, Karen M.

    2010-01-01

    This article details the initial development of the School Counseling Program Implementation Survey and psychometric results including reliability and factor structure. An exploratory factor analysis revealed a three-factor model that accounted for 54% of the variance of the intercorrelation matrix and a two-factor model that accounted for 47% of…

  8. Thermoplastic Joining and Assembly of Bulk Metallic Glass Composites Through Capacitive Discharge

    NASA Technical Reports Server (NTRS)

    Roberts, Scott N. (Inventor); Schramm, Joseph P. (Inventor); Hofmann, Douglas C. (Inventor); Johnson, William L. (Inventor); Kozachkov, Henry (Inventor); Demetriou, Marios D. (Inventor)

    2015-01-01

    Systems and methods for joining BMG Composites are disclosed. Specifically, the joining of BMG Composites is implemented so as to preserve the amorphicity of their matrix phase and the microstructure of their particulate phase. Implementation of the joining method with respect to the construction of modular cellular structures that comprise BMG Composites is also discussed.

  9. A Technology-Mediated Approach to the Implementation of an Evidence-Based Child Maltreatment Prevention Program.

    PubMed

    Self-Brown, Shannon R; C Osborne, Melissa; Rostad, Whitney; Feil, Ed

    2017-11-01

    Implementation of evidence-based parenting programs is critical for parents at-risk for child maltreatment perpetration; however, widespread use of effective programs is limited in both child welfare and prevention settings. This exploratory study sought to examine whether a technology-mediated approach to SafeCare® delivery can feasibly assist newly trained providers in achieving successful implementation outcomes. Thirty-one providers working in child welfare or high-risk prevention settings were randomized to either SafeCare Implementation with Technology-Assistance (SC-TA) or SafeCare Implementation as Usual (SC-IU). SC-TA providers used a web-based program during session that provided video-based psychoeducation and modeling directly to parents and overall session guidance to providers. Implementation outcome data were collected from providers for six months. Data strongly supported the feasibility of SC-TA. Further, data indicated that SC-TA providers spent significantly less time on several activities in preparation, during, and in follow-up to SafeCare sessions compared to SC-IU providers. No differences were found between the groups with regard to SafeCare fidelity and certification status. Findings suggest that technology can augment implementation by reducing the time and training burden associated with implementing new evidence-based practices for at-risk families.

  10. Implementation of thermal residual stresses in the analysis of fiber bridged matrix crack growth in titanium matrix composites

    NASA Technical Reports Server (NTRS)

    Bakuckas, John G., Jr.; Johnson, W. Steven

    1994-01-01

    In this research, thermal residual stresses were incorporated in an analysis of fiber-bridged matrix cracks in unidirectional and cross-ply titanium matrix composites (TMC) containing center holes or center notches. Two TMC were investigated, namely, SCS-6/Ti-15-3 and SCS-6/TIMETAL 21S laminates. Experimentally, matrix crack initiation and growth were monitored during tension-tension fatigue tests conducted at room temperature and at an elevated temperature of 200 C. Analytically, thermal residual stresses were included in a fiber bridging (FB) model. The local R-ratio and stress-intensity factor in the matrix due to thermal and mechanical loadings were calculated and used to evaluate the matrix crack growth behavior in the two materials studied. The frictional shear stress term, tau, assumed in this model was used as a curve-fitting parameter for the matrix crack growth data. The scatter band in the values of tau used to fit the matrix crack growth data was significantly reduced when thermal residual stresses were included in the fiber bridging analysis. For a given material system, lay-up, and temperature, a single value of tau was sufficient to analyze the crack growth data. This study revealed that thermal residual stresses are an important factor overlooked in the original FB models.

  11. Application of the R-matrix method to photoionization of molecules.

    PubMed

    Tashiro, Motomichi

    2010-04-07

    The R-matrix method has been used for theoretical calculations of electron collisions with atoms and molecules for many years. The method was also formulated to treat the photoionization process; however, its application has been mostly limited to photoionization of atoms. In this work, we implement the R-matrix method to treat the molecular photoionization problem, based on the UK R-matrix codes. This method can be used for diatomic as well as polyatomic molecules, with a multiconfigurational description of the electronic states of both the target neutral molecule and the product molecular ion. Test calculations were performed for valence electron photoionization of nitrogen (N2) as well as nitric oxide (NO) molecules. Calculated photoionization cross sections and asymmetry parameters agree reasonably well with the available experimental results, suggesting the usefulness of the method for molecular photoionization.

  12. Graph theory approach to the eigenvalue problem of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.; Bainum, P. M.

    1981-01-01

    Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimension. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. A graphic interpretation of the determinant of a matrix is employed to reduce a higher-dimensional matrix into combinations of smaller-dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller-dimensional equivalents of the original numerical matrix. Computation time is reduced, and more accurate solutions are possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.

  13. Implementation of Fiber Substructuring Into Strain Rate Dependent Micromechanics Analysis of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2001-01-01

    A research program is in progress to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to impact loads. Previously, strain rate dependent inelastic constitutive equations developed to model the polymer matrix were incorporated into a mechanics of materials based micromechanics method. In the current work, the micromechanics method is revised such that the composite unit cell is divided into a number of slices. Micromechanics equations are then developed for each slice, with laminate theory applied to determine the elastic properties, effective stresses and effective inelastic strains for the unit cell. Verification studies are conducted using two representative polymer matrix composites with a nonlinear, strain rate dependent deformation response. The computed results compare well to experimentally obtained values.

  14. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction from hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, one of the most important steps in OMNF. We then optimized the processing flow and computed the noise covariance matrix before the image covariance matrix to reduce transmission of the original hyperspectral image data. These optimization strategies greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup over the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the requirements of on-board real-time feature extraction.
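
    The computation being accelerated can be sketched compactly: estimate the noise covariance from neighbor differences (the noise-estimation step discussed above), then solve the generalized eigenproblem that orders components by signal-to-noise. A small NumPy stand-in, with a random cube in place of a real hyperspectral image:

        import numpy as np

        rng = np.random.default_rng(1)
        rows, cols, bands = 64, 64, 20
        cube = rng.standard_normal((rows, cols, bands)).cumsum(axis=2)  # correlated bands

        X = cube.reshape(-1, bands)
        X = X - X.mean(axis=0)

        # Noise covariance estimated from horizontal neighbor differences
        noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2.0)
        Cn = noise.T @ noise / noise.shape[0]
        Cx = X.T @ X / X.shape[0]

        # Solve Cx v = lambda Cn v by whitening the noise, then diagonalizing
        w, V = np.linalg.eigh(Cn)
        W = V / np.sqrt(w)                       # noise-whitening transform
        evals, U = np.linalg.eigh(W.T @ Cx @ W)
        mnf = X @ (W @ U)[:, ::-1]               # components, highest SNR first

        print(mnf.shape, "leading-component SNR proxy:", round(float(evals[-1]), 2))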

  15. Longitudinal analysis of time, engagement, and achievement in at-risk versus non-risk students.

    PubMed

    Greenwood, C R

    1991-05-01

    This longitudinal study investigated the effects of time spent in academic instruction and time engaged on elementary students' academic achievement gains. Three groups were compared over grades as follows: (a) an at-risk experimental group of low-socioeconomic status (SES) students for whom teachers implemented classwide peer tutoring (CWPT) beginning with the second semester of first grade continuing through Grade 3; (b) an equivalent at-risk control group; and (c) a non-risk comparison group of students of average- to high-SES. In both the control and comparison groups, teachers employed conventional instructional practices over Grades 1 through 3. Results indicated significant group differences in the time spent in academic instruction, engagement, and gains on the subtests of the Metropolitan Achievement Test that favored the experimental and comparison groups over the control group. Implications include the effectiveness of CWPT for at-risk students and the continuing vulnerability of at-risk students whose daily instructional programs provide less instructional time and foster lower levels of active academic engagement.

  16. Bio-refinery approach for spent coffee grounds valorization.

    PubMed

    Mata, Teresa M; Martins, António A; Caetano, Nídia S

    2018-01-01

    Although waste is normally seen as a problem, current policies and strategic plans concur that, if adequately managed, it can be a source of interesting and valuable products, among them metals, oils and fats, lignin, cellulose and hemicelluloses, tannins, antioxidants, caffeine, polyphenols, pigments, and flavonoids, through recycling, compound recovery, or energy valorization, following the waste hierarchy. Besides contributing to more sustainable and circular economies, these products also have high commercial value compared with those obtained by currently used waste treatment methods. This paper shows how the bio-refinery framework can be used to obtain high-value products from organic waste. With spent coffee grounds as a case study, a sequential process is used to obtain first the most valuable products and then the others, allowing proper valorization of residues and increased sustainability of the whole process. Challenges facing the full development and implementation of waste-based bio-refineries are highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. HIGH TEMPERATURE TREATMENT OF INTERMEDIATE-LEVEL RADIOACTIVE WASTES - SIA RADON EXPERIENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobolev, I.A.; Dmitriev, S.A.; Lifanov, F.A.

    2003-02-27

    This review describes high temperature methods of low- and intermediate-level radioactive waste (LILW) treatment currently used at SIA Radon. Solid and liquid organic and mixed organic-inorganic wastes are subjected to plasma heating in a shaft furnace, forming a stable, leach-resistant slag suitable for disposal in near-surface repositories. Liquid inorganic radioactive waste is vitrified in a cold-crucible-based plant with a borosilicate glass productivity of up to 75 kg/h. Radioactive silts from settlers are heat-treated at 500-700 °C in an electric furnace to form a cake, which is then crushed, charged into 200 L barrels, and soaked with cement grout. Various thermochemical technologies have been developed and implemented for decontamination of metallic, asphalt, and concrete surfaces; treatment of organic wastes (spent ion-exchange resins, polymers, medical and biological wastes); batch vitrification of incinerator ashes, calcines, spent inorganic sorbents, and contaminated soil; and treatment of carbon containing the 14C nuclide (reactor graphite) and lubricants.

  18. Implementation of polyatomic MCTDHF capability

    NASA Astrophysics Data System (ADS)

    Haxton, Daniel; Jones, Jeremiah; Rescigno, Thomas; McCurdy, C. William; Ibrahim, Khaled; Williams, Sam; Vecharynski, Eugene; Rouet, Francois-Henry; Li, Xiaoye; Yang, Chao

    2015-05-01

    The implementation of the Multiconfiguration Time-Dependent Hartree-Fock method for polyatomic molecules using a cartesian product grid of sinc basis functions will be discussed. The focus will be on two key components of the method: first, the use of a resolution-of-the-identity approximation; second, the use of established techniques for triple Toeplitz matrix algebra using fast Fourier transform over distributed memory architectures (MPI 3D FFT). The scaling of two-electron matrix element transformations is converted from O(N^4) to O(N log N) by including these components. Here N = n^3, with n the number of points on a side. We test the preliminary implementation by calculating absorption spectra of small hydrocarbons, using approximately 16-512 points on a side. This work is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under the Early Career program, and by the offices of BES and Advanced Scientific Computing Research, under the SciDAC program.
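
    The O(N log N) scaling rests on a standard trick: a Toeplitz operator can be embedded in a circulant matrix and applied with FFTs instead of a dense product. A one-dimensional NumPy sketch of that trick (not the MCTDHF code itself, whose operators are three-dimensional and distributed) is:

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_matvec_fft(col, row, x):
        """Apply the Toeplitz matrix with first column `col` and first
        row `row` to x in O(n log n), via circulant embedding."""
        n = len(x)
        # Circulant of size 2n-1: first column is col followed by the
        # reversed tail of row.
        c = np.concatenate([col, row[:0:-1]])
        y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, len(c)))
        return y[:n].real

    # Check against the dense product on a small random example.
    rng = np.random.default_rng(0)
    col, row = rng.normal(size=8), rng.normal(size=8)
    row[0] = col[0]
    x = rng.normal(size=8)
    assert np.allclose(toeplitz(col, row) @ x,
                       toeplitz_matvec_fft(col, row, x))
    ```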

  19. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton-Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are part of the finite-difference connectivity. Proof-of-concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.
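
    In one dimension the idea reduces to interpolating a ghost head between two nodes and using it in the boundary-feature flux. A hypothetical sketch follows; the names, geometry, and linear interpolation are illustrative assumptions consistent with the abstract, not the paper's exact scheme.

    ```python
    import numpy as np

    # A boundary feature (e.g., a well) lies a fraction alpha of the
    # way from node n toward its neighbor m; the head it "sees" is the
    # ghost value interpolated between the two nodal heads.
    def feature_flux(h, n, m, alpha, conductance, h_feature):
        h_ghost = (1.0 - alpha) * h[n] + alpha * h[m]
        return conductance * (h_feature - h_ghost)

    # Symmetric (Picard) treatment: keep conductance * (h_feature - h[n])
    # in the matrix and move the alpha-dependent part of the correction,
    # lagged at the previous iterate h_old, to the right-hand side
    # (the sign depends on the assembly convention):
    #     rhs[n] -= conductance * alpha * (h_old[m] - h_old[n])
    h_old = np.array([10.0, 9.5, 9.0])
    print(feature_flux(h_old, 1, 2, alpha=0.3, conductance=50.0,
                       h_feature=8.0))
    ```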

  20. Pushing Memory Bandwidth Limitations Through Efficient Implementations of Block-Krylov Space Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro

    Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speedup. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.
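
    The bandwidth argument is easy to see in miniature: computing all pairwise inner products of two blocks of k vectors as k*k separate dots streams each vector k times, whereas one fused matrix-matrix product streams each vector once. A NumPy sketch of that reformulation (illustrating the principle only, not QUDA's CUDA kernels):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 100_000, 8                  # vector length, block size
    X = rng.normal(size=(n, k))
    Y = rng.normal(size=(n, k))

    # Naive: k*k separate dot products; each vector is streamed from
    # memory k times, so bandwidth grows quadratically with block size.
    G_naive = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            G_naive[i, j] = X[:, i] @ Y[:, j]

    # Fused: one matrix-matrix product yields the same Gram matrix
    # while streaming each vector once (linear bandwidth in k).
    G_fused = X.T @ Y
    assert np.allclose(G_naive, G_fused)
    ```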

  1. Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.

    PubMed

    Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano

    2014-09-09

    A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.

  2. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by the QR decomposition approaches.
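
    For orientation, the update that INVQR computes in square-root form is the familiar weighted RLS recursion. A textbook sketch follows (the matrix-inversion-lemma form, which is less numerically robust than the paper's INVQR variant; names are illustrative):

    ```python
    import numpy as np

    def rls_update(w, P, x, d, lam=0.99):
        """One weighted recursive least-squares (WRLS) step in the
        textbook matrix-inversion-lemma form. The INVQR variant in the
        paper computes the same update in square-root form, better
        suited to systolic (triangular-array) hardware.
        w: weights, P: inverse correlation matrix, x: input, d: target,
        lam: exponential forgetting factor."""
        Px = P @ x
        g = Px / (lam + x @ Px)          # gain vector
        e = d - w @ x                    # a priori error
        w = w + g * e                    # weight update
        P = (P - np.outer(g, Px)) / lam  # inverse-correlation update
        return w, P
    ```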

  3. Time Well Spent? Relating Television Use to Children’s Free-Time Activities

    PubMed Central

    Vandewater, Elizabeth A.; Bickham, David S.; Lee, June H.

    2010-01-01

    OBJECTIVES This study assessed the claim that children’s television use interferes with time spent in more developmentally appropriate activities. METHODS Data came from the first wave of the Child Development Supplement, a nationally representative sample of children aged 0 to 12 in 1997 (N = 1712). Twenty-four-hour time-use diaries from 1 randomly chosen weekday and 1 randomly chosen weekend day were used to assess children’s time spent watching television, time spent with parents, time spent with siblings, time spent reading (or being read to), time spent doing homework, time spent in creative play, and time spent in active play. Ordinary least squares multiple regression was used to assess the relationship between children’s television use and time spent pursuing other activities. RESULTS Results indicated that time spent watching television both with and without parents or siblings was negatively related to time spent with parents or siblings, respectively, in other activities. Television viewing also was negatively related to time spent doing homework for 7- to 12-year-olds and negatively related to creative play, especially among very young children (younger than 5 years). There was no relationship between time spent watching television and time spent reading (or being read to) or to time spent in active play. CONCLUSIONS The results of this study are among the first to provide empirical support for the assumptions made by the American Academy of Pediatrics in their screen time recommendations. Time spent viewing television both with and without parents and siblings present was strongly negatively related to time spent interacting with parents or siblings. Television viewing was associated with decreased homework time and decreased time in creative play. Conversely, there was no support for the widespread belief that television interferes with time spent reading or in active play. PMID:16452327

  4. Surviving the Implementation of a New Science Curriculum

    NASA Astrophysics Data System (ADS)

    Lowe, Beverly; Appleton, Ken

    2015-12-01

    Queensland schools are currently teaching with the first National Curriculum for Australia. This new curriculum was one of a number of political responses to address the recurring low scores in literacy, mathematics, and science that continue to hold Australia in poor international rankings. Teachers have spent 2 years getting to know the new science curriculum through meetings, training, and exploring the new Australian curriculum documents. This article examines the support and preparation for implementation provided in two regional schools, with a closer look at six specific teachers and their science teaching practices as they attempted to implement the new science curriculum. The use of a survey, field observations, and interviews revealed the schools' preparation practices and the teachers' practices, including the support provided to implement the new science curriculum. A description and analysis of school support and preparation as well as teachers' views of their experiences implementing the new science curriculum reveal both achievements and shortcomings. Problematic issues for the two schools and teachers include time to read and comprehend the curriculum documents and content expectations as well as time to train and change the current processes effectively. The case teachers' experiences reveal implications for the successful and effective implementation of new curriculum and curriculum reform.

  5. Using online program development to foster curricular change and innovation.

    PubMed

    Gwozdek, Anne E; Springfield, Emily C; Peet, Melissa R; Kerschbaum, Wendy E

    2011-03-01

    Distance education offers an opportunity to catalyze sweeping curricular change. Faculty members of the University of Michigan Dental Hygiene Program spent eighteen months researching best practices, planning outcomes and courses, and implementing an e-learning (online) dental hygiene degree completion program. The result is a collaborative and portfolio-integrated program that focuses on the development of reflective practitioners and leaders in the profession. A team-based, systems-oriented model for production, implementation, and evaluation has been critical to the program's success. The models and best practices on which this program was founded are described. Also provided is a framework of strategies for development, including the utilization of backward course design, which can be used in many areas of professional education.

  6. A Global Registration Algorithm of the Single-Closed Ring Multi-Stations Point Cloud

    NASA Astrophysics Data System (ADS)

    Yang, R.; Pan, L.; Xiang, Z.; Zeng, H.

    2018-04-01

    To address the global registration problem for a single closed ring of multi-station point clouds, a formula for the rotation-matrix error was constructed according to the definition of the error. A global registration algorithm for multi-station point clouds was then derived to minimize this rotation-matrix error, and fast-computing formulas for the transformation matrix were given, together with implementation steps and a simulation experiment scheme. Comparing three different processing schemes for multi-station point clouds, the experimental results verified the effectiveness of the new global registration method, which can effectively complete the global registration of the point cloud.
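
    The pairwise registrations that feed such a global ring adjustment are commonly computed with the SVD-based Kabsch solution for the best-fit rotation and translation between corresponding points. A standard NumPy sketch of that generic building block (not the paper's global algorithm) is:

    ```python
    import numpy as np

    def kabsch(P, Q):
        """Least-squares rotation R and translation t mapping point
        set P onto Q. P, Q: (n, 3) arrays of corresponding points."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)                # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection in the optimal orthogonal matrix.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cQ - R @ cP
        return R, t
    ```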

  7. Pattern identification in time-course gene expression data with the CoGAPS matrix factorization.

    PubMed

    Fertig, Elana J; Stein-O'Brien, Genevieve; Jaffe, Andrew; Colantuoni, Carlo

    2014-01-01

    Patterns in time-course gene expression data can represent the biological processes that are active over the measured time period. However, the orthogonality constraint in standard pattern-finding algorithms, including notably principal components analysis (PCA), confounds expression changes resulting from simultaneous, non-orthogonal biological processes. Previously, we have shown that Markov chain Monte Carlo nonnegative matrix factorization algorithms are particularly adept at distinguishing such concurrent patterns. One such matrix factorization is implemented in the software package CoGAPS. We describe the application of this software and several technical considerations for identification of age-related patterns in a public, prefrontal cortex gene expression dataset.
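
    As an illustration of the non-orthogonal decomposition the abstract contrasts with PCA, a plain NMF run on a synthetic genes-by-time matrix looks as follows. The deterministic scikit-learn solver is a stand-in: CoGAPS itself uses a Markov chain Monte Carlo sampler, and the data here are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    # Hypothetical nonnegative time-course matrix: genes x time points.
    rng = np.random.default_rng(0)
    data = rng.gamma(shape=2.0, scale=1.0, size=(500, 12))

    # data ~ amplitudes @ patterns. Unlike PCA, the factors are not
    # forced to be orthogonal, so two overlapping temporal processes
    # can each receive their own pattern.
    model = NMF(n_components=3, init="nndsvda", max_iter=500,
                random_state=0)
    amplitudes = model.fit_transform(data)   # (500, 3) gene weights
    patterns = model.components_             # (3, 12) temporal patterns
    ```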

  8. Robust Assignment Of Eigensystems For Flexible Structures

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Lim, Kyong B.; Junkins, John L.

    1992-01-01

    Improved method for placement of eigenvalues and eigenvectors of closed-loop control system by use of either state or output feedback. Applied to reduced-order finite-element mathematical model of NASA's MAST truss beam structure. Model represents deployer/retractor assembly, inertial properties of Space Shuttle, and rigid platforms for allocation of sensors and actuators. Algorithm formulated in real arithmetic for efficient implementation. Choice of open-loop eigenvector matrix and its closest unitary matrix believed suitable for generating well-conditioned eigensystem with small control gains. Implication of this approach is that element of iterative search for "optimal" unitary matrix appears unnecessary in practice for many test problems.

  9. S-matrix analysis of the baryon electric charge correlation

    NASA Astrophysics Data System (ADS)

    Lo, Pok Man; Friman, Bengt; Redlich, Krzysztof; Sasaki, Chihiro

    2018-03-01

    We compute the correlation of the net baryon number with the electric charge (χBQ) for an interacting hadron gas using the S-matrix formulation of statistical mechanics. The observable χBQ is particularly sensitive to the details of the pion-nucleon interaction, which are consistently incorporated in the current scheme via the empirical scattering phase shifts. Comparing to the recent lattice QCD studies in the (2 + 1)-flavor system, we find that the natural implementation of interactions and the proper treatment of resonances in the S-matrix approach lead to an improved description of the lattice data over that obtained in the hadron resonance gas model.
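
    As a reminder of the observable's definition (the standard thermodynamic form, stated here as background rather than quoted from the paper), the baryon-electric charge correlation is the mixed second derivative of the pressure with respect to the reduced chemical potentials:

    ```latex
    \[
      \chi_{BQ} \;=\;
      \left.
      \frac{\partial^{2}\,\bigl(p/T^{4}\bigr)}
           {\partial\hat{\mu}_{B}\,\partial\hat{\mu}_{Q}}
      \right|_{\hat{\mu}_{B}=\hat{\mu}_{Q}=0},
      \qquad \hat{\mu}_{X} = \mu_{X}/T .
    \]
    ```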

  10. CoCoa: a software tool for estimating the coefficient of coancestry from multilocus genotype data.

    PubMed

    Maenhout, Steven; De Baets, Bernard; Haesaert, Geert

    2009-10-15

    Phenotypic data collected in breeding programs and marker-trait association studies are often analyzed by means of linear mixed models. In these models, the covariance between the genetic background effects of all genotypes under study is modeled by means of pairwise coefficients of coancestry. Several marker-based coancestry estimation procedures allow estimation of this covariance matrix, but generally introduce a certain amount of bias when the examined genotypes are part of a breeding program. CoCoa implements the most commonly used marker-based coancestry estimation procedures and, as such, allows selection of the best-fitting covariance structure for the phenotypic data at hand. This better model fit translates into increased power and improved type I error control in association studies and improved accuracy in phenotypic prediction studies. The presented software package also provides an implementation of the new Weighted Alikeness in State (WAIS) estimator for use in hybrid breeding programs. Besides several matrix manipulation tools, CoCoa implements two different bending heuristics for cases in which the inverse of an ill-conditioned coancestry matrix estimate is needed. The software package CoCoa is freely available at http://webs.hogent.be/cocoa. Source code, manual, binaries for 32- and 64-bit Linux systems and an installer for Microsoft Windows are provided. The core components of CoCoa are written in C++, while the graphical user interface is written in Java.
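
    One common member of this class of estimators is VanRaden's genomic relationship matrix; a NumPy sketch follows, offered as a representative marker-based estimator rather than CoCoa's exact formulation.

    ```python
    import numpy as np

    def vanraden_grm(M):
        """Marker-based kinship estimate (VanRaden's first method).
        M: (individuals, markers) genotype matrix coded 0/1/2."""
        p = M.mean(axis=0) / 2.0          # estimated allele frequencies
        Z = M - 2.0 * p                   # center by expected dosage
        return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))
    ```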

  11. Laminated Object Manufacturing-Based Design Ceramic Matrix Composites

    DTIC Science & Technology

    2001-04-01

    components for DoD applications. Program goals included the development of (1) a new LOM-based design methodology for CMC, (2) optimized preceramic polymer ... [list-of-figures residue; recoverable caption: "Detail of LOM Composites Forming System w/ glass fiber/polymer laminate"] ... such as polymer matrix composites have faced similar barriers to implementation. These barriers have been overcome through the development of suitable

  12. Computational Investigation of Structured Shocks in Al/SiC-Particulate Metal-Matrix Composites

    DTIC Science & Technology

    2011-06-01

    used to implement the dynamic-mixture model into the VUMAT user-material subroutine of ABAQUS/Explicit. Owing to the attendant large strains and ... that the residual thermal-expansion effects are more pronounced in the aluminium matrix than in the SiC particulates. This finding is consistent with the ... simple waves (CSWs) (Davison, 2008). In accordance with the previously observed larger thermal-expansion effects in Al, Figure 5(b) shows that the

  13. Application of unsteady flow rate evaluations to identify the dynamic transfer function of a cavitating Venturi

    NASA Astrophysics Data System (ADS)

    Marie-Magdeleine, A.; Fortes-Patella, R.; Lemoine, N.; Marchand, N.

    2012-11-01

    This study concerns the simulation of the implementation of the Kinetic Differential Pressure (KDP) method used for unsteady mass flow rate evaluation, in order to identify the dynamic transfer matrix of a cavitating Venturi. Firstly, the equations of the IZ code used for this simulation are introduced. Next, the methodology for evaluating unsteady pressures and mass flow rates at the inlet and the outlet of the cavitating Venturi and for identifying the dynamic transfer matrix is presented. Later, the robustness of the method against measurement uncertainties, implemented as Gaussian white noise, is studied. The results of the numerical simulations allow us to estimate the system's linearity domain and to perform the Empirical Transfer Function Estimation (ETFE) on frequency-by-frequency inlet signals and on chirp-signal tests. Then the pressure data obtained with the KDP method are used, the identification procedure is performed by ETFE and by user-made Auto-Regressive Moving-Average eXogenous (ARMAX) algorithms, and the resulting transfer matrix coefficients are compared with those obtained from the simulated input and output data.
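
    The ETFE step itself is a generic frequency-domain estimate: the transfer function is the ratio of the input-output cross-spectrum to the input auto-spectrum. A Welch-averaged SciPy sketch (a textbook estimator, not the IZ-code procedure) is:

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    def etfe(u, y, fs, nperseg=1024):
        """Empirical transfer function estimate H(f) = S_uy / S_uu
        from input u (e.g., unsteady mass flow rate) to output y."""
        f, Suy = csd(u, y, fs=fs, nperseg=nperseg)   # cross-spectrum
        _, Suu = welch(u, fs=fs, nperseg=nperseg)    # input spectrum
        return f, Suy / Suu
    ```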

  14. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
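
    The Rayleigh–Ritz step whose invocation count the algorithm reduces is itself simple: project A onto a subspace, solve the small dense eigenproblem, and lift the results back. A NumPy sketch of that kernel (generic, not the paper's PPCG code):

    ```python
    import numpy as np

    def rayleigh_ritz(A, V, k):
        """Extract the k smallest Ritz pairs of the Hermitian matrix A
        from the subspace spanned by the columns of V."""
        Q, _ = np.linalg.qr(V)              # orthonormal basis
        H = Q.conj().T @ A @ Q              # small projected matrix
        vals, W = np.linalg.eigh(H)         # ascending eigenvalues
        return vals[:k], Q @ W[:, :k]       # Ritz values and vectors
    ```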

  15. TEST SYSTEM FOR EVALUATING SPENT NUCLEAR FUEL BENDING STIFFNESS AND VIBRATION INTEGRITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Wang, Hong; Bevard, Bruce Balkcom

    2013-01-01

    Transportation packages for spent nuclear fuel (SNF) must meet safety requirements specified by federal regulations. For normal conditions of transport, vibration loads incident to transport must be considered. This is particularly relevant for high-burnup fuel (>45 GWd/MTU). As the burnup of the fuel increases, a number of changes occur that may affect the performance of the fuel and cladding in storage and during transportation. The mechanical properties of high-burnup de-fueled cladding have been previously studied by subjecting defueled cladding tubes to longitudinal (axial) tensile tests, ring-stretch tests, ring-compression tests, and biaxial tube burst tests. The objective of this study is to investigate the mechanical properties and behavior of both the cladding and the fuel in it under vibration/cyclic loads similar to the sustained vibration loads experienced during normal transport. The vibration loads to SNF rods during transportation can be characterized as dynamic, cyclic, bending loads. The transient vibration signals in a specified transport environment can be analyzed, and frequency, amplitude and phase components can be identified. The methodology being implemented is a novel approach to study the vibration integrity of actual SNF rod segments by testing and evaluating the fatigue performance of SNF rods at defined frequencies. Oak Ridge National Laboratory (ORNL) has developed a bending fatigue system to evaluate the response of SNF rods to vibration loads. A three-point deflection measurement technique using linear variable differential transformers is used to characterize the bending rod curvature, and electromagnetic force linear motors are used as the driving system for mechanical loading. ORNL plans to use the test system in a hot cell for SNF vibration testing on high-burnup, irradiated fuel to evaluate the effect of pellet-clad interaction and bonding on the bending fatigue performance and effective lifetime of the fuel-clad structure. Technical challenges include pure bending implementation, remote installation and detachment of the SNF test specimen, test specimen deformation measurement, and identification of a driving system suitable for use in a hot cell. Surrogate test specimens have been used to calibrate the test setup and conduct systematic cyclic tests. The calibration and systematic cyclic tests have been used to identify test protocol issues prior to implementation in the hot cell. In addition, cyclic hardening in unidirectional bending and softening in reverse bending were observed in the surrogate test specimens. The interface bonding between the surrogate clad and pellets was found to affect the bending response of the surrogate rods; confirming this behavior in actual spent fuel segments will be an important aspect of the hot cell test implementation.

  16. Report to the Office of Naval Research for Contract N00014-89-J-1108 (Texas A&M University)

    DTIC Science & Technology

    1989-12-31

    class of undetermined coefficient problems of parabolic and elliptic type, and is easy to implement provided that the boundary conditions are in a ... considerable expertise to our efforts. Richard Fabiano, a student of John Burns, spent 3 years at Brown working with Tom Banks. His speciality is in ... [3] J. R. Cannon and H. M. Yin, A uniqueness theorem for a class of parabolic inverse problems, J. Inverse Problems, 4, (1988), 411-416.

  17. Treatment of Spent Argentine Ion Exchange Resin Using Vitrification - Results of FY01 Testing at the Savannah River Technology Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, C.L.

    2002-08-14

    Under the Science and Technology Implementing Arrangement for Cooperation on Radioactive and Mixed Waste Management (JCCRM), the Department of Energy (DOE) is helping to transfer waste treatment technology to international atomic energy commissions. In 1996, as part of the JCCRM, DOE established a collaborative research agreement with Argentina's Comision Nacional de Energia Atomica (CNEA). A primary mission of the CNEA is to direct waste management activities for Argentina's nuclear industry.

  18. Defense Civilian Compensation: DOD and OPM Could Improve the Consistency of DOD’s Eligibility Determinations for Living Quarters Allowances

    DTIC Science & Technology

    2015-06-01

    DOD provides LQA as an incentive to recruit eligible individuals for civilian employee assignments overseas. In 2014 DOD ... spent almost $504 million on LQA for about 16,500 civilian employees to help defray overseas living expenses, such as rent and utilities. GAO was ... asked to review DOD’s implementation of LQA policies for overseas employees. This report evaluates the extent to which (1) DOD has clarified its

  19. On feasibility of a closed nuclear power fuel cycle with minimum radioactivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrianova, E. A.; Davidenko, V. D.; Tsibulskiy, V. F., E-mail: Tsibulskiy-VF@nrcki.ru

    2015-12-15

    Practical implementation of a closed nuclear fuel cycle implies solution of two main tasks. The first task is creation of environmentally acceptable operating conditions of the nuclear fuel cycle considering, first of all, high radioactivity of the involved materials. The second task is creation of effective and economically appropriate conditions of involving fertile isotopes in the fuel cycle. Creation of technologies for management of the high-level radioactivity of spent fuel reliable in terms of radiological protection seems to be the hardest problem.

  20. Methods for Assessment of Species Richness and Occupancy Across Space, Time, Taxonomic Groups, and Ecoregions

    DTIC Science & Technology

    2017-03-26

    logistic constraints and associated travel time between points in the central and western Great Basin. The geographic and temporal breadth of our ... surveys (MacKenzie and Royle 2005). In most cases, less time is spent traveling between sites on a given day when the single-day design is implemented ... with the single-day design (110 hr). These estimates did not include return-travel time, which did not limit sampling effort. As a result, we could

  1. Implementation of an Open-Loop Rule-Based Control Strategy for a Hybrid-Electric Propulsion System On a Small RPA

    DTIC Science & Technology

    2011-03-01

    input spindle from the engine to over-tighten and apply an even greater amount of resistance to the engine shaft. Not only was this dangerous to ... Mengistu, Todd Rotramel, and Matt Rippl, all of whom worked together with me to design and build the test rig for our dynamometer setup. Countless ... hours were spent together planning and executing the design and building the stand itself. The AFIT machine shop crew and ENY lab techs also

  2. Making extreme computations possible with virtual machines

    NASA Astrophysics Data System (ADS)

    Reuter, J.; Chokoufe Nejad, B.; Ohl, T.

    2016-10-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which even reduce the size by one order of magnitude. The byte-code is interpreted by a Virtual Machine with runtimes comparable to compiled code and a better scaling with additional legs. We study the properties of this algorithm, as an extension of the Optimizing Matrix Element Generator (O'Mega). The bytecode matrix elements are available as alternative input for the event generator WHIZARD. The bytecode interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
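
    The design is the classic trade of compilation for interpretation: emit compact, portable instructions once, then loop over them in a small dispatcher for every evaluation. A toy stack-machine sketch in Python (opcodes and layout invented for illustration; O'Mega's real instruction set encodes wavefunctions, currents, and vertices):

    ```python
    import operator

    OPS = {"add": operator.add, "mul": operator.mul}

    def run(bytecode, inputs):
        """Interpret byte-code: ('push', i) loads inputs[i];
        ('add',) and ('mul',) pop two values and push the result."""
        stack = []
        for instr in bytecode:
            if instr[0] == "push":
                stack.append(inputs[instr[1]])
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(OPS[instr[0]](a, b))
        return stack.pop()

    # (a + b) * c, compiled once and evaluated for two "events":
    prog = [("push", 0), ("push", 1), ("add",), ("push", 2), ("mul",)]
    assert run(prog, [1.0, 2.0, 4.0]) == 12.0
    assert run(prog, [0.5, 0.5, 3.0]) == 3.0
    ```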

  3. PNNL Technical Support to The Implementation of EMTA and EMTA-NLA Models in Autodesk® Moldflow® Packages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Wang, Jin

    2012-12-01

    Under the Predictive Engineering effort, PNNL developed linear and nonlinear property prediction models for long-fiber thermoplastics (LFTs). These models were implemented in PNNL's EMTA and EMTA-NLA codes. While EMTA is a standalone software package for the computation of the composites' thermoelastic properties, EMTA-NLA presents a series of nonlinear models implemented in ABAQUS® via user subroutines for structural analyses. In all these models, it is assumed that the fibers are linear elastic while the matrix material can exhibit a linear or typical nonlinear behavior depending on the loading prescribed to the composite. The key idea is to model the constitutive behavior of the matrix material and then to use an Eshelby-Mori-Tanaka approach (EMTA) combined with numerical techniques for fiber length and orientation distributions to determine the behavior of the as-formed composite. The basic property prediction models of EMTA and EMTA-NLA have been the subject of implementation in the Autodesk® Moldflow® software packages. These models are the elastic stiffness model accounting for fiber length and orientation distributions, the fiber/matrix interface debonding model, and the elastic-plastic models. The PNNL elastic-plastic models for LFTs describe the composite nonlinear stress-strain response up to failure by an elastic-plastic formulation associated with either a micromechanical criterion to predict failure or a continuum damage mechanics formulation coupling damage to plasticity. All the models account for fiber length and orientation distributions as well as fiber/matrix debonding that can occur at any stage of loading. In an effort to transfer the technologies developed under the Predictive Engineering project to the American automotive and plastics industries, PNNL has obtained the approval of the DOE Office of Vehicle Technologies to provide Autodesk, Inc. with technical support for the implementation of the basic property prediction models of EMTA and EMTA-NLA in the Autodesk® Moldflow® packages. This report summarizes the recent results from Autodesk Simulation Moldflow Insight (ASMI) analyses using the EMTA models and from EMTA-NLA/ABAQUS® analyses for further assessment of the EMTA-NLA models to support their implementation in Autodesk Moldflow Structural Alliance (AMSA). PNNL's technical support to Autodesk, Inc. included (i) providing the theoretical property prediction models as described in published journal articles and reports, (ii) providing explanations of these models and the computational procedure, (iii) providing the necessary LFT data for process simulations and property predictions, and (iv) performing ABAQUS/EMTA-NLA analyses to further assess and illustrate the models for selected LFT materials.

  4. LCMV beamforming for a novel wireless local positioning system: a stationarity analysis

    NASA Astrophysics Data System (ADS)

    Tong, Hui; Zekavat, Seyed A.

    2005-05-01

    In this paper, we discuss the implementation of Linearly Constrained Minimum Variance (LCMV) beamforming (BF) for a novel Wireless Local Positioning System (WLPS). The main components of WLPS are: (a) a dynamic base station (DBS), and (b) a transponder (TRX), both mounted on mobiles. WLPS might be considered a node in a Mobile Ad hoc NETwork (MANET). Each TRX is assigned an identification (ID) code. The DBS transmits periodic short bursts of energy which contain an ID request (IDR) signal. The TRX transmits back its ID code (a signal with a limited duration) to the DBS as soon as it detects the IDR signal. Hence, the DBS receives non-continuous signals transmitted by the TRX. In this work, we assume asynchronous Direct-Sequence Code Division Multiple Access (DS-CDMA) transmission from the TRX, with an antenna array and LCMV BF mounted at the DBS, and we discuss the estimation of the observed signal covariance matrix for LCMV BF. In LCMV BF, the observed covariance matrix should be estimated. Usually, the sample covariance matrix (SCM) is used to estimate this covariance matrix, assuming a stationary model for the observed data, which is the case in many communication systems. However, due to the non-stationary behavior of the received signal in WLPS systems, the SCM does not lead to high WLPS performance, even compared to a conventional beamformer. A modified covariance matrix estimation method which utilizes the cyclostationarity property of the WLPS system is introduced as a solution to this problem. It is shown that this method leads to a significant improvement in WLPS performance.
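
    Once a covariance estimate R is in hand, the LCMV weights have the standard closed form: minimize w^H R w subject to C^H w = f. A NumPy sketch of that textbook solution (the paper's contribution is the cyclostationarity-aware estimate of R, not this formula):

    ```python
    import numpy as np

    def lcmv_weights(R, C, f):
        """LCMV beamformer: w = R^{-1} C (C^H R^{-1} C)^{-1} f.
        R: (m, m) covariance estimate, e.g. the sample covariance
           X @ X.conj().T / N over N snapshots;
        C: (m, c) constraint (steering) matrix;
        f: (c,) desired constraint response."""
        RinvC = np.linalg.solve(R, C)
        return RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)
    ```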

  5. Implementation of a Parameterized Interacting Multiple Model Filter on an FPGA for Satellite Communications

    NASA Technical Reports Server (NTRS)

    Hackett, Timothy M.; Bilen, Sven G.; Ferreira, Paulo Victor R.; Wyglinski, Alexander M.; Reinhart, Richard C.

    2016-01-01

    In a communications channel, the space environment between a spacecraft and an Earth ground station can potentially cause the loss of a data link or at least degrade its performance due to atmospheric effects, shadowing, multipath, or other impairments. In adaptive and coded modulation, the signal power level at the receiver can be used to choose a modulation-coding technique that maximizes throughput while meeting bit error rate (BER) and other performance requirements. The goal of this research is to implement a generalized interacting multiple model (IMM) filter based on Kalman filters for improved received-power estimation on software-defined radio (SDR) technology for satellite communications applications. The IMM filter has been implemented in Verilog as a customizable bank of Kalman filters, allowing a choice between performance and resource utilization. Each Kalman filter can be implemented using either solely a Schur complement module (for high area efficiency) or Schur complement, matrix multiplication, and matrix addition modules (for high performance). These modules were simulated and synthesized for the Virtex II platform on the JPL Radio Experimenter Development System (EDS) at NASA Glenn Research Center. The results for simulation, synthesis, and hardware testing are presented.

  6. University building safety index measurement using risk and implementation matrix

    NASA Astrophysics Data System (ADS)

    Rahman, A.; Arumsari, F.; Maryani, A.

    2018-04-01

    Many high-rise buildings have been constructed at universities in Indonesia. The management of a high-rise building must provide safety planning and proper safety equipment in each part of the building. Unfortunately, most universities in Indonesia have not yet applied a safety policy and show little awareness in maintaining safety facilities. Several fire accidents at universities showed significant risks that should be managed by building management. This research developed a framework for measuring a high-rise building safety index for universities. The framework not only assesses the risk magnitude but also provides a modular building safety checklist for measuring the safety implementation level. The safety checklist covers 8 types of university rooms: office, classroom, four types of laboratories, canteen, and library. The university building safety index is determined using a risk-implementation matrix, combining the measured risk magnitude with the assessed safety implementation level. The building safety index measurement was applied to four high-rise buildings on the ITS campus: the assessment showed the rectorate building in a secure condition, the chemical department building in a beware condition, and the library and administration center buildings in a less secure condition.
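
    A risk-implementation matrix of this kind amounts to a two-axis lookup. The sketch below is hypothetical: the numeric thresholds and the fourth category are invented for illustration, while the category names "secure", "beware", and "less secure" come from the abstract.

    ```python
    def safety_index(risk_magnitude, implementation_level):
        """Map normalized risk and safety-implementation scores in
        [0, 1] to a qualitative building-safety category."""
        high_risk = risk_magnitude >= 0.5
        good_impl = implementation_level >= 0.5
        if good_impl and not high_risk:
            return "secure"
        if good_impl and high_risk:
            return "beware"
        if not good_impl and not high_risk:
            return "less secure"
        return "insecure"          # invented label for the worst cell

    assert safety_index(0.2, 0.8) == "secure"
    ```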

  7. Hierarchical Control and Trajectory Planning

    NASA Technical Reports Server (NTRS)

    Martin, Clyde F.; Horn, P. W.

    1994-01-01

    Most of the time on this project was spent on the trajectory planning problem. The construction is equivalent to the classical spline construction in the case that the system matrix is nilpotent. If the dimension of the system is n, then the spline of degree 2n-1 is constructed. This gives a new approach to the construction of splines that is more efficient than the usual construction and at the same time allows the construction of a much larger class of splines. All known classes of splines are reconstructed using the approach of linear control theory. As a numerical analysis tool, control theory provides a very good framework for constructing splines. However, for the purposes of trajectory planning it is quite another story. Enclosed in this document are four reports done under this grant.

  8. SC'11 Poster: A Highly Efficient MGPT Implementation for LAMMPS; with Strong Scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oppelstrup, T; Stukowski, A; Marian, J

    2011-12-07

    The MGPT potential has been implemented as a drop-in package for the general molecular dynamics code LAMMPS. We implement an improved communication scheme that shrinks the communication layer thickness and improves the load balancing. This results in unprecedented strong scaling, with speedup continuing beyond 1/8 atom/core. In addition, we have optimized the small-matrix linear algebra with generic blocking (for all processors) and specific SIMD intrinsics for vectorization on Intel, AMD, and BlueGene CPUs.

  9. Computationally efficient multibody simulations

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, Jayant; Kumar, Manoj

    1994-01-01

    Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.

  10. Using the RE-AIM framework to evaluate a school-based municipal programme tripling time spent on PE.

    PubMed

    Nielsen, Jonas Vestergaard; Skovgaard, Thomas; Bredahl, Thomas Viskum Gjelstrup; Bugge, Anna; Wedderkopp, Niels; Klakk, Heidi

    2018-06-01

    Documenting the implementation of effective real-world programmes is considered an important step to support the translation of evidence into practice. Thus, the aim of this study was to identify factors influencing the adoption, implementation and maintenance of the Svendborg Project (SP), an effective real-world programme in which six primary schools in the municipality of Svendborg, Denmark, implemented triple the amount of physical education (PE) from pre-school to sixth grade. SP has been maintained for ten years and scaled up to all municipal schools since it was initiated in 2008. The Reach, Effectiveness, Adoption, Implementation and Maintenance (RE-AIM) framework was applied as an analytic tool through a convergent mixed-method triangulation design. Results show that SP has been implemented with high fidelity and has become an established part of the municipality and school identity. The successful implementation and dissemination of the programme was enabled through the introduction of a predominantly bottom-up approach combined with simple non-negotiable requirements. The results show that this combination has led to a better fit of the programme to the individual school context while still obtaining high implementation fidelity. Finally, the early integration of research has legitimated and benefitted the programme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Implementation of a multi-professional standardized care plan in electronic health records for the care of stroke patients.

    PubMed

    Pöder, Ulrika; Fogelberg-Dahm, Marie; Wadensten, Barbro

    2011-09-01

    To compare staff opinions about standardized care plans and self-reported habits with regard to documentation, and their perceived knowledge about the evidence-based guidelines in stroke care before and after implementation of an evidence-based-standardized care plan (EB-SCP) and quality standard for stroke care. The aim was also to describe staff opinions about, and their use of, the implemented EB-SCP. To facilitate evidence-based practice (EBP), a multi-professional EB-SCP and quality standard for stroke care was implemented in the electronic health record (EHR). Quantitative, descriptive and comparative, based on questionnaires completed before and after implementation. Perceived knowledge about evidence-based guidelines in stroke care increased after implementation of the EB-SCP. The majority agreed that the EB-SCP is useful and facilitates their work. There was no change between before and after implementation with regard to opinions about standardized care plans, self-reported documentation habits or time spent on documentation. An evidence-based SCP seems to be useful in patient care and improves perceived knowledge about evidence-based guidelines in stroke care. For nursing managers, introduction of evidence-based SCP in the EHR may improve the prerequisites for promoting high-quality EBP in multi-professional care. 2011 Blackwell Publishing Ltd.

  12. Transfer matrix calculation for ion optical elements using real fields

    NASA Astrophysics Data System (ADS)

    Mishra, P. M.; Blaum, K.; George, S.; Grieser, M.; Wolf, A.

    2018-03-01

    With the increasing importance of ion storage rings and traps in low energy physics experiments, efficient transport of ion species from the ion source area to the experimental setup becomes essential. Some available, powerful software packages rely on transfer matrix calculations in order to compute the ion trajectory through ion-optical beamline systems of high complexity. With analytical approaches, transfer matrices have so far been documented only for a few ideal ion optical elements. Here we describe an approach, using beam tracking calculations, to determine the transfer matrix for any individual electrostatic or magnetostatic ion optical element. We verify the procedure by considering well-known cases and then apply it to derive the transfer matrix of a 90-degree electrostatic quadrupole deflector including its realistic geometry and fringe fields. A transfer line consisting of a quadrupole deflector and a quadrupole doublet is considered, where the results from a standard first-order transfer-matrix-based ion optical simulation program implementing the derived transfer matrix are compared with real-field beam tracking simulations.
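
    The underlying recipe can be shown in a few lines: track rays with small offsets around the reference trajectory and take finite differences to recover the first-order map. A NumPy sketch (the tracking routine, phase-space convention, and the drift test case are illustrative, not the paper's code):

    ```python
    import numpy as np

    def transfer_matrix_from_tracking(track, dim=4, eps=1e-6):
        """Recover the first-order transfer matrix of an element from
        a tracking routine `track`, which maps an initial phase-space
        vector (e.g., x, x', y, y') to a final one, using central
        differences of tracked rays around the reference trajectory."""
        M = np.empty((dim, dim))
        for j in range(dim):
            up = np.zeros(dim); up[j] = eps
            M[:, j] = (track(up) - track(-up)) / (2.0 * eps)
        return M

    # Sanity check with an analytically known element: a drift of
    # length L, whose transfer matrix is block [[1, L], [0, 1]].
    L = 0.5
    drift = lambda v: np.array([v[0] + L * v[1], v[1],
                                v[2] + L * v[3], v[3]])
    expected = np.array([[1, L, 0, 0], [0, 1, 0, 0],
                         [0, 0, 1, L], [0, 0, 0, 1]], dtype=float)
    assert np.allclose(transfer_matrix_from_tracking(drift), expected)
    ```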

  13. Electric and Magnetic Manipulation of Biological Systems

    NASA Astrophysics Data System (ADS)

    Lee, H.; Hunt, T. P.; Liu, Y.; Ham, D.; Westervelt, R. M.

    2005-06-01

    New types of biological cell manipulation systems, a micropost matrix, a microelectromagnet matrix, and a microcoil array, were developed. The micropost matrix consists of post-shaped electrodes embedded in an insulating layer. With a separate ac voltage applied to each electrode, the micropost matrix generates dielectrophoretic force to trap and move individual biological cells. The microelectromagnet matrix consists of two arrays of straight wires aligned perpendicular to each other, that are covered with insulating layers. By independently controlling the current in each wire, the microelectromagnet matrix creates versatile magnetic fields to manipulate individual biological cells attached to magnetic beads. The microcoil array is a set of coils implemented in a foundry using a standard silicon fabrication technology. Current sources to the coils, and control circuits are integrated on a single chip, making the device self-contained. Versatile manipulation of biological cells was demonstrated using these devices by generating optimized electric or magnetic field patterns. A single yeast cell was trapped and positioned with microscopic resolution, and multiple yeast cells were trapped and independently moved along the separate paths for cell-sorting.

  14. Short Course on Implementation of Zone Technology in the Repair and Overhaul Environment

    DTIC Science & Technology

    1996-04-01

    [Flattened table residue; recoverable: Fig. 9-3 tabulates the zone management approach, varying from functional to project and project/matrix organization, for the pier zone & systems and pier/DD/staging zones.] ... intractable problems that currently exist. Nature can give us many clues. If only we could harness the material that makes the dolphin's outer shell so smooth ... the natural effect of requiring peak manning and confined outfitting schedules. Through the application of system-oriented logic to actual work accom

  15. Spin Forming of Aluminum Metal Matrix Composites

    NASA Technical Reports Server (NTRS)

    Lee, Jonathan A.; Munafo, Paul M. (Technical Monitor)

    2001-01-01

    An exploratory effort between NASA-Marshall Space Flight Center (MSFC) and SpinCraft, Inc., to experimentally spin form cylinders and concentric parts from small and thin sheets of aluminum Metal Matrix Composites (MMC), successfully yielded good microstructure data and forming parameters. MSFC and SpinCraft will collaborate on the recent technical findings and develop strategy to implement this technology for NASA's advanced propulsion and airframe applications such as pressure bulkheads, combustion liner assemblies, propellant tank domes, and nose cone assemblies.

  16. Quantum privacy and Schur product channels

    NASA Astrophysics Data System (ADS)

    Levick, Jeremy; Kribs, David W.; Pereira, Rajesh

    2017-12-01

    We investigate the quantum privacy properties of an important class of quantum channels, by making use of a connection with Schur product matrix operations and associated correlation matrix structures. For channels implemented by mutually commuting unitaries, which cannot privatise qubits encoded directly into subspaces, we nevertheless identify private algebras and subsystems that can be privatised by the channels. We also obtain further results by combining our analysis with tools from the theory of quasi-orthogonal operator algebras and graph theory.

  17. Invariant Imbedded T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios

    NASA Technical Reports Server (NTRS)

    Pelissier, Craig; Kuo, Kwo-Sen; Clune, Thomas; Adams, Ian; Munchak, Stephen

    2017-01-01

    The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-Matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors absent of any significant symmetry, for which volume integral methods such as those based on Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases when aspect ratios become extreme, such as needle-like particles with large height to diameter values, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation for both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss details. Our intent is to release the combined EBCM IITM+SOV software to the community under an open source license.

  18. Invariant Imbedding T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios

    NASA Astrophysics Data System (ADS)

    Pelissier, C.; Clune, T.; Kuo, K. S.; Munchak, S. J.; Adams, I. S.

    2017-12-01

    The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-Matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors absent of any significant symmetry, for which volume integral methods such as those based on Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases when aspect ratios become extreme, such as needle-like particles with large height to diameter values, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation for both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss details. Our intent is to release the combined EBCM & IITM+SOV software to the community under an open source license.

  19. A Coupled/Uncoupled Computational Scheme for Deformation and Fatigue Damage Analysis of Unidirectional Metal-Matrix Composites

    NASA Technical Reports Server (NTRS)

    Wilt, Thomas E.; Arnold, Steven M.; Saleeb, Atef F.

    1997-01-01

    A fatigue damage computational algorithm utilizing a multiaxial, isothermal, continuum-based fatigue damage model for unidirectional metal-matrix composites has been implemented into the commercial finite element code MARC using MARC user subroutines. Damage is introduced into the finite element solution through the concept of effective stress that fully couples the fatigue damage calculations with the finite element deformation solution. Two applications using the fatigue damage algorithm are presented. First, an axisymmetric stress analysis of a circumferentially reinforced ring, wherein both the matrix cladding and the composite core were assumed to behave elastic-perfectly plastic. Second, a micromechanics analysis of a fiber/matrix unit cell using both the finite element method and the generalized method of cells (GMC). Results are presented in the form of S-N curves and damage distribution plots.
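
    The effective-stress coupling mentioned above takes, in its standard continuum damage mechanics form (stated here as general background; the paper's multiaxial model elaborates on it), a rescaling of the stress by the surviving load-bearing fraction:

    ```latex
    \[
      \tilde{\sigma} \;=\; \frac{\sigma}{1 - D}, \qquad 0 \le D < 1 ,
    \]
    ```

    so as the fatigue damage variable D accumulates over cycles, the effective stress driving the deformation solution grows, which is how the damage calculation feeds back into the finite element solution.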

  20. A unique set of micromechanics equations for high temperature metal matrix composites

    NASA Technical Reports Server (NTRS)

    Hopkins, D. A.; Chamis, C. C.

    1985-01-01

    A unique set of micromechanics equations is presented for high temperature metal matrix composites. The set includes expressions to predict mechanical properties, thermal properties and constituent microstresses for the unidirectional fiber reinforced ply. The equations are derived based on a mechanics of materials formulation assuming a square array unit cell model of a single fiber, surrounding matrix and an interphase to account for the chemical reaction which commonly occurs between fiber and matrix. A three-dimensional finite element analysis was used to perform a preliminary validation of the equations. Excellent agreement between properties predicted using the micromechanics equations and properties simulated by the finite element analyses is demonstrated. Implementation of the micromechanics equations as part of an integrated computational capability for nonlinear structural analysis of high temperature multilayered fiber composites is illustrated.
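
    Representative members of such a set, shown here in generic rule-of-mixtures form for the longitudinal ply direction as background (the paper's equations additionally include the interphase and the square-array geometry), are:

    ```latex
    \[
      E_{\ell 11} = V_f\,E_{f11} + V_m\,E_m ,
      \qquad
      \alpha_{\ell 11} =
        \frac{V_f\,E_{f11}\,\alpha_{f11} + V_m\,E_m\,\alpha_m}
             {V_f\,E_{f11} + V_m\,E_m} ,
    \]
    ```

    where V, E, and α denote the volume fraction, modulus, and thermal expansion coefficient of the fiber (f) and matrix (m) constituents.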
