ERIC Educational Resources Information Center
Trochim, William M. K.
The topic of research implementation and how it can affect evaluation results is investigated. Even when evaluations are well planned, the results obtained can be misleading if the conscientiously constructed research plan is not correctly implemented in practice. In virtually every research arena, one finds major difficulties in implementing the…
NASA Technical Reports Server (NTRS)
Arya, V. K.; Kaufman, A.
1989-01-01
A description of the finite element implementation of Robinson's unified viscoplastic model into the General Purpose Finite Element Program (MARC) is presented. To demonstrate its application, the implementation is applied to some uniaxial and multiaxial problems. A comparison of the results for the multiaxial problem of a thick internally pressurized cylinder, obtained using the finite element implementation and an analytical solution, is also presented. The excellent agreement obtained confirms the correct finite element implementation of Robinson's model.
NASA Technical Reports Server (NTRS)
Arya, V. K.; Kaufman, A.
1987-01-01
A description of the finite element implementation of Robinson's unified viscoplastic model into the General Purpose Finite Element Program (MARC) is presented. To demonstrate its application, the implementation is applied to some uniaxial and multiaxial problems. A comparison of the results for the multiaxial problem of a thick internally pressurized cylinder, obtained using the finite element implementation and an analytical solution, is also presented. The excellent agreement obtained confirms the correct finite element implementation of Robinson's model.
Lean healthcare in developing countries: evidence from Brazilian hospitals.
Costa, Luana Bonome Message; Filho, Moacir Godinho; Rentes, Antonio Freitas; Bertani, Thiago Moreno; Mardegan, Ronaldo
2017-01-01
The present study evaluates how five sectors of two Brazilian hospitals have implemented lean healthcare concepts in their operations. The main characteristics of the implementation process are analyzed in the present study: the motivational factor for implementation, implementation time, form (consultancy or internal), team (hospital and consultants), lean implementation continuity/sustainability, lean healthcare tools and methods implemented, problems/improvement opportunities, lean healthcare barriers faced during the implementation process, and critical factors that affected the implementation and the results obtained in each case. The case studies indicate that reducing patient lead times and costs and making financial improvements were the primary factors that motivated lean healthcare implementation in the hospitals studied. Several tools and methods were used in the cases studied, especially value stream mapping and DMAIC. The barriers found in both hospitals are primarily associated with the human factor. Additionally, the results obtained after implementation were analyzed, and improvements in financial aspects, productivity and capacity, and lead time reduction of the analyzed sectors were observed. Further, this study presented four propositions derived from the case results that highlight barriers and challenges to lean healthcare implementation in developing countries. Two of these barriers are hospital organizational structure (and, consequently, how senior management works with medical staff) and outsourcing of hospital activities. This study also concluded that the initialization and maintenance of lean healthcare implementation rely heavily on external support because lean healthcare subject knowledge is not yet available in the healthcare organization, which represents a challenge. Copyright © 2015 John Wiley & Sons, Ltd.
Lattice input on the inclusive flavor-breaking τ Vus puzzle
NASA Astrophysics Data System (ADS)
Maltman, Kim; Hudspith, Renwick; Lewis, Randy; Wolfe, Carl; Zanotti, James
2015-10-01
Recent versions of the standard approach to implementing the flavor-breaking finite-energy sum rule determination of Vus using spectral data obtained from hadronic tau decays produce values of Vus more than 3 sigma below the expectations of 3-family unitarity. We revisit this problem, focusing on systematic issues in the treatment of OPE contributions, employing lattice data for the relevant flavor-breaking correlator combination to help in understanding how to treat the slowly converging D = 2 series and to investigate potential D > 4 non-perturbative contributions. The results, in combination with observations from additional flavor-breaking continuum sum rules, are shown to suggest an alternate implementation of the flavor-breaking sum rule approach. This alternate analysis approach is shown to produce a significantly higher Vus than obtained using the assumptions of the conventional implementation, for reasons that are explained in detail. We also show that, when this approach is implemented using new preliminary results for the tau K pi branching fractions, the Vus obtained is in excellent agreement with that obtained from recent analyses of Kl3 using lattice input for f+(0).
Results of the 2013 UT modeling benchmark obtained with models implemented in CIVA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toullelan, Gwénaël; Raillon, Raphaële; Chatillon, Sylvain
The 2013 Ultrasonic Testing (UT) modeling benchmark concerns direct echoes from side drilled holes (SDH), flat bottom holes (FBH) and corner echoes from backwall-breaking artificial notches inspected with a matrix phased array probe. This communication presents the results obtained with the models implemented in the CIVA software: the pencil model is used to compute the field radiated by the probe, the Kirchhoff approximation is applied to predict the response of FBHs and notches, and the SOV (Separation Of Variables) model is used for the SDH responses. The comparison between simulated and experimental results is presented and discussed.
Acceleration and torque feedback for robotic control - Experimental results
NASA Technical Reports Server (NTRS)
McInroy, John E.; Saridis, George N.
1990-01-01
Gross motion control of robotic manipulators typically requires significant on-line computations to compensate for nonlinear dynamics due to gravity, Coriolis, centripetal, and friction nonlinearities. One controller proposed by Luo and Saridis avoids these computations by feeding back joint acceleration and torque. This study implements the controller on a Puma 600 robotic manipulator. Joint acceleration measurement is obtained by measuring linear accelerations of each joint, and deriving a computationally efficient transformation from the linear measurements to the angular accelerations. Torque feedback is obtained by using the previous torque sent to the joints. The implementation has stability problems on the Puma 600 due to the extremely high gains inherent in the feedback structure. Since these high gains excite frequency modes in the Puma 600, the algorithm is modified to decrease the gain inherent in the feedback structure. The resulting compensator is stable and insensitive to high frequency unmodeled dynamics. Moreover, a second compensator is proposed which uses acceleration and torque feedback, but still allows nonlinear terms to be fed forward. Thus, by feeding the increment in the easily calculated gravity terms forward, improved responses are obtained. Both proposed compensators are implemented, and the real time results are compared to those obtained with the computed torque algorithm.
2013-01-01
Background As fiscal constraints dominate health policy discussions across Canada and globally, priority-setting exercises are becoming more common to guide the difficult choices that must be made. In this context, it becomes highly desirable to have accurate estimates of the value of specific health care interventions. Economic evaluation is a well-accepted method to estimate the value of health care interventions. However, economic evaluation has significant limitations, which have led to an increase in the use of Multi-Criteria Decision Analysis (MCDA). One key concern with MCDA is the availability of the information necessary for implementation. In the fall of 2011, the Canadian Physiotherapy Association embarked on a project aimed at providing a valuation of physiotherapy services that is both evidence-based and relevant to resource allocation decisions. The framework selected for this project was MCDA. We report on how we addressed the challenge of obtaining some of the information necessary for MCDA implementation. Methods MCDA criteria were selected and areas of physiotherapy practice were identified. Building up the necessary information base was a three-step process. First, a literature review was conducted for each practice area, on each criterion. The next step was to conduct interviews with experts in each of the practice areas to critique the results of the literature review and to fill in gaps where there was no or insufficient literature. Finally, the results of the individual interviews were validated by a national committee to ensure consistency across all practice areas and to ensure that a national-level perspective was applied. Results Despite a lack of research evidence on many of the considerations relevant to the estimation of the value of physiotherapy services (the criteria), sufficient information was obtained to facilitate MCDA implementation at the local level.
Conclusions The results of this research project serve two purposes: 1) a method to obtain information necessary to implement MCDA is described, and 2) the results in terms of information on the benefits provided by each of the twelve areas of physiotherapy practice can be used by decision-makers as a starting point in the implementation of MCDA at the local level. PMID:23688138
Ozel, Halil Bariş
2016-07-01
In the present study, the effect on germination percentage of several pre-treatments applied to seeds of Oriental hornbeam (Carpinus orientalis), a species with wide geographical variation across Turkey, was investigated. For this purpose, 13 different pre-treatments were applied to seeds obtained from 17 different populations. According to the results obtained (control seeds excepted), the pre-treatment yielding the lowest germination percentage (8.1%) in Oriental hornbeam seeds was PT10 (keeping seeds for 90 min in sulfuric acid), while the highest germination percentage (86.58%) was obtained with pre-treatment PT13 (application of a 40% dose of a Baikal EM1 + biohumus mixture to the seeds). Among populations, the lowest germination percentage (40.50%) was observed in seeds collected from the P7 (Bartin-Kozcağiz) population, and the highest in seeds obtained from the P17 (Artvin-Hopa) population.
Simulation of Propagation of Compartment Fire on Building Facades
NASA Astrophysics Data System (ADS)
Simion, A.; Dragne, H.; Stoica, D.; Anghel, I.
2018-06-01
The façade fire simulation of buildings is carried out with the PyroSim numerical fire modeling program, following the implementation of a fire scenario in this simulation program. The scenario implemented in the PyroSim program by researchers from the INCERC Fire Safety Research and Testing Laboratory complied with the requirements of BS 8414. The results obtained from the run of the computational program allowed visual validation of the effluents at different time points from the start of the thermal load burning, as well as validation in terms of recorded temperatures. The results obtained are considered reasonable, the test being fully validated from the point of view of the implementation of the fire scenario, of the correct development of the effluents and of the temperature values [1].
Bakić-Mirić, Natasa
2010-01-01
The theory of multiple intelligences (MI) is considered an innovation in learning the English language because it helps students develop all eight intelligences which, in turn, represent the ways people understand the world around them, solve problems and learn. They are: verbal/linguistic, logical/mathematical, visual/spatial, bodily/kinaesthetic, musical/rhythmic, interpersonal, intrapersonal and naturalist. Also, by focusing on problem-solving activities, teachers who implement the theory of multiple intelligences encourage students not only to build on their existing language knowledge but also to learn new content and skills. The objective of this study has been to determine the importance of implementing the theory of multiple intelligences in the English language course syllabus at the University of Nis Medical School, and to examine the ways in which the theory has been implemented, particularly in one lecture for junior-year students of pharmacy. The English language final exam results from February 2009, when compared with the final exam results from June 2007, prior to the implementation of MI theory, showed the following: out of 80 junior-year students of pharmacy, 40 obtained grade 10 (outstanding), 16 obtained grade 9 (excellent), 11 obtained grade 8 (very good), 4 obtained grade 7 (good) and 9 obtained grade 6 (pass). No student failed. The implementation of the theory of multiple intelligences in the English language course syllabus at the University of Nis Medical School has had a positive impact on learning the English language and has increased students' interest in language learning. Generally speaking, this theory offers a better understanding of students' intelligence and a greater appreciation of their strengths.
It provides numerous opportunities for students to use and develop all eight intelligences not just the few they excel in prior to enrolling in a university or college.
Acceleration for 2D time-domain elastic full waveform inversion using a single GPU card
NASA Astrophysics Data System (ADS)
Jiang, Jinpeng; Zhu, Peimin
2018-05-01
Full waveform inversion (FWI) is a challenging procedure due to the high computational cost related to the modeling, especially for the elastic case. The graphics processing unit (GPU) has become a popular device for high-performance computing (HPC). To reduce the long computation time, we design and implement a GPU-based 2D elastic FWI (EFWI) in the time domain using a single GPU card. We parallelize the forward modeling and gradient calculations using the CUDA programming language. To overcome the limitation of the relatively small global memory on the GPU, a boundary-saving strategy is exploited to reconstruct the forward wavefield. Moreover, the L-BFGS optimization method used in the inversion improves the convergence of the misfit function. A multiscale inversion strategy is adopted in the workflow to obtain accurate inversion results. In our tests, the GPU-based implementations using a single GPU device achieve a >15 times speedup in forward modeling and about a 12 times speedup in gradient calculation, compared with eight-core CPU implementations optimized by OpenMP. The results from the GPU implementations are verified to have sufficient accuracy by comparison with the results obtained from the CPU implementations.
NASA Astrophysics Data System (ADS)
García, Aday; Santos, Lucana; López, Sebastián.; Callicó, Gustavo M.; Lopez, Jose F.; Sarmiento, Roberto
2014-05-01
Efficient onboard satellite hyperspectral image compression represents a necessity and a challenge for current and future space missions. Therefore, it is mandatory to provide hardware implementations for this type of algorithm in order to meet the constraints required for onboard compression. In this work, we implement the Lossy Compression for Exomars (LCE) algorithm on an FPGA by means of high-level synthesis (HLS) in order to shorten the design cycle. Specifically, we use the CatapultC HLS tool to obtain a VHDL description of the LCE algorithm from C-language specifications. Two different approaches are followed for HLS: on one hand, introducing the whole C-language description into CatapultC; on the other hand, splitting the C-language description into functional modules to be implemented independently with CatapultC, connecting and controlling them by an RTL description code without HLS. In both cases the goal is to obtain an FPGA implementation. We explain the several changes applied to the original C-language source code in order to optimize the results obtained by CatapultC for both approaches. Experimental results show low area occupancy of less than 15% for a SRAM-based Virtex-5 FPGA and a maximum frequency above 80 MHz. Additionally, the LCE compressor was implemented into an RTAX2000S antifuse-based FPGA, showing an area occupancy of 75% and a frequency around 53 MHz. All these serve to demonstrate that the LCE algorithm can be efficiently executed on an FPGA onboard a satellite. A comparison between both implementation approaches is also provided. The performance of the algorithm is finally compared with implementations on other technologies, specifically a graphics processing unit (GPU) and a single-threaded CPU.
Performance validation of the ANSER control laws for the F-18 HARV
NASA Technical Reports Server (NTRS)
Messina, Michael D.
1995-01-01
The ANSER control laws were implemented in Ada by NASA Dryden for flight test on the High Alpha Research Vehicle (HARV). The Ada implementation was tested in the hardware-in-the-loop (HIL) simulation, and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model.' This report documents the performance validation test results between these implementations. This report contains the ANSER performance validation test plan, HIL versus batch time-history comparisons, simulation scripts used to generate checkcases, and detailed analysis of discrepancies discovered during testing.
Determination of the spin and recovery characteristics of a typical low-wing general aviation design
NASA Technical Reports Server (NTRS)
Tischler, M. B.; Barlow, J. B.
1980-01-01
The equilibrium spin technique implemented in a graphical form for obtaining spin and recovery characteristics from rotary balance data is outlined. Results of its application to recent rotary balance tests of the NASA Low-Wing General Aviation Aircraft are discussed. The present results, which are an extension of previously published findings, indicate the ability of the equilibrium method to accurately evaluate spin modes and recovery control effectiveness. A comparison of the calculated results with available spin tunnel and full scale findings is presented. The technique is suitable for preliminary design applications as determined from the available results and data base requirements. A full discussion of implementation considerations and a summary of the results obtained from this method to date are presented.
Monteiro, Baltazar Ricardo; Pisco, Ana Maria Silva Azenha; Candoso, Fátima; Bastos, Sónia; Reis, Magda
2017-03-01
Contractualization consists of the development and implementation of a documented agreement whereby one party (the payer) provides compensation to the other party (the provider) in exchange for a set of health services to a targeted population. We describe, through a case study, the history and the process of implementation of primary health care contractualization (since 1992) in Portugal, emphasizing the consolidation and future challenges of the primary healthcare reform started in 2005. This article uses a case study to reflect on the results obtained in the Cluster of Health Centers of the Northern West, Regional Administration of Lisbon and Tagus Valley, between 2009 and 2015, following the implementation of contractualization. It was found that the incentive-related payments will have to be weighted against the results obtained, which are strongly influenced by epidemiological and socioeconomic change.
An Implementation of Wireless Body Area Networks for Improving Priority Data Transmission Delay.
Gündoğdu, Köksal; Çalhan, Ali
2016-03-01
The rapid growth of wireless sensor networks has enabled human health monitoring of patients using body sensor nodes that gather and evaluate human body parameters and movements. This study describes both the simulation model and the implementation of a new traffic-sensitive wireless body area network using a non-preemptive priority queue discipline. A wireless body area network implementation employing TDMA is designed with three different priorities of data traffic. In addition, a coordinator node containing the non-preemptive priority queue is implemented in this study. We have also developed, modeled and simulated example network scenarios using the Riverbed Modeler simulation software in order to verify the implementation results. The simulation results obtained under various network load conditions are consistent with the implementation results.
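The non-preemptive priority discipline described in this abstract can be sketched in a few lines; the class below is purely illustrative and is not taken from the paper's Riverbed Modeler implementation:

```python
import heapq

# Minimal sketch of a three-priority, non-preemptive coordinator queue.
# Class and field names are illustrative, not from the paper.
class CoordinatorQueue:
    def __init__(self):
        self._heap = []   # (priority, seq, packet); lower number = more urgent
        self._seq = 0     # FIFO tie-break within a priority class

    def enqueue(self, priority: int, packet):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        """Called only when the server is idle: non-preemptive discipline."""
        return heapq.heappop(self._heap)[2] if self._heap else None

q = CoordinatorQueue()
q.enqueue(2, "ordinary ECG sample")
q.enqueue(0, "emergency alarm")
q.enqueue(1, "on-demand reading")
print(q.dequeue())  # prints "emergency alarm"
```

Because `dequeue` is only invoked once the packet currently in service has finished, a lower-priority packet already being transmitted is never interrupted, which is what makes the discipline non-preemptive.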
Performance Validation of Version 152.0 ANSER Control Laws for the F-18 HARV
NASA Technical Reports Server (NTRS)
Messina, Michael D.
1996-01-01
The Actuated Nose Strakes for Enhanced Rolling (ANSER) Control Laws were modified as a result of Phase 3 F/A-18 High Alpha Research Vehicle (HARV) flight testing. The control law modifications for the next software release were designated version 152.0. The Ada implementation was tested in the Hardware-In-the-Loop (HIL) simulation and results were compared to those obtained with the NASA Langley batch Fortran implementation of the control laws which are considered the 'truth model.' This report documents the performance validation test results between these implementations for ANSER control law version 152.0.
Developing evaluation instrument based on CIPP models on the implementation of portfolio assessment
NASA Astrophysics Data System (ADS)
Kurnia, Feni; Rosana, Dadan; Supahar
2017-08-01
This study aimed to develop an evaluation instrument constructed on the CIPP model for the implementation of portfolio assessment in science learning. The study used the research and development (R&D) method, adapting the 4-D model to the development of a non-test instrument, with the evaluation instrument constructed on the CIPP model. CIPP is the abbreviation of Context, Input, Process, and Product. The techniques of data collection were interviews, questionnaires, and observations. The data collection instruments were: 1) interview guidelines for the analysis of the problems and the needs, 2) a questionnaire to assess the level of accomplishment of the portfolio assessment instrument, and 3) observation sheets for teachers and students to elicit responses to the portfolio assessment instrument. The data obtained were quantitative data from several validators. The validators consisted of two lecturers as evaluation experts, two practitioners (science teachers), and three colleagues. This paper shows the results of content validity obtained from the validators and the analysis of the data using Aiken's V formula. The results of this study show that the evaluation instrument based on the CIPP model is suitable for evaluating the implementation of portfolio assessment instruments. Based on the judgments of the experts, practitioners, and colleagues, the Aiken's V coefficient was between 0.86 and 1.00, which means that the instrument is valid and can be used in the limited trial and operational field trial.
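As a reading aid, Aiken's V for a single item is V = Σ(r − lo) / (n·(hi − lo)), where r is each rater's score and lo/hi bound the rating scale. The sketch below uses hypothetical ratings from seven validators on a 1-5 scale, not the study's data:

```python
# Hedged sketch of the Aiken's V content-validity computation;
# the ratings below are hypothetical, not the study's data.
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V = sum(r - lo) / (n * (hi - lo)) for n raters."""
    n = len(ratings)
    return sum(r - lo for r in ratings) / (n * (hi - lo))

# Seven validators (2 experts, 2 practitioners, 3 colleagues), 1-5 scale
print(aikens_v([5, 5, 4, 5, 4, 5, 5]))  # ≈ 0.93
```

A value of 1.00 means every rater gave the maximum score; items scoring in the 0.86-1.00 band reported above would be judged valid under common Aiken's V thresholds.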
Implementation of Magnetic Dipole Interaction in the Planewave-Basis Approach for Slab Systems
NASA Astrophysics Data System (ADS)
Oda, Tatsuki; Obata, Masao
2018-06-01
We implemented the magnetic dipole interaction (MDI) in a first-principles planewave-basis electronic structure calculation based on spin density functional theory. This implementation, employing the two-dimensional Ewald summation, enables us to obtain the total magnetic anisotropy energy of slab materials with contributions originating from both spin-orbit and magnetic dipole-dipole couplings on the same footing. The implementation was demonstrated using an iron square lattice. The result indicates that the magnetic anisotropy of the MDI is much less than that obtained from the atomic magnetic moment model due to the prolate quadrupole component of the spin magnetic moment density. We discuss the reduction in the anisotropy of the MDI in the case of modulation of the quadrupole component and the effect of magnetic field arising from the MDI on atomic scale.
NASA Technical Reports Server (NTRS)
Arya, Vinod K.; Halford, Gary R.
1993-01-01
The feasibility of a viscoplastic model incorporating two back stresses and a drag strength is investigated for performing nonlinear finite element analyses of structural engineering problems. To demonstrate suitability for nonlinear structural analyses, the model is implemented into a finite element program and analyses for several uniaxial and multiaxial problems are performed. Good agreement is shown between the results obtained using the finite element implementation and those obtained experimentally. The advantages of using advanced viscoplastic models for performing nonlinear finite element analyses of structural components are indicated.
Laquintana, Dario; Pazzaglia, Silvia; Demarchi, Antonia
2017-01-01
The new methods to define the staffing requirements for doctors, nurses and nurses' aides: an example of their implementation in an Italian hospital. The Italian government, after the transposition of European Union legislation on working hours, made a declaration of commitment to increase the number of staff of the National Health Service (NHS). The new method for assessing staffing needs replaces the old one, which dated back several decades. The aim was to implement the method proposed by the Ministry of Health in an Italian hospital and assess its impact on staffing and costs. The model was implemented on all the wards by multiplying the minutes of care expected in 2016, dividing the result by 60 to obtain the hours of care, and further dividing by the number of yearly hours of work of a nurse (1418). The same was done for nurses' aides. The minutes of care were related to the mean weight of the Diagnosis Related Groups of the ward, and the results obtained were compared to the actual staffing of nurses and nurses' aides. The costs of the differences were calculated. The implementation of the model produced a surplus of 23 nurses and a shortfall of 95 nurses' aides compared to the actual staffing, with a cost increase of €1,828,562.00. The results obtained and the criticisms received so far show the need for major changes. The data from international studies that associate staffing with patient outcomes, and the nurse/patient ratio, are macro-indicators already available that may orient choices and investments in the health care professions.
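The staffing arithmetic described above (expected minutes of care ÷ 60 ÷ 1418 yearly working hours per nurse) can be illustrated as follows; the care-minute figure is hypothetical, and only the 1418-hour divisor comes from the text:

```python
# Sketch of the Ministry of Health staffing model described above.
# The care-minute input is hypothetical; 1418 annual working hours
# per nurse is the figure quoted in the abstract.
ANNUAL_HOURS_PER_NURSE = 1418

def required_nurses(expected_care_minutes: float) -> float:
    """Convert a ward's expected yearly minutes of care into nurse FTEs."""
    care_hours = expected_care_minutes / 60        # minutes -> hours
    return care_hours / ANNUAL_HOURS_PER_NURSE     # hours -> full-time nurses

# Example: a ward expected to deliver 2,000,000 minutes of care in 2016
fte = required_nurses(2_000_000)
print(round(fte, 1))  # ≈ 23.5 nurse FTEs
```

The same two divisions, with a different annual-hours figure, would give the requirement for nurses' aides; comparing the result to actual headcount yields the surplus or shortfall reported in the abstract.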
ERIC Educational Resources Information Center
Collison, Christina G.; Kim, Thomas; Cody, Jeremy; Anderson, Jason; Edelbach, Brian; Marmor, William; Kipsang, Rodgers; Ayotte, Charles; Saviola, Daniel; Niziol, Justin
2018-01-01
Reformed experimental activities (REActivities) are an innovative approach to the delivery of the traditional material in an undergraduate organic chemistry laboratory. A description of the design and implementation of REActivities at both a four- and two-year institution is discussed. The results obtained using a reformed teaching observational…
Cooperative Learning in a Soil Mechanics Course at Undergraduate Level
ERIC Educational Resources Information Center
Pinho-Lopes, M.; Macedo, J.; Bonito, F.
2011-01-01
The implementation of the Bologna Process enforced a significant change on traditional learning models, which were focused mainly on the transmission of knowledge. The results obtained in a first attempt at implementation of a cooperative learning model in the Soil Mechanics I course of the Department of Civil Engineering of the University of…
Introduction of steered molecular dynamics into UNRES coarse-grained simulations package.
Sieradzan, Adam K; Jakubowski, Rafał
2017-03-30
In this article, an implementation of steered molecular dynamics (SMD) in the coarse-grained UNited RESidue (UNRES) simulations package is presented. Two variants of SMD have been implemented: with a constant force and with a constant velocity. A major advantage of implementing SMD in the UNRES force field is that it allows pulling at speeds significantly lower than those accessible in simulations with an all-atom representation of the system, within a reasonable computational time. Therefore, pulling speeds closer to those that appear in atomic force spectroscopy can be obtained. The newly implemented method has been tested in a microcanonical run to verify the influence of the introduced artificial constraints on conservation of the total energy of the system. Moreover, as a time-dependent artificial force was introduced, the thermostat behavior was tested. The new method was also tested via unfolding of the Fn3 domain of human contactin 1 protein and the I27 titin domain. The results obtained were compared with a Gō-like force field, an all-atom force field, and experimental results. © 2017 Wiley Periodicals, Inc.
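For readers unfamiliar with the constant-velocity variant, the applied force is typically a harmonic restraint to a tether point moving at constant speed, so the force grows as the pulled coordinate lags behind. The sketch below uses generic SMD symbols (k, v) and is not UNRES-specific input or code:

```python
# Hedged sketch of the constant-velocity SMD restraint: the pulled
# coordinate x is tethered by a spring of stiffness k to a dummy point
# moving at speed v from the initial position x0. Generic symbols only;
# not UNRES parameter names.
def smd_force(k: float, v: float, t: float, x: float, x0: float) -> float:
    """Spring force F = k * (v*t - (x - x0)) on the pulled coordinate."""
    return k * (v * t - (x - x0))

# The constant-force variant simply applies F = const, independent of x and t.
print(smd_force(k=10.0, v=0.001, t=500.0, x=0.2, x0=0.0))  # ≈ 3.0
```

Lower v keeps the system closer to equilibrium during pulling, which is why access to slow pulling speeds, as emphasized in the abstract, matters for comparison with atomic force spectroscopy.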
Symbolic Algebra Development for Higher-Order Electron Propagator Formulation and Implementation.
Tamayo-Mendoza, Teresa; Flores-Moreno, Roberto
2014-06-10
Through the use of symbolic algebra, implemented in a program, the algebraic expressions for the elements of the self-energy matrix of the electron propagator at different orders were obtained. In addition, a module for the software package Lowdin was automatically generated. Second- and third-order electron propagator results have been calculated to test the correct operation of the program. It was found that the Fortran 90 modules obtained automatically with our algorithm succeeded in calculating ionization energies with the second- and third-order electron propagator in the diagonal approximation. The strategy for the development of this symbolic algebra program is described in detail. This represents a solid starting point for the automatic derivation and implementation of higher-order electron propagator methods.
Design and implementation of a biomimetic turtle hydrofoil for an autonomous underwater vehicle.
Font, Davinia; Tresanchez, Marcel; Siegentahler, Cedric; Pallejà, Tomàs; Teixidó, Mercè; Pradalier, Cedric; Palacin, Jordi
2011-01-01
This paper presents the design and implementation of a turtle hydrofoil for an Autonomous Underwater Vehicle (AUV). The final design of the AUV must have navigation performance like that of a turtle, which has also been the biomimetic inspiration for the design of the hydrofoil and propulsion system. The hydrofoil design is based on a National Advisory Committee for Aeronautics (NACA) 0014 hydrodynamic profile. During the design stage, four different propulsion systems were compared in terms of propulsion path, compactness, sealing and required power. The final implementation is based on a ball-and-socket mechanism because it is very compact and provides three degrees of freedom (DoF) to the hydrofoil with very few restrictions on the propulsion path. The propulsion obtained with the final implementation of the hydrofoil has been empirically evaluated in a water channel by comparing different motion strategies. The results obtained confirm that the proposed turtle hydrofoil, controlled with a three-DoF mechanism, can be used in the future implementation of the planned AUV.
Transition properties from the Hermitian formulation of the coupled cluster polarization propagator
NASA Astrophysics Data System (ADS)
Tucholska, Aleksandra M.; Modrzejewski, Marcin; Moszynski, Robert
2014-09-01
Theory of one-electron transition density matrices has been formulated within the time-independent coupled cluster method for the polarization propagator [R. Moszynski, P. S. Żuchowski, and B. Jeziorski, Coll. Czech. Chem. Commun. 70, 1109 (2005)]. Working expressions have been obtained and implemented with the coupled cluster method limited to single, double, and linear triple excitations (CC3). Selected dipole and quadrupole transition probabilities of the alkaline earth atoms, computed with the new transition density matrices, are compared to the experimental data. Good agreement between theory and experiment is found. The results obtained with the new approach are of the same quality as the results obtained with the linear response coupled cluster theory. The one-electron density matrices for the ground state in the CC3 approximation have also been implemented. The dipole moments for a few representative diatomic molecules have been computed with several variants of the new approach, and the results are discussed to choose the approximation with the best balance between accuracy and computational efficiency.
Implementation of an image processing technique for video motion analysis during the canine gait cycle
NASA Astrophysics Data System (ADS)
López, G.; Hernández, J. O.
2017-01-01
Nowadays the analysis of movement, and more specifically of gait, has ceased to be exclusive to our species: technological advances and engineering implementations have been combined to obtain data and information on the gait cycle in other animal species. The aim of this paper is to analyze the canine gait in order to obtain results that describe the behavior of the limbs during the gait cycle. The research was performed in four stages: 1. dog training, in which the steps of adaptation and trust are developed; 2. filming of the gait cycle; 3. data acquisition, to obtain values that describe the canine motion cycle; and 4. results, obtaining the kinematic variables involved in the gait. These variables are essential for determining the behavior of the limbs, as well as for the development of prostheses or orthoses. This project was carried out with conventional equipment and with easily accessible computational tools.
NASA Astrophysics Data System (ADS)
Abeygunawardane, Saranga Kumudu
2018-02-01
Any electrical utility prefers to implement demand side management (DSM) and change the shape of the demand curve in a beneficial manner. This paper aims to assess the financial gains (or losses) to the generating sector through the implementation of DSM programs. An optimization algorithm is developed to find the optimal generation mix that minimizes the daily total generating cost, which includes the daily generating cost as well as the environmental damage cost. The proposed optimization algorithm is used to find the daily total generating cost for the base case and for several DSM programs, using data obtained from the Sri Lankan power system. Results obtained for the DSM programs are compared with the results obtained for the base case to assess the financial benefits of DSM to the generating sector.
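The abstract does not describe its optimization algorithm in detail, but the core idea of dispatching generation to minimize a combined generating-plus-damage cost can be illustrated with a simple merit-order sketch. The plant list, capacities, costs, and demand curve below are hypothetical illustrations, not the Sri Lankan data used in the paper:

```python
# Merit-order dispatch sketch: serve each hour's demand with the cheapest
# available units, where the unit cost combines fuel and environmental
# damage cost. All figures are made up for illustration.
plants = [
    # (name, capacity in MW, total cost in $/MWh = fuel + damage)
    ("hydro", 400, 10),
    ("coal",  600, 55),
    ("oil",   300, 90),
]

def dispatch_cost(demand_mw):
    """Cost of serving one hour of demand, cheapest plants first."""
    cost, remaining = 0.0, demand_mw
    for name, cap, unit_cost in sorted(plants, key=lambda p: p[2]):
        g = min(cap, remaining)          # generation taken from this plant
        cost += g * unit_cost
        remaining -= g
        if remaining <= 0:
            break
    return cost

# Daily total cost for a hypothetical 24-hour demand curve (MW per hour);
# a DSM program would be assessed by reshaping this curve and re-costing.
base = [500 if 8 <= h < 20 else 350 for h in range(24)]
daily_cost = sum(dispatch_cost(d) for d in base)
```

A DSM scenario would then be evaluated by modifying the `base` profile (e.g. shifting peak load to off-peak hours) and comparing the resulting `daily_cost` against the base case.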
Nguimdo, Romain Modeste; Lacot, Eric; Jacquin, Olivier; Hugon, Olivier; Van der Sande, Guy; Guillet de Chatellus, Hugues
2017-02-01
Reservoir computing (RC) systems are computational tools for information processing that can be fully implemented in optics. Here, we experimentally and numerically show that an optically pumped laser subject to delayed optical feedback can yield results similar to those obtained for electrically pumped lasers. Unlike in previous implementations, the input data are injected at a time interval that is much larger than the feedback time delay. These data are directly coupled to the feedback light beam. Our results illustrate possible new avenues for RC implementations for prediction tasks.
Real-time implementation of logo detection on open source BeagleBoard
NASA Astrophysics Data System (ADS)
George, M.; Kehtarnavaz, N.; Estevez, L.
2011-03-01
This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.
NASA Technical Reports Server (NTRS)
Komerath, Narayanan M.; Schreiber, Olivier A.
1987-01-01
The wake model was implemented on a VAX 750 and a Microvax II workstation, with online graphics capability provided through a DISSPLA graphics package. The rotor model used by Beddoes was significantly extended to include azimuthal variations due to forward flight and a simplified scheme for locating critical points where vortex elements are placed. A test case was obtained for validation of the predictions of induced velocity. Comparison of the results indicates that the code requires some more features before satisfactory predictions can be made over the whole rotor disk. Specifically, shed vorticity due to the azimuthal variation of blade loading must be incorporated into the model. Interactions between vortices shed from the four blades of the model rotor must be included. The Scully code for calculating the velocity field is being modified in parallel with these efforts to enable comparison with experimental data. To date, some comparisons with flow visualization data obtained at Georgia Tech were performed and show good agreement for the isolated rotor case. Comparison of time-resolved velocity data obtained at Georgia Tech also shows good agreement. Modifications are being implemented to enable generation of time-averaged results for comparison with NASA data.
Vet, Raymond; de Wit, John B F; Das, Enny
2014-02-01
This study assessed the separate and joint effects of having a goal intention and the completeness of implementation intention formation on the likelihood of attending an appointment to obtain vaccination against the hepatitis B virus among men who have sex with men (MSM) in the Netherlands. Extending previous research, it was hypothesized that to be effective in promoting vaccination, implementation intention formation not only requires a strong goal intention, but also complete details specifying when, where and how to make an appointment to obtain hepatitis B virus vaccination among MSM. MSM at risk for hepatitis B virus (N = 616), with strong or weak intentions to obtain hepatitis B virus vaccination, were randomly assigned to form an implementation intention or not. Completeness of implementation intentions was rated and hepatitis B virus uptake was assessed through data linkage with the joint vaccination registry of the collaborating Public Health Services. Having a strong goal intention to obtain hepatitis B virus vaccination and forming an implementation intention, each significantly and independently increased the likelihood of MSM obtaining hepatitis B virus vaccination. In addition, MSM who formed complete implementation intentions were more successful in obtaining vaccination (p < 0.01). The formation of complete implementation intentions was promoted by strong goal intentions (p < 0.01).
An M-step preconditioned conjugate gradient method for parallel computation
NASA Technical Reports Server (NTRS)
Adams, L.
1983-01-01
This paper describes a preconditioned conjugate gradient method that can be effectively implemented on both vector machines and parallel arrays to solve sparse symmetric and positive definite systems of linear equations. The implementation on the CYBER 203/205 and on the Finite Element Machine is discussed and results obtained using the method on these machines are given.
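As a generic illustration of the preconditioned conjugate gradient method for sparse symmetric positive definite systems (a serial sketch with a simple Jacobi preconditioner; this is not the paper's m-step preconditioner nor its CYBER 203/205 or Finite Element Machine implementation):

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for an SPD system Ax = b.

    A     : SPD matrix (n x n)
    M_inv : function applying the preconditioner inverse, z = M^{-1} r
    """
    x = np.zeros(len(b))
    r = b - A @ x                    # initial residual
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)        # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)                 # apply preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # new conjugate search direction
        rz = rz_new
    return x

# Example: Jacobi (diagonal) preconditioner on a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = preconditioned_cg(A, b, lambda r: r / d)
```

The Jacobi preconditioner is the simplest choice and parallelizes trivially, which is part of why preconditioner structure matters on vector and parallel machines.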
A Decade of Education Reform in Thailand: Broken Promise or Impossible Dream?
ERIC Educational Resources Information Center
Hallinger, Philip; Lee, Moosung
2011-01-01
This study addresses the perceived gap between the vision of education reform in Thailand embodied in its Education Reform Law of 1999 and the results of implementation a decade later. Drawing upon opportunistic data obtained from a sample of 162 Thai school principals, we analyze trends in reform implementation across schools in all regions and…
Standardization of ¹³¹I: implementation of CIEMAT/NIST method at BARC, India.
Kulkarni, D B; Anuradha, R; Reddy, P J; Joseph, Leena
2011-10-01
The CIEMAT/NIST efficiency tracing method using a ³H standard was implemented at the Radiation Safety Systems Division, Bhabha Atomic Research Centre (BARC), for the standardization of ¹³¹I radioactive solution. Measurements were also carried out using the 4π β-γ coincidence counting system maintained as a primary standard at the laboratory. The implementation of the CIEMAT/NIST method was verified by comparing the activity concentration obtained in the laboratory with the average value of the APMP intercomparison (Yunoki et al., in progress, (APMP.RI(II)-K2.I-131)). The results obtained by the laboratory are linked to the CIPM Key Comparison Reference Value (KCRV) through the equivalent activity value of the National Metrology Institute of Japan (NMIJ) (Yunoki et al., in progress, (APMP.RI(II)-K2.I-131)), which was the pilot laboratory for the intercomparison. The procedure employed to standardize ¹³¹I by the CIEMAT/NIST efficiency tracing technique is presented. The activity concentrations obtained have been normalized with the activity concentration measured by NMIJ to maintain confidentiality of results until the Draft-A report is accepted by all participants. The normalized activity concentration obtained with the CIEMAT/NIST method was 0.9985 ± 0.0035 kBq/g and that obtained using the 4π β-γ coincidence counting method was 0.9909 ± 0.0046 kBq/g, as on 20 March 2009, 0 h UTC. The normalized activity concentration measured by the NMIJ was 1 ± 0.0024 kBq/g. The normalized average of the activity concentrations of all the participating laboratories was 1.004 ± 0.028 kBq/g. The results obtained in the laboratory are comparable with the other international standards within the uncertainty limits. Copyright © 2011 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobina, C.B.; Silva, E.R.C. da; Lima, A.M.N.
This paper investigates the PWM operation of a four-switch three-phase inverter (FSTPI) in the case of digital implementation. Different switching sequence strategies for vector control are described, and a digital scalar method is also presented. The influence of different switching patterns on the output voltage symmetry, current waveform and switching frequency is examined. The results obtained by employing the vector and scalar strategies are compared and a relationship between them is established. This comparison is based on an analytical study and is corroborated both by computer simulations and by experimental results. The vector approach eases the understanding and analysis of the FSTPI, as well as the choice of a PWM pattern. However, similar results may be obtained through the scalar approach, which has a simpler implementation. Experimental results of the use of the FSTPI and digital PWM to control an induction motor are presented.
A Real-Time Earthquake Precursor Detection Technique Using TEC from a GPS Network
NASA Astrophysics Data System (ADS)
Alp Akyol, Ali; Arikan, Feza; Arikan, Orhan
2016-07-01
Anomalies have been observed in the ionospheric electron density distribution prior to strong earthquakes. However, most of the reported results are obtained by after-the-fact earthquake analysis, so their implementation in practice is highly problematic. Recently, a novel earthquake precursor detection technique based on spatio-temporal analysis of Total Electron Content (TEC) data obtained from the Turkish National Permanent GPS Network (TNPGN) was developed by the IONOLAB group (www.ionolab.org). In the present study, the developed detection technique is implemented in a causal setup over the available data set in a test phase, which enables real-time implementation. The performance of the developed earthquake precursor detection technique is evaluated using 10-fold cross validation over the data obtained in 2011. Among the 23 earthquakes with magnitudes higher than 5, the developed technique can detect precursors of 14 earthquakes while producing 8 false alarms. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
Simulation of fault performance of a diesel engine driven brushless alternator through PSPICE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayanan, S.S.Y.; Ananthakrishnan, P.; Hangari, V.U.
1995-12-31
Analysis of the fault performance of a brushless alternator with damper windings in the main alternator has been handled ab initio as a total modeling and simulation problem, through proper application of Park's equivalent circuit approach individually to the main and exciter alternator units of the brushless alternator, and the model has been implemented through PSPICE. The accuracy of the parameters used in the modeling and the results obtained through the PSPICE implementation are then evaluated for a specific 125 kVA brushless alternator in two stages, as follows: first, by comparison of the predicted fault performance obtained from simulation of the 125 kVA main alternator alone, treated as a conventional alternator, with the results obtained through the use of closed-form analytical expressions available in the literature for fault currents and torques in such conventional alternators; second, by comparison of some of the simulation results with those obtained experimentally on the brushless alternator itself. To enable proper calculation of derating factors to be used in the design of such brushless alternators, the simulation results also include harmonic analysis of the steady-state fault currents and torques. Throughout these studies, the brushless alternator is treated as being on no load at the instant of occurrence of the fault.
[Implementation of a patient safety strategy in primary care of the Community of Madrid].
Cañada Dorado, A; Drake Canela, M; Olivera Cañadas, G; Mateos Rodilla, J; Mediavilla Herrera, I; Miquel Gómez, A
2015-01-01
This paper describes the implementation of a patient safety strategy in primary care within the new organizational and functional structure that was created in October 2010 to cover the single primary health care area of the Community of Madrid. The results obtained in Patient Safety after the implementation of this new model over the first two years of its development are also presented. Copyright © 2014 SECA. Published by Elsevier Espana. All rights reserved.
The Feasibility of Job Sharing by Public Employees in Hawaii. Some Preliminary Considerations.
ERIC Educational Resources Information Center
Nishimura, Charles H.; And Others
A two-part study was conducted to determine the feasibility of implementing job-sharing in state and county governments in Hawaii. First, a literature review was performed to obtain an overview of the job-sharing concept and of the results of its implementation in other state and local governments and businesses. The legislation relating to…
NASA Astrophysics Data System (ADS)
Arce, A.; Tarquis, A. M.; Cartagena, M. C.
2012-04-01
The Bologna Process aims to improve the quality of education, mobility, diversity and competitiveness, and involves three fundamental changes: transformation of the structure of degrees, changes in teaching methods, and implementation of quality assurance systems. Once the degree structure offered by the E.T.S. Agronomic Engineer (ETSIA) had been defined and new learning methods had been introduced, this work focused on the third point: the implementation of quality assurance systems, together with the planning of the three new degrees that the ETSIA, Madrid, began to offer during the 2010-2011 course. Since the academic year 2010-2011 was the first year of implementation of the Bologna Process, this paper attempts to compare the academic results obtained by students in the three new degrees in the subjects of Chemistry I and II with the results obtained in the same subjects in the Agronomic Engineering degree over the past four years. The academic results have been lower than expected and worse than in previous courses. The paper tries to account for these results on the basis of the percentage of compliance with teachers' guidance, and of student participation and training prior to beginning the course. Finally, possible solutions are proposed to try to correct these results in future courses, with the aim of improving the rates of efficiency, success and absenteeism, which are especially important in the first year since they condition the dropout rate of these new degrees.
Field-programmable analogue arrays for the sensorless control of DC motors
NASA Astrophysics Data System (ADS)
Rivera, J.; Dueñas, I.; Ortega, S.; Del Valle, J. L.
2018-02-01
This work presents the analogue implementation of a sensorless controller for direct current motors based on the super-twisting (ST) sliding mode technique, by means of field programmable analogue arrays (FPAA). The novelty of this work is twofold: first, the use of the ST algorithm in a sensorless scheme for DC motors; and second, the method of implementing this type of sliding mode controller in FPAAs. The ST algorithm reduces the chattering problem produced by the deliberate use of the sign function in classical sliding mode approaches. The advantages of the analogue implementation over a digital one are that the controller is not digitally approximated, the controller gains need not be fine-tuned, and the implementation does not require analogue-to-digital and digital-to-analogue converter circuits. In addition, the FPAA is a reconfigurable technology with lower cost and power consumption. Simulation and experimental results were recorded, showing a more accurate transient response and lower power consumption with the proposed implementation method than with a digital implementation. A more accurate performance of the DC motor is also obtained with the proposed sensorless ST technique when compared with a classical sliding mode approach.
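The super-twisting algorithm itself is standard, even though the paper implements it in analogue hardware. A discrete-time software sketch on a toy first-order plant with a bounded matched disturbance is shown below; the plant, gains, and step size are illustrative assumptions, not the cited FPAA design:

```python
import math

# Super-twisting sliding mode on dx/dt = d(t) + u, driving x -> 0.
# Control law: u = -K1*sqrt(|s|)*sign(s) + v,  dv/dt = -K2*sign(s).
# Gains chosen to satisfy the usual ST conditions for |d'(t)| <= 0.5;
# these values are illustrative, not the paper's.
K1, K2 = 3.0, 2.0
dt = 1e-3
x, v = 1.0, 0.0

for i in range(10000):                        # 10 s of simulated time
    t = i * dt
    d = 0.5 * math.sin(t)                     # bounded matched disturbance
    s = x                                     # sliding variable
    sgn = math.copysign(1.0, s)
    u = -K1 * math.sqrt(abs(s)) * sgn + v     # continuous ST term
    v += -K2 * sgn * dt                       # integral (second) ST term
    x += (d + u) * dt                         # Euler step of the plant
```

The sqrt term smooths the control near the sliding surface while the integral term absorbs the disturbance, which is why ST chatters far less than a plain sign-function sliding mode controller.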
[Implementation of vaccinations in Chechen refugees' children in Poland].
Hartmann, Piotr; Jackowska, Teresa
2010-01-01
Poland is a destination country or temporary living place for many refugees from Chechnya. Refugees are provided with full, free-of-charge health care, including the vaccination programme according to the current National Vaccination Programme (NVP). The aim was to assess the implementation of vaccinations in Chechen refugees' children in Poland. The group comprised 310 children from the Centre for Foreigners in Warsaw-Bielany. The mean age of the examined children was 7.5 years. The investigations were performed three times during the study: the first in a group of 220 children in June, the second in 303 children in August, and the third in 310 children in October 2008 (the differences in the numbers resulted from changes in the size of the Chechen population living in the Centre). The vaccination records were assessed, paying special attention to the implementation of vaccinations. During the two consecutive examinations, the implementation of vaccination recommendations was analyzed, as well as the availability of this information in the records. At every visit, a history was obtained on the reasons for the vaccination programme not having been implemented. Information on vaccination programme implementation was available in 19, 30 and 45% of the analyzed records from the Centre at the first, second and third visit, respectively. The majority of the obtained data regarded the implementation of vaccinations in children in the first year of life (85%), while the least data concerned vaccinations in children over 12 years of age (30%). Similar results were obtained when analyzing a group of 168 children present at all three visits (18, 32 and 48%, respectively).
The reasons for non-implementation of vaccinations were as follows: (a) low parents' awareness of the necessity of vaccinations; (b) lack of self-discipline (every other child did not report for a scheduled appointment); (c) relocation of refugees to other Centres; (d) exceedingly frequent postponing of vaccinations (in every fourth child). Substantially better implementation of vaccinations was found in refugees' children born in Poland. On account of the differences in the Polish and Chechen vaccination programmes and the low awareness among the Chechen parents' regarding the need of vaccinations, implementation of health care programmes and monitoring of the sanitary-epidemiologic situation in the Centres for Foreigners is necessary in order to prevent local outbreaks of an epidemic.
Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set
Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; ...
2018-02-07
The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. As a result, potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.
A quantum-implementable neural network model
NASA Astrophysics Data System (ADS)
Chen, Jialin; Wang, Lingli; Charbon, Edoardo
2017-10-01
A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions, and it can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results of Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in a classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.
Jackson, M I; Hiley, M J; Yeadon, M R
2011-10-13
In the table contact phase of gymnastics vaulting both dynamic and static friction act. The purpose of this study was to develop a method of simulating Coulomb friction that incorporated both dynamic and static phases and to compare the results with those obtained using a pseudo-Coulomb implementation of friction when applied to the table contact phase of gymnastics vaulting. Kinematic data were obtained from an elite level gymnast performing handspring straight somersault vaults using a Vicon optoelectronic motion capture system. An angle-driven computer model of vaulting that simulated the interaction between a seven segment gymnast and a single segment vaulting table during the table contact phase of the vault was developed. Both dynamic and static friction were incorporated within the model by switching between two implementations of the tangential frictional force. Two vaulting trials were used to determine the model parameters using a genetic algorithm to match simulations to recorded performances. A third independent trial was used to evaluate the model and close agreement was found between the simulation and the recorded performance with an overall difference of 13.5%. The two-state simulation model was found to be capable of replicating performance at take-off and also of replicating key contact phase features such as the normal and tangential motion of the hands. The results of the two-state model were compared to those using a pseudo-Coulomb friction implementation within the simulation model. The two-state model achieved similar overall results to those of the pseudo-Coulomb model but obtained solutions more rapidly. Copyright © 2011 Elsevier Ltd. All rights reserved.
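The two-state switching between static and dynamic Coulomb friction described above can be sketched generically. The coefficients and velocity threshold below are hypothetical; the paper's own switching criteria and parameter values (fitted by a genetic algorithm) are not reproduced here:

```python
import math

MU_S = 0.6    # static friction coefficient (hypothetical value)
MU_D = 0.4    # dynamic friction coefficient (hypothetical value)
V_EPS = 1e-3  # tangential speed below which the contact is treated as sticking

def friction_force(v_t, f_applied_t, f_normal):
    """Tangential friction force at one contact point (two-state Coulomb).

    v_t         : tangential velocity of the contact point
    f_applied_t : net tangential force applied by the rest of the system
    f_normal    : normal contact force (>= 0)
    """
    if abs(v_t) < V_EPS:
        # Static phase: friction balances the applied tangential force
        # up to the static limit MU_S * N ...
        limit = MU_S * f_normal
        if abs(f_applied_t) <= limit:
            return -f_applied_t
        # ... beyond the limit the contact breaks away and slides.
        return -math.copysign(MU_D * f_normal, f_applied_t)
    # Dynamic phase: constant-magnitude friction opposing the sliding.
    return -math.copysign(MU_D * f_normal, v_t)
```

A pseudo-Coulomb implementation would instead approximate the static branch with a steep viscous ramp in `v_t`, which avoids the discrete switch but typically forces smaller integration steps — consistent with the two-state model converging more rapidly.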
Optoelectronic Reservoir Computing
Paquot, Y.; Duport, F.; Smerieri, A.; Dambre, J.; Schrauwen, B.; Haelterman, M.; Massar, S.
2012-01-01
Reservoir computing is a recently introduced, highly efficient bio-inspired approach for processing time-dependent data. The basic scheme of reservoir computing consists of a nonlinear recurrent dynamical system coupled to a single input layer and a single output layer. Within these constraints many implementations are possible. Here we report an optoelectronic implementation of reservoir computing based on a recently proposed architecture consisting of a single nonlinear node and a delay line. Our implementation is sufficiently fast for real-time information processing. We illustrate its performance on tasks of practical importance such as nonlinear channel equalization and speech recognition, and obtain results comparable to state-of-the-art digital implementations. PMID:22371825
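For contrast with the optoelectronic hardware, the reservoir computing scheme itself can be sketched as a minimal software echo-state network (a conventional random recurrent reservoir, not the paper's single-node delay-line architecture; the sizes, scalings, and task below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal echo-state reservoir: N internal nodes, one input, linear readout.
N = 100
W_in = rng.uniform(-0.5, 0.5, N)                 # input weights
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return the state matrix."""
    x = np.zeros(N)
    X = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in * u_t)          # nonlinear node update
        X[t] = x
    return X

# Train only the linear readout (ridge regression) to predict the next sample.
u = np.sin(0.2 * np.arange(600))
X = run_reservoir(u[:-1])
y = u[1:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ W_out
```

The defining feature — shared by the optoelectronic version — is that the recurrent part is fixed and random; only the output layer is trained, which keeps training to a single linear regression.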
NASA Astrophysics Data System (ADS)
Scukins, A.; Nerukh, D.; Pavlov, E.; Karabasov, S.; Markesteijn, A.
2015-09-01
A multiscale Molecular Dynamics/Hydrodynamics implementation of the 2D Mercedes Benz (MB or BN2D) [1] water model is developed and investigated. The concept and the governing equations of multiscale coupling together with the results of the two-way coupling implementation are reported. The sensitivity of the multiscale model for obtaining macroscopic and microscopic parameters of the system, such as macroscopic density and velocity fluctuations, radial distribution and velocity autocorrelation functions of MB particles, is evaluated. Critical issues for extending the current model to large systems are discussed.
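One of the diagnostics mentioned, the velocity autocorrelation function, has a standard estimator that can be sketched directly (a straightforward O(T²) implementation averaging over particles and time origins; the (T, N, D) frame layout is an assumption, and this is not the coupled MB/hydrodynamics code itself):

```python
import numpy as np

def vacf(v):
    """Normalized velocity autocorrelation function.

    v : array of shape (T, N, D) -- T frames, N particles, D dimensions.
    Returns C(t) = <v(0) . v(t)> / <v(0) . v(0)>, averaged over particles
    and over all available time origins.
    """
    T = v.shape[0]
    c = np.array([np.mean(np.sum(v[:T - t] * v[t:], axis=-1))
                  for t in range(T)])
    return c / c[0]
```

For long trajectories one would normally switch to an FFT-based estimator, but the direct form above makes the averaging over origins explicit.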
Automatic tissue characterization from ultrasound imagery
NASA Astrophysics Data System (ADS)
Kadah, Yasser M.; Farag, Aly A.; Youssef, Abou-Bakr M.; Badawi, Ahmed M.
1993-08-01
In this work, feature extraction algorithms are proposed to extract the tissue characterization parameters from liver images. Then the resulting parameter set is further processed to obtain the minimum number of parameters representing the most discriminating pattern space for classification. This preprocessing step was applied to over 120 pathology-investigated cases to obtain the learning data for designing the classifier. The extracted features are divided into independent training and test sets and are used to construct both statistical and neural classifiers. The optimal criteria for these classifiers are set to have minimum error, ease of implementation and learning, and the flexibility for future modifications. Various algorithms for implementing various classification techniques are presented and tested on the data. The best performance was obtained using a single layer tensor model functional link network. Also, the voting k-nearest neighbor classifier provided comparably good diagnostic rates.
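The voting k-nearest-neighbour rule mentioned above is simple to sketch. The toy data below are hypothetical; the study's actual liver-texture features and pathology-verified labels are not reproduced here:

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=5):
    """Voting k-nearest-neighbour classifier (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)    # distance to every sample
    nearest = np.argsort(d)[:k]                # indices of k closest samples
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]          # majority label wins

# Toy 2D example with two well-separated classes.
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
label = knn_classify(X_train, y_train, np.array([0.2, 0.3]), k=3)
# label == 0 here: all three nearest training points belong to class 0
```

In practice k is chosen odd (or ties broken explicitly) and features are scaled before distances are computed, since unscaled features dominate the vote.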
Disciplined rubidium oscillator with GPS selective availability
NASA Technical Reports Server (NTRS)
Dewey, Wayne P.
1993-01-01
A U.S. Department of Defense decision for continuous implementation of GPS Selective Availability (S/A) has made it necessary to modify Rubidium oscillator disciplining methods. One such method for reducing the effects of S/A on the oscillator disciplining process was developed, which achieves results approaching pre-S/A GPS. The Satellite Hopping algorithm used in minimizing the effects of S/A on the oscillator disciplining process is described, and the results of using this process are compared to those obtained prior to the implementation of S/A. Test results are from a TrueTime Rubidium-based Model GPS-DC timing receiver.
Deeper and sparser nets are optimal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.; Makaruk, H.E.
1998-03-01
The starting points of this paper are two size-optimal solutions: (1) one for implementing arbitrary Boolean functions (Horne and Hush, 1994); and (2) another one for implementing certain sub-classes of Boolean functions (Red'kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper will analyze the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors will extend a result from Horne and Hush (1994) valid for fan-in Δ = 2 to arbitrary fan-in. Second, they will prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6...9) there exist VLSI-optimal (i.e., minimizing AT²) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller, 1956).
Deeper sparsely nets are size-optimal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.; Makaruk, H.E.
1997-12-01
The starting points of this paper are two size-optimal solutions: (i) one for implementing arbitrary Boolean functions (Horne, 1994); and (ii) another one for implementing certain sub-classes of Boolean functions (Red'kin, 1970). Because VLSI implementations do not cope well with highly interconnected nets--the area of a chip grows with the cube of the fan-in (Hammerstrom, 1988)--this paper will analyze the influence of limited fan-in on the size optimality for the two solutions mentioned. First, the authors will extend a result from Horne and Hush (1994) valid for fan-in Δ = 2 to arbitrary fan-in. Second, they will prove that size-optimal solutions are obtained for small constant fan-in for both constructions, while relative minimum size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6...9) there exist VLSI-optimal (i.e. minimizing AT²) solutions (Beiu, 1997a), while there are similar small constants relating to the capacity of processing information (Miller, 1956).
Landsat Data Continuity Mission (LDCM) - Optimizing X-Band Usage
NASA Technical Reports Server (NTRS)
Garon, H. M.; Gal-Edd, J. S.; Dearth, K. W.; Sank, V. I.
2010-01-01
The NASA version of the low-density parity-check (LDPC) rate-7/8 code, shortened to the dimensions (8160, 7136), has been implemented as the forward error correction (FEC) scheme for the Landsat Data Continuity Mission (LDCM). This is the first flight application of this code. In order to place a 440 Msps link within the 375 MHz wide X band, we found it necessary to heavily bandpass filter the satellite transmitter output. Despite the significant amplitude and phase distortions that accompanied the spectral truncation, the mission-required BER is maintained at < 10⁻¹² with less than 2 dB of implementation loss. We utilized a band-pass filter designed to replicate the link distortions in order to demonstrate link design viability. The same filter was then used to optimize the adaptive equalizer in the receiver employed at the terminus of the downlink. The excellent results we obtained could be directly attributed to the implementation of the LDPC code and the amplitude and phase compensation provided in the receiver. Similar results were obtained with receivers from several vendors.
Parallel halftoning technique using dot diffusion optimization
NASA Astrophysics Data System (ADS)
Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara
2017-05-01
In this paper, a novel approach to halftone images is proposed and implemented for images obtained by the Dot Diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm; it consists of generating new versions of the class matrix containing no baron or near-baron entries, in order to minimize inconsistencies during the distribution of the error. The proposed class matrices have different properties, each designed for one of two applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (AMD FX(tm)-6300 Six-Core Processor and Intel Core i5-4200U), using CUDA and OpenCV on a PC running Linux. Experimental results have shown that the novel framework generates good quality in both the halftone images and the inverse halftone images obtained. The simulation results using parallel architectures have demonstrated the efficiency of the novel technique when implemented for real-time processing.
Design and implementation of monitoring and evaluation of healthcare organization management
NASA Astrophysics Data System (ADS)
Charalampos, Platis; Emmanouil, Zoulias; Dimitrios, Iracleous; Lappa, Evaggelia
2017-09-01
The management of a healthcare organization was monitored using a suitably designed questionnaire administered to 271 nurses working in a Greek hospital. The data were fed to an automatic data mining system to obtain a suitable series of models to analyse, visualise and study the information obtained. Hidden patterns, correlations and interdependencies were investigated, and the results are analytically presented.
Multidisciplinary Analysis of a Hypersonic Engine
NASA Technical Reports Server (NTRS)
Stewart, M. E. M.; Suresh, A.; Liou, M. S.; Owen, A. K.; Messitt, D. G.
2002-01-01
This paper describes the implementation of a technique used to obtain a high-fidelity fluid-thermal-structural solution of a combined-cycle engine at its scram design point. Single-discipline simulations are insufficient here, since interactions with other disciplines are significant. Using off-the-shelf, validated solvers for the fluid, chemistry, thermal, and structural solutions, this approach couples their results together to obtain consistent solutions.
Opportunity costs of implementing forest plans
NASA Astrophysics Data System (ADS)
Fox, Bruce; Keller, Mary Anne; Schlosberg, Andrew J.; Vlahovich, James E.
1989-01-01
Intellectual concern with the National Forest Management Act of 1976 has followed a course emphasizing the planning aspects of the legislation associated with the development of forest plans. Once approved, however, forest plans must be implemented. Due to the complex nature of the ecological systems of interest, and the multiple and often conflicting desires of user clientele groups, the feasibility and costs of implementing forest plans require immediate investigation. For one timber sale on the Coconino National Forest in Arizona, forest plan constraints were applied and the resulting resource outputs predicted using the terrestrial ecosystem analysis and modeling system (TEAMS), a computer-based decision support system developed at the School of Forestry, Northern Arizona University. With forest plan constraints for wildlife habitat, visual diversity, riparian area protection, and soil and slope harvesting restrictions, the maximum timber harvest obtainable was reduced by 58% from the maximum obtainable without plan constraints.
A neural network device for on-line particle identification in cosmic ray experiments
NASA Astrophysics Data System (ADS)
Scrimaglio, R.; Finetti, N.; D'Altorio, L.; Rantucci, E.; Raso, M.; Segreto, E.; Tassoni, A.; Cardarilli, G. C.
2004-05-01
On-line particle identification is one of the main goals of many experiments in space, both for rare-event studies and for optimizing measurements along the orbital trajectory. Neural networks can be a useful tool for signal processing and real-time data analysis in such experiments. In this document we report on the performance of a programmable neural device developed in VLSI analog/digital technology. Neurons and synapses were realized using Operational Transconductance Amplifier (OTA) structures. We report on measurements performed to verify the agreement of the characteristic curves of each elementary cell with simulations, and on the device performance obtained by implementing simple neural structures on the VLSI chip. A feed-forward neural network (Multi-Layer Perceptron, MLP) was implemented on the VLSI chip and trained to identify particles by processing the signals of two-dimensional position-sensitive Si detectors. The radiation monitoring device consisted of three double-sided silicon strip detectors. From the analysis of a set of simulated data, it was found that the MLP implemented on the neural device gave results comparable with those obtained with the standard method of analysis, confirming that the implemented neural network could be employed for real-time particle identification.
How best to measure implementation of school health curricula: a comparison of three measures.
Resnicow, K; Davis, M; Smith, M; Lazarus-Yaroch, A; Baranowski, T; Baranowski, J; Doyle, C; Wang, D T
1998-06-01
The impact of school health education programs is often attenuated by inadequate teacher implementation. Using data from a school-based nutrition education program delivered to a sample of fifth graders, this study examines the discriminant and predictive validity of three measures of curriculum implementation: classroom observation of fidelity, and two measures of completeness, a teacher self-report questionnaire and a post-implementation interview. A fourth measure, obtained during teacher observations, assessed student and teacher interaction and student receptivity to the curriculum (labeled Rapport). Predictive validity was determined by examining the association of the implementation measures with three study outcomes: health knowledge, asking behaviors related to fruit and vegetables, and fruit and vegetable intake assessed by 7-day diary. Of the 37 teachers observed, 21 were observed for two sessions and 16 were observed once. Implementation measures were moderately correlated, an indication of discriminant validity. Predictive validity analyses indicated that the observed fidelity, Rapport and interview measures were significantly correlated with post-test student knowledge. The association between health knowledge and the observed fidelity (based on dual observation only), Rapport and interview measures remained significant after adjustment for pre-test knowledge values. None of the implementation variables was significantly associated with student fruit and vegetable intake or asking behaviors after controlling for pre-test values. These results indicate that the teacher self-report questionnaire was not a valid measure of implementation completeness in this study. Post-implementation completeness interviews and dual observations of fidelity and Rapport appear to be more valid, and largely independent, methods of implementation assessment.
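Predictive validity of the kind described in this abstract is typically quantified with a correlation coefficient. As a generic illustration (the data below are invented for the example, not taken from the study), a Pearson correlation between an implementation measure and an outcome can be computed as follows:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: observed-fidelity scores vs. post-test knowledge scores.
fidelity = [0.6, 0.7, 0.8, 0.85, 0.9]
knowledge = [55, 60, 72, 70, 80]
r = pearson_r(fidelity, knowledge)
```

A value of r near 1 would indicate that teachers observed delivering the curriculum more faithfully also had students with higher post-test knowledge, which is the pattern the study reports for the fidelity, Rapport and interview measures.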
Utkin, V F; Lukjashchenko, V I; Borisov, V V; Suvorov, V V; Tsymbalyuk, M M
2003-07-01
This article presents the main scientific and practical results obtained in the course of scientific and applied research and experiments on the Mir space station. Based on Mir experience, the processes of research program formation for the Russian Segment of the ISS are briefly described. The major trends of activities planned in the framework of these programs, as well as preliminary results of the implementation of increment research programs in the first ISS missions, are also presented.
The Implementation of Cumulative Learning Theory in Calculating Triangular Prism and Tube Volumes
NASA Astrophysics Data System (ADS)
Muklis, M.; Abidin, C.; Pamungkas, M. D.; Masriyah
2018-01-01
This study aims at describing the application of cumulative learning theory in calculating the volume of a triangular prism and a tube, as well as revealing the students' responses toward the learning. The research method used was descriptive qualitative, with elementary school students as the subjects of the research. Data were obtained through observation, field notes, questionnaires, tests, and interviews. The application of cumulative learning theory obtained positive student responses to the learning, and students' learning outcomes were predominantly above average. This shows that cumulative learning can be used as a reference to be implemented in learning so as to improve students' achievement.
Ruiz-Ramos, Jesus; Frasquet, Juan; Romá, Eva; Poveda-Andres, Jose Luis; Salavert-Leti, Miguel; Castellanos, Alvaro; Ramirez, Paula
2017-06-01
To evaluate the cost-effectiveness of antimicrobial stewardship (AS) program implementation focused on critical care units, based on assumptions for the Spanish setting. A decision model comparing costs and outcomes of sepsis, community-acquired pneumonia, and nosocomial infections (including catheter-related bacteremia, urinary tract infection, and ventilator-associated pneumonia) in critical care units with or without an AS program was designed. Model variables and costs, along with their distributions, were obtained from the literature. The study was performed from the Spanish National Health System (NHS) perspective, including only direct costs. The incremental cost-effectiveness ratio (ICER) was analysed with regard to the program's ability to reduce multidrug-resistant bacteria. Uncertainty in ICERs was evaluated with probabilistic sensitivity analyses. In the short term, implementing an AS program reduces the consumption of antimicrobials, with a net benefit of €71,738. In the long term, maintaining the program involves an additional cost to the system of €107,569. The cost per avoided resistance was €7,342, and the cost per life-year gained (LYG) was €9,788. Results from the probabilistic sensitivity analysis showed a more than 90% likelihood that an AS program would be cost-effective at a threshold of €8,000 per LYG. Limitations include the wide variability of the economic results obtained from implementing this type of AS program and the limited information on its impact on patient evolution and on the resistance avoided. Implementing an AS program focused on critical care patients is a cost-effective tool in the long term. Implementation costs are amortized by reducing antimicrobial consumption to prevent infection by multidrug-resistant pathogens.
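The central quantity in a cost-effectiveness model like the one summarized above is the incremental cost-effectiveness ratio. A minimal sketch with made-up numbers (not the study's figures):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Invented numbers: the new program costs 20,000 more and gains 2 life-years,
# giving an ICER of 10,000 per life-year gained.
ratio = icer(cost_new=120_000.0, cost_old=100_000.0,
             effect_new=12.0, effect_old=10.0)

# A program is deemed cost-effective if the ICER falls below a
# willingness-to-pay threshold (the study used a threshold per LYG).
cost_effective = ratio <= 8_000.0
```

In a probabilistic sensitivity analysis, the same ratio would be recomputed over many draws of the model's cost and effect distributions, and the share of draws falling under the threshold gives the likelihood of cost-effectiveness reported in the abstract.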
Implementation of the SPH Procedure Within the MOOSE Finite Element Framework
NASA Astrophysics Data System (ADS)
Laurier, Alexandre
The goal of this thesis was to implement the SPH homogenization procedure within the MOOSE finite element framework at INL. Before this project, INL relied on DRAGON to do its SPH homogenization, which was not flexible enough for its needs. As such, the SPH procedure was implemented for the neutron diffusion equation with the traditional, Selengut and true-Selengut normalizations. Another aspect of this research was to derive the SPH-corrected neutron transport equations and implement them in the same framework. Following in the footsteps of other articles, this feature was implemented and tested successfully with both the PN and SN transport calculation schemes. Although the results obtained for the power distribution in PWR assemblies show no advantage over the use of the SPH diffusion equation, we believe the inclusion of this transport correction will allow for better results in cases where either PN or SN is required. An additional aspect of this research was the implementation of a novel way of solving the non-linear SPH problem. Traditionally, this was done through a Picard, fixed-point iterative process, whereas the new implementation relies on MOOSE's Preconditioned Jacobian-Free Newton-Krylov (PJFNK) method to allow for a direct solution of the non-linear problem. This novel implementation showed a decrease in calculation time by a factor of up to 50 and generated SPH factors that correspond to those obtained through a fixed-point iterative process with a very tight convergence criterion: ε < 10⁻⁸. The use of the PJFNK SPH procedure also allows convergence to be reached in problems containing important reflector regions and void boundary conditions, something the traditional SPH method has never been able to achieve. When the PJFNK method cannot reach convergence of the SPH problem, a hybrid method is used whereby the traditional SPH iteration forces the initial condition to be within the radius of convergence of the Newton method.
This new method was successfully tested on a simplified model of INL's TREAT reactor, a problem that includes very important graphite reflector regions as well as vacuum boundary conditions. To demonstrate the power of PJFNK SPH on a more common case, the correction was applied to a simplified PWR reactor core from the BEAVRS benchmark, which included 15 assemblies and the water reflector, with very good results. This opens up the possibility of applying the SPH correction to full reactor cores in order to reduce homogenization errors for use in transient or multi-physics calculations.
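The contrast the thesis draws between a Picard fixed-point iteration and a Newton-type solve can be illustrated on a scalar toy problem; this is a generic numerical sketch, not the SPH equations themselves:

```python
import math

def picard(g, x0, tol=1e-10, max_iter=1000):
    """Fixed-point (Picard) iteration x_{k+1} = g(x_k); returns (root, iterations)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Newton's method on f(x) = 0; returns (root, iterations)."""
    x = x0
    for k in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, k
    raise RuntimeError("Newton iteration did not converge")

# Toy non-linear problem: x = cos(x), i.e. f(x) = x - cos(x) = 0.
x_fp, n_fp = picard(math.cos, 0.5)
x_nt, n_nt = newton(lambda x: x - math.cos(x), lambda x: 1.0 + math.sin(x), 0.5)
# Both find the same root, but Newton needs far fewer iterations,
# mirroring the speed-up reported for the PJFNK solve.
```

A Jacobian-free Newton-Krylov method like PJFNK generalizes the Newton step to large systems by solving each linear step with a Krylov method, approximating Jacobian-vector products by finite differences rather than forming the Jacobian.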
Reverse technology transfer; obtaining feedback from managers.
A.B. Carey; J.M. Calhoun; B. Dick; K. O' Halloran; L.S. Young; R.E. Bigley; S. Chan; C.A. Harrington; J.P. Hayes; J. Marzluff
1999-01-01
Forestry policy, planning, and practice have changed rapidly with implementation of ecosystem management by federal, state, tribal, and private organizations. Implementation entails new concepts, terminology, and management approaches. Yet there seems to have been little organized effort to obtain feedback from on-the-ground managers on the practicality of implementing...
Young, C; Douglass, J
2003-01-01
Objectives: To describe the implementation, use, and outputs of an assault patient questionnaire (APQ) introduced in accident and emergency (A&E) departments to determine Crime & Disorder and Community Safety priorities on Merseyside, a metropolitan county in north west England, UK. Methods: We describe why and how the APQ was implemented, the data collected, and the information obtained, together with the subsequent incorporation of the APQ into the Torex Patient Administration System (PAS) at the Royal Liverpool University Hospital A&E department and its routine completion by trained reception staff. Results: Analysis is based upon anonymised data—for example, patient ID and date of birth information is suppressed. A summary of "baseline" information obtained from the data collected is provided. Conclusions: It is possible for the APQ to be implemented at no extra cost in a large A&E department in an acute general teaching hospital. Valuable intelligence can be obtained for Crime & Disorder Act and Community Safety processes. The APQ forms part of a medium- to long-term strategy to prevent and reduce violent assaults in the community that subsequently require treatment in an A&E department. Such incidents include assaults both inside and outside licensed premises, attacks by strangers on the street, and domestic violence. Emphasis is also placed upon the feedback of results to staff in A&E departments. PMID:12748137
Leykum, Luci K; Pugh, Jacqueline A; Lanham, Holly J; Harmon, Joel; McDaniel, Reuben R
2009-01-01
Background A gap continues to exist between what is known to be effective and what is actually delivered in the usual course of medical care. The goal of implementation research is to reduce this gap. However, a tension exists between the need to obtain generalizable knowledge through implementation trials and the inherent differences between healthcare organizations that make standard interventional approaches less likely to succeed. The purpose of this paper is to explore the integration of participatory action research and randomized controlled trial (RCT) study designs to suggest a new approach for studying interventions in healthcare settings. Discussion We summarize key elements of participatory action research, with particular attention to its collaborative, reflective approach. Elements of participatory action research and RCT study designs are discussed and contrasted, with a complex adaptive systems approach used to frame their integration. Summary The integration of participatory action research and RCT design results in a new approach that reflects not only the complex nature of healthcare organizations, but also the need to obtain generalizable knowledge regarding the implementation process. PMID:19852784
NASA Technical Reports Server (NTRS)
Jackson, H. D.; Fiala, J.
1980-01-01
Developments which will reduce the costs associated with the distribution of satellite services are considered with emphasis on digital communication link implementation. A digitally implemented communications experiment (DICE) which demonstrates the flexibility and efficiency of digital transmission of television video and audio, telephone voice, and high-bit-rate data is described. The utilization of the DICE system in a full duplex teleconferencing mode is addressed. Demonstration teleconferencing results obtained during the conduct of two sessions of the 7th AIAA Communication Satellite Systems Conference are discussed. Finally, the results of link characterization tests conducted to determine (1) relationships between the Hermes channel 1 EIRP and DICE model performance and (2) channel spacing criteria for acceptable multichannel operation, are presented.
Dimeric spectra analysis in Microsoft Excel: a comparative study.
Gilani, A Ghanadzadeh; Moghadam, M; Zakerhamidi, M S
2011-11-01
The purpose of this work is to introduce the reader to an Add-in implementation, Decom. This implementation provides all the processing required for the analysis of dimeric spectra. General linear and nonlinear decomposition algorithms were integrated as an Excel Add-in for easy installation and usage. In this work, the results of several sample investigations were compared with those obtained by Datan.
A FORTRAN version implementation of block adjustment of CCD frames and its preliminary application
NASA Astrophysics Data System (ADS)
Yu, Y.; Tang, Z.-H.; Li, J.-L.; Zhao, M.
2005-09-01
A FORTRAN version implementation of the block adjustment (BA) of overlapping CCD frames is developed, and its flowchart is shown. The program is preliminarily applied to obtain the optical positions of four extragalactic radio sources. The results show that, because of the increase in the number and sky coverage of reference stars, the precision of optical positions with BA is improved compared with single-CCD-frame adjustment.
Triple collinear emissions in parton showers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Höche, Stefan; Prestel, Stefan
2017-10-01
A framework to include triple collinear splitting functions into parton showers is presented, and the implementation of flavor-changing NLO splitting kernels is discussed as a first application. The correspondence between the Monte-Carlo integration and the analytic computation of NLO DGLAP evolution kernels is made explicit for both timelike and spacelike parton evolution. Numerical simulation results are obtained with two independent implementations of the new algorithm, using the two independent event generation frameworks Pythia and Sherpa.
Salinas, Maria; López-Garrigós, Maite; Gutiérrez, Mercedes; Lugo, Javier; Sirvent, Jose Vicente; Uris, Joaquin
2010-01-01
Laboratory performance can be measured using a set of model key performance indicators (KPIs). The design and implementation of KPIs are important issues. KPI results from 7 years are reported, and their implementation, monitoring, objectives, interventions, result reporting and delivery are analyzed. The KPIs of the entire laboratory process were obtained from Laboratory Information System (LIS) registers, collected automatically using a data warehouse application, spreadsheets and external quality program reports. Customer satisfaction was assessed using surveys. Nine model laboratory KPIs were proposed and measured, and the results of some examples of KPIs used in our laboratory are reported. Corrective measures and the implementation of objectives led to improvement in the associated KPI results. Measurement of laboratory performance using KPIs and a data warehouse application that continuously collects registers and calculates KPIs confirmed the reliability of the indicators, their acceptability and usability for users, and continuous process improvement.
Turbine adapted maps for turbocharger engine matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tancrez, M.; Galindo, J.; Guardiola, C.
2011-01-15
This paper presents a new representation of turbine performance maps oriented toward turbocharger characterization. The aim of this plot is to provide a more compact form, better suited for implementation in engine simulation models and for interpolating data from a turbocharger test bench. The new map is based on the use of conservative parameters such as turbocharger power and turbine mass flow to describe the turbine performance in all VGT positions. The curves obtained are accurately fitted with quadratic polynomials, and simple interpolation techniques give reliable results. Two turbochargers characterized in a steady-flow rig were used to illustrate the representation. After being implemented in a turbocharger submodel, the results obtained with the model were compared successfully against turbine performance evaluated in engine test cells. A practical application in turbocharger matching is also provided to show how this new map can be directly employed in engine design.
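The abstract notes that the new map curves are accurately fitted with quadratic polynomials. A generic least-squares quadratic fit can be sketched as follows (the data are synthetic; the actual map variables are turbocharger power and turbine mass flow):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the 3x3 normal equations."""
    n = len(xs)
    s = [sum(x**k for x in xs) for k in range(5)]                    # s[k] = sum of x^k
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]    # moment sums
    A = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], float(n)]]
    b = [t[2], t[1], t[0]]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, 3))) / A[i][i]
    return coeffs  # [a, b, c]

# Synthetic "map" points lying exactly on y = 2x^2 - 3x + 1.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2 * x * x - 3 * x + 1 for x in xs]
a, b_lin, c = fit_quadratic(xs, ys)
```

Once the coefficients are fitted for each VGT position, interpolation between positions reduces to interpolating a handful of polynomial coefficients rather than raw map points, which is the compactness the paper advertises.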
Image transport through a disordered optical fibre mediated by transverse Anderson localization.
Karbasi, Salman; Frazier, Ryan J; Koch, Karl W; Hawkins, Thomas; Ballato, John; Mafi, Arash
2014-02-25
Transverse Anderson localization of light allows localized optical-beam transport through a transversely disordered and longitudinally invariant medium. Its successful implementation in disordered optical fibres recently resulted in the propagation of localized beams of radii comparable to those of conventional optical fibres. Here we demonstrate optical image transport using transverse Anderson localization of light. The image transport quality obtained in the polymer disordered optical fibre is comparable to or better than that of some of the best commercially available multicore image fibres, with less pixelation and higher contrast. It is argued that considerable improvement in image transport quality can be obtained in a disordered fibre made from a glass matrix with near-wavelength-size randomly distributed air holes at an air-hole fill fraction of 50%. Our results open the way to device-level implementation of the transverse Anderson localization of light, with potential applications in biological and medical imaging.
Implementation of the reduced charge state method of calculating impurity transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crume, E.C. Jr.; Arnurius, D.E.
1982-07-01
A recent review article by Hirshman and Sigmar includes expressions needed to calculate the parallel friction coefficients, the essential ingredients of the plateau-Pfirsch-Schlüter transport coefficients, using the method of reduced charge states. These expressions have been collected, and an expanded notation is introduced in some cases to facilitate differentiation between reduced-charge-state and full-charge-state quantities. A form of the Coulomb logarithm relevant to the method of reduced charge states is introduced. This method of calculating the f_ij^ab has been implemented in the impurity transport simulation code IMPTAR and has resulted in an overall reduction in computation time of approximately 25% for a typical simulation of impurity transport in the Impurity Study Experiment (ISX-B). Results obtained using this treatment are almost identical to those obtained using an earlier approximate theory of Hirshman.
Investigation of Nitride Morphology After Self-Aligned Contact Etch
NASA Technical Reports Server (NTRS)
Hwang, Helen H.; Keil, J.; Helmer, B. A.; Chien, T.; Gopaladasu, P.; Kim, J.; Shon, J.; Biegel, Bryan (Technical Monitor)
2001-01-01
Self-Aligned Contact (SAC) etch has emerged as a key enabling technology for the fabrication of very large-scale memory devices. However, it is also a very challenging technology to implement from an etch viewpoint. The issues that arise range from poor oxide-to-nitride etch selectivity to problems with post-etch nitride surface morphology. Unfortunately, the mechanisms that drive nitride loss and surface behavior remain poorly understood. Using a simple Langmuir site-balance model, SAC nitride etch simulations have been performed and compared to actual etched results. This approach permits the study of various etch mechanisms that may play a role in determining nitride loss and surface morphology. Particle trajectories and fluxes are computed using Monte Carlo techniques and initial data obtained from double Langmuir probe measurements. Etched surface advancement is implemented using a shock-tracking algorithm. Sticking coefficients and etch yields are adjusted to obtain the best agreement between actual etched results and simulated profiles.
[The comparative evaluation of level of security culture in medical organizations].
Roitberg, G E; Kondratova, N V; Galanina, E V
2016-01-01
The study of security culture was carried out in 2014-2015 on the basis of the clinic "Medicine". The sampling included 465 completed HSPSC questionnaires, and a comparative analysis of the data received was implemented. The "Zubovskaia district hospital", which has no accreditation according to security standards, and a group of clinics from the USA that have functioned for many years within a system of patient security support were selected as objects for comparison. The dynamics of security culture in an organization implementing patient security strategies over 5 years were evaluated, and the results obtained were compared with those of the USA clinics. The study results demonstrated that, in the absence of implemented security standards in a medical organization, the total evaluation of security remains extremely low. The study of security culture using the HSPSC questionnaire is an effective tool for evaluating the implementation of various strategies of patient security. Functioning within the system of international quality standards, primarily JCI standards, permits achieving high indices of security culture within several years.
Jurgens, Anneke; Anderson, Angelika; Moore, Dennis W
2012-01-01
To investigate the integrity with which parents and carers implement PECS in naturalistic settings, utilizing a sample of videos obtained from YouTube. Twenty-one YouTube videos meeting the selection criteria were identified. The videos were reviewed for instances of seven implementer errors and, where appropriate, the presence of a physical prompter. Forty-three per cent of videos and 61 per cent of PECS exchanges contained errors in parents' implementation of the specific teaching strategies of the PECS training protocol. Vocal prompts, incorrect error correction and the absence of timely reinforcement occurred most frequently, while gestural prompts, insistence on speech, incorrect use of the open-hand prompt and not waiting for the learner to initiate occurred less frequently. Results suggest that parents engage in vocal prompting and incorrect use of the 4-step error correction strategy when using PECS with their children, errors likely to result in prompt dependence.
Tablet PCs, Academic Results and Educational Inequalities
ERIC Educational Resources Information Center
Ferrer, Ferran; Belvis, Esther; Pamies, Jordi
2011-01-01
This article is the result of a study carried out in 2008 and 2009 by a team from the Autonomous University of Barcelona in order to evaluate the implementation of the Digital Whiteboard Program in public schools in the region of Aragon (Spain). The following pages present some of the results obtained during the study. More specifically, this…
Lizano-Díez, Irene; Modamio, Pilar; López-Calahorra, Pilar; Lastra, Cecilia F; Segú, Jose L; Gilabert-Perramon, Antoni; Mariño, Eduardo L
2014-01-01
Objectives To assess whether electronic prescribing is a comprehensive health management tool that may contribute to rational drug use, particularly in polymedicated patients receiving 16 or more medications in the public healthcare system in the Barcelona Health Region (BHR). Design 16 months of retrospective study followed by 12 months of prospective monitoring. Setting Primary healthcare in BHR, Catalonia, Spain. Participants All insured patients, especially those who are polymedicated in six basic health areas (BHA). Polymedicated patients were those with a consumption of ≥16 drugs/month. Interventions Monitoring demographic and consumption variables obtained from the records of prescriptions dispensed in pharmacies and charged to the public health system, as well as the resulting drug use indicators. Territorial variables related to implementation of electronic prescribing were also described and were obtained from the institutional data related to the deployment of the project. Main outcome measures Trend in drug use indicators (number of prescriptions per polymedicated user, total cost per polymedicated user and total cost per prescription) according to e-prescription implementation. Results There was a significant upward trend in the number of polymedicated users, number of prescriptions and total cost (p<0.05), which seemed independent from the implementation of electronic prescribing when comparing the preimplementation and postimplementation period. Prescriptions per user and cost per user showed a decrease between the preimplementation and postimplementation period, being significant in two BHAs (p<0.05). Conclusions Results suggest that after the implementation of electronic prescribing, the rationality of prescribing in polymedicated patients improved. In addition, this study provides a very valuable approach for future impact assessment. PMID:25377013
NASA Astrophysics Data System (ADS)
Ismail, Edy; Samsudi, Widjanarko, Dwi; Joyce, Peter; Stearns, Roman
2018-03-01
This model integrates project-based learning by creating a product based on environmental needs. The Produktif Orientasi Lapangan 4 Tahap (POL4T) model combines technical skills and entrepreneurial elements in the learning process. This study implements the result of developing an environment-oriented technopreneurship learning model, combining technology and entrepreneurship components, in a Machining Skill Program. The study applies a research and development design with an experimental subject. Data were obtained from questionnaires, learning material validation, interpersonal and intrapersonal observation forms, skills assessments, products, teachers' and students' responses, and cognitive tasks. Expert validation and t-test calculations were applied to evaluate the effectiveness of the POL4T learning model. The result of the study is a four-step learning model that enhances interpersonal and intrapersonal attitudes and develops practical products oriented to society and appropriate technology, so that the products can have high selling value. The model is effective: the students' post-test results are better than their pre-test results, and the product obtained from the POL4T model proved better than that of conventional productive learning. The POL4T model is recommended for grade XI students, as it can develop environment-oriented entrepreneurial attitudes, awareness of community needs, and students' technical competencies.
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on introducing the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through reliability-based design optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions; however, in a deterministic optimization problem, even though the structure is cost effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints.
This part of the research starts with an introduction to reliability analysis, covering first-order and second-order reliability methods, followed by simulation techniques used to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation including sensitivity analysis, which is performed to remove the highly reliable constraints from the RBDO, thereby reducing the computational time and number of function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
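The sampling step at the heart of both the MCS study and the reliability constraints can be sketched in a few lines. The limit state and normal distributions below are illustrative assumptions, not the Kevlar® 49 data or LS-DYNA model of the study:

```python
import math
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Illustrative limit state g(R, S) = R - S: failure when the load effect S
# exceeds the resistance R. All parameters here are assumed for the sketch.
R = rng.normal(100.0, 10.0, n)   # resistance (e.g., a material strength)
S = rng.normal(70.0, 12.0, n)    # load effect

pf_mc = np.count_nonzero(R - S < 0.0) / n   # Monte Carlo failure probability

# Analytical check for two independent normals:
# beta = (mu_R - mu_S) / sqrt(s_R^2 + s_S^2), pf = Phi(-beta).
beta = (100.0 - 70.0) / math.sqrt(10.0**2 + 12.0**2)
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
```

An RBDO formulation would evaluate such a failure probability (or a FORM/SORM approximation of it) inside the optimization loop as a reliability constraint.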
Scholte, Johannes B J; van Mook, Walther N K A; Linssen, Catharina F M; van Dessel, Helke A; Bergmans, Dennis C J J; Savelkoul, Paul H M; Roekaerts, Paul M H J
2014-10-01
To explore the extent of surveillance culture (SC) implementation, the underlying motives for obtaining SC, and decision making based on the results. A questionnaire was distributed to Heads of Department (HODs) and microbiologists within all intensive care departments in the Netherlands. Responses were provided by 75 (79%) of 95 HODs and 38 (64%) of 59 laboratories allied to an intensive care unit (ICU). Surveillance cultures were routinely obtained according to 55 (73%) of 75 HODs and 33 (87%) of 38 microbiologists. Surveillance cultures were obtained in more than 80% of higher-level ICUs and in 58% of lower-level ICUs (P < .05). Surveillance cultures were obtained twice weekly (88%) and sampled from the trachea (87%), pharynx (74%), and rectum (68%). Thirty (58%) of 52 HODs obtained SC to optimize individual patient treatment. On suspicion of infection from an unknown source, microorganisms identified by SC were targeted according to 87%. One third of HODs targeted microorganisms identified by SC in the case of an infection not at the location where the SC was obtained; this was significantly more often than microbiologists in the case of no infection (P = .02) or infection of unknown origin (P < .05). Surveillance culture implementation is common in Dutch ICUs to optimize individual patients' treatment. Consensus is lacking on how to deal with SC results when the focus of infection is not at the sampled site. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
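For a single-DOF analogue, the equivalent (stochastic) linearization iteration can be written down directly. This Duffing-oscillator sketch is illustrative, assuming a unit-mass oscillator under white noise; it is not the paper's MDOF formulation:

```python
import math

def duffing_rms(S0, zeta, omega, eps, tol=1e-12):
    """Equivalent-linearization RMS displacement for
        x'' + 2*zeta*omega*x' + omega**2 * (x + eps*x**3) = w(t),
    with w(t) white noise of two-sided spectral density S0.
    Linearizing E[x + eps*x**3] gives k_eq = omega**2 * (1 + 3*eps*var),
    and the stationary variance of the linear system is
        var = pi*S0 / (2*zeta*omega*k_eq),
    which is iterated to a fixed point."""
    var = math.pi * S0 / (2.0 * zeta * omega**3)   # linear (eps = 0) start
    for _ in range(500):
        k_eq = omega**2 * (1.0 + 3.0 * eps * var)
        var_new = math.pi * S0 / (2.0 * zeta * omega * k_eq)
        if abs(var_new - var) < tol:
            break
        var = var_new
    return math.sqrt(var)

rms_lin = duffing_rms(S0=1.0, zeta=0.05, omega=1.0, eps=0.0)
rms_nl = duffing_rms(S0=1.0, zeta=0.05, omega=1.0, eps=0.1)
```

A hardening spring (eps > 0) stiffens the equivalent system, so the RMS response drops below the linear value.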
Velocity-gauge real-time TDDFT within a numerical atomic orbital basis set
NASA Astrophysics Data System (ADS)
Pemmaraju, C. D.; Vila, F. D.; Kas, J. J.; Sato, S. A.; Rehr, J. J.; Yabana, K.; Prendergast, David
2018-05-01
The interaction of laser fields with solid-state systems can be modeled efficiently within the velocity-gauge formalism of real-time time dependent density functional theory (RT-TDDFT). In this article, we discuss the implementation of the velocity-gauge RT-TDDFT equations for electron dynamics within a linear combination of atomic orbitals (LCAO) basis set framework. Numerical results obtained from our LCAO implementation, for the electronic response of periodic systems to both weak and intense laser fields, are compared to those obtained from established real-space grid and Full-Potential Linearized Augmented Planewave approaches. Potential applications of the LCAO based scheme in the context of extreme ultra-violet and soft X-ray spectroscopies involving core-electronic excitations are discussed.
Implementing a GPU-based numerical algorithm for modelling dynamics of a high-speed train
NASA Astrophysics Data System (ADS)
Sytov, E. S.; Bratus, A. S.; Yurchenko, D.
2018-04-01
This paper discusses the initiative of implementing a GPU-based numerical algorithm for studying various phenomena associated with the dynamics of high-speed railway transport. The proposed numerical algorithm for calculating the critical speed of the bogie is based on the first Lyapunov number. The numerical algorithm is validated against analytical results derived for a simple model. A dynamic model of a carriage connected to a new dual-wheelset flexible bogie is studied for linear and dry friction damping. Numerical results obtained by the CPU, MPU and GPU approaches are compared and the appropriateness of these methods is discussed.
Gallium arsenide processing elements for motion estimation full-search algorithm
NASA Astrophysics Data System (ADS)
Lopez, Jose F.; Cortes, P.; Lopez, S.; Sarmiento, Roberto
2001-11-01
The block-matching motion estimation algorithm (BMA) is the most popular method for motion-compensated coding of image sequences. Among the several possible search methods for computing this algorithm, the full-search BMA (FBMA) has attracted great interest from the scientific community due to its regularity, optimal solution and low control overhead, which simplify its VLSI realization. On the other hand, its main drawback is its enormous computational demand. There are different ways of overcoming this factor; the one adopted in this article is the use of advanced technologies, such as gallium arsenide (GaAs), together with different techniques to reduce area overhead. By exploiting GaAs properties, improvements can be obtained in the implementation of feasible systems for real-time video compression architectures. Different primitives used in the implementation of processing elements (PEs) for an FBMA scheme are presented. As a result, PEs running at 270 MHz have been developed in order to study their functionality and performance. From these results, an implementation for MPEG applications is proposed, leading to an architecture running at 145 MHz with a power dissipation of 3.48 W and an area of 11.5 mm².
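The SAD-based full search that the PEs implement in hardware can be stated compactly in software; a minimal reference sketch (the frame size and the ±4-pixel search window are illustrative):

```python
import numpy as np

def full_search_bma(ref, cur, bx, by, block=8, p=4):
    """Exhaustive (full-search) block matching: for the block of `cur` at
    (by, bx), scan every candidate displacement within +/-p pixels in `ref`
    and return the motion vector minimizing the sum of absolute differences."""
    h, w = ref.shape
    blk = cur[by:by + block, bx:bx + block].astype(np.int32)
    best, best_sad = (0, 0), None
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(cand - blk).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32))
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))   # simulate a rigid shift
mv, sad = full_search_bma(ref, cur, bx=12, by=12)  # recovers the shift, SAD = 0
```

A hardware FBMA array evaluates many of these SAD candidates in parallel, which is where the regularity praised above pays off.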
Use Dose Bricks Concept to Implement Nasopharyngeal Carcinoma Treatment Planning
Wu, Jia-Ming; Yu, Tsan-Jung; Yeh, Shyh-An; Chao, Pei-Ju; Huang, Chih-Jou
2014-01-01
Purpose. A “dose bricks” concept has been used to implement nasopharyngeal carcinoma treatment plans; this method is particularly suited to bell-shaped nasopharyngeal carcinoma cases. Materials and Methods. Five noncoplanar fields were used to accomplish the dose bricks technique treatment plan: (a) right superior anterior oblique (RSAO), (b) left superior anterior oblique (LSAO), (c) right anterior oblique (RAO), (d) left anterior oblique (LAO), and (e) superior inferior vertex (SIV). Nondivergent collimator central axis planes were used to create different abutting field edges, while normal organs were blocked by multileaf collimators in this technique. Results. The resulting 92% isodose curves encompassed the CTV, while the maximum dose was about 115%. Approximately 50% of the parotid gland volume received 10-15% of the total dose, 50% of the brain volume received less than 20% of the total dose, and the spinal cord received only 5%, from scatter dose. Conclusions. Considering the expenditure of planning time and cost compared with IMRT, “dose bricks” may be accepted as an optional implementation in nasopharyngeal carcinoma conformal treatment planning; furthermore, this method also fits the needs of other non-head-and-neck lesions if organ sparing and a noncoplanar technique can be executed. PMID:24967395
The generalized scattering coefficient method for plane wave scattering in layered structures
NASA Astrophysics Data System (ADS)
Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song
2017-02-01
The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation method based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method converges slowly in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required to obtain accurate results. The expansion method results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation method possesses a simple implementation while providing fast and accurate results.
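For homogeneous layers at normal incidence, the transfer-matrix bookkeeping underlying such layered-structure treatments can be sketched with the standard characteristic matrix; this is an illustrative textbook version, not the GSC formulation of the paper:

```python
import numpy as np

def reflectance(n0, ns, layers, lam):
    """Characteristic (transfer) matrix reflectance at normal incidence.
    `layers` lists (refractive_index, thickness) from incidence side to the
    substrate of index ns; n0 is the incidence medium index."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2.0 * np.pi * n * d / lam        # phase thickness of the layer
        Mj = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                       [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ Mj
    B, C = M @ np.array([1.0, ns])
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

lam = 550e-9
# Quarter-wave film with n = sqrt(n0*ns) is an ideal antireflection coating.
R_ar = reflectance(1.0, 2.25, [(1.5, lam / (4 * 1.5))], lam)
R_bare = reflectance(1.0, 2.25, [], lam)   # Fresnel result ((n0-ns)/(n0+ns))^2
```

A staircase treatment of an inhomogeneous layer simply multiplies many such thin-slab matrices, which is why fine stratification is needed for accuracy.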
DEM Calibration Approach: design of experiment
NASA Astrophysics Data System (ADS)
Boikov, A. V.; Savelev, R. V.; Payor, V. A.
2018-05-01
The problem of calibrating DEM models is considered in the article. It is proposed to divide the model input parameters into those that require iterative calibration and those that can be measured directly. A new method for model calibration, based on design of experiment for the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand, and the results are processed with machine vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.
NASA Technical Reports Server (NTRS)
Weiss, D. M.
1981-01-01
Error data obtained from two different software development environments are compared. To obtain data that were complete, accurate, and meaningful, a goal-directed data collection methodology was used: changes made to the software were monitored concurrently with its development. Similarities found in both environments include: (1) the principal errors were in the design and implementation of single routines; (2) few errors were the result of changes, required more than one attempt to correct, and resulted in other errors; and (3) relatively few errors took more than a day to correct.
[Importance of clinical trial design and standardized implementation in ophthalmology].
Xu, Xun
2013-06-01
Clinical trials are an important medical research method, as well as a bridge to translational medicine; the resulting scientific evidence is used to develop clinical practice guidelines. At present, much experience in carrying out ophthalmology clinical trials has been accumulated, but some scientific, practical and ethical problems remain to be solved because of their impact on the authenticity and reliability of the results. Therefore, attaching great importance to the design of clinical research and to standardized implementation should be the goal and the direction of development. Clinical trial design should rely on objectives and follow international design principles on ethics, randomization, blinding and placebo setting. During trial implementation, personnel training, project management and monitoring help to reduce protocol deviation and ensure data authenticity.
NASA Astrophysics Data System (ADS)
Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.
2013-03-01
The finite-difference time-domain (FDTD) method allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and processing time. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus a highly tuned multi-core CPU as a function of the simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data-transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code, and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both the FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wide range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
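The core FDTD leapfrog update is compact; a minimal 1D free-space sketch (grid size, source, and Courant number are illustrative choices, far simpler than the 2D HVG simulations of the paper):

```python
import numpy as np

def fdtd_1d(nz=400, nsteps=200, src=200, S=0.5):
    """Minimal free-space 1D FDTD (Yee staggered grid, Courant number S):
    Ex and Hy are leapfrogged in time; a soft Gaussian source at `src`
    launches two pulses travelling at S cells per time step."""
    ex = np.zeros(nz)
    hy = np.zeros(nz)
    for t in range(nsteps):
        ex[1:] += S * (hy[:-1] - hy[1:])                 # update E from curl H
        ex[src] += np.exp(-((t - 30.0) / 10.0) ** 2)      # soft Gaussian source
        hy[:-1] += S * (ex[:-1] - ex[1:])                # update H from curl E
    return ex

ex = fdtd_1d()
# After nsteps steps the two pulse centres sit roughly S*(nsteps - 30)
# cells either side of the source.
```

The production versions described above add absorbing boundaries, material models, and SSE/GPU parallelization around exactly this kind of update loop.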
Quantum rotation gates with controlled nonadiabatic evolutions
NASA Astrophysics Data System (ADS)
Abdelrahim, Abdelrahman A. H.; Benmachiche, Abderrahim; Subhi Mahmoud, Gharib; Messikh, Azeddine
2018-04-01
Quantum gates can be implemented adiabatically and nonadiabatically. Many schemes used at least two sequentially implemented gates to obtain an arbitrary one-qubit gate. Recently, it has been shown that nonadiabatic gates can be realized by single-shot implementation. It has also been shown that quantum gates can be implemented with controlled adiabatic evolutions. In this paper, we combine the advantage of single-shot implementation with controlled adiabatic evolutions to obtain controlled nonadiabatic evolutions. We also investigate the robustness to different types of errors. We find that the fidelity is close to unity for realistic decoherence rates.
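The two-rotation decomposition that single-shot schemes aim to avoid, and the gate-fidelity figure of merit, can be made concrete with plain matrix algebra (the angles and error model below are illustrative):

```python
import numpy as np

def ry(theta):
    """Rotation about the y axis of the Bloch sphere."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def rz(phi):
    """Rotation about the z axis of the Bloch sphere."""
    return np.array([[np.exp(-1j * phi / 2), 0],
                     [0, np.exp(1j * phi / 2)]], dtype=complex)

# An arbitrary one-qubit gate (up to global phase) from two sequential
# rotations -- the composition a single-shot implementation replaces.
theta, phi = 0.7, 1.3
U = rz(phi) @ ry(theta)
assert np.allclose(U.conj().T @ U, np.eye(2))   # unitarity check

# Gate-fidelity proxy |Tr(U_ideal^dag U_actual)/2|^2 for a small
# over-rotation error eps; it stays close to unity.
eps = 0.01
U_err = rz(phi + eps) @ ry(theta + eps)
F = abs(np.trace(U.conj().T @ U_err) / 2.0) ** 2
```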
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simakov, Andrei Nikolaevich; Molvig, Kim
Paper I [A. N. Simakov and K. Molvig, Phys. Plasmas 23, 032115 (2016)] obtained a fluid description for an unmagnetized collisional plasma with multiple ion species. To evaluate the collisional plasma transport fluxes required for such a description, two linear systems of equations need to be solved to obtain the corresponding transport coefficients. In general, this should be done numerically. Herein, the general formalism is used to obtain analytical expressions for such fluxes for several specific cases of interest: a deuterium-tritium plasma; a plasma containing two ion species with strongly disparate masses, which agrees with previously obtained results; and a three-ion-species plasma made of deuterium, tritium, and gold. We find that these results can be used for understanding the behavior of the aforementioned plasmas, or for verifying a code implementation of the general multi-ion formalism.
Fast implementation of the 1→3 orbital state quantum cloning machine
NASA Astrophysics Data System (ADS)
Lin, Jin-Zhong
2018-05-01
We present a scheme to implement a 1→3 orbital state quantum cloning machine assisted by quantum Zeno dynamics. By constructing shortcuts to adiabatic passage with transitionless quantum driving, we can complete this scheme effectively and quickly in one step. The effects of decoherence, including spontaneous emission and the decay of the cavity, are also discussed. The numerical simulation results show that high fidelity can be obtained and the feasibility analysis indicates that this can also be realized in experiments.
Comprehensive analysis of helicopters with bearingless rotors
NASA Technical Reports Server (NTRS)
Murthy, V. R.
1988-01-01
A modified Galerkin method is developed to analyze the dynamic problems of multiple-load-path bearingless rotor blades. The development and selection of functions closely parallel CAMRAD procedures, greatly facilitating the implementation of the method into the CAMRAD program. Software implementing the modified Galerkin method is developed to determine the free vibration characteristics of multiple-load-path rotor blades undergoing coupled flapwise bending, chordwise bending, twisting, and extensional motions. Results are in the process of being obtained as the software is debugged.
Matching the Nagy-Soper parton shower at next-to-leading order
NASA Astrophysics Data System (ADS)
Czakon, M.; Hartanto, H. B.; Kraus, M.; Worek, M.
2015-06-01
We present an MC@NLO-like matching of next-to-leading order QCD calculations with the Nagy-Soper parton shower. An implementation of the algorithm within the Helac-Dipoles Monte Carlo generator is used to address the uncertainties and ambiguities of the matching scheme. First results obtained using the Nagy-Soper parton shower implementation in Deductor in conjunction with the Helac-NLO framework are given for the process at the LHC with TeV. Effects of resummation are discussed for various observables.
Triple collinear emissions in parton showers
Hoche, Stefan; Prestel, Stefan
2017-10-17
A framework to include triple collinear splitting functions into parton showers is presented, and the implementation of flavor-changing next-to-leading-order (NLO) splitting kernels is discussed as a first application. The correspondence between the Monte Carlo integration and the analytic computation of NLO DGLAP evolution kernels is made explicit for both timelike and spacelike parton evolution. Finally, numerical simulation results are obtained with two independent implementations of the new algorithm, using the two independent event generation frameworks PYTHIA and SHERPA.
Ivanov, I V; Shvabsky, O R; Minulin, I B
2017-11-01
The article presents an analysis of the results of internal audits (self-assessments) in medical organizations, implemented on the basis of the Proposals (practical guidelines) of Roszdravnadzor concerning the organization of internal control of quality and safety of medical activities in a medical organization (hospital). The self-assessment was implemented by the medical organizations themselves according to the common criteria of the Proposals, following this plan: planning of the self-assessment, collection and processing of data, conduct of the self-assessment, analysis of the obtained results, and preparation of a report. The article uses the results of self-assessments of medical organizations corresponding to the following criteria: profile of activity, multi-field hospital; number of beds, more than 350; state property. The self-assessment was implemented according to the 11 basic parts of the Proposals, with criteria developed for every part. The evaluation lists developed on the basis of the Proposals permitted medical organizations to independently identify problems in their activities. Within the framework of the implemented self-assessment, medical organizations mentioned the areas of activity related to personnel management, identification of patients, and support of epidemiological and surgical safety as having significant discrepancies with the Proposals and requiring improvement measures.
Determinations of Vus using inclusive hadronic τ decay data
NASA Astrophysics Data System (ADS)
Maltman, Kim; Hudspith, Renwick James; Lewis, Randy; Izubuchi, Taku; Ohki, Hiroshi; Zanotti, James M.
2016-08-01
Two methods for determining |Vus| employing inclusive hadronic τ decay data are discussed. The first is the conventional flavor-breaking sum rule determination, whose usual implementation produces results ~3σ low compared to three-family unitarity expectations. The second is a novel approach combining experimental strange hadronic τ distributions with lattice light-strange current-current two-point function data. Preliminary explorations of the latter show the method promises |Vus| determinations competitive with those from Kℓ3 and Γ[Kμ2]/Γ[πμ2]. For the former, systematic issues in the conventional implementation are investigated. Unphysical dependences of |Vus| on the choice of sum rule weight, w, and upper limit, s0, of the weighted experimental spectral integrals are observed, the source of these problems is identified, and a new implementation which overcomes these problems is developed. Lattice results are shown to provide a tool for quantitatively assessing truncation uncertainties for the slowly converging D = 2 OPE series. The results for |Vus| from this new implementation are shown to be free of unphysical w- and s0-dependences, and ~0.0020 higher than those produced by the conventional implementation. With preliminary new Kπ branching fraction results as input, we find |Vus| in excellent agreement with that obtained from Kℓ3, and compatible within errors with expectations from three-family unitarity.
HOS network-based classification of power quality events via regression algorithms
NASA Astrophysics Data System (ADS)
Palomares Salas, José Carlos; González de la Rosa, Juan José; Sierra Fernández, José María; Pérez, Agustín Agüera
2015-12-01
This work compares seven regression algorithms implemented in artificial neural networks (ANNs) supported by 14 power-quality features, which are based on higher-order statistics. Combining time- and frequency-domain estimators to deal with non-stationary measurement sequences, the final goal of the system is implementation in the future smart grid to guarantee compatibility between all connected equipment. The principal results are based on spectral kurtosis measurements, which easily adapt to the impulsive nature of power quality events. These results verify that the proposed technique is capable of offering interesting results for power quality (PQ) disturbance classification. The best results are obtained using radial basis networks, generalized regression, and multilayer perceptrons, mainly due to the non-linear nature of the data.
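A minimal per-bin spectral-kurtosis estimator shows why this feature flags impulsive events (illustrative only; the paper's system feeds 14 HOS-based features to the ANN regression models):

```python
import numpy as np

def spectral_kurtosis(x, nperseg=64):
    """Spectral kurtosis per frequency bin, SK(f) = E|X|^4 / (E|X|^2)^2 - 2.
    For stationary complex-Gaussian bins SK is approximately 0; a transient
    event concentrated in time pushes SK well above 0 at its frequency."""
    nseg = len(x) // nperseg
    frames = x[:nseg * nperseg].reshape(nseg, nperseg) * np.hanning(nperseg)
    X = np.fft.rfft(frames, axis=1)
    p2 = np.mean(np.abs(X) ** 2, axis=0)
    p4 = np.mean(np.abs(X) ** 4, axis=0)
    return p4 / p2 ** 2 - 2.0

rng = np.random.default_rng(1)
noise = rng.normal(size=64 * 200)
sig = noise.copy()
# One frame-aligned burst at normalized frequency 0.25 (bin 16 of 33).
sig[6016:6080] += 8.0 * np.sin(2 * np.pi * 0.25 * np.arange(64))
sk_noise = spectral_kurtosis(noise)
sk_event = spectral_kurtosis(sig)
```

The burst barely changes the average spectrum but dominates the fourth moment of its bin, so SK isolates it cleanly, which is the adaptation to impulsive PQ events mentioned above.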
HOLONET: a network for training holography
NASA Astrophysics Data System (ADS)
Pombo, Pedro; Santos, Emanuel
2014-07-01
Holography is an optics technique based on wave physics and lasers, with several applications in everyday life. The production of holograms involves experimental work based on hands-on activities and creativity. All these elements can contribute to the promotion of experimental teaching of optics and training in holography. The hologram itself, as the final result of a long process of research and study, can engage high school students in physics and stimulate optics learning. Taking these assumptions into account, a network of thirty schools from across the country working on holography was built. Holography systems were developed and several hands-on activities were constructed. Over the last sixteen years, students have been working on laser optics and holography, producing different kinds of holograms. This study presents all the holography labs implemented at the schools and analyzes the holography systems and materials developed for the students. The training strategy is discussed and holograms obtained by students are presented. The results show that holography can be implemented as a strategy for promoting the learning of optics, and that it is a particular way to involve students in experimental work and lab research. Results obtained during this study are presented in detail and analyzed with a focus on student performance. Educational results, teacher training, prizes and other positive outcomes are discussed and compared.
Polk, Deborah E; Nolan, Beth A D; Shah, Nilesh H; Weyant, Robert J
2016-01-01
The aim of this study was to determine the degree to which dental schools in the United States have policies and procedures in place that facilitate the implementation of evidence-based clinical guidelines. The authors sent surveys to all 65 U.S. dental schools in 2014; responses were obtained from 38 (58%). The results showed that, of the nine policies and procedures examined, only two were fully implemented by 50% or more of the responding schools: guidelines supported through clinical faculty education or available chairside (50%), and students informed of guidelines in both the classroom and clinic (65.8%). Although 92% of the respondents reported having an electronic health record, 80% of those were not using it to track compliance with guidelines. Five schools reported implementing more policies than the rest of the schools. The study found that the approach to implementing guidelines at most of the responding schools did not follow best practices although five schools had an exemplary set of policies and procedures to support guideline implementation. These results suggest that most dental schools are currently not implementing guidelines effectively and efficiently, but that the goal of schools' having a comprehensive implementation program for clinical guidelines is achievable since some are doing so. Future studies should determine whether interventions to improve implementation in dental schools are needed.
Prakosa, A.; Malamas, P.; Zhang, S.; Pashakhanloo, F.; Arevalo, H.; Herzka, D. A.; Lardo, A.; Halperin, H.; McVeigh, E.; Trayanova, N.; Vadakkumpadan, F.
2014-01-01
Patient-specific modeling of ventricular electrophysiology requires an interpolated reconstruction of the 3-dimensional (3D) geometry of the patient ventricles from the low-resolution (Lo-res) clinical images. The goal of this study was to implement a processing pipeline for obtaining the interpolated reconstruction, and thoroughly evaluate the efficacy of this pipeline in comparison with alternative methods. The pipeline implemented here involves contouring the epi- and endocardial boundaries in Lo-res images, interpolating the contours using the variational implicit functions method, and merging the interpolation results to obtain the ventricular reconstruction. Five alternative interpolation methods, namely linear, cubic spline, spherical harmonics, cylindrical harmonics, and shape-based interpolation were implemented for comparison. In the thorough evaluation of the processing pipeline, Hi-res magnetic resonance (MR), computed tomography (CT), and diffusion tensor (DT) MR images from numerous hearts were used. Reconstructions obtained from the Hi-res images were compared with the reconstructions computed by each of the interpolation methods from a sparse sample of the Hi-res contours, which mimicked Lo-res clinical images. Qualitative and quantitative comparison of these ventricular geometry reconstructions showed that the variational implicit functions approach performed better than others. Additionally, the outcomes of electrophysiological simulations (sinus rhythm activation maps and pseudo-ECGs) conducted using models based on the various reconstructions were compared. These electrophysiological simulations demonstrated that our implementation of the variational implicit functions-based method had the best accuracy. PMID:25148771
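One of the comparison methods, shape-based interpolation, has a particularly compact formulation via signed distance maps; an illustrative 2D sketch (the study interpolates stacks of 3D contours, and found the variational implicit functions method more accurate):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the shape, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def shape_interp(mask_a, mask_b, t):
    """Shape-based interpolation between two binary cross-sections:
    blend the signed distance maps and re-threshold at zero."""
    d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0

# Hypothetical example: disks of radius 10 and 20; the halfway slice
# should be close to a disk of radius 15.
yy, xx = np.mgrid[:64, :64]
r2 = (yy - 32) ** 2 + (xx - 32) ** 2
a, b = r2 <= 10 ** 2, r2 <= 20 ** 2
mid = shape_interp(a, b, 0.5)
```

Replacing the distance-map blend with a radial-basis-function interpolant of the contour points gives the flavor of the variational implicit functions approach that performed best.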
NASA Astrophysics Data System (ADS)
Sambeka, Yana; Nahadi, Sriyati, Siti
2017-05-01
The study aimed to obtain scientific information about the increase in students' concept mastery in project-based learning that used authentic assessment. The research was conducted in May 2016 at a junior high school in Bandung in the 2015/2016 academic year. The research method was a weak experiment with a one-group pretest-posttest design. A sample of 24 students was taken by cluster random sampling. Data were collected through several instruments: a written test, an observation sheet, and a questionnaire sheet. The students' concept mastery test yielded an N-Gain of 0.236, in the low category. The result of a paired-sample t-test showed that the implementation of authentic assessment in project-based learning increased students' concept mastery significantly (sig < 0.05).
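The reported N-Gain of 0.236 is Hake's normalized gain; a one-line implementation with hypothetical pre/post class averages:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain <g> = (post - pre) / (max_score - pre).
    Conventional bands: g < 0.3 low, 0.3 <= g < 0.7 medium, g >= 0.7 high."""
    return (post - pre) / (max_score - pre)

# Hypothetical class-average scores chosen for illustration only.
g = normalized_gain(pre=40.0, post=54.2)   # falls in the 'low' band
```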
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation are critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and their abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, it applies the gradient descent method to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution. Given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
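The iterative refinement described above can be sketched as plain projected gradient descent on the linear mixing model. This is a minimal single-threaded illustration, not the authors' parallel implementation; the clipping and renormalization steps stand in for the "mathematical restrictions" (non-negativity, abundance sum-to-one) that keep the solution realistic.

```python
import numpy as np

def unmix(X, p, iters=4000, lr=0.2, seed=0):
    """Jointly refine endmembers E (bands x p) and abundances A (p x pixels)
    by gradient descent on the mean square error of X ~ E @ A."""
    rng = np.random.default_rng(seed)
    bands, pixels = X.shape
    E = rng.random((bands, p))               # endmembers as virtual pixels
    A = np.full((p, pixels), 1.0 / p)        # start from uniform abundances
    for _ in range(iters):
        R = E @ A - X                        # residual of the linear model
        E = np.clip(E - lr * (R @ A.T) / pixels, 0.0, None)  # non-negative E
        A = np.clip(A - lr * (E.T @ R) / bands, 1e-9, None)  # non-negative A
        A /= A.sum(axis=0, keepdims=True)    # abundances sum to one
    return E, A
```

Both update steps are dense matrix products on independent pixels and bands, which is the property that makes this scheme highly parallelizable.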
An innovative recycling process to obtain pure polyethylene and polypropylene from household waste.
Serranti, Silvia; Luciani, Valentina; Bonifazi, Giuseppe; Hu, Bin; Rem, Peter C
2015-01-01
An innovative recycling process, based on magnetic density separation (MDS) and hyperspectral imaging (HSI), to obtain high-quality polypropylene and polyethylene as secondary raw materials, is presented. More specifically, MDS was applied to two different polyolefin mixtures coming from household waste. The quality of the two separated PP and PE streams, in terms of purity, was evaluated by a classification procedure based on HSI working in the near-infrared range (1000-1700 nm). The classification model was built using known PE and PP samples as the training set. The results obtained by HSI were compared with those obtained by classical density analysis carried out in the laboratory on the same polymers. The results obtained by MDS and the quality assessment of the plastic products by HSI showed that the combined action of these two technologies is a valid solution that can be implemented at the industrial level. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lestariani, Ida; Sujadi, Imam; Pramudya, Ikrar
2018-05-01
Portfolio assessment can show the development of learners' abilities over a period through their work, so that the monitored learning progress of each learner can be seen. The purpose of this research was to describe the implementation of portfolio assessment in the mathematics learning process, with senior high school mathematics teachers of class X as the subjects, given the importance of applying such assessment to the progress of learners' outcomes. This research is of the descriptive qualitative type. Data were collected by observation, interview, and documentation, and then validated using triangulation of these three techniques. Data analysis was done by data reduction, data presentation, and conclusion drawing. The results showed that the steps taken by teachers in applying portfolio assessment focused on learning outcomes, which include homework and daily tests. It can be concluded that the portfolio assessment implemented consists only of scored learning results; teachers have not yet implemented other portfolio assessment techniques, such as collections of student work.
Modeling of Nitrogen Oxides Emissions from CFB Combustion
NASA Astrophysics Data System (ADS)
Kallio, S.; Keinonen, M.
In this work, a simplified description of combustion and nitrogen oxide chemistry was implemented in a 1.5D model framework with the aim of comparing the results with those obtained earlier with a detailed reaction scheme. The simplified chemistry was written using 12 chemical components. Heterogeneous chemistry is given by the same models as in the earlier work, but the homogeneous and catalytic reactions have been altered; the models have been taken from the literature. The paper describes the numerical model with emphasis on the chemistry submodels. A simulation of the combustion of bituminous coal in the Chalmers 12 MW boiler is conducted, and the results are compared with those obtained earlier with the detailed chemistry description, as well as with measured O2, CO, NO and N2O profiles. The simplified reaction scheme produces results as good as those obtained earlier with the more elaborate chemistry description.
[Analysis of the results of the SEIMC External Quality Control Program. Year 2013].
de Gopegui Bordes, Enrique Ruiz; Orta Mira, Nieves; Del Remedio Guna Serrano, M; Medina González, Rafael; Rosario Ovies, María; Poveda, Marta; Gimeno Cardona, Concepción
2015-07-01
The External Quality Control Program of the Spanish Society of Infectious Diseases and Clinical Microbiology (SEIMC) includes controls for bacteriology, serology, mycology, parasitology, mycobacteria, virology, molecular microbiology, and HIV-1, HCV and HBV viral loads. This manuscript presents the analysis of the results obtained by participants in the 2013 SEIMC External Quality Control Programme, except for the viral load controls, which are summarized in a separate manuscript. As a whole, the results obtained in 2013 confirm the excellent skill and good technical standards found in previous editions. However, erroneous results can be obtained in any laboratory, including in clinically relevant determinations. Once again, the results of this program highlight the need to implement both internal and external controls in order to assure the maximal quality of microbiological tests. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe
2017-01-01
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788
Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian
2017-07-18
Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.
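The geometric core of such a stacking pipeline — estimating a transformation from matched feature points, then resampling frames onto the first — can be sketched with a least-squares 2D affine fit. This is a generic NumPy illustration, not the IGN camera's FPGA implementation; the function names are ours.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine mapping src (n,2) points onto dst (n,2),
    as obtained from homologous feature points between two frames."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src; M[0::2, 2] = 1.0     # equations for x'
    M[1::2, 3:5] = src; M[1::2, 5] = 1.0     # equations for y'
    rhs = np.asarray(dst, dtype=float).reshape(-1)
    params, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return params.reshape(2, 3)

def apply_affine(T, pts):
    """Map (n,2) points through the 2x3 affine T."""
    pts = np.asarray(pts, dtype=float)
    return pts @ T[:, :2].T + T[:, 2]
```

With more than three well-spread correspondences the fit is overdetermined, which is what makes it robust to individual matching errors.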
STEM-based workbook: Enhancing students' STEM competencies on lever system
NASA Astrophysics Data System (ADS)
Sejati, Binar Kasih; Firman, Harry; Kaniawati, Ida
2017-05-01
The twenty-first century is a century of technology; the rapid development of scientific studies and technology has made them rely heavily on each other. This research investigated the effect of a STEM-based workbook in enhancing students' STEM competencies in terms of knowledge understanding, problem-solving skill, innovative abilities, and responsibility. The workbook was tried on 24 students, who applied engineering design processes together with mathematics and science knowledge to design and create an egg cracker. The results showed that the implementation of the STEM-based workbook on the lever system in the human body is effective in improving students' STEM competencies, as shown by the students' improvement in knowledge understanding, which can be seen from the normalized gain…
Implementation of new physics models for low energy electrons in liquid water in Geant4-DNA.
Bordage, M C; Bordes, J; Edel, S; Terrissol, M; Franceries, X; Bardiès, M; Lampe, N; Incerti, S
2016-12-01
A new alternative set of elastic and inelastic cross sections has been added to the very low energy extension of the Geant4 Monte Carlo simulation toolkit, Geant4-DNA, for the simulation of electron interactions in liquid water. These cross sections have been obtained from the CPA100 Monte Carlo track structure code, which has been a reference in the microdosimetry community for many years. They are compared to the default Geant4-DNA cross sections and show better agreement with published data. In order to verify the correct implementation of the CPA100 cross section models in Geant4-DNA, simulations of the number of interactions and ranges were performed using Geant4-DNA with this new set of models, and the results were compared with corresponding results from the original CPA100 code. Good agreement is observed between the implementations, with relative differences lower than 1% regardless of the incident electron energy. Useful quantities related to the deposited energy at the scale of the cell or the organ of interest for internal dosimetry, like dose point kernels, are also calculated using these new physics models. They are compared with results obtained using the well-known Penelope Monte Carlo code. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Urriza, Isidro; Barragan, Luis A.; Artigas, Jose I.; Garcia, Jose I.; Navarro, Denis
1997-11-01
Image compression plays an important role in the archiving and transmission of medical images. Discrete cosine transform (DCT)-based compression methods are not suitable for medical images because of block-like image artifacts that could mask or be mistaken for pathology. Wavelet transforms (WTs) are used to overcome this problem. When implementing WTs in hardware, finite precision arithmetic introduces quantization errors. However, lossless compression is usually required in the medical image field. Thus, the hardware designer must look for the optimum register length that, while ensuring the lossless accuracy criteria, will also lead to a high-speed implementation with small chip area. In addition, wavelet choice is a critical issue that affects image quality as well as system design. We analyze the filters best suited to image compression that appear in the literature. For them, we obtain the maximum quantization errors produced in the calculation of the WT components. Thus, we deduce the minimum word length required for the reconstructed image to be numerically identical to the original image. The theoretical results are compared with experimental results obtained from algorithm simulations on random test images. These results enable us to compare the hardware implementation cost of the different filter banks. Moreover, to reduce the word length, we have analyzed the case of increasing the integer part of the numbers while maintaining the word length constant as the scale increases.
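The word-length reasoning can be illustrated with a simple worst-case bound: if each filter coefficient is stored with b fractional bits, coefficient rounding is at most 2^-b/2, so an N-tap filter on pixels of magnitude up to x_max deviates by at most N · x_max · 2^-b/2 per stage. The sketch below is our simplification of that idea, not the paper's exact per-filter error analysis; it finds the smallest b that keeps the accumulated error under half an LSB, the criterion for numerically lossless reconstruction.

```python
from itertools import count

def min_frac_bits(n_taps, x_max, stages=1):
    """Smallest number of fractional bits b such that the worst-case
    coefficient-rounding error across all stages stays below 0.5,
    so rounding the reconstructed values recovers the integers exactly."""
    for b in count(1):
        # each of n_taps coefficients off by at most 2^-b / 2,
        # each multiplied by an input of magnitude at most x_max
        err_per_stage = 0.5 * 2.0 ** -b * x_max * n_taps
        if stages * err_per_stage < 0.5:
            return b
```

For example, a 4-tap filter on 8-bit pixels (x_max = 255) needs 10 fractional bits under this bound, and one more bit for each doubling of the number of cascaded stages.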
Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun
2018-01-01
To implement an online statistical analysis function in the information system for air pollution and health impact monitoring, and to obtain data analysis information in real time, we used descriptive statistics, time-series analysis, and multivariate regression analysis, together with SQL and visual tools, to implement online statistical analysis on top of the database software. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts of each data part online, with interactive connection to the database; and generates interface sheets that can be loaded directly into R, SAS, and SPSS. The information system for air pollution and health impact monitoring thus implements the statistical analysis function online and can provide real-time analysis results to its users.
Sensor fusion for antipersonnel landmine detection: a case study
NASA Astrophysics Data System (ADS)
den Breejen, Eric; Schutte, Klamer; Cremer, Frank
1999-08-01
In this paper, the multisensor fusion results obtained within the European research project GEODE are presented. The layout of the test lane and the individual sensors used are described. The implementation of the SCOOP algorithm improves the ROC curves, as both the false alarm surface and the number of false alarms are taken into account. The confidence grids produced by the sensor manufacturers are used as input for the different sensor fusion methods implemented: Bayes, Dempster-Shafer, fuzzy probabilities, and rules. The mapping of the confidence grids to the input parameters of the fusion methods is an important step. Due to the limited amount of available data, the entire test lane was used for both training and evaluation. All four sensor fusion methods provide better detection results than the individual sensors.
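As an illustration of the simplest of the four fusion rules, per-cell naive-Bayes fusion of sensor confidence grids can be written as an odds product, assuming conditional independence between sensors and a 0.5 prior. This is a generic textbook sketch, not the GEODE project's calibrated mapping.

```python
import numpy as np

def bayes_fuse(confidence_grids):
    """Fuse per-sensor mine confidences (values in (0,1)) cell by cell,
    multiplying likelihood odds under a sensor-independence assumption."""
    odds = np.ones_like(np.asarray(confidence_grids[0], dtype=float))
    for grid in confidence_grids:
        c = np.clip(np.asarray(grid, dtype=float), 1e-6, 1.0 - 1e-6)
        odds *= c / (1.0 - c)                # odds product across sensors
    return odds / (1.0 + odds)               # back to a probability
```

Two agreeing sensors reinforce each other (two 0.8 confidences fuse to about 0.94), while contradicting sensors cancel (0.8 fused with 0.2 gives 0.5), which is the qualitative behavior one wants from a fusion rule.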
Using the RE-AIM framework to evaluate a school-based municipal programme tripling time spent on PE.
Nielsen, Jonas Vestergaard; Skovgaard, Thomas; Bredahl, Thomas Viskum Gjelstrup; Bugge, Anna; Wedderkopp, Niels; Klakk, Heidi
2018-06-01
Documenting the implementation of effective real-world programmes is considered an important step in supporting the translation of evidence into practice. Thus, the aim of this study was to identify factors influencing the adoption, implementation and maintenance of the Svendborgproject (SP), an effective real-world programme requiring schools to triple the amount of physical education (PE) from pre-school to sixth grade in six primary schools in the municipality of Svendborg, Denmark. SP has been maintained for ten years and scaled up to all municipal schools since it was initiated in 2008. The Reach, Effectiveness, Adoption, Implementation and Maintenance (RE-AIM) framework was applied as an analytic tool through a convergent mixed-method triangulation design. Results show that SP has been implemented with high fidelity and has become an established part of the municipality and school identity. The successful implementation and dissemination of the programme were enabled by a predominantly bottom-up approach combined with simple non-negotiable requirements. The results show that this combination has led to a better fit of the programme to the individual school context while still obtaining high implementation fidelity. Finally, the early integration of research has legitimated and benefitted the programme. Copyright © 2018 Elsevier Ltd. All rights reserved.
Sundvor, Ingrid; López-Aparicio, Susana
2014-10-15
This study shows the results obtained from emission and air dispersion modelling of acetaldehyde in the city of Oslo associated with the circulation of bioethanol vehicles. Two scenarios of bioethanol implementation, one realistic and one hypothetical, have been considered under winter conditions: 1) a realistic baseline scenario, which corresponds to the current situation in Oslo, where one bus line runs on bioethanol (E95; 95% ethanol-5% petrol) among petrol and diesel vehicles; and 2) a hypothetical scenario characterized by a full implementation of high-blend bioethanol (i.e. E85) as transportation fuel, and thus an entire bioethanol fleet. The results indicate that a full implementation of bioethanol would have a certain impact on urban air quality due to direct emissions of acetaldehyde: acetaldehyde emissions are estimated to increase by 233%, and concentration levels to increase by up to 650%, relative to the baseline. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Discrete-time stability of continuous-time controller designs for large space structures
NASA Technical Reports Server (NTRS)
Balas, M. J.
1982-01-01
In most of the stable control designs for flexible structures, continuous time is assumed. However, in view of the implementation of the controllers by on-line digital computers, the discrete-time stability of such controllers is an important consideration. In the case of direct-velocity feedback (DVFB), involving negative feedback from collocated force actuators and velocity sensors, it is not immediately apparent how much delay due to digital implementation of DVFB can be tolerated without loss of stability. The present investigation is concerned with such questions. A study is conducted of the discrete-time stability of DVFB, taking into account an employment of Euler's method of approximation of the time derivative. The obtained result gives an indication of the acceptable time-step size for stable digital implementation of DVFB. A result derived in connection with the consideration of the discrete-time stability of stable continuous-time systems provides a general condition under which digital implementation of such a system will remain stable.
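The step-size question can be made concrete for a single structural mode x'' = -ω²x - g·x' under direct velocity feedback: forward-Euler discretization gives a 2×2 update matrix whose spectral radius must stay below one. The sketch below is our illustration of that criterion, not the paper's derivation.

```python
import numpy as np

def euler_spectral_radius(omega, gain, dt):
    """Spectral radius of the forward-Euler update for one mode of a
    structure with direct velocity feedback; stable iff the result < 1."""
    A = np.array([[1.0, dt],
                  [-omega ** 2 * dt, 1.0 - gain * dt]])
    return float(max(abs(np.linalg.eigvals(A))))
```

For ω = 1 and g = 0.5, the discretization is stable for small steps but loses stability near dt ≈ 0.5, mirroring the paper's point that the digital time step limits how much delay a DVFB loop tolerates.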
Collective Bargaining As an Instrument of Change.
ERIC Educational Resources Information Center
Ayers, Steven V.
1998-01-01
The Hilton Central School District, New York, utilized the collective bargaining process to create a financial incentive that would motivate teachers to achieve a baseline level of technological competency. Describes the negotiated agreement, results obtained during the initial year of implementation, and future plans. (MLF)
ERIC Educational Resources Information Center
Troy, Thomas D.; Schwaab, Karl E.
1981-01-01
Legal aspects of field trips are addressed, with special attention on planning and implementation aspects which warrant legal consideration. Suggestions are based on information obtained from studies which reviewed and analyzed court cases, with recommendations geared to lessen the likelihood that negligence suits will result if students sustain…
40 CFR 51.213 - Transportation control measures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Transportation control measures. 51.213... Transportation control measures. (a) The plan must contain procedures for obtaining and maintaining data on actual emissions reductions achieved as a result of implementing transportation control measures. (b) In...
40 CFR 51.213 - Transportation control measures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 2 2012-07-01 2012-07-01 false Transportation control measures. 51.213... Transportation control measures. (a) The plan must contain procedures for obtaining and maintaining data on actual emissions reductions achieved as a result of implementing transportation control measures. (b) In...
Obtaining correct compile results by absorbing mismatches between data types representations
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio
2017-03-21
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
Obtaining correct compile results by absorbing mismatches between data types representations
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio
2017-11-21
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
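The error-node mechanism described above can be sketched with a toy converter: node kinds are mapped through a conversion table, an unmapped kind becomes a special error node that stores the offending token, and unparsing emits that token verbatim as first-language source. The table and tuple-based node encoding below are hypothetical, chosen only to illustrate the flow.

```python
# Hypothetical conversion table from language-1 node kinds to language-2 kinds.
CONVERSION = {"Int32": "int", "Str": "String"}

def convert(node):
    """Convert a (kind, *children) AST tuple; unknown kinds become
    ("ERROR", token) nodes instead of aborting the compilation."""
    kind, children = node[0], node[1:]
    if kind not in CONVERSION:
        return ("ERROR", kind)                      # store the error token
    return (CONVERSION[kind],) + tuple(convert(c) for c in children)

def unparse(node):
    """Unparse; an error node is emitted verbatim as language-1 source."""
    if node[0] == "ERROR":
        return node[1]
    if len(node) == 1:
        return node[0]
    return node[0] + "(" + ", ".join(unparse(c) for c in node[1:]) + ")"
```

The point of the error node is that one unconvertible subtree does not poison the rest of the output: everything convertible is translated, and the remainder round-trips unchanged.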
Collision avoidance in TV white spaces: a cross-layer design approach for cognitive radio networks
NASA Astrophysics Data System (ADS)
Foukalas, Fotis; Karetsos, George T.
2015-07-01
One of the most promising applications of cognitive radio networks (CRNs) is the efficient exploitation of TV white spaces (TVWSs) for enhancing the performance of wireless networks. In this paper, we propose a cross-layer design (CLD) of the carrier sense multiple access with collision avoidance (CSMA/CA) mechanism at the medium access control (MAC) layer with spectrum sensing (SpSe) at the physical layer, for identifying the occupancy status of TV bands. The proposed CLD relies on a Markov chain model with a state pair containing both the SpSe and the CSMA/CA, from which we derive the collision probability and the achievable throughput. Analytical and simulation results are obtained for different collision avoidance and SpSe implementation scenarios by varying the contention window, backoff stage and probability of detection. The obtained results depict the achievable throughput under different collision avoidance and SpSe implementation scenarios, thereby indicating the performance of collision avoidance in TVWS-based CRNs.
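For intuition about how the contention window enters the collision probability, a textbook single-backoff-stage relation (in the spirit of Bianchi's saturation model, not the paper's joint SpSe/CSMA chain) can be sketched as:

```python
def collision_probability(n, W):
    """Collision probability seen by one of n saturated stations with a
    single backoff stage of contention window W: a station transmits in
    a random slot with probability tau = 2 / (W + 1), and collides when
    at least one of the other n - 1 stations picks the same slot."""
    tau = 2.0 / (W + 1)
    return 1.0 - (1.0 - tau) ** (n - 1)
```

Widening the contention window or reducing the number of contenders lowers the collision probability, which is exactly the trade-off a throughput analysis over the joint sensing/access chain has to balance.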
NASA Astrophysics Data System (ADS)
San-Blas, A. A.; Roca, J. M.; Cogollos, S.; Morro, J. V.; Boria, V. E.; Gimeno, B.
2016-06-01
In this work, a full-wave tool for the accurate analysis and design of compensated E-plane multiport junctions is proposed. The implemented tool is capable of evaluating the undesired effects related to the use of low-cost manufacturing techniques, which are mostly due to the introduction of rounded corners in the cross section of the rectangular waveguides of the device. The obtained results show that, although stringent mechanical effects are imposed, it is possible to compensate for the impact of the cited low-cost manufacturing techniques by redesigning the matching elements considered in the original device. Several new designs concerning a great variety of E-plane components (such as right-angled bends, T-junctions and magic-Ts) are presented, and useful design guidelines are provided. The implemented tool, which is mainly based on the boundary integral-resonant mode expansion technique, has been successfully validated by comparing the obtained results to simulated data provided by a commercial software based on the finite element method.
NASA Technical Reports Server (NTRS)
Gilyard, G. B.; Edwards, J. W.
1983-01-01
Flight flutter-test results of the first aeroelastic research wing (ARW-1) of NASA's Drones for Aerodynamic and Structural Testing (DAST) program are presented. The flight-test operation and the implementation of the active flutter-suppression system are described, as well as the software techniques used to obtain real-time damping estimates and the actual flutter testing procedure. Real-time analysis of fast-frequency aileron excitation sweeps provided reliable damping estimates. The open-loop flutter boundary was well defined at two altitudes; a maximum Mach number of 0.91 was obtained. Both open-loop and closed-loop data were of exceptionally high quality. Although the flutter-suppression system provided augmented damping at speeds below the flutter boundary, an error in the implementation of the system resulted in the system being less stable than predicted. The vehicle encountered system-on flutter shortly after crossing the open-loop flutter boundary on the third flight and was lost. The aircraft was rebuilt. Changes made in real-time test techniques are included.
The use of quizStar application for online examination in basic physics course
NASA Astrophysics Data System (ADS)
Kustijono, R.; Budiningarti, H.
2018-03-01
The purpose of the study was to produce an online Basic Physics examination system using the QuizStar application. This is research and development following the ADDIE model, whose steps are: 1) analysis; 2) design; 3) development; 4) implementation; 5) evaluation. The feasibility of the system is reviewed in terms of its validity, practicality, and effectiveness. The subjects of the research were 60 Physics Department students of Universitas Negeri Surabaya. The data analysis used descriptive statistics, with validity, practicality, and effectiveness scores measured on a Likert scale; the system is considered feasible if the total score across all aspects is ≥ 61%. The results obtained with the developed QuizStar online test system are: 1) it is conceptually feasible to use; 2) it can be implemented in the Basic Physics assessment process, and the existing constraints can be overcome; 3) students' response to system usage is in the good category. The results lead to the conclusion that the QuizStar application is suitable for use as an online Basic Physics examination system.
Parallel mutual information estimation for inferring gene regulatory networks on GPUs
2011-01-01
Background Mutual information is a measure of similarity between two variables. It has been widely used in various application domains including computational biology, machine learning, statistics, image processing, and financial computing. Previously used simple histogram based mutual information estimators lack the precision in quality compared to kernel based methods. The recently introduced B-spline function based mutual information estimation method is competitive to the kernel based methods in terms of quality but at a lower computational complexity. Results We present a new approach to accelerate the B-spline function based mutual information estimation algorithm with commodity graphics hardware. To derive an efficient mapping onto this type of architecture, we have used the Compute Unified Device Architecture (CUDA) programming model to design and implement a new parallel algorithm. Our implementation, called CUDA-MI, can achieve speedups of up to 82 using double precision on a single GPU compared to a multi-threaded implementation on a quad-core CPU for large microarray datasets. We have used the results obtained by CUDA-MI to infer gene regulatory networks (GRNs) from microarray data. The comparisons to existing methods including ARACNE and TINGe show that CUDA-MI produces GRNs of higher quality in less time. Conclusions CUDA-MI is publicly available open-source software, written in CUDA and C++ programming languages. It obtains significant speedup over sequential multi-threaded implementation by fully exploiting the compute capability of commonly used CUDA-enabled low-cost GPUs. PMID:21672264
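For context, the baseline that B-spline and kernel estimators improve upon is the simple histogram estimator of mutual information. A minimal NumPy version (our sketch, not CUDA-MI's B-spline implementation) is:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats from paired samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

The hard binning is what limits the estimator's precision: a sample falling just across a bin edge changes the estimate discontinuously, which is the artifact the B-spline method smooths away by spreading each sample over neighboring bins.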
Design of the low area monotonic trim DAC in 40 nm CMOS technology for pixel readout chips
NASA Astrophysics Data System (ADS)
Drozd, A.; Szczygiel, R.; Maj, P.; Satlawa, T.; Grybos, P.
2014-12-01
The recent research in hybrid pixel detectors working in single photon counting mode focuses on nanometer and 3D technologies, which allow making pixels smaller and implementing more complex solutions in each pixel. A single pixel in readout electronics for X-ray detection usually comprises a charge amplifier, a shaper and a discriminator, which classifies events occurring at the detector as true or false hits by comparing the amplitude of the obtained signal with a threshold voltage, minimizing the influence of noise. However, making the pixel size smaller often causes problems with pixel-to-pixel uniformity, and additional effects like charge sharing become more visible. To improve channel-to-channel uniformity or implement an algorithm minimizing the charge sharing effect, small-area trimming DACs working independently in each pixel are necessary. However, meeting the requirement of small area often results in poor linearity and even non-monotonicity. In this paper we present a novel low-area thermometer-coded 6-bit DAC implemented in 40 nm CMOS technology. Monte Carlo simulations performed on the described design prove that the designed DAC is inherently monotonic under all conditions. The presented DAC was implemented in a prototype readout chip with 432 pixels working in single photon counting mode, with two trimming DACs in each pixel; each DAC occupies an area of 8 μm × 18.5 μm. Measurements and chip tests were performed to obtain reliable statistical results.
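The inherent monotonicity of thermometer coding is easy to demonstrate behaviorally: each code increment switches in one more unit element, so the output can only grow, regardless of random element mismatch. The sketch below is our illustration of that property, not the chip's circuit.

```python
import numpy as np

def thermometer_dac_curve(units):
    """Transfer curve of a thermometer-coded DAC built from the given
    unit-element weights: code k sums the first k units, so each step
    adds one (positive) element and the curve is monotonic by construction."""
    return np.concatenate([[0.0], np.cumsum(units)])

rng = np.random.default_rng(0)
units = 1.0 + 0.05 * rng.standard_normal(63)   # 63 units for 6 bits, 5% mismatch
curve = thermometer_dac_curve(units)
```

A binary-weighted DAC lacks this guarantee: at a major code transition it swaps large elements for small ones, and mismatch between them can make the output step backwards.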
A Cubic Radial Basis Function in the MLPG Method for Beam Problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Phillips, D. R.
2002-01-01
A non-compactly supported cubic radial basis function implementation of the MLPG method for beam problems is presented. The evaluation of the derivatives of the shape functions obtained from the radial basis function interpolation is much simpler than the evaluation of the moving least squares shape function derivatives. The radial basis MLPG yields results as accurate as or better than those obtained by the conventional MLPG method for problems with discontinuous and other complex loading conditions.
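A minimal one-dimensional sketch of non-compactly supported cubic RBF interpolation (not the MLPG beam formulation itself) shows why the shape-function derivatives are simple: the kernel phi(r) = r**3 differentiates in closed form, with no moving-least-squares machinery needed.

```python
import numpy as np

def cubic_rbf_interpolant(nodes, values):
    # Globally supported cubic RBF phi(r) = r**3; solve A c = f with
    # A_ij = |x_i - x_j|**3 so the interpolant matches f at the nodes.
    A = np.abs(nodes[:, None] - nodes[None, :]) ** 3
    coeff = np.linalg.solve(A, values)

    def s(x):   # interpolant s(x) = sum_j c_j |x - x_j|^3
        return (np.abs(x[:, None] - nodes[None, :]) ** 3) @ coeff

    def ds(x):  # exact derivative: d/dx |x - c|^3 = 3 (x - c) |x - c|
        d = x[:, None] - nodes[None, :]
        return (3.0 * d * np.abs(d)) @ coeff

    return s, ds
```

The closed-form `ds` is the point of the abstract: once the coefficients are known, differentiating the interpolant is a single weighted sum, with no extra linear solves per evaluation point.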
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garza, Jorge; Nichols, Jeffrey A.; Dixon, David A.
2000-05-08
The Krieger, Li, and Iafrate approximation to the optimized effective potential including the self-interaction correction for density functional theory has been implemented in a molecular code, NWChem, that uses Gaussian functions to represent the Kohn-Sham spin-orbitals. The differences from the implementation of the self-interaction correction in codes where planewaves are used with an optimized effective potential are discussed. The importance of the localization of the spin-orbitals to maximize the exchange-correlation of the self-interaction correction is discussed. We carried out exchange-only calculations to compare the results obtained with these approximations with those obtained with the local spin density approximation, the generalized gradient approximation, and Hartree-Fock theory. Interesting results for the energy difference (GAP) between the highest occupied molecular orbital, HOMO, and the lowest unoccupied molecular orbital, LUMO, (spin-orbital energies of closed shell atoms and molecules) using the optimized effective potential and the self-interaction correction have been obtained. The effect of the diffuse character of the basis set on the HOMO and LUMO eigenvalues at the various levels is discussed. Total energies obtained with the optimized effective potential and the self-interaction correction show that the exchange energy with these approximations is overestimated; this will be an important topic for future work. (c) 2000 American Institute of Physics.
[Wireless digital radiography detectors in the emergency area: an efficacious solution].
Garrido Blázquez, M; Agulla Otero, M; Rodríguez Recio, F J; Torres Cabrera, R; Hernando González, I
2013-01-01
To evaluate the implementation of a flat panel digital radiography (DR) system with WiFi technology in an emergency radiology area in which a computed radiography (CR) system was previously used. We analyzed aspects related to image quality, radiation dose, workflow, and ergonomics. We compared the image quality obtained with the CR and WiFi DR systems both in images obtained using a phantom and in radiologists' evaluations of radiological images obtained in real patients. We also analyzed the time required for image acquisition and the workflow with the two technological systems. Finally, we analyzed the data related to the radiation dose in patients before and after the implementation of the new equipment. Image quality improved both in the tests carried out with a phantom and in radiological images obtained in patients, increasing from 3 to 4.5 on a 5-point scale. The average time required for image acquisition decreased by 25 seconds per image. The flat panel required less radiation to be delivered in practically all the techniques carried out using automatic dosimetry, although statistically significant differences were found in only some of the techniques (chest, thoracic spine, and lumbar spine). Implementing the WiFi DR system has brought benefits. Image quality has improved and the radiation dose to patients has decreased. The new system also has advantages in terms of functionality, ergonomics, and performance. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.
Simulating immersed particle collisions: the Devil's in the details
NASA Astrophysics Data System (ADS)
Biegert, Edward; Vowinckel, Bernhard; Meiburg, Eckart
2015-11-01
Simulating densely-packed particle-laden flows with any degree of confidence requires accurate modeling of particle-particle collisions. To this end, we investigate a few collision models from the fluids and granular flow communities using sphere-wall collisions, which have been studied by a number of experimental groups. These collisions involve enough complexities--gravity, particle-wall lubrication forces, particle-wall contact stresses, particle-wake interactions--to challenge any collision model. Evaluating the successes and shortcomings of the collision models, we seek improvements in order to obtain more consistent results. We will highlight several implementation details that are crucial for obtaining accurate results.
Active vibration control with model correction on a flexible laboratory grid structure
NASA Technical Reports Server (NTRS)
Schamel, George C., II; Haftka, Raphael T.
1991-01-01
This paper presents experimental and computational comparisons of three active damping control laws applied to a complex laboratory structure. Two reduced structural models were used, with one model being corrected on the basis of measured mode shapes and frequencies. Three control laws were investigated: a time-invariant linear quadratic regulator with state estimation and two direct rate feedback control laws. Experimental results for all designs were obtained with digital implementation. It was found that model correction improved the agreement between analytical and experimental results. The best agreement was obtained with the simplest direct rate feedback control.
Optical Fiber Sensors for Advanced Civil Structures
NASA Astrophysics Data System (ADS)
de Vries, Marten Johannes Cornelius
1995-01-01
The objective of this dissertation is to develop, analyze, and implement optical fiber-based sensors for the nondestructive quantitative evaluation of advanced civil structures. Based on a comparative evaluation of optical fiber sensors that may be used to obtain quantitative information related to physical perturbations in the civil structure, the extrinsic Fabry-Perot interferometric (EFPI) optical fiber sensor is selected as the most attractive sensor. The operation of the EFPI sensor is explained using the Kirchhoff diffraction approach. As is shown in this dissertation, this approach better predicts the signal-to-noise ratio as a function of gap length than methods employed previously. The performance of the optical fiber sensor is demonstrated in three different implementations. In the first implementation, performed with researchers in the Civil Engineering Department at the University of Southern California in Los Angeles, optical fiber sensors were used to obtain quantitative strain information from reinforced concrete interior and exterior column-to-beam connections. The second implementation, performed in cooperation with researchers at the United States Bureau of Mines in Spokane, Washington, used optical fiber sensors to monitor the performance of roof bolts used in mines. The last implementation, performed in cooperation with researchers at the Turner-Fairbanks Federal Highway Administration Research Center in McLean, Virginia, used optical fiber sensors, attached to composite prestressing strands used for reinforcing concrete, to obtain absolute strain information. Multiplexing techniques including time, frequency and wavelength division multiplexing are briefly discussed, whereas the principles of operation of spread spectrum and optical time domain reflectometry (OTDR) are discussed in greater detail. Results demonstrating that spread spectrum and OTDR techniques can be used to multiplex optical fiber sensors are presented.
Finally, practical considerations that have to be taken into account when implementing optical fiber sensors into a civil structure environment are discussed, and possible solutions to some of these problems are proposed.
Ferreira, J; Seoane, F; Lindecrantz, K
2013-01-01
Personalised Health Systems (PHS), which could improve patients' quality of life and reduce health care costs for society, among other benefits, are emerging. The purpose of this paper is to study the capabilities of the System-on-Chip Impedance Network Analyser AD5933 performing high speed single frequency continuous bioimpedance measurements. From a theoretical analysis, the minimum continuous impedance estimation time was determined, and the AD5933 with a custom 4-Electrode Analog Front-End (AFE) was used to experimentally determine the maximum continuous impedance estimation frequency as well as the system impedance estimation error when measuring a 2R1C electrical circuit model. Transthoracic Electrical Bioimpedance (TEB) measurements in a healthy subject were obtained using 3M gel electrodes in a tetrapolar lateral spot electrode configuration. The obtained TEB raw signal was filtered in MATLAB to obtain the respiration and cardiogenic signals, and from the cardiogenic signal the impedance derivative signal (dZ/dt) was also calculated. The results have shown that the maximum continuous impedance estimation rate was approximately 550 measurements per second with a magnitude estimation error below 1% on 2R1C-parallel bridge measurements. The displayed respiration and cardiac signals exhibited good performance, and they could be used to obtain valuable information in some plethysmography monitoring applications. The obtained results suggest that the AD5933-based monitor could be used for the implementation of a portable and wearable bioimpedance plethysmograph that could be used in applications such as impedance cardiography. These results, combined with the research done in functional garments and textile electrodes, might enable the implementation of PHS applications in a relatively short time.
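The 2R1C circuit used as the measurement reference can be sketched directly. The topology assumed below, an extracellular resistance in parallel with a series intracellular-resistance/membrane-capacitance branch, is the common bioimpedance reading of "2R1C"; the component values are hypothetical.

```python
import numpy as np

def z_2r1c(f_hz, r_e, r_i, c_m):
    # 2R1C tissue model (assumed topology): extracellular resistance R_e
    # in parallel with the series branch R_i + membrane capacitance C_m.
    w = 2.0 * np.pi * np.asarray(f_hz, dtype=float)
    branch = r_i + 1.0 / (1j * w * c_m)
    return r_e * branch / (r_e + branch)
```

At low frequency the capacitor blocks the series branch and |Z| approaches R_e; at high frequency it shorts, and |Z| approaches the parallel combination R_e R_i / (R_e + R_i). Checking an estimator against these two limits is a quick sanity test for a bridge measurement.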
Effective implementation of the weak Galerkin finite element methods for the biharmonic equation
Mu, Lin; Wang, Junping; Ye, Xiu
2017-07-06
The weak Galerkin (WG) methods have been introduced in [11, 12, 17] for solving the biharmonic equation. The purpose of this paper is to develop an algorithm to implement the WG methods effectively. This can be achieved by eliminating local unknowns to obtain a global system of significantly reduced size. In fact, this reduced global system is equivalent to the Schur complement of the WG methods. The unknowns of the Schur complement of the WG method are those defined on the element boundaries. The equivalence of the WG method and its Schur complement is established. The numerical results demonstrate the effectiveness of this new implementation technique.
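The elimination of local unknowns described here is, in linear-algebra terms, a Schur complement reduction. A generic block-matrix sketch (not the WG-specific assembly) shows the reduced boundary system and the back-substitution for the interior unknowns:

```python
import numpy as np

def schur_reduce(A_ii, A_ib, A_bi, A_bb, f_i, f_b):
    # Eliminate the interior (local) unknowns x_i from
    #   [[A_ii, A_ib], [A_bi, A_bb]] @ [x_i, x_b] = [f_i, f_b],
    # leaving the smaller boundary system S @ x_b = g.
    S = A_bb - A_bi @ np.linalg.solve(A_ii, A_ib)
    g = f_b - A_bi @ np.linalg.solve(A_ii, f_i)
    return S, g
```

Solving S x_b = g and then A_ii x_i = f_i - A_ib x_b recovers exactly the solution of the full system, which is the equivalence the abstract refers to; in a finite element setting A_ii is block-diagonal over elements, so its elimination is cheap and local.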
Collective Framework and Performance Optimizations to Open MPI for Cray XT Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ladd, Joshua S; Gorentla Venkata, Manjunath; Shamis, Pavel
2011-01-01
The performance and scalability of collective operations play a key role in the performance and scalability of many scientific applications. Within the Open MPI code base we have developed a general-purpose hierarchical collective operations framework called Cheetah, and applied it at large scale on the Oak Ridge Leadership Computing Facility's (OLCF) Jaguar platform, obtaining better performance and scalability than the native MPI implementation. This paper discusses Cheetah's design and implementation, and optimizations to the framework for Cray XT5 platforms. Our results show that Cheetah's Broadcast and Barrier perform better than the native MPI implementation. For medium data, Cheetah's Broadcast outperforms the native MPI implementation by 93% at a 49,152-process problem size. For small and large data, it outperforms the native MPI implementation by 10% and 9%, respectively, at a 24,576-process problem size. Cheetah's Barrier performs 10% better than the native MPI implementation at a 12,288-process problem size.
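The hierarchical idea can be modelled abstractly: the root first reaches one leader per node, and each leader then fans out within its node, so the expensive inter-node network carries only one message per node. The scheduler below is an illustrative model, not Cheetah's implementation; it assumes ranks are numbered contiguously per node and that rank 0, the root, is a leader.

```python
def hierarchical_bcast_schedule(ranks_per_node, nodes, root=0):
    # Two-level broadcast: root -> one leader per node (inter-node stage),
    # then each leader -> the remaining ranks on its node (intra-node stage).
    msgs = []
    leaders = [n * ranks_per_node for n in range(nodes)]
    for leader in leaders:
        if leader != root:
            msgs.append((root, leader))       # inter-node message
    for leader in leaders:
        for r in range(leader + 1, leader + ranks_per_node):
            msgs.append((leader, r))          # intra-node message
    return msgs
```

Every non-root rank receives exactly once, and only `nodes - 1` messages cross the inter-node network; a flat broadcast would put all `ranks - 1` receives on the same level of the hierarchy.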
Sheridan, Beth A; MacDonald, Douglas A; Donlon, Mark; Kuhn, Beth; McGovern, Katie; Friedman, Harris
2011-04-01
Using a sample of 647 Canadian children in kindergarten to Grade 3 (325 boys, 322 girls), the present study evaluated the perceived effectiveness of Skillstreaming (McGinnis & Goldstein, 2003), a widely known social skills program implemented to target the development of four skill sets, i.e., listening, following directions, problem-solving, and knowing when to tell. Results indicated significant postprogram improvements in all skills as well as in ratings of overall prosociality obtained from both classroom teachers and mental health staff, with medium to large effect sizes obtained from teachers' and mental health professionals' ratings, respectively. Additional analyses yielded significant but weak moderator effects of grade and preprogram prosocial functioning for teacher ratings but no consistent moderator effects for children's sex or school location (i.e., urban versus rural) regardless of rater.
Low, Sabina; Van Ryzin, Mark J; Brown, Eric C; Smith, Brian H; Haggerty, Kevin P
2014-04-01
Steps to Respect: A Bullying Prevention Program (STR) relies on a social-ecological model of prevention to increase school staff awareness and responsiveness, foster socially responsible beliefs among students, and teach social-emotional skills to students to reduce bullying behavior. As part of a school-randomized controlled trial of STR, we examined predictors and outcomes associated with classroom curriculum implementation in intervention schools. Data on classroom implementation (adherence and engagement) were collected from a sample of teachers using a weekly on-line Teacher Implementation Checklist system. Pre-post data related to school bullying-related outcomes were collected from 1,424 students and archival school demographic data were obtained from the National Center for Education Statistics. Results of multilevel analyses indicated that higher levels of program engagement were influenced by school-level percentage of students receiving free/reduced lunch, as well as classroom-level climate indicators. Results also suggest that higher levels of program engagement were related to lower levels of school bullying problems, enhanced school climate and attitudes less supportive of bullying. Predictors and outcomes related to program fidelity (i.e., adherence) were largely nonsignificant. Results suggest that student engagement is a key element of program impact, though implementation is influenced by both school-level demographics and classroom contexts.
Simulation of isoelectric focusing processes. [stationary electrolysis of charged species]
NASA Technical Reports Server (NTRS)
Palusinski, O. A.
1980-01-01
This paper presents the computer implementation of a model for the stationary electrolysis of two or more charged species. This has specific application to the technique of isoelectric focussing, in which the stationary electrolysis of ampholytes is used to generate a pH gradient useful for the separation of proteins, peptides and other biomolecules. The fundamental equations describing the process are given. These equations are transformed to a form suitable for digital computer implementation. Some results of computer simulation are described and compared to data obtained in the laboratory.
Implementation of a nonlinear concrete cracking algorithm in NASTRAN
NASA Technical Reports Server (NTRS)
Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.
1976-01-01
A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct access file system was used to save results at each load step so that the solution module can be restarted for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.
Non-local classical optical correlation and implementing analogy of quantum teleportation
Sun, Yifan; Song, Xinbing; Qin, Hongwei; Zhang, Xiong; Yang, Zhenwei; Zhang, Xiangdong
2015-01-01
This study reports an experimental realization of non-local classical optical correlation based on the Bell measurement used in tests of quantum non-locality. Using such a classical Einstein–Podolsky–Rosen optical correlation, a classical analogy of true quantum teleportation has been implemented. In the experimental teleportation protocol, the initial teleported information can be unknown to anyone and the information transfer can happen over arbitrary distances. The obtained results give novel insight into quantum physics and may open a new field of applications in quantum information. PMID:25779977
Chang, S; Wong, K W; Zhang, W; Zhang, Y
1999-08-10
An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.
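The weight-biasing trick is a simple algebraic identity: adding a constant b to every bipolar weight makes the matrix nonnegative (realizable as optical intensities), and the spurious contribution b times the sum of the inputs can be removed by a single threshold subchannel. A sketch of the identity, not of the optical system:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.choice([-1.0, 1.0], size=(8, 8))  # bipolar interconnection weights
x = rng.choice([0.0, 1.0], size=8)        # binary input pattern

b = 1.0                     # bias chosen so W + b is nonnegative ({-1,+1} -> {0,2})
W_nonneg = W + b
bipolar_sum = W @ x                       # desired bipolar weighted summation
optical_sum = W_nonneg @ x - b * x.sum()  # main channel minus threshold subchannel
assert np.allclose(bipolar_sum, optical_sum)
```

Because the correction term depends only on the sum of the inputs, one extra channel suffices regardless of the matrix size, which is what makes the single-channel real-time realization possible.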
NASA Technical Reports Server (NTRS)
Sun, D. C.; Yuan, Qin
1995-01-01
The geometrical parameters for a wormgear intended to be used as the transmission in advanced helicopters are finalized. The resulting contact pattern of the meshing tooth surfaces is suitable for the implementation of hydrostatic lubrication. Fluid film lubrication of the contact is formulated, considering external pressurization as well as hydrodynamic wedge and squeeze actions. The lubrication analysis is aimed at obtaining the oil supply pressure needed to separate the worm and gear surfaces by a prescribed minimum film thickness. The procedure of solving the mathematical problem is outlined.
Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel
2010-01-01
This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by reconstructing the input image from the PCA linear subspace previously obtained as a background model. The proposal achieves a high processing rate (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
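The background-model thresholding step can be sketched as projection onto the PCA subspace followed by a residual test. The snippet below is a NumPy illustration with synthetic one-dimensional "frames", not the FPGA implementation; the pattern, blob position, and threshold are chosen for illustration.

```python
import numpy as np

def pca_background_model(frames, k):
    # Fit a k-dimensional PCA subspace to flattened background frames.
    mean = frames.mean(axis=0)
    _, _, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, Vt[:k]

def foreground_mask(frame, mean, basis, thresh):
    # Reconstruct the frame from the background subspace and threshold
    # the residual: pixels the model cannot explain are foreground.
    recon = mean + basis.T @ (basis @ (frame - mean))
    return np.abs(frame - recon) > thresh

# Synthetic data: background = sine pattern with varying gain, plus a blob.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 2.0 * np.pi, 100)
bg = np.sin(t)
amps = np.linspace(0.8, 1.2, 20)                # illumination variation
frames = amps[:, None] * bg + 0.005 * rng.standard_normal((20, 100))
mean, basis = pca_background_model(frames, k=1)

frame = bg.copy()
frame[10:15] += 5.0                             # "moving object"
mask = foreground_mask(frame, mean, basis, thresh=1.0)
```

Pure background variation (here, illumination gain) lies inside the subspace and is reconstructed away, while the blob's residual survives the threshold; on the FPGA the same projection reduces to matrix-vector products per frame.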
Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors
López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena
2013-01-01
This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are revised, including high resolution range profile generation, motion compensation, and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so to determine the most suitable implementation platform the analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of the two processes, image generation and comparison with a database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
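One of the standard stages, turning a stepped-frequency response into a high resolution range profile, reduces to an inverse FFT. A toy sketch with two synthetic point scatterers (bin positions and amplitudes chosen for illustration, not taken from the paper's data):

```python
import numpy as np

def range_profile(freq_response):
    # HRRP: the inverse FFT of a stepped-frequency response concentrates
    # each scatterer's energy at the bin proportional to its range delay.
    return np.abs(np.fft.ifft(freq_response))

# Two synthetic point scatterers at range bins 5 and 20.
N = 64
k = np.arange(N)
H = np.exp(-2j * np.pi * k * 5 / N) + 0.5 * np.exp(-2j * np.pi * k * 20 / N)
profile = range_profile(H)
```

Each complex exponential in the frequency response maps to a single range bin after the inverse transform, which is why the per-profile cost is dominated by one O(N log N) FFT; motion compensation and ISAR formation add further FFT-sized stages on top.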
A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Kottalam, Jey; Yang, Jiyan
We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices, and vector computation using SIMD units. We report these results and their implications for the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.
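A CX factorization can be sketched in NumPy: sample actual columns of A with probability proportional to rank-k leverage scores, then solve a least-squares problem for X so that A is approximated by C X. Exact SVD-based leverage scores are used below for clarity, whereas the paper's algorithm computes them via randomization; shapes and parameters are illustrative.

```python
import numpy as np

def cx_decomposition(A, k, c, rng):
    # CX: pick c columns of A with probability proportional to rank-k
    # leverage scores, then fit X by least squares so that A ~ C @ X.
    # (Exact SVD-based scores here; the randomized CX approximates them.)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = (Vt[:k] ** 2).sum(axis=0)          # column leverage scores
    cols = rng.choice(A.shape[1], size=c, replace=False, p=lev / lev.sum())
    C = A[:, cols]
    X = np.linalg.lstsq(C, A, rcond=None)[0]
    return C, X, cols
```

Unlike a plain truncated SVD, the factor C consists of actual data columns, which is what makes the decomposition interpretable for applications such as identifying informative ions in an MSI dataset.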
Determinations of Vus using inclusive hadronic τ decay data
Maltman, Kim; Hudspith, Renwick James; Lewis, Randy; ...
2016-08-30
Two methods for determining |Vus| employing inclusive hadronic τ decay data are discussed. The first is the conventional flavor-breaking sum rule determination, whose usual implementation produces results ~3σ low compared to three-family unitarity expectations. The second is a novel approach combining experimental strange hadronic τ distributions with lattice light-strange current–current two-point function data. Preliminary explorations of the latter show that the method promises |Vus| determinations competitive with those from Kℓ3 and Γ[Kμ2]/Γ[πμ2]. For the former, systematic issues in the conventional implementation are investigated. Unphysical dependences of |Vus| on the choice of sum rule weight, w, and upper limit, s0, of the weighted experimental spectral integrals are observed; the source of these problems is identified and a new implementation which overcomes them is developed. The lattice results are shown to provide a tool for quantitatively assessing truncation uncertainties for the slowly converging D=2 OPE series. Our results for |Vus| from this new implementation are shown to be free of unphysical w- and s0-dependences, and ~0.0020 higher than those produced by the conventional implementation. With preliminary new Kπ branching fraction results as input, we find |Vus| in excellent agreement with that obtained from Kℓ3, and compatible within errors with expectations from three-family unitarity.
2013-01-01
Background Two of the current methodological barriers to implementation science efforts are the lack of agreement regarding constructs hypothesized to affect implementation success and identifiable measures of these constructs. In order to address these gaps, the main goals of this paper were to identify a multi-level framework that captures the predominant factors that impact implementation outcomes, conduct a systematic review of available measures assessing constructs subsumed within these primary factors, and determine the criterion validity of these measures in the search articles. Method We conducted a systematic literature review to identify articles reporting the use or development of measures designed to assess constructs that predict the implementation of evidence-based health innovations. Articles published through 12 August 2012 were identified through MEDLINE, CINAHL, PsycINFO and the journal Implementation Science. We then utilized a modified five-factor framework in order to code whether each measure contained items that assess constructs representing structural, organizational, provider, patient, and innovation level factors. Further, we coded the criterion validity of each measure within the search articles obtained. Results Our review identified 62 measures. Results indicate that organization, provider, and innovation-level constructs have the greatest number of measures available for use, whereas structural and patient-level constructs have the least. Additionally, relatively few measures demonstrated criterion validity, or reliable association with an implementation outcome (e.g., fidelity). Discussion In light of these findings, our discussion centers on strategies that researchers can utilize in order to identify, adapt, and improve extant measures for use in their own implementation research. 
In total, our literature review and resulting measures compendium increases the capacity of researchers to conceptualize and measure implementation-related constructs in their ongoing and future research. PMID:23414420
On the convergence of a discrete Kirchhoff triangle method valid for shells of arbitrary shape
NASA Astrophysics Data System (ADS)
Bernadou, Michel; Eiroa, Pilar Mato; Trouve, Pascal
1994-10-01
In a recent paper by the same authors, we have thoroughly described how to extend to the case of general shells the well known DKT (discrete Kirchhoff triangle) methods which are now classically used to solve plate problems. In that paper we have also detailed how to realize the implementation and reported some numerical results obtained for classical benchmarks. The aim of this paper is to prove the convergence of a closely related method and to obtain corresponding error estimates.
Kaehler, G; Wagner, A J
2013-06-01
Current implementations of fluctuating ideal-gas descriptions with the lattice Boltzmann method are based on a fluctuation dissipation theorem (FDT) which, while greatly simplifying the implementation, strictly holds only for zero mean velocity and small fluctuations. We show how to derive the fluctuation dissipation theorem for all k, which was done only for k=0 in previous derivations. The consistent derivation requires, in principle, locally velocity-dependent multirelaxation-time transforms. Such an implementation is computationally prohibitively expensive but, with a small computational trick, it is feasible to reproduce the correct FDT without overhead in computation time. It is then shown that the previous standard implementations perform poorly for nonvanishing mean velocity, as indicated by violations of Galilean invariance of measured structure factors. Results obtained with the method introduced here show a significant reduction of the Galilean invariance violations.
Fire fighters as basic life support responders: A study of successful implementation
Høyer, Christian Bjerre; Christensen, Erika Frischknecht
2009-01-01
Background First responders are recommended as a supplement to the Emergency Medical Services (EMS) in order to achieve early defibrillation. Practical and organisational aspects are essential when trying to implement new parts in the "Chain of Survival"; areas to address include minimizing dispatch time, ensuring efficient and quick communication, and choosing areas with appropriate driving distances. The aim of this study was to implement a system using Basic Life Support (BLS) responders equipped with an automatic external defibrillator in an area with relatively short emergency medical services' response times. Success criteria for implementation were defined as arrival of the BLS responders before the EMS, attachment (and use) of the AED, and successful defibrillation. Methods This was a prospective observational study from September 1, 2005 to December 31, 2007 (28 months) in the city of Aarhus, Denmark. The BLS responder system was implemented in an area up to three kilometres (driving distance) from the central fire station, encompassing approximately 81,500 inhabitants. The team trained on each shift and response times were reduced by choice of area and by sending the alarm directly to the fire brigade dispatcher. Results The BLS responders had 1076 patient contacts. The median response time was 3.5 minutes (25th percentile 2.75, 75th percentile 4.25). The BLS responders arrived before EMS in 789 of the 1076 patient contacts (73%). Cardiac arrest was diagnosed in 53 cases, the AED was attached in 29 cases, and a shockable rhythm was detected in nine cases. Eight were defibrillated using an AED. Seven of the eight obtained return of spontaneous circulation (ROSC). Six of the seven obtaining ROSC survived more than 30 days. Conclusion In this study, the implementation of BLS responders may have resulted in successful resuscitations. 
On the basis of the close cooperation between all participants in the chain of survival, this project contributed to the first link: short response time and trained personnel to ensure early defibrillation. PMID:19341457
McElhinny, Mary Louise; Hooper, Christine
2008-01-01
A nurse-driven performance improvement project designed to reduce the incidence of hospital-acquired ulcers of the heel in an acute care setting was evaluated. This was a descriptive evaluative study using secondary data analysis. Data were collected in 2004, prior to implementation of the prevention project, and compared to results obtained in 2006, after the project was implemented. Data were collected in a 172-bed, not-for-profit inpatient acute care facility in North Central California. All medical-surgical inpatients aged 18 years and older were included in the samples. Data were collected on 113 inpatients prior to implementation of the project in 2004. Data were also collected on a sample of 124 inpatients in 2006. The prevalence and incidence of heel pressure ulcers were obtained through skin surveys prior to implementation of the prevention program and following its implementation. Results from 2004 were compared to data collected in 2006 after introduction of the Braden Scale for Predicting Pressure Sore Risk. Heel pressure ulcers were staged using the National Pressure Ulcer Advisory Panel (NPUAP) staging system and recommendations provided by the Agency for Health Care Quality Research (AHRQ) clinical practice guidelines. The incidence of hospital-acquired heel pressure ulcers in 2004 was 13.5% (4 of 37 patients). After implementation of the program in 2006, the incidence of hospital-acquired heel pressure ulcers was 13.8% (5 of 36 patients). The intervention did not appear to receive the staff nurse support needed to make the project successful. Factors that influenced the lack of support may have included: (1) the educational method used, (2) lack of organization-approved, evidence-based standardized protocols for prevention and treatment of heel ulcers, and (3) failure of facility management to convey the importance of, as well as their support for, the project.
2013-01-01
Background The Implementation Research Institute (IRI) provides two years of training in mental health implementation science for 10 new fellows each year. The IRI is supported by a National Institute of Mental Health (NIMH) R25 grant and the Department of Veterans Affairs (VA). Fellows attend two annual week-long trainings at Washington University in St. Louis. Training is provided through a rigorous curriculum, local and national mentoring, a ‘learning site visit’ to a federally funded implementation research project, pilot research, and grant writing. Methods This paper describes the rationale, components, outcomes to date, and participant experiences with IRI. Results IRI outcomes include 31 newly trained implementation researchers, their new grant proposals, contributions to other national dissemination and implementation research training, and publications in implementation science authored by the Core Faculty and fellows. Former fellows have obtained independent research funding in implementation science and are beginning to serve as mentors for more junior investigators. Conclusions Based on the number of implementation research grant proposals and papers produced by fellows to date, the IRI is proving successful in preparing new researchers who can inform the process of making evidence-based mental healthcare more available through real-world settings of care and who are advancing the field of implementation science. PMID:24007290
A Two-Stage Procedure Toward the Efficient Implementation of PANS and Other Hybrid Turbulence Models
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Girimaji, Sharath S.
2004-01-01
The main objective of this article is to introduce and to show the implementation of a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for Partially-Averaged Navier-Stokes (PANS) and other hybrid models. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The first step is to solve the unsteady or steady Reynolds Averaged Navier-Stokes (URANS/RANS) equations. From this preprocessing step, the turbulence length-scale field is obtained. This is then used to compute the characteristic length-scale ratio between the turbulence scale and the grid spacing. Based on this ratio, we can assess the finest scale resolution that a given grid for a given flow can support. Along with other additional criteria, we are able to analytically identify the appropriate hybrid solver resolution for different regions of the flow. This procedure removes the grid dependency issue that affects the results produced by different hybrid procedures in solving unsteady flows. The formulation, implementation methodology, and a validation example are presented. We implemented this capability in a production Computational Fluid Dynamics (CFD) code, PAB3D, for the simulation of unsteady flows.
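The length-scale-ratio step described above can be sketched numerically. This is a minimal illustration, not the PAB3D implementation: it assumes a k-epsilon style precursor RANS field, forms the turbulence length scale k^(3/2)/epsilon, and converts the grid-to-turbulence-scale ratio into a PANS-style resolution parameter f_k using the (Delta/L_t)^(2/3) form common in the PANS literature; the function name and the clipping floor are illustrative.

```python
import numpy as np

def pans_fk(k, eps, delta, fk_min=0.1):
    """Estimate a PANS-style resolution parameter f_k from a precursor RANS
    solution: f_k ~ (grid spacing / turbulence length scale)^(2/3), clipped
    to [fk_min, 1].  Illustrative form, not the paper's exact criteria."""
    k = np.asarray(k, dtype=float)
    eps = np.asarray(eps, dtype=float)
    L_t = k**1.5 / eps                 # turbulence length scale from RANS k, eps
    ratio = delta / L_t                # grid spacing over turbulence scale
    return np.clip(ratio**(2.0 / 3.0), fk_min, 1.0)

# A grid fine relative to the turbulence scale supports more resolved scales
# (smaller f_k); a coarse grid is pushed back toward RANS (f_k -> 1).
print(pans_fk(k=1.0, eps=1.0, delta=0.05))
```

In a full solver this evaluation would be done field-wise after the RANS preprocessing step, giving a spatially varying resolution prescription for the hybrid computation.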
ICT Security Curriculum or How to Respond to Current Global Challenges
ERIC Educational Resources Information Center
Poboroniuc, Marian Silviu; Naaji, Antoanela; Ligusova, Jana; Grout, Ian; Popescu, Dorin; Ward, Tony; Grindei, Laura; Ruseva, Yoana; Bencheva, Nina; Jackson, Noel
2017-01-01
The paper presents some results obtained through the implementation of the Erasmus LLP "SALEIE" (Strategic Alignment of Electrical and Information Engineering in European Higher Education Institutions). The aim of the project was to bring together experts from European universities to enhance the competitiveness of Electrical and…
Expectation Effects in Organizational Change
ERIC Educational Resources Information Center
King, Albert S.
1974-01-01
The experiment reported here was conducted during a 12-month period at four plants owned by the same company. Managers were given artificial reports about previous findings obtained in implementing job enlargement and job rotation programs. Led to expect higher productivity as a result of these organizational innovations, the managers increased…
Quantum Statistical Mechanics on a Quantum Computer
NASA Astrophysics Data System (ADS)
Raedt, H. D.; Hams, A. H.; Michielsen, K.; Miyashita, S.; Saito, K.
We describe a quantum algorithm to compute the density of states and thermal equilibrium properties of quantum many-body systems. We present results obtained by running this algorithm on a software implementation of a 21-qubit quantum computer for the case of an antiferromagnetic Heisenberg model on triangular lattices of different size.
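Since the algorithm above was validated on a software emulation, a classical reference computation is easy to sketch: exact diagonalization of a very small antiferromagnetic Heisenberg cluster gives the full spectrum (the density of states), from which thermal equilibrium properties follow. This is a hedged illustration on a 3-site triangle, the smallest frustrated lattice, not the paper's 21-qubit simulator.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def op(site_op, i, n):
    """Embed a single-site operator at site i in an n-spin Hilbert space."""
    mats = [I2] * n
    mats[i] = site_op
    return reduce(np.kron, mats)

def heisenberg(n, bonds, J=1.0):
    """H = J * sum_<ij> S_i . S_j over the bond list (J > 0: antiferromagnet)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for (i, j) in bonds:
        for s in (sx, sy, sz):
            H += J * op(s, i, n) @ op(s, j, n)
    return H

bonds = [(0, 1), (1, 2), (0, 2)]                # triangular plaquette
E = np.linalg.eigvalsh(heisenberg(3, bonds))    # spectrum = density of states
beta = 1.0
Z = np.sum(np.exp(-beta * E))                   # partition function
U = np.sum(E * np.exp(-beta * E)) / Z           # thermal average energy
print(E, U)
```

For the triangle the exact levels are E = -3/4 (fourfold) and E = +3/4 (fourfold), so such a reference is a convenient check for any quantum-computer implementation of the density-of-states algorithm.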
Comparing video and avatar technology for a health education application for deaf people.
Chiriac, Ionuţ Adrian; Stoicu-Tivadar, Lăcrămioara; Podoleanu, Elena
2015-01-01
The article describes the steps and results of parallel research investigating e-health system design and implementation for deaf people in both avatar and video technology. The application translates medical knowledge and concepts into sign language for deaf users through an avatar. Two types of avatar technology are considered: a Video Avatar with a recorded-human interface and an Animated Avatar with an animated-figure interface. The comparative study covers data collection, design, implementation, and the impact study. The comparative analysis of video and animated technology for data collection shows that video format editing requires fewer skills and that results are obtained more easily, quickly, and cheaply. The video technology also supports an architecture that is easier to design and implement. The impact study for two deaf student communities is under development, and for the time being the video avatar is better perceived.
Control programme for cystic echinococcosis in Uruguay.
Irabedra, Pilar; Ferreira, Ciro; Sayes, Julio; Elola, Susana; Rodríguez, Miriam; Morel, Noelia; Segura, Sebastian; Santos, Estela Dos; Guisantes, Jorge A
2016-05-24
Cystic echinococcosis is a highly endemic parasitic zoonosis that is present in the Southern Cone countries of America. For several decades, various prevention and control programmes have been implemented in different countries and regions, with varying results. In Uruguay, a new control programme was implemented in 2006 that employed new strategies for canine diagnosis and treatment, dog population control, diagnosis in humans, epidemiological surveillance, and health education, including community participation. The control programme in Uruguay addresses the control and surveillance of the disease from a holistic perspective based on Primary Health Care, which has strengthened the community's participation in developing and coordinating activities in an interdisciplinary manner. Similarly, the control programme that is currently implemented is based on a risk-focused approach. The surveillance and control measures were focused on small villages and extremely poor urban areas. In this study, the strategies used and the results obtained from 2008-2013 are analysed and discussed.
A Multi-Level Parallelization Concept for High-Fidelity Multi-Block Solvers
NASA Technical Reports Server (NTRS)
Hatay, Ferhat F.; Jespersen, Dennis C.; Guruswamy, Guru P.; Rizk, Yehia M.; Byun, Chansup; Gee, Ken; VanDalsem, William R. (Technical Monitor)
1997-01-01
The integration of high-fidelity Computational Fluid Dynamics (CFD) analysis tools with the industrial design process benefits greatly from robust implementations that are transportable across a wide range of computer architectures. In the present work, a hybrid domain-decomposition and parallelization concept was developed and implemented into the widely-used NASA multi-block Computational Fluid Dynamics (CFD) packages ENSAERO and OVERFLOW. The new parallel solver concept, PENS (Parallel Euler Navier-Stokes Solver), employs both fine and coarse granularity in data partitioning as well as data coalescing to obtain the desired load-balance characteristics on the available computer platforms. This multi-level parallelism implementation itself introduces no changes to the numerical results, hence the original fidelity of the packages is identically preserved. The present implementation uses the Message Passing Interface (MPI) library for interprocessor message passing and memory accessing. By choosing an appropriate combination of the available partitioning and coalescing capabilities only during the execution stage, the PENS solver becomes adaptable to different computer architectures, from shared-memory to distributed-memory platforms with varying degrees of parallelism. The PENS implementation on the IBM SP2 distributed-memory environment at the NASA Ames Research Center obtains 85 percent scalable parallel performance using fine-grain partitioning of single-block CFD domains on up to 128 wide computational nodes. Multi-block CFD simulations of complete aircraft achieve 75 percent of perfectly load-balanced execution using data coalescing and the two levels of parallelism. SGI PowerChallenge, SGI Origin 2000, and a cluster of workstations are the other platforms on which the robustness of the implementation has been tested. 
The performance behavior on the other computer platforms with a variety of realistic problems will be included as this on-going study progresses.
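The coarse-grain load-balancing idea above, assigning grid blocks of unequal size to processors so the per-processor loads even out, can be sketched with a greedy largest-first heuristic. This is an illustrative stand-in, not the PENS solver's actual partitioning/coalescing logic; the block sizes and the `balance` helper are invented for the example.

```python
import heapq

def balance(block_sizes, n_procs):
    """Greedy largest-first (LPT) assignment of grid blocks to processors:
    repeatedly give the biggest unassigned block to the least-loaded
    processor.  A simple stand-in for coarse-grain load balancing."""
    heap = [(0, p, []) for p in range(n_procs)]   # (load, proc id, block list)
    heapq.heapify(heap)
    for b, size in sorted(enumerate(block_sizes), key=lambda t: -t[1]):
        load, p, blocks = heapq.heappop(heap)     # least-loaded processor
        heapq.heappush(heap, (load + size, p, blocks + [b]))
    return sorted(heap, key=lambda t: t[1])       # order by processor id

# Eight unequal CFD blocks mapped onto four processors
parts = balance([90, 70, 40, 40, 30, 20, 10, 10], 4)
loads = [load for load, _, _ in parts]
print(loads)   # imbalance is bounded by the largest single block
```

In practice very large blocks would additionally be split (fine-grain partitioning) and very small ones merged (coalescing), which is exactly the two-level flexibility the abstract describes.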
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Pope, Stephen B.
2013-04-01
The Rate-Controlled Constrained-Equilibrium (RCCE) method is a thermodynamics-based dimension reduction method which enables representation of chemistry involving n_s species in terms of fewer n_r constraints. Here we focus on the application of the RCCE method to Lagrangian particle probability density function based computations. In these computations, at every reaction fractional step, given the initial particle composition (represented using RCCE), we need to compute the reaction mapping, i.e., the particle composition at the end of the time step. In this work we study three different implementations of RCCE for computing this reaction mapping, and compare their relative accuracy and efficiency. These implementations include: (1) RCCE/TIFS (Trajectory In Full Space): this involves solving a system of n_s rate equations for all the species in the full composition space to obtain the reaction mapping. The other two implementations obtain the reaction mapping by solving a reduced system of n_r rate equations obtained by projecting the n_s rate equations for species evaluated in the full space onto the constrained subspace. These implementations include (2) RCCE: this is the classical implementation of RCCE, which uses a direct projection of the rate equations for species onto the constrained subspace; and (3) RCCE/RAMP (Reaction-mixing Attracting Manifold Projector): this is a new implementation introduced here which uses an alternative projector obtained using the RAMP approach. We test these three implementations of RCCE for methane/air premixed combustion in the partially-stirred reactor with chemistry represented using the n_s = 31 species GRI-Mech 1.2 mechanism with n_r = 13 to 19 constraints. 
We show that: (a) the classical RCCE implementation involves an inaccurate projector which yields large errors (over 50%) in the reaction mapping; (b) both RCCE/RAMP and RCCE/TIFS approaches yield significantly lower errors (less than 2%); and (c) overall the RCCE/TIFS approach is the most accurate, efficient (by orders of magnitude) and robust implementation.
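The projection at the heart of the classical RCCE implementation, reducing n_s species rate equations to n_r constraint rate equations, can be sketched in a few lines. This is a toy linear-kinetics illustration of the direct projection r_dot = B^T S(z); the constraint matrix B, the source term S, and all sizes are invented for the example and are not the GRI-Mech constraints of the paper.

```python
import numpy as np

# Toy sketch of the classical RCCE projection: linear constraints r = B^T z
# evolve with r_dot = B^T S(z), where z is the full n_s-species composition,
# S(z) its full-space source term, and B (n_s x n_r) the constraint matrix.
n_s, n_r = 5, 2
B = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1],
              [1, 1]], dtype=float)   # each column defines one linear constraint

def S(z):
    """Stand-in for the full-space chemical source term (linear toy kinetics)."""
    return -z                          # every species decays at unit rate

z = np.ones(n_s)        # current full-space composition
r = B.T @ z             # constraint values carried by the particle
r_dot = B.T @ S(z)      # classical RCCE: project full rates onto constraints
print(r, r_dot)
```

The paper's point is that this direct projector can be inaccurate; RCCE/RAMP replaces `B.T` by an alternative projector, and RCCE/TIFS avoids the projection entirely by integrating all n_s equations.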
Circuit model for single-energy-level trap centers in FETs
NASA Astrophysics Data System (ADS)
Albahrani, Sayed Ali; Parker, Anthony; Heimlich, Michael
2016-12-01
A circuit implementation of a single-energy-level trap center in an FET is presented. When included in transistor models, it explains the temperature- and potential-dependent time constants seen in the circuit manifestations of charge trapping, namely gate lag and drain overshoot. The implementation is suitable for both time-domain and harmonic-balance simulations. The proposed model is based on the Shockley-Read-Hall (SRH) statistics of the trapping process. The results of isothermal pulse measurements performed on a GaN HEMT are presented. These measurements allow charge trapping to be characterized in isolation from the effect of self-heating. These results are used to obtain the parameters of the proposed model.
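The temperature dependence that such an SRH-based trap model captures comes from the thermally activated emission rate. A minimal sketch of the Arrhenius-type time constant, with illustrative parameter values that are not fitted GaN HEMT data:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

def srh_emission_time(T, E_a, tau0=1e-9):
    """SRH-style thermally activated emission time constant,
    tau(T) = tau0 * exp(E_a / kT).  tau0 (attempt time) and E_a
    (activation energy, eV) are illustrative, not measured values."""
    return tau0 * np.exp(E_a / (k_B * T))

# Hotter device -> exponentially faster detrapping (shorter gate-lag tail);
# deeper trap -> slower detrapping.
print(srh_emission_time(300.0, 0.5), srh_emission_time(350.0, 0.5))
```

Fitting tau(T) extracted from isothermal pulse measurements to this form is the standard way to recover a trap's activation energy from an Arrhenius plot.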
[Analysis of the results of the SEIMC External Quality Control Program. Year 2012].
de Gopegui Bordes, Enrique Ruiz; Guna Serrano, M del Remedio; Orta Mira, Nieves; Ovies, María Rosario; Poveda, Marta; Gimeno Cardona, Concepción
2014-02-01
The External Quality Control Program of the Spanish Society of Infectious Diseases and Clinical Microbiology (SEIMC) includes controls for bacteriology, serology, mycology, parasitology, mycobacteria, virology and molecular microbiology. This article presents the most relevant conclusions and lessons from the 2012 controls. As a whole, the results obtained in 2012 confirm the excellent skill and good technical standards found in previous editions. However, erroneous results can be obtained in any laboratory and in clinically relevant determinations. Once again, the results of this program highlighted the need to implement both internal and external controls in order to assure the maximal quality of the microbiological tests. Copyright © 2014 Elsevier España, S.L. All rights reserved.
Introduction to study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
1992-01-01
During this period, simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee, and the results will be extrapolated to HDTV video sequences. The simulator for the compression aspects of the Advanced Digital Television (ADTV) system has been completed; other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given, and some coding results obtained using the simulator are discussed. These results are compared to those obtained using the CCITT H.261 standard and evaluated in the context of the CCSDS specifications, and some suggestions are made as to how the ADTV system could be implemented in the NASA network.
Ruiz de Gopegui Bordes, Enrique; Serrano, M del Remedio Guna; Orta Mira, Nieves; Ovies, María Rosario; Poveda, Marta; Cardona, Concepción Gimeno
2011-12-01
The External Quality Control Program of the Spanish Society of Infectious Diseases and Clinical Microbiology includes controls for bacteriology, serology, mycology, parasitology, mycobacteria, virology and molecular microbiology. This article presents the most important conclusions and lessons of the 2010 controls. As a whole, the results obtained in 2010 confirm the excellent skill and good technical standards found in previous years. However, erroneous results can be obtained in any laboratory and in clinically relevant determinations. The results of this program highlight the need to implement both internal and external controls to ensure maximal quality of microbiological tests. Copyright © 2011 Elsevier España S.L. All rights reserved.
[Analysis of the results of the SEIMC External Quality Control Program. Year 2014].
Gopegui Bordes, Enrique Ruiz de; Guna Serrano, M Del Remedio; Orta Mira, Nieves; Medina González, Rafael; Rosario Ovies, María; Poveda, Marta; Gimeno Cardona, Concepción
2016-07-01
The External Quality Control Program of the Spanish Society of Infectious Diseases and Clinical Microbiology (SEIMC) includes controls for bacteriology, serology, mycology, parasitology, mycobacteria, virology and molecular microbiology. This article presents the most relevant conclusions and lessons from the 2014 controls. As a whole, the results obtained in 2014 confirm the excellent skill and good technical standards found in previous editions. However, erroneous results can be obtained in any laboratory and in clinically relevant determinations. Once again, the results of the SEIMC program highlighted the need to implement both internal and external controls in order to assure the maximal quality of the microbiological tests. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simakov, Andrei N., E-mail: simakov@lanl.gov; Molvig, Kim
2016-03-15
Paper I [A. N. Simakov and K. Molvig, Phys. Plasmas 23, 032115 (2016)] obtained a fluid description for an unmagnetized collisional plasma with multiple ion species. To evaluate collisional plasma transport fluxes, required for such a description, two linear systems of equations need to be solved to obtain the corresponding transport coefficients. In general, this should be done numerically. Herein, the general formalism is used to obtain analytical expressions for such fluxes for several specific cases of interest: a deuterium-tritium plasma; a plasma containing two ion species with strongly disparate masses, which agrees with previously obtained results; and a three ion species plasma made of deuterium, tritium, and gold. These results can be used for understanding the behavior of the aforementioned plasmas, or for verifying a code implementation of the general multi-ion formalism.
Simakov, Andrei Nikolaevich; Molvig, Kim
2016-03-17
Paper I [A. N. Simakov and K. Molvig, Phys. Plasmas 23, 032115 (2016)] obtained a fluid description for an unmagnetized collisional plasma with multiple ion species. To evaluate collisional plasma transport fluxes, required for such a description, two linear systems of equations need to be solved to obtain the corresponding transport coefficients. In general, this should be done numerically. Herein, the general formalism is used to obtain analytical expressions for such fluxes for several specific cases of interest: a deuterium-tritium plasma; a plasma containing two ion species with strongly disparate masses, which agrees with previously obtained results; and a three ion species plasma made of deuterium, tritium, and gold. We find that these results can be used for understanding the behavior of the aforementioned plasmas, or for verifying a code implementation of the general multi-ion formalism.
Implementation of Satellite Techniques in the Air Transport
NASA Astrophysics Data System (ADS)
Fellner, Andrzej; Jafernik, Henryk
2016-06-01
The article describes the process of implementing satellite systems in Polish aviation, which contributed to the accomplishment of the Performance-Based Navigation (PBN) concept. Since 1991 the authors have introduced satellite navigation equipment in the Polish Air Force. The studies and research provided the Polish Air Force with alternative approaches, modernized its navigation and landing systems, and achieved compatibility with systems of the North Atlantic Treaty Organization (NATO) and the International Civil Aviation Organization (ICAO). The experience acquired, the military tests conducted, and the results obtained made it possible to take up scientific research in the civil aviation environment. Therefore, in 2008 cooperation was launched with the Polish Air Navigation Services Agency (PANSA). Thanks to this cooperation, three fundamental international projects have been compiled and fulfilled: EGNOS APV MIELEC (EGNOS Introduction in European Eastern Region - APV Mielec), HEDGE (Helicopters Deploy GNSS in Europe), and SHERPA (Support ad-Hoc to Eastern Region Pre-operational in GNSS). The successful completion of these projects enabled the implementation of 21 RNAV GNSS final approach procedures at Polish airports, contributing to the implementation of PBN in Poland as well as of ICAO resolution A37-11. The results of the research that served for the implementation of satellite techniques in air transport are the focus of this article.
A Game-Based Approach to Learning the Idea of Chemical Elements and Their Periodic Classification
ERIC Educational Resources Information Center
Franco-Mariscal, Antonio Joaquín; Oliva-Martínez, José María; Blanco-López, Ángel; España-Ramos, Enrique
2016-01-01
In this paper, the characteristics and results of a teaching unit based on the use of educational games to learn the idea of chemical elements and their periodic classification in secondary education are analyzed. The method is aimed at Spanish students aged 15-16 and consists of 24 1-h sessions. The results obtained on implementing the teaching…
Caudle, Kelly E.; Dunnenberger, Henry M.; Freimuth, Robert R.; Peterson, Josh F.; Burlison, Jonathan D.; Whirl-Carrillo, Michelle; Scott, Stuart A.; Rehm, Heidi L.; Williams, Marc S.; Klein, Teri E.; Relling, Mary V.; Hoffman, James M.
2017-01-01
Introduction: Reporting and sharing pharmacogenetic test results across clinical laboratories and electronic health records is a crucial step toward the implementation of clinical pharmacogenetics, but allele function and phenotype terms are not standardized. Our goal was to develop terms that can be broadly applied to characterize pharmacogenetic allele function and inferred phenotypes. Materials and methods: Terms currently used by genetic testing laboratories and in the literature were identified. The Clinical Pharmacogenetics Implementation Consortium (CPIC) used the Delphi method to obtain a consensus and agree on uniform terms among pharmacogenetic experts. Results: Experts with diverse involvement in at least one area of pharmacogenetics (clinicians, researchers, genetic testing laboratorians, pharmacogenetics implementers, and clinical informaticians; n = 58) participated. After completion of five surveys, a consensus (>70%) was reached with 90% of experts agreeing to the final sets of pharmacogenetic terms. Discussion: The proposed standardized pharmacogenetic terms will improve the understanding and interpretation of pharmacogenetic tests and reduce confusion by maintaining consistent nomenclature. These standard terms can also facilitate pharmacogenetic data sharing across diverse electronic health care record systems with clinical decision support. Genet Med 19(2), 215-223. PMID:27441996
Implementation of model predictive control for resistive wall mode stabilization on EXTRAP T2R
NASA Astrophysics Data System (ADS)
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2015-10-01
A model predictive control (MPC) method for stabilization of the resistive wall mode (RWM) in the EXTRAP T2R reversed-field pinch is presented. The system identification technique is used to obtain a linearized empirical model of EXTRAP T2R. MPC employs the model for prediction and computes optimal control inputs that satisfy a performance criterion. The use of a linearized form of the model allows for a compact formulation of MPC, implemented on a millisecond timescale, that can be used for real-time control. The design allows the user to arbitrarily suppress any selected Fourier mode. The experimental results from EXTRAP T2R show that the designed and implemented MPC successfully stabilizes the RWM.
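The core MPC computation, predicting over a finite horizon with the identified linear model and minimizing a quadratic performance criterion, can be sketched for the unconstrained case, where the optimum reduces to a linear solve and the receding-horizon controller to a fixed feedback gain. This is a generic textbook batch formulation, not the EXTRAP T2R controller; the unstable scalar test system is a crude stand-in for an RWM growth rate.

```python
import numpy as np

def mpc_gain(A, B, Q, R, N):
    """Unconstrained linear MPC over horizon N: stack the predictions
    X = F x0 + G U, minimize sum x'Qx + u'Ru, and return the first-move
    feedback gain (u0 = -K x0, receding horizon)."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    # U* = -(G'QG + R)^-1 G'QF x0 ; keep only the first move
    K = np.linalg.solve(G.T @ Qbar @ G + Rbar, G.T @ Qbar @ F)
    return K[:m, :]

# Unstable scalar mode x+ = 1.2 x + u (illustrative growth rate)
K = mpc_gain(np.array([[1.2]]), np.array([[1.0]]), np.eye(1), 0.1 * np.eye(1), N=10)
A_cl = 1.2 - K[0, 0]          # closed-loop pole with u = -K x
print(K, abs(A_cl) < 1.0)     # the receding-horizon law stabilizes the mode
```

With input or mode-amplitude constraints the solve becomes a small quadratic program per sample, which is why the compact linearized formulation matters for a millisecond-timescale implementation.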
NASA Astrophysics Data System (ADS)
Ismail, Nurul Syuhada; Arifin, Norihan Md.; Bachok, Norfifah; Mahiddin, Norhasimah
2017-01-01
A numerical study is performed to evaluate the problem of stagnation-point flow towards a shrinking sheet with homogeneous-heterogeneous reaction effects. By using a non-similar transformation, the governing equations can be reduced to ordinary differential equations. Results are then obtained numerically by the shooting method implemented in Maple. Based on the numerical results obtained, dual solutions exist for the velocity ratio parameter λ < 0. A stability analysis is then carried out with the bvp4c solver in MATLAB to determine which of the two solutions is stable.
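The shooting idea, guessing the missing initial slope, integrating the ODE, and driving the far-boundary mismatch to zero with a root finder, can be sketched on a simple linear two-point problem. This is a generic illustration, not the paper's reacting-flow equations, and it uses SciPy rather than Maple/bvp4c.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Two-point BVP:  y'' = -y,  y(0) = 0,  y(1) = sin(1).
# Exact solution y = sin(x), so the "missing" initial slope is y'(0) = 1.
def rhs(x, y):
    return [y[1], -y[0]]

def residual(s):
    """Integrate with guessed slope y'(0) = s; return the endpoint mismatch."""
    sol = solve_ivp(rhs, [0.0, 1.0], [0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - np.sin(1.0)

# Shooting: the boundary condition is satisfied at the root of the mismatch
s_star = brentq(residual, 0.0, 2.0)
print(s_star)   # -> 1.0 up to integration tolerance
```

For the boundary-layer equations of the abstract the same loop applies, except that the far boundary sits at a large similarity coordinate and, for λ < 0, two distinct roots (the dual solutions) can be bracketed.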
Kannan, Ravishekar; Guo, Peng; Przekwas, Andrzej
2016-06-01
This paper is the first in a series wherein efficient computational methods are developed and implemented to accurately quantify the transport, deposition, and clearance of microsized particles (range of interest: 2 to 10 µm) in the human respiratory tract. In particular, this paper (part I) deals with (i) development of a detailed 3D computational finite volume mesh comprising the NOPL (nasal, oral, pharyngeal and larynx), trachea and several airway generations; (ii) use of CFD Research Corporation's finite volume Computational Biology (CoBi) flow solver to obtain the flow physics for an oral inhalation simulation; (iii) implementation of a novel nodal inverse distance weighted Eulerian-Lagrangian formulation to accurately obtain the deposition; and (iv) development of a Wind-Kessel boundary condition algorithm. This new Wind-Kessel boundary condition algorithm allows the 'escaped' particles to reenter the airway through the outlets, thereby to an extent accounting for the drawbacks of having a finite number of lung generations in the computational mesh. The deposition rates in the NOPL, trachea, and the first and second bifurcations were computed, and they were in reasonable accord with the Typical Path Length model. The quantitatively validated results indicate that these developments will be useful for (i) obtaining depositions in diseased lungs (because of asthma and COPD), for which there are no empirical models, and (ii) obtaining the secondary (mucociliary) clearance of the deposited particles. Copyright © 2015 John Wiley & Sons, Ltd.
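The nodal inverse-distance-weighting step can be sketched as follows: each depositing particle's contribution is split over the surrounding mesh nodes in proportion to 1/d and normalized. This is a minimal 2D illustration with an invented helper, not the CoBi solver's actual formulation.

```python
import numpy as np

def idw_weights(nodes, p, eps=1e-12):
    """Inverse-distance weights of a particle at p w.r.t. surrounding nodes.
    A depositing particle's mass is split over the nodes in proportion to
    1/d, normalized to sum to 1.  eps guards a particle sitting on a node."""
    d = np.linalg.norm(nodes - p, axis=1)
    w = 1.0 / (d + eps)
    return w / w.sum()

# Particle inside a unit square element; nearer nodes receive a larger share
nodes = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
w = idw_weights(nodes, np.array([0.25, 0.25]))
print(w, w.sum())   # weights are a partition of unity; node (0,0) gets the most
```

Accumulating these weighted contributions node by node produces a smooth deposition field on the airway surface mesh instead of a noisy per-face particle count.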
Comparative study on the customization of natural language interfaces to databases.
Pazos R, Rodolfo A; Aguirre L, Marco A; González B, Juan J; Martínez F, José A; Pérez O, Joaquín; Verástegui O, Andrés A
2016-01-01
In the last decades the popularity of natural language interfaces to databases (NLIDBs) has increased, because in many cases information obtained from them is used for making important business decisions. Unfortunately, the complexity of their customization by database administrators makes them difficult to use. In order for a NLIDB to obtain a high percentage of correctly translated queries, it must be correctly customized for the database to be queried. In most cases the performance reported in the NLIDB literature is the highest possible, i.e., the performance obtained when the interfaces were customized by the implementers. For end users, however, the performance that the interface can yield when customized by someone other than the implementers is more important. Unfortunately, very few articles report NLIDB performance when the NLIDBs are not customized by the implementers. This article presents a semantically-enriched data dictionary (which permits solving many of the problems that occur when translating from natural language to SQL) and an experiment in which two groups of undergraduate students customized our NLIDB and English Language Frontend (ELF), considered one of the best available commercial NLIDBs. The experimental results show that, when customized by the first group, our NLIDB obtained 44.69% of correctly answered queries and ELF 11.83% for the ATIS database; when customized by the second group, our NLIDB attained 77.05% and ELF 13.48%. The performance attained by our NLIDB when customized by ourselves was 90%.
Understanding change and curriculum implementation
NASA Astrophysics Data System (ADS)
de Jong, Gayle Marie
2000-10-01
This dissertation is a qualitative case study that examined the perceptions of teachers in two schools about the process of change used in the implementation of a hands-on science program. Many change initiatives have failed in their implementation, and this cannot necessarily be attributed to their quality: a countless number of promising programs have been derailed by a poor understanding of the process of change. This study first looks at the history of science reform to illustrate the importance of hands-on inquiry as an effective instructional strategy. Then the process of change and its relationship to the implementation of a hands-on science curriculum is examined. The Hands on Science Program (HASP) is modular based and relies heavily on inquiry teaching. The project had been underway in these schools for about 5 years, and the districts were ready to evaluate its success. An interview with the original Project Director and information obtained from a summative evaluation helped explain the HASP. The Project Director shared the thinking that was involved in the program's inception, and the evaluation report served as a summary of the project's progress. Two schools were selected to examine the status of the program. The Organizational Climate Description Questionnaire and the Organizational Health Inventory developed by Hoy and Tarter (1997) were used to enrich the description of each school. Five teachers from each school, who had leading roles in the implementation, were interviewed in an attempt to understand the insider's view of the change process used in the implementation of the HASP in their schools. Achievement data from the Stanford Achievement Test-9 were also used to provide some additional information. Interviews were used to understand teacher perceptions in each school and then compared in a cross-case analysis. 
The results of this study could serve as planning suggestions for educational leaders designing change initiatives, although it should be understood that the results obtained from these two schools may not generalize to others. Efforts to implement new curricula will fail without sufficient study, planning, and understanding of the process of change.
A vision-based approach for the direct measurement of displacements in vibrating systems
NASA Astrophysics Data System (ADS)
Mazen Wahbeh, A.; Caffrey, John P.; Masri, Sami F.
2003-10-01
This paper reports the results of an analytical and experimental study to develop, calibrate, implement and evaluate the feasibility of a novel vision-based approach for obtaining direct measurements of the absolute displacement time history at selectable locations of dispersed civil infrastructure systems such as long-span bridges. The measurements were obtained using a highly accurate camera in conjunction with a laser tracking reference. Calibration of the vision system was conducted in the lab to establish performance envelopes and data processing algorithms to extract the needed information from the captured vision scene. Subsequently, the monitoring apparatus was installed in the vicinity of the Vincent Thomas Bridge in the metropolitan Los Angeles region. This allowed the deployment of the instrumentation system under realistic conditions so as to determine the field implementation issues that need to be addressed. It is shown that the proposed approach has the potential to lead to an economical and robust system for obtaining direct, simultaneous measurements at several locations of the displacement time histories of realistic infrastructure systems undergoing complex three-dimensional deformations.
NASA Astrophysics Data System (ADS)
Marie-Magdeleine, A.; Fortes-Patella, R.; Lemoine, N.; Marchand, N.
2012-11-01
This study concerns the simulation of the implementation of the Kinetic Differential Pressure (KDP) method, used for unsteady mass flow rate evaluation, in order to identify the dynamic transfer matrix of a cavitating Venturi. Firstly, the equations of the IZ code used for this simulation are introduced. Next, the methodology for evaluating unsteady pressures and mass flow rates at the inlet and the outlet of the cavitating Venturi, and for identifying the dynamic transfer matrix, is presented. Later, the robustness of the method with respect to measurement uncertainties, implemented as Gaussian white noise, is studied. The results of the numerical simulations allow us to estimate the system's linearity domain and to perform the Empirical Transfer Function Estimate (ETFE) on frequency-by-frequency inlet signals and on chirp-signal tests. Then the pressure data obtained with the KDP method are used, the identification procedure is performed by ETFE and by a user-made Auto-Regressive Moving-Average eXogenous (ARMAX) algorithm, and the resulting transfer matrix coefficients are compared with those obtained from the simulated input and output data.
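The ETFE step described above can be illustrated with a minimal numerical sketch (the signals and the 50 Hz test frequency are invented, not taken from the study): the transfer function at an excitation frequency is estimated as the ratio of the output spectrum to the input spectrum at that frequency.

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t)                  # inlet excitation at 50 Hz
y = 0.5 * np.sin(2 * np.pi * 50 * t - 0.3)      # attenuated, phase-lagged response

X, Y = np.fft.rfft(x), np.fft.rfft(y)
peak = int(np.argmax(np.abs(X)))                # bin of the excitation frequency
H = Y[peak] / X[peak]                           # ETFE coefficient at that frequency
print(round(float(abs(H)), 3), round(float(-np.angle(H)), 3))  # → 0.5 0.3
```

Repeating this frequency per frequency (or dividing whole spectra for a chirp test) yields the empirical transfer function over the band of interest.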
NASA Astrophysics Data System (ADS)
Rangaswamy, T.; Vidhyashankar, S.; Madhusudan, M.; Bharath Shekar, H. R.
2015-04-01
Current trends in engineering follow the basic rule of innovation in mechanical engineering. For engineers to be efficient, problem solving needs to be viewed from a multidimensional perspective. One such methodology is the fusion of technologies from other disciplines in order to solve problems. This paper deals with the application of neural networks to analyze the performance parameters of an XD3P Peugeot engine (used by the Ministry of Defence). The work is divided into two main stages. In the former stage, experiments are carried out on an IC engine in order to obtain the primary data. In the latter stage, the primary database thus formed is used to design and implement a predictive neural network to analyze how the output parameters vary with respect to each other. A mathematical governing equation for the neural network is obtained; this polynomial equation describes the characteristic behaviour of the built neural network system. Finally, a comparative study of the results is carried out.
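As a rough illustration of fitting a polynomial "governing equation" to engine data (the load and consumption numbers below are toy values, not the paper's measurements), a least-squares fit can stand in for the network-derived polynomial:

```python
import numpy as np

load = np.array([0.00, 0.25, 0.50, 0.75, 1.00])   # normalized engine load (toy)
bsfc = np.array([0.42, 0.33, 0.28, 0.27, 0.30])   # specific fuel consumption (toy)

coeffs = np.polyfit(load, bsfc, deg=2)            # quadratic "governing equation"
predict = np.poly1d(coeffs)
print(round(float(predict(0.6)), 3))              # consumption estimate at 60% load
```

The fitted coefficients play the role of the paper's characteristic polynomial, mapping one engine parameter to another.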
Fast solution of elliptic partial differential equations using linear combinations of plane waves.
Pérez-Jordá, José M
2016-02-01
Given an arbitrary elliptic partial differential equation (PDE), a procedure for obtaining its solution is proposed based on the method of Ritz: the solution is written as a linear combination of plane waves and the coefficients are obtained by variational minimization. The PDE to be solved is cast as a system of linear equations Ax = b, where the matrix A is not sparse, which prevents the straightforward application of standard iterative methods. This lack of sparsity can be circumvented by means of a recursive bisection approach based on the fast Fourier transform, which makes it possible to implement fast versions of some stationary iterative methods (such as Gauss-Seidel) consuming O(N log N) memory and executing an iteration in O(N log² N) time, N being the number of plane waves used. In a similar way, fast versions of Krylov subspace methods and multigrid methods can also be implemented. These procedures are tested on Poisson's equation expressed in adaptive coordinates. It is found that the best results are obtained with the GMRES method using a multigrid preconditioner with Gauss-Seidel relaxation steps.
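The core idea, applying a dense plane-wave operator in O(N log N) via the FFT and wrapping it in an iterative solver, can be sketched on a 1D periodic model problem. The coefficients are invented, and a preconditioned Richardson iteration stands in for the paper's Gauss-Seidel and multigrid machinery:

```python
import numpy as np

N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wave numbers of the plane waves
c = 1.0 + 0.5 * np.cos(x)                 # variable coefficient (invented)
b = np.sin(3 * x)                         # right-hand side (invented)

def apply_A(u):
    """Apply A = -d²/dx² + c(x): dense in the plane-wave basis, O(N log N) via FFT."""
    return np.fft.ifft(k ** 2 * np.fft.fft(u)).real + c * u

def apply_Minv(r):
    """Preconditioner: exact inverse of -d²/dx² + 1, diagonal in Fourier space."""
    return np.fft.ifft(np.fft.fft(r) / (k ** 2 + 1.0)).real

u = np.zeros(N)
for _ in range(80):                       # stationary (Richardson) iteration
    u += apply_Minv(b - apply_A(u))

print(np.max(np.abs(apply_A(u) - b)) < 1e-10)  # → True
```

Because A is never formed explicitly, memory stays O(N) here and each iteration costs a few FFTs, mirroring the complexity argument in the abstract.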
NASA Astrophysics Data System (ADS)
Sivasubramaniam, Kiruba
This thesis makes advances in three-dimensional finite element analysis of electrical machines and the quantification of their parameters and performance. The principal objectives of the thesis are: (1) the development of a stable and accurate method of nonlinear three-dimensional field computation and application to electrical machinery and devices; and (2) improvement in the accuracy of determination of performance parameters, particularly forces and torque computed from finite elements. Contributions are made in two general areas: a more efficient formulation for three-dimensional finite element analysis which saves time and improves accuracy, and new post-processing techniques to calculate flux density values from a given finite element solution. A novel three-dimensional magnetostatic solution based on a modified scalar potential method is implemented. This method has significant advantages over the traditional total scalar, reduced scalar or vector potential methods. The new method is applied to a 3D geometry of an iron core inductor and a permanent magnet motor. The results obtained are compared with those obtained from traditional methods, in terms of accuracy and speed of computation. A technique which has been observed to improve force computation in two-dimensional analysis using a local solution of Laplace's equation in the airgap of machines is investigated and a similar method is implemented in the three-dimensional analysis of electromagnetic devices. A new integral formulation to improve force calculation from a smoother flux-density profile is also explored and implemented. Comparisons are made and conclusions drawn as to how much improvement is obtained and at what cost. This thesis also demonstrates the use of finite element analysis to analyze torque ripples due to rotor eccentricity in permanent magnet BLDC motors.
A new method for analyzing torque harmonics based on data obtained from a time stepping finite element analysis of the machine is explored and implemented.
Design and experimental evaluation of robust controllers for a two-wheeled robot
NASA Astrophysics Data System (ADS)
Kralev, J.; Slavov, Ts.; Petkov, P.
2016-11-01
The paper presents the design and experimental evaluation of two alternative μ-controllers for robust vertical stabilisation of a two-wheeled self-balancing robot. The controller designs are based on models derived by identification from closed-loop experimental data. In the first design, a signal-based uncertainty representation obtained directly from the identification procedure is used, which leads to a controller of order 29. In the second design, the signal uncertainty is approximated by an input multiplicative uncertainty, which leads to a controller of order 50, subsequently reduced to 30. The performance of the two μ-controllers is compared with the performance of a conventional linear quadratic controller with a 17th-order Kalman filter. A proportional-integral controller of the rotational motion around the vertical axis is implemented as well. The control code is generated using Simulink® controller models and is embedded in a digital signal processor. Results from the simulation of the closed-loop system as well as experimental results obtained during the real-time implementation of the designed controllers are given. The theoretical investigation and experimental results confirm that the closed-loop system achieves robust performance with respect to the uncertainties related to the identified robot model.
Caudle, Kelly E; Dunnenberger, Henry M; Freimuth, Robert R; Peterson, Josh F; Burlison, Jonathan D; Whirl-Carrillo, Michelle; Scott, Stuart A; Rehm, Heidi L; Williams, Marc S; Klein, Teri E; Relling, Mary V; Hoffman, James M
2017-02-01
Reporting and sharing pharmacogenetic test results across clinical laboratories and electronic health records is a crucial step toward the implementation of clinical pharmacogenetics, but allele function and phenotype terms are not standardized. Our goal was to develop terms that can be broadly applied to characterize pharmacogenetic allele function and inferred phenotypes. Terms currently used by genetic testing laboratories and in the literature were identified. The Clinical Pharmacogenetics Implementation Consortium (CPIC) used the Delphi method to obtain a consensus and agree on uniform terms among pharmacogenetic experts. Experts with diverse involvement in at least one area of pharmacogenetics (clinicians, researchers, genetic testing laboratorians, pharmacogenetics implementers, and clinical informaticians; n = 58) participated. After completion of five surveys, a consensus (>70%) was reached, with 90% of experts agreeing to the final sets of pharmacogenetic terms. The proposed standardized pharmacogenetic terms will improve the understanding and interpretation of pharmacogenetic tests and reduce confusion by maintaining consistent nomenclature. These standard terms can also facilitate pharmacogenetic data sharing across diverse electronic health care record systems with clinical decision support. Genet Med 19(2), 215-223.
NASA Astrophysics Data System (ADS)
Czerepicki, A.; Koniak, M.
2017-06-01
The paper presents a method for modelling the aging processes of lithium-ion batteries, its implementation as a computer application, and results for battery state estimation. The authors use a previously developed behavioural battery model, built from battery operating characteristics obtained experimentally. This model was implemented as a computer program using a database to store the battery characteristics. The battery aging process is a new, extended functionality of the model. The simulation algorithm uses real measurements of battery capacity as a function of the number of battery charge and discharge cycles. The simulation can take into account incomplete charge or discharge cycles, which are characteristic of transport powered by electricity. The developed model was used to simulate battery state estimation for different load profiles, obtained by measuring the movement of selected means of transport.
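A minimal sketch of the fractional cycle-counting idea (the capacity-fade table and trip profile are invented, not the paper's data): partial charge/discharge cycles accumulate into equivalent full cycles, and the remaining capacity is interpolated from measured fade data.

```python
import numpy as np

# Measured capacity fade vs. full charge/discharge cycles (toy lab data)
cycle_points    = np.array([0, 200, 400, 600, 800])
capacity_points = np.array([2.00, 1.92, 1.86, 1.78, 1.70])  # Ah

def capacity_after(profile):
    """profile: depth-of-discharge fraction (0..1) of each partial cycle."""
    equivalent_cycles = sum(profile)      # partial cycles add up fractionally
    return float(np.interp(equivalent_cycles, cycle_points, capacity_points))

trips = [0.3] * 1000                      # 1000 trips at 30% DoD = 300 full cycles
print(round(capacity_after(trips), 3))    # → 1.89
```

Interpolating a measured table rather than assuming an analytic fade law matches the behavioural, characteristics-driven modelling style the abstract describes.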
Tracker implementation for the orbiter Ku-band communications antenna
NASA Technical Reports Server (NTRS)
Rudnicki, J. F.; Lindsey, J. F.
1976-01-01
Possible implementations and recommendations for the Space Shuttle Ku-Band integrated communications/radar antenna tracking system were evaluated. Communication aspects involving the Tracking Data Relay Satellite (TDRS)/Orbiter Ku-Band link are emphasized. Detailed analysis of antenna sizes, gains and signal-to-noise ratios shows the desirability of using the maximum-size 36-inch diameter dish and a triple channel monopulse. The use of the original baselined 20-inch dish is found to result in excessive acquisition time, since the despread signal would be used in the tracking loop. An evaluation of scan procedures which includes vehicle dynamics, designation error, time for acquisition and probability of acquisition shows that the conical scan is preferred, since the time for lock-on for relatively slow look angle rates will be significantly shorter than with the raster scan. Significant improvement in spherical coverage may be obtained by reorienting the antenna gimbal to obtain maximum blockage overlap.
Ali, Abdulbaset; Hu, Bing; Ramahi, Omar
2015-05-15
This work presents a real-life experiment implementing an artificial intelligence model for detecting sub-millimeter cracks in metallic surfaces, on a dataset obtained from a waveguide sensor loaded with metamaterial elements. Crack detection using microwave sensors is typically based on human observation of changes in the sensor's signal (pattern) depicted on a high-resolution screen of the test equipment. However, as demonstrated in this work, implementing artificial intelligence to distinguish cracked from non-cracked surfaces has an appreciable impact in terms of sensing sensitivity, cost, and automation. Furthermore, applying artificial intelligence for post-processing data collected from microwave sensors is a cornerstone for handheld test equipment that can outperform rack equipment with large screens and sophisticated plotting features. The proposed method was tested on a metallic plate with different cracks, and the experimental results showed good crack classification accuracy rates.
NASA Astrophysics Data System (ADS)
Skouteris, D.; Barone, V.
2014-06-01
We report the main features of a new general implementation of the Gaussian Multi-Configuration Time-Dependent Hartree model. The code allows effective computations of time-dependent phenomena, including the calculation of vibronic spectra (in one or more electronic states), relative state populations, etc. Moreover, by expressing the Dirac-Frenkel variational principle in terms of an effective Hamiltonian, we are able to provide a new reliable estimate of the representation error. After validating the code on simple one-dimensional systems, we analyze the harmonic and anharmonic vibrational spectra of water and glycine, showing that reliable and converged energy levels can be obtained with reasonable computing resources. The data obtained on water and glycine are compared with results of previous calculations using the vibrational second-order perturbation theory method. Additional features and perspectives are also briefly discussed.
A new method for automatic discontinuity traces sampling on rock mass 3D model
NASA Astrophysics Data System (ADS)
Umili, G.; Ferrero, A.; Einstein, H. H.
2013-02-01
A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, which usually characterize trace mapping on images, are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure to the DSM of a rock face are compared to those obtained by performing a manual sampling on the orthophotograph of the same rock face.
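The vertex-selection step can be sketched as a simple thresholding of per-vertex principal curvatures (the data layout and threshold are assumptions, not the paper's implementation):

```python
import numpy as np

def detect_trace_vertices(k_max, k_min, threshold):
    """Flag DSM vertices whose principal curvatures mark a surface breakline."""
    convex  = k_max > threshold        # ridges
    concave = k_min < -threshold       # valleys
    return np.flatnonzero(convex | concave)

k_max = np.array([0.01, 0.90, 0.02, 0.85])    # per-vertex max principal curvature
k_min = np.array([-0.02, -0.01, -0.95, -0.03])
print(detect_trace_vertices(k_max, k_min, 0.5))  # → [1 2 3]
```

The flagged vertices would then be chained into trace polylines and fed to the circular-window or circular-scanline sampling described in the abstract.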
Jordanian dentists' knowledge and implementation of eco-friendly dental office strategies.
Al Shatrat, Sabha M; Shuman, Deanne; Darby, Michele L; Jeng, Hueiwang A
2013-06-01
To investigate the implementation of eco-friendly dental office strategies by Jordanian dentists. Self-designed questionnaires were provided to 150 dentists working in private dental practices in the city of Amman, the capital of Jordan. Dentists' names and addresses were obtained from the Jordanian Dental Association. Overall, the level of knowledge about eco-friendly dental office strategies was high for amalgam, radiology, paper waste, infection control and energy and water conservation. In terms of implementation, the majority of Jordanian dentists apply few eco-friendly dental office strategies. The most frequently identified barriers to implementation of eco-friendly dental office strategies were cost and lack of incentives from the government. Most Jordanian dental practices are not eco-friendly. A continued focus on the impact of dental practices on the environment is needed through formal and continuing dental education. Results of this study can guide policy development to encourage implementation of eco-friendly strategies. © 2013 FDI World Dental Federation.
NASA Astrophysics Data System (ADS)
Roche-Lima, Abiel; Thulasiram, Ruppa K.
2012-02-01
Finite automata in which each transition is augmented with an output label, in addition to the familiar input label, are known as finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics: weighted finite-state transducers have been proposed for pairwise alignment of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, which relies on techniques such as pair-database creation, normalization (with maximum-likelihood normalization) and parameter optimization (with Expectation-Maximization, EM). These techniques are intrinsically computationally costly, even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were conducted with the parallel and sequential algorithms on WestGrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable: its execution times are considerably reduced relative to the sequential version as the data size parameter increases. Another experiment varied the precision parameter; in this case, we obtained smaller execution times using the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied. In this last experiment, the speedup increases considerably as more threads are used, although it converges for 16 or more threads.
NASA Astrophysics Data System (ADS)
Pricop, Emil; Zamfir, Florin; Paraschiv, Nicolae
2015-11-01
Process control has long been a challenging research topic for both academia and industry. Controllers have evolved from the classical SISO approach to modern fuzzy or neuro-fuzzy embedded devices with networking capabilities; however, PID algorithms are still used in most industrial control loops. In this paper, we focus on the implementation of a PID controller using the mbed NXP LPC1768 development board. This board integrates a powerful ARM Cortex-M3 core and has networking capabilities. The implemented controller can be operated remotely using an Internet connection and a standard Web browser. The main advantages of the proposed embedded system are customizability, easy operation and very low power consumption. The experimental results, obtained using a simulated process, show that the implementation can be applied successfully in industrial applications.
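For illustration, the discrete PID update that such a controller runs each sample period can be sketched as follows (a language-agnostic sketch in Python; the gains, sample time, and simulated process are invented, not taken from the paper):

```python
class PID:
    """Textbook discrete PID; gains and sample time are illustrative only."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Close the loop around a simulated first-order process: y' = (u - y) / tau
pid, y, tau, dt = PID(2.0, 1.0, 0.05, 0.01), 0.0, 0.5, 0.01
for _ in range(2000):                     # 20 seconds of simulated time
    u = pid.update(1.0, y)
    y += dt * (u - y) / tau
print(abs(y - 1.0) < 0.02)  # → True (the output settles at the setpoint)
```

On the embedded target the same three-term update would run in the control interrupt, with the Web interface merely reading and writing the setpoint and gains.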
Obtaining correct compile results by absorbing mismatches between data types representations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
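The described conversion-with-error-node scheme can be sketched as follows (all node kinds, type names, and conversion-table entries are hypothetical, not from the patent):

```python
# Conversion table from language-A data representation types to language-B types
TYPE_TABLE = {"int32": "Integer", "float64": "Double"}

def convert(node):
    """Convert one AST node; on a conversion failure, emit a special error node."""
    kind, value = node
    if kind == "type" and value in TYPE_TABLE:
        return ("type", TYPE_TABLE[value])
    if kind == "type":
        return ("error", value)           # error node stores the offending token
    return node

def unparse(node):
    kind, value = node
    return value                          # error nodes emit original language-A text

tree = [("type", "int32"), ("type", "decimal128")]
converted = [convert(n) for n in tree]
print([unparse(n) for n in converted])    # → ['Integer', 'decimal128']
```

Keeping the unconvertible token inside a dedicated error node lets the unparser reproduce the original source fragment instead of aborting the whole compilation, which is the mismatch-absorbing behavior the abstract claims.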
Challenges experienced by nurses in the implementation of a healthcare reform plan in Iran
Salarvand, Shahin; Azizimalekabadi, Maryam; Jebeli, Azadeh Akbari; Nazer, Mohamadreza
2017-01-01
Introduction: The Healthcare Reform Plan is regarded as a plan for improving healthcare services in Iran. Undoubtedly, both strengths and weaknesses can be found in the plan itself and in its implementation. This study was conducted to describe nurses' challenges in implementing healthcare reform in Iran. Methods: A qualitative method centered upon conventional content analysis was applied. We used purposive sampling, and data saturation was reached with 30 participants. Data were analyzed using MAXQDA software. Results: Challenges experienced by nurses in the implementation of this reform include unsuitable infrastructure, an unfavorable vision, a complicated challenge, the necessity of monitoring and controlling plan outcomes, the impact on nurses, people's misconceptions, and solutions. Conclusions: The Healthcare Reform Plan in Iran is a solution for establishing equality in the health system; however, to eliminate these challenges, revision and an appropriate foundation of infrastructures are called for. PMID:28607646
A 7-Step Strategy for the Implementation of Worksite Lifestyle Interventions: Helpful or Not?
Wierenga, Debbie; Engbers, Luuk H; Van Empelen, Pepjin; van Mechelen, Willem
2016-05-01
The aim of this study was to evaluate the use of and adherence to a 7-step strategy for the development, implementation, and continuation of a comprehensive, multicomponent worksite lifestyle program. Strategy use and adherence were assessed with 12 performance indicators. Data were collected by combining onsite monitoring with semi-structured interviews at baseline and follow-up (6, 12, and 18 months). Not all performance indicators were met, so only partial strategy adherence was achieved. The strategy could be improved on the following aspects: support among management, project structure, adaptation to the needs of employees, planning, and maintenance. The results of this evaluation indicate that strategy adherence facilitated structured development and implementation. On the basis of the qualitative data, this study suggests that if both content and performance are improved, the 7-step strategy could be an effective tool to successfully implement a multicomponent worksite health promotion program (WHPP).
VLSI circuits implementing computational models of neocortical circuits.
Wijekoon, Jayawan H B; Dudek, Piotr
2012-09-15
This paper overviews the design and implementation of three neuromorphic integrated circuits developed for the COLAMN ("Novel Computing Architecture for Cognitive Systems based on the Laminar Microcircuitry of the Neocortex") project. The circuits are implemented in a standard 0.35 μm CMOS technology and include spiking and bursting neuron models, and synapses with short-term (facilitating/depressing) and long-term (STDP and dopamine-modulated STDP) dynamics. They enable execution of complex nonlinear models in accelerated-time, as compared with biology, and with low power consumption. The neural dynamics are implemented using analogue circuit techniques, with digital asynchronous event-based input and output. The circuits provide configurable hardware blocks that can be used to simulate a variety of neural networks. The paper presents experimental results obtained from the fabricated devices, and discusses the advantages and disadvantages of the analogue circuit approach to computational neural modelling. Copyright © 2012 Elsevier B.V. All rights reserved.
Problem Solving and Collaboration Using Mobile Serious Games
ERIC Educational Resources Information Center
Sanchez, Jaime; Olivares, Ruby
2011-01-01
This paper presents the results obtained with the implementation of a series of learning activities based on Mobile Serious Games (MSGs) for the development of problem solving and collaborative skills in Chilean 8th grade students. Three MSGs were developed and played by teams of four students in order to solve problems collaboratively. A…
Green's function solution to radiative heat transfer between longitudinal gray fins
NASA Technical Reports Server (NTRS)
Frankel, J. I.; Silvestri, J. J.
1991-01-01
A demonstration is presented of the applicability and versatility of a pure integral formulation for radiative-conductive heat-transfer problems. Preliminary results have been obtained which indicate that this formulation allows an accurate, fast, and stable computation procedure to be implemented. Attention is given to the accessory problem defining Green's function.
Logistic Map for Cancellable Biometrics
NASA Astrophysics Data System (ADS)
Supriya, V. G., Dr; Manjunatha, Ramachandra, Dr
2017-08-01
This paper presents the design and implementation of a secure biometric template protection system that transforms the biometric template using binary chaotic signals and three different key streams to obtain another form of the template. Its efficiency is demonstrated by the results, and its security is investigated through key space analysis, information entropy and key sensitivity analysis.
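A minimal sketch of such a chaotic transform (the map parameter r, the thresholding rule, and the key value are assumptions, not the paper's design): a logistic map generates a binary keystream that is XORed with the binary template, so changing the key produces a new, revocable template.

```python
def logistic_keystream(x0, n, r=3.99):
    """Binary keystream from the logistic map x <- r*x*(1-x)."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def transform(template_bits, key):
    ks = logistic_keystream(key, len(template_bits))
    return [b ^ s for b, s in zip(template_bits, ks)]

template = [1, 0, 1, 1, 0, 0, 1, 0]
protected = transform(template, 0.123456)
# XOR is an involution: the same key recovers the original template
print(transform(protected, 0.123456) == template)  # → True
```

The chaotic map's sensitivity to the initial value x0 is what the key sensitivity analysis in the abstract exercises: a tiny change of key yields an entirely different keystream and hence a different protected template.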
DOT National Transportation Integrated Search
1976-01-01
The Fairfax ASAP, one of 35 federally funded alcohol countermeasure projects designed to attack the problem of drunken drivers on the highways, was implemented at the community level in January 1972. This report summarizes the results of data obtaine...
Microlens array processor with programmable weight mask and direct optical input
NASA Astrophysics Data System (ADS)
Schmid, Volker R.; Lueder, Ernst H.; Bader, Gerhard; Maier, Gert; Siegordner, Jochen
1999-03-01
We present an optical feature extraction system with a microlens array processor. The system is suitable for online implementation of a variety of transforms such as the Walsh transform and DCT. Operating with incoherent light, our processor accepts direct optical input. Employing a sandwich-like architecture, we obtain a very compact design of the optical system. The key elements of the microlens array processor are a square array of 15 × 15 spherical microlenses on an acrylic substrate and a spatial light modulator as a transmissive mask. The light distribution behind the mask is imaged onto the pixels of a customized a-Si image sensor with adjustable gain. We obtain one output sample for each microlens image and its corresponding weight mask area as a summation of the transmitted intensity within one sensor pixel. The resulting architecture is very compact and robust like a conventional camera lens while incorporating a high degree of parallelism. We successfully demonstrate a Walsh transform into the spatial frequency domain as well as the implementation of a discrete cosine transform with digitized gray values. We provide results showing the transformation performance for both synthetic image patterns and images of natural texture samples. The extracted frequency features are suitable for neural classification of the input image. Other transforms and correlations can be implemented in real-time, allowing adaptive optical signal processing.
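As a digital reference for what the optical weight mask computes (a software sketch, not the optical system itself): with Walsh functions loaded into the mask, each output sample is one coefficient of the Walsh-Hadamard transform of the input image, which can be checked against a fast software implementation.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform; len(a) must be a power of two."""
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                # butterfly: sum and difference, as the mask's +/- weights do optically
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht([1, 0, 1, 0]))  # → [2. 2. 0. 0.]
```

In the optical processor the summation per coefficient happens in parallel on the sensor pixels, whereas this digital version computes the same coefficients sequentially in O(N log N).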
Electron electric dipole moment and hyperfine interaction constants for ThO
NASA Astrophysics Data System (ADS)
Fleig, Timo; Nayak, Malaya K.
2014-06-01
A recently implemented relativistic four-component configuration interaction approach to study P- and T-odd interaction constants in atoms and molecules is employed to determine the electron electric dipole moment effective electric field in the Ω=1 first excited state of the ThO molecule. We obtain a value of Eeff = 75.2 GV/cm with an estimated error bar of 3%, 10% smaller than a previously reported result (Skripnikov et al., 2013). Using the same wavefunction model we obtain an excitation energy of Tv(Ω=1) = 5410 cm⁻¹, in accord with the experimental value within 2%. In addition, we report the implementation of the magnetic hyperfine interaction constant A∥ as an expectation value, resulting in A∥ = −1339 MHz for the Ω=1 state in ThO. The smaller effective electric field increases the previously determined upper bound (Baron et al., 2014) on the electron electric dipole moment to |de| < 9.7×10⁻²⁹ e·cm and thus mildly mitigates constraints on possible extensions of the Standard Model of particle physics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pathak, Himadri, E-mail: hmdrpthk@gmail.com; Sasmal, Sudip, E-mail: sudipsasmal.chem@gmail.com; Vaval, Nayana
2016-08-21
The open-shell reference relativistic equation-of-motion coupled-cluster method within its four-component description is successfully implemented with the consideration of single- and double-excitation approximations using the Dirac-Coulomb Hamiltonian. As a first application, the implemented method is employed to calculate ionization potentials of heavy atomic (Ag, Cs, Au, Fr, and Lr) and molecular (HgH and PbF) systems, where relativistic effects really matter for obtaining highly accurate results. Not only the relativistic effect but also the effect of electron correlation is crucial in these heavy atomic and molecular systems. To demonstrate this, we have applied two further approximations within the four-component relativistic equation-of-motion framework to quantify how electron correlation affects the calculated values at different levels of theory. All these calculated results are compared with the available experimental data as well as with other theoretically calculated values to judge the extent of accuracy obtained in our calculations.
Anani, Nadim; Mazya, Michael V; Chen, Rong; Prazeres Moreira, Tiago; Bill, Olivier; Ahmed, Niaz; Wahlgren, Nils; Koch, Sabine
2017-01-10
Interoperability standards intend to standardise health information, clinical practice guidelines intend to standardise care procedures, and patient data registries are vital for monitoring quality of care and for clinical research. This study combines all three: it uses interoperability specifications to model guideline knowledge and applies the result to registry data. We applied the openEHR Guideline Definition Language (GDL) to data from 18,400 European patients in the Safe Implementation of Treatments in Stroke (SITS) registry to retrospectively check their compliance with European recommendations for acute stroke treatment. Comparing compliance rates obtained with GDL to those obtained by conventional statistical data analysis yielded a complete match, suggesting that GDL technology is reliable for guideline compliance checking. The successful application of a standard guideline formalism to a large patient registry dataset is an important step toward widespread implementation of computer-interpretable guidelines in clinical practice and registry-based research. Application of the methodology gave important results on the evolution of stroke care in Europe, important both for quality of care monitoring and clinical research.
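A toy illustration of rule-based compliance checking in this spirit (the rule encoding and field names are invented, not actual GDL or SITS content; the 4.5-hour thrombolysis window is a widely cited European recommendation):

```python
def thrombolysis_window_ok(record):
    """Example guideline rule: onset-to-treatment time within 4.5 hours (270 min)."""
    return record["onset_to_treatment_min"] <= 270

records = [
    {"id": 1, "onset_to_treatment_min": 150},   # compliant
    {"id": 2, "onset_to_treatment_min": 300},   # outside the window
]
compliant = [r["id"] for r in records if thrombolysis_window_ok(r)]
print(compliant)  # → [1]
```

GDL expresses such rules declaratively against standardized openEHR data elements rather than hard-coded field names, which is what allowed the study to run one rule set over 18,400 registry records.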
Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin
2015-08-01
Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of the Particle Swarm Optimization (PSO) combined with C4.5 decision tree (PSO+C4.5) classifier by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of the proposed method, it is implemented on 1 microarray dataset and 5 different medical datasets obtained from the UCI machine learning repository. Moreover, the results of the PSO + Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, support vector machine with a Radial Basis Function kernel, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, Naive Bayes, and Weighted K-Nearest Neighbor). Repeated five-fold cross-validation was used to assess the performance of the classifiers. Experimental results show that the proposed method not only improves on PSO+C4.5 but also obtains higher classification accuracy than the other classification methods.
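The overall scheme, PSO searching feature subsets with a classifier as the fitness function, can be sketched minimally as follows. This is not the authors' pipeline: a simple nearest-centroid classifier stands in for the Boosted C5.0 fitness so the example needs only NumPy, and the synthetic data, swarm size, and PSO constants are all illustrative.

```python
import numpy as np

# Binary PSO feature selection: each particle is a 0/1 feature mask, and the
# fitness of a mask is the training accuracy of a classifier restricted to
# the selected features (a nearest-centroid classifier here, standing in for
# the paper's Boosted C5.0 decision tree).

rng = np.random.default_rng(0)

# synthetic data: 2 informative features + 4 pure-noise features
n = 100
X0 = rng.normal(0.0, 1.0, (n, 6))
X1 = rng.normal(0.0, 1.0, (n, 6)); X1[:, :2] += 3.0   # classes separated in dims 0-1
X = np.vstack([X0, X1]); y = np.array([0] * n + [1] * n)

def fitness(mask):
    if not mask.any():
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean()

def pso_select(n_feat=6, n_particles=20, iters=15, w=0.7, c1=1.5, c2=1.5):
    vel = rng.normal(0, 1, (n_particles, n_feat))
    pos = (rng.random((n_particles, n_feat)) > 0.5).astype(float)
    pos[0] = 1.0                                  # seed one all-features particle
    pbest, pfit = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest[pfit.argmax()].copy(); gfit = pfit.max()
    for _ in range(iters):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        # sigmoid of velocity gives the probability of selecting each feature
        pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(p) for p in pos])
        upd = fit > pfit
        pbest[upd], pfit[upd] = pos[upd], fit[upd]
        if pfit.max() > gfit:
            gfit = pfit.max(); g = pbest[pfit.argmax()].copy()
    return g, gfit

mask, acc = pso_select()
print("selected features:", np.flatnonzero(mask), "accuracy:", round(acc, 3))
```

Swapping the fitness function for a boosted decision tree (as in the study) changes only `fitness`; the swarm mechanics stay the same, which is what makes the fitness function an easy lever for improving the search.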
Semiempirical UNO-CAS and UNO-CI: method and applications in nanoelectronics.
Dral, Pavlo O; Clark, Timothy
2011-10-20
Unrestricted Natural Orbital-Complete Active Space Configuration Interaction, abbreviated as UNO-CAS, has been implemented for NDDO-based semiempirical molecular-orbital (MO) theory. A computationally more economic technique, UNO-CIS, in which we use a configuration interaction (CI) calculation with only single excitations (CIS) to calculate excited states, has also been implemented and tested. The class of techniques in which unrestricted natural orbitals (UNOs) are used as the reference for CI calculations is denoted UNO-CI. Semiempirical UNO-CI gives good results for the optical band gaps of organic semiconductors such as polyynes and polyacenes, which are promising materials for nanoelectronics. The results of these semiempirical UNO-CI techniques are generally in better agreement with experiment than those obtained with the corresponding conventional semiempirical CI methods and comparable to or better than those obtained with far more computationally expensive methods such as time-dependent density-functional theory. We also show that symmetry breaking in semiempirical UHF calculations is very useful for predicting the diradical character of organic compounds in the singlet spin state.
Principals as Change Agents: Their Role in the Curriculum Implementation Process.
ERIC Educational Resources Information Center
Binda, K. P.
Findings from a study that examined ways in which principals implement new or revised curricula are presented in this paper, which focuses on how personal constructs influence the curriculum implementation process. Data about principals' implementation styles were obtained from interviews with 10 principals and 10 female teachers, inschool…
Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for UAVs
NASA Astrophysics Data System (ADS)
Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.
2017-08-01
In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (such as cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality, with an equivalent long exposure time, from several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for resampling the images, the presented method accurately estimates the geometric relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector; then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing times and images, as well as block diagrams of the described architecture. The stacked images obtained on real surveys show no visible impairment. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device.
An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real-time the gyrometers of the IMU.
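The core stacking idea, register each short-exposure frame to the first one and accumulate, can be sketched with a much simpler alignment step than the paper's FAST features, IMU-aided matching, and full geometric model: a global integer translation estimated by FFT phase correlation. Everything below (frame size, blob test pattern) is illustrative.

```python
import numpy as np

# Sketch: align short-exposure frames to the first frame and average them,
# approximating stacking with a global translation estimated by phase
# correlation (the real pipeline uses feature matching and a camera model).

def estimate_shift(ref, img):
    """Integer (dy, dx) translation of img relative to ref via phase correlation."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2: dy -= h          # map to signed shifts
    if dx > w // 2: dx -= w
    return dy, dx

def stack(frames):
    """Shift every frame onto the first one and return the mean image."""
    ref = frames[0]
    acc = ref.astype(float).copy()
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)

# toy data: one bright blob, shifted between the two frames
base = np.zeros((64, 64)); base[30:34, 30:34] = 1.0
frames = [base, np.roll(base, (2, -3), axis=(0, 1))]
stacked = stack(frames)
print(stacked.max())
```

A pure translation is only valid for small rotations and distant scenes; that is precisely why the paper estimates the full geometric relation (including distortion) between the first and the Nth image instead.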
Implementing the Workforce Investment Act of 1998. A White Paper.
ERIC Educational Resources Information Center
Employment and Training Administration (DOL), Washington, DC.
The Workforce Investment Act represents a total customer-driven overhaul of the U.S. job training system that will help employers obtain needed workers and empower job seekers to obtain the training needed for the jobs they want. The Department of Labor will implement the Workforce Investment Act in cooperation with the Department of Education.…
Design and implementation of Skype USB user gateway software
NASA Astrophysics Data System (ADS)
Qi, Yang
2017-08-01
With the widespread application of VoIP, clients with private protocols are becoming more and more popular, and Skype is one of the representatives. Connecting Skype to the PSTN using only the Skype client has gradually become a topic of interest. This paper designs and implements software based on a USB user gateway, with which Skype users can communicate freely with PSTN phones. An FSM is designed as the core of the software, and Skype control is separated from the USB gateway control. In this way, the communication becomes more flexible and efficient. In actual user testing, the software obtained good results.
Sensory System for Implementing a Human—Computer Interface Based on Electrooculography
Barea, Rafael; Boquete, Luciano; Rodriguez-Ascariz, Jose Manuel; Ortega, Sergio; López, Elena
2011-01-01
This paper describes a sensory system for implementing a human–computer interface based on electrooculography. An acquisition system captures electrooculograms and transmits them via the ZigBee protocol. The data acquired are analysed in real time using a microcontroller-based platform running the Linux operating system. The continuous wavelet transform and neural network are used to process and analyse the signals to obtain highly reliable results in real time. To enhance system usability, the graphical interface is projected onto special eyewear, which is also used to position the signal-capturing electrodes. PMID:22346579
A resolution of the inclusive flavor-breaking τ |Vus| puzzle
NASA Astrophysics Data System (ADS)
Hudspith, Renwick J.; Lewis, Randy; Maltman, Kim; Zanotti, James
2018-06-01
We revisit the puzzle of |Vus| values obtained from the conventional implementation of hadronic-τ-decay-based flavor-breaking finite-energy sum rules lying > 3σ below the expectations of three-family unitarity. Significant unphysical dependences of |Vus| on the choice of weight, w, and upper limit, s0, of the experimental spectral integrals entering the analysis are confirmed, and a breakdown of assumptions made in estimating higher-dimension, D > 4, OPE contributions is identified as the main source of these problems. A combination of continuum and lattice results is shown to suggest a new implementation of the flavor-breaking sum rule approach in which not only |Vus|, but also D > 4 effective condensates, are fit to data. Lattice results are also used to clarify how to reliably treat the slowly converging D = 2 OPE series. The new sum rule implementation is shown to cure the problems of the unphysical w- and s0-dependence of |Vus| and to produce results ∼0.0020 higher than those of the conventional implementation employing the same data. With B-factory input, and using, in addition, dispersively constrained results for the Kπ branching fractions, we find |Vus| = 0.2231(27)exp(4)th, in excellent agreement with the result from Kℓ3, and compatible within errors with the expectations of three-family unitarity, thus resolving the long-standing inclusive τ |Vus| puzzle.
Li, Jianwei; Zhang, Weimin; Zeng, Weiqin; Chen, Guolong; Qiu, Zhongchao; Cao, Xinyuan; Gao, Xuanyi
2017-01-01
Estimating the stress distribution in ferromagnetic components is very important for evaluating the working status of mechanical equipment and implementing preventive maintenance. Eddy current testing is a promising method in this field because of its advantages, such as safety and the absence of a coupling agent. In order to reduce the cost of eddy current stress measurement systems and obtain the stress distribution in ferromagnetic materials without scanning, a low-cost eddy current stress measurement system based on an Archimedes spiral planar coil was established, and a method based on a BP neural network was proposed to obtain the stress distribution from the stress at several discrete test points. To verify the performance of the developed test system and the validity of the proposed method, experiments were carried out using structural steel (Q235) specimens. Standard curves of the sensors at each test point were obtained, the calibrated data were used to build the BP neural network model approximating the stress variation over the specimen surface, and the stress distribution curve of the specimen was obtained by interpolating with this model. The results show that there is a good linear relationship between the change of the signal modulus and the stress over most of the elastic range of the specimen, and the established system can detect the change in stress with a theoretical average sensitivity of -0.4228 mV/MPa. The obtained stress distribution curve agrees well with the theoretical analysis. Finally, possible causes of, and remedies for, the problems observed in the results are discussed. This research is significant for reducing the cost of eddy current stress measurement systems and advancing the engineering application of eddy current stress testing.
Li, Jianwei; Zeng, Weiqin; Chen, Guolong; Qiu, Zhongchao; Cao, Xinyuan; Gao, Xuanyi
2017-01-01
Estimating the stress distribution in ferromagnetic components is very important for evaluating the working status of mechanical equipment and implementing preventive maintenance. Eddy current testing is a promising method in this field because of its advantages, such as safety and the absence of a coupling agent. In order to reduce the cost of eddy current stress measurement systems and obtain the stress distribution in ferromagnetic materials without scanning, a low-cost eddy current stress measurement system based on an Archimedes spiral planar coil was established, and a method based on a BP neural network was proposed to obtain the stress distribution from the stress at several discrete test points. To verify the performance of the developed test system and the validity of the proposed method, experiments were carried out using structural steel (Q235) specimens. Standard curves of the sensors at each test point were obtained, the calibrated data were used to build the BP neural network model approximating the stress variation over the specimen surface, and the stress distribution curve of the specimen was obtained by interpolating with this model. The results show that there is a good linear relationship between the change of the signal modulus and the stress over most of the elastic range of the specimen, and the established system can detect the change in stress with a theoretical average sensitivity of -0.4228 mV/MPa. The obtained stress distribution curve agrees well with the theoretical analysis. Finally, possible causes of, and remedies for, the problems observed in the results are discussed. This research is significant for reducing the cost of eddy current stress measurement systems and advancing the engineering application of eddy current stress testing. PMID:29145500
2003-11-01
Three concepts are used in the ITOP [8]: a "V&V cases" concept, a "claim-argument-evidence" structure, and a "levels" concept for the classification of M&S evidence obtained from the V&V effort. The levels concept assists in communication and understanding between parties in discussion, and also provides a convenient classification for each piece of evidence obtained from the V&V effort.
Heavy quarkonium in a holographic basis
Li, Yang; Maris, Pieter; Zhao, Xingbo; ...
2016-05-04
Here, we study the heavy quarkonium within the basis light-front quantization approach. We implement the one-gluon exchange interaction and a confining potential inspired by light-front holography. We adopt the holographic light-front wavefunction (LFWF) as our basis function and solve the non-perturbative dynamics by diagonalizing the Hamiltonian matrix. We obtain the mass spectrum for charmonium and bottomonium. With the obtained LFWFs, we also compute the decay constants and the charge form factors for selected eigenstates. The results are compared with the experimental measurements and with other established methods.
Estimating Durability of Reinforced Concrete
NASA Astrophysics Data System (ADS)
Varlamov, A. A.; Shapovalov, E. L.; Gavrilov, V. B.
2017-11-01
In this article we propose to use the methods of fracture mechanics to evaluate concrete durability. To apply these methods, we have developed special techniques for evaluating the crack resistance characteristics of concrete directly in the structure. Various experimental studies have been carried out to determine the crack resistance characteristics and the modulus of elasticity of concrete during its service life. The results obtained with the proposed techniques were compared with those obtained with the standard methods for determining the crack resistance characteristics of concrete.
NASA Technical Reports Server (NTRS)
Shirinzadeh, B.; Gregory, Ray W.
1994-01-01
A rugged, easy to implement, line-of-sight absorption instrument which utilizes a low pressure water vapor microwave discharge cell as the light source, has been developed to make simultaneous measurements of the OH concentration and temperature at 10 spatial positions. The design, theory, and capability of the instrument are discussed. Results of the measurements obtained on a methane/air flat flame burner are compared with those obtained using a single-frequency, tunable dye laser system.
Quality Improvement Implementation in the Nursing Home
Berlowitz, Dan R; Young, Gary J; Hickey, Elaine C; Saliba, Debra; Mittman, Brian S; Czarnowski, Elaine; Simon, Barbara; Anderson, Jennifer J; Ash, Arlene S; Rubenstein, Lisa V; Moskowitz, Mark A
2003-01-01
Objective To examine quality improvement (QI) implementation in nursing homes, its association with organizational culture, and its effects on pressure ulcer care. Data Sources/Study Settings Primary data were collected from staff at 35 nursing homes maintained by the Department of Veterans Affairs (VA) on measures related to QI implementation and organizational culture. These data were combined with information obtained from abstractions of medical records and analyses of an existing database. Study Design A cross-sectional analysis of the association among the different measures was performed. Data Collection/Extraction Methods Completed surveys containing information on QI implementation, organizational culture, employee satisfaction, and perceived adoption of guidelines were obtained from 1,065 nursing home staff. Adherence to best practices related to pressure ulcer prevention was abstracted from medical records. Risk-adjusted rates of pressure ulcer development were calculated from an administrative database. Principal Findings Nursing homes differed significantly (p<.001) in their extent of QI implementation with scores on this 1 to 5 scale ranging from 2.98 to 4.08. Quality improvement implementation was greater in those nursing homes with an organizational culture that emphasizes innovation and teamwork. Employees of nursing homes with a greater degree of QI implementation were more satisfied with their jobs (a 1-point increase in QI score was associated with a 0.83 increase on the 5-point satisfaction scale, p<.001) and were more likely to report adoption of pressure ulcer clinical guidelines (a 1-point increase in QI score was associated with a 28 percent increase in number of staff reporting adoption, p<.001). No significant association was found, though, between QI implementation and either adherence to guideline recommendations as abstracted from records or the rate of pressure ulcer development. 
Conclusions Quality improvement implementation is most likely to be successful in those VA nursing homes with an underlying culture that promotes innovation. While QI implementation may result in staff who are more satisfied with their jobs and who believe they are providing better care, associations with improved care are uncertain. PMID:12650381
HIV Pre-exposure Prophylaxis Program Implementation Using Intervention Mapping.
Flash, Charlene A; Frost, Elizabeth L T; Giordano, Thomas P; Amico, K Rivet; Cully, Jeffrey A; Markham, Christine M
2018-04-01
HIV pre-exposure prophylaxis has been proven to be an effective tool in HIV prevention. However, numerous barriers still exist in pre-exposure prophylaxis implementation. The framework of Intervention Mapping was used from August 2016 to October 2017 to describe the process of adoption, implementation, and maintenance of an HIV prevention program from 2012 through 2017 in Houston, Texas, that is nested within a county health system HIV clinic. Using the tasks outlined in the Intervention Mapping framework, potential program implementers were identified, outcomes and performance objectives established, matrices of change objectives created, and methods and practical applications formed. Results include the formation of three matrices that document program outcomes, change agents involved in the process, and the determinants needed to facilitate program adoption, implementation, and maintenance. Key features that facilitated successful program adoption and implementation were obtaining leadership buy-in, leveraging existing resources, systematic evaluation of operations, ongoing education for both clinical and nonclinical staff, and attention to emergent issues during launch. The utilization of Intervention Mapping to delineate the program planning steps can provide a model for pre-exposure prophylaxis implementation in other settings. Copyright © 2018. Published by Elsevier Inc.
Implementation of compressive sensing for preclinical cine-MRI
NASA Astrophysics Data System (ADS)
Tan, Elliot; Yang, Ming; Ma, Lixin; Zheng, Yahong Rosa
2014-03-01
This paper presents a practical implementation of Compressive Sensing (CS) for a preclinical MRI machine to acquire randomly undersampled k-space data in cardiac function imaging applications. First, random undersampling masks were generated based on Gaussian, Cauchy, wrapped Cauchy and von Mises probability distribution functions by the inverse transform method. The best masks for undersampling ratios of 0.3, 0.4 and 0.5 were chosen for animal experimentation, and were programmed into a Bruker Avance III BioSpec 7.0T MRI system through method programming in ParaVision. Three undersampled mouse heart datasets were obtained using a fast low angle shot (FLASH) sequence, along with a control undersampled phantom dataset. ECG and respiratory gating were used to obtain high quality images. After CS reconstructions were applied to all acquired data, resulting images were quantitatively analyzed using the performance metrics of reconstruction error and Structural Similarity Index (SSIM). The comparative analysis indicated that CS reconstructed images from MRI machine undersampled data were indeed comparable to CS reconstructed images from retrospective undersampled data, and that CS techniques are practical in a preclinical setting. The implementation achieved 2 to 4 times acceleration for image acquisition and satisfactory quality of image reconstruction.
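The mask-generation step can be sketched concretely: the inverse transform method maps uniform random numbers through a distribution's inverse CDF to pick phase-encode lines, concentrating samples near the k-space centre. The sketch below uses the Cauchy case; the matrix size, scale parameter, and undersampling ratio are illustrative, not the study's values.

```python
import numpy as np

# Inverse transform sampling of a Cauchy density to select which
# phase-encode lines to acquire. Lines near the k-space centre (where
# most image energy lives) are sampled densely, outer lines sparsely.

def cauchy_mask(n_lines=128, ratio=0.4, gamma=15.0, seed=0):
    rng = np.random.default_rng(seed)
    centre = n_lines / 2
    chosen = set()
    while len(chosen) < int(ratio * n_lines):
        u = rng.random()
        # inverse CDF of a Cauchy distribution centred on the k-space centre
        k = int(round(centre + gamma * np.tan(np.pi * (u - 0.5))))
        if 0 <= k < n_lines:           # discard draws outside the matrix
            chosen.add(k)
    mask = np.zeros(n_lines, bool)
    mask[list(chosen)] = True
    return mask

mask = cauchy_mask()
print(mask.sum(), "of 128 phase-encode lines sampled")
```

The same routine works for the other densities in the study by swapping in their inverse CDFs; the resulting 1-D mask is what gets programmed into the pulse sequence as the list of lines to acquire.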
Implementation of Wi-Fi Signal Sampling on an Android Smartphone for Indoor Positioning Systems.
Liu, Hung-Huan; Liu, Chun
2017-12-21
Collecting and maintaining radio fingerprints for wireless indoor positioning systems involves considerable time and labor. We previously proposed the quick radio fingerprint collection (QRFC) algorithm, which employs the built-in accelerometer of Android smartphones to implement step detection in order to assist in collecting radio fingerprints. In the present study, we divided the algorithm into moving sampling (MS) and stepped MS (SMS), and describe the implementation and comparison of both algorithms. Technical details and common errors concerning the use of Android smartphones to collect Wi-Fi radio beacons were surveyed and discussed. The results of signal sampling experiments performed in a hallway measuring 54 m in length showed that, in terms of the time required to complete the collection of access point (AP) signals, static sampling (SS; the traditional procedure for collecting Wi-Fi signals) took at least 2 h, whereas MS and SMS took approximately 150 and 300 s, respectively. Notably, AP signals obtained through MS and SMS were comparable to those obtained through SS in terms of the distribution of received signal strength indicator (RSSI) values and positioning accuracy. Therefore, MS and SMS are recommended instead of SS as signal sampling procedures for indoor positioning algorithms.
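The step-detection component that paces the sampling can be illustrated with a minimal sketch: count a step whenever the acceleration magnitude crosses a threshold upward, with a refractory period to suppress double counts. The threshold, refractory length, and synthetic signal below are illustrative, not the QRFC parameters.

```python
import math

# Minimal threshold-crossing step detector over acceleration-magnitude
# samples (m/s^2). A refractory period (in samples) prevents one stride
# from being counted twice. Parameters are illustrative.

def count_steps(magnitude, threshold=11.0, refractory=20):
    steps, last = 0, -refractory
    for i in range(1, len(magnitude)):
        crossed = magnitude[i - 1] < threshold <= magnitude[i]
        if crossed and i - last >= refractory:
            steps += 1
            last = i
    return steps

# synthetic walk: gravity baseline plus a positive bump every 50 samples
signal = [9.8 + 3.0 * max(0.0, math.sin(2 * math.pi * i / 50)) for i in range(260)]
print(count_steps(signal))
```

Each detected step is the trigger at which a moving-sampling scheme would timestamp a Wi-Fi scan and advance the assumed position along the survey path.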
Implementation of Wi-Fi Signal Sampling on an Android Smartphone for Indoor Positioning Systems
Liu, Chun
2017-01-01
Collecting and maintaining radio fingerprints for wireless indoor positioning systems involves considerable time and labor. We previously proposed the quick radio fingerprint collection (QRFC) algorithm, which employs the built-in accelerometer of Android smartphones to implement step detection in order to assist in collecting radio fingerprints. In the present study, we divided the algorithm into moving sampling (MS) and stepped MS (SMS), and describe the implementation and comparison of both algorithms. Technical details and common errors concerning the use of Android smartphones to collect Wi-Fi radio beacons were surveyed and discussed. The results of signal sampling experiments performed in a hallway measuring 54 m in length showed that, in terms of the time required to complete the collection of access point (AP) signals, static sampling (SS; the traditional procedure for collecting Wi-Fi signals) took at least 2 h, whereas MS and SMS took approximately 150 and 300 s, respectively. Notably, AP signals obtained through MS and SMS were comparable to those obtained through SS in terms of the distribution of received signal strength indicator (RSSI) values and positioning accuracy. Therefore, MS and SMS are recommended instead of SS as signal sampling procedures for indoor positioning algorithms. PMID:29267234
Cos, Oriol; Ramon, Ramon; Montesinos, José Luis; Valero, Francisco
2006-09-05
A predictive control algorithm coupled with a PI feedback controller has been satisfactorily implemented in heterologous Rhizopus oryzae lipase production by the Pichia pastoris methanol utilization slow (Mut(s)) phenotype. This control algorithm has allowed the study of the effect of methanol concentration, ranging from 0.5 to 1.75 g/L, on heterologous protein production. The maximal lipolytic activity (490 UA/mL), specific yield (11,236 UA/g(biomass)), productivity (4,901 UA/L·h), and specific productivity (112 UA/g(biomass)·h) were reached at a methanol concentration of 1 g/L. These parameters are almost double those obtained with manual control at a similar methanol set-point. The study of the specific growth, consumption, and production rates showed different patterns for these rates depending on the methanol concentration set-point. The results obtained show the need for a robust control scheme when reproducible quality and productivity are sought. It has been demonstrated that the model-based control proposed here is a very efficient, robust, and easy-to-implement strategy from an industrial application point of view. (c) 2006 Wiley Periodicals, Inc.
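The feedback half of such a scheme can be sketched as a discrete PI controller that holds the methanol concentration at its set-point by adjusting the feed rate. This is only the PI part, not the authors' predictive algorithm, and the toy mass balance, gains, and consumption rate are all illustrative.

```python
# Discrete PI control of methanol concentration in a toy bioreactor model:
# the feed adds methanol, the cells consume it at a fixed rate, and the
# controller drives the concentration to the set-point. All parameters
# (gains, rates, units) are illustrative.

def simulate_pi(setpoint=1.0, kp=2.0, ki=0.5, dt=0.1, steps=600):
    conc, integral = 0.5, 0.0        # g/L, integrated error
    consumption = 0.8                # g/L/h consumed by the cells
    for _ in range(steps):
        error = setpoint - conc
        integral += error * dt
        feed = max(0.0, kp * error + ki * integral)   # feed rate cannot be negative
        conc += (feed - consumption) * dt             # toy mass balance
    return conc

final = simulate_pi()
print(round(final, 3))
```

The integral term is what lets the controller sustain the steady feed needed to balance consumption at zero error; the predictive layer in the paper sits on top of such a loop to anticipate the strongly varying methanol demand.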
Using the Whole School, Whole Community, Whole Child Model: Implications for Practice
Rooney, Laura E; Videto, Donna M; Birch, David A
2015-01-01
BACKGROUND Schools, school districts, and communities seeking to implement the Whole School, Whole Community, Whole Child (WSCC) model should carefully and deliberately select planning, implementation, and evaluation strategies. METHODS In this article, we identify strategies, steps, and resources within each phase that can be integrated into existing processes that help improve health outcomes and academic achievement. Implementation practices may vary across districts depending upon available resources and time commitments. RESULTS Obtaining and maintaining administrative support at the beginning of the planning phase is imperative for identifying and implementing strategies and sustaining efforts to improve student health and academic outcomes. Strategy selection hinges on priority needs, community assets, and resources identified through the planning process. Determining the results of implementing the WSCC is based upon a comprehensive evaluation that begins during the planning phase. Evaluation guides success in attaining goals and objectives, assesses strengths and weaknesses, provides direction for program adjustment, revision, and future planning, and informs stakeholders of the effect of WSCC, including the effect on academic indicators. CONCLUSIONS With careful planning, implementation, and evaluation efforts, use of the WSCC model has the potential of focusing family, community, and school education and health resources to increase the likelihood of better health and academic success for students and improve school and community life in the present and in the future. PMID:26440824
Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.
Erdem, Hamit
2010-10-01
Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
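One of the standard software linearization approaches covered by such comparisons, a calibration lookup table with linear interpolation, can be sketched briefly. The calibration pairs below are hypothetical, merely shaped like an optical distance sensor whose output voltage falls with range; they are not the paper's data.

```python
# Lookup-table linearization with piecewise-linear interpolation, the kind
# of integer-friendly method suited to low-cost microcontrollers.
# (adc_reading, distance_cm) calibration points, descending ADC value;
# values are illustrative, not measured.

TABLE = [(600, 10), (430, 20), (330, 30), (270, 40), (230, 50), (200, 60)]

def linearize(adc):
    """Map a raw ADC reading to distance by interpolating the table."""
    if adc >= TABLE[0][0]:           # clamp below the shortest calibrated range
        return TABLE[0][1]
    if adc <= TABLE[-1][0]:          # clamp beyond the longest calibrated range
        return TABLE[-1][1]
    for (a1, d1), (a2, d2) in zip(TABLE, TABLE[1:]):
        if a2 <= adc <= a1:
            return d1 + (d2 - d1) * (a1 - adc) / (a1 - a2)

print(linearize(430), linearize(380), linearize(650))
```

The trade-offs the paper measures show up directly here: a denser table costs memory but improves accuracy, while the per-reading work is a short search and one multiply-divide, which keeps execution time low on an integer MCU.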
Microprocessor utilization in search and rescue missions
NASA Technical Reports Server (NTRS)
Schwartz, M.; Bashkow, T.
1978-01-01
The position of an emergency transmitter may be determined by measuring the Doppler shift of the distress signal as received by an orbiting satellite. This requires the computation of an initial estimate and refinement of this estimate through an iterative, nonlinear, least-squares estimation. A version of the algorithm was implemented and tested by locating a transmitter on the premises and obtaining observations from a satellite. The computer used was an IBM 360/95. The position was determined within the desired 10 km radius accuracy. The feasibility of performing the same task in real time using microprocessor technology was then determined. The least-squares algorithm was implemented on an Intel 8080 microprocessor. The results indicate that a microprocessor can easily match the IBM implementation in accuracy and perform within the time limits set.
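The iterative nonlinear least-squares refinement can be sketched with Gauss-Newton on a toy problem: noiseless ranges to known stations stand in for the satellite Doppler measurements (the structure of the iteration is the same; the geometry and measurement model are illustrative).

```python
import math

# Gauss-Newton refinement of a 2-D position estimate from nonlinear
# measurements: starting from a crude initial estimate, each iteration
# linearizes the model, solves the normal equations, and updates.

stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_pos = (62.0, 37.0)
ranges = [math.dist(true_pos, s) for s in stations]  # noiseless "measurements"

def gauss_newton(x, y, iters=10):
    for _ in range(iters):
        # assemble the 2x2 normal equations J^T J d = J^T r by hand
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (sx, sy), r_meas in zip(stations, ranges):
            r = math.dist((x, y), (sx, sy))
            jx, jy = (x - sx) / r, (y - sy) / r   # d(range)/dx, d(range)/dy
            res = r_meas - r
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * res; b2 += jy * res
        det = a11 * a22 - a12 * a12
        dx = (a22 * b1 - a12 * b2) / det
        dy = (a11 * b2 - a12 * b1) / det
        x, y = x + dx, y + dy
    return x, y

x, y = gauss_newton(50.0, 50.0)   # crude initial estimate
print(round(x, 3), round(y, 3))
```

The fixed, small per-iteration cost (a handful of multiplies and one 2x2 solve per measurement set) is what made this kind of loop feasible on an 8-bit processor like the 8080.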
Parallel grid generation algorithm for distributed memory computers
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Moitra, Anutosh
1994-01-01
A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple instruction multiple data machines are indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
Koenig, Agnès; Bügler, Jürgen; Kirsch, Dieter; Köhler, Fritz; Weyermann, Céline
2015-01-01
An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to maintenances. Moreover, the aging curves were influenced by ink composition, as well as storage conditions (particularly when the samples were not stored in "normal" room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution to deal with ink sample inhomogeneity. © 2014 American Academy of Forensic Science.
A multi-threshold sampling method for TOF-PET signal processing
NASA Astrophysics Data System (ADS)
Kim, H.; Kao, C. M.; Xie, Q.; Chen, C. T.; Zhou, L.; Tang, F.; Frisch, H.; Moses, W. W.; Choong, W. S.
2009-04-01
As an approach to realizing all-digital data acquisition for positron emission tomography (PET), we have previously proposed and studied a multi-threshold sampling method to generate samples of a PET event waveform with respect to a few user-defined amplitudes. In this sampling scheme, one can extract both the energy and timing information for an event. In this paper, we report our prototype implementation of this sampling method and the performance results obtained with this prototype. The prototype consists of two multi-threshold discriminator boards and a time-to-digital converter (TDC) board. Each of the multi-threshold discriminator boards takes one input and provides up to eight threshold levels, which can be defined by users, for sampling the input signal. The TDC board employs the CERN HPTDC chip that determines the digitized times of the leading and falling edges of the discriminator output pulses. We connect our prototype electronics to the outputs of two Hamamatsu R9800 photomultiplier tubes (PMTs) that are individually coupled to a 6.25×6.25×25 mm3 LSO crystal. By analyzing waveform samples generated by using four thresholds, we obtain a coincidence timing resolution of about 340 ps and an ˜18% energy resolution at 511 keV. We are also able to estimate the decay-time constant from the resulting samples and obtain a mean value of 44 ns with an ˜9 ns FWHM. In comparison, using digitized waveforms obtained at a 20 GSps sampling rate for the same LSO/PMT modules we obtain ˜300 ps coincidence timing resolution, ˜14% energy resolution at 511 keV, and ˜5 ns FWHM for the estimated decay-time constant. Details of the results on the timing and energy resolutions by using the multi-threshold method indicate that it is a promising approach for implementing digital PET data acquisition.
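The reconstruction idea behind multi-threshold sampling, recovering timing, energy, and the decay constant from the times at which a pulse crosses a few known voltage levels, can be sketched on an idealized pulse. The pure exponential tail and the specific threshold levels below are illustrative simplifications, not the prototype's waveforms.

```python
import math

# From falling-edge threshold-crossing times of an exponential pulse tail
# v(t) = A * exp(-t / tau), recover the decay constant and the amplitude
# (which is proportional to the deposited energy).

A, tau = 1.0, 44.0                 # amplitude (a.u.) and decay time (ns)
thresholds = [0.1, 0.2, 0.4, 0.8]  # user-defined discriminator levels

# falling-edge crossing times the TDC would report for this pulse
t_fall = [tau * math.log(A / v) for v in thresholds]

# decay constant from two falling-edge samples: tau = (t1 - t2) / ln(v2 / v1)
(v1, t1), (v2, t2) = (thresholds[0], t_fall[0]), (thresholds[3], t_fall[3])
tau_est = (t1 - t2) / math.log(v2 / v1)

# amplitude (energy estimate) by extrapolating one sample back to t = 0
a_est = v2 * math.exp(t2 / tau_est)

print(round(tau_est, 1), round(a_est, 3))
```

The example is deliberately circular (the samples are synthesized from the same model used to invert them); on real waveforms the prototype fits many crossings from several thresholds, which is how it reaches the reported timing, energy, and decay-time resolutions.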
Plasma mirror implementation on LFEX laser for ion and fast electron fast ignition
NASA Astrophysics Data System (ADS)
Morace, A.; Kojima, S.; Arikawa, Y.; Fujioka, S.; Yogo, A.; Tosaki, S.; Sakata, S.; Abe, Y.; Lee, S. H.; Matsuo, K.; Sagisaka, A.; Kondo, K.; Pirozhkov, A. S.; Norimatsu, T.; Jitsuno, T.; Miyanaga, N.; Shiraga, H.; Nakai, M.; Nishimura, H.; Azechi, H.
2017-12-01
In this work we report the successful implementation of plasma mirror (PM) technology on the LFEX laser facility at the Institute of Laser Engineering, Osaka University. The LFEX laser pulse was successfully refocused at the target chamber center (TCC) by means of a spherical plasma mirror, resulting in a laser intensity of 5 × 10¹⁸ W cm⁻², with 45% reflectivity at a laser flux of about 90 J cm⁻² on the PM. Experimental results show stable focusing and pointing of the LFEX pulse after PM refocusing. The contrast improvement was demonstrated both by a cooler fast-electron slope temperature distribution and by the ability to shoot sub-µm plastic foils, obtaining proton beams with maximum energies exceeding 20 MeV. The experimental results are qualitatively reproduced by 2D particle-in-cell simulations.
Merging Digital Surface Models Implementing Bayesian Approaches
NASA Astrophysics Data System (ADS)
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data were sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and additional measurements are difficult or costly to obtain; the lack of data can then be compensated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and local entropy has been used for that purpose. In addition to the a priori estimates, GNSS RTK measurements were collected in the field and used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, an area containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods were employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, in particular characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model was compared with the well-established maximum likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
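One way to read the merging scheme: with Gaussian sensor errors and a Gaussian prior (e.g. the smooth-roof assumption), the posterior per-pixel height is a precision-weighted average of prior and observations. A toy sketch, where all heights, sigmas, and the flat-roof prior are invented for illustration:

```python
import numpy as np

def merge_dsms(dsms, sigmas, prior_mean, prior_sigma):
    """Per-pixel conjugate Gaussian fusion: the posterior height is the
    precision-weighted average of the prior and each DSM observation."""
    w_prior = 1.0 / prior_sigma ** 2
    w_obs = [1.0 / s ** 2 for s in sigmas]
    num = w_prior * prior_mean + sum(w * z for w, z in zip(w_obs, dsms))
    den = w_prior + sum(w_obs)
    return num / den, np.sqrt(1.0 / den)

# Two hypothetical 2x2 height patches (metres) with different vertical
# accuracy, plus a smoothness-based prior (flat-roof assumption).
dsm_a = np.array([[10.2, 10.4], [10.1, 10.3]])   # sigma ~ 0.5 m
dsm_b = np.array([[10.6, 10.8], [10.5, 10.7]])   # sigma ~ 1.0 m
prior = np.full((2, 2), 10.0)

merged, merged_sigma = merge_dsms([dsm_a, dsm_b], [0.5, 1.0], prior, 2.0)
print(merged)         # lies between the prior and the observations
print(merged_sigma)   # posterior sigma is smaller than any single input
```

The more accurate DSM dominates the result, and the fused uncertainty is smaller than that of either input, which matches the reported improvement over the individual DSMs.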
Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek
2014-01-01
This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 MPixels/s and to process a Full HD video stream (1,920 × 1,080 @ 60 fps). The structure of the optical flow module, as well as the pre- and post-filtering blocks and a flow reliability computation unit, is described in detail. Three versions of the optical flow module, differing in numerical precision, working frequency, and accuracy of the obtained results, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100-times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
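The pipeline described above unrolls the classic Horn-Schunck Jacobi iteration into hardware stages, one stage per iteration. A minimal software reference of that iteration (the synthetic test pattern and the periodic boundaries via np.roll are simplifications for illustration, not details of the FPGA design):

```python
import numpy as np

def nbr_mean(f):
    # 4-neighbour average with periodic boundaries (a simplification)
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))

def horn_schunck(I1, I2, alpha=0.5, n_iter=200):
    """Jacobi-style Horn-Schunck: each pass pulls the flow towards the
    local brightness-constancy constraint while averaging over
    neighbours (the smoothness term)."""
    Iy, Ix = np.gradient(I1)       # spatial derivatives (rows = y, cols = x)
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        ub, vb = nbr_mean(u), nbr_mean(v)
        num = Ix * ub + Iy * vb + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = ub - Ix * num / den
        v = vb - Iy * num / den
    return u, v

# Periodic test pattern shifted one pixel to the right between frames.
Y, X = np.mgrid[0:64, 0:64].astype(float)
I1 = np.sin(2 * np.pi * X / 8) + np.sin(2 * np.pi * Y / 8)
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
print(u.mean(), v.mean())   # u should approach +1, v should stay near 0
```

Each loop body maps naturally to one pipeline stage of fixed arithmetic, which is why the hardware version can fix the iteration count and stream pixels through at a constant rate.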
Navarro, Juan-José; Lara, Laura
2017-01-01
Dynamic Assessment (DA) has been shown to have more predictive value than conventional tests for academic performance. However, in relation to reading difficulties, further research is needed to determine the predictive validity of DA for specific aspects of the different processes involved in reading, and its differential validity for different subgroups of students with an academic disadvantage. This paper analyzes the implementation of a DA device that evaluates processes involved in reading (EDPL) among 60 students with reading comprehension difficulties between 9 and 16 years of age, of whom 20 have intellectual disabilities, 24 have reading-related learning disabilities, and 16 have socio-cultural disadvantages. We specifically analyze the predictive validity of the EDPL device for attitude toward reading and for the use of dialogue/participation strategies in reading activities in the classroom during the implementation stage. We also analyze whether the EDPL device provides additional information to that obtained with a conventionally applied personal-social adjustment scale (APSL). Results showed that dynamic scores obtained from the implementation of the EDPL device significantly predict the studied variables. Moreover, dynamic scores showed significant incremental validity relative to predictions based on the APSL scale. Regarding differential validity, the results indicated higher predictive validity of DA for students with intellectual disabilities and reading disabilities than for students with socio-cultural disadvantages. Furthermore, the role of metacognition and its relation to the processes of personal-social adjustment in explaining the results is discussed. PMID:28243215
Brady, Samuel L.; Moore, Bria M.; Yee, Brian S.; Kaufman, Robert A.
2015-01-01
Purpose To determine a comprehensive method for the implementation of adaptive statistical iterative reconstruction (ASIR) for maximal radiation dose reduction in pediatric computed tomography (CT) without changing the magnitude of noise in the reconstructed image or the contrast-to-noise ratio (CNR) in the patient. Materials and Methods The institutional review board waived the need to obtain informed consent for this HIPAA-compliant quality analysis. Chest and abdominopelvic CT images obtained before ASIR implementation (183 patient examinations; mean patient age, 8.8 years ± 6.2 [standard deviation]; range, 1 month to 27 years) were analyzed for image noise and CNR. These measurements were used in conjunction with noise models derived from anthropomorphic phantoms to establish new beam current–modulated CT parameters to implement 40% ASIR at 120 and 100 kVp without changing noise texture or magnitude. Image noise was assessed in images obtained after ASIR implementation (492 patient examinations; mean patient age, 7.6 years ± 5.4; range, 2 months to 28 years) the same way it was assessed in the pre-ASIR analysis. Dose reduction was determined by comparing size-specific dose estimates in the pre- and post-ASIR patient cohorts. Data were analyzed with paired t tests. Results With 40% ASIR implementation, the average relative dose reduction for chest CT was 39% (2.7/4.4 mGy), with a maximum reduction of 72% (5.3/18.8 mGy). The average relative dose reduction for abdominopelvic CT was 29% (4.8/6.8 mGy), with a maximum reduction of 64% (7.6/20.9 mGy). Beam current modulation was unnecessary for patients weighing 40 kg or less. The difference between 0% and 40% ASIR noise magnitude was less than 1 HU, with statistically nonsignificant increases in patient CNR at 100 kVp of 8% (15.3/14.2; P = .41) for chest CT and 13% (7.8/6.8; P = .40) for abdominopelvic CT. 
Conclusion Radiation dose reduction at pediatric CT was achieved when 40% ASIR was implemented as a dose reduction tool only; no net change to the magnitude of noise in the reconstructed image or the patient CNR occurred. PMID:23901128
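The parenthesized mGy pairs in the abstract read as post-/pre-ASIR size-specific dose estimates; under that reading, the quoted percentages can be checked directly:

```python
def relative_reduction(post, pre):
    """Relative dose reduction in percent from pre- and post-ASIR doses."""
    return 100.0 * (pre - post) / pre

# Chest CT (average and maximum case):
print(round(relative_reduction(2.7, 4.4)))    # 39 (%)
print(round(relative_reduction(5.3, 18.8)))   # 72 (%)
# Abdominopelvic CT (average and maximum case):
print(round(relative_reduction(4.8, 6.8)))    # 29 (%)
print(round(relative_reduction(7.6, 20.9)))   # 64 (%)
```

All four values reproduce the percentages stated in the abstract, confirming that interpretation of the parenthetical pairs.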
Landes, Sara J; Matthieu, Monica M; Smith, Brandy N; Trent, Lindsay R; Rodriguez, Allison L; Kemp, Janet; Thompson, Caitlin
2016-08-01
Little is known about nonresearch training experiences of providers who implement evidence-based psychotherapies for suicidal behaviors among veterans. This national program evaluation identified the history of training, training needs, and desired resources of clinicians who work with at-risk veterans in a national health care system. This sequential mixed methods national program evaluation used a post-only survey design to obtain needs assessment data from clinical sites (N = 59) within Veterans Health Administration (VHA) facilities that implemented dialectical behavior therapy (DBT). Data were also collected on resources preferred to support ongoing use of DBT. While only 33% of clinical sites within VHA facilities reported that staff attended a formal DBT intensive training workshop, nearly 97% of participating sites reported having staff who completed self-study using DBT manuals. Mobile apps for therapists and clients and templates for documentation in the electronic health records to support measurement-based care were desired clinical resources. Results indicate that less-intensive training models can aid staff in implementing DBT in real-world health care settings. While more training is requested, a number of VHA facilities have successfully implemented DBT into the continuum of care for veterans at risk for suicide. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
ERIC Educational Resources Information Center
Swanson, Richard A.; Sleezer, Catherine M.
The AMSCO culture survey instrument was developed to obtain specific information about the changing employee values resulting from the implementation of the new quality programs at AMSCO, Rice Lake, Wisconsin. The culture dimensions measured in the survey included job evaluation/job satisfaction, work efficiency, training and development,…
Knowledge and Attitudes of Secondary School Teachers regarding Sexual Health Education in England
ERIC Educational Resources Information Center
Westwood, Jo; Mullan, Barbara
2007-01-01
Objective: To assess the sexual health knowledge of teachers who contribute to secondary school sexual health education in order to determine whether teachers are adequately prepared to implement present government education and public health policies. Design: Results were obtained from a questionnaire as part of a two-phase intervention study.…
ERIC Educational Resources Information Center
Hall, Gene E.
2013-01-01
Purpose: In far too many cases, initiatives to change schools by introducing new programs, processes, and reforms have not resulted in attainment of the desired outcomes. A major reason for limited outcomes suggested in this paper is that there has been a failure to learn from and apply constructs and measures related to understanding,…
The Child Well-Being Scales as a Clinical Tool and a Management Information System.
ERIC Educational Resources Information Center
Lyons, Peter; Doueck, Howard J.; Koster, Andrew J.; Witzky, Melissa K.; Kelly, Patricia L.
1999-01-01
Describes implementation of a computerized version of the Child Welfare League of America's Child Well-Being Scales by a family services agency in southern Ontario. Reviews results obtained from 172 families to illustrate the potential for using computerized risk assessment as an aid in clinical, supervisory, and management decision-making…
Clinical supervision: what's going on? Results of a questionnaire.
Bishop, V
This paper presents data obtained from a questionnaire sent to trust nurse executives in England and Scotland. While the data indicate a great deal of enthusiasm for clinical supervision, some concern must be raised about the lack of preparation and support for those involved in its implementation, a fact that will undoubtedly reflect badly in any evaluation exercise.
Do and Understand: The Effectiveness of Experiential Education
ERIC Educational Resources Information Center
Gama, Claudia; Fernández, Cristina
2009-01-01
This paper shares the results of a study on the benefits which an experiential education program has on students in the K-12 range. The study was carried out at a bilingual North American-style college preparatory school located in Colombia, South America. Research was based on experiences obtained through the coordination and implementation of an…
ERIC Educational Resources Information Center
Moore, John W.; Mitchem, Cheryl E.
2004-01-01
This paper provides a model of course-embedded assessment for use in an undergraduate Accounting Information Systems course, and reports the results obtained from implementation. The profession's educational objectives are mapped to specific computer skills and assignments, to provide direct evidence of learning outcomes. Indirect evidence of…
76 FR 80754 - Approval and Promulgation of Implementation Plans; State of Kansas: Regional Haze
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-27
... available in the docket for this rulemaking. The supplemental dispersion modeling provided by the State was... determined by using SNCR costs obtained from Jeffrey Unit 1, and scaling the dollar amount using heat input... impact of Kansas BART sources on all Class I areas impacted. NPCA says that the modeling results...
NASA Astrophysics Data System (ADS)
Andini, S.; Fitriana, L.; Budiyono
2018-04-01
This research aims to describe the development process and the resulting product of learning material using a flipbook. The learning material is developed for geometry, specifically quadrilaterals. This research belongs to Research and Development (R&D). The procedure follows the steps of the Budiyono model: conducting preliminary research, planning and developing a theoretical and prototype product, and determining product quality (validity, practicality, and effectiveness). The experts' average assessment of the theoretical product is 4.54, while their validity rating of the prototype product is 4.62. Practicality was assessed through the implementation of the flipbook prototype in each meeting of the limited-scale try-out, based on learning observation, with an average score of 4.10, increasing to 4.50 in the wide-scale try-out. The effectiveness of the prototype product was determined from the pretest and posttest results of the limited-scale and wide-scale try-outs. The average pretest score increased significantly, by 25.2, from the limited-scale to the wide-scale try-out, and the average posttest score increased by 8.16 between the limited-scale and wide-scale try-outs. From the product quality results, it can be concluded that the flipbook media can be used for geometry learning in elementary schools that implement the 2013 curriculum.
Zhan, J X; Ikehata, M; Mayuzumi, M; Koizumi, E; Kawaguchi, Y; Hashimoto, T
2013-01-01
A feedforward-feedback aeration control strategy based on online oxygen requirement (OR) estimation is proposed for oxidation ditch (OD) processes, and it is further developed for intermittent-aeration OD processes, which are the most popular type in Japan. For calculating the OR, the concentrations of influent biochemical oxygen demand (BOD) and total Kjeldahl nitrogen (TKN) are estimated online from measurements of suspended solids (SS); TKN is sometimes estimated from NH4-N instead. Mixed liquor suspended solids (MLSS) and temperature are used to estimate the oxygen required for endogenous respiration. A straightforward parameter named the aeration coefficient, Ka, is introduced as the only parameter to be tuned, either automatically by feedback control or manually by the operators. Simulation with an activated sludge model was performed in comparison with fixed-interval aeration, and satisfactory results were obtained with the OR control strategy. The OR control strategy has been implemented at seven full-scale OD plants, and improvements in nitrogen removal were obtained in all of them. As an example, at the Yumoto wastewater treatment plant, where continuous aeration had previously been applied, implementing intermittent OR control reduced the total nitrogen concentration from more than 5 mg/L to under 2 mg/L and cut electricity consumption by 61.2% for aeration, or 21.5% for the whole plant.
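The feedforward part of such a strategy can be sketched as: estimate loads from online SS, add an endogenous term from MLSS and temperature, then scale the aeration time by the single coefficient Ka. Every coefficient below (the SS-to-BOD/TKN proxies, yield factors, temperature correction) is an illustrative placeholder, not a plant-calibrated value from the paper:

```python
def oxygen_requirement(q_in, ss, mlss, temp,
                       a_bod=0.6, b_n=4.6, k_end=0.05):
    """Very simplified OR estimate (kg O2/d). BOD and TKN are inferred
    from SS, as in the abstract; all coefficients are assumptions."""
    bod = 0.8 * ss                    # hypothetical SS -> BOD proxy (mg/L)
    tkn = 0.1 * ss                    # hypothetical SS -> TKN proxy (mg/L)
    carbonaceous = a_bod * bod * q_in / 1000.0   # q_in in m^3/d
    nitrogenous = b_n * tkn * q_in / 1000.0      # nitrification demand
    endogenous = k_end * mlss * 1.072 ** (temp - 20.0)
    return carbonaceous + nitrogenous + endogenous

def aeration_minutes(or_kg, ka, cycle_min=60.0, max_frac=0.8):
    """Feedforward: aeration time per cycle scales with OR through the
    single tunable coefficient Ka; feedback would adjust Ka itself."""
    return min(max_frac * cycle_min, ka * or_kg)

or_now = oxygen_requirement(q_in=2000.0, ss=180.0, mlss=3000.0, temp=18.0)
print(aeration_minutes(or_now, ka=0.1))
```

The appeal of the design is that operators (or the feedback loop) only ever touch Ka, while the load estimation runs automatically from routine sensors.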
Impact of PACS on dictation turnaround time and productivity.
Lepanto, Luigi; Paré, Guy; Aubry, David; Robillard, Pierre; Lesage, Jacques
2006-03-01
This study was conducted to measure the impact of PACS on dictation turnaround time and productivity. The radiology information system (RIS) database was interrogated to calculate the time interval between image production and dictation for every exam performed during three 90-day periods (the 3 months preceding PACS implementation, the 3 months immediately following PACS deployment, and a 3-month period 1 year after PACS implementation). Data were obtained for three exam types: chest radiographs, abdominal CT, and spine MRI. The mean dictation turnaround times obtained during the different pre- and post-PACS periods were compared using analysis of variance (ANOVA). Productivity was also determined for each period and for each exam type, and was expressed as the number of studies interpreted per full-time equivalent (FTE) radiologist. In the immediate post-PACS period, dictation turnaround time decreased 20% (p < 0.001) for radiography, but increased 13% (ns) for CT and 28% (p < 0.001) for MRI. One year after PACS was implemented, dictation turnaround time decreased 45% (p < 0.001) for radiography and 36% (p < 0.001) for MRI. For CT, 1 year post-PACS, turnaround times returned to pre-PACS levels. Productivity in the immediate post-PACS period increased 3% and 38% for radiography and CT, respectively, whereas a 6% decrease was observed for MRI. One year after implementation, productivity increased 27%, 98%, and 19% in radiography, CT, and MRI, respectively. PACS benefits, namely, shortened dictation turnaround time and increased productivity, are evident 1 year after PACS implementation. In the immediate post-PACS period, results vary with the different imaging modalities.
Computer-based visual communication in aphasia.
Steele, R D; Weinrich, M; Wertz, R T; Kleczewska, M K; Carlson, G S
1989-01-01
The authors describe their recently developed Computer-aided VIsual Communication (C-VIC) system, and report results of single-subject experimental designs probing its use with five chronic, severely impaired aphasic individuals. Studies replicate earlier results obtained with a non-computerized system, demonstrate patient competence with the computer implementation, extend the system's utility, and identify promising areas of application. Results of the single-subject experimental designs clarify patients' learning, generalization, and retention patterns, and highlight areas of performance difficulties. Future directions for the project are indicated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiu, V.; Makaruk, H.E.
1997-09-01
The starting points of this paper are two size-optimal solutions: (1) one for implementing arbitrary Boolean functions; and (2) another for implementing certain subclasses of Boolean functions. Because VLSI implementations do not cope well with highly interconnected nets -- the area of a chip grows with the cube of the fan-in -- this paper analyzes the influence of limited fan-in on the size optimality of the two solutions mentioned. First, the authors extend a result from Horne and Hush valid for fan-in Δ = 2 to arbitrary fan-in. Second, they prove that size-optimal solutions are obtained for small constant fan-ins for both constructions, while relative minimum-size solutions can be obtained for fan-ins strictly lower than linear. These results are in agreement with similar ones proving that for small constant fan-ins (Δ = 6...9) there exist VLSI-optimal (i.e., minimizing AT²) solutions, while there are similar small constants relating to the capacity of processing information.
Earthquake behavior of steel cushion-implemented reinforced concrete frames
NASA Astrophysics Data System (ADS)
Özkaynak, Hasan
2018-04-01
The earthquake performance of vulnerable structures can be increased by the implementation of supplementary energy-dissipative metallic elements. The main aim of this paper is to describe the earthquake behavior of steel cushion-implemented reinforced concrete frames (SCI-RCFR) in terms of displacement demands and energy components. Several quasi-static experiments were performed on steel cushions (SC) installed in reinforced concrete (RC) frames. The test results served as the basis of the analytical models of SCs and a bare reinforced concrete frame (B-RCFR). These models were integrated in order to obtain the resulting analytical model of the SCI-RCFR. Nonlinear-time history analyses (NTHA) were performed on the SCI-RCFR under the effects of the selected earthquake data set. According to the NTHA, SC application is an effective technique for increasing the seismic performance of RC structures. The main portion of the earthquake input energy was dissipated through SCs. SCs succeeded in decreasing the plastic energy demand on structural elements by almost 50% at distinct drift levels.
Mateos-Pérez, José María; Soto-Montenegro, María Luisa; Peña-Zalbidea, Santiago; Desco, Manuel; Vaquero, Juan José
2016-02-01
We present a novel segmentation algorithm for dynamic PET studies that groups pixels according to the similarity of their time-activity curves. Sixteen mice bearing a human tumor cell line xenograft (CH-157MN) were imaged with three different 68Ga-DOTA-peptides (DOTANOC, DOTATATE, DOTATOC) using a small animal PET-CT scanner. Regional activities (input function and tumor) were obtained after manual delineation of regions of interest over the image. The algorithm was implemented under the jClustering framework and used to extract the same regional activities as in the manual approach. The volume of distribution in the tumor was computed using the Logan linear method. A Kruskal-Wallis test was used to investigate significant differences between the manually and automatically obtained volumes of distribution. The algorithm successfully segmented all the studies. No significant differences were found for the same tracer across different segmentation methods. Manual delineation revealed significant differences between DOTANOC and the other two tracers (DOTANOC - DOTATATE, p=0.020; DOTANOC - DOTATOC, p=0.033). Similar differences were found using the leader-follower algorithm. An open implementation of a novel segmentation method for dynamic PET studies is presented and validated in rodent studies. It successfully replicated the manual results obtained in small-animal studies, thus making it a reliable substitute for this task and, potentially, for other dynamic segmentation procedures. Copyright © 2016 Elsevier Ltd. All rights reserved.
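The abstract names a leader-follower algorithm for grouping time-activity curves (TACs). A minimal sketch of that clustering idea: each TAC joins the best-matching existing cluster if its correlation with the centroid exceeds a threshold, otherwise it founds a new cluster. The synthetic kinetics and the threshold value are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def leader_follower(tacs, threshold=0.9):
    """Leader-follower clustering of time-activity curves: assign each
    TAC to the most correlated centroid above `threshold` (updating it
    incrementally), or start a new cluster."""
    centroids, counts, labels = [], [], []
    for tac in tacs:
        best, best_r = -1, threshold
        for i, c in enumerate(centroids):
            r = np.corrcoef(tac, c)[0, 1]
            if r > best_r:
                best, best_r = i, r
        if best == -1:
            centroids.append(tac.astype(float))
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[best] += 1
            centroids[best] += (tac - centroids[best]) / counts[best]
            labels.append(best)
    return np.array(labels), centroids

# Two synthetic kinetic shapes: a blood-pool-like TAC and a tumor-like TAC.
t = np.linspace(0.1, 60, 30)
blood = t * np.exp(-t / 5.0)          # fast peak, then washout
tumor = 1.0 - np.exp(-t / 20.0)       # slow accumulation
rng = np.random.default_rng(0)
tacs = [blood + 0.02 * rng.normal(size=30) for _ in range(20)] + \
       [tumor + 0.02 * rng.normal(size=30) for _ in range(20)]
labels, cents = leader_follower(np.array(tacs))
print(len(cents))   # the two kinetic families separate into 2 clusters
```

Because the pass is sequential and incremental, it scales linearly with the number of pixels, which matters for voxel-wise dynamic PET data.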
Modelling the degree of porosity of the ceramic surface intended for implants.
Stach, Sebastian; Kędzia, Olga; Garczyk, Żaneta; Wróbel, Zygmunt
2018-05-18
The main goal of the study was to develop a model of the degree of surface porosity of a biomaterial intended for implants. The model was implemented using MATLAB. A computer simulation was carried out based on the developed model, which resulted in a two-dimensional image of the modelled surface. Then, an algorithm for computerised image analysis of the surface of the actual oxide bioceramic layer was developed, which enabled determining its degree of porosity. In order to obtain the confocal micrographs of a few areas of the biomaterial, measurements were performed using the LEXT OLS4000 confocal laser microscope. The image analysis was carried out using MountainsMap Premium and SPIP. The obtained results allowed determining the input parameters of the program, on the basis of which porous biomaterial surface images were generated. The last part of the study involved verification of the developed model. The modelling method was tested by comparing the obtained results with the experimental data obtained from the analysis of surface images of the test material.
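The "degree of porosity" extracted by the image-analysis step reduces, in its simplest form, to a pore-area fraction of the imaged surface. A toy sketch of that computation (the synthetic height map and the threshold are assumptions, not the study's MATLAB algorithm):

```python
import numpy as np

def porosity_degree(surface, pore_threshold):
    """Degree of porosity = fraction of pixels classified as pores,
    here simply those below a height/intensity threshold."""
    pores = surface < pore_threshold
    return pores.sum() / surface.size

# Synthetic 'surface': near-uniform heights with two seeded pore regions.
rng = np.random.default_rng(1)
surface = rng.normal(loc=1.0, scale=0.05, size=(200, 200))
surface[50:60, 50:60] = 0.2     # hypothetical pore region (10x10 px)
surface[120:140, 80:90] = 0.1   # hypothetical pore region (20x10 px)
print(porosity_degree(surface, pore_threshold=0.5))   # -> 0.0075
```

On real confocal data the threshold would be derived from the height distribution (or the model's input parameters) rather than fixed by hand.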
Puig, V; Cembrano, G; Romera, J; Quevedo, J; Aznar, B; Ramón, G; Cabot, J
2009-01-01
This paper deals with the global control of the Riera Blanca catchment in the Barcelona sewer network using a predictive optimal control approach. The catchment has been modelled using a conceptual approach based on decomposing it into subcatchments and representing them as virtual tanks. This conceptual modelling approach allows real-time model calibration and control of the sewer network. The global control problem of the Riera Blanca catchment is solved using an optimal/predictive control algorithm. To implement the predictive optimal control, a software tool named CORAL is used. The on-line control is simulated by interfacing CORAL with a high-fidelity simulator of sewer networks (MOUSE); CORAL exchanges limnimeter readings and gate commands with MOUSE as if it were connected to the real SCADA system. Finally, the global control results obtained using predictive optimal control are presented and compared against those obtained using the current local control system. The results obtained using global control are very satisfactory compared to those obtained using local control.
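A virtual tank in this conceptual model is essentially a mass balance with a gate-limited release: volume integrates inflow minus the commanded outflow, and whatever exceeds storage spills as combined sewer overflow. A minimal sketch with invented numbers (the real controller optimizes gate commands over a prediction horizon across many coupled tanks, rather than draining each tank greedily as here):

```python
def simulate_virtual_tank(inflow, v_max, q_gate_max, dt=300.0):
    """One subcatchment as a virtual tank: inflow in m^3/s per dt-second
    step; returns final stored volume and total overflow (m^3)."""
    v, overflow = 0.0, 0.0
    for q_in in inflow:
        q_out = min(q_gate_max, v / dt + q_in)  # gate capacity caps release
        v += (q_in - q_out) * dt
        if v > v_max:                           # storage exceeded: CSO spill
            overflow += v - v_max
            v = v_max
    return v, overflow

# Illustrative storm over one subcatchment (m^3/s per 5-min step).
storm = [2.0] * 5 + [20.0] * 6 + [2.0] * 5
v_end, spilled = simulate_virtual_tank(storm, v_max=5000.0, q_gate_max=5.0)
print(v_end, spilled)   # 500.0 m^3 still stored, 22000.0 m^3 spilled
```

The predictive controller's advantage comes from anticipating the storm: pre-emptying tanks and coordinating gates across subcatchments can cut the spilled volume relative to purely local rules.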
2011-01-01
Background Audit and feedback is an established strategy for improving maternal, neonatal and child health. The Perinatal Problem Identification Programme (PPIP), implemented in South African public hospitals in the late 1990s, measures perinatal mortality rates and identifies avoidable factors associated with each death. The aim of this study was to elucidate the processes involved in the implementation and sustainability of this programme. Methods Clinicians' experiences of the implementation and maintenance of PPIP were explored qualitatively in two workshop sessions. An analytical framework comprising six stages of change, divided into three phases, was used: pre-implementation (create awareness, commit to implementation); implementation (prepare to implement, implement) and institutionalisation (integrate into routine practice, sustain new practices). Results Four essential factors emerged as important for the successful implementation and sustainability of an audit system throughout the different stages of change: 1) drivers (agents of change) and team work, 2) clinical outreach visits and supervisory activities, 3) institutional perinatal review and feedback meetings, and 4) communication and networking between health system levels, health care facilities and different role-players. During the pre-implementation phase high perinatal mortality rates highlighted the problem and indicated the need to implement an audit programme (stage 1). Commitment to implementing the programme was achieved by obtaining buy-in from management, administration and health care practitioners (stage 2). Preparations in the implementation phase included the procurement and installation of software and training in its use (stage 3). Implementation began with the collection of data, followed by feedback at perinatal review meetings (stage 4). 
The institutionalisation phase was reached when the results of the audit were integrated into routine practice (stage 5) and when data collection had been sustained for a longer period (stage 6). Conclusion Insights into the factors necessary for the successful implementation and maintenance of an audit programme and the process of change involved may also be transferable to similar low- and middle-income public health settings where the reduction of the neonatal mortality rate is a key objective in reaching Millennium Development Goal 4. A tool for reflecting on the implementation and maintenance of an audit programme is also proposed. PMID:21958353
Hydrodynamic simulations with the Godunov smoothed particle hydrodynamics
NASA Astrophysics Data System (ADS)
Murante, G.; Borgani, S.; Brunino, R.; Cha, S.-H.
2011-10-01
We present results based on an implementation of the Godunov smoothed particle hydrodynamics (GSPH), originally developed by Inutsuka, in the GADGET-3 hydrodynamic code. We first review the derivation of the GSPH discretization of the equations of momentum and energy conservation, starting from the convolution of these equations with the interpolating kernel. The two most important aspects of the numerical implementation of these equations are (a) the appearance of fluid velocity and pressure obtained from the solution of the Riemann problem between each pair of particles, and (b) the absence of an artificial viscosity term. We carry out three different controlled hydrodynamical three-dimensional tests, namely the Sod shock tube, the development of Kelvin-Helmholtz instabilities in a shear-flow test and the 'blob' test describing the evolution of a cold cloud moving against a hot wind. The results of our tests confirm and extend in a number of aspects those recently obtained by Cha, Inutsuka & Nayakshin: (i) GSPH provides a much improved description of contact discontinuities, with respect to smoothed particle hydrodynamics (SPH), thus avoiding the appearance of spurious pressure forces; (ii) GSPH is able to follow the development of gas-dynamical instabilities, such as the Kelvin-Helmholtz and the Rayleigh-Taylor ones; (iii) as a result, GSPH describes the development of curl structures in the shear-flow test and the dissolution of the cold cloud in the 'blob' test.
Besides comparing the results of GSPH with those from standard SPH implementations, we also discuss in detail the effect on the performance of GSPH of changing different aspects of its implementation: the choice of the number of neighbours; the accuracy of the interpolation procedure used to locate the interface between two fluid elements (particles) for the solution of the Riemann problem; the order of the reconstruction for the assignment of variables at the interface; and the choice of the limiter used to prevent oscillations of interpolated quantities in the solution of the Riemann problem. The results of our tests demonstrate that GSPH is in fact a highly promising hydrodynamic scheme, also to be coupled to an N-body solver, for astrophysical and cosmological applications.
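The key GSPH ingredient named in point (a), the pairwise Riemann problem between two particle states, can be illustrated with a linearised (acoustic) solver for the star-state pressure and velocity. This is a sketch of the idea only: practical GSPH implementations typically use an exact or iterative solver rather than the acoustic approximation shown here.

```python
import math

def acoustic_riemann(rho_l, u_l, p_l, rho_r, u_r, p_r, gamma=5.0 / 3.0):
    """Star-state pressure and velocity from a linearised (acoustic)
    Riemann solver between a 'left' and a 'right' particle state.
    In GSPH, interface values like these replace the artificial
    viscosity term of standard SPH."""
    c_l = math.sqrt(gamma * p_l / rho_l)  # left sound speed
    c_r = math.sqrt(gamma * p_r / rho_r)  # right sound speed
    z_l, z_r = rho_l * c_l, rho_r * c_r   # acoustic impedances
    p_star = (z_r * p_l + z_l * p_r - z_l * z_r * (u_r - u_l)) / (z_l + z_r)
    u_star = (z_l * u_l + z_r * u_r - (p_r - p_l)) / (z_l + z_r)
    return p_star, u_star
```

For two identical states the solver returns the input pressure and velocity; for two approaching states the star pressure rises, mimicking shock compression without any explicit viscosity.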
Implementation of WirelessHART in the NS-2 Simulator and Validation of Its Correctness
Zand, Pouria; Mathews, Emi; Havinga, Paul; Stojanovski, Spase; Sisinni, Emiliano; Ferrari, Paolo
2014-01-01
One of the first standards in the wireless sensor networks domain, WirelessHART (HART (Highway Addressable Remote Transducer)), was introduced to address industrial process automation and control requirements. This standard can be used as a reference point to evaluate other wireless protocols in the domain of industrial monitoring and control. This makes it worthwhile to set up a reliable WirelessHART simulator in order to achieve that reference point in a relatively easy manner. Moreover, it offers an alternative to expensive testbeds for testing and evaluating the performance of WirelessHART. This paper explains our implementation of WirelessHART in the NS-2 network simulator. To the best of our knowledge, this is the first implementation that supports the WirelessHART network manager as well as the whole stack (all OSI (Open Systems Interconnection model) layers) of the WirelessHART standard. It also explains our effort to validate the correctness of the implementation, namely the validation of both the WirelessHART protocol stack and the network manager. We use sniffed traffic from a real WirelessHART testbed installed in the Idrolab plant for these validations, which confirms the validity of our simulator. Empirical analysis shows that the simulated results closely match the results obtained from real networks. We also demonstrate the versatility and usability of our implementation by providing further evaluation results in diverse scenarios. For example, we evaluate the performance of the WirelessHART network by applying incremental interference in a multi-hop network. PMID:24841245
NASA Astrophysics Data System (ADS)
Wulandari, Winny; Purwasasmita, Mubiar; Sanwani, Edy; Pixelina, Adinda Asri; Maulidan, Agus
2017-01-01
This paper reports a study implementing a reverse flotation method to separate silica from West Kalimantan bauxite ores. The study aims to find favourable process conditions for obtaining low-silica bauxite as feed for the Bayer process. The experiments were carried out in a 1 L flotation cell tank. Dodecylamine was used as the collector, starch as the depressant, and MIBC as the frother. The varied parameters were solids content (15-30% w/w) and pH (6-10). XRF results for the products show that in all reverse flotation experiments the alumina-to-silica ratio (Al/Si) is increased from 7 up to 14. Increasing the solids percentage improves the Al/Si ratio as well as the alumina and silica recoveries in the concentrate; at 30% w/w solids, the Al/Si ratio increases to 14.38, with a silica recovery of 20%. Among the depressant dosages tested, good separation is obtained at 400 g/ton bauxite, giving an Al/Si ratio of 15 in the concentrate and 7 in the tailings. The best pH is 8, and the best collector dosage is 200 g/ton bauxite. XRD analysis of the feed indicates that the bauxite ore consists of gibbsite, diaspore, kaolinite, halloysite, quartz, boehmite, hematite and rutile. The concentrate contains similar minerals, but halloysite is reduced to a minor or trace phase.
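Yield and recovery figures like those quoted above can be derived from concentrate, tailings and feed assays with the standard two-product formulas of mineral processing. The sketch below is illustrative; the grades in the test are invented round numbers, not assays from the paper.

```python
def two_product(feed_grade, conc_grade, tail_grade):
    """Standard two-product formulas: mass yield to concentrate and
    recovery of the assayed component to the concentrate, computed
    from the three assays alone (grades as mass fractions)."""
    mass_yield = (feed_grade - tail_grade) / (conc_grade - tail_grade)
    recovery = mass_yield * conc_grade / feed_grade
    return mass_yield, recovery
```

For example, a silica feed grade of 10%, a concentrate (sink) grade of 4% and a tailings (float) grade of 25% give a mass yield of 5/7 and a silica recovery to the concentrate of 2/7, i.e. roughly the 20-30% range reported.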
Power estimation using simulations for air pollution time-series studies
2012-01-01
Background Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we use simulations to examine the impact of various factors affecting power, and compare simulation-based power estimates with those obtained using statistical software. Methods Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models.
Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
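The simulation recipe described (generate Poisson daily counts with a specified log-linear pollutant association, refit the model, and count rejections) can be sketched in a self-contained way. This is a minimal single-pollutant version without the time-trend and meteorology covariates of the actual study, and the parameter values in the usage are illustrative, not the Atlanta estimates.

```python
import math
import random

def fit_poisson(y, x, iters=12):
    """Newton-Raphson fit of log E[y] = b0 + b1*x (Poisson GLM);
    returns b1 and its standard error from the information matrix."""
    b0 = math.log(max(sum(y) / len(y), 1e-9))
    b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        s0 = sum(yi - mi for yi, mi in zip(y, mu))
        s1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        i00 = sum(mu)
        i01 = sum(m * xi for m, xi in zip(mu, x))
        i11 = sum(m * xi * xi for m, xi in zip(mu, x))
        det = i00 * i11 - i01 * i01
        b0 += (i11 * s0 - i01 * s1) / det
        b1 += (i00 * s1 - i01 * s0) / det
    return b1, math.sqrt(i00 / det)

def rpois(lam, rng):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def power(n_days, mean_count, log_rr, n_sims=200, seed=1):
    """Fraction of simulated series in which the two-sided 5% Wald
    test on the pollutant coefficient b1 rejects the null."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        x = [rng.gauss(0.0, 1.0) for _ in range(n_days)]  # standardised pollutant
        y = [rpois(mean_count * math.exp(log_rr * xi), rng) for xi in x]
        b1, se = fit_poisson(y, x)
        hits += abs(b1 / se) > 1.96
    return hits / n_sims
```

With a one-year series, 10 visits/day and a rate ratio of about 1.05 per standard deviation of the pollutant, the Wald z-statistic is around 3, so power should land near 0.85; under the null it should stay near the nominal 5%.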
[Analysis of the results of the SEIMC External Quality Control Program. Year 2011].
Ruiz de Gopegui Bordes, Enrique; Guna Serrano, M del Remedio; Orta Mira, Nieves; Ovies, María Rosario; Poveda, Marta; Gimeno Cardona, Concepción
2013-02-01
The External Quality Control Program of the Spanish Society of Infectious Diseases and Clinical Microbiology (Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica [SEIMC]) includes controls for bacteriology, serology, mycology, parasitology, mycobacteria, virology, and molecular microbiology. This article presents the most relevant conclusions and lessons from the 2011 controls. Overall, the results obtained in 2011 confirm the excellent skill and good technical standards found in previous years. Nevertheless, erroneous results can be obtained in any laboratory and in clinically relevant determinations. The results of this program highlight the need to implement both internal and external controls, such as those offered by the SEIMC program, in order to ensure maximal quality of microbiological tests. Copyright © 2013 Elsevier España, S.L. All rights reserved.
Quality Assurance of Samples and Processes in the Spanish Renal Research Network (REDinREN) Biobank.
Calleros-Basilio, Laura; Cortés, María Alicia; García-Jerez, Andrea; Luengo-Rodríguez, Alicia; Orozco-Agudo, Ana; Valdivielso, José Manuel; Rodríguez-Puyol, Diego; Rodríguez-Puyol, Manuel
2016-12-01
Biobanks are useful platforms to build bridges between basic, translational, and clinical research and clinical care. They are repositories of high-quality human biological samples ideal for evaluating their histological characteristics and also their genome, transcriptome, and proteome. The Spanish Renal Research Network Biobank contains more than 76,500 well-preserved frozen samples of a wide variety of kidney diseases, collected from 5450 patients seen by over 70 nephrology services throughout the Spanish territory. The objective was to determine and report the results of the quality control of samples and processes conducted in our biobank, implemented in accordance with the requirements of the ISO 9001:2008 international standard. Two types of quality controls were performed on a routine basis: (1) systematic, that is, measurement of the viable peripheral blood mononuclear cells (PBMCs) obtained and of the purity of nucleic acids, and (2) ad hoc, that is, viability of thawed PBMCs, DNA extraction process reproducibility, and the integrity and functionality of nucleic acids. PBMC isolation by Ficoll yielded reproducible results, and the viability of cryopreserved PBMCs was >90%. Acceptable A260/A280 ratios were obtained for the vast majority of the DNA (n = 2328) and RNA (n = 78) samples analyzed. DNA integrity was demonstrated on agarose gels and by β-globulin gene polymerase chain reaction (PCR) amplification of 1327 and 989 bp fragments; DNA of acceptable quality showed at least three amplified β-globulin bands (n = 26/30). RNA integrity number (RIN) determinations yielded values ≥7 (n = 87/96). The amplifiability of nucleic acids was confirmed by qPCR and RT-qPCR of the β-actin and GAPDH genes. Long storage or delayed processing time did not affect the quality of the samples analyzed. The DNA extraction processes also yielded reproducible results.
These results clearly indicate that our PBMC, DNA, and RNA stored samples meet the required quality standards to be used for biomedical research, ensuring their long-term preservation.
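The acceptance criteria reported (spectrophotometric purity for nucleic acids, RIN ≥ 7 for RNA) can be expressed as a simple per-sample check. The A260/A280 window used below is the commonly cited textbook range, an assumption rather than the biobank's own SOP limits.

```python
def passes_qc(sample):
    """Toy acceptance check mirroring the reported criteria:
    spectrophotometric purity (A260/A280) for nucleic acids and
    RIN >= 7 for RNA. The 1.7-2.1 window is the commonly used
    textbook range, not necessarily the biobank's SOP values."""
    ratio_ok = 1.7 <= sample["a260_a280"] <= 2.1
    if sample["type"] == "RNA":
        return ratio_ok and sample.get("rin", 0) >= 7
    return ratio_ok
```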
Professional development of Russian HEIs' management and faculty in CDIO standards application
NASA Astrophysics Data System (ADS)
Chuchalin, Alexander; Malmqvist, Johan; Tayurskaya, Marina
2016-07-01
The paper presents an approach to the comprehensive training of managers and faculty staff for the systemic modernisation of Russian engineering education. The CDIO (Conceive-Design-Implement-Operate) Approach was chosen as the methodological basis for designing and implementing the faculty development programme because its concept matches the purposes and tasks of engineering education development in Russia. The authors describe the structure, content and implementation technology of the programme designed by Tomsk Polytechnic University and the Skolkovo Institute of Science and Technology with the assistance of Chalmers University of Technology, KTH Royal Institute of Technology and other members of the CDIO Initiative. The programme evaluation, based on questionnaire results, showed that the programme content is relevant and has high practical value and a high level of novelty for all categories of participants. The CDIO approach was therefore recommended for implementation to improve various elements of the engineering programme, such as learning outcomes, content and structure, and teaching, learning and assessment methods. In addition, feedback obtained through the participants' survey helps to identify problems hindering the development of engineering education in Russia and thus serves as a milestone for further development of the programme.
Application of AWE for RCS Frequency Response Calculations Using Method of Moments
NASA Technical Reports Server (NTRS)
Reddy, C. J.; Deshpande, M. D.
1996-01-01
An implementation of the Asymptotic Waveform Evaluation (AWE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current, thus obtained, is expanded in a Taylor series around the frequency of interest. The coefficients of the Taylor series (called 'moments') are obtained using the frequency derivatives of the EFIE. Using the moments, the electric current on the PEC body is obtained over a frequency band. Using the electric current at different frequencies, RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. A good agreement between AWE and the exact solution over the bandwidth is observed.
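The moment-matching step of AWE can be illustrated on a toy parameterized linear system standing in for the EFIE impedance matrix: with Z(δ) = Z0 + δZ1 and a fixed excitation V, each Taylor coefficient (moment) of the current reuses the same factorization of Z0, which is the source of AWE's speed-up over solving at every frequency. The 2×2 system and linear frequency dependence below are assumptions for illustration, not the paper's EFIE matrices.

```python
def solve2(A, b):
    """Cramer's rule for a 2x2 system (stand-in for the MoM matrix solve)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def awe_moments(Z0, Z1, V, order):
    """Moments m_k of I(f0 + d) = sum_k m_k d^k for (Z0 + d*Z1) I = V.
    Only the single 'factorization' of Z0 (here, solve2) is reused for
    every moment: Z0 m_0 = V and Z0 m_k = -Z1 m_{k-1}."""
    moments = [solve2(Z0, V)]
    for _ in range(order):
        prev = moments[-1]
        rhs = [-(Z1[0][0] * prev[0] + Z1[0][1] * prev[1]),
               -(Z1[1][0] * prev[0] + Z1[1][1] * prev[1])]
        moments.append(solve2(Z0, rhs))
    return moments

def awe_eval(moments, d):
    """Evaluate the Taylor series of the current at frequency offset d."""
    return [sum(m[i] * d**k for k, m in enumerate(moments)) for i in range(2)]
```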
Perera, Ajith; Gauss, Jürgen; Verma, Prakash; Morales, Jorge A
2017-04-28
We present a parallel implementation to compute electron spin resonance g-tensors at the coupled-cluster singles and doubles (CCSD) level which employs the ACES III domain-specific software tools for scalable parallel programming, i.e., the super instruction architecture language and processor (SIAL and SIP), respectively. A unique feature of the present implementation is the exact (not approximated) inclusion of the five one- and two-particle contributions to the g-tensor [i.e., the mass correction, one- and two-particle paramagnetic spin-orbit, and one- and two-particle diamagnetic spin-orbit terms]. As in a previous implementation with effective one-electron operators [J. Gauss et al., J. Phys. Chem. A 113, 11541-11549 (2009)], our implementation utilizes analytic CC second derivatives and, therefore, classifies as a true CC linear-response treatment. As such, it can unambiguously appraise the accuracy of less costly effective one-particle schemes and provide a rationale for their widespread use. We have considered a large selection of radicals used previously for benchmarking purposes, including those studied in earlier work, and conclude that at the CCSD level the effective one-particle scheme satisfactorily captures the two-particle effects at lower cost than the rigorous two-particle scheme. With respect to the performance of density functional theory (DFT), we note that results obtained with the B3LYP functional exhibit the best agreement with our CCSD results. However, in general, the CCSD results agree better with the experimental data than the best DFT/B3LYP results, although in most cases within the rather large experimental error bars.
Dohn, A O; Jónsson, E Ö; Levi, G; Mortensen, J J; Lopez-Acevedo, O; Thygesen, K S; Jacobsen, K W; Ulstrup, J; Henriksen, N E; Møller, K B; Jónsson, H
2017-12-12
A multiscale density functional theory-quantum mechanics/molecular mechanics (DFT-QM/MM) scheme is presented, based on an efficient electrostatic coupling between the electronic density obtained from a grid-based projector augmented wave (GPAW) implementation of density functional theory and a classical potential energy function. The scheme is implemented in a general fashion and can be used with various choices for the descriptions of the QM or MM regions. Tests on H2O clusters ranging from dimer to decamer show that the coupling introduces no systematic energy errors exceeding the differences between the QM and MM descriptions. Over 1 ns of Born-Oppenheimer QM/MM molecular dynamics (MD) of liquid water is sampled by combining 10 parallel simulations, showing consistent liquid water structure across the QM/MM border. The method is applied in extensive parallel MD simulations of an aqueous solution of the diplatinum [Pt2(P2O5H2)4]4- complex (PtPOP), spanning a total time period of roughly half a nanosecond. An average Pt-Pt distance deviating only 0.01 Å from experimental results, and a ground-state Pt-Pt oscillation frequency deviating by <2% from experimental results, were obtained. The simulations highlight a remarkable harmonicity of the Pt-Pt oscillation, while also showing clear signs of Pt-H hydrogen bonding and directional coordination of water molecules along the Pt-Pt axis of the complex.
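The electrostatic coupling at the heart of such a QM/MM scheme can be caricatured as the Coulomb interaction between a grid-sampled electron density and MM point charges. This is a deliberately crude sketch in atomic units, not the PAW-aware coupling the paper implements, which handles the augmentation regions and short-range behaviour far more carefully.

```python
import math

def coupling_energy(grid_density, mm_charges):
    """Toy electrostatic QM/MM coupling in atomic units: a grid-sampled
    electron density interacting with MM point charges via Coulomb's law.
    grid_density holds (x, y, z, rho*dV) samples with rho the (positive)
    electron density; mm_charges holds (x, y, z, q). A crude stand-in for
    the paper's GPAW-based scheme."""
    energy = 0.0
    for gx, gy, gz, rho_dv in grid_density:
        for qx, qy, qz, q in mm_charges:
            r = math.dist((gx, gy, gz), (qx, qy, qz))
            energy += -rho_dv * q / r  # electrons carry charge -1
    return energy
```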
An infinite-order two-component relativistic Hamiltonian by a simple one-step transformation.
Ilias, Miroslav; Saue, Trond
2007-02-14
The authors report the implementation of a simple one-step method for obtaining an infinite-order two-component (IOTC) relativistic Hamiltonian using matrix algebra. They apply the IOTC Hamiltonian to calculations of excitation and ionization energies as well as electric and magnetic properties of the radon atom. The results are compared to corresponding calculations using identical basis sets and based on the four-component Dirac-Coulomb Hamiltonian as well as Douglas-Kroll-Hess and zeroth-order regular approximation Hamiltonians, all implemented in the DIRAC program package, thus allowing a comprehensive comparison of relativistic Hamiltonians within the finite basis approximation.
Microreactor Cells for High-Throughput X-ray Absorption Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beesley, Angela; Tsapatsaris, Nikolaos; Weiher, Norbert
2007-01-19
High-throughput experimentation has been applied to X-ray absorption spectroscopy (XAS) as a novel route for increasing research productivity in the catalysis community. Suitable instrumentation has been developed for the rapid determination of the local structure of the metal component of precursors for supported catalysts. An automated analytical workflow was implemented that is much faster than traditional individual spectrum analysis and allows the generation of structural data in quasi-real time. We describe initial results obtained from the automated high-throughput (HT) data reduction and analysis of a sample library implemented through the 96-well-plate industry standard. The results show that a fully automated HT-XAS technology based on existing industry standards is feasible and useful for the rapid elucidation of the geometric and electronic structure of materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartman, J.S.; Gordon, R.L.; Lessor, D.L.
1980-09-01
The application of reflective Nomarski differential interference contrast microscopy for the determination of quantitative sample topography data is presented. The discussion includes a review of key theoretical results presented previously, plus the experimental implementation of the concepts using a commercial Nomarski microscope. The experimental work included the modification and characterization of a commercial microscope to allow its use for obtaining quantitative sample topography data. System usage for the measurement of slopes on flat planar samples is also discussed. The discussion is designed to provide the theoretical basis, physical insight, and a cookbook procedure for implementation, so that these results will be of value both to those interested in the microscope theory and to those interested in its practical usage in the metallography laboratory.
Matrix multiplication on the Intel Touchstone Delta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huss-Lederman, S.; Jacobson, E.M.; Tsao, A.
1993-12-31
Matrix multiplication is a key primitive in block matrix algorithms such as those found in LAPACK. We present results from our study of matrix multiplication algorithms on the Intel Touchstone Delta, a distributed-memory message-passing architecture with a two-dimensional mesh topology. We obtain an implementation that uses communication primitives highly suited to the Delta and exploits the single-node assembly-coded matrix multiplication. Our algorithm is completely general, able to deal with arbitrary mesh aspect ratios and matrix dimensions, and has achieved parallel efficiency of 86%, with overall peak performance in excess of 8 Gflops on 256 nodes for an 8800 × 8800 matrix. We describe our algorithm design and implementation, and present performance results that demonstrate scalability and robust behavior over varying mesh topologies.
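The shift-multiply-accumulate pattern of mesh matrix multiplication can be illustrated with Cannon's algorithm on a simulated square mesh. The paper's Delta implementation is more general (arbitrary mesh aspect ratios and matrix dimensions, with tuned communication primitives), so this is a simplified cousin of it, not a reconstruction.

```python
def cannon_multiply(A, B, p):
    """Cannon's algorithm on a simulated p x p mesh: each 'node' (i, j)
    holds one block; after an initial skew, blocks of A shift left and
    blocks of B shift up, with a local block multiply-accumulate at each
    of the p steps. Requires p to divide the matrix order."""
    n = len(A)
    nb = n // p  # block size

    def blk(M, i, j):
        return [row[j * nb:(j + 1) * nb] for row in M[i * nb:(i + 1) * nb]]

    # Initial skew: row i of A rotated left by i, column j of B rotated up by j.
    a = [[blk(A, i, (j + i) % p) for j in range(p)] for i in range(p)]
    b = [[blk(B, (i + j) % p, j) for j in range(p)] for i in range(p)]
    c = [[[[0] * nb for _ in range(nb)] for _ in range(p)] for _ in range(p)]
    for _ in range(p):
        for i in range(p):
            for j in range(p):
                ab, bb, cb = a[i][j], b[i][j], c[i][j]
                for r in range(nb):  # local block multiply-accumulate
                    for k in range(nb):
                        ark = ab[r][k]
                        for s in range(nb):
                            cb[r][s] += ark * bb[k][s]
        a = [[a[i][(j + 1) % p] for j in range(p)] for i in range(p)]  # shift left
        b = [[b[(i + 1) % p][j] for j in range(p)] for i in range(p)]  # shift up
    # Gather the distributed blocks back into a full matrix.
    return [sum((c[i][j][r] for j in range(p)), [])
            for i in range(p) for r in range(nb)]
```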
Nansel, Tonja R.; Huang, Terry T.K.; Rovner, Alisha J.; Sanders-Butler, Yvonne
2009-01-01
Objective: The purpose of this analysis was to examine secular trends in school performance indicators in relationship to the implementation of a program targeting the school food and physical activity environment. Design: Data on available school performance indicators were obtained; retrospective analyses were conducted to assess trends in indicators in association with program implementation; each outcome was regressed on year, beginning with the year prior to program implementation. Setting: The Healthy Kids, Smart Kids program was a grass-roots effort to enhance the school food and physical activity environment in the Browns Mill Elementary School in Georgia. Subjects: Data included publicly available school records from the years 1995 to 2006. Results: The number of nurse, counseling, and disciplinary referrals per 100 students demonstrated a downward trend, while standardized test scores demonstrated an upward trend beginning the year of program implementation. School year was a significant predictor of all indicators. Conclusions: Promoting nutrition and physical activity within the school environment may be a promising approach for enhancing both student health and educational outcomes. PMID:19454125
Comparative homology agreement search: An effective combination of homology-search methods
Alam, Intikhab; Dress, Andreas; Rehmsmeier, Marc; Fuellen, Georg
2004-01-01
Many methods have been developed to search for homologous members of a protein family in databases, and the reliability of results and conclusions may be compromised if only one method is used, neglecting the others. Here we introduce a general scheme for combining such methods. Based on this scheme, we implemented a tool called comparative homology agreement search (chase) that integrates different search strategies to obtain a combined “E value.” Our results show that a consensus method integrating distinct strategies easily outperforms any of its component algorithms. More specifically, an evaluation based on the Structural Classification of Proteins database reveals that, on average, a coverage of 47% can be obtained in searches for distantly related homologues (i.e., members of the same superfamily but not the same family, which is a very difficult task), accepting only 10 false positives, whereas the individual methods obtain a coverage of 28–38%. PMID:15367730
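The abstract does not specify how chase fuses the individual E values into a combined one, so the sketch below shows one standard way to combine evidence from several search methods: Fisher's method applied to per-method p-values (for small E values, E ≈ p). This is purely illustrative and should not be read as chase's actual combination scheme.

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: under the null, -2 * sum(ln p_i) follows a
    chi-square distribution with 2k degrees of freedom. For even
    degrees of freedom the survival function has a closed form,
    so no external stats library is needed."""
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    half = stat / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
```

Two methods that each report p = 0.05 combine to a noticeably smaller p-value, while two uninformative p = 0.5 results combine to something near 0.5, which is the agreement-rewarding behaviour a consensus search wants.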
Using Tutte polynomials to analyze the structure of the benzodiazepines
NASA Astrophysics Data System (ADS)
Cadavid Muñoz, Juan José
2014-05-01
Graph theory in general, and Tutte polynomials in particular, are used to analyze the chemical structure of the benzodiazepines. Similarity analyses based on the Tutte polynomials are used to find other molecules that are similar to the benzodiazepines and therefore might show similar psychoactive actions for medical purposes, in order to avoid the drawbacks associated with benzodiazepine-based medicines. For each type of benzodiazepine, the Tutte polynomial is computed and some numeric characteristics are obtained, such as the number of spanning trees and the number of spanning forests. Computations are done using Maple's GraphTheory computer algebra package. The analytical results obtained are of great importance in pharmaceutical engineering. As a future research line, the computational chemistry program Spartan will be used to extend these results and compare them with those obtained from the Tutte polynomials of the benzodiazepines.
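The quantities mentioned, spanning trees T(1,1) and spanning forests T(2,1), both fall out of the deletion-contraction recurrence for the Tutte polynomial. A minimal sketch for small multigraphs follows; it is exponential in the number of edges, so it only illustrates the recurrence that packages such as Maple's GraphTheory implement far more efficiently.

```python
def tutte(edges):
    """Tutte polynomial of a connected multigraph given as an edge list,
    via deletion-contraction: T = y*T(G-e) for a loop, x*T(G/e) for a
    bridge, and T(G-e) + T(G/e) otherwise. Returns a dict mapping
    (i, j) -> coefficient of x^i * y^j."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:  # loop
        return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
    # Bridge test: is v still reachable from u without the first edge?
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for a, b in rest:
            if a == w and b not in seen:
                seen.add(b); stack.append(b)
            elif b == w and a not in seen:
                seen.add(a); stack.append(a)
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if v not in seen:  # bridge
        return {(i + 1, j): c for (i, j), c in tutte(contracted).items()}
    poly = tutte(rest)  # ordinary edge: delete + contract
    for key, c in tutte(contracted).items():
        poly[key] = poly.get(key, 0) + c
    return poly

def tutte_eval(poly, x, y):
    """Evaluate the polynomial; (1,1) counts spanning trees, (2,1)
    counts spanning forests."""
    return sum(c * x**i * y**j for (i, j), c in poly.items())
```

For the triangle graph the recurrence gives T(x, y) = x² + x + y, hence 3 spanning trees and 7 spanning forests.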
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fusella, M; Loi, G; Fiandra, C
Purpose: To investigate the accuracy and robustness, against the image noise and artifacts typical of CBCT images, of a commercial algorithm for deformable image registration (DIR) used to propagate regions of interest (ROIs) in computational phantoms based on real prostate patient images. Methods: The Anaconda DIR algorithm, as implemented in RayStation, was tested. Two specific deformation vector fields (DVFs) were applied to the reference data set (CTref) using the ImSimQA software, obtaining two deformed CTs. For each dataset, twenty-four different levels of noise and/or capping artifacts were applied to simulate CBCT images. DIR was performed between CTref and each of the deformed CTs and CBCTs. In order to investigate the relationship between image quality parameters and the DIR results (expressed by a logit transform of the Dice index), a bilinear regression was defined. Results: More than 550 DIR-mapped ROIs were analyzed. The statistical analysis showed that deformation strength and artifacts were significant prognostic factors of DIR performance, while noise appeared to play a minor role in the DIR process as implemented in RayStation, as expected from the image similarity metric built into the registration algorithm. Capping artifacts played a decisive role in the accuracy of the DIR results. Two threshold values of capping artifact severity were found for obtaining acceptable DIR results (Dice > 0.75/0.85). Various clinical CBCT acquisition protocols were examined to evaluate the significance of the study. Conclusion: This work illustrates the impact of image quality on DIR performance. Clinical applications such as adaptive radiation therapy (ART) and dose accumulation need accurate and robust DIR software. The RayStation DIR algorithm proved robust against noise but sensitive to image artifacts.
This result highlights the need for quality assurance of robustness against image noise and artifacts in the commissioning of a commercial DIR system, and underlines the importance of adopting optimized protocols for CBCT image acquisition in clinical ART implementation.
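The response variable used in the regression, a logit transform of the Dice index, can be computed directly from voxelized ROI masks; the sketch below uses sets of voxel coordinates and an endpoint guard (`eps`) that is an implementation convenience, not a detail from the study.

```python
import math

def dice(mask_a, mask_b):
    """Dice similarity index between two voxelized ROI masks,
    given as sets of voxel coordinates."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

def logit_dice(d, eps=1e-6):
    """Logit transform of a Dice index, mapping the bounded [0, 1]
    score onto the real line for use as a regression response;
    eps guards the 0 and 1 endpoints."""
    d = min(max(d, eps), 1.0 - eps)
    return math.log(d / (1.0 - d))
```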
Gregori, Josep; Villarreal, Laura; Sánchez, Alex; Baselga, José; Villanueva, Josep
2013-12-16
The microarray community has shown that the low reproducibility observed in gene expression-based biomarker discovery studies is partially due to relying solely on p-values to get the lists of differentially expressed genes. Their conclusions recommended complementing the p-value cutoff with the use of effect-size criteria. The aim of this work was to evaluate the influence of such an effect-size filter on spectral counting-based comparative proteomic analysis. The results proved that the filter increased the number of true positives and decreased the number of false positives and the false discovery rate of the dataset. These results were confirmed by simulation experiments where the effect size filter was used to evaluate systematically variable fractions of differentially expressed proteins. Our results suggest that relaxing the p-value cut-off followed by a post-test filter based on effect size and signal level thresholds can increase the reproducibility of statistical results obtained in comparative proteomic analysis. Based on our work, we recommend using a filter consisting of a minimum absolute log2 fold change of 0.8 and a minimum signal of 2-4 SpC on the most abundant condition for the general practice of comparative proteomics. The implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of the results obtained among independent laboratories and MS platforms. Quality control analysis of microarray-based gene expression studies pointed out that the low reproducibility observed in the lists of differentially expressed genes could be partially attributed to the fact that these lists are generated relying solely on p-values. Our study has established that the implementation of an effect size post-test filter improves the statistical results of spectral count-based quantitative proteomics. 
The results proved that the filter increased the number of true positives while decreasing the number of false positives and the false discovery rate of the datasets. The results presented here prove that a post-test filter applying reasonable effect-size and signal-level thresholds helps to increase the reproducibility of statistical results in comparative proteomic analysis. Furthermore, the implementation of feature-filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of results obtained among independent laboratories and MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
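The recommended post-test filter (a p-value cut-off plus |log2 fold change| ≥ 0.8 and at least ~2 SpC in the more abundant condition) can be written as a per-protein predicate. The pseudocount used below to avoid log2(0) on sparse spectral counts is an assumption of this sketch, not a detail from the paper.

```python
import math

def passes_filter(spc_a, spc_b, p_value,
                  p_cut=0.05, min_abs_lfc=0.8, min_spc=2.0):
    """Post-test filter in the spirit of the paper's recommendation:
    keep a protein only if it (1) passes the p-value cut-off,
    (2) shows |log2 fold change| >= 0.8 between conditions, and
    (3) has at least ~2 SpC in the more abundant condition.
    spc_a/spc_b are mean spectral counts per condition; the 0.5
    pseudocount avoiding log2(0) is an assumption of this sketch."""
    lfc = math.log2((spc_a + 0.5) / (spc_b + 0.5))
    return (p_value <= p_cut
            and abs(lfc) >= min_abs_lfc
            and max(spc_a, spc_b) >= min_spc)
```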
Photocatalytic treatment of bioaerosols: impact of the reactor design.
Josset, Sébastien; Taranto, Jérôme; Keller, Nicolas; Keller, Valérie; Lett, Marie-Claire
2010-04-01
A comparison of the UV-A photocatalytic treatment of bioaerosols contaminated with different airborne microorganisms, such as L. pneumophila bacteria, T2 bacteriophage viruses and B. atrophaeus bacterial spores, revealed a decontamination sensitivity following the bacteria > virus > bacterial spore ranking order, differing from that obtained for liquid-phase or surface UV-A photocatalytic disinfection. A first-principles CFD investigation applied to a model annular photoreactor showed that the larger the microorganism, the higher its probability of hitting the photocatalytic surfaces. Applied to a commercial photocatalytic purifier case study, the CFD calculations showed that the performance of the studied purifier could benefit strongly from rational reactor design engineering. The results highlight the necessity of specifically investigating the removal of airborne microorganisms in terms of reactor design, rather than simply transposing results obtained from studies of chemical pollutants, especially for a successful commercial implementation of air decontamination photoreactors. This illustrates the importance of aerodynamics in air decontamination, which results directly from the microorganism morphology.
Are We Using Abdominal Radiographs Appropriately in the Management of Pediatric Constipation?
Beinvogl, Beate; Sabharwal, Sabina; McSweeney, Maireade; Nurko, Samuel
2017-12-01
To identify the reasons why pediatric gastroenterologists obtain abdominal radiographs in the management of pediatric constipation. This was a prospective study surveying providers regarding their rationale, interpretation, resultant change, and confidence in their management before and after obtaining KUB (kidney-ureter-bladder) radiographs in patients seen for suspected constipation. Demographics and clinical findings were obtained from medical records. A total of 24 providers were surveyed after 72 patient encounters. Reasons for obtaining an abdominal radiograph included evaluation of stool burden (70%), need for a clean-out (35%), fecal impaction (27%), cause of abdominal pain (24%), demonstration of stool burden to families (14%), assessment of response to therapy (13%), and encopresis (10%). The plan was changed in 47.6% of cases based on radiographic findings. In cases in which a plan was outlined before obtaining the radiograph (69%), the initial plan was implemented in 52.5% of cases on average. In cases with no plan before the radiograph, previously unconsidered plans were implemented in 8.7% of cases. Provider confidence in the management plan increased from 2.4 ± 2.7 to 4.1 ± 1.8 (P < .05) after the abdominal radiograph. Abdominal radiographs are commonly obtained by pediatric gastroenterologists in the evaluation and management of constipation. The majority used them to make a diagnosis, and nearly one-half changed their management based on the imaging findings. Overall, providers reported improved confidence in their management plan, despite evidence that radiographic findings correlate poorly with clinical severity. This study highlights the need for further provider education regarding the recommendations delineated in existing constipation guidelines. Copyright © 2017 Elsevier Inc. All rights reserved.
PG4KDS: A Model for the Clinical Implementation of Pre-emptive Pharmacogenetics
Hoffman, James M.; Haidar, Cyrine E.; Wilkinson, Mark R.; Crews, Kristine R.; Baker, Donald K.; Kornegay, Nancy M.; Yang, Wenjian; Pui, Ching-Hon; Reiss, Ulrike M.; Gaur, Aditya H.; Howard, Scott C.; Evans, William E.; Broeckel, Ulrich; Relling, Mary V.
2014-01-01
Pharmacogenetics is frequently cited as an area for initial focus of the clinical implementation of genomics. Through the PG4KDS protocol, St. Jude Children’s Research Hospital pre-emptively genotypes patients for 230 genes using the Affymetrix Drug Metabolizing Enzymes and Transporters (DMET) Plus array supplemented with a CYP2D6 copy number assay. The PG4KDS protocol provides a rational, stepwise process for implementing gene/drug pairs, organizing data, and obtaining consent from patients and families. Through August 2013, 1559 patients have been enrolled, and 4 gene tests have been released into the electronic health record (EHR) for clinical implementation: TPMT, CYP2D6, SLCO1B1, and CYP2C19. These genes are coupled to 12 high-risk drugs. Of the 1016 patients with genotype test results available, 78% of them had at least one high-risk (i.e., actionable) genotype result placed in their EHR. Each diplotype result released to the EHR is coupled with an interpretive consult that is created in a concise, standardized format. To support gene-based prescribing at the point of care, 55 interruptive clinical decision support (CDS) alerts were developed. Patients are informed of their genotyping result and its relevance to their medication use through a letter. Key elements necessary for our successful implementation have included strong institutional support, a knowledgeable clinical laboratory, a process to manage any incidental findings, a strategy to educate clinicians and patients, a process to return results, and extensive use of informatics, especially CDS. Our approach to pre-emptive clinical pharmacogenetics has proven feasible, clinically useful, and scalable. PMID:24619595
PG4KDS: a model for the clinical implementation of pre-emptive pharmacogenetics.
Hoffman, James M; Haidar, Cyrine E; Wilkinson, Mark R; Crews, Kristine R; Baker, Donald K; Kornegay, Nancy M; Yang, Wenjian; Pui, Ching-Hon; Reiss, Ulrike M; Gaur, Aditya H; Howard, Scott C; Evans, William E; Broeckel, Ulrich; Relling, Mary V
2014-03-01
Pharmacogenetics is frequently cited as an area for initial focus of the clinical implementation of genomics. Through the PG4KDS protocol, St. Jude Children's Research Hospital pre-emptively genotypes patients for 230 genes using the Affymetrix Drug Metabolizing Enzymes and Transporters (DMET) Plus array supplemented with a CYP2D6 copy number assay. The PG4KDS protocol provides a rational, stepwise process for implementing gene/drug pairs, organizing data, and obtaining consent from patients and families. Through August 2013, 1,559 patients have been enrolled, and four gene tests have been released into the electronic health record (EHR) for clinical implementation: TPMT, CYP2D6, SLCO1B1, and CYP2C19. These genes are coupled to 12 high-risk drugs. Of the 1,016 patients with genotype test results available, 78% of them had at least one high-risk (i.e., actionable) genotype result placed in their EHR. Each diplotype result released to the EHR is coupled with an interpretive consult that is created in a concise, standardized format. To support gene-based prescribing at the point of care, 55 interruptive clinical decision support (CDS) alerts were developed. Patients are informed of their genotyping result and its relevance to their medication use through a letter. Key elements necessary for our successful implementation have included strong institutional support, a knowledgeable clinical laboratory, a process to manage any incidental findings, a strategy to educate clinicians and patients, a process to return results, and extensive use of informatics, especially CDS. Our approach to pre-emptive clinical pharmacogenetics has proven feasible, clinically useful, and scalable. © 2014 Wiley Periodicals, Inc.
Changing personnel behavior to promote quality care practices in an intensive care unit
Cooper, Dominic; Farmery, Keith; Johnson, Martin; Harper, Christine; Clarke, Fiona L; Holton, Phillip; Wilson, Susan; Rayson, Paul; Bence, Hugh
2005-01-01
The delivery of safe high quality patient care is a major issue in clinical settings. However, the implementation of evidence-based practice and educational interventions are not always effective at improving performance. A staff-led behavioral management process was implemented in a large single-site acute (secondary and tertiary) hospital in the North of England for 26 weeks. A quasi-experimental, repeated-measures, within-groups design was used. Measurement focused on quality care behaviors (ie, documentation, charting, hand washing). The results demonstrate the efficacy of a staff-led behavioral management approach for improving quality-care practices. Significant behavioral change (F [6, 19] = 5.37, p < 0.01) was observed. Correspondingly, statistically significant (t-test [t] = 3.49, df = 25, p < 0.01) reductions in methicillin-resistant Staphylococcus aureus (MRSA) were obtained. Discussion focuses on implementation issues. PMID:18360574
Approaching mathematical model of the immune network based DNA Strand Displacement system.
Mardian, Rizki; Sekiyama, Kosuke; Fukuda, Toshio
2013-12-01
One of the biggest obstacles in molecular programming is that there is still no direct method to compile an existing mathematical model into biochemical reactions in order to solve a computational problem. In this paper, the implementation of a DNA Strand Displacement system based on nature-inspired computation is examined. Using the Immune Network Theory and Chemical Reaction Networks, the compilation of DNA-based operations is defined and the formulation of its mathematical model is derived. Furthermore, the implementation of this system is compared with a conventional implementation using silicon-based programming. The results obtained show a positive correlation between the two. One possible application of this DNA-based model is a decision-making scheme for an intelligent computer or molecular robot. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
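The compilation step described above, from a chemical reaction network into ordinary differential equations, can be illustrated with a minimal sketch. The reaction, rate constant, and initial concentrations below are hypothetical, chosen only to show mass-action compilation and numerical integration; they are not the immune-network model of the paper.

```python
# Sketch: compiling a chemical reaction network (CRN) into mass-action ODEs.
# The single reaction A + B -> C with rate constant k is illustrative only.

def simulate_crn(k=1.0, a0=1.0, b0=0.8, c0=0.0, dt=1e-3, steps=5000):
    """Forward-Euler integration of d[A]/dt = d[B]/dt = -k[A][B], d[C]/dt = +k[A][B]."""
    a, b, c = a0, b0, c0
    for _ in range(steps):
        rate = k * a * b          # mass-action rate of A + B -> C
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return a, b, c

a, b, c = simulate_crn()
# Mass conservation: A + C and B + C stay at their initial totals.
assert abs((a + c) - 1.0) < 1e-9
assert abs((b + c) - 0.8) < 1e-9
```

The same mechanical translation (one ODE term per reaction, signed by stoichiometry) is what allows a CRN, and hence a strand-displacement system, to emulate a given dynamical model.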
Force and Moment Approach for Achievable Dynamics Using Nonlinear Dynamic Inversion
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.; Bacon, Barton J.
1999-01-01
This paper describes a general form of nonlinear dynamic inversion control for use in a generic nonlinear simulation to evaluate candidate augmented aircraft dynamics. The implementation is specifically tailored to the task of quickly assessing an aircraft's control power requirements and defining the achievable dynamic set. The achievable set is evaluated while undergoing complex mission maneuvers, and perfect tracking will be accomplished when the desired dynamics are achievable. Variables are extracted directly from the simulation model each iteration, so robustness is not an issue. Included in this paper is a description of the implementation of the forces and moments from simulation variables, the calculation of control effectiveness coefficients, methods for implementing different types of aerodynamic and thrust vectoring controls, adjustments for control effector failures, and the allocation approach used. A few examples illustrate the perfect tracking results obtained.
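As a rough illustration of the inversion idea, not the paper's aircraft implementation: for a toy affine system x_dot = f(x) + B u, dynamic inversion picks the control u so the state derivative matches a desired dynamic nu, and a pseudo-inverse handles allocation when effectors are redundant. The plant f and effectiveness matrix B below are invented for the sketch.

```python
import numpy as np

# Hypothetical plant: x_dot = f(x) + B u
def f(x):
    return np.array([x[1], -0.5 * x[0]])   # assumed uncontrolled dynamics

B = np.array([[0.0, 0.0],
              [1.0, 0.5]])                 # assumed control effectiveness

def inversion_control(x, nu):
    """u = B^+ (nu - f(x)); the pseudo-inverse gives a minimum-norm allocation."""
    return np.linalg.pinv(B) @ (nu - f(x))

x = np.array([1.0, 0.0])
nu = np.array([0.0, -2.0])                 # desired x_dot (achievable here)
u = inversion_control(x, nu)
assert np.allclose(f(x) + B @ u, nu)       # perfect tracking when nu is achievable
```

When the desired dynamic nu falls outside the range of B, the same formula returns the least-squares best effort, which is the sense in which control power limits define the achievable set.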
NASA Astrophysics Data System (ADS)
Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao
A multi-agent object attention system is proposed, based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor (TOMBO). Robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed, reducing the enormous computational costs and memory accesses required for depth map processing and the multi-agent attractor selection process. According to the FPGA implementation results of the proposed object attention system, which occupies 7,063 slices, 640×512-pixel input images can be processed in real time with three agents at a rate of 9 fps at 48 MHz operation.
Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco
2016-01-01
Purpose: To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods: The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of the observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results: The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson’s trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion: The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
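The permutation-test idea described above, resampling without replacement to build the sampling distribution of an estimator, can be sketched as follows; the group values are hypothetical stand-ins for component-area measurements, not data from the study.

```python
import numpy as np

def permutation_test(x, y, n_perm=10000, seed=0):
    """Two-sample permutation test on the absolute difference of means,
    resampling group labels without replacement."""
    rng = np.random.default_rng(seed)
    observed = abs(np.mean(x) - np.mean(y))
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(np.mean(pooled[:len(x)]) - np.mean(pooled[len(x):]))
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction avoids p = 0

# Hypothetical area fractions (%) for two small groups of vessel samples.
group_a = np.array([52.1, 48.3, 50.9, 49.5, 51.2])
group_b = np.array([40.2, 42.8, 39.5, 41.1, 43.0])
p = permutation_test(group_a, group_b)
```

Because the null distribution is built from the data themselves, no normality assumption is needed, which is why the method suits small sample sizes.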
Implementation of space satellite remote sensing programs in developing countries (Ecuador)
NASA Technical Reports Server (NTRS)
Segovia, A.
1982-01-01
The current state of space satellite remote sensing programs in developing countries is discussed. Sensors being utilized and results obtained are described. Requirements are presented for the research of resources in developing countries. It is recommended that a work procedure be developed for the use of satellite remote sensing data tailored to the necessities of the different countries.
Supporting Faculty Efforts to Obtain Research Funding: Successful Practices and Lessons Learned
ERIC Educational Resources Information Center
Reiser, Robert A.; Moore, Alison L.; Bradley, Terra W.; Walker, Reddick; Zhao, Weinan
2015-01-01
Faculty members face increasing pressure to secure external research funding, and as a result, there is a critical need for professional development in this area. This paper describes a series of tools and services that have been designed and implemented by a College of Education Office of Research at a southeastern university in order to help…
ERIC Educational Resources Information Center
Lofstrom, Erika; Nevgi, Anne
2007-01-01
This paper reports the results of a study on strategic planning and implementation of information and communication technology (ICT) in teaching and describes the level of quality awareness in web-based teaching at the University of Helsinki. Questionnaire survey data obtained from deans and institutional leaders, ICT support staff, teachers and…
Approximated analytical solution to an Ebola optimal control problem
NASA Astrophysics Data System (ADS)
Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.
2016-11-01
An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.
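A minimal analogue of this workflow can be shown with the open-source computer algebra system SymPy rather than Maple: derive the Euler-Lagrange equation of a simple quadratic functional and solve it symbolically. The functional below is illustrative, not the Ebola model of the paper.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
x = sp.Function('x')

# Hypothetical cost functional J = integral of (x'^2 + x^2) dt.
# Its Euler-Lagrange equation is x'' - x = 0.
L = sp.diff(x(t), t)**2 + x(t)**2
el = euler_equations(L, [x(t)], [t])[0]   # symbolic Euler-Lagrange equation
sol = sp.dsolve(el, x(t))                 # general solution C1*exp(-t) + C2*exp(t)
```

The same derive-then-solve pattern, applied to the Hamiltonian of the optimal control problem, is what yields the paper's first-order analytical approximation.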
ERIC Educational Resources Information Center
Belanger, Kenneth D.
2009-01-01
Inquiry-driven lab exercises require students to think carefully about a question, carry out an investigation of that question, and critically analyze the results of their investigation. Here, we describe the implementation and assessment of an inquiry-based laboratory exercise in which students obtain and analyze novel data that contribute to our…
Dark optical lattice of ring traps for cold atoms
NASA Astrophysics Data System (ADS)
Courtade, Emmanuel; Houde, Olivier; Clément, Jean-François; Verkerk, Philippe; Hennequin, Daniel
2006-09-01
We propose an optical lattice for cold atoms made of a one-dimensional stack of dark ring traps. It is obtained through the interference pattern of a standard Gaussian beam with a counterpropagating hollow beam obtained using a setup with two conical lenses. The traps of the resulting lattice are characterized by a high confinement and a filling rate much larger than unity, even if loaded with cold atoms from a magneto-optical trap. We have implemented this system experimentally, and demonstrated its feasibility. Applications in statistical physics, quantum computing, and Bose-Einstein condensate dynamics are conceivable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goumiri, I. R.; Rowley, C. W.; Sabbagh, S. A.
In this study, a model-based feedback system is presented enabling the simultaneous control of the stored energy, through βN, and the toroidal rotation profile of the plasma in the National Spherical Torus eXperiment Upgrade device. Actuation is obtained using the momentum from six injected neutral beams and the neoclassical toroidal viscosity generated by applying three-dimensional magnetic fields. Based on a model of the momentum diffusion and torque balance, a feedback controller is designed and tested in closed-loop simulations using TRANSP, a time-dependent transport analysis code, in predictive mode. Promising results for the ongoing experimental implementation of the controllers are obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longhurst, G.R.
This paper presents a method for obtaining electron energy density functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression of orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for the most general, nonequilibrium electron energy distribution plasmas. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.
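The noise-filtering step, least-squares regression on orthogonal polynomials, can be sketched with NumPy's Legendre routines; the probe characteristic below is synthetic and only stands in for real Langmuir-probe data.

```python
import numpy as np

# Synthetic, illustrative "probe characteristic": a smooth curve plus noise.
rng = np.random.default_rng(1)
v = np.linspace(-1.0, 1.0, 200)                    # normalized probe voltage
clean = np.exp(v)                                  # assumed smooth I-V shape
noisy = clean + rng.normal(0.0, 0.05, v.size)

# Fit a low-degree Legendre series; orthogonal polynomials keep the
# normal equations well conditioned compared with raw monomials.
coeffs = np.polynomial.legendre.legfit(v, noisy, deg=6)
smoothed = np.polynomial.legendre.legval(v, coeffs)

rms_noise = np.sqrt(np.mean((noisy - clean) ** 2))
rms_resid = np.sqrt(np.mean((smoothed - clean) ** 2))
assert rms_resid < rms_noise    # regression removes most of the noise
```

On the original hardware the same least-squares fit was evaluated by hand-programmed recurrences; the numerical idea, projecting noisy data onto a small orthogonal basis, is identical.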
Qualitative analysis of pure and adulterated canola oil via SIMCA
NASA Astrophysics Data System (ADS)
Basri, Katrul Nadia; Khir, Mohd Fared Abdul; Rani, Rozina Abdul; Sharif, Zaiton; Rusop, M.; Zoolfakar, Ahmad Sabirin
2018-05-01
This paper demonstrates the use of near infrared (NIR) spectroscopy to classify pure and adulterated samples of canola oil. The Soft Independent Modeling of Class Analogies (SIMCA) algorithm was implemented to assign the samples to their classes. The spectral data obtained were divided into training and validation datasets using the Kennard-Stone algorithm with a fixed ratio of 7:3. The accuracy of the model built is 0.99, and the sensitivity and precision are 0.92 and 1.00, respectively. The results show that the classification model is robust enough to perform qualitative analysis of canola oil in future applications.
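The Kennard-Stone split used above can be sketched as follows; the "spectra" here are random 2-D points, used only to show the 7:3 selection logic.

```python
import numpy as np

def kennard_stone(X, n_train):
    """Kennard-Stone: start from the two most distant samples, then
    repeatedly add the sample farthest from the already-chosen set."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    chosen = [i, j]
    remaining = [k for k in range(len(X)) if k not in chosen]
    while len(chosen) < n_train:
        # distance of each remaining point to its nearest chosen point
        nearest = d[np.ix_(remaining, chosen)].min(axis=1)
        pick = remaining[int(np.argmax(nearest))]
        chosen.append(pick)
        remaining.remove(pick)
    return chosen, remaining

# Hypothetical spectra reduced to 2-D scores; 7:3 train/validation ratio.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
train, valid = kennard_stone(X, n_train=14)
assert len(train) == 14 and len(valid) == 6
```

The deterministic, coverage-maximizing selection is why Kennard-Stone is preferred over a random split for small spectral datasets.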
MEGNO maps for Saturn's irregular satellites
NASA Astrophysics Data System (ADS)
Moyano, M. M.; Leiva, A. M.
By implementing the elliptic restricted three-body model we obtain high resolution dynamical maps in the phase space region corresponding to that where Saturn's irregular satellites are currently found. The nature of the trajectories is characterized by the MEGNO chaos indicator (Cincotta P. and Simó C., 2000), which allows one to identify regions of chaotic and quasi-periodic trajectories much faster than with other indicators (e.g. Lyapunov exponents). The results obtained allow one to identify in great detail the boundaries of the regions of regular motion, chaotic motion, and substructures associated with mean motion resonances. FULL TEXT IN SPANISH
Dust-concentration measurement based on Mie scattering of a laser beam
Yu, Xiaoyu; Shi, Yunbo; Wang, Tian; Sun, Xu
2017-01-01
To realize automatic measurement of the concentration of dust particles in the air, a theory for dust concentration measurement was developed, and a system was designed to implement the dust concentration measurement method based on laser scattering. In the study, the principle of dust concentration detection using laser scattering is studied, and the detection basis of Mie scattering theory is determined. Through simulation, the influence of the incident laser wavelength, dust particle diameter, and refractive index of dust particles on the scattered light intensity distribution are obtained for determining the scattered light intensity curves of single suspended dust particles under different characteristic parameters. A genetic algorithm was used to study the inverse particle size distribution, and the reliability of the measurement system design is proven theoretically. The dust concentration detection system, which includes a laser system, computer circuitry, air flow system, and control system, was then implemented according to the parameters obtained from the theoretical analysis. The performance of the designed system was evaluated. Experimental results show that the system performance was stable and reliable, resulting in high-precision automatic dust concentration measurement with strong anti-interference ability. PMID:28767662
Measurement uncertainty: Friend or foe?
Infusino, Ilenia; Panteghini, Mauro
2018-02-02
The definition and enforcement of a reference measurement system, based on the implementation of metrological traceability of patients' results to higher-order reference methods and materials, together with a clinically acceptable level of measurement uncertainty, are fundamental requirements for producing accurate and equivalent laboratory results. The uncertainty associated with each step of the traceability chain should be governed to obtain a final combined uncertainty on clinical samples that fulfils the requested performance specifications. It is important that end-users (i.e., clinical laboratories) can know and verify how in vitro diagnostics (IVD) manufacturers have implemented the traceability of their calibrators and estimated the corresponding uncertainty. However, full information about the traceability and combined uncertainty of calibrators is currently very difficult to obtain. Laboratory professionals should investigate the need to reduce the uncertainty of the higher-order metrological references and/or to increase the precision of commercial measuring systems. Accordingly, measurement uncertainty should not be considered a parameter to be calculated by clinical laboratories just to fulfil accreditation standards; it must become a key quality indicator describing the performance of both an IVD measuring system and the laboratory itself. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
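Under the common assumption of independent steps, the combined standard uncertainty along a traceability chain is the root sum of squares of the step uncertainties. A sketch with illustrative numbers, not taken from any real measuring system:

```python
import math

# Illustrative standard uncertainties along a hypothetical traceability chain
# (reference material -> manufacturer calibrator -> commercial system),
# all in the units of the measurand.
u_reference = 0.5    # higher-order reference
u_calibrator = 0.8   # value assignment of the calibrator
u_system = 1.0       # imprecision of the commercial measuring system

# Root-sum-of-squares combination, assuming the steps are independent.
u_combined = math.sqrt(u_reference**2 + u_calibrator**2 + u_system**2)
U_expanded = 2 * u_combined   # expanded uncertainty, coverage factor k = 2
```

The combined value is then compared against the clinically acceptable performance specification; note how the largest single contribution (here the measuring system) dominates the budget.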
A triple-mode hexa-standard reconfigurable TI cross-coupled ΣΔ modulator
NASA Astrophysics Data System (ADS)
Prakash A. V, Jos; Jose, Babita R.; Mathew, Jimson; Jose, Bijoy A.
2017-07-01
Hardware reconfigurability is an attractive solution for modern multi-standard wireless systems. This paper analyses the performance and implementation of an efficient triple-mode hexa-standard reconfigurable sigma-delta (∑Δ) modulator designed for six different wireless communication standards. Enhanced noise-shaping characteristics and an increased digitisation rate, obtained by time-interleaved cross-coupling of ∑Δ paths, have been utilised for the modulator design. Power/hardware efficiency and the capability to accommodate the requirements of the wide hexa-standard specifications are achieved by introducing an advanced noise-shaping structure, the dual-extended architecture. Simulation results of the proposed architecture using Hspice show that the proposed modulator achieves a peak signal-to-noise ratio of 83.4/80.2/67.8/61.5/60.8/51.03 dB for the hexa-standards, i.e. the GSM/Bluetooth/GPS/WCDMA/WLAN/WiMAX standards, with significantly less hardware and a low operating frequency. The proposed architecture is implemented in a 45 nm CMOS process using a 1 V supply and 0.7 V input range with a power consumption of 1.93 mW. Both architectural- and transistor-level simulation results prove the effectiveness and feasibility of this architecture for multi-standard cellular communication.
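The paper's triple-mode time-interleaved architecture is far more elaborate, but the basic ΣΔ principle it builds on, error feedback through an integrator and a coarse quantizer, can be sketched with a first-order behavioral model:

```python
import numpy as np

def first_order_sdm(x):
    """First-order ΣΔ modulator sketch: a 1-bit quantizer whose output is
    fed back and subtracted from the input, with the error integrated."""
    integ, out = 0.0, []
    for sample in x:
        q = 1.0 if integ >= 0.0 else -1.0   # 1-bit quantizer
        integ += sample - q                  # integrate the feedback error
        out.append(q)
    return np.array(out)

# Oversampled DC input: the mean of the bitstream tracks the input level,
# while quantization noise is pushed to high frequencies (noise shaping).
x = np.full(4096, 0.3)
bits = first_order_sdm(x)
assert abs(bits.mean() - 0.3) < 0.01
```

Higher-order and time-interleaved structures like the one above refine exactly this loop, trading loop complexity for a better in-band signal-to-noise ratio at a lower clock rate.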
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Ganguly, R.
With the current technological growth in the field of device fabrication, white power-LEDs are available for solid-state lighting applications. This is a paradigm shift from electrical lighting to electronic lighting. The implemented systems are showing promise by saving a considerable amount of energy as well as providing a good and acceptable illumination level. However, the `useful life' of such devices is an important parameter. If the proper device is not chosen, the desired reliability and performance will not be obtained. In the present work, different parameters associated with the reliability of such LEDs are studied. Four different varieties of LEDs are tested for `useful life' as per the IESNA LM 79 standard. From the results obtained, the proper LED is chosen for further application. Subsequently, a lighting design is carried out for a hospital waiting room (an indoor application) with 24 × 7 lighting requirements, to replace the existing CFLs there. The calculations show that although the initial cost is higher for LED-based lighting, the savings on energy and lamp replacement result in a payback time of less than a year.
NASA Astrophysics Data System (ADS)
Enfedaque, A.; Alberti, M. G.; Gálvez, J. C.
2017-09-01
The relevance of fibre reinforced cementitious materials (FRC) has increased due to the appearance of regulations that establish the requirements needed to take into account the contribution of the fibres in structural design. However, in order to exploit the properties of such materials, it is essential to be able to simulate their behaviour under fracture conditions. Considering a cohesive crack approach, several authors have studied the suitability of various softening functions. However, none of these functions can be directly applied to FRC. The present contribution analyses the suitability of multilinear softening functions for simulating fracture tests of a wide variety of FRC. The implementation of multilinear softening functions has been successfully performed by means of a material user subroutine in a commercial finite element code, obtaining accurate results for a wide variety of FRC. Such softening functions were capable of simulating ductile unloading behaviour as well as rapid unloading followed by reloading and, afterwards, slow unloading. Moreover, the implementation has proven versatile, robust and efficient from a numerical point of view.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, mainly because of a lack of knowledge of optimization techniques. Therefore, the simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. This new system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results while also being fast.
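The modelling stage can be sketched in a simplified form: an Extreme Learning Machine assigns random hidden-layer weights and solves the output weights in closed form by least squares. The response surface below is hypothetical, standing in for measured turning data, and the PSO stages are omitted.

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """Extreme Learning Machine sketch: random input weights, sigmoid
    hidden layer, output weights by least squares (no iterative training)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Hypothetical response surface, e.g. roughness vs normalized speed and feed.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
W, b, beta = elm_train(X, y)
pred = elm_predict(X, W, b, beta)
```

Because only the output layer is fitted, and in one linear solve, the ELM trains orders of magnitude faster than backpropagation, which is the speed advantage the abstract refers to; PSO would then search this fitted surface for the optimal cutting parameters.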
NASA Astrophysics Data System (ADS)
Kryjevskaia, Mila; Boudreaux, Andrew; Heins, Dustin
2014-03-01
Materials from Tutorials in Introductory Physics, originally designed and implemented by the Physics Education Group at the University of Washington, were used in modified form as interactive lectures under conditions significantly different from those suggested by the curriculum developers. Student learning was assessed using tasks drawn from the physics education research literature. Use of tutorials in the interactive lecture format yielded gains in student understanding comparable to those obtained through the canonical tutorial implementation at the University of Washington, suggesting that student engagement with the intellectual steps laid out in the tutorials, rather than the specific strategies used in facilitating such engagement, plays the central role in promoting student learning. We describe the implementation details and assessment of student learning for two different tutorials: one focused on mechanical waves, used at North Dakota State University, and one on Galilean relativity, used at Western Washington University. Also discussed are factors that may limit the generalizability of the results.
NASA Astrophysics Data System (ADS)
Molenda, Michał
2016-12-01
The article describes the effects of the improvement of the production process that one of the industrial enterprises obtained by implementing the method of Autonomous Maintenance (AM), one of the pillars of the concept of Total Productive Maintenance (TPM). The AM method is presented as an aid to the formation of intelligent, self-improving processes within a quality management system (QMS). The main part of this article presents the results of studies conducted in one of the large industrial enterprises in Poland, manufacturing for the automotive industry. The aim of the study was to evaluate the effectiveness of the implementation of the AM method as a tool for self-improvement of industrial processes in this company. The study was conducted in 2015. The gathering and comparison of data from a period of two years, i.e. the year before and the year after the implementation of AM, helped to determine the effectiveness of AM in building an intelligent quality management system.
On the inversion of geodetic integrals defined over the sphere using 1-D FFT
NASA Astrophysics Data System (ADS)
García, R. V.; Alejo, C. A.
2005-08-01
An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. Like the CG method, the number of iterations needed to get the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
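The projected Landweber iteration with FFT-based convolution can be sketched as follows; the Gaussian kernel and non-negativity constraint are illustrative, not Hotine's kernel or the geodetic constraint set.

```python
import numpy as np

# Sketch: inverting a 1-D cyclic convolution g = k * f by projected
# Landweber iteration, applying the convolution via 1-D FFT.
n = 128
x = np.arange(n)
f_true = np.maximum(0.0, np.sin(2 * np.pi * x / n))   # non-negative test signal
k = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)          # illustrative smooth kernel
k /= k.sum()
K = np.fft.fft(np.roll(k, -n // 2))                   # centered kernel spectrum

def conv(sig):
    """Cyclic convolution with k via FFT (k is symmetric, so this is
    also the adjoint operator)."""
    return np.real(np.fft.ifft(np.fft.fft(sig) * K))

g = conv(f_true)                                      # simulated observations

f = np.zeros(n)
tau = 1.0                         # step size; |K_hat| <= 1, so this converges
for _ in range(500):
    f = f + tau * conv(g - conv(f))   # Landweber step
    f = np.maximum(f, 0.0)            # projection onto the constraint set
assert np.max(np.abs(conv(f) - g)) < 1e-2
```

Each iteration costs only a few FFTs, which is why this approach matches the accuracy of direct matrix inversion at much lower computational cost; early stopping plays the regularizing role noted in the abstract.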
Materassi, Donatello; Baschieri, Paolo; Tiribilli, Bruno; Zuccheri, Giampaolo; Samorì, Bruno
2009-08-01
We describe the realization of an atomic force microscope architecture designed to perform customizable experiments in a flexible and automatic way. Novel technological contributions are given by the software implementation platform (RTAI-Linux), which is free and open source, and, from a functional point of view, by the implementation of hard real-time control algorithms. Some other technical solutions, such as a new way to estimate the optical lever constant, are described as well. The adoption of this architecture provides many degrees of freedom in the device behavior and, furthermore, allows one to obtain a flexible experimental instrument at a relatively low cost. In particular, we show how such a system has been employed to obtain measurements in sophisticated single-molecule force spectroscopy experiments [Fernandez and Li, Science 303, 1674 (2004)]. Experimental results on proteins already studied using the same methodologies are provided in order to show the reliability of the measurement system.
Testing of a 4 K to 2 K heat exchanger with an intermediate pressure drop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knudsen, Peter N.; Ganni, Venkatarao
2015-12-01
Most large sub-atmospheric helium refrigeration systems incorporate a heat exchanger at the load, or in the distribution system, to counter-flow the sub-atmospheric return with the super-critical or liquid supply. A significant process improvement is theoretically obtainable by handling the exergy loss across the Joule-Thomson throttling valve supplying the flow to the load in a simple but different manner. As briefly outlined in previous publications, the exergy loss can be minimized by allowing the supply flow pressure to decrease to a sub-atmospheric pressure concurrent with heat exchange with flow from the load. One practical implementation is to sub-divide the supply flow pressure drop between two heat exchanger sections, incorporating an intermediate pressure drop. Such a test is being performed at Jefferson Lab's Cryogenic Test Facility (CTF). This paper will briefly discuss the theory, practical implementation, and test results and analysis obtained to date.
Source-Adaptation-Based Wireless Video Transport: A Cross-Layer Approach
NASA Astrophysics Data System (ADS)
Qu, Qi; Pei, Yong; Modestino, James W.; Tian, Xusheng
2006-12-01
Real-time packet video transmission over wireless networks is expected to experience bursty packet losses that can cause substantial degradation to the transmitted video quality. In wireless networks, channel state information is hard to obtain in a reliable and timely manner due to the rapid change of wireless environments. However, the source motion information is always available and can be obtained easily and accurately from video sequences. Therefore, in this paper, we propose a novel cross-layer framework that exploits only the motion information inherent in video sequences and efficiently combines a packetization scheme, a cross-layer forward error correction (FEC)-based unequal error protection (UEP) scheme, an intracoding rate selection scheme as well as a novel intraframe interleaving scheme. Our objective and subjective results demonstrate that the proposed approach is very effective in dealing with the bursty packet losses occurring on wireless networks without incurring any additional implementation complexity or delay. Thus, the simplicity of our proposed system has important implications for the implementation of a practical real-time video transmission system.
Timing effects of antecedent- and response-focused emotion regulation strategies.
Paul, Sandra; Simon, Daniela; Kniesche, Rainer; Kathmann, Norbert; Endrass, Tanja
2013-09-01
Distraction and cognitive reappraisal influence the emotion-generative process at early stages and have been shown to effectively attenuate emotional responding. Inhibiting emotion-expressive behavior is thought to be less beneficial due to later implementation, but empirical results are mixed. Thus, the current study examined the temporal dynamics of these emotion regulation strategies at attenuating the late positive potential (LPP) while participants were shown unpleasant pictures. Results revealed that all strategies successfully reduced the LPP and self-reported negative affect. We confirmed that distraction attenuated the LPP earlier than cognitive reappraisal. Surprisingly, expressive suppression affected emotional responding as early as distraction. This suggests that suppression was used preventively and disrupted the emotion-generative process from the very beginning instead of targeting the emotional response itself. Thus, the obtained results point to the importance of considering the point in time when response-focused emotion regulation strategies are being implemented. Copyright © 2013 Elsevier B.V. All rights reserved.
Fernández-Díaz, Mª Jose; Rodríguez-Mantilla, Jesús Miguel; Jover-Olmeda, Gonzalo
2017-08-01
This paper analyses the importance of evaluating the various components of the programmes or actions carried out by education organisations. It highlights the need to assess the impact of an intervention on the organisation and to consider how changes are consolidated over time in interaction with the context. We propose an impact evaluation model and, as an example, have chosen the implementation of Quality Management Systems in schools. The paper analyses the results obtained in 40 schools in three regions (Spanish Autonomous Communities) with varying levels of implementation. The results show an overall impact on these education centres as perceived by their teachers and management teams. This impact is more evident in some of the dimensions considered in the study than in others. The results also confirm differences between regional contexts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas
2016-03-01
The use of information technology is widespread in healthcare. With regard to scientific research, SINPE(c) - Integrated Electronic Protocols was created as a tool to support researchers by offering clinical data standardization. At the time, SINPE(c) lacked statistical tests obtained by automatic analysis. The aim was to add to SINPE(c) features for the automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in healthcare; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their master's and doctoral theses in one postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those computed manually by a statistician experienced with this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher, and Student's t tests were considered frequently used by participants in medical studies. These methods were implemented and thereafter approved as expected. The automatic statistical analysis incorporated into SINPE(c) was shown to be reliable and equal to that done manually, validating its use as a tool for medical research.
Poudel, Sashi; Weir, Lori; Dowling, Dawn; Medich, David C
2016-08-01
A statistical pilot study was retrospectively performed to analyze potential changes in occupational radiation exposures to Interventional Radiology (IR) staff at Lawrence General Hospital after implementation of the i2 Active Radiation Dosimetry System (Unfors RaySafe Inc, 6045 Cochran Road, Cleveland, OH 44139-3302). In this study, the monthly OSL dosimetry records obtained during the eight-month period prior to i2 implementation were normalized to the number of procedures performed during each month and statistically compared to the normalized dosimetry records obtained for the 8-mo period after i2 implementation. The resulting statistics included calculation of the mean and standard deviation of the dose equivalent per procedure and included appropriate hypothesis tests to assess for statistically valid differences between the pre- and post-i2 study periods. Hypothesis testing was performed on three groups of staff present during an IR procedure: the first group included all members of the IR staff, the second group consisted of the IR radiologists, and the third group consisted of the IR technician staff. After implementing the i2 active dosimetry system, participating members of the Lawrence General IR staff had a reduction in the average dose equivalent per procedure of 43.1% ± 16.7% (p = 0.04). Similarly, Lawrence General IR radiologists had a 65.8% ± 33.6% (p = 0.01) reduction while the technologists had a 45.0% ± 14.4% (p = 0.03) reduction.
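The normalization and hypothesis testing described above can be sketched as follows; the monthly dose figures and the use of a Welch two-sample t-test are illustrative assumptions, not the study's actual data or test choice.

```python
import math
import statistics

# Hypothetical dose equivalents per procedure (mSv), 8 months pre and post
pre  = [0.42, 0.39, 0.45, 0.41, 0.38, 0.44, 0.40, 0.43]
post = [0.24, 0.22, 0.26, 0.23, 0.21, 0.25, 0.24, 0.22]

mean_pre, mean_post = statistics.mean(pre), statistics.mean(post)
reduction = 100.0 * (mean_pre - mean_post) / mean_pre  # percent reduction

# Welch's t statistic (two-sample, unequal variances)
var_pre, var_post = statistics.variance(pre), statistics.variance(post)
se = math.sqrt(var_pre / len(pre) + var_post / len(post))
t = (mean_pre - mean_post) / se

print(f"reduction = {reduction:.1f}%, t = {t:.2f}")
```

With real records one would also compute the p-value from the Welch-Satterthwaite degrees of freedom; that step is omitted here for brevity.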
NASA Astrophysics Data System (ADS)
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
14 CFR 120.117 - Implementing a drug testing program.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Implementing a drug testing program. 120... AND ALCOHOL TESTING PROGRAM Drug Testing Program Requirements § 120.117 Implementing a drug testing.... (4) A part 145 certificate holder who has your own drug testing program Obtain an Antidrug and...
14 CFR 120.117 - Implementing a drug testing program.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Implementing a drug testing program. 120... AND ALCOHOL TESTING PROGRAM Drug Testing Program Requirements § 120.117 Implementing a drug testing... 145 certificate holder who has your own drug testing program Obtain an Antidrug and Alcohol Misuse...
14 CFR 120.117 - Implementing a drug testing program.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Implementing a drug testing program. 120... AND ALCOHOL TESTING PROGRAM Drug Testing Program Requirements § 120.117 Implementing a drug testing... 145 certificate holder who has your own drug testing program Obtain an Antidrug and Alcohol Misuse...
14 CFR 120.117 - Implementing a drug testing program.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Implementing a drug testing program. 120... AND ALCOHOL TESTING PROGRAM Drug Testing Program Requirements § 120.117 Implementing a drug testing... 145 certificate holder who has your own drug testing program Obtain an Antidrug and Alcohol Misuse...
Analysis of biomolecular solvation sites by 3D-RISM theory.
Sindhikara, Daniel J; Hirata, Fumio
2013-06-06
We derive, implement, and apply equilibrium solvation site analysis for biomolecules. Our method utilizes 3D-RISM calculations to quickly obtain equilibrium solvent distributions without the need for simulation or the limits of solvent sampling. Our analysis of these distributions extracts the highest-likelihood poses of solvent as well as localized entropies, enthalpies, and solvation free energies. We demonstrate our method on a structure of HIV-1 protease for which excellent structural and thermodynamic data are available for comparison. Our results, obtained within minutes, show systematic agreement with available experimental data. Further, our results are in good agreement with established simulation-based solvent analysis methods. This method can be used not only for visual analysis of active-site solvation but also for virtual screening methods and experimental refinement.
NASA Astrophysics Data System (ADS)
Bendaoud, Issam; Matteï, Simone; Cicala, Eugen; Tomashchuk, Iryna; Andrzejewski, Henri; Sallamand, Pierre; Mathieu, Alexandre; Bouchaud, Fréderic
2014-03-01
The present study is dedicated to the numerical simulation of an industrial case of hybrid laser-MIG welding of high-thickness duplex steel UR2507Cu with Y-shaped chamfer geometry. It consists of simulating the heat transfer phenomena using an equivalent heat source approach implemented in the finite element software COMSOL Multiphysics. A numerical design-of-experiments method is used to identify the heat source parameters so as to minimize the difference between the numerical results and the experiment, namely the shape of the welded zone and the temperature evolution at different locations. The obtained results were found to be in good correspondence with experiment, both for the melted zone shape and the thermal history.
First-principles chemical kinetic modeling of methyl trans-3-hexenoate epoxidation by HO2
Cagnina, S.; Nicolle, Andre; de Bruin, T.; ...
2017-02-16
The design of innovative combustion processes relies on a comprehensive understanding of biodiesel oxidation kinetics. The present study aims at unraveling the reaction mechanism involved in the epoxidation of a realistic biodiesel surrogate, methyl trans-3-hexenoate, by hydroperoxy radicals using a bottom-up theoretical kinetics methodology. The obtained rate constants are in good agreement with experimental data for alkene epoxidation by HO2. The impact of temperature and pressure on epoxidation pathways involving H-bonded and non-H-bonded conformers was assessed. As a result, the obtained rate constant was finally implemented into a state-of-the-art detailed combustion mechanism, resulting in fairly good agreement with engine experiments.
A genetic algorithm solution to the unit commitment problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazarlis, S.A.; Bakirtzis, A.G.; Petridis, V.
1996-02-01
This paper presents a Genetic Algorithm (GA) solution to the Unit Commitment problem. GAs are general-purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination, and survival of the fittest. A simple GA implementation using the standard crossover and mutation operators could locate near-optimal solutions but in most cases failed to converge to the optimal solution. However, using the Varying Quality Function technique and adding problem-specific operators, satisfactory solutions to the Unit Commitment problem were obtained. Test results for systems of up to 100 units and comparisons with results obtained using Lagrangian Relaxation and Dynamic Programming are also reported.
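A minimal sketch of the GA machinery the abstract describes (binary encoding, crossover, mutation, and a demand penalty loosely standing in for the quality-function idea), applied to a toy single-period commitment problem; the unit capacities, costs, and demand below are invented for illustration, not the paper's test systems.

```python
import random

random.seed(42)  # reproducible toy run

# Toy single-period unit commitment: pick units so capacity meets demand
# at minimum cost. All figures are illustrative.
CAP    = [100, 80, 60, 40, 30]   # MW per unit
COST   = [10, 9, 8, 12, 14]      # $/MW running cost per unit
DEMAND = 150                     # MW

def fitness(bits):
    cap = sum(c for c, b in zip(CAP, bits) if b)
    cost = sum(c * k for c, k, b in zip(CAP, COST, bits) if b)
    penalty = 1e4 * max(0, DEMAND - cap)  # infeasible schedules penalized
    return -(cost + penalty)              # GA maximizes fitness

def crossover(a, b):
    p = random.randrange(1, len(a))       # single-point crossover
    return a[:p] + b[p:]

def mutate(bits, rate=0.1):
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in CAP] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                      # truncation selection with elitism
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(10)]

best = max(pop, key=fitness)
print(best, -fitness(best))               # best schedule and its cost
```

The real problem is multi-period with minimum up/down times and start-up costs, which is where the paper's problem-specific operators come in; this sketch only shows the encoding-and-penalty skeleton.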
A Hyperspectral Image Classification Method Using ISOMAP and RVM
NASA Astrophysics Data System (ADS)
Chang, H.; Wang, T.; Fang, H.; Su, Y.
2018-04-01
Classification is one of the most significant applications of hyperspectral image processing and, more broadly, remote sensing. Though various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods. Thus further investigation of some aspects, such as dimension reduction, data mining, and rational use of spatial information, is needed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering the impropriety of Euclidean distance for spectral measurement, we substituted the spectral angle (SA) when constructing the neighbourhood graph. Then, relevance vector machines (RVM) were introduced to implement classification instead of support vector machines (SVM) for simplicity, generalization, and sparsity. Therefore, a probability result could be obtained rather than a less convincing binary result. Moreover, to take into account the spatial information of the hyperspectral image, we employ a spatial vector formed by the ratios of the different classes around the pixel. Finally, we combined the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we conducted multiple experiments on standard hyperspectral images and compared against other methods. The results and different evaluation indexes illustrate the effectiveness of our method.
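The spectral angle substitution mentioned above can be illustrated as follows; the sample spectra are hypothetical. Because the angle ignores overall magnitude, two spectra that differ only in brightness measure as identical, which is the property that motivates using SA instead of Euclidean distance when building the neighbourhood graph.

```python
import math

def spectral_angle(x, y):
    """Spectral angle (radians) between two spectra; invariant to scaling."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # clamp guards against floating-point drift outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))

s1 = [0.2, 0.4, 0.6, 0.8]   # hypothetical 4-band spectrum
s2 = [0.4, 0.8, 1.2, 1.6]   # same material, twice as bright: angle ~ 0
s3 = [0.8, 0.6, 0.4, 0.2]   # different spectral shape

print(spectral_angle(s1, s2))
print(spectral_angle(s1, s3))
```

In the paper's pipeline this distance would replace Euclidean distance when selecting each pixel's neighbours for the ISOMAP graph; the illumination invariance keeps shading differences from splitting one material across the manifold.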
Egan, M; Bambra, C; Petticrew, M; Whitehead, M
2009-01-01
Background: The reporting of intervention implementation in studies included in systematic reviews of organisational-level workplace interventions was appraised. Implementation is taken to include such factors as intervention setting, resources, planning, collaborations, delivery and macro-level socioeconomic contexts. Understanding how implementation affects intervention outcomes may help prevent erroneous conclusions and misleading assumptions about generalisability, but implementation must be adequately reported if it is to be taken into account. Methods: Data on implementation were obtained from four systematic reviews of complex interventions in workplace settings. Implementation was appraised using a specially developed checklist and by means of an unstructured reading of the text. Results: 103 studies were identified and appraised, evaluating four types of organisational-level workplace intervention (employee participation, changing job tasks, shift changes and compressed working weeks). Many studies referred to implementation, but reporting was generally poor and anecdotal in form. This poor quality of reporting did not vary greatly by type or date of publication. A minority of studies described how implementation may have influenced outcomes. These descriptions were more usefully explored through an unstructured reading of the text, rather than by means of the checklist. Conclusions: Evaluations of complex interventions should include more detailed reporting of implementation and consider how to measure quality of implementation. The checklist helped us explore the poor reporting of implementation in a more systematic fashion. In terms of interpreting study findings and their transferability, however, the more qualitative appraisals appeared to offer greater potential for exploring how implementation may influence the findings of specific evaluations. 
Implementation appraisal techniques for systematic reviews of complex interventions require further development and testing. PMID:18718981
The energy audit process for universities accommodation in Malaysia: a preliminary study
NASA Astrophysics Data System (ADS)
Dzulkefli Muhammad, Hilmi
2017-05-01
The increase of energy consumption in Malaysian universities has raised national concern because it increases the government's fiscal burden and at the same time has negative impacts on the environment. The purpose of this research is to focus on the energy audit process conducted in Malaysian universities and to identify significant practices that can improve the energy consumption of the selected universities. The significant criteria in an energy audit may be found by comparing the energy implementation processes of selected Malaysian universities through investigation of energy consumption behaviour and of the electrical appliances, equipment, machinery, and building activities that affect energy consumption and can improve energy efficiency in buildings. The Energy Efficiency Index (EEI) will be used as an indicator, combined with the suggested application of HOMER software, to obtain solutions and possible improvements of energy consumption during energy audit implementation. A document analysis approach will also be used to identify best practice from the selected energy documentation. The results of this research may be used as a guideline for other universities with high energy consumption, to help improve the implementation of the energy audit process in those universities.
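One common building-level form of the EEI mentioned above is annual electricity use per gross floor area (kWh/m²/year); a minimal sketch with invented campus figures, assuming that definition, might look like:

```python
# Hypothetical campus data; EEI computed as annual electricity use per
# gross floor area (kWh/m^2/year), one common building-level definition.
buildings = {
    "library":     {"kwh_per_year": 1_200_000, "floor_area_m2": 8_000},
    "engineering": {"kwh_per_year": 2_500_000, "floor_area_m2": 12_000},
    "dormitory":   {"kwh_per_year":   900_000, "floor_area_m2": 10_000},
}

eei = {name: b["kwh_per_year"] / b["floor_area_m2"]
       for name, b in buildings.items()}

for name, value in eei.items():
    print(f"{name}: EEI = {value:.0f} kWh/m2/year")
```

Ranking buildings by this index is a typical first step of an audit: the highest-EEI buildings become the candidates for detailed appliance-level investigation.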
Transmission line relay mis-operation detection based on time-synchronized field data
Esmaeilian, Ahad; Popovic, Tomo; Kezunovic, Mladen
2015-05-04
In this paper, a real-time tool to detect transmission line relay mis-operation is implemented. The tool uses time-synchronized measurements obtained from both ends of the line during disturbances. The proposed fault analysis tool comes into the picture only after the protective device has operated and tripped the line. The proposed methodology is able not only to detect, classify, and locate transmission line faults, but also to accurately confirm whether the line was tripped due to a mis-operation of protective relays. The analysis report includes either a detailed description of the fault type and location or detection of relay mis-operation. As such, it can be a source of very useful information to support system restoration. The focus of the paper is on the implementation requirements that allow practical application of the methodology, which is illustrated using field data obtained from a real power system. Testing and validation are done using field data recorded by digital fault recorders and protective relays. The test data included several hundred event records corresponding to both relay mis-operations and actual faults. The discussion of results addresses various challenges encountered during the implementation and validation of the presented methodology.
NOTE: Implementation of angular response function modeling in SPECT simulations with GATE
NASA Astrophysics Data System (ADS)
Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.
2010-05-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
Strategic Planning, Implementation, and Evaluation Processes in Hospital Systems: A Survey From Iran
Sadeghifar, Jamil; Jafari, Mehdi; Tofighi, Shahram; Ravaghi, Hamid; Maleki, Mohammad Reza
2015-01-01
Aim & Background: Strategic planning has been presented as an important management practice. However, evidence of its deployment in healthcare systems in low-income and middle-income countries (LMICs) is limited. This study investigated the strategic management process in Iranian hospitals. Methods: The present study was conducted in 24 teaching hospitals in Tehran, Iran from September 2012 to March 2013. The data collection instrument was a questionnaire of 130 items. This questionnaire measured the status of formulation, implementation, and evaluation of the strategic plan as well as the requirements, facilitators, and benefits of planning in the studied hospitals. Results: All the investigated hospitals had a strategic plan. The percentages obtained for compliance with requirements and quantity of planning facilitators (68.75%), attention to stakeholder participation in planning (55.74%), attention to the planning components (62.22%), the status of evaluating the strategic plan (59.94%), and the benefits of strategic planning for hospitals (65.15%) were in the medium range. However, the status of implementation of the strategic plan (53.71%) was found to be weak. Significant statistical correlations were observed between the incentive for developing a strategic plan and the status of the evaluation phase (P=0.04), and between the status of the implementation phase and having a documented strategic plan (P=0.03). Conclusion: According to the results, it seems that the absence of appropriate internal incentives for formulating and implementing strategies led most hospitals to start strategic planning merely in accordance with the legal requirements of the Ministry of Health. Consequently, even though all the investigated hospitals had documented strategic plans, the plans have not been implemented efficiently and valid evaluation of results is yet to be achieved. PMID:25716385
Implementing medical abortion with mifepristone and misoprostol in Norway 1998–2013
Løkeland, Mette; Bjørge, Tone; Iversen, Ole-Erik; Akerkar, Rupali; Bjørge, Line
2017-01-01
Abstract Background: Medical abortion with mifepristone and misoprostol was introduced in Norway in 1998, and since then there has been an almost complete change from predominantly surgical to medical abortions. We aimed to describe the medical abortion implementation process, and to compare characteristics of women obtaining medical and surgical abortion. Methods: Information from all departments of obstetrics and gynaecology in Norway on the time of implementation of medical abortion and abortion procedures in use up to 12 weeks of gestation was assessed by surveys in 2008 and 2012. We also analysed data from the National Abortion Registry comprising 223 692 women requesting abortion up to 12 weeks of gestation during 1998–2013. Results: In 2012, all hospitals offered medical abortion, 84.4% offered medical abortion at 9–12 weeks of gestation and 92.1% offered home administration of misoprostol. The use of medical abortion increased from 5.9% of all abortions in 1998 to 82.1% in 2013. Compared with women having a surgical abortion, women obtaining medical abortion had higher odds for undergoing an abortion at 4–6 weeks (adjusted OR 2.33; 95% confidence interval 2.28-2.38). Waiting time between registered request for an abortion until termination was reduced from 11.3 days in 1998 to 7.3 days in 2013. Conclusions: Norwegian women have gained access to more treatment modalities and simplified protocols for medical abortion. At the same time they obtained abortions at an earlier gestational age and the waiting time has been reduced. PMID:28031316
Velocity Statistics and Spectra in Three-Stream Jets
NASA Technical Reports Server (NTRS)
Ecker, Tobias; Lowe, K. Todd; Ng, Wing F.; Henderson, Brenda; Leib, Stewart
2016-01-01
Velocimetry measurements were obtained in three-stream jets at the NASA Glenn Research Center Nozzle Acoustics Test Rig using the time-resolved Doppler global velocimetry technique. These measurements afford exceptional frequency response, to 125 kHz bandwidth, in order to study the detailed dynamics of turbulence in developing shear flows. Mean stream-wise velocity is compared to measurements acquired using particle image velocimetry for validation. Detailed results for convective velocity distributions throughout an axisymmetric plume and the thick side of a plume with an offset third-stream duct are provided. The convective velocity results show that, as expected, the eddy speeds are reduced on the thick side of the plume compared to the axisymmetric case. The results indicate that the time-resolved Doppler global velocimetry method holds promise for obtaining results valuable to the implementation and refinement of jet noise prediction methods being developed for three-stream jets.
Verloo, Henk; Desmedt, Mario; Morin, Diane
2017-09-01
To evaluate two psychometric properties of the French versions of the Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales, namely their internal consistency and construct validity. The Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales developed by Melnyk et al. are recognised as valid, reliable instruments in English. However, no psychometric validation of their French versions existed. Secondary analysis of a cross-sectional survey. Source data came from a cross-sectional descriptive study sample of 382 nurses and other allied healthcare providers. Cronbach's alpha was used to evaluate internal consistency, and principal axis factor analysis with varimax rotation was computed to determine construct validity. The French Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales showed excellent reliability, with Cronbach's alphas close to those established for Melnyk et al.'s original versions. Principal axis factor analysis showed medium-to-high factor loadings without evidence of collinearity. Principal axis factor analysis with varimax rotation of the 16-item Evidence-Based Practice Beliefs scale resulted in a four-factor structure. Principal axis factor analysis with varimax rotation of the 17-item Evidence-Based Practice Implementation scale revealed a two-factor structure. Further research should attempt to understand why the French Evidence-Based Practice Implementation scale showed a two-factor structure whereas Melnyk et al.'s original has only one. The French versions of the Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales can both be considered valid and reliable instruments for measuring Evidence-Based Practice beliefs and implementation.
The results suggest that the French Evidence-Based Practice Beliefs and Evidence-Based Practice Implementation scales are valid and reliable and can therefore be used to evaluate the effectiveness of organisational strategies aimed at increasing professionals' confidence in Evidence-Based Practice, supporting its use and implementation. © 2017 John Wiley & Sons Ltd.
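Cronbach's alpha, used above to evaluate internal consistency, can be computed from the per-item variances and the variance of respondents' total scores; the 4-item scale and Likert responses below are hypothetical, not the study's data.

```python
import statistics

def cronbach_alpha(items):
    """items: list of per-item score lists, all over the same respondents."""
    k = len(items)
    item_vars = sum(statistics.variance(it) for it in items)  # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]          # total score per respondent
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-item scale, 6 respondents, 1-5 Likert scores
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 1, 5],
    [3, 4, 3, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))
```

Alpha rises when items covary strongly relative to their individual variances, which is why a translated scale must be re-checked: translation can weaken inter-item correlations even when each item is individually sound.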
Wu, Shu Juan; Hayden, Joshua A
2018-02-15
Sandwich immunoassays offer advantages in the clinical laboratory but can yield erroneously low results due to the hook (prozone) effect, especially with analytes whose concentrations span several orders of magnitude, such as ferritin. This study investigated a new approach to reduce the likelihood of hook effect in ferritin immunoassays by performing upfront, five-fold dilutions of all samples for ferritin analysis. The impact of this change on turnaround time and costs was also investigated. Ferritin concentrations were analysed in routine clinical practice with and without upfront dilutions on Siemens Centaur® XP (Siemens Healthineers, Erlangen, Germany) immunoanalysers. In addition, one month of baseline data (1026 results) were collected prior to implementing upfront dilutions and one month of data (1033 results) were collected after implementation. Without upfront dilutions, hook effect was observed in samples with ferritin concentrations as low as 86,028 µg/L. With upfront dilutions, samples with ferritin concentrations as high as 126,050 µg/L yielded values greater than the measurement interval and would have been diluted until an accurate value was obtained. The implementation of upfront dilution of ferritin samples led to a decrease in turnaround time from a median of 2 hours and 3 minutes to 1 hour and 18 minutes (P = 0.002). Implementation of upfront dilutions of all ferritin samples reduced the possibility of hook effect, improved turnaround time and saved the cost of performing additional dilutions.
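The rationale for the upfront dilution can be sketched with a toy model of the hook effect; all thresholds and concentrations below are illustrative, not the Centaur assay's actual characteristics.

```python
# Toy model of the hook (prozone) effect: above a hook threshold, antigen
# excess suppresses the sandwich signal, so a grossly elevated ferritin can
# read falsely low with no instrument flag. Numbers are illustrative only.
MEASURING_RANGE_TOP = 1_650   # ug/L, upper limit of the assay readout
HOOK_THRESHOLD = 80_000       # ug/L, signal "hooks" downward above this

def measure(true_conc):
    """Simulated analyser readout for a neat (undiluted) sample."""
    if true_conc > HOOK_THRESHOLD:
        return true_conc / 1000                 # falsely low, unflagged result
    return min(true_conc, MEASURING_RANGE_TOP)  # capped reads flag for dilution

def measure_with_upfront_dilution(true_conc, factor=5):
    """Upfront dilution keeps the analysed concentration below the hook."""
    return measure(true_conc / factor) * factor

sample = 100_000  # ug/L, grossly elevated ferritin

print(measure(sample))                        # hooked: looks like a normal value
print(measure_with_upfront_dilution(sample))  # capped at range top x 5 -> flags
```

The key asymmetry: the hooked neat reading looks plausible and is reported, while the diluted reading hits the top of the measuring interval and visibly demands a further dilution, so the error is caught.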
5 CFR 410.308 - Training to obtain an academic degree.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 1 2011-01-01 2011-01-01 false Training to obtain an academic degree... REGULATIONS TRAINING Establishing and Implementing Training Programs § 410.308 Training to obtain an academic degree. (a) An agency may authorize training for an employee to obtain an academic degree under...
5 CFR 410.308 - Training to obtain an academic degree.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Training to obtain an academic degree... REGULATIONS TRAINING Establishing and Implementing Training Programs § 410.308 Training to obtain an academic degree. (a) An agency may authorize training for an employee to obtain an academic degree under...
5 CFR 410.308 - Training to obtain an academic degree.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Training to obtain an academic degree... REGULATIONS TRAINING Establishing and Implementing Training Programs § 410.308 Training to obtain an academic degree. (a) An agency may authorize training for an employee to obtain an academic degree under...
5 CFR 410.308 - Training to obtain an academic degree.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Training to obtain an academic degree... REGULATIONS TRAINING Establishing and Implementing Training Programs § 410.308 Training to obtain an academic degree. (a) An agency may authorize training for an employee to obtain an academic degree under...
5 CFR 410.308 - Training to obtain an academic degree.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Training to obtain an academic degree... REGULATIONS TRAINING Establishing and Implementing Training Programs § 410.308 Training to obtain an academic degree. (a) An agency may authorize training for an employee to obtain an academic degree under...
Feasibility study of robotic neural controllers
NASA Technical Reports Server (NTRS)
Magana, Mario E.
1990-01-01
The results are given of a feasibility study performed to establish whether an artificial neural controller could be used to achieve joint-space trajectory tracking of a two-link robot manipulator. The study is based on the results obtained by Hecht-Nielsen, who claims that a functional map can be implemented to a desired degree of accuracy with a three-layer feedforward artificial neural network. Central to this study is the assumption that the robot model as well as its parameter values are known.
The Simulation of a Jumbo Jet Transport Aircraft. Volume 2: Modeling Data
NASA Technical Reports Server (NTRS)
Hanke, C. R.; Nordwall, D. R.
1970-01-01
The manned simulation of a large transport aircraft is described. Aircraft and systems data necessary to implement the mathematical model described in Volume I, and a discussion of how these data are used in the model, are presented. The results of the real-time computations in the NASA Ames Research Center Flight Simulator for Advanced Aircraft are shown and compared to flight test data and to the results obtained in a training simulator known to be satisfactory.
Numerical simulation of h-adaptive immersed boundary method for freely falling disks
NASA Astrophysics Data System (ADS)
Zhang, Pan; Xia, Zhenhua; Cai, Qingdong
2018-05-01
In this work, a freely falling disk with aspect ratio 1/10 is directly simulated using an adaptive numerical model implemented on the parallel computation framework JASMIN. The adaptive numerical model combines the h-adaptive mesh refinement technique with the implicit immersed boundary method (IBM). Our numerical results agree well with the experimental results in all six degrees of freedom of the disk. Furthermore, vortex structures very similar to those observed in the experiment were obtained.
Word and frame synchronization with verification for PPM optical communications
NASA Technical Reports Server (NTRS)
Marshall, William K.
1986-01-01
A method for obtaining word and frame synchronization in pulse position modulated optical communication systems is described. The method uses a short sync sequence inserted at the beginning of each data frame and a verification procedure to distinguish between inserted and randomly occurring sequences at the receiver. This results in an easy to implement sync system which provides reliable synchronization even at high symbol error rates. Results are given for the application of this approach to a highly energy efficient 256-ary PPM test system.
Modelling crystal growth: Convection in an asymmetrically heated ampoule
NASA Technical Reports Server (NTRS)
Alexander, J. Iwan D.; Rosenberger, Franz; Pulicani, J. P.; Krukowski, S.; Ouazzani, Jalil
1990-01-01
The objective was to develop and implement a numerical method capable of solving the nonlinear partial differential equations governing heat, mass, and momentum transfer in a 3-D cylindrical geometry in order to examine the character of convection in an asymmetrically heated cylindrical ampoule. The details of the numerical method, including verification tests involving comparison with results obtained from other methods, are presented. The results of the study of 3-D convection in an asymmetrically heated cylinder are described.
Bianchini, Monica; Scarselli, Franco
2014-08-01
Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed of several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications that face very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to studying how the depth of feedforward neural networks affects their ability to implement high-complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared by deriving upper and lower bounds on their complexity, and by studying how the complexity depends on the number of hidden units and the activation function used. The obtained results seem to support the idea that deep networks actually implement functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.
Massively Multithreaded Maxflow for Image Segmentation on the Cray XMT-2
Bokhari, Shahid H.; Çatalyürek, Ümit V.; Gurcan, Metin N.
2014-01-01
Image segmentation is a very important step in the computerized analysis of digital images. The maxflow-mincut approach has been successfully used to obtain minimum-energy segmentations of images in many fields. Classical algorithms for maxflow in networks do not directly lend themselves to efficient parallel implementations on contemporary parallel processors. We present the results of an implementation of the Goldberg-Tarjan preflow-push algorithm on the Cray XMT-2 massively multithreaded supercomputer. This machine has hardware support for 128 threads in each physical processor, a uniformly accessible shared memory of up to 4 TB, and hardware synchronization for each 64-bit word. It is thus well suited to the parallelization of graph-theoretic algorithms such as preflow-push. We describe the implementation of the preflow-push code on the XMT-2 and present the results of timing experiments on a series of synthetically generated as well as real images. Our results indicate very good performance on large images and pave the way for practical applications of this machine architecture for image analysis in a production setting. The largest images we have run are 32000² pixels in size, which is well beyond the largest previously reported in the literature. PMID:25598745
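The abstract above describes the Goldberg-Tarjan preflow-push (push-relabel) algorithm but includes no code. As a rough single-threaded sketch of that technique — not the authors' multithreaded XMT-2 implementation; the function name and the example graph are ours — it can be written as:

```python
from collections import defaultdict

def max_flow_push_relabel(cap, n, s, t):
    """Serial push-relabel (preflow-push) max flow; cap maps (u, v) -> capacity."""
    flow = defaultdict(int)              # antisymmetric: flow[(v, u)] == -flow[(u, v)]
    height = [0] * n
    excess = [0] * n
    height[s] = n                        # source starts at height n

    def residual(u, v):
        return cap.get((u, v), 0) - flow[(u, v)]

    # Saturate every source edge to create the initial preflow.
    for (u, v), c in list(cap.items()):
        if u == s:
            flow[(s, v)] += c
            flow[(v, s)] -= c
            excess[v] += c
            excess[s] -= c

    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active.pop(0)
        while excess[u] > 0:             # discharge u completely
            pushed = False
            for v in range(n):
                if residual(u, v) > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], residual(u, v))
                    flow[(u, v)] += d
                    flow[(v, u)] -= d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:               # relabel: lift u just above its lowest residual neighbor
                height[u] = 1 + min(height[v] for v in range(n) if residual(u, v) > 0)
    return excess[t]
```

The parallel appeal noted in the abstract comes from the fact that pushes and relabels touch only a node and its neighbors, so many nodes can be discharged concurrently given hardware synchronization per word.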
Data analysis of response interruption and redirection as a treatment for vocal stereotypy.
Wunderlich, Kara L; Vollmer, Timothy R
2015-12-01
Vocal stereotypy, or repetitive, noncontextual vocalizations, is a problematic form of behavior exhibited by many individuals with autism spectrum disorder (ASD). Recent research has evaluated the efficacy of response interruption and redirection (RIRD) in the reduction of vocal stereotypy. Research has indicated that RIRD often results in reductions in the level of vocal stereotypy; however, many previous studies have only presented data on vocal stereotypy that occurred outside RIRD implementation. The current study replicated the procedures of previous studies that have evaluated the efficacy of RIRD and compared 2 data-presentation methods: inclusion of only data collected outside RIRD implementation and inclusion of all vocal stereotypy data from the entirety of each session. Subjects were 7 children who had been diagnosed with ASD. Results indicated that RIRD appeared to be effective when we evaluated the level of vocal stereotypy outside RIRD implementation, but either no reductions or more modest reductions in the level of vocal stereotypy during the entirety of sessions were obtained for all subjects. Results suggest that data-analysis methods used in previous research may overestimate the efficacy of RIRD. © Society for the Experimental Analysis of Behavior.
Secure Hardware Design for Trust
2014-03-01
... approach. The Grain VHDL code was obtained from [13] and implemented in the same fashion as shown in Figure 5. ... A CRC implementation for the USB token protocol was chosen as the main candidate. The VHDL source code was generated from [14] using the standard CRC5 ...
Fujimoto, Kazumitsu; Asai, Noriaki; Nakajima, Yoshinaga; Inoue, Kaoru
2015-11-01
Our laboratory, for the purpose of Quality Management System (QMS) improvement, acquired ISO 15189:2003 accreditation 9 years ago and completed the renewal to ISO 15189:2012 last year. In this study, we reviewed the efficacy of ISO 15189 based on an analysis of laboratory director's and managers' opinions. We could realize QMS improvement through the proactive implementation of preventive and corrective actions, and also the continuous implementation of education and delivery by means of reviewing the interview records of ISO 15189:2012 renewal with the laboratory director. All answers to the questionnaire obtained from managers with regard to the advantages of ISO 15189 acquisition agreed with the purpose of ISO 15189. From these results, we concluded that ISO 15189 acquisition was successful for QMS improvement. [Review].
Combustion Diagnostics and Flow Visualization of Hypergolic Combustion and Gelled Mixing Behavior
1997-12-19
... difference. Also, Exciplex fluorescence imaging has been implemented to visualize diffusion layers which form at the contact interface of mixing ... have been implemented and developed as a result of this effort. Among these techniques, the most noteworthy involves a unique application of Exciplex fluorescence for visualization of diffusion layers formed between mixing liquids. Time-resolved images of Exciplex fluorescence have been obtained ...
ERIC Educational Resources Information Center
Garcia Laborda, Jesus
2003-01-01
The main purpose of this paper is to describe the basic findings obtained as a result of the implementation of two projects of Computer and Information Technologies held in Valencia (Spain) between 2002 and 2003 with 92 second year university students enrolled in English as a foreign language to find out their ICT and foreign language needs both…
NASA Technical Reports Server (NTRS)
Gates, R. M.; Williams, J. E.
1974-01-01
Results are given of analytical studies performed in support of the design, implementation, checkout and use of NASA's dynamic docking test system (DDTS). Included are analyses of simulator components, a list of detailed operational test procedures, a summary of simulator performance, and an analysis and comparison of docking dynamics and loads obtained by test and analysis.
1988-08-01
... routing at the network layer. Methods of implementing dynamic power control at the link layer on an individual packet-by-packet transmission basis are ... versions of the simulators that were used to obtain many of the results. Vida Pitman of Rockwell provided an appreciated review of the grammar and style of ...
ERIC Educational Resources Information Center
Montane, Angelica; Chesterfield, Ray
2005-01-01
This document summarizes the results obtained by the AprenDes project in 2004, the project's first year of implementation. It provides the principal findings on program performance from a baseline in May 2004 to the end of the school year (late October 2004). Progress on a number of project objectives related to decentralized school- and…
Pragmatic Approach to Device-Independent Color
NASA Technical Reports Server (NTRS)
Brandt, R. D.; Capraro, K. S.
1995-01-01
JPL has been producing images of planetary bodies for over 30 years. The results of an effort to implement device-independent color on three types of devices are described. The goal is to produce near the same eye-brain response when the observer views the image produced by each device under the correct lighting conditions. The procedure used to calibrate and obtain each device profile is described.
ERIC Educational Resources Information Center
Crepaldi, Davide; Berlingeri, Manuela; Paulesu, Eraldo; Luzzatti, Claudio
2011-01-01
It is generally held that noun processing is specifically sub-served by temporal areas, while the neural underpinnings of verb processing are located in the frontal lobe. However, this view is now challenged by a significant body of evidence accumulated over the years. Moreover, the results obtained so far on the neural implementation of noun and…
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. 
Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
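The simulation-based power estimation the study above describes can be sketched as follows. This is a deliberately minimal illustration under simplified assumptions — a single standardized pollutant, no time-trend or meteorology covariates, and a two-sided Wald test; all function names and parameter values are ours, not the authors':

```python
import numpy as np

def poisson_fit(X, y, iters=10):
    """Newton-Raphson fit of a Poisson log-linear model; returns (beta, cov)."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean() + 0.5)             # start near the intercept MLE
    for _ in range(iters):
        mu = np.exp(X @ beta)
        info = X.T @ (X * mu[:, None])           # Fisher information
        beta = beta + np.linalg.solve(info, X.T @ (y - mu))
    mu = np.exp(X @ beta)
    return beta, np.linalg.inv(X.T @ (X * mu[:, None]))

def simulated_power(n_days, mean_count, beta1, n_sims=200, seed=1):
    """Fraction of simulated data sets in which the pollutant effect is
    detected by a two-sided Wald test at the 0.05 level."""
    rng = np.random.default_rng(seed)
    rejected = 0
    for _ in range(n_sims):
        x = rng.normal(size=n_days)              # standardized pollutant series
        y = rng.poisson(mean_count * np.exp(beta1 * x))
        X = np.column_stack([np.ones(n_days), x])
        b, cov = poisson_fit(X, y)
        rejected += abs(b[1] / np.sqrt(cov[1, 1])) > 1.96
    return rejected / n_sims
```

Varying `n_days` versus `mean_count` in such a sketch reproduces the study's qualitative point: lengthening the series and raising the daily counts increase power in a similar way.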
2014-01-01
Background: In this article, the test implementation of a school-oriented drug prevention program, "Study without Drugs," is discussed. The aims of this study were to determine the results of the process evaluation and to determine whether the proposed school-oriented drug prevention program was effective for the participating pupils during a pilot project. Methods: Sixty second-grade pupils at a junior high school in Paramaribo, Suriname participated in the test implementation. They were divided into two classes. For the process evaluation, the students completed a structured questionnaire focusing on content and teaching method after every lesson. Lessons were qualified with a score from 0-10. The process was also evaluated by the teachers through structured interviews. Attention was paid to reach, dose delivered, dose received, fidelity, connection, achieved effects/observed behaviors, areas for improvement, and lesson strengths. The effect evaluation was conducted using the General Linear Model (repeated measures). The research design was pre-experimental, with pre- and post-test. Results: No class or sex differences were detected among the pupils with regard to the assessment of content, methodology, and qualification of the lessons. Post-testing showed that participating pupils obtained increased knowledge of drugs, their drug-resisting skills were enhanced, and behavior determinants (attitude, subjective norm, self-efficacy, and intention) became more negative towards drugs. Conclusions: From the results of the test implementation, it can be cautiously concluded that the program "Study without Drugs" may yield positive results when applied in schools. Thus, this pilot program can be considered a step towards the development and implementation of an evidence-based school-oriented program for pupils in Suriname. PMID:24920468
[Assessment of user embracement with risk rating in emergency hospital services].
Versa, Gelena Lucinéia Gomes da Silva; Vituri, Dagmar Wilamowius; Buriola, Aline Aparecida; Oliveira, Carlos Aparecido de; Matsuda, Laura Misue
2014-09-01
Cross-sectional and quantitative study, conducted in 2013, aiming to evaluate the implementation of User Embracement with Risk Rating (ACCR) in four Emergency Hospital Services. One hundred fifty-six nurses participated and answered the questionnaire "User Embracement with Risk Rating". The data were treated through descriptive and inferential statistics, using the Kruskal-Wallis test. The implementation of ACCR was assessed as precarious, mainly due to the lack of referral of low-complexity cases to the basic health system, the inadequate physical space for companions, and the lack of discussion and periodic assessment of the flow of care in ACCR. The dimension Result of Implementation obtained a slightly higher score, and Structure was the dimension with the lowest score. It was concluded that the negative assessments by nursing professionals of the referred dimensions in the investigated sites suggest the need for improvements, especially in the dimension Structure.
Carr, Sandra E.; Celenza, Antonio; Lake, Fiona
2009-01-01
The essential procedural skills that newly graduated doctors require are rarely defined, do not take into account pre-vocational employer expectations, and differ between Universities. This paper describes how one Faculty used local evaluation data to drive curriculum change and implement a clinically integrated, multi-professional skills program. A curriculum restructure included a review of all undergraduate procedural skills training by academic staff and clinical departments, resulting in a curriculum skills map. Undergraduate training was then linked with postgraduate expectations using the Delphi process to identify the skills requiring structured standardised training. The skills program was designed and implemented without a dedicated simulation center. This paper shows the benefits of an alternate model in which clinical integration of training and multi-professional collaboration encouraged broad ownership of a program and, in turn, impacted the clinical experience obtained. PMID:20165528
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J. X., E-mail: jsliu9@berkeley.edu; Milbourne, T.; Bitter, M.
The implementation of advanced electron cyclotron emission imaging (ECEI) systems on tokamak experiments has revolutionized the diagnosis of magnetohydrodynamic (MHD) activities and improved our understanding of the instabilities which lead to disruptions. It is therefore desirable to have an ECEI system on the ITER tokamak. However, the large size of the optical components in presently used ECEI systems has, up to now, precluded the implementation of an ECEI system on ITER. This paper describes a new optical ECEI concept that employs a single spherical mirror as the only optical component and exploits the astigmatism of such a mirror to produce an image with one-dimensional spatial resolution on the detector. Since this alternative approach would only require a thin slit as the viewing port to the plasma, it would make the implementation of an ECEI system on ITER feasible. The results obtained from proof-of-principle experiments with a 125 GHz microwave system are presented.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.
2000-01-01
A research program is in progress to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to impact loads. Previously, strain rate dependent inelastic constitutive equations developed to model the polymer matrix were implemented into a mechanics of materials based micromechanics method. In the current work, the computation of the effective inelastic strain in the micromechanics model was modified to fully incorporate the Poisson effect. The micromechanics equations were also combined with classical laminate theory to enable the analysis of symmetric multilayered laminates subject to in-plane loading. A quasi-incremental trapezoidal integration method was implemented to integrate the constitutive equations within the laminate theory. Verification studies were conducted using an AS4/PEEK composite using a variety of laminate configurations and strain rates. The predicted results compared well with experimentally obtained values.
Storage strategies of eddy-current FE-BI model for GPU implementation
NASA Astrophysics Data System (ADS)
Bardel, Charles; Lei, Naiguang; Udpa, Lalita
2013-01-01
In the past few years, graphical processing units (GPUs) have shown tremendous improvements in computational throughput over standard CPU architectures. However, this comes at the cost of restructuring algorithms to fit the strengths and drawbacks of the GPU architecture. A major drawback is the limited memory, and hence the storage of FE stiffness matrices on the GPU is important. In contrast to CPU storage, the GPU storage format has a significant influence on overall performance. This paper presents an investigation of a storage strategy in the implementation of a two-dimensional finite element-boundary integral (FE-BI) model for eddy-current NDE applications on GPU architecture. Specifically, the high-dimensional matrices are manipulated by examining the matrix structure and optimally splitting it into structurally independent component matrices for efficient storage and retrieval of each component. Results obtained using the proposed approach are compared to those of a conventional CPU implementation to validate the method.
NASA Astrophysics Data System (ADS)
Powell, Keith B.; Vaitheeswaran, Vidhya
2010-07-01
The MMT Observatory has recently implemented and tested an optimal wavefront controller for the NGS adaptive optics system. Open-loop atmospheric data collected at the telescope are used as the input to a MATLAB-based analytical model. The model uses nonlinear constrained minimization to determine controller gains and optimize system performance. The real-time controller performing the adaptive optics closed-loop operation is implemented on a dedicated high-performance PC-based quad-core server. The controller algorithm is written in C and uses the GNU Scientific Library for linear algebra. Tests at the MMT confirmed that the optimal controller significantly reduced the residual RMS wavefront compared with the previous controller. Significant reductions in image FWHM and increased peak intensities were obtained in J, H, and K bands. The optimal PID controller is now operating as the baseline wavefront controller for the MMT NGS-AO system.
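As a generic illustration of the kind of discrete PID loop the abstract above refers to — not the MMT controller itself, which is written in C against the GNU Scientific Library; the gains, time step, and first-order plant here are invented for the example:

```python
class PID:
    """Discrete PID controller (a generic textbook form, not the MMT code)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        """Return the control signal for the current error sample."""
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive an invented first-order plant (dx/dt = -x + u) to a unit setpoint.
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0 - x)
    x += 0.01 * (-x + u)
```

In an AO system the "optimal" part lies in choosing the gains (here done offline via constrained minimization against open-loop telescope data), not in the loop structure itself.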
Monitoring and Evaluating the Ebola Response Effort in Two Liberian Communities.
Munodawafa, Davison; Moeti, Matshidiso Rebecca; Phori, Peter Malekele; Fawcett, Stephen B; Hassaballa, Ithar; Sepers, Charles; Reed, Florence DiGennaro; Schultz, Jerry A; Chiriseri, Ephraim Tafadzwa
2018-04-01
Although the Ebola response effort is credited with ultimately reducing the incidence of Ebola Virus Disease (EVD) in West Africa, little is known about the amount and kind of response activities associated with that reduction. Our team monitored Ebola response activities and associated effects in two rural counties in Liberia highly affected by Ebola. We used a participatory monitoring and evaluation system, and drew upon key informant interviews and document review, to systematically capture, code, characterize, and communicate patterns in Ebola response activities. We reviewed situation reports to obtain data on incidence of EVD over time. Results showed that enhanced implementation of Ebola response activities corresponded with decreased incidence of EVD. The pattern of staggered implementation of activities and associated effects, replicated in both counties, is suggestive of the role of Ebola response activities in reducing EVD. Systematic monitoring of response activities to control disease outbreaks holds lessons for implementing and evaluating multi-sector, comprehensive community health efforts.
Urrios, Arturo; de Nadal, Eulàlia; Solé, Ricard; Posas, Francesc
2016-01-01
Engineered synthetic biological devices have been designed to perform a variety of functions, from sensing molecules and bioremediation to energy production and biomedicine. Nevertheless, a major limitation of in vivo circuit implementation is the constraint associated with the use of standard methodologies for circuit design. Thus, the future success of these devices depends on obtaining circuits with scalable complexity and reusable parts. Here we show how to build complex computational devices using multicellular consortia and space as key computational elements. This spatial modular design grants scalability, since its general architecture is independent of the circuit's complexity, minimizes wiring requirements, and allows component reusability with minimal genetic engineering. The potential use of this approach is demonstrated by the implementation of complex logic functions with up to six inputs, showing the scalability and flexibility of this method. The potential implications of our results are outlined. PMID:26829588
Purcell, Maureen K.; Getchell, Rodman G.; McClure, Carol A.; Weber, S.E.; Garver, Kyle A.
2011-01-01
Real-time, or quantitative, polymerase chain reaction (qPCR) is quickly supplanting other molecular methods for detecting the nucleic acids of human and other animal pathogens owing to the speed and robustness of the technology. As the aquatic animal health community moves toward implementing national diagnostic testing schemes, it will need to evaluate how qPCR technology should be employed. This review outlines the basic principles of qPCR technology, considerations for assay development, standards and controls, assay performance, diagnostic validation, implementation in the diagnostic laboratory, and quality assurance and control measures. These factors are fundamental for ensuring the validity of qPCR assay results obtained in the diagnostic laboratory setting.
Atmospheric Constituents in GEOS-5: Components for an Earth System Model
NASA Technical Reports Server (NTRS)
Pawson, Steven; Douglass, Anne; Duncan, Bryan; Nielsen, Eric; Ott, Leslie; Strode, Sarah
2011-01-01
The GEOS-5 model is being developed for weather and climate processes, including the implementation of "Earth System" components. While the stratospheric chemistry capabilities are mature, we are presently extending the model to include predictions of tropospheric composition and chemistry, including CO2, CH4, CO, nitrogen species, etc. (Aerosols are also implemented but are beyond the scope of this paper.) This work gives an overview of our chemistry modules, the approaches taken to represent surface emissions and uptake of chemical species, and some studies of the sensitivity of the atmospheric circulation to changes in atmospheric composition. Results are obtained through focused experiments and multi-decadal simulations.
Enhancement of event related potentials by iterative restoration algorithms
NASA Astrophysics Data System (ADS)
Pomalaza-Raez, Carlos A.; McGillem, Clare D.
1986-12-01
An iterative procedure for the restoration of event-related potentials (ERPs) is proposed and implemented. The method makes use of assumed or measured statistical information about latency variations in the individual ERP components. The signal model used for the restoration algorithm consists of a time-varying linear distortion and a positivity/negativity constraint. Additional preprocessing in the form of low-pass filtering is needed in order to mitigate the effects of additive noise. Numerical results obtained with real data clearly show the presence of enhanced and regenerated components in the restored ERPs. The procedure is easy to implement, which makes it convenient compared to other proposed techniques for the restoration of ERP signals.
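A minimal sketch of iterative restoration under a positivity constraint, in the spirit of the method described above: this is a generic projected-Landweber iteration for a fixed linear distortion, not the authors' time-varying algorithm, and the function name and step-size rule are ours.

```python
import numpy as np

def restore_positive(y, H, n_iter=200, step=None):
    """Projected Landweber iteration: x <- P_+(x + step * H^T (y - H x)),
    where P_+ clips negatives to zero (the positivity constraint)."""
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1 / ||H||_2^2 ensures convergence
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = x + step * (H.T @ (y - H @ x))       # gradient step on ||y - Hx||^2 / 2
        x = np.maximum(x, 0.0)                   # project onto the positive orthant
    return x
```

Because the projection onto the positive orthant is non-expansive, each iteration keeps the estimate feasible while driving the data residual down, which is the basic mechanism constrained restoration schemes of this kind rely on.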
Ab initio determination of effective electron-phonon coupling factor in copper
NASA Astrophysics Data System (ADS)
Ji, Pengfei; Zhang, Yuwen
2016-04-01
The electron-temperature (Te) dependent electron density of states g(ε), Fermi-Dirac distribution f(ε), and electron-phonon spectral function α2F(Ω) are computed as prerequisites for obtaining the effective electron-phonon coupling factor Ge-ph. The obtained Ge-ph is implemented into a coupled molecular dynamics (MD) and two-temperature model (TTM) simulation of femtosecond laser heating. By monitoring the temperature evolution of the electron and lattice subsystems, the result utilizing Ge-ph from the ab initio calculation shows a faster decrease of Te and increase of Tl than those using Ge-ph from the phenomenological treatment. The approach of calculating Ge-ph and its implementation into the MD-TTM simulation is applicable to other metals.
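The two-temperature model underlying the MD-TTM simulation above can be illustrated with a simple explicit integration. This sketch assumes constant heat capacities and coupling factor and an invented Gaussian source term, whereas the paper computes a Te-dependent Ge-ph ab initio; all numerical values here are illustrative only.

```python
import numpy as np

def two_temperature_model(G, Ce, Cl, S, t_end, dt=1e-15, Te0=300.0, Tl0=300.0):
    """Explicit Euler integration of the two-temperature model:
         Ce * dTe/dt = -G * (Te - Tl) + S(t)   (electrons, heated by the laser)
         Cl * dTl/dt =  G * (Te - Tl)          (lattice, heated via coupling)
    Constant Ce, Cl, and G are a simplification; the paper's Ge-ph is Te-dependent."""
    Te, Tl, t = Te0, Tl0, 0.0
    while t < t_end:
        dTe = (-G * (Te - Tl) + S(t)) / Ce
        dTl = (G * (Te - Tl)) / Cl
        Te += dt * dTe
        Tl += dt * dTl
        t += dt
    return Te, Tl
```

Because Ce is orders of magnitude smaller than Cl, the electron temperature spikes during the pulse and then relaxes toward the lattice temperature on the Cl/G time scale, which is the behavior the paper monitors.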
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baring-Gould, I.; Flowers, L.; Kelly, M.
2009-05-01
As the world moves toward a vision of expanded wind energy, the industry is faced with the challenges of obtaining a skilled workforce and addressing local wind development concerns. Wind Powering America's Wind for Schools Program works to address these issues. The program installs small wind turbines at community "host" schools while developing wind application centers at higher education institutions. Teacher training with interactive and interschool curricula is implemented at each host school, while students at the universities assist in implementing the host school systems and participate in other wind course work. This poster provides an overview of the program's objectives, goals, approach, and results.
NASA Technical Reports Server (NTRS)
Camarda, C. J.; Adelman, H. M.
1984-01-01
The implementation of static and dynamic structural-sensitivity derivative calculations in a general purpose, finite-element computer program denoted the Engineering Analysis Language (EAL) System is described. Derivatives are calculated with respect to structural parameters, specifically, member sectional properties including thicknesses, cross-sectional areas, and moments of inertia. Derivatives are obtained for displacements, stresses, vibration frequencies and mode shapes, and buckling loads and mode shapes. Three methods for calculating derivatives are implemented (analytical, semianalytical, and finite differences), and comparisons of computer time and accuracy are made. Results are presented for four examples: a swept wing, a box beam, a stiffened cylinder with a cutout, and a space radiometer-antenna truss.
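The finite-difference approach to sensitivity derivatives compared in the EAL study above can be illustrated on a closed-form example. The cantilever tip-deflection formula and the values below are textbook material, not data from the study; the central difference is checked against the analytic derivative, just as the study compares methods for accuracy.

```python
def tip_deflection(P, L, E, b, h):
    """Tip deflection of an end-loaded cantilever: w = P L^3 / (3 E I), I = b h^3 / 12."""
    I = b * h ** 3 / 12.0
    return P * L ** 3 / (3.0 * E * I)

def central_difference(f, x, dx):
    """Finite-difference sensitivity derivative df/dx (second-order accurate)."""
    return (f(x + dx) - f(x - dx)) / (2.0 * dx)

# Sensitivity of deflection to section thickness h: analytic vs. finite difference.
P, L, E, b, h = 1000.0, 2.0, 200e9, 0.05, 0.1    # illustrative values (SI units)
w = tip_deflection(P, L, E, b, h)
analytic = -3.0 * w / h                           # dw/dh = -3 w / h, since I is proportional to h^3
numeric = central_difference(lambda hh: tip_deflection(P, L, E, b, hh), h, 1e-6)
```

The trade-off the study quantifies shows up even here: the finite difference needs two extra function evaluations per design variable, while the analytic route needs a hand-derived expression.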
Fire fighters as basic life support responders: a study of successful implementation.
Høyer, Christian Bjerre; Christensen, Erika Frischknecht
2009-04-02
First responders are recommended as a supplement to the Emergency Medical Services (EMS) in order to achieve early defibrillation. Practical and organisational aspects are essential when trying to implement new parts in the "Chain of Survival"; areas to address include minimizing dispatch time, ensuring efficient and quick communication, and choosing areas with appropriate driving distances. The aim of this study was to implement a system using Basic Life Support (BLS) responders equipped with an automatic external defibrillator in an area with relatively short emergency medical services' response times. Success criteria for implementation was defined as arrival of the BLS responders before the EMS, attachment (and use) of the AED, and successful defibrillation. This was a prospective observational study from September 1, 2005 to December 31, 2007 (28 months) in the city of Aarhus, Denmark. The BLS responder system was implemented in an area up to three kilometres (driving distance) from the central fire station, encompassing approximately 81,500 inhabitants. The team trained on each shift and response times were reduced by choice of area and by sending the alarm directly to the fire brigade dispatcher. The BLS responders had 1076 patient contacts. The median response time was 3.5 minutes (25th percentile 2.75, 75th percentile 4.25). The BLS responders arrived before EMS in 789 of the 1076 patient contacts (73%). Cardiac arrest was diagnosed in 53 cases, the AED was attached in 29 cases, and a shockable rhythm was detected in nine cases. Eight were defibrillated using an AED. Seven of the eight obtained return of spontaneous circulation (ROSC). Six of the seven obtaining ROSC survived more than 30 days. In this study, the implementation of BLS responders may have resulted in successful resuscitations. 
On the basis of the close cooperation between all participants in the chain of survival, this project contributed to the first link: short response time and trained personnel to ensure early defibrillation.
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Schmidt, Phillip H.
1993-01-01
A parameter optimization framework has earlier been developed to solve the problem of partitioning a centralized controller into a decentralized, hierarchical structure suitable for integrated flight/propulsion control implementation. This paper presents results from the application of the controller partitioning optimization procedure to IFPC design for a Short Take-Off and Vertical Landing (STOVL) aircraft in transition flight. The controller partitioning problem and the parameter optimization algorithm are briefly described. Insight is provided into choosing various 'user' selected parameters in the optimization cost function such that the resulting optimized subcontrollers will meet the characteristics of the centralized controller that are crucial to achieving the desired closed-loop performance and robustness, while maintaining the desired subcontroller structure constraints that are crucial for IFPC implementation. The optimization procedure is shown to improve upon the initial partitioned subcontrollers and lead to performance comparable to that achieved with the centralized controller. This application also provides insight into the issues that should be addressed at the centralized control design level in order to obtain implementable partitioned subcontrollers.
NASA Astrophysics Data System (ADS)
Gausachs, Gaston
2008-07-01
The Near Infrared Coronagraphic Imager (NICI) being commissioned at Gemini was upgraded with a more powerful Chilled Water Glycol System to address early overheating problems. The previous system was replaced with a completely new design favoring improved airflow and increased heat transfer capabilities. The research leading to this upgrade showed a significant lack of cooling power in the original design. The solution was a combination of a commercial heat exchanger and fans and a custom-built enclosure. As a prime infrared telescope facility, Gemini is very much interested in minimizing the amount of heat dissipated to the ambient air. The results obtained through the implementation of this solution will be helpful in understanding the state of other existing electronics enclosures as well as those for new instruments to come. With the advent of electronics-intensive AO systems, future electronics enclosures must take full advantage of improved cooling. This paper describes the design and implementation phases of the project. The results under maximum operating capacity proved to be within the expected theoretical values and were deemed successful.
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
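The direct-method SSA loop underlying such simulations can be sketched in a few lines; this is a generic single-CPU sketch of Gillespie's direct method for a birth-death model, not the authors' GPU code:

```python
import math
import random

def ssa_direct(x0, k_prod, k_deg, t_end, seed=None):
    """Minimal Gillespie direct-method SSA for a birth-death process:
    0 -> X at rate k_prod, X -> 0 at rate k_deg * x."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    trajectory = [(t, x)]  # time course recorded at each reaction event
    while t < t_end:
        a1 = k_prod           # propensity of the production reaction
        a2 = k_deg * x        # propensity of the degradation reaction
        a0 = a1 + a2
        if a0 == 0:
            break
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        if rng.random() * a0 < a1:               # pick which reaction fired
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

# Many independent realizations are needed for statistics (the costly part
# the GPU parallelizes); the mean copy number should approach the
# stationary value k_prod / k_deg.
runs = [ssa_direct(0, 10.0, 1.0, 50.0, seed=s)[-1][1] for s in range(200)]
print(sum(runs) / len(runs))
```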
González, Lorena; Elgart, Jorge F; Calvo, Héctor; Gagliardino, Juan J
2013-01-01
To measure the impact of a diabetes and cardiovascular risk factors program implemented in a social security institution upon short- and long-term clinical/metabolic outcomes and costs of care. Observational longitudinal cohort analysis of clinical/metabolic data and resource use of 300 adult male and female program participants with diabetes before (baseline) and 1 and 3 years after implementation of the program. Data were obtained from clinical records (Qualidiab) and the administration's database. The implementation of the program in "real world" conditions resulted in an immediate and sustainable improvement of the quality of care provided to people with diabetes incorporated therein. We also recorded a more appropriate oral therapy prescription for hyperglycemia and cardiovascular risk factors (CVRFs), as well as a decrease of events related to chronic complications. This improvement was associated with an increased use of diagnostic and therapeutic resources, particularly those related to pharmacy prescriptions, not specifically used for the control of hyperglycemia and other CVRFs. The implementation of a diabetes program in real-world conditions results in a significant short- and long-term improvement of the quality of care provided to people with diabetes and other CVRFs, but simultaneously increased the use of resources and the cost of diagnostic and therapeutic practices. Since controlled studies have shown improvement in quality of care without increasing costs, our results suggest the need to include management-control strategies in these programs for appropriate medical and administrative feedback to ensure the simultaneous improvement of clinical outcomes and optimization of the use of resources.
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Bednarcyk, Brett; Aboudi, Jacob
2004-01-01
The High-Fidelity Generalized Method of Cells (HFGMC) micromechanics model has recently been reformulated by Bansal and Pindera (in the context of elastic phases with perfect bonding) to maximize its computational efficiency. This reformulated version of HFGMC has now been extended to include both inelastic phases and imperfect fiber-matrix bonding. The present paper presents an overview of the HFGMC theory in both its original and reformulated forms and a comparison of the results of the two implementations. The objective is to establish the correlation between the two HFGMC formulations and document the improved efficiency offered by the reformulation. The results compare the macro and micro scale predictions of the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) versions of both formulations into the inelastic regime, and, in the case of the discontinuous reinforcement version, with both perfect and weak interfacial bonding. The results demonstrate that identical predictions are obtained using either the original or reformulated implementations of HFGMC aside from small numerical differences in the inelastic regime due to the different implementation schemes used for the inelastic terms present in the two formulations. Finally, a direct comparison of execution times is presented for the original formulation and reformulation code implementations. It is shown that as the discretization employed in representing the composite repeating unit cell becomes increasingly refined (requiring a larger number of sub-volumes), the reformulated implementation becomes significantly (approximately an order of magnitude at best) more computationally efficient in both the continuous reinforcement (doubly-periodic) and discontinuous reinforcement (triply-periodic) cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, William H., E-mail: millerwh@berkeley.edu; Cotton, Stephen J., E-mail: StephenJCotton47@gmail.com
It is pointed out that the classical phase space distribution in action-angle (a-a) variables obtained from a Wigner function depends on how the calculation is carried out: if one computes the standard Wigner function in Cartesian variables (p, x), and then replaces p and x by their expressions in terms of a-a variables, one obtains a different result than if the Wigner function is computed directly in terms of the a-a variables. Furthermore, the latter procedure gives a result more consistent with classical and semiclassical theory—e.g., by incorporating the Bohr-Sommerfeld quantization condition (quantum states defined by integer values of the action variable) as well as the Heisenberg correspondence principle for matrix elements of an operator between such states—and has also been shown to be more accurate when applied to electronically non-adiabatic applications as implemented within the recently developed symmetrical quasi-classical (SQC) Meyer-Miller (MM) approach. Moreover, use of the Wigner function (obtained directly) in a-a variables shows how our standard SQC/MM approach can be used to obtain off-diagonal elements of the electronic density matrix by processing in a different way the same set of trajectories already used (in the SQC/MM methodology) to obtain the diagonal elements.
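For reference, the "standard Wigner function in Cartesian variables (p, x)" mentioned above has the usual textbook form (a standard definition supplied here, not taken from the abstract):

```latex
W(p, x) \;=\; \frac{1}{2\pi\hbar} \int_{-\infty}^{\infty} d\Delta\;
e^{-\,i p \Delta/\hbar}\,
\psi\!\left(x + \tfrac{\Delta}{2}\right)
\psi^{*}\!\left(x - \tfrac{\Delta}{2}\right)
```

The abstract's point is that substituting p and x by their action-angle expressions in W is not the same operation as constructing the Wigner function directly in action-angle variables.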
Rodríguez-Cerrillo, Matilde; Fernández-Diaz, Eddita; Iñurrieta-Romero, Amaia; Poza-Montoro, Ana
2012-01-01
The purpose of this paper is to describe the changes made and the results obtained after implementation of a quality management system (QMS) according to ISO standards in a Hospital in the Home (HIH) Unit. The study took place in the HIH Unit of the Clinico Hospital, Madrid, Spain, and examined admissions, mean stay, patient satisfaction, adverse events, returns to hospital, non-admitted referrals, complaints, compliance with protocols, equipment failures, and resolution of urgent consultations. In June 2008, the HIH Unit of the Clinico Hospital obtained ISO certification. The main results achieved are as follows. Patient satisfaction increased: in June 2008, the quality of care provided by staff was scored at 4.7 (on a scale of 1 to 5), and in 2010 it was scored at 4.96; the patient satisfaction rate rose from 92 percent to 98.8 percent. No complaints from patients were received. Unscheduled returns to hospital decreased from 7 percent to 3 percent. There were no medical equipment failures. External suppliers' performance improved, and the material and medication needed by staff were available when necessary. The number of admissions increased. Compliance with protocols reached 97 percent. Inappropriate referrals decreased by 8 percent. Six medication-related incidents were detected; in two cases the incident was not due to an error, and in the other four the error could have been detected before reaching the patient. Implementation of an ISO quality management system improved quality of care and patient satisfaction in an HIH Unit.
NASA Astrophysics Data System (ADS)
Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando
2017-08-01
Precise point positioning (PPP) is a well-established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver, or a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: either in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if both are well implemented. However, if a sequential solution is needed (that is, not only the final coordinates but also those for previous GNSS epochs), as in convergence studies, finding a batch solution becomes a very time-consuming task owing to the matrix inversions that accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed in the previous epoch to obtain the solution for the current epoch. Filter implementations therefore need extra consideration of user dynamics and parameter state variations between observation epochs, with appropriate stochastic updates of the parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process led to a mean reduction in computation time of 45%.
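The batch-versus-sequential equivalence discussed above can be illustrated on a generic linear model (plain least squares with synthetic data; no actual GNSS observation equations are used here):

```python
import numpy as np

# Batch vs. sequential estimation of static parameters x in A x = b.
# A recursive least-squares filter processed epoch by epoch should agree
# with a single batch adjustment over all epochs at once.
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])

A_all, b_all = [], []
P = np.eye(2) * 1e6   # large initial covariance (diffuse prior)
x_seq = np.zeros(2)   # sequential (filter) estimate

for epoch in range(100):
    A = rng.normal(size=(3, 2))                   # 3 observations this epoch
    b = A @ x_true + rng.normal(scale=0.01, size=3)
    A_all.append(A)
    b_all.append(b)
    # Sequential update, one observation at a time (static parameters,
    # so no state-transition or process noise is needed):
    for i in range(3):
        a = A[i:i+1]
        K = P @ a.T / (1.0 + a @ P @ a.T)          # gain
        x_seq = x_seq + (K * (b[i] - a @ x_seq)).ravel()
        P = P - K @ a @ P

# Batch solution over all epochs at once:
x_batch = np.linalg.lstsq(np.vstack(A_all), np.concatenate(b_all), rcond=None)[0]
print(x_seq, x_batch)  # the two estimates agree closely
```

The filter gives the per-epoch (sequential) solutions for free as a by-product, which is exactly the output the paper wants to obtain cheaply from batch adjustment.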
Lecellier, A; Gaydou, V; Mounier, J; Hermet, A; Castrec, L; Barbier, G; Ablain, W; Manfait, M; Toubas, D; Sockalingum, G D
2015-02-01
Filamentous fungi may cause food and feed spoilage and produce metabolites, such as mycotoxins, that are harmful to human and animal health. Identification of fungi using conventional phenotypic methods is time-consuming, and molecular methods are still quite expensive and require specific laboratory skills. In the last two decades, Fourier transform infrared (FTIR) spectroscopy has been shown to be an efficient tool for microorganism identification. The aims of this study were to use a simple protocol for the identification of filamentous fungi using FTIR spectroscopy coupled with partial least squares discriminant analysis (PLS-DA), to implement a procedure to validate the obtained results, and to assess the transferability of the method and database. FTIR spectra of 486 strains (43 genera and 140 species) were recorded. An IR spectral database built with 288 strains was used to identify 105 different strains; 99.17% and 92.3% of the spectra derived from these strains were correctly assigned at the genus and species levels, respectively. The establishment of a score and a threshold made it possible to validate 80.79% of the results obtained. A standardization function (SF) was also implemented and tested on FTIR data from another instrument at a different site; it increased the percentage of correctly predicted spectra for this set from 72.15% to 89.13%. This study confirms the good performance of high-throughput FTIR spectroscopy for fungal identification using a spectral library of molds of industrial relevance. Copyright © 2014 Elsevier Ltd. All rights reserved.
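The score-and-threshold validation idea can be sketched with a toy spectral library; the spectra, the correlation score, and the threshold below are all invented for illustration (the study itself used PLS-DA on measured FTIR spectra):

```python
import numpy as np

# A test spectrum is compared against a small reference library; the top
# match is accepted only if its similarity score clears a threshold,
# otherwise the identification is left unvalidated.
rng = np.random.default_rng(1)
wavenumbers = np.linspace(400, 4000, 500)

def synthetic_spectrum(center):
    """A hypothetical absorbance band (Gaussian) plus measurement noise."""
    return np.exp(-((wavenumbers - center) / 150.0) ** 2) \
        + rng.normal(scale=0.02, size=wavenumbers.size)

library = {"genus_A": synthetic_spectrum(1200),
           "genus_B": synthetic_spectrum(2900)}

def identify(spectrum, library, threshold=0.8):
    """Return (best_match, score), or (None, score) below the threshold."""
    scores = {name: np.corrcoef(spectrum, ref)[0, 1]
              for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

name, score = identify(synthetic_spectrum(1200), library)
print(name, round(score, 2))
```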
Analysis of production flow process with lean manufacturing approach
NASA Astrophysics Data System (ADS)
Siregar, Ikhsan; Arif Nasution, Abdillah; Prasetio, Aji; Fadillah, Kharis
2017-09-01
This research was conducted at a company engaged in the production of Fast Moving Consumer Goods (FMCG). The company's production process still contains several activities that cause waste. Non-value-added (NVA) activities are still widely found in practice, lengthening the cycle time needed to make the product. One way to improve the production line is to apply lean manufacturing methods, identifying waste along the value stream in order to find non-value-added activities. These activities can be eliminated or reduced by using value stream mapping and identifying them through process activity mapping. The results show that 26% of activities are value-added and 74% are non-value-added. The current state map of the production process gives a process lead time of 678.11 minutes and a processing time of 173.94 minutes. Under the proposed improvements, value-added time accounts for 41% of production process activities and non-value-added time for 59%, and the future state map gives a process lead time of 426.69 minutes and a processing time of 173.89 minutes.
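The value-added percentages reported above follow directly from the ratio of processing (value-adding) time to total lead time, a standard lean metric; using the current state map figures:

```python
# Value-added ratio (process cycle efficiency) from the current state map.
lead_time_min = 678.11        # total process lead time, minutes
processing_time_min = 173.94  # value-adding processing time, minutes

value_added_pct = 100.0 * processing_time_min / lead_time_min
print(f"value-added: {value_added_pct:.0f}%  "
      f"non-value-added: {100 - value_added_pct:.0f}%")
```

The same ratio applied to the future state map figures (173.89 / 426.69) reproduces the reported 41% value-added share.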
NASA Technical Reports Server (NTRS)
Reddy, C. J.
1998-01-01
An implementation of the Model Based Parameter Estimation (MBPE) technique is presented for obtaining the frequency response of the Radar Cross Section (RCS) of arbitrarily shaped, three-dimensional perfect electric conductor (PEC) bodies. An Electric Field Integral Equation (EFIE) is solved using the Method of Moments (MoM) to compute the RCS. The electric current is expanded in a rational function, and the coefficients of the rational function are obtained using the frequency derivatives of the EFIE. Using the rational function, the electric current on the PEC body is obtained over a frequency band, and from the current at different frequencies, the RCS of the PEC body is obtained over a wide frequency band. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth. Good agreement between MBPE and the exact solution over the bandwidth is observed.
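The core MBPE idea, fitting a low-order rational function to a few frequency samples and then evaluating it across the band, can be sketched as follows; the sample response is a generic rational curve, not an EFIE/MoM solution, and the paper's use of frequency derivatives is not reproduced here:

```python
import numpy as np

def fit_rational(f, y, num_order=2, den_order=2):
    """Fit y(f) ~ N(f)/D(f) by solving y*D = N linearly in the
    coefficients, with D normalized so its constant term is 1."""
    cols = [f**k for k in range(num_order + 1)]           # numerator terms
    cols += [-y * f**k for k in range(1, den_order + 1)]  # denominator terms
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    num = coeffs[:num_order + 1]                    # lowest-order first
    den = np.concatenate(([1.0], coeffs[num_order + 1:]))
    return num, den

def eval_rational(num, den, f):
    return np.polyval(num[::-1], f) / np.polyval(den[::-1], f)

# Fit to a handful of samples of a stand-in frequency response ...
f_samples = np.linspace(1.0, 2.0, 7)
y_samples = 1.0 / (1.0 + f_samples**2)
num, den = fit_rational(f_samples, y_samples)

# ... then evaluate the model densely across the whole band.
f_band = np.linspace(1.0, 2.0, 50)
y_model = eval_rational(num, den, f_band)
print(np.max(np.abs(y_model - 1.0 / (1.0 + f_band**2))))  # small over the band
```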
NASA Astrophysics Data System (ADS)
Moiseyev, V. A.; Nazarov, V. P.; Zhuravlev, V. Y.; Zhuykov, D. A.; Kubrikov, M. V.; Klokotov, Y. N.
2016-12-01
The development of new technological equipment for the implementation of highly effective methods of recovering highly viscous oil from deep reservoirs is an important scientific and technical challenge. Thermal recovery methods are promising approaches to solving the problem. It is necessary to carry out theoretical and experimental research aimed at developing oil-well tubing (OWT) with composite heat-insulating coatings based on basalt and glass fibers. We used the method of finite element analysis in Nastran software, which implements complex scientific and engineering calculations, including the calculation of the stress-strain state of mechanical systems, the solution of heat transfer problems, the study of nonlinear statics, dynamic transient analysis, frequency characteristics, etc. As a result, we obtained a mathematical model of thermal conductivity that describes the steady-state temperature and its changes in the fibrous, highly porous material, with heat loss by Stefan-Boltzmann radiation. This was performed for the first time using computer modeling in the Nastran software environment. The results give grounds for further implementation of real OWT designs when applying thermal methods to increase the rate of oil production and mitigate environmental impacts.
Hermanowski, Tomasz Roman; Drozdowska, Aleksandra Krystyna; Kowalczyk, Marta
2015-01-01
Objectives In this paper, we emphasised that effective management of health plan beneficiaries' access to reimbursed medicines requires a proper institutional set-up. The main objective was to identify and recommend an institutional framework of integrated pharmaceutical care providing effective, safe and equitable access to medicines. Method The institutional framework of drug policy was derived on the basis of publications obtained by systematic reviews. A comparative analysis concerning the adoption of coordinated pharmaceutical care services in the USA, the UK, Poland, Italy, Denmark and Germany was performed. Results While most European Union Member States promote the implementation of selected e-Health tools, like e-Prescribing, these efforts do not necessarily amount to an integrated package. There is no single agent who would manage insured patients' access to medicines and health care in a coordinated manner, thereby increasing the efficiency and safety of drug policy. More attention should be paid by European Union Member States to how to integrate various e-Health tools to enhance benefits to both individuals and societies. One solution could be to implement an integrated "pharmacy benefit management" model, which is well established in the USA and Canada and provides an integrated package of cost-containment methods, implemented within a transparent institutional framework and powered by the strong motivation of the agent. PMID:26528099
NASA Astrophysics Data System (ADS)
Inguane, Ronaldo; Gallego-Ayala, Jordi; Juízo, Dinis
In the context of integrated water resources management implementation, the decentralization of water resources management (DWRM) at the river basin level is a crucial aspect for its success. However, decentralization requires the creation of new institutions on the ground, to stimulate an environment enabling stakeholder participation and integration into the water management decision-making process. In 1991, Mozambique began restructuring its water sector toward operational decentralized water resources management. Within this context of decentralization, new legal and institutional frameworks have been created, e.g., Regional Water Administrations (RWAs) and River Basin Committees. This paper identifies and analyzes the key institutional challenges and opportunities of DWRM implementation in Mozambique. The paper uses a critical social science research methodology for in-depth analysis of the roots of the constraining factors for the implementation of DWRM. The results obtained suggest that RWAs should be designed considering the specific geographic and infrastructural conditions of their jurisdictional areas and that priorities should be selected in their institutional capacity building strategies that match local realities. Furthermore, the results also indicate that RWAs have enjoyed limited support from basin stakeholders, mainly in basins with less hydraulic infrastructure, in securing water availability for their users and minimizing the effect of climate variability.
Implementation of E-Government in Mexico: The Case of Infonavit
NASA Astrophysics Data System (ADS)
Herrera, Lizbeth; Gil-Garcia, J. Ramon
The implementation of information and communication technologies (ICTs) in the public sector is a strategy for administrative reform that has grown in importance in recent years. The use of ICT in government can help to improve the efficiency, quality, and transparency of public services and reduce the operating costs of bureaucracy. ICTs have also opened a new communication channel for government to provide public services to citizens without intermediaries. However, the implementation of an ICT initiative is not a simple process. Organizations frequently invest a great amount of resources into ICT initiatives, but the results they obtain often do not meet expectations. This observation is particularly true in some developing countries. Based on a case study of a Mexican federal agency, this chapter analyzes a successful strategy involving three ICT projects, taking into consideration institutional, organizational, and managerial aspects. Overall, the results of this study show that having a strategic plan that aligns the ICT project objectives with the overarching organizational goals leads to successful implementation because the technical, organizational, and institutional resources are managed in an integrated fashion. The chapter also reports on specific factors that had an impact on the characteristics and success of the three ICT projects.
Impact of telemedicine in hospital culture and its consequences on quality of care and safety
Steinman, Milton; Morbeck, Renata Albaladejo; Pires, Philippe Vieira; Abreu, Carlos Alberto Cordeiro; Andrade, Ana Helena Vicente; Terra, Jose Claudio Cyrineu; Teixeira, José Carlos; Kanamura, Alberto Hideki
2015-01-01
ABSTRACT Objective To describe the impact of a telemedicine application on the clinical process of care and its different effects on hospital culture and healthcare practice. Methods The concept of telemedicine through real-time audiovisual coverage was implemented at two different hospitals in São Paulo: a secondary public hospital, Hospital Municipal Dr. Moysés Deutsch, and a tertiary private hospital, Hospital Israelita Albert Einstein. Results Data were obtained from 257 teleconsultation records over a 12-month period and were compared to a similar period before telemedicine implementation. For 18 patients (7.1%), the telemedicine consultation influenced the diagnostic conclusion, and for 239 patients (92.9%), the consultation contributed to clinical management. After telemedicine implementation, the stroke thrombolysis protocol was applied in 11% of ischemic stroke patients. The telemedicine approach reduced the need to transfer patients to another hospital for neurological evaluation by 25.9%. A sepsis protocol was adopted and led to a 30.4% reduction in mortality from severe sepsis. Conclusion The application is associated with differences in the use of health services: emergency transfers, mortality, implementation of protocols, and patient management decisions, especially regarding thrombolysis. These results highlight the role of telemedicine as a vector for the transformation of hospital culture, impacting the safety and quality of care. PMID:26676268
AESS: Accelerated Exact Stochastic Simulation
NASA Astrophysics Data System (ADS)
Jenkins, David D.; Peterson, Gregory D.
2011-12-01
The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or of ensembles of simulations used for sweeping parameters or to provide statistically significant results.
Program summary
Program title: AESS
Catalogue identifier: AEJW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: University of Tennessee copyright agreement
No. of lines in distributed program, including test data, etc.: 10 861
No. of bytes in distributed program, including test data, etc.: 394 631
Distribution format: tar.gz
Programming language: C for processors, CUDA for NVIDIA GPUs
Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators.
Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS
Classification: 3, 16.12
Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME.
Solution method: The Accelerated Exact Stochastic Simulation (AESS) tool provides implementations of a wide variety of popular variations on the Gillespie method. Users can select the specific algorithm considered most appropriate. Comparisons between the methods and with other available implementations indicate that AESS provides the fastest known implementation of Gillespie's method for a variety of test models. Users may wish to execute ensembles of simulations to sweep parameters or to obtain better statistical results, so AESS supports acceleration of ensembles of simulations using parallel processing with MPI, SSE vector units on x86 processors, and/or NVIDIA GPUs with CUDA.
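The ensemble use case described above can be sketched as follows; the pure-decay model and the serial loop are illustrative only (AESS distributes such ensembles over MPI ranks, SSE lanes, or CUDA threads):

```python
import math
import random

# Many independent SSA realizations of a pure-decay model X -> 0 at rate
# k*x, aggregated into an ensemble statistic. Each realization is
# independent, which is what makes ensemble runs embarrassingly parallel.
def extinction_time(k, x0, rng):
    """One Gillespie realization; time until the population reaches zero."""
    t, x = 0.0, x0
    while x > 0:
        t += -math.log(1.0 - rng.random()) / (k * x)  # exponential step
        x -= 1
    return t

def ensemble_mean(k, x0, n_runs):
    rng = random.Random(42)
    return sum(extinction_time(k, x0, rng) for _ in range(n_runs)) / n_runs

# The mean extinction time approaches the harmonic sum H_{x0} / k.
print(ensemble_mean(1.0, 100, 500))  # close to 5.19 for k=1, x0=100
```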
Electronic health records: critical success factors in implementation.
Safdari, Reza; Ghazisaeidi, Marjan; Jebraeily, Mohamad
2015-04-01
EHR implementation results in improved quality of care, customer orientation, and timely access to complete information. Despite the potential benefits of EHRs, implementation is a difficult and complex task whose success depends on many factors. The purpose of this research is to identify the key success factors of EHR implementation. This is a cross-sectional survey conducted with the participation of 340 staff members in different types of jobs at hospitals of TUMS in 2014. Data were collected using a self-structured questionnaire that was estimated to be both reliable and valid, and were analyzed with SPSS software using descriptive and analytical statistics. Of the respondents, 58.2% were female; their mean age and work experience were 37.7 and 11.2 years, respectively, and most respondents (52.5%) held a bachelor's degree. In terms of jobs, the largest groups were nurses (33%) and physicians (21%). Among the main categories of critical success factors in implementing EHRs, the highest rating was given to project management (4.62) and the lowest to organizational factors (3.98). Success in implementing EHRs requires greater attention to project management and human factors. An EHR implementation roadmap should therefore be created, teamwork established with the participation of end-users, suitable leadership selected, sufficient training provided so that users can operate the system, and support put in place for maintaining and promoting the system.
Traveling-Wave Solutions of the Kolmogorov-Petrovskii-Piskunov Equation
NASA Astrophysics Data System (ADS)
Pikulin, S. V.
2018-02-01
We consider quasi-stationary solutions of a problem without initial conditions for the Kolmogorov-Petrovskii-Piskunov (KPP) equation, which is a quasilinear parabolic one arising in the modeling of certain reaction-diffusion processes in the theory of combustion, mathematical biology, and other areas of natural sciences. A new efficiently numerically implementable analytical representation is constructed for self-similar plane traveling-wave solutions of the KPP equation with a special right-hand side. Sufficient conditions for an auxiliary function involved in this representation to be analytical for all values of its argument, including the endpoints, are obtained. Numerical results are obtained for model examples.
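For the classical Fisher-KPP equation u_t = u_xx + u(1-u) (the textbook quadratic case, not the special right-hand side treated in the paper), a closed-form traveling wave due to Ablowitz and Zeppetella is u(x,t) = (1 + e^{(x-ct)/sqrt(6)})^{-2} with speed c = 5/sqrt(6). The sketch below checks the PDE residual of this exact solution by finite differences:

```python
import numpy as np

c = 5 / np.sqrt(6)                       # wave speed of the exact solution
u = lambda x, t: (1 + np.exp((x - c * t) / np.sqrt(6))) ** -2

x = np.linspace(-10, 10, 2001)
h, dt = x[1] - x[0], 1e-4
# finite-difference residual of u_t = u_xx + u(1 - u)
ut = (u(x, dt) - u(x, -dt)) / (2 * dt)
uxx = (u(x + h, 0) - 2 * u(x, 0) + u(x - h, 0)) / h**2
resid = ut - uxx - u(x, 0) * (1 - u(x, 0))
assert np.max(np.abs(resid)) < 1e-4      # solution satisfies the PDE
```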
Video-signal improvement using comb filtering techniques.
NASA Technical Reports Server (NTRS)
Arndt, G. D.; Stuber, F. M.; Panneton, R. J.
1973-01-01
Significant improvement in the signal-to-noise performance of television signals has been obtained through the application of comb filtering techniques. This improvement is achieved by removing the inherent redundancy in the television signal through linear prediction and by utilizing the unique noise-rejection characteristics of the receiver comb filter. Theoretical and experimental results describe the signal-to-noise ratio and picture-quality improvement obtained through the use of baseband comb filters and the implementation of a comb network as the loop filter in a phase-lock-loop demodulator. Attention is given to the fact that noise becomes correlated when processed by the receiver comb filter.
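The principle can be illustrated with a simple feedforward comb (an illustrative sketch, not the baseband or phase-lock-loop combs of the paper): averaging each sample with the sample one line period earlier passes the redundant periodic content unchanged while halving the power of uncorrelated noise.

```python
import numpy as np

def comb_filter(x, period):
    """Feedforward comb: average each sample with the one a period earlier.
    Periodic (redundant) content passes; uncorrelated noise power is halved."""
    y = x.copy()
    y[period:] = 0.5 * (x[period:] + x[:-period])
    return y

rng = np.random.default_rng(1)
N = 100                                    # samples per "line"
signal = np.tile(np.sin(2 * np.pi * np.arange(N) / N), 50)
noise = rng.normal(0, 0.1, signal.size)
out = comb_filter(signal + noise, N)

in_err = np.mean(noise ** 2)               # input noise power
out_err = np.mean((out - signal) ** 2)     # residual noise power after comb
assert out_err < 0.7 * in_err              # roughly halved
```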
Towards a Unified Approach to Information Integration - A review paper on data/information fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Posse, Christian; Lei, Xingye C.
2005-10-14
Information or data fusion from different sources is ubiquitous in many applications, from epidemiology, medicine, biology, politics, and intelligence to military applications. Data fusion involves the integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical and biological sensors, acoustic sensors, optical warning systems, and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian methods to evidence-based expert systems. The implementation of data integration ranges from pure software designs to mixtures of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.
García-Calvo, Raúl; Guisado, JL; Diaz-del-Rio, Fernando; Córdoba, Antonio; Jiménez-Morales, Francisco
2018-01-01
Understanding the regulation of gene expression is one of the key problems in current biology. A promising method for that purpose is the determination of the temporal dynamics between known initial and ending network states, by using simple acting rules. The huge number of rule combinations and the inherent nonlinear nature of the problem make genetic algorithms an excellent candidate for finding optimal solutions. As this is a computationally intensive problem that needs long runtimes in conventional architectures for realistic network sizes, it is fundamental to accelerate this task. In this article, we study how to develop efficient parallel implementations of this method for the fine-grained parallel architecture of graphics processing units (GPUs) using the compute unified device architecture (CUDA) platform. An exhaustive and methodical study of various parallel genetic algorithm schemes (master-slave, island, cellular, and hybrid models) and various individual selection methods (roulette, elitist) is carried out for this problem. Several procedures that optimize the use of the GPU's resources are presented. We conclude that the implementation that produces better results (both from the performance and the genetic algorithm fitness perspectives) is simulating a few thousand individuals grouped in a few islands using elitist selection. This model combines two key factors for discovering the best solutions: finding good individuals in a small number of generations, and introducing genetic diversity via relatively frequent and numerous migrations. As a result, we have even found the optimal solution for the analyzed gene regulatory network (GRN). In addition, a comparative study of the performance obtained by the different parallel implementations on GPU versus a sequential application on CPU is carried out.
In our tests, a multifold speedup was obtained for our optimized parallel implementation of the method on medium class GPU over an equivalent sequential single-core implementation running on a recent Intel i7 CPU. This work can provide useful guidance to researchers in biology, medicine, or bioinformatics in how to take advantage of the parallelization on massively parallel devices and GPUs to apply novel metaheuristic algorithms powered by nature for real-world applications (like the method to solve the temporal dynamics of GRNs). PMID:29662297
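The winning configuration described above (a few islands, elitist selection, periodic migration) can be sketched in a plain CPU version. Names and parameters below are illustrative, and a toy OneMax objective stands in for the GRN rule-fitting fitness:

```python
import numpy as np

def island_ga(fitness, n_bits, n_islands=4, pop=50, gens=60,
              migrate_every=5, n_migrants=5, rng=None):
    """Island-model GA sketch: elitist truncation selection within islands,
    periodic ring migration of each island's best individuals."""
    if rng is None:
        rng = np.random.default_rng(0)
    isl = [rng.integers(0, 2, (pop, n_bits)) for _ in range(n_islands)]
    for g in range(gens):
        for i in range(n_islands):
            f = np.array([fitness(ind) for ind in isl[i]])
            elite = isl[i][np.argsort(f)[::-1][: pop // 2]]   # keep top half
            parents = elite[rng.integers(0, len(elite), (pop, 2))]
            cut = rng.integers(1, n_bits, pop)
            mask = np.arange(n_bits) < cut[:, None]           # one-point crossover
            child = np.where(mask, parents[:, 0], parents[:, 1])
            flip = rng.random((pop, n_bits)) < 1.0 / n_bits   # bit-flip mutation
            isl[i] = np.where(flip, 1 - child, child)
        if g % migrate_every == 0:                            # ring migration
            best = [isl[i][np.argsort([fitness(x) for x in isl[i]])[-n_migrants:]]
                    for i in range(n_islands)]
            for i in range(n_islands):
                isl[(i + 1) % n_islands][:n_migrants] = best[i]
    return max((ind for p in isl for ind in p), key=fitness)

# Toy objective: OneMax (count of 1 bits); the GA should get close to all ones.
best = island_ga(lambda b: int(b.sum()), n_bits=32)
assert best.sum() >= 30
```

A GPU version parallelizes the per-individual fitness evaluations and the per-island populations, which is where the reported speedups come from.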
NASA Astrophysics Data System (ADS)
Winter, Sebastian; Schlüter, Ralf; Hlousek, Felix; Buske, Stefan
2017-04-01
A test site for the design, implementation and operation of an underground in-situ bioleaching unit has been installed by the "Biohydrometallurgical Center for Strategic Elements" at the research and education mine "Reiche Zeche" of Technical University Bergakademie Freiberg. For this purpose an ore vein block will be developed and mined with the bio-hydrometallurgical in-situ leaching technology. As a site survey, an underground seismic tomography experiment was performed to investigate the spatial distribution of the ore vein within this block, which consists mainly of gneiss and has dimensions of about 30 x 10 meters. The experiment was performed with a sledgehammer as source and 76 three-component receivers, with source and receiver point intervals of about 1 m surrounding the approximately rectangular block. High-precision laser scanning was performed to obtain accurate source and receiver positions, which was particularly necessary to obtain reliable results due to the generally high wave velocities of the gneiss. The resulting seismic data set showed a high signal-to-noise ratio with clear first arrivals, which were picked for all source and receiver combinations and subsequently used as input to a first-arrival tomographic inversion scheme. The resulting velocity model has very good ray coverage and shows well resolved high- and low-velocity regions within the block. These regions can be clearly assigned to mapped outcrops of the ore vein along the galleries surrounding the block, with low velocities correlating to fractured rock and high velocities to the undisturbed ore vein core. In summary, the obtained velocity model and the inferred spatial distribution of the ore vein provide a good basis for planning and implementing the actual ore mining step using the envisaged bioleaching technology.
NASA Astrophysics Data System (ADS)
Elbakary, Mohamed I.; Iftekharuddin, Khan M.; Papelis, Yiannis; Newman, Brett
2017-05-01
Air Traffic Management (ATM) concepts are commonly tested in simulation to obtain preliminary results and validate the concepts before adoption. Simulation alone, however, is often insufficient given the complexity associated with ATM concepts; full-scale tests must eventually take place to provide compelling performance evidence before full implementation is adopted. Testing with full-scale aircraft is a high-cost approach that yields high-confidence results, while simulation is a low-risk, low-cost approach with reduced confidence in the results. One possible way to increase confidence in the results while simultaneously reducing risk and cost is to use unmanned sub-scale aircraft to test new ATM concepts. This paper presents simulation results of using unmanned sub-scale aircraft to implement ATM concepts, compared with full-scale aircraft. The simulation results show that the performance of the sub-scale aircraft is quite comparable to that of the full-scale aircraft, which supports the use of sub-scale aircraft in testing new ATM concepts.
Numerical analysis of whole-body cryotherapy chamber design improvement
NASA Astrophysics Data System (ADS)
Yerezhep, D.; Tukmakova, A. S.; Fomin, V. E.; Masalimov, A.; Asach, A. V.; Novotelnova, A. V.; Baranov, A. Yu
2018-05-01
Whole-body cryotherapy (WBC) is a state-of-the-art method that uses cold for the treatment and prevention of diseases. The process involves the impact of cryogenic gas on a human body and is carried out in a special cryochamber. The temperature field in the chamber is of great importance, since local over-cooling of the integument may occur. A numerical simulation of WBC has been carried out, and a modification of the chamber design has been proposed in order to increase the uniformity of the internal temperature field. The results have been compared with the ones obtained for a standard chamber design. The temperature gradient formed in the chamber with a curved wall of a certain height was reduced by almost a factor of two in comparison with the results obtained for the standard design. The proposed modification may increase both the safety and the comfort of cryotherapy.
One-loop effective actions and higher spins. Part II
NASA Astrophysics Data System (ADS)
Bonora, L.; Cvitan, M.; Prester, P. Dominis; Giaccari, S.; Štemberga, T.
2018-01-01
In this paper we continue and improve the analysis of the effective actions obtained by integrating out a scalar and a fermion field coupled to external symmetric sources, started in the previous paper. The first subject we study is the geometrization of the results obtained there, that is we express them in terms of covariant Jacobi tensors. The second subject concerns the treatment of tadpoles and seagull terms in order to implement off-shell covariance in the initial model. The last and by far largest part of the paper is a repository of results concerning all two point correlators (including mixed ones) of symmetric currents of any spin up to 5 and in any dimensions between 3 and 6. In the massless case we also provide formulas for any spin in any dimension.
Improvement of Representation of the Cloud-Aerosol Interaction in Large-Scale Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khain, Alexander; Phillips, Vaughan; Pinsky, Mark
The main achievements reached under the DOE award DE-SC0006788 are described. It is shown that the plan of the project has been completed. Unique results concerning cloud-aerosol interaction are obtained. It is shown that aerosols affect the intensity of hurricanes. The effects of small aerosols on the formation of ice in anvils of deep convective clouds are discovered; for the first time the mechanisms of drizzle formation are found and described quantitatively. Mechanisms of warm rain formation are clarified, and the dominating role of adiabatic processes and turbulence is stressed. Important results concerning the effects of sea spray on the intensity of clouds and tropical cyclones are obtained. A novel method for calculating hail formation has been developed and implemented.
The optimal design of UAV wing structure
NASA Astrophysics Data System (ADS)
Długosz, Adam; Klimek, Wiktor
2018-01-01
The paper presents an optimal design of a UAV wing made of composite materials. The aim of the optimization is to improve strength and stiffness together with a reduction of the weight of the structure. Three different types of functionals, which depend on stress, stiffness and the total mass, are defined. The paper presents an application of an in-house implementation of an evolutionary multi-objective algorithm to the optimization of the UAV wing structure. Values of the functionals are calculated on the basis of results obtained from numerical simulations. A numerical FEM model consisting of different composite materials is created. The adequacy of the numerical model is verified against results obtained from an experiment performed on a tensile testing machine. Examples of multi-objective optimization by means of a Pareto-optimal set of solutions are presented.
Area estimation of crops by digital analysis of Landsat data
NASA Technical Reports Server (NTRS)
Bauer, M. E.; Hixson, M. M.; Davis, B. J.
1978-01-01
The study for which the results are presented had these objectives: (1) to use Landsat data and computer-implemented pattern recognition to classify the major crops from regions encompassing different climates, soils, and crops; (2) to estimate crop areas for counties and states by using crop identification data obtained from the Landsat identifications; and (3) to evaluate the accuracy, precision, and timeliness of crop area estimates obtained from Landsat data. The paper describes the method of developing the training statistics and evaluating the classification accuracy. Landsat MSS data were adequate to accurately identify wheat in Kansas; corn and soybean estimates for Indiana were less accurate. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels.
A developed nearly analytic discrete method for forward modeling in the frequency domain
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai
2018-02-01
High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.
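Frequency-domain forward modeling of this kind reduces, after discretization, to solving a sparse linear system A u = -s per frequency. A minimal 1D second-order Helmholtz sketch (standard central differences with simple Dirichlet ends, not the NAD/DNAD stencils of the paper):

```python
import numpy as np

# 1D Helmholtz equation: u'' + k^2 u = -s, with u = 0 at both ends.
n, L = 400, 2000.0                  # grid points, model length (m)
h = L / (n - 1)
v, freq = 2000.0, 5.0               # velocity (m/s), frequency (Hz)
k = 2 * np.pi * freq / v            # wavenumber

A = np.zeros((n, n), dtype=complex)
i = np.arange(1, n - 1)
A[i, i] = -2 / h**2 + k**2          # discretized operator, interior rows
A[i, i - 1] = A[i, i + 1] = 1 / h**2
A[0, 0] = A[-1, -1] = 1.0           # Dirichlet boundary rows

s = np.zeros(n, dtype=complex)
s[n // 2] = 1.0 / h                 # point source in the middle
u = np.linalg.solve(A, -s)          # monochromatic wavefield
assert np.isfinite(u).all()
```

Optimizing the stencil coefficients, as DNAD does, changes the entries of A so that the system is better conditioned and cheaper to solve at a given accuracy.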
Experimental results of active control on a large structure to suppress vibration
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1991-01-01
Three design methods, Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR), H-infinity, and mu-synthesis, are used to obtain compensators for suppressing the vibrations of a 10-bay vertical truss structure, a component typical of what may be used to build a large space structure. For the design process the plant dynamic characteristics of the structure were determined experimentally using an identification method. The resulting compensators were implemented on a digital computer and tested for their ability to suppress the first bending mode response of the 10-bay vertical truss. Time histories of the measured motion are presented, and modal damping obtained during the experiments are compared with analytical predictions. The advantages and disadvantages of using the various design methods are discussed.
Optimal Control for Fast and Robust Generation of Entangled States in Anisotropic Heisenberg Chains
NASA Astrophysics Data System (ADS)
Zhang, Xiong-Peng; Shao, Bin; Zou, Jian
2017-05-01
Motivated by some recent results of optimal control (OC) theory, we study anisotropic XXZ Heisenberg spin-1/2 chains with control fields acting on a single spin, with the aim of exploring how a maximally entangled state can be prepared. To achieve this goal, we use a numerical optimization algorithm (the Krotov algorithm, which has been shown to be capable of reaching the quantum speed limit) to search for an optimal set of control parameters, and then obtain OC pulses corresponding to the target fidelity. We find that the minimum time for preparing our target state depends on the anisotropy parameter Δ of the model. Finally, we analyze the robustness of the obtained results for the optimal fidelities and the effectiveness of the Krotov method under some realistic conditions.
Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions
Chen, Shengyong; Xiao, Gang; Li, Xiaoli
2014-01-01
This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on a human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance, which can be further applied to EEG source localization applications on the human brain. PMID:24803954
NASA Technical Reports Server (NTRS)
Amano, R. S.
1982-01-01
Progress in implementing and refining two near-wall turbulence models, in which the near-wall region is divided into either two or three zones, is outlined. These models were successfully applied to the computation of recirculating flows. The research was further extended to obtaining experimental results for two different recirculating flow conditions in order to check the validity of the present models. Two different experimental apparatuses were set up: axisymmetric turbulent impinging jets on a flat plate, and turbulent flows in a circular pipe with an abrupt pipe expansion. It is shown that generally better results are obtained by using the present near-wall models, and among the models the three-zone model is superior to the two-zone model.
Pimenta, S; Cardoso, S; Miranda, A; De Beule, P; Castanheira, E M S; Minas, G
2015-08-01
This paper presents the design, optimization and fabrication of 16 MgO/TiO2- and SiO2/TiO2-based highly selective narrow-bandpass optical filters. Their performance in extracting diffuse reflectance and fluorescence signals from gastrointestinal tissue phantoms was successfully evaluated. The obtained results prove their feasibility for correctly extracting those spectroscopic signals, through a Spearman's rank correlation test (Spearman's correlation coefficient higher than 0.981) performed between the original spectra and the ones obtained using the 16 fabricated optical filters. These results are an important step towards the implementation of a miniaturized, low-cost and minimally invasive microsystem that could help in the detection of gastrointestinal dysplasia.
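The Spearman's rank correlation used in the evaluation is the Pearson correlation of the ranks. A small self-contained sketch (ignoring ties, with hypothetical spectra standing in for the measured ones):

```python
import numpy as np

def spearman(a, b):
    """Spearman's rank correlation: Pearson correlation of the ranks
    (simple double-argsort ranking; ties are not averaged)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical original vs. filter-reconstructed spectrum over 16 bands
wavel = np.linspace(450, 650, 16)                # band centers (nm)
original = 1.0 / (1.0 + np.exp(-(wavel - 550) / 30))
reconstructed = original + 0.02 * np.sin(wavel / 10)
rho = spearman(original, reconstructed)
assert rho > 0.9                                 # ranks largely preserved
```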
Leveraging finances for public health system improvement: results from the Turning Point initiative.
Bekemeier, Betty; Riley, Catharine M; Berkowitz, Bobbie
2007-01-01
Reforming the public health infrastructure requires substantial system changes at the state level; state health agencies, however, often lack the resources and support for strategic planning and systemwide improvement. The Turning Point Initiative provided support for states to focus on large-scale system changes that resulted in increased funding for public health capacity and infrastructure development. Turning Point provides a test case for obtaining financial and institutional resources focused on systems change and infrastructure development-areas for which it has been historically difficult to obtain long-term support. The purpose of this exploratory, descriptive survey research was to enumerate the actual resources leveraged toward public health system improvement through the partnerships, planning, and implementation activities funded by the Robert Wood Johnson Foundation as a part of the Turning Point Initiative.
NASA Astrophysics Data System (ADS)
Genovese, Mariangela; Napoli, Ettore
2013-05-01
The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that overcome previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard-cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
Methodological advances in unit cost calculation of psychiatric residential care in Spain.
Moreno, Karen; Sanchez, Eduardo; Salvador-Carulla, Luis
2008-06-01
The care of the severe mentally ill who need intensive support for their daily living (dependent persons), accounts for an increasingly large proportion of public expenditure in many European countries. The main aim of this study was the design and implementation of solid methodology to calculate unit costs of different types of care. To date, methodologies used in Spain have produced inaccurate figures, suggesting few variations in patient consumption of the same service. An adaptation of the Activity-Based-Costing methodology was applied in Navarre, a region in the North of Spain, as a pilot project for the public mental health services. A unit cost per care process was obtained for all levels of care considered in each service during 2005. The European Service Mapping Schedule (ESMS) codes were used to classify the services for later comparisons. Finally, in order to avoid problems of asymmetric cost distribution, a simple Bayesian model was used. As an illustration, we report the results obtained for long-term residential care and note that there are important variations between unit costs when considering different levels of care. Considering three levels of care (Level 1-low, Level 2-medium and Level 3-intensive), the cost per bed in Level 3 was 10% higher than that of Level 2. The results obtained using the cost methodology described provide more useful information than those using conventional methods, although its implementation requires much time to compile the necessary information during the initial stages and the collaboration of staff and managers working in the services. However, in some services, if no important variations exist in patient care, another method would be advisable, although our system provides very useful information about patterns of care from a clinical point of view. 
Detailed work is required at the beginning of the implementation in order to avoid the calculation of distorted figures and to improve the levels of decision making within the Health Care Service. IMPLICATIONS FOR HEALTH CARE POLICY AND FORMULATIONS: As other European countries, Spain has adopted a new care system for the dependent population. To finance this new system, reliable figures must be calculated for each type of user in order to establish tariffs or public prices. This study provides a useful management tool to assist in decision making. The methodology should be implemented in other regions of Spain and even in other countries in order to compare our results and validate the cost system designed.
NASA Astrophysics Data System (ADS)
Cucu, Daniela; Woods, Mike
2008-08-01
The paper aims to present a practical approach for testing laboratories to ensure the quality of their test results. It is based on the experience gained in assessing a large number of testing laboratories, discussing with management and staff, reviewing results obtained in national and international PTs and ILCs and exchanging information in the EA laboratory committee. According to EN ISO/IEC 17025, an accredited laboratory has to implement a programme to ensure the quality of its test results for each measurand. Pre-analytical, analytical and post-analytical measures shall be applied in a systematic manner. They shall include both quality control and quality assurance measures. When designing the quality assurance programme a laboratory should consider pre-analytical activities (like personnel training, selection and validation of test methods, qualifying equipment), analytical activities ranging from sampling, sample preparation, instrumental analysis and post-analytical activities (like decoding, calculation, use of statistical tests or packages, management of results). Designed on different levels (analyst, quality manager and technical manager), including a variety of measures, the programme shall ensure the validity and accuracy of test results, the adequacy of the management system, prove the laboratory's competence in performing tests under accreditation and last but not least show the comparability of test results. Laboratory management should establish performance targets and review periodically QC/QA results against them, implementing appropriate measures in case of non-compliance.
Navier-Stokes simulations of unsteady transonic flow phenomena
NASA Technical Reports Server (NTRS)
Atwood, C. A.
1992-01-01
Numerical simulations of two classes of unsteady flows are obtained via the Navier-Stokes equations: a blast-wave/target interaction problem class and a transonic cavity flow problem class. The method developed for the viscous blast-wave/target interaction problem assumes a laminar, perfect gas implemented in a structured finite-volume framework. The approximately factored implicit scheme uses Newton subiterations to obtain the spatially and temporally second-order accurate time history of the blast-waves with stationary targets. The inviscid flux is evaluated using either of two upwind techniques, while the full viscous terms are computed by central differencing. Comparisons of unsteady numerical, analytical, and experimental results are made in two- and three-dimensions for Couette flows, a starting shock-tunnel, and a shock-tube blockage study. The results show accurate wave speed resolution and nonoscillatory discontinuity capturing of the predominantly inviscid flows. Viscous effects were increasingly significant at large post-interaction times. While the blast-wave/target interaction problem benefits from high-resolution methods applied to the Euler terms, the transonic cavity flow problem requires the use of an efficient scheme implemented in a geometrically flexible overset mesh environment. Hence, the Reynolds averaged Navier-Stokes equations implemented in a diagonal form are applied to the cavity flow class of problems. Comparisons between numerical and experimental results are made in two-dimensions for free shear layers and both rectangular and quieted cavities, and in three-dimensions for Stratospheric Observatory For Infrared Astronomy (SOFIA) geometries. The acoustic behavior of the rectangular and three-dimensional cavity flows compare well with experiment in terms of frequency, magnitude, and quieting trends. 
However, there is a more rapid decrease in computed acoustic energy with frequency than observed experimentally, owing to numerical dissipation. In addition, optical phase distortion due to the time-varying density field is modelled using geometrical constructs. The computed optical distortion trends agree with the experimentally inferred result but underpredict the fluctuating phase-difference magnitude.
Chao, Chun-Tang; Maneetien, Nopadon; Wang, Chi-Jo; Chiou, Juing-Shian
2014-01-01
This paper presents the design and evaluation of a hardware circuit for electronic stethoscopes with heart-sound cancellation capabilities using field-programmable gate arrays (FPGAs). The adaptive line enhancer (ALE) was adopted as the filtering methodology to reduce heart-sound components in the breath sounds obtained via the electronic stethoscope pickup. FPGAs were utilized to implement the ALE functions in hardware to achieve near-real-time breath sound processing. We believe that such an implementation is unprecedented and crucial toward a truly useful, standalone medical device in outpatient clinic settings. The implementation evaluation on an Altera cyclone II-EP2C70F89 shows that the proposed ALE used 45% of the chip's resources. Experiments with the proposed prototype were made using a DE2-70 emulation board with recorded body signals obtained from online medical archives. Clear suppression was observed in our experiments from both the frequency-domain and time-domain perspectives.
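An ALE is an LMS adaptive filter that predicts the correlated (quasi-periodic) component of the input from a delayed copy of itself; subtracting the prediction leaves the broadband part. A software sketch under illustrative parameters, with a pure tone standing in for the heart sound and white noise for the breath sound:

```python
import numpy as np

def ale(x, delay=40, taps=32, mu=0.002):
    """Adaptive line enhancer: LMS filter predicts the periodic part of x
    from its delayed copy; the error signal keeps the broadband part."""
    w = np.zeros(taps)
    periodic = np.zeros_like(x)
    for n in range(delay + taps, len(x)):
        u = x[n - delay - taps:n - delay][::-1]   # delayed reference vector
        y = w @ u                                  # predicted periodic part
        e = x[n] - y                               # broadband residual
        w += mu * e * u                            # LMS weight update
        periodic[n] = y
    return periodic, x - periodic

fs = 4000
t = np.arange(4 * fs) / fs
heart = 0.8 * np.sin(2 * np.pi * 100 * t)          # tonal "heart" interference
rng = np.random.default_rng(3)
breath = 0.2 * rng.normal(size=t.size)             # broadband "breath" proxy
periodic, residual = ale(heart + breath)

tail = slice(2 * fs, None)                         # after convergence
tone_in = np.mean(heart[tail] ** 2)
tone_left = np.mean((residual[tail] - breath[tail]) ** 2)
assert tone_left < 0.3 * tone_in                   # tone strongly suppressed
```

The FPGA realization replaces this sample-by-sample loop with fixed-point multiply-accumulate hardware so the same recursion runs in real time.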
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroniger, K; Herzog, M; Landry, G
2015-06-15
Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms along with patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need of a full MC simulation.
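The filter-function approach described above amounts to convolving the depth dose profile with one precomputed kernel per chemical element and summing the results weighted by composition. A schematic sketch of that step (the Gaussian kernel and weights in the test are placeholders; the paper's actual filter functions are fitted against Monte Carlo data):

```python
import numpy as np

def predict_profile(depth_dose, filters, fractions):
    """Sketch of the filter-function approach: convolve the depth dose
    profile with one kernel per chemical element and sum the results,
    weighted by the target's elemental fractions. The kernels used in
    the paper are fitted to Monte Carlo data; any kernels passed here
    are placeholders."""
    out = np.zeros_like(depth_dose, dtype=float)
    for element, fraction in fractions.items():
        out += fraction * np.convolve(depth_dose, filters[element], mode="same")
    return out
```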
Advanced information processing system: Fault injection study and results
NASA Technical Reports Server (NTRS)
Burkhardt, Laura F.; Masotto, Thomas K.; Lala, Jaynarayan H.
1992-01-01
The objective of the AIPS program is to achieve a validated fault tolerant distributed computer system. The goals of the AIPS fault injection study were: (1) to present the fault injection study components addressing the AIPS validation objective; (2) to obtain feedback for fault removal from the design implementation; (3) to obtain statistical data regarding fault detection, isolation, and reconfiguration responses; and (4) to obtain data regarding the effects of faults on system performance. The parameters are described that must be varied to create a comprehensive set of fault injection tests, the subset of test cases selected, the test case measurements, and the test case execution. Both pin level hardware faults using a hardware fault injector and software injected memory mutations were used to test the system. An overview is provided of the hardware fault injector and the associated software used to carry out the experiments. Detailed specifications are given of fault and test results for the I/O Network and the AIPS Fault Tolerant Processor, respectively. The results are summarized and conclusions are given.
[Evaluation of the animal-assisted therapy in Alzheimer's disease].
Quibel, Clémence; Bonin, Marie; Bonnet, Magalie; Gaimard, Maryse; Mourey, France; Moesch, Isabelle; Ancet, Pierre
Animal-assisted therapy sessions have been set up in a protected unit for patients with a dementia-related syndrome. The aim is to measure the effects of animal-assisted therapy on behavioural disorders in daily life and care. The results obtained provided some interesting areas to explore and recommendations with a view to optimising the implementation of such a system. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Sadykov, R. A.; Strassle, Th; Podlesnyak, A.; Keller, L.; Fak, B.; Mesot, J.
2017-12-01
We have developed and implemented series of new original clamp high-pressure cells for neutron diffraction and inelastic neutron scattering at low temperatures. The cells design allows one to place them in the standard cryostats or cryomagnets used on neutron sources. Some results obtained for ZnCr2Se4 are demonstrated as an example.
An adaptive replacement algorithm for paged-memory computer systems.
NASA Technical Reports Server (NTRS)
Thorington, J. M., Jr.; Irwin, J. D.
1972-01-01
A general class of adaptive replacement schemes for use in paged memories is developed. One such algorithm, called SIM, is simulated using a probability model that generates memory traces, and the results of the simulation of this adaptive scheme are compared with those obtained using the best nonlookahead algorithms. A technique for implementing this type of adaptive replacement algorithm with state-of-the-art digital hardware is also presented.
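The SIM algorithm itself is not specified in this abstract, but the classic nonlookahead baselines it is compared against are easy to reproduce. A minimal page-fault counter for LRU replacement over a memory trace, shown only as a point of comparison:

```python
from collections import OrderedDict

def lru_faults(trace, n_frames):
    """Count page faults for an LRU-managed memory of n_frames frames."""
    frames = OrderedDict()            # insertion order tracks recency
    faults = 0
    for page in trace:
        if page in frames:
            frames.move_to_end(page)  # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults
```

An adaptive scheme in the spirit of the paper would tune its eviction policy online from the fault statistics such a simulator collects.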
Optics detection and laser countermeasures on a combat vehicle
NASA Astrophysics Data System (ADS)
Sjöqvist, Lars; Allard, Lars; Pettersson, Magnus; Börjesson, Per; Lindskog, Nils; Bodin, Johan; Widén, Anders; Persson, Håkan; Fredriksson, Jan; Edström, Sten
2016-10-01
Magnifying optical assemblies used for weapon guidance or rifle scopes may pose a threat to a combat vehicle and its personnel. Detection and localisation of optical threats are consequently of interest in military applications. Typically, a laser system is used in optics detection, or optical augmentation, to interrogate a scene of interest and localise retroreflected laser radiation. One interesting approach for implementing optics detection on a combat vehicle is to use a continuous scanning scheme. In addition, optics detection can be combined with laser countermeasures, or a laser dazzling function, to efficiently counter an optical threat. An optics detection laser sensor demonstrator has been implemented on a combat vehicle. The sensor consists of a stabilised gimbal and was integrated together with a LEMUR remote electro-optical sight. A narrow laser slit is continuously scanned around the horizon to detect and locate optical threats. Detected threats are presented to the operator within the LEMUR presentation system, and by cueing a countermeasure laser installed in the LEMUR sensor housing, threats can be defeated. Results obtained during a field demonstration of the optics detection sensor and the countermeasure laser are presented. In addition, results obtained using a dual-channel optics detection system designed for false alarm reduction are also discussed.
The analysis of composite laminated beams using a 2D interpolating meshless technique
NASA Astrophysics Data System (ADS)
Sadek, S. H. M.; Belinha, J.; Parente, M. P. L.; Natal Jorge, R. M.; de Sá, J. M. A. César; Ferreira, A. J. M.
2018-02-01
Laminated composite materials are widely used in engineering constructions. Owing to their relatively light weight, these materials are suitable for aerospace, military, marine, and automotive structural applications. To obtain safe and economical structures, the accuracy of the modelling analysis is highly relevant. Since meshless methods have achieved remarkable progress in computational mechanics in recent years, the present work uses one of the most flexible and stable interpolating meshless techniques available in the literature, the Radial Point Interpolation Method (RPIM). Here, a 2D approach is considered to numerically analyse composite laminated beams. Both the meshless formulation and the equilibrium equations ruling the studied physical phenomenon are presented in detail. Several benchmark beam examples are studied, and the results are compared with exact solutions available in the literature and with results obtained from a commercial finite element software package. The results show the efficiency and accuracy of the proposed numerical technique.
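The "interpolating" property that distinguishes RPIM from approximants such as moving least squares is that its shape functions pass exactly through the supporting nodes (the Kronecker delta property). A 1D radial basis construction illustrates this; the multiquadric RBF, the shape parameter c, and the omission of the polynomial augmentation used in full RPIM are all simplifications here:

```python
import numpy as np

def rpim_shape(x_q, nodes, c=1.0):
    """RPIM-style shape functions at a query point x_q, built from
    multiquadric RBFs over the supporting nodes: solve phi @ R = r(x_q)
    with R_ij = sqrt((x_i - x_j)^2 + c^2). The shape parameter c and the
    omission of the polynomial augmentation are simplifications."""
    nodes = np.asarray(nodes, dtype=float)
    R = np.sqrt((nodes[:, None] - nodes[None, :]) ** 2 + c * c)
    r = np.sqrt((x_q - nodes) ** 2 + c * c)
    return np.linalg.solve(R.T, r)
```

Evaluated at a node, r(x_q) equals the corresponding row of R, so the shape functions reduce to a Kronecker delta and nodal values are interpolated exactly.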
Experimental study and empirical prediction of fuel flow parameters under air evolution conditions
NASA Astrophysics Data System (ADS)
Kitanina, E. E.; Kitanin, E. L.; Bondarenko, D. A.; Kravtsov, P. A.; Peganova, M. M.; Stepanov, S. G.; Zherebzov, V. L.
2017-11-01
Air evolution in kerosene under gravity-driven flow through various hydraulic resistances in the pipeline was studied experimentally. The study was conducted at pressures ranging from 0.2 to 1.0 bar and temperatures varying between -20°C and +20°C. Through these experiments, the oversaturation limit beyond which dissolved air starts evolving intensively from the fuel was established, and correlations for the calculation of pressure losses and air evolution at local loss elements were obtained. A method for calculating two-phase flow behaviour in a tilted pipeline segment with very low mass flow quality and fairly high volume flow quality was developed. The complete set of empirical correlations obtained from the experimental analysis was implemented in an engineering code. The software simulation results were repeatedly verified against our experimental findings and Airbus test data, showing that the two-phase flow simulation agrees quite well with the experimental results obtained in complex branched pipelines.
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.
Seguin, Rebecca A; Palombo, Ruth; Economos, Christina D; Hyatt, Raymond; Kuder, Julia; Nelson, Miriam E
2008-01-01
Background The benefits of community-based health programs are widely recognized. However, research examining factors related to community leaders' characteristics and roles in implementation is limited. Methods The purpose of this cross-sectional study was to use a social ecological framework of variables to explore and describe the relationships between socioeconomic, personal/behavioral, programmatic, leadership, and community-level social and demographic characteristics as they relate to the implementation of an evidence-based strength training program by community leaders. Eight hundred fifty-four trained program leaders in 43 states were invited to participate in either an online or mail survey. Corresponding community-level characteristics were also collected. Programmatic details were obtained from those who implemented. Four hundred eighty-seven program leaders responded to the survey (response rate = 57%), 78% online and 22% by mail. Results Of the 487 respondents, 270 implemented the program (55%). One or more factors from each category – professional, socioeconomic, personal/behavioral, and leadership characteristics – were significantly different between implementers and non-implementers, determined by chi-square or Student's t-tests as appropriate. Implementers reported higher levels of strength training participation, current and lifetime physical activity, perceived support, and leadership competence (all p < 0.05). Logistic regression analysis revealed a positive association between implementation and fitness credentials/certification (p = 0.003), program-specific self-efficacy (p = 0.002), and support-focused leadership (p = 0.006), and a negative association between implementation and educational attainment (p = 0.002). Conclusion Among this sample of trained leaders, several factors within the professional, socioeconomic, personal/behavioral, and leadership categories were related to whether they implemented a community-based exercise program.
Future community-based physical activity program dissemination efforts may benefit from considering these factors when selecting and training leaders. PMID:19055821
Efficient Implementation of a Symbol Timing Estimator for Broadband PLC.
Nombela, Francisco; García, Enrique; Mateos, Raúl; Hernández, Álvaro
2015-08-21
Broadband Power Line Communications (PLC) have taken advantage of research advances in multi-carrier modulations to mitigate frequency selective fading, and their adoption opens up a myriad of applications in the field of sensory and automation systems, multimedia connectivity and smart spaces. Nonetheless, the use of these multi-carrier modulations, such as Wavelet-OFDM, requires a highly accurate symbol timing estimation for reliable recovery of transmitted data. Furthermore, the PLC channel presents some particularities that prevent the direct use of synchronization algorithms previously proposed for wireless communication systems. Therefore, more research effort should be devoted to the design and implementation of novel and robust synchronization algorithms for PLC, thus enabling real-time synchronization. This paper proposes a symbol timing estimator for broadband PLC based on cross-correlation with multilevel complementary sequences or Zadoff-Chu sequences, together with its efficient implementation in an FPGA; the obtained results show a 90% success rate in symbol timing estimation for a certain PLC channel model and a reduced resource consumption for its implementation in a Xilinx Kintex FPGA.
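The core of such an estimator, cross-correlating the received frame against a known Zadoff-Chu preamble and taking the correlation peak, can be sketched in a few lines (the root, sequence length, and noise level below are illustrative; the paper's FPGA design pipelines the same correlation in fixed point):

```python
import numpy as np

def zadoff_chu(root, length):
    """Zadoff-Chu sequence: constant amplitude and a sharp autocorrelation
    peak, which makes it a good timing preamble."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

def estimate_timing(rx, preamble):
    """Symbol timing estimate: index of the peak of the cross-correlation
    magnitude between the received samples and the known preamble."""
    corr = np.abs(np.correlate(rx, preamble, mode="valid"))
    return int(np.argmax(corr))
```

Note that `np.correlate` conjugates its second argument, so this is exactly the matched-filter search.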
NASA Astrophysics Data System (ADS)
Perton, Mathieu; Contreras-Zazueta, Marcial A.; Sánchez-Sesma, Francisco J.
2016-06-01
A new implementation of the indirect boundary element method allows simulating elastic wave propagation in complex configurations made of embedded regions that are homogeneous with irregular boundaries or flat layers. In an older implementation, each layer of a flat layered region would have been treated as a separate homogeneous region without taking into account the flat boundary information. For both types of regions, the scattered field results from fictitious sources positioned along their boundaries. For the homogeneous regions, the fictitious sources emit as in a full-space and the wave field is given by analytical Green's functions. For flat layered regions, fictitious sources emit as in an unbounded flat layered region and the wave field is given by Green's functions obtained from the discrete wavenumber (DWN) method. The new implementation then allows reducing the length of the discretized boundaries, but the DWN Green's functions require much more computation time than the full-space Green's functions. Several optimization steps are therefore implemented and commented on. Validations are presented for 2-D and 3-D problems. Higher efficiency is achieved in 3-D.
Sugianto, Jessica Z; Stewart, Brian; Ambruzs, Josephine M; Arista, Amanda; Park, Jason Y; Cope-Yokoyama, Sandy; Luu, Hung S
2015-01-01
To implement Lean principles to accommodate expanding volumes of gastrointestinal biopsies and to improve laboratory processes overall. Our continuous improvement (kaizen) project analyzed the current state for gastrointestinal biopsy handling using value-stream mapping for specimens obtained at a 487-bed tertiary care pediatric hospital in Dallas, Texas. We identified non-value-added time within the workflow process, from receipt of the specimen in the histology laboratory to the delivery of slides and paperwork to the pathologist. To eliminate non-value-added steps, we implemented the changes depicted in a revised-state value-stream map. Current-state value-stream mapping identified a total specimen processing time of 507 minutes, of which 358 minutes were non-value-added. This translated to a process cycle efficiency of 29%. Implementation of a revised-state value stream resulted in a total process time reduction to 238 minutes, of which 89 minutes were non-value-added, and an improved process cycle efficiency of 63%. Lean production principles of continuous improvement and waste elimination can be successfully implemented within the clinical laboratory.
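The efficiency figures quoted above follow directly from the standard lean definition of process cycle efficiency, value-added time divided by total lead time:

```python
def process_cycle_efficiency(total_min, non_value_added_min):
    """Lean process cycle efficiency: value-added time / total lead time."""
    return (total_min - non_value_added_min) / total_min
```

Both states share the same 149 value-added minutes (507 - 358 and 238 - 89); the revised state improves efficiency purely by removing waiting and handling time, which is the point of the kaizen exercise.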
A post-processing method to simulate the generalized RF sheath boundary condition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myra, James R.; Kohno, Haruhiko
2017-10-23
For applications of ICRF power in fusion devices, control of RF sheath interactions is of great importance. A sheath boundary condition (SBC) was previously developed to provide an effective surface impedance for the interaction of the RF sheath with the waves. The SBC enables the surface power flux and rectified potential energy available for sputtering to be calculated. For legacy codes which cannot easily implement the SBC, or to speed convergence in codes which do implement it, we consider here an approximate method to simulate SBCs by post-processing results obtained using other, e.g. conducting wall, boundary conditions. The basic approximation is that the modifications resulting from the generalized SBC are driven by a fixed incoming wave, which could be either a fast wave or a slow wave. Finally, the method is illustrated in slab geometry and compared with exact numerical solutions; it is shown to work very well.
Fourth-order partial differential equation noise removal on welding images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halim, Suhaila Abd; Ibrahim, Arsmah; Sulong, Tuan Nurul Norazura Tuan
2015-10-22
Partial differential equations (PDEs) have become one of the important topics in mathematics and are widely used in various fields, including image denoising in the image analysis field. In this paper, a fourth-order PDE is discussed and implemented as a denoising method on digital images. The fourth-order PDE is solved computationally using a finite difference approach and then implemented on a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using Peak Signal to Noise Ratio (PSNR). Simulation is carried out on the discretized model at different levels of Gaussian noise in order to get the maximum PSNR value. The convergence criterion chosen to determine the number of iterations required is based on the highest PSNR value. Results obtained show that the fourth-order PDE model produces promising results as an image denoising tool compared with the median filter.
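The abstract does not name the specific fourth-order model. One widely used choice is the You-Kaveh equation, u_t = -∇²(c(|∇²u|)∇²u), which a simple explicit finite-difference scheme can illustrate; the time step, conductance parameter, iteration count, and periodic boundaries below are arbitrary simplifications, not the paper's settings:

```python
import numpy as np

def laplacian(u):
    """Five-point Laplacian with periodic boundaries (for brevity)."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def fourth_order_denoise(img, n_iter=50, dt=0.02, k=100.0):
    """Explicit scheme for a You-Kaveh-type fourth-order PDE:
    u_t = -lap( c(|lap u|) * lap u ), with c(s) = 1 / (1 + (s/k)^2).
    dt must satisfy the explicit stability limit (here dt * 64 < 2)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        l = laplacian(u)
        c = 1.0 / (1.0 + (np.abs(l) / k) ** 2)  # edge-stopping diffusivity
        u -= dt * laplacian(c * l)
    return u

def psnr(ref, test, peak=255.0):
    """Peak Signal to Noise Ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Sweeping the noise level and iteration count, as the paper does, then reduces to calling `psnr` after each run and keeping the maximum.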
Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch
NASA Astrophysics Data System (ADS)
Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.
2014-10-01
The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization technique named swarm-based mean-variance mapping optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, including 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently implemented for solving the economic dispatch problem.
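For quadratic cost functions without non-convexities, the ED problem has a classical benchmark solution via equal incremental cost (lambda iteration), which is useful for sanity-checking any metaheuristic such as MVMOS. A sketch of that baseline (the unit data in the test are made up, not the paper's test systems):

```python
import numpy as np

def lambda_dispatch(b, c, demand, pmin, pmax):
    """Economic dispatch for unit costs a_i + b_i*P_i + c_i*P_i^2:
    bisect on the incremental cost lambda so that the bound-clipped
    optimal outputs P_i = (lambda - b_i) / (2*c_i) meet demand."""
    b, c = np.asarray(b, float), np.asarray(c, float)
    lo, hi = 0.0, 1e4
    p = None
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        p = np.clip((lam - b) / (2.0 * c), pmin, pmax)
        if p.sum() < demand:
            lo = lam          # lambda too low: raise incremental cost
        else:
            hi = lam          # lambda too high: lower incremental cost
    return p
```

A swarm method earns its keep on the variants this baseline cannot handle, such as valve-point effects or prohibited operating zones.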
Image based book cover recognition and retrieval
NASA Astrophysics Data System (ADS)
Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine
2017-11-01
In this work, we develop a graphical user interface in MATLAB that lets users retrieve information about books in real time. A photo of the book cover is taken through the GUI, and the MSER algorithm automatically detects candidate features in the input image; non-text features are then filtered out based on morphological differences between text and non-text regions. We implemented a text character alignment algorithm which improves the accuracy of the original text detection. We also compare the built-in MATLAB OCR algorithm with a commonly used open-source OCR engine, and apply a post-detection algorithm together with natural language processing for word correction and false detection inhibition. Finally, the detection result is matched against online sources over the internet. More than 86% accuracy can be obtained by this algorithm.
Damin, Isabel C F; Santo, Maria A E; Hennigen, Rosmari; Vargas, Denise M
2013-01-01
In the present study, a method for the determination of mercury (Hg) in fish was validated according to ISO/IEC 17025, INMETRO (Brazil), and more recent European recommendations (Commission Decisions 2007/333/EC and 2002/657/EC) for implementation in the Brazilian Residue Control Plan (NRCP) in routine applications. The parameters evaluated in the validation were investigated in detail. The limits of detection and quantification obtained were 2.36 and 7.88 μg kg(-1) of Hg, respectively, while the recovery varied between 90-96% and the coefficient of variation for repeatability was 4.06-8.94%. Furthermore, a comparison using an external proficiency testing scheme was carried out. The validated method for the determination of mercury in fish by hydride generation atomic absorption spectrometry was considered suitable for implementation in routine analysis.
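The figures of merit evaluated here (detection/quantification limits, recovery, repeatability CV) are standard validation quantities. Minimal helpers show how each is computed; the ICH-style 3.3s/slope convention for the LOD is one common choice, not necessarily the exact computation used in this study:

```python
import numpy as np

def recovery_pct(measured, spiked):
    """Mean recovery of spiked samples, in percent."""
    return 100.0 * np.mean(np.asarray(measured, float) / np.asarray(spiked, float))

def cv_pct(replicates):
    """Coefficient of variation (repeatability) of replicates, in percent."""
    r = np.asarray(replicates, float)
    return 100.0 * r.std(ddof=1) / r.mean()

def lod_loq(blank_sd, slope):
    """ICH-style estimates: LOD = 3.3*s/slope, LOQ = 10*s/slope."""
    return 3.3 * blank_sd / slope, 10.0 * blank_sd / slope
```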
Unequal-area, fixed-shape facility layout problems using the firefly algorithm
NASA Astrophysics Data System (ADS)
Ingole, Supriya; Singh, Dinesh
2017-07-01
In manufacturing industries, the facility layout design is a very important task, as it is concerned with the overall manufacturing cost and profit of the industry. The facility layout problem (FLP) is solved by arranging the departments or facilities of known dimensions on the available floor space. The objective of this article is to implement the firefly algorithm (FA) for solving unequal-area, fixed-shape FLPs and optimizing the costs of total material handling and transportation between the facilities. The FA is a nature-inspired algorithm and can be used for combinatorial optimization problems. Benchmark problems from the previous literature are solved using the FA. To check its effectiveness, it is implemented to solve large-sized FLPs. Computational results obtained using the FA show that the algorithm is less time consuming and the total layout costs for FLPs are better than the best results achieved so far.
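The firefly algorithm's mechanics are simple enough to sketch: each firefly moves toward every brighter (lower-cost) one with an attractiveness that decays with distance, plus a small random walk. The sketch below minimizes a generic cost function; the paper applies FA to facility layout costs instead, and its parameter settings and layout encoding are not reproduced here:

```python
import numpy as np

def firefly(f, bounds, n=20, iters=100, beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Minimal firefly algorithm (after Yang): firefly i moves toward each
    brighter firefly j with attractiveness beta0*exp(-gamma*r^2), plus a
    decaying random walk; the brightest firefly stays put."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n, dim))
    cost = np.array([f(xi) for xi in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:
                    r2 = float(np.sum((x[i] - x[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] = np.clip(x[i] + beta * (x[j] - x[i])
                                   + alpha * (rng.random(dim) - 0.5), lo, hi)
                    cost[i] = f(x[i])
        alpha *= 0.97  # cool the random walk over time
    best = int(np.argmin(cost))
    return x[best], cost[best]
```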
NASA Astrophysics Data System (ADS)
Nguyen, K. L.; Merchiers, O.; Chapuis, P.-O.
2017-11-01
We compute the near-field radiative heat transfer between a hot AFM tip and a cold substrate. This contribution to the tip-sample heat transfer in Scanning Thermal Microscopy is often overlooked, despite its leading role when the tip is out of contact. For dielectrics, we provide the power levels exchanged as a function of the tip-sample distance in vacuum, as well as spatial maps of the heat flux deposited into the sample, which indicate the near-contact spatial resolution. The results are compared to analytical expressions of the Proximity Flux Approximation. The numerical results are obtained by means of the Boundary Element Method (BEM) implemented in the SCUFF-EM software, and first required a thorough convergence analysis, carried out by progressively applying the method to thermal emission by a sphere, radiative transfer between two spheres, and radiative exchange between a sphere and a finite substrate.
Monitoring of awareness level in dispensary patients with arterial hypertension.
Smiianov, Vladyslav; Witczak, Izabela; Smiianova, Olga; Rudenko, Lesia
2017-01-01
Results of monitoring of the awareness level of dispensary patients with arterial hypertension (AH) are given in the article. The objective of the study was to investigate the awareness level of dispensary patients with hypertension in Sumy regarding the course of their disease, implementation of preventive measures, diagnosis and treatment, and to use the obtained information in the management of healthcare quality. Closed-ended questionnaires served as the study materials. A total of 2019 patients were surveyed. Despite the high level of patients' awareness of the AH course and possible complications, the survey showed an insufficient level of their own responsibility for their health. The main reasons for poor adherence to doctors' recommendations are forgetfulness, lack of time and reluctance. Measures were developed to increase the awareness level of patients with AH by strengthening awareness-raising activities and communications, as well as by creating and implementing effective targeted health and social programs.
Texture analysis of Napoleonic War Era copper bolts
NASA Astrophysics Data System (ADS)
Malamud, Florencia; Northover, Shirley; James, Jon; Northover, Peter; Kelleher, Joe
2016-04-01
Neutron diffraction techniques are suitable for volume texture analyses due to high penetration of thermal neutrons in most materials. We have implemented a new data analysis methodology that employed the spatial resolution achievable by a time-of-flight neutron strain scanner to non-destructively determine the crystallographic texture at selected locations within a macroscopic sample. The method is based on defining the orientation distribution function of the crystallites from several incomplete pole figures, and it has been implemented on ENGIN-X, a neutron strain scanner at the Isis Facility in the UK. Here, we demonstrate the application of this new texture analysis methodology in determining the crystallographic texture at selected locations within museum quality archaeological objects up to 1 m in length. The results were verified using samples of similar, but less valuable, objects by comparing the results of applying this method with those obtained using both electron backscatter diffraction and X-ray diffraction on their cross sections.
Broadband multiresonator quantum memory-interface.
Moiseev, S A; Gerasimov, K I; Latypov, R R; Perminov, N S; Petrovnin, K V; Sherstyukov, O N
2018-03-05
In this paper we experimentally demonstrate a broadband scheme of the multiresonator quantum memory-interface. The microwave photonic scheme consists of a system of mini-resonators strongly interacting with a common broadband resonator coupled to an external waveguide. We have implemented impedance-matched quantum storage in this scheme via controllable tuning of the mini-resonator frequencies and of the coupling of the common resonator with the external waveguide. A proof-of-principle experiment was performed for broadband microwave pulses, achieving a quantum efficiency of 16.3% at room temperature. Using the obtained experimental spectroscopic data, the dynamics of the signal retrieval was simulated, and promising results were found for high-Q mini-resonators in the microwave and optical frequency ranges. The results pave the way for the experimental implementation of a broadband quantum memory-interface with quite high efficiency η > 0.99 on the basis of modern technologies, including optical quantum memory at room temperature.
NASA Astrophysics Data System (ADS)
Aloulou, R.; De Peslouan, P.-O. Lucas; Mnif, H.; Alicalapa, F.; Luk, J. D. Lan Sun; Loulou, M.
2016-05-01
Energy harvesting circuits are developed as an alternative solution to supply energy to autonomous sensor nodes in Wireless Sensor Networks. In this context, this paper presents a micro-power management system for multiple energy sources, based on a novel charge pump circuit design, to allow the total autonomy of self-powered sensors. This work proposes a low-voltage, high-performance charge pump (CP) suitable for implementation in standard complementary metal oxide semiconductor (CMOS) technologies. The CP design was implemented using Cadence Virtuoso with AMS 0.35 μm CMOS technology parameters. Its active area is 0.112 mm2. Measurements from chip testing were consistent with the simulation results. The circuit can operate from an 800 mV supply and generate a boosted output voltage of 2.835 V at a clock frequency of 1 MHz.
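As context for the reported figures (0.8 V in, 2.835 V out at 1 MHz), the textbook Dickson relation shows how stage count, clock frequency, and stage capacitance set an ideal charge pump's output. The paper presents a novel CP topology whose behavior will differ, so this is only a generic sketch:

```python
def dickson_vout(vdd, n_stages, i_load, f_clk, c_stage, v_t=0.0):
    """Ideal Dickson charge-pump output: roughly (n_stages + 1) voltage
    gains of (vdd - v_t), minus a droop of i_load / (f_clk * c_stage)
    per stage under load. Generic textbook relation, not the paper's
    novel CP topology."""
    return (n_stages + 1) * (vdd - v_t) - n_stages * i_load / (f_clk * c_stage)
```

The relation makes the design pressure visible: at a low 0.8 V supply, switch threshold losses (v_t) eat a large fraction of each stage's gain, which is what low-voltage CP topologies aim to avoid.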
Intelligent composting assisted by a wireless sensing network.
López, Marga; Martinez-Farre, Xavier; Casas, Oscar; Quilez, Marcos; Polo, Jose; Lopez, Oscar; Hornero, Gemma; Pinilla, Mirta R; Rovira, Carlos; Ramos, Pedro M; Borges, Beatriz; Marques, Hugo; Girão, Pedro Silva
2014-04-01
Monitoring the moisture and temperature of the composting process is a key factor in obtaining a quality product, beyond the quality of the raw materials. Current methodologies for monitoring these two parameters are time consuming for workers and sometimes not sufficiently reliable to support decision-making, and thus are ignored in some cases. This article describes an advance in the monitoring of the composting process through a Wireless Sensor Network (WSN) that allows measurement of temperature and moisture in real time at multiple points in the composting material, the Compo-ball system. To implement such measurement capabilities on-line, a WSN composed of multiple sensor nodes was designed and implemented to provide the staff with an efficient composting monitoring and management tool. After framing the problem, the objectives and characteristics of the WSN are briefly discussed and a short description of the hardware and software of the network's components is presented. Presentation and discussion of practical issues and results obtained with the WSN during a demonstration stage that took place at several composting sites concludes the paper. Copyright © 2014 Elsevier Ltd. All rights reserved.
Validation of a finite element method framework for cardiac mechanics applications
NASA Astrophysics Data System (ADS)
Danan, David; Le Rolle, Virginie; Hubert, Arnaud; Galli, Elena; Bernard, Anne; Donal, Erwan; Hernández, Alfredo I.
2017-11-01
Modeling cardiac mechanics is a particularly challenging task, mainly because of the poor understanding of the underlying physiology, the lack of observability and the complexity of the mechanical properties of myocardial tissues. The choice of cardiac mechanics solvers, in particular, entails several difficulties, notably due to the potential instability arising from the nonlinearities inherent to the large deformation framework. Furthermore, the verification of the obtained simulations is a difficult task because there are no analytic solutions for these kinds of problems. Hence, the objective of this work is to provide a quantitative verification of a cardiac mechanics implementation based on two published benchmark problems. The first problem consists in deforming a bar, whereas the second concerns the inflation of a truncated ellipsoid-shaped ventricle, both in the steady-state case. Simulations were obtained using the finite element software GETFEM++. Results were compared to the consensus solution published by 11 groups, and the proposed solutions were indistinguishable. The validation of the proposed mechanical model implementation is an important step toward the proposition of a global model of cardiac electro-mechanical activity.
Project management practice and its effects on project success in Malaysian construction industry
NASA Astrophysics Data System (ADS)
Haron, N. A.; Devi, P.; Hassim, S.; Alias, A. H.; Tahir, M. M.; Harun, A. N.
2017-12-01
Rapid economic development has increased the demand for construction of infrastructure and facilities globally. Sustainable development and globalization are the new ‘Zeitgeist’ of the 21st century. In order to implement these projects successfully, and to meet their functional aims within their lifetimes, efficient project management practice is needed. The aim of this study is to identify the critical success factors (CSFs) and the extent of use of project management practices that affect project success, especially during the implementation stage. A mixed-method data collection approach was adopted, combining semi-structured interviews with self-administered questionnaires completed by 232 respondents. The analysis showed that new and emerging criteria such as customer satisfaction, competency of the project team, and performance of subcontractors/suppliers are becoming measures of success in addition to the classic iron triangle’s view of time, cost and quality. An insight into the extent of use of different project management practices in the industry was also obtained from the study.
3D brain tumor localization and parameter estimation using thermographic approach on GPU.
Bousselham, Abdelmajid; Bouattane, Omar; Youssfi, Mohamed; Raihani, Abdelhadi
2018-01-01
The aim of this paper is to present a GPU parallel algorithm for brain tumor detection that estimates tumor size and location from the surface temperature distribution obtained by thermography. The normal brain tissue is modeled as a rectangular cube including a spherical tumor. The temperature distribution is calculated using the forward three-dimensional Pennes bioheat transfer equation, which is solved using a massively parallel Finite Difference Method (FDM) implemented on a Graphics Processing Unit (GPU). A Genetic Algorithm (GA) was used to solve the inverse problem and estimate the tumor size and location by minimizing an objective function comparing the measured surface temperatures to those obtained by numerical simulation. The parallel implementation of the Finite Difference Method significantly reduces the time of the bioheat transfer computation and greatly accelerates the inverse identification of the brain tumor's thermophysical and geometrical properties. Experimental results show significant gains in computational speed on the GPU, achieving a speedup of around 41 compared to the CPU. A performance analysis of the estimation as a function of tumor size inside the brain tissue is also presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
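The forward model above can be illustrated with a minimal sketch, here reduced to a 1-D slab with a single explicit finite-difference step; the function name `pennes_step` and all tissue constants are hypothetical stand-ins, not the paper's 3-D GPU implementation.

```python
# Explicit 1-D finite-difference step for the Pennes bioheat equation:
#   rho*c * dT/dt = k * d2T/dx2 + w_b*c_b*(T_a - T) + Q_m
# Illustrative parameter values; the paper solves the 3-D problem on a GPU.

def pennes_step(T, dx, dt, k=0.5, rho_c=3.7e6, wb_cb=40000.0, Ta=37.0, Qm=400.0):
    """One explicit Euler step; returns the updated temperature field."""
    Tn = T[:]
    for i in range(1, len(T) - 1):
        diffusion = k * (T[i - 1] - 2 * T[i] + T[i + 1]) / dx**2
        perfusion = wb_cb * (Ta - T[i])
        Tn[i] = T[i] + dt / rho_c * (diffusion + perfusion + Qm)
    return Tn

# Relax a slab held at 37 C on both ends toward steady state.
T = [37.0] * 21
T[10] = 40.0                      # warm spot mimicking a heat source
for _ in range(2000):
    T = pennes_step(T, dx=1e-3, dt=0.05)
```

With the chosen step sizes the explicit scheme is stable, and the warm spot decays toward the perfusion-dominated equilibrium near the arterial temperature.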
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
NASA Astrophysics Data System (ADS)
Dall'Asta, E.; Roncella, R.
2014-06-01
Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper focuses on the comparison of some stereo matching algorithms (local and global) that are very popular in both photogrammetry and computer vision. In particular, Semi-Global Matching (SGM), which performs pixel-wise matching and relies on the application of consistency constraints during matching cost aggregation, is discussed. The results of tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, are presented. Several algorithms and different implementations are considered in the comparison, using freeware codes such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
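The cost-aggregation step at the heart of SGM can be sketched for a single scanline path; the toy cost volume and default penalties below are illustrative, and a real implementation aggregates over 8 or 16 paths per pixel.

```python
# One-scanline cost aggregation as used in Semi-Global Matching (sketch):
#   L(p, d) = C(p, d) + min( L(p-1, d),
#                            L(p-1, d-1) + P1, L(p-1, d+1) + P1,
#                            min_k L(p-1, k) + P2 ) - min_k L(p-1, k)
# C is a per-pixel matching cost volume; P1/P2 penalize small/large jumps.

def aggregate_scanline(C, P1=1.0, P2=4.0):
    n_pix, n_disp = len(C), len(C[0])
    L = [C[0][:]]
    for p in range(1, n_pix):
        prev = L[-1]
        m = min(prev)
        row = []
        for d in range(n_disp):
            best = min(prev[d],
                       (prev[d - 1] + P1) if d > 0 else float("inf"),
                       (prev[d + 1] + P1) if d < n_disp - 1 else float("inf"),
                       m + P2)
            row.append(C[p][d] + best - m)
        L.append(row)
    return L

# Winner-take-all disparity per pixel after aggregation:
costs = [[3, 0, 3], [3, 1, 2], [3, 2, 1]]   # toy 3-pixel, 3-disparity volume
L = aggregate_scanline(costs)
disparities = [row.index(min(row)) for row in L]
```

The subtraction of `min_k L(p-1, k)` only bounds the accumulated values; it does not change the winning disparity.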
NASA Astrophysics Data System (ADS)
Gaburro, Nicola; Marchioro, Giacomo; Daffara, Claudia
2017-07-01
Surface metrology of artworks requires the design of suitable devices for in-situ non-destructive measurement, together with reliable procedures for an effective analysis of such non-engineered, variegate objects. To advance the state of the art, a versatile optical micro-profilometer has been implemented that takes advantage of the adaptability of conoscopic holography sensors, which are able to operate with the irregular shapes and composite materials (diffusive, specular, and polychrome) of artworks. The scanning technique is used to obtain wide-field, highly spatially resolved areal profilometry. The prototype has a modular scheme based on a set of conoscopic sensors, extending the typical design based on a scanning stage and a single probe with a limited bandwidth, thus allowing the collection of height data from surfaces of different scales and from materials with variegate optical responses. The system was optimized by characterizing the quality of the measurement with the probes triggered in continuous scanning modality. The results obtained on examples of cultural heritage objects (2D paintings, 3D height-relief) and materials (pictorial, metallic) demonstrate the versatility of the implemented device.
Low-template methods yield limited extra information for PowerPlex® Fusion 6C profiling.
Duijs, Francisca; van de Merwe, Linda; Sijen, Titia; Benschop, Corina C G
2018-06-01
Advances in autosomal DNA profiling systems enable the analysis of increased numbers of short tandem repeat (STR) loci in one reaction. Increasing the number of STR loci increases the amount of information that may be obtained from a (crime scene) sample. In this study, we examined whether even more allelic information can be obtained by applying low-template methods. To this aim, the performance of the PowerPlex® Fusion 6C STR typing system was assessed when increasing the number of PCR cycles or enhancing the capillary electrophoresis (CE) injection settings. Results show that applying these low-template methods yields limited extra information and comes at the cost of more background noise. In addition, the gain in allele detection was much smaller than the gain obtained when applying low-template methods to the 15-locus AmpFLSTR® NGM™ system. Consequently, the PowerPlex® Fusion 6C STR typing system was implemented using standard settings only; low-template methods were not implemented for our routine forensic casework. Copyright © 2018 Elsevier B.V. All rights reserved.
Leopold, Christine; Mantel-Teeuwisse, Aukje K; Vogler, Sabine; Valkova, Silvia; de Joncheere, Kees; Leufkens, Hubert G M; Wagner, Anita K; Ross-Degnan, Dennis; Laing, Richard
2014-09-01
To identify pharmaceutical policy changes during the economic recession in eight European countries and to determine whether policy measures resulted in lower sales of, and less expenditure on, pharmaceuticals. Information on pharmaceutical policy changes between 2008 and 2011 in eight European countries was obtained from publications and pharmaceutical policy databases. Data on the volume and value of the quarterly sales of products between 2006 and 2011 in the 10 highest-selling therapeutic classes in each country were obtained from a pharmaceutical market research database. We compared these indicators in economically stable countries (Austria, Estonia and Finland) to those in economically less stable countries (Greece, Ireland, Portugal, Slovakia and Spain). Economically stable countries implemented two to seven policy changes each, whereas less stable countries implemented 10 to 22 each. Of the 88 policy changes identified, 33 occurred in 2010 and 40 in 2011. They involved changing out-of-pocket payments for patients in 16 cases, price mark-up schemes in 13 and price cuts in 11. Sales volumes increased moderately in all countries except Greece and Portugal, which experienced slight declines after 2009. Sales values decreased in both groups of countries, but fell more in less stable countries. Less economically stable countries implemented more pharmaceutical policy changes during the recession than economically stable countries. Unexpectedly, pharmaceutical sales volumes increased in almost all countries, whereas sales values declined, especially in less stable countries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehtomäki, Jouko; Makkonen, Ilja; Harju, Ari
We present a computational scheme for orbital-free density functional theory (OFDFT) that simultaneously provides access to all-electron values and preserves the OFDFT linear scaling as a function of the system size. Using the projector augmented-wave method (PAW) in combination with real-space methods, we overcome some obstacles faced by other available implementation schemes. Specifically, the advantages of using the PAW method are twofold. First, PAW reproduces all-electron values offering freedom in adjusting the convergence parameters and the atomic setups allow tuning the numerical accuracy per element. Second, PAW can provide a solution to some of the convergence problems exhibited in other OFDFT implementations based on Kohn-Sham (KS) codes. Using PAW and real-space methods, our orbital-free results agree with the reference all-electron values with a mean absolute error of 10 meV and the number of iterations required by the self-consistent cycle is comparable to the KS method. The comparison of all-electron and pseudopotential bulk modulus and lattice constant reveal an enormous difference, demonstrating that in order to assess the performance of OFDFT functionals it is necessary to use implementations that obtain all-electron values. The proposed combination of methods is the most promising route currently available. We finally show that a parametrized kinetic energy functional can give lattice constants and bulk moduli comparable in accuracy to those obtained by the KS PBE method, exemplified with the case of diamond.
Implementation of a Virtual Microphone Array to Obtain High Resolution Acoustic Images
Izquierdo, Alberto; Suárez, Luis; Suárez, David
2017-01-01
Using arrays with digital MEMS (Micro-Electro-Mechanical System) microphones and FPGA-based (Field Programmable Gate Array) acquisition/processing systems allows building systems with hundreds of sensors at a reduced cost. The problem arises when systems with thousands of sensors are needed. This work analyzes the implementation and performance of a virtual array with 6400 (80 × 80) MEMS microphones. This virtual array is implemented by changing the position of a physical array of 64 (8 × 8) microphones in a grid with 10 × 10 positions, using a 2D positioning system, and achieves a spatial aperture of 1 × 1 m2. Based on the SODAR (SOund Detection And Ranging) principle, the measured beampattern and the focusing capacity of the virtual array have been analyzed, since the beamforming algorithms must assume spherical waves due to the large dimensions of the array in comparison with the distance between the target (a mannequin) and the array. Finally, acoustic images of the mannequin were obtained for different frequency and range values, showing high angular resolution and the possibility of identifying different parts of the mannequin's body. PMID:29295485
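Near-field (spherical-wave) focusing of this kind can be sketched with a delay-and-sum beamformer: each microphone's signal is advanced by the propagation delay from the focus point before summation. The geometry, sampling rate, and function names below are illustrative, not the paper's system.

```python
# Near-field delay-and-sum beamforming sketch: focus the array on a point by
# compensating the spherical-wave delay from that point to each microphone.
import math

C = 343.0            # speed of sound, m/s
F = 2000.0           # tone frequency, Hz

def steered_power(mics, signals, focus, fs):
    """Sum the signals after delay compensation toward `focus`; return power."""
    delays = [math.dist(m, focus) / C for m in mics]
    d0 = min(delays)
    n = len(signals[0])
    out = []
    for t in range(n):
        s = 0.0
        for sig, d in zip(signals, delays):
            shift = int(round((d - d0) * fs))      # integer-sample steering
            if t + shift < n:
                s += sig[t + shift]
        out.append(s)
    return sum(v * v for v in out) / n

fs = 100_000.0
mics = [(x * 0.1, 0.0) for x in range(8)]          # 8-element line, 10 cm pitch
src = (0.35, 1.0)                                  # true source position
sigs = [[math.sin(2 * math.pi * F * (t / fs - math.dist(m, src) / C))
         for t in range(400)] for m in mics]

p_focus = steered_power(mics, sigs, src, fs)       # focused on the source
p_away = steered_power(mics, sigs, (1.5, 1.0), fs) # focused elsewhere
```

When the focus matches the source, the per-microphone delays cancel exactly and the channels add coherently; steering elsewhere scrambles the phases and the output power drops sharply.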
Implementation and Validation of the Chien k-epsilon Turbulence Model in the Wind Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Yoder, Dennis A.; Georgiadis, Nicholas J.
1999-01-01
The two-equation k-epsilon turbulence model of Chien has been implemented in the WIND Navier-Stokes flow solver. Details of the numerical solution algorithm, initialization procedure, and stability enhancements are described. Results obtained with this version of the model are compared with those from the Chien k-epsilon model in the NPARC Navier-Stokes code and from the WIND SST model for three validation cases: the incompressible flow over a smooth flat plate, the incompressible flow over a backward-facing step, and the shock-induced flow separation inside a transonic diffuser. The k-epsilon model results indicate that the WIND model functions very similarly to that in NPARC, though the WIND code appears to be slightly more accurate in the treatment of the near-wall region. Comparisons of the k-epsilon model results with those from the SST model were less definitive, as each model exhibited strengths and weaknesses for each particular case.
Knowledge-based approach to video content classification
NASA Astrophysics Data System (ADS)
Chen, Yu; Wong, Edward K.
2001-01-01
A framework for video content classification using a knowledge-based approach is proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
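MYCIN's evidence-combination rule can be sketched in a few lines; the toy rules, feature names, and certainty-factor values below are hypothetical, standing in for the CLIPS rule base described above.

```python
# MYCIN-style certainty-factor combination used to merge evidence from
# several fired rules (sketch; rule set and features are illustrative).

def combine_cf(cf1, cf2):
    """Combine two certainty factors in [-1, 1] per MYCIN's rule."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Toy rules: feature predicate -> (class label, certainty factor)
rules = [
    (lambda f: f["motion"] > 0.7,       ("sports", 0.6)),
    (lambda f: f["green_ratio"] > 0.4,  ("sports", 0.5)),
    (lambda f: f["text_density"] > 0.5, ("news",   0.8)),
]

def classify(features):
    """Fire all matching rules and keep the class with the highest combined CF."""
    scores = {}
    for pred, (label, cf) in rules:
        if pred(features):
            scores[label] = combine_cf(scores.get(label, 0.0), cf)
    return max(scores, key=scores.get) if scores else None
```

Two supporting rules with CFs 0.6 and 0.5 combine to 0.8, never exceeding 1, which is the property that makes the scheme usable for accumulating uncertain evidence.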
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplanoglu, Erkan; Safak, Koray K.; Varol, H. Selcuk
2009-01-12
An experiment-based method is proposed for parameter estimation of a class of linear multivariable systems. The method was applied to a pressure-level control process. Experimental time-domain input/output data were utilized in a gray-box modeling approach. Prior knowledge of the form of the system transfer function matrix elements is assumed. Continuous-time system transfer function matrix parameters were estimated in real time by the least-squares method. Simulations using the experimentally determined system transfer function matrix compare very well with the experimental results. For comparison, and as an alternative to the proposed real-time estimation method, we also implemented an offline identification method using artificial neural networks and obtained fairly good results. The proposed methods can be implemented conveniently on a desktop PC equipped with a data acquisition board for parameter estimation of moderately complex linear multivariable systems.
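The least-squares step can be illustrated on the simplest gray-box case, a first-order discrete-time model fitted from input/output records; the function and signal names are hypothetical, and the paper's method works with continuous-time transfer-function matrices.

```python
# Least-squares estimation of a first-order discrete model
#   y[k+1] = a*y[k] + b*u[k]
# from input/output data (gray-box sketch).

def fit_first_order(u, y):
    """Solve the 2x2 normal equations for (a, b)."""
    Syy = Syu = Suu = Sy1y = Sy1u = 0.0
    for k in range(len(y) - 1):
        Syy += y[k] * y[k]
        Syu += y[k] * u[k]
        Suu += u[k] * u[k]
        Sy1y += y[k + 1] * y[k]
        Sy1u += y[k + 1] * u[k]
    det = Syy * Suu - Syu * Syu
    a = (Sy1y * Suu - Sy1u * Syu) / det
    b = (Sy1u * Syy - Sy1y * Syu) / det
    return a, b

# Simulate a known plant and recover its parameters from the data.
a_true, b_true = 0.9, 0.5
u = [1.0 if k % 7 < 3 else -1.0 for k in range(200)]   # persistently exciting
y = [0.0]
for k in range(199):
    y.append(a_true * y[k] + b_true * u[k])
a_hat, b_hat = fit_first_order(u, y)
```

With noise-free data and a persistently exciting input, the normal equations recover the true parameters to machine precision; with measurement noise the same estimator is still the least-squares optimum.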
Jabłoński, Michał; Starčuková, Jana; Starčuk, Zenon
2017-01-23
Proton magnetic resonance spectroscopy is a non-invasive measurement technique which provides information about the concentrations of up to 20 metabolites participating in intracellular biochemical processes. In order to obtain any metabolic information from measured spectra, they must be processed in specialized software, such as jMRUI. The processing is interactive and complex and often requires many trials before a correct result is obtained. This paper proposes a jMRUI enhancement for efficient and unambiguous history tracking and file identification. A database storing all processing steps, parameters and files used in processing was developed for jMRUI. The solution was developed in Java; the authors used an SQL database for robust storage of parameters and SHA-256 hash codes for unambiguous file identification. The developed system is integrated directly in jMRUI and will be publicly available. A graphical user interface was implemented in order to make the user experience more comfortable. The database operation is invisible to the ordinary user; all tracking operations are performed in the background. The implemented jMRUI database is a tool that can significantly help the user track the processing history performed on data in jMRUI. The created tool is designed to be user-friendly, robust and easy to use. The database GUI allows the user to browse the whole processing history of a selected file and learn, e.g., what processing led to the results and where the original data are stored, or to obtain the list of all processing actions performed on spectra.
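The core idea, identifying a file by the SHA-256 hash of its contents and keying a SQL table on that hash, can be sketched as follows; the table schema and names are illustrative, not the actual jMRUI database layout.

```python
# Unambiguous file identification by SHA-256 content hash, stored in a small
# SQL table (sketch; schema and column names are illustrative).
import hashlib
import sqlite3

def file_id(data: bytes) -> str:
    """Content hash: identical bytes always map to the same identifier."""
    return hashlib.sha256(data).hexdigest()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (sha256 TEXT PRIMARY KEY, name TEXT)")

def register(name: str, data: bytes) -> str:
    """Record a file; re-registering identical content is a no-op."""
    h = file_id(data)
    db.execute("INSERT OR IGNORE INTO files VALUES (?, ?)", (h, name))
    return h

h1 = register("spectrum_raw.mrui", b"raw spectral data")
h2 = register("spectrum_copy.mrui", b"raw spectral data")   # same content
```

Because the key is derived from the bytes rather than the filename, renamed or copied files are recognized as the same data, which is exactly what unambiguous history tracking requires.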
Design and implementation of robust controllers for a gait trainer.
Wang, F C; Yu, C H; Chou, T Y
2009-08-01
This paper applies robust algorithms to control an active gait trainer for children with walking disabilities. Compared with traditional rehabilitation procedures, in which two or three trainers are required to assist the patient, a motor-driven mechanism was constructed to improve the efficiency of the procedures. First, a six-bar mechanism was designed and constructed to mimic the trajectory of children's ankles in walking. Second, system identification techniques were applied to obtain system transfer functions at different operating points by experiments. Third, robust control algorithms were used to design H∞ robust controllers for the system. Finally, the designed controllers were implemented to verify the system performance experimentally. From the results, the proposed robust control strategies are shown to be effective.
Flight Experiment Verification of Shuttle Boundary Layer Transition Prediction Tool
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Berger, Karen T.; Horvath, Thomas J.; Wood, William A.
2016-01-01
Boundary layer transition at hypersonic conditions is critical to the design of future high-speed aircraft and spacecraft. Accurate methods to predict transition would directly impact the aerothermodynamic environments used to size a hypersonic vehicle's thermal protection system. A transition prediction tool, based on wind tunnel derived discrete roughness correlations, was developed and implemented for the Space Shuttle return-to-flight program. This tool was also used to design a boundary layer transition flight experiment in order to assess correlation uncertainties, particularly with regard to high Mach-number transition and tunnel-to-flight scaling. A review is provided of the results obtained from the flight experiment in order to evaluate the transition prediction tool implemented for the Shuttle program.
Isotalo, Aarno E.; Wieselquist, William A.
2015-05-15
A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which shows the new solver to be orders of magnitude more accurate. Lastly, in most cases, the new solver is up to several times faster because it does not require substepping similar to that of the original solver.
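The time-dependent-feed idea can be illustrated on the scalar case: a single nuclide with decay constant lam and a feed rate that is linear in time has a closed-form solution via the integrating factor. This is a toy stand-in for the matrix CRAM solve, and all constants are illustrative.

```python
# Single-nuclide depletion with a linear-in-time external feed,
#   dN/dt = -lam*N + f0 + f1*t,
# solved exactly via the integrating factor.
import math

def depleted(N0, lam, f0, f1, t):
    """Exact solution of dN/dt = -lam*N + f0 + f1*t."""
    e = math.exp(-lam * t)
    return (N0 * e
            + (f0 / lam) * (1 - e)                  # constant-feed term
            + (f1 / lam) * (t - (1 - e) / lam))     # linear-feed term

# Cross-check against brute-force Euler integration with a tiny step.
lam, f0, f1 = 0.3, 2.0, 0.5
N, dt, steps = 10.0, 1e-4, 50000                    # integrate to t = 5
for i in range(steps):
    N += dt * (-lam * N + f0 + f1 * (i * dt))
N_exact = depleted(10.0, lam, f0, f1, 5.0)
```

The exact expression lets a solver take one long step per interval of polynomial feed, which is the advantage the abstract attributes to avoiding substepping.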
Commissioning of the CMS Hadron Forward Calorimeters Phase I Upgrade
NASA Astrophysics Data System (ADS)
Bilki, B.; Onel, Y.
2018-03-01
The final phase of the CMS Hadron Forward Calorimeters Phase I Upgrade was performed during the Extended Year-End Technical Stop of 2016-2017. In the framework of the upgrade, the PMT boxes were reworked to implement two-channel readout in order to exploit the benefits of the multi-anode PMTs in background tagging and signal recovery. The front-end electronics were also upgraded to QIE10-based electronics, which implement a larger dynamic range and a 6-bit TDC. Following this major upgrade, the Hadron Forward Calorimeters were commissioned for operational readiness in 2017. Here we describe the details and the components of the upgrade, and discuss the operational experience and results obtained during the upgrade and commissioning.
Minimalist design of a robust real-time quantum random number generator
NASA Astrophysics Data System (ADS)
Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.
2015-08-01
We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.
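The deterministic post-processing stage can be sketched with the classic von Neumann debiasing extractor, used here as a minimal stand-in for the look-up-table extractor described above; the interval digitizer `raw_bits` is purely illustrative.

```python
# Deterministic post-processing sketch: turn raw inter-click intervals into
# bits, then debias them with a von Neumann extractor.

def raw_bits(intervals):
    """1 if an interval exceeds its successor, else 0 (illustrative digitizer)."""
    return [1 if a > b else 0 for a, b in zip(intervals, intervals[1:])]

def von_neumann(bits):
    """Map pairs 01 -> 0, 10 -> 1; discard 00 and 11. The output is unbiased
    whenever the input bits are independent with any fixed bias."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(1 if a > b else 0)
    return out
```

Like the look-up-table extractor in the paper, this mapping is a fixed function of the raw bits, so it can run on-the-fly at a cost of roughly one table access per input pair, at the price of discarding some raw bits.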
Multireference adaptive noise canceling applied to the EEG.
James, C J; Hagan, M T; Jones, R D; Bones, P J; Carroll, G J
1997-08-01
The technique of multireference adaptive noise canceling (MRANC) is applied to enhance transient nonstationarities in the electroencephalogram (EEG), with the adaptation implemented by means of a multilayer-perceptron artificial neural network (ANN). The method was applied to recorded EEG segments and the performance on documented nonstationarities was recorded. The results show that the neural network (nonlinear) gives an improvement in performance (i.e., in the signal-to-noise ratio (SNR) of the nonstationarities) compared to a linear implementation of MRANC. In both cases an improvement in the SNR was obtained. The advantage of the spatial filtering aspect of MRANC is highlighted when its performance is compared to that of inverse auto-regressive filtering of the EEG, a purely temporal filter.
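The linear variant of MRANC can be sketched with an LMS adaptive combiner: reference channels carrying correlated background are subtracted from the primary channel, leaving the transient. The signals, step size, and function name below are synthetic illustrations, not EEG data.

```python
# Linear multireference noise canceling via LMS (sketch): the primary channel
# is EEG-like background plus a transient; reference channels carry the
# correlated background that the filter learns to subtract.
import math

def mranc_lms(primary, refs, mu=0.05):
    """Return the residual after subtracting the adapted reference mixture."""
    w = [0.0] * len(refs)
    out = []
    for t, d in enumerate(primary):
        x = [r[t] for r in refs]
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = d - y                      # residual = enhanced signal
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        out.append(e)
    return out

# Background = fixed mixture of two references; transient spike at t = 400.
refs = [[math.sin(0.07 * t) for t in range(500)],
        [math.cos(0.11 * t) for t in range(500)]]
primary = [0.8 * refs[0][t] - 0.6 * refs[1][t] + (5.0 if t == 400 else 0.0)
           for t in range(500)]
residual = mranc_lms(primary, refs)
```

After the weights converge, the rhythmic background is canceled while the transient, which is uncorrelated with the references, passes through almost untouched; the nonlinear ANN version replaces the linear combiner with a multilayer perceptron.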
Data collection system for a wide range of gas-discharge proportional neutron counters
NASA Astrophysics Data System (ADS)
Oskomov, V.; Sedov, A.; Saduyev, N.; Kalikulov, O.; Kenzhina, I.; Tautaev, E.; Mukhamejanov, Y.; Dyachkov, V.; Utey, Sh
2017-12-01
This article describes the development and creation of a universal data collection system for measuring the intensity of pulsed signals. As a result of a careful analysis of the timing and operating conditions of the software and hardware complex, circuit solutions were selected that meet the required specifications: the frequency response was optimized to obtain the maximum signal-to-noise ratio; methods and modes of operation of the microcontroller were worked out to implement continuous measurement of the signal amplitude at the amplifier output and to send the data to a computer; and control of the high-voltage source was implemented. A preliminary program, which works on a particular algorithm, has been developed for the microcontroller in its simplest form.
Benchmarking GPU and CPU codes for Heisenberg spin glass over-relaxation
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Parisi, G.; Parisi, L.
2011-06-01
We present a set of possible implementations for Graphics Processing Units (GPU) of the over-relaxation technique applied to the 3D Heisenberg spin glass model. The results show that a carefully tuned code can achieve more than 100 GFlops/s of sustained performance and update a single spin in about 0.6 nanoseconds. A multi-hit technique that exploits the GPU shared memory further reduces this time. These results are compared with those obtained by means of a highly tuned vector-parallel code on latest-generation multi-core CPUs.
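The over-relaxation move itself is a one-line reflection of the spin about its local molecular field; the sketch below shows the update for a single spin (the field value is an illustrative placeholder for the sum over neighbors).

```python
# Microcanonical over-relaxation move for a Heisenberg spin (sketch): reflect
# the spin about its local molecular field h, which preserves the energy
# term -s.h while decorrelating the configuration.

def overrelax(s, h):
    """Return s' = 2*(s.h)/(h.h) * h - s."""
    sh = sum(si * hi for si, hi in zip(s, h))
    hh = sum(hi * hi for hi in h)
    return tuple(2.0 * sh / hh * hi - si for si, hi in zip(s, h))

s = (1.0, 0.0, 0.0)        # unit spin
h = (1.0, 1.0, 0.0)        # local field from neighboring spins (placeholder)
s2 = overrelax(s, h)
```

Because the move changes neither the spin length nor the projection on the field, it can be applied deterministically to every spin of a sublattice in parallel, which is what makes it so well suited to GPUs.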
The worldwide market for photovoltaics in the rural sector
NASA Technical Reports Server (NTRS)
Brainard, W. A.
1982-01-01
Attention is given to the assessment of results obtained by three NASA studies aimed at determining the global market for stand-alone photovoltaic (PV) power systems in the village power, cottage industry, and agricultural applications areas of the rural sector. An attempt was made to identify technical, social, and institutional barriers to PV system implementation, as well as the funding sources available to potential users. Country- and sector-specific results are discussed, and marketing strategies appropriate for each sector are suggested for the benefit of American PV products manufacturers.
Modeling laser-driven electron acceleration using WARP with Fourier decomposition
Lee, P.; Audet, T. L.; Lehe, R.; ...
2015-12-31
WARP is used with the recent implementation of the Fourier decomposition algorithm to model laser-driven electron acceleration in plasmas. Simulations were carried out to analyze the experimental results obtained on ionization-induced injection in a gas cell. The simulated results are in good agreement with the experimental ones, confirming the ability of the code to take into account the physics of electron injection and reduce calculation time. We present a detailed analysis of the laser propagation, the plasma wave generation and the electron beam dynamics.
Experimental and computational flow-field results for an all-body hypersonic aircraft
NASA Technical Reports Server (NTRS)
Cleary, Joseph W.
1989-01-01
A comprehensive test program is defined which is being implemented in the NASA/Ames 3.5 foot Hypersonic Wind Tunnel for obtaining data on a generic all-body hypersonic vehicle for computational fluid dynamics (CFD) code validation. Computational methods (approximate inviscid methods and an upwind parabolized Navier-Stokes code) currently being applied to the all-body model are outlined. Experimental and computational results on surface pressure distributions and Pitot-pressure surveys for the basic sharp-nose model (without control surfaces) at a free-stream Mach number of 7 are presented.
Multiple Optical Filter Design Simulation Results
NASA Astrophysics Data System (ADS)
Mendelsohn, J.; Englund, D. C.
1986-10-01
In this paper we continue our investigation of the application of matched filters to robotic vision problems. Specifically, we are concerned with the tray-picking problem. Our principal interest in this paper is the examination of summation effects which arise from attempting to reduce the matched filter memory size by averaging matched filters. While matched filtering theory is ideally implemented for pattern recognition or machine vision through optics and optical correlators, the results in this paper were obtained through a digital simulation of the optical process.
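A digital stand-in for the optical correlator is a sliding dot product, and the averaging question can be posed directly on it: correlating with the mean of several templates still localizes the part but with a weaker, less selective peak. The toy signals below are illustrative.

```python
# Matched filtering by cross-correlation (digital stand-in for the optical
# correlator), plus the averaged-filter variant: the mean of several
# templates trades filter memory for discrimination.

def correlate(signal, template):
    """Sliding dot product; the peak index locates the template."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

part_a = [0, 1, 3, 1, 0]
part_b = [0, 2, 2, 2, 0]
scene = [0, 0, 0] + part_a + [0, 0, 0, 0]      # part A placed at offset 3

scores_a = correlate(scene, part_a)            # exact matched filter
avg_filter = [(a + b) / 2 for a, b in zip(part_a, part_b)]
scores_avg = correlate(scene, avg_filter)      # memory-saving averaged filter
```

The exact filter peaks at the true offset with the template's energy (11 here); the averaged filter still peaks at the same offset but with a reduced score, which is the summation effect under study.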
Ishaak, Fariel; de Vries, Nanne Karel; van der Wolf, Kees
2014-06-11
In this article, the test implementation of a school-oriented drug prevention program, "Study without Drugs", is discussed. The aims of this study were to determine the results of the process evaluation and to determine whether the proposed school-oriented drug prevention program was effective for the participating pupils during a pilot project. Sixty second-grade pupils at a junior high school in Paramaribo, Suriname participated in the test implementation. They were divided into two classes. For the process evaluation the students completed a structured questionnaire focusing on content and teaching method after every lesson. Lessons were qualified with a score from 0-10. The process was also evaluated by the teachers through structured interviews. Attention was paid to reach, dose delivered, dose received, fidelity, connection, achieved effects/observed behaviors, areas for improvement, and lesson strengths. The effect evaluation was conducted using the General Linear Model (repeated measures). The research design was pre-experimental, with pre- and post-tests. No class or sex differences were detected among the pupils with regard to the assessment of content, methodology, and qualification of the lessons. Post-testing showed that participating pupils obtained increased knowledge of drugs, their drug-resisting skills were enhanced, and behavioral determinants (attitude, subjective norm, self-efficacy, and intention) became more negative towards drugs. From the results of the test implementation it can be cautiously concluded that the program "Study without Drugs" may yield positive results when applied in schools. Thus, this pilot program can be considered a step towards the development and implementation of an evidence-based school-oriented program for pupils in Suriname.
NASA Astrophysics Data System (ADS)
Paul, Regina J.
This study examined the success rate of IEEIA inservice training, and attempted to identify key variables that influenced successful implementation of the IEEIA curriculum. The study used both quantitative and qualitative methods to obtain in-depth data. The total sample consisted of 251 participants; 132 usable surveys were returned resulting in a 53% response rate. The quantitative phase of the study consisted of a nine-page survey. The survey was designed to determine the effectiveness of inservice teacher training for the implementation of IEEIA, teachers' implementation, and their perceptions of the effectiveness of the inservice, the impact using IEEIA had on students and themselves, and barriers that prevented complete implementation. Additional analyses examined the relationships between use of the approach with the variables of length and type of training, and support types teachers received. The second phase analyzed both comments written by the respondents on their surveys, and eight teacher interviews. The research found that teachers perceived their workshops to be between moderately to very effective in helping them develop skills related to IEEIA and for teaching them how to implement it with students. Analyses revealed that teachers who received extended training or attended multiple inservices tended to use IEEIA more than teachers who did not. However, the number of years the teachers had been using the approach had a stronger influence for addressing the action components. Over half of the teachers had used the approach. Support after the inservice was important to implementation. The component of having students conduct an actual issue investigation was addressed the most. Fewer teachers addressed the final component of action by having their students resolve the issues they investigated. However, the teachers who fully implemented IEEIA had students who were active in their communities. 
Teachers perceived using the approach resulted in positive impacts for students, themselves, and their communities.
Calais, Jeremie; Fendler, Wolfgang P; Eiber, Matthias; Gartmann, Jeannine; Chu, Fang-I; Nickols, Nicholas G; Reiter, Robert E; Rettig, Matthew B; Marks, Leonard S; Ahlering, Thomas E; Huynh, Linda M; Slavik, Roger; Gupta, Pawan; Quon, Andrew; Allen-Auerbach, Martin S; Czernin, Johannes; Herrmann, Ken
2018-03-01
In this prospective survey of referring physicians, we investigated whether and how 68Ga-labeled prostate-specific membrane antigen 11 (68Ga-PSMA-11) PET/CT affects the implemented management of prostate cancer patients with biochemical recurrence (BCR). Methods: We conducted a prospective survey of physicians (NCT02940262) who referred 161 patients with prostate cancer BCR (median prostate-specific antigen value, 1.7 ng/mL; range, 0.05-202 ng/mL). Referring physicians completed one questionnaire before the scan to indicate the treatment plan without 68Ga-PSMA-11 PET/CT information (Q1; n = 101), one immediately after the scan to denote intended management changes (Q2; n = 101), and one 3-6 mo later to document the final implemented management (Q3; n = 56). The implemented management was also obtained via electronic chart review or patient contact (n = 45). Results: A complete documented management strategy (Q1 + Q2 + implemented management) was available for 101 of 161 patients (63%). Seventy-six of these (75%) had a positive 68Ga-PSMA-11 PET/CT result. The implemented management differed from the prescan intended management (Q1) in 54 of 101 patients (53%). The postscan intended management (Q2) differed from the prescan intended management (Q1) in 62 of 101 patients (61%); however, these intended changes were not implemented in 29 of 62 patients (47%). Pelvic nodal and extrapelvic metastatic disease on 68Ga-PSMA-11 PET/CT (PSMA T0N1M0 and PSMA T0N1M1 patterns) was significantly associated with implemented management changes (P = 0.001 and P = 0.05). Conclusion: Information from 68Ga-PSMA-11 PET/CT brings about management changes in more than 50% of prostate cancer patients with BCR (54/101; 53%). However, intended management changes early after 68Ga-PSMA-11 PET/CT frequently differ from implemented management changes. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
Development and formative evaluation of the e-Health Implementation Toolkit (e-HIT)
2010-01-01
Background The use of Information and Communication Technology (ICT) or e-Health is seen as essential for a modern, cost-effective health service. However, there are well documented problems with implementation of e-Health initiatives, despite the existence of a great deal of research into how best to implement e-Health (an example of the gap between research and practice). This paper reports on the development and formative evaluation of an e-Health Implementation Toolkit (e-HIT) which aims to summarise and synthesise new and existing research on implementation of e-Health initiatives, and present it to senior managers in a user-friendly format. Results The content of the e-HIT was derived by combining data from a systematic review of reviews of barriers and facilitators to implementation of e-Health initiatives with qualitative data derived from interviews of "implementers", that is people who had been charged with implementing an e-Health initiative. These data were summarised, synthesised and combined with the constructs from the Normalisation Process Model. The software for the toolkit was developed by a commercial company (RocketScience). Formative evaluation was undertaken by obtaining user feedback. There are three components to the toolkit - a section on background and instructions for use aimed at novice users; the toolkit itself; and the report generated by completing the toolkit. It is available to download from http://www.ucl.ac.uk/pcph/research/ehealth/documents/e-HIT.xls Conclusions The e-HIT shows potential as a tool for enhancing future e-Health implementations. Further work is needed to make it fully web-enabled, and to determine its predictive potential for future implementations. PMID:20955594
NASA Astrophysics Data System (ADS)
Pakpahan, N. F. D. B.
2018-01-01
Research methodology is a course whose material must be mastered by students before they undertake their theses. Implementation of the lessons should create conditions for learning that is active, interactive, and effective, here through Team Assisted Individualization (TAI) cooperative learning. The purposes of this study were: 1) to improve student learning outcomes in the research methodology course under TAI cooperative learning; 2) to improve teaching activities; and 3) to improve learning activities. This study is classroom action research conducted at the Department of Civil Engineering, Universitas Negeri Surabaya. The research subjects were 30 students and the course lecturer. In the first cycle, 20 students (67%) achieved mastery and 10 students (33%) did not. In the second cycle, 26 students (87%) achieved mastery and 4 students (13%) did not, an increase in learning outcomes of 20%. Teaching activities scored 3.15 (fair) in the first cycle and 4.22 (good) in the second. Learning activities scored 3.05 (fair) in the first cycle and 3.95 (good) in the second.
Gravitational decoupled anisotropies in compact stars
NASA Astrophysics Data System (ADS)
Gabbanelli, Luciano; Rincón, Ángel; Rubio, Carlos
2018-05-01
Simple generic extensions of isotropic Durgapal-Fuloria stars to the anisotropic domain are presented. These anisotropic solutions are obtained by guided minimal deformations of the isotropic system. When the anisotropic sector interacts in a purely gravitational manner, the conditions to decouple both sectors by means of the minimal geometric deformation approach are satisfied. Hence the anisotropic field equations are isolated, resulting in a more tractable set. The simplicity of the equations allows one to manipulate the anisotropies, which can be implemented in a systematic way to obtain different realistic models for anisotropic configurations. Observational effects of such anisotropies when measuring the surface redshift are then discussed. To conclude, the consistency of applying the method to the obtained anisotropic configurations is shown. In this manner, different anisotropic sectors can be isolated from each other and modeled in a simple and systematic way.
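The minimal geometric deformation the abstract relies on can be written schematically as follows. The symbols α, f*(r), and θμν follow common conventions in the MGD literature and are assumptions here, since the abstract itself defines no notation.

```latex
% The radial metric component of the isotropic seed is minimally deformed,
% so the total source splits into the isotropic energy-momentum tensor
% plus a gravitationally decoupled anisotropic sector \theta_{\mu\nu}:
\begin{align}
  e^{-\lambda(r)} &= \mu(r) + \alpha\, f^{*}(r), \\
  \tilde{T}_{\mu\nu} &= T_{\mu\nu} + \alpha\, \theta_{\mu\nu}.
\end{align}
```

With this split, the field equations for the θ-sector close on their own, which is the "more tractable set" the abstract refers to.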
A genetic algorithm application in backcross breeding problem
NASA Astrophysics Data System (ADS)
Carnia, E.; Napitupulu, H.; Supriatna, A. K.
2018-03-01
In this paper we discuss a mathematical model of a goat breeding strategy, namely backcross breeding. The model is aimed at obtaining a strategy for producing a better variant of the species. In this strategy, a female (doe) of a lesser-quality goat is bred with a male (buck) of an exotic goat of better quality. We explore the problem of how to harvest the resulting population optimally. A genetic algorithm (GA) approach is devised to obtain the solution of the problem. We performed several trials of the GA implementation, which gave different sets of solutions that were nevertheless close to each other in terms of the resulting total revenue, with a few exceptions. Further study is needed to obtain a GA solution closer to the exact solution.
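The GA machinery mentioned above can be sketched on a toy stand-in objective. The revenue curve, population size, and operators below are illustrative assumptions, not the paper's breeding model; they show why repeated GA trials land near the same revenue.

```python
import random

random.seed(42)

def revenue(h):
    """Hypothetical concave revenue curve over harvest fraction h; peak at h = 0.5."""
    return 100.0 * h * (1.0 - h)

def ga(pop_size=30, generations=60, mut_rate=0.2, sigma=0.05):
    """Minimal real-coded GA over a single harvest fraction h in [0, 1]."""
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # tournament selection (size 3)
        parents = [max(random.sample(pop, 3), key=revenue) for _ in range(pop_size)]
        # arithmetic crossover on consecutive parent pairs
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            w = random.random()
            children.append(w * a + (1.0 - w) * b)
            children.append((1.0 - w) * a + w * b)
        # Gaussian mutation, clipped back into [0, 1]
        pop = [min(1.0, max(0.0, c + random.gauss(0.0, sigma)))
               if random.random() < mut_rate else c
               for c in children]
    return max(pop, key=revenue)

best = ga()
```

Rerunning with different seeds gives slightly different `best` values whose revenues cluster tightly, mirroring the trial-to-trial behavior the abstract reports.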
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's method of curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capability because of the restriction that the time range must be given in equal increments to obtain the best curve fit for a given set of data. The procedure used in this paper exploits the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) to obtain the equal time increments. The resulting information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least-squares solution can be applied to obtain the final form of the equation.
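Prony's two-step procedure — solve for the exponential constants first, then a simple least-squares fit for the amplitudes — can be sketched for the two-exponential case over equally spaced samples. This is a generic reconstruction of the classical method, not the I.G.D.S. program described above.

```python
import math

def solve2(M, v):
    """Solve a 2x2 linear system M x = v by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((v[0] * M[1][1] - v[1] * M[0][1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det)

def prony2(y, dt):
    """Fit y[k] ~ A1*exp(s1*k*dt) + A2*exp(s2*k*dt) to equally spaced samples.

    Step 1: least-squares fit of the linear recurrence y[k+2] = a1*y[k+1] + a0*y[k].
    Step 2: roots z of z^2 - a1*z - a0 give the decay rates s = ln(z)/dt.
    Step 3: least-squares fit of the amplitudes A1, A2.
    """
    n = len(y)
    S11 = S10 = S00 = b1 = b0 = 0.0
    for k in range(n - 2):
        S11 += y[k + 1] ** 2; S10 += y[k + 1] * y[k]; S00 += y[k] ** 2
        b1 += y[k + 2] * y[k + 1]; b0 += y[k + 2] * y[k]
    a1, a0 = solve2([[S11, S10], [S10, S00]], [b1, b0])
    disc = math.sqrt(a1 * a1 + 4.0 * a0)   # assumes real, distinct roots
    z1, z2 = (a1 + disc) / 2.0, (a1 - disc) / 2.0
    s1, s2 = math.log(z1) / dt, math.log(z2) / dt
    T11 = T12 = T22 = c1 = c2 = 0.0
    for k in range(n):
        p, q = z1 ** k, z2 ** k
        T11 += p * p; T12 += p * q; T22 += q * q
        c1 += y[k] * p; c2 += y[k] * q
    A1, A2 = solve2([[T11, T12], [T12, T22]], [c1, c2])
    return (A1, s1), (A2, s2)

# Recover a known two-exponential signal sampled at equal time increments
dt = 0.1
y = [2.0 * math.exp(-0.5 * k * dt) + 5.0 * math.exp(-2.0 * k * dt) for k in range(20)]
(A1, s1), (A2, s2) = prony2(y, dt)
```

The equal-increment requirement enters in step 1: the linear recurrence only holds when the samples are evenly spaced, which is exactly why the paper uses I.G.D.S. to resample the data first.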
Morino, Taichi; Okazaki, Mitsuhiro; Toda, Takaki; Yokoyama, Takashi
2015-12-01
Recently, the abuse of designer drugs has become a social problem. Designer drugs are created by modifying part of the chemical structure of drugs that have already been categorized as illegal, thereby creating a different chemical compound in order to evade Pharmaceutical Affairs Law regulations. The new comprehensive system for designating illegal drug components has been in effect since March 2013, and many designer drugs can now be regulated. We conducted an online questionnaire survey of people with a history of designer drug use to elucidate the effects of the new system on the abuse of designer drugs and to identify potential future problems. Over half the subjects obtained designer drugs only before the new system was implemented. Awareness of the system was significantly lower among subjects who obtained designer drugs for the first time after its introduction than those who obtained the drugs only before its implementation. Due to the new system, all methods of acquiring designer drugs saw decreases in activity. However, the ratio of the acquisition of designer drugs via the Internet increased. Since over 50% of the subjects never obtained designer drugs after the new system was introduced, goals that aimed to make drug procurement more difficult were achieved. However, awareness of the new system among subjects who obtained designer drugs after the new system was introduced was significantly low. Therefore, fostering greater public awareness of the new system is necessary. The results of the questionnaire also suggested that acquiring designer drugs through the Internet has hardly been affected by the new system. We strongly hope that there will be a greater push to restrict the sale of designer drugs on the Internet in the near future.
NASA Technical Reports Server (NTRS)
Lathrop, J. W.; Prince, J. L.
1979-01-01
Results obtained include the definition of a simplified stress test schedule for terrestrial solar cells based on the work performed during the first program year, and the design and fabrication of improved jigs and fixtures for electrical measurement and stress testing. Implementation of these advanced techniques for accelerated stress testing is underway on three solar cell types. In addition, review of the literature on second quadrant phenomena was begun and some preliminary second-quadrant electrical measurements were performed. Results obtained at the first down time for 75 C B-T testing and biased and unbiased T-H pressure cooker testing of type F cells showed little or no degradation in electrical parameters. Significant physical effects (large solder bubbles) were noted for type F cells subjected to the pressure cooker stress test.
Pimenta, S.; Cardoso, S.; Miranda, A.; De Beule, P.; Castanheira, E.M.S.; Minas, G.
2015-01-01
This paper presents the design, optimization and fabrication of 16 MgO/TiO2 and SiO2/TiO2 based high selective narrow bandpass optical filters. Their performance to extract diffuse reflectance and fluorescence signals from gastrointestinal tissue phantoms was successfully evaluated. The obtained results prove their feasibility to correctly extract those spectroscopic signals, through a Spearman’s rank correlation test (Spearman’s correlation coefficient higher than 0.981) performed between the original spectra and the ones obtained using those 16 fabricated optical filters. These results are an important step for the implementation of a miniaturized, low-cost and minimal invasive microsystem that could help in the detection of gastrointestinal dysplasia. PMID:26309769
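The Spearman's rank correlation test used to compare original and filter-reconstructed spectra is the Pearson correlation of rank-transformed data. A minimal sketch follows; the two spectra below are hypothetical stand-ins, not the paper's measurements.

```python
def rank(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical reflectance spectra: original vs. reconstructed through 16 filters
original = [0.12, 0.15, 0.21, 0.33, 0.41, 0.38, 0.29, 0.18]
filtered = [0.11, 0.16, 0.22, 0.31, 0.43, 0.36, 0.27, 0.17]
rho = spearman(original, filtered)
```

Because the test compares only rank order, a reconstruction that preserves the shape of the spectrum scores near 1 even if absolute intensities shift, which suits the filter-evaluation task described above.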
Complexity-entropy causality plane: A useful approach for distinguishing songs
NASA Astrophysics Data System (ADS)
Ribeiro, Haroldo V.; Zunino, Luciano; Mendes, Renio S.; Lenzi, Ervin K.
2012-04-01
Nowadays we are often faced with huge databases resulting from the rapid growth of data storage technologies. This is particularly true when dealing with music databases. In this context, it is essential to have techniques and tools able to discriminate properties from these massive sets. In this work, we report on a statistical analysis of more than ten thousand songs aiming to obtain a complexity hierarchy. Our approach is based on the estimation of the permutation entropy combined with an intensive complexity measure, building up the complexity-entropy causality plane. The results obtained indicate that this representation space is very promising to discriminate songs as well as to allow a relative quantitative comparison among songs. Additionally, we believe that the here-reported method may be applied in practical situations since it is simple, robust and has a fast numerical implementation.
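The entropy axis of the complexity-entropy causality plane is the normalized Bandt-Pompe permutation entropy, which can be estimated in a few lines. This sketch covers only the entropy coordinate; the full plane also requires a disequilibrium (Jensen-Shannon) term, omitted here, and the test series are synthetic.

```python
import math
import random
from itertools import permutations

def permutation_entropy(series, order=3):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series.

    Counts the ordinal pattern of each length-`order` window and returns the
    Shannon entropy of the pattern distribution, normalized to [0, 1].
    """
    counts = {p: 0 for p in permutations(range(order))}
    n = 0
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] += 1
        n += 1
    h = 0.0
    for c in counts.values():
        if c:
            p = c / n
            h -= p * math.log(p)
    return h / math.log(math.factorial(order))

random.seed(0)
noise = [random.random() for _ in range(5000)]  # white noise: near-maximal entropy
ramp = list(range(5000))                        # monotone series: a single pattern
h_noise = permutation_entropy(noise)
h_ramp = permutation_entropy(ramp)
```

The contrast between the two series illustrates why the measure discriminates signals: structured data concentrates probability on few ordinal patterns, while noise spreads it uniformly. The fast numerical implementation the authors mention follows from the fact that only pattern counting is needed.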
Computer implemented classification of vegetation using aircraft acquired multispectral scanner data
NASA Technical Reports Server (NTRS)
Cibula, W. G.
1975-01-01
The use of aircraft 24-channel multispectral scanner data in conjunction with computer processing techniques to obtain an automated classification of plant species associations was discussed. The classification of various plant species associations was related to the information needed for specific applications. In addition, the necessity of selecting multiple training fields for a single class when the study area consists of highly irregular terrain was detailed. A single class may be illuminated differently in different areas, so multiple spectral signatures exist for that class: different qualities of radiation upwell to the detector from portions of the scene receiving different incident radiation. Techniques of training-field selection were outlined, and a classification obtained for a natural area in Tishomingo State Park in northern Mississippi was presented.
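The multiple-signature scheme described above — several training-field means per class to cover differing illumination — can be sketched as a minimum-distance classifier. The 3-band signatures and class names below are hypothetical illustrations, not values from the study.

```python
def classify(pixel, signatures):
    """Assign a pixel to the class owning the nearest training-field mean.

    `signatures` maps class name -> list of mean spectra (one per training
    field), so a class illuminated differently in different parts of the
    terrain can carry several spectral signatures.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min((dist2(pixel, mean), cls)
               for cls, means in signatures.items()
               for mean in means)[1]

# Hypothetical 3-band mean signatures; "pine" has sunlit and shaded fields
signatures = {
    "pine":  [(40, 55, 30), (22, 30, 16)],  # sunlit slope, shaded slope
    "water": [(12, 10, 5)],
    "grass": [(60, 80, 45)],
}
label = classify((24, 32, 18), signatures)
```

A shaded pine pixel like `(24, 32, 18)` would be misassigned if "pine" carried only the sunlit signature, which is the motivation for multiple training fields per class on irregular terrain.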
ERIC Educational Resources Information Center
Maryono
2016-01-01
This study aims to describe the culture and local potential in Pacitan, East Java, as well as the implementation of local content in primary schools in the area, and some factors that support and hinder their implementation. This research is a qualitative case study. There were five primary schools used as samples obtained through purposive…
76 FR 43196 - Implementation of the Truth in Caller ID Act
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-20
...In this Report and Order (Order), the Commission adopts rules to implement the Truth in Caller ID Act of 2009 (Truth in Caller ID Act, or Act). The Truth in Caller ID Act, and the Commission's implementing rules, prohibit any person or entity from knowingly altering or manipulating caller identification information with the intent to defraud, cause harm, or wrongfully obtain anything of value.
Cascade Error Projection: A Learning Algorithm for Hardware Implementation
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Daud, Taher
1996-01-01
In this paper, we work out a detailed mathematical analysis for a new learning algorithm termed Cascade Error Projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters. Furthermore, the CEP learning algorithm trains only one layer of weights, whereas the other set of weights can be calculated deterministically. In association with the dynamical step-size change concept, which converts the weight update from an infinite space into a finite space, the relation between the current step size and the previous energy level is also given, and the estimation procedure for the optimal step size is used to validate our proposed technique. Weight values of zero are used to start the learning for every layer, and a single hidden unit is applied instead of a pool of candidate hidden units as in the cascade correlation scheme. Therefore, simplicity in hardware implementation is also obtained. Furthermore, this analysis allows us to select among other methods (such as conjugate gradient descent or Newton's second-order method) a good candidate for the learning technique. The choice of learning technique depends on the constraints of the problem (e.g., speed, performance, and hardware implementation); one technique may be more suitable than others. Moreover, for a discrete weight space, the theoretical analysis establishes the capability of learning with limited weight quantization. Finally, 5- to 8-bit parity and chaotic time series prediction problems are investigated; the simulation results demonstrate that 4-bit or greater weight quantization is sufficient for learning a neural network using CEP. In addition, it is demonstrated that this technique is able to compensate for lower-bit weight resolution by incorporating additional hidden units. However, generalization results may suffer somewhat with lower-bit weight quantization.
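The finding that 4-bit weight quantization suffices can be illustrated with a generic uniform quantizer. This is an illustrative scheme under assumed conventions (symmetric range, uniform levels), not necessarily the quantization used in the CEP paper, and the weight values are hypothetical.

```python
def quantize(weights, bits):
    """Uniform symmetric quantization of weights to 2**bits - 1 levels.

    Maps each weight to the nearest multiple of a step chosen so the grid
    spans [-max_abs, +max_abs]; a generic scheme for studying the effect of
    limited weight resolution in hardware.
    """
    levels = 2 ** bits - 1
    max_abs = max(abs(w) for w in weights) or 1.0
    step = 2.0 * max_abs / levels
    return [round(w / step) * step for w in weights]

# Hypothetical trained weights
weights = [0.83, -0.41, 0.07, -0.95, 0.22]
q4 = quantize(weights, 4)   # 15 levels
q8 = quantize(weights, 8)   # 255 levels
err4 = max(abs(w - q) for w, q in zip(weights, q4))
err8 = max(abs(w - q) for w, q in zip(weights, q8))
```

Each extra bit roughly halves the worst-case rounding error, which is the trade-off behind compensating for coarse weights with additional hidden units.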
Inequalities in public water supply fluoridation in Brazil: An ecological study
Gabardo, Marilisa CL; da Silva, Wander J; Olandoski, Marcia; Moysés, Simone T; Moysés, Samuel J
2008-01-01
Background The literature is scarce on the social and geographic inequalities in the access to and implementation of the fluoridation of public water supplies. This study adds knowledge to the Brazilian experience of the chronic privation of water and wastewater policies, access to potable water and fluoridation in the country. Thus, the aim of this study was to verify possible inequalities in the population's access to fluoridated drinking water in 246 Brazilian municipalities. Methods The information on the process of water fluoridation in the municipalities and in the macro region in which each municipality is located was obtained from the national epidemiological survey which was concluded in 2003. The data relating to the human development index at municipal level (HDI-M) and access to mains water came from the Brazilian Human Development Atlas, whilst the size of the population was obtained from a governmental source. The Fisher exact test (P < 0.05) was employed to identify significant associations between the explanatory variables and their ability to predict the principal outcomes of interest to this study, namely the presence or absence of the water fluoridation process in the municipalities as well as the length of time during which this measure has been implemented. Linear regression was used to observe the associations between the relevant variables in a multivariate environment. Results The results clearly showed that there is a relationship between municipalities with larger populations, located in more socio-economically advantaged regions and with better HDI-M, and where fluoridation is both present and has been implemented for a longer period of time (started before 1990). Conclusion The findings suggest that the aim of treating water with fluoride may not be being adequately achieved, requiring more effective strategies so that access to this measure can be expanded equitably. PMID:18402688
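The Fisher exact test (P < 0.05) applied above can be computed directly from the hypergeometric distribution. A minimal sketch for a 2x2 table follows; the counts below are hypothetical illustrations, not the study's data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    total = comb(n, col1)

    def p_table(x):  # probability of x counts in the (1,1) cell
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-12))

# Hypothetical table: fluoridated vs. not, by larger vs. smaller municipality
p = fisher_exact_2x2(30, 10, 15, 25)
```

Being exact rather than asymptotic, the test remains valid for the small cell counts that arise when cross-tabulating 246 municipalities by region and fluoridation status.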
Jansen, Maria W J; Hoeijmakers, Marjan
2013-01-01
Public health professionals have a pivotal position in efforts to obtain more practice-based evidence about what people need and what works in real circumstances. Close collaboration with researchers should enable public health professionals to design and conduct research in practical settings to address today's complex public health problems and increase the external validity of results. This requires expanding the research competencies of public health professionals. We developed and implemented a masterclass for public health professionals, modeled on an existing scientific training course for general practitioners and rehabilitation physicians. The masterclass was evaluated using a multiple method design, involving quantitative and qualitative methods. Evaluation took place during, at the end of, and 9 months after the masterclass. Twenty-one candidates (mean age, 41 y) started the program, 66% of whom completed it. Teaching materials, lectures, organization, and facilities were favorably evaluated. At the end of the masterclass, participants were able to design and implement a research proposal in their daily work setting, write a draft article, and critically appraise scientific research for practice and policy purposes. Participants had become more confident about their research competence. Management support from their employer proved crucial. Results obtained with the different methods were consistent. The masterclass appeared to be an effective instrument to increase the practice-based research skills of public health professionals, provided the research is implemented in a supportive organization with management backing and supervision by senior university researchers. We recommend using masterclasses to contribute to the improvement of practice-based evidence for projects addressing current and future public health problems.
Multi-Temporal Analysis of Landsat Imagery for Bathymetry.
1983-05-01
this data set, typical results obtained when these data were used to implement the proposed procedures, and an interpretation of these analyses. Preprocessing operations (warping, etc.) were carried out as described in section 3.4 and the DIPS operator manuals.
G and C boost and abort study summary, exhibit B
NASA Technical Reports Server (NTRS)
Backman, H. D.
1972-01-01
A six degree of freedom simulation of rigid vehicles was developed to study space shuttle vehicle boost-abort guidance and control techniques. The simulation was described in detail as an all digital program and as a hybrid program. Only the digital simulation was implemented. The equations verified in the digital simulation were adapted for use in the hybrid simulation. Study results were obtained from four abort cases using the digital program.
1990-09-01
...the torque requirement can be calculated. The maximum RMS torque required was obtained using equation (15). ...With the added weight on the pitch arm/link assemblies, all related components would have to be strengthened to take the centrifugal loads.
Governing for Enterprise Security (GES) Implementation Guide
2007-08-01
...Lilly for its inadvertent failure to uphold a privacy promise it had made to patients using Prozac, even though it had a policy covering the op...