Sample records for expensive iterative training

  1. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near-optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer to determine the number of nodes in the hidden layer of the smaller neural networks, to choose the initial training weights, and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.

  2. Improving Acoustic Models by Watching Television

    NASA Technical Reports Server (NTRS)

    Witbrock, Michael J.; Hauptmann, Alexander G.

    1998-01-01

    Obtaining sufficient labelled training data is a persistent difficulty for speech recognition research. Although well-transcribed data are expensive to produce, a constant stream of challenging speech data with imperfect transcriptions is broadcast as closed-captioned television. We describe a reliable unsupervised method for identifying accurately transcribed sections of these broadcasts, and show how these segments can be used to train a recognition system. Starting from acoustic models trained on the Wall Street Journal database, a single iteration of our training method reduced the word error rate on an independent broadcast television news test set from 62.2% to 59.5%.

  3. Design and optimization of Artificial Neural Networks for the modelling of superconducting magnets operation in tokamak fusion reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froio, A.; Bonifetto, R.; Carli, S.

    In superconducting tokamaks, the cryoplant provides the helium needed to cool different clients, among which by far the most important one is the superconducting magnet system. The evaluation of the transient heat load from the magnets to the cryoplant is fundamental for the design of the latter, and the assessment of suitable strategies to smooth the heat load pulses, induced by the intrinsically pulsed plasma scenarios characteristic of today's tokamaks, is crucial for both suitable sizing and stable operation of the cryoplant. For that evaluation, accurate but expensive system-level models, as implemented in e.g. the validated state-of-the-art 4C code, were developed in the past, including both the magnets and the respective external cryogenic cooling circuits. Here we show how these models can be successfully substituted with cheaper ones, where the magnets are described by suitably trained Artificial Neural Networks (ANNs) for the evaluation of the heat load to the cryoplant. First, two simplified thermal-hydraulic models for an ITER Toroidal Field (TF) magnet and for the ITER Central Solenoid (CS) are developed, based on ANNs, and a detailed analysis of the chosen networks' topology and parameters is presented and discussed. The ANNs are then inserted into the 4C model of the ITER TF and CS cooling circuits, which also includes active controls to achieve a smoothing of the variation of the heat load to the cryoplant. The training of the ANNs is achieved using the results of full 4C simulations (including detailed models of the magnets) for conventional sigmoid-like waveforms of the drivers, and the predictive capabilities of the ANN-based models in the case of actual ITER operating scenarios are demonstrated by comparison with the results of full 4C runs, both with and without active smoothing, in terms of both accuracy and computational time. Exploiting the low computational effort required by the ANN-based models, a demonstrative optimization study has finally been carried out, with the aim of choosing among different smoothing strategies for the standard ITER plasma operation.

  4. Inner Space Perturbation Theory in Matrix Product States: Replacing Expensive Iterative Diagonalization.

    PubMed

    Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang

    2016-10-11

    We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group (DMRG) theory. The retained reduced density matrix eigenstates are partitioned into the active and secondary space. The first-order wave function and the second- and third-order energies are easily computed using a one-step Davidson iteration. Our formulation has several advantages, including (i) keeping a balance between efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, and (iii) recovery of the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.

  5. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
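
    The least-squares fit described above can be illustrated with a toy 1-D example: interval velocities are solved for so that depths converted from imaged two-way times match formation depths logged at a well. This is only a sketch of the general idea under simplifying assumptions (a layered model, a single well, synthetic numbers); it is not the paper's implementation.

    ```python
    import numpy as np

    # Toy data: imaged two-way times (s) to three horizons at a well location,
    # and the corresponding formation depths (m) from the well log.
    # Depth of horizon i = sum_{k<=i} v_k * dt_k / 2, which is linear in the
    # interval velocities v_k, so a least-squares fit applies directly.
    t_horizons = np.array([0.40, 0.90, 1.30])        # two-way times
    well_depths = np.array([480.0, 1220.0, 1930.0])  # formation tops (m)

    dt = np.diff(np.concatenate(([0.0], t_horizons)))   # interval two-way times
    A = np.tril(np.ones((len(dt), len(dt)))) * (dt / 2.0)

    # Interval velocities minimizing the depth misfit at the well.
    v, *_ = np.linalg.lstsq(A, well_depths, rcond=None)
    print("estimated interval velocities (m/s):", np.round(v, 1))
    print("predicted horizon depths (m):", np.round(A @ v, 1))
    ```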

  6. Reducing weight precision of convolutional neural networks towards large-scale on-chip image recognition

    NASA Astrophysics Data System (ADS)

    Ji, Zhengping; Ovsiannikov, Ilia; Wang, Yibing; Shi, Lilong; Zhang, Qiang

    2015-05-01

    In this paper, we develop a server-client quantization scheme to reduce the bit resolution of a deep learning architecture, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing the deep learning neural network into hardware implementation, which directly determines the cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm called supervised iterative quantization to reduce the bit resolution of learned network weights. In the training stage, the supervised iterative quantization is conducted via two steps on the server: apply k-means-based adaptive quantization to the learned network weights, and retrain the network based on the quantized weights. These two steps are alternated until the convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded to the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt a uniform quantization for the inputs and internal network responses (called feature maps) to maintain low on-chip expenses. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated in recognizing two types of images: one is hand-written digit images and the other is real-life images in office scenarios. Both results show that the new network is able to achieve the performance of the neural network with full bit resolution, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
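
    The quantize-then-retrain alternation can be sketched roughly as below, using a tiny logistic-regression "network" and scikit-learn's KMeans; the model, data, number of levels, and training loop are invented for illustration and stand in for the authors' supervised iterative quantization of a full CNN.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    w_true = rng.normal(size=20)
    y = (X @ w_true + 0.1 * rng.normal(size=500) > 0).astype(float)

    def train(w, steps=200, lr=0.1):
        """Plain gradient descent on logistic loss, starting from w."""
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            w = w - lr * X.T @ (p - y) / len(y)
        return w

    def quantize(w, n_levels=16):
        """k-means quantization: replace each weight by its cluster centroid."""
        km = KMeans(n_clusters=n_levels, n_init=10, random_state=0).fit(w.reshape(-1, 1))
        return km.cluster_centers_[km.labels_].ravel()

    w = train(np.zeros(20))
    for it in range(5):                 # alternate quantization and retraining
        w = quantize(w, n_levels=16)    # 16 levels ~ 4-bit weights
        w = train(w, steps=50)          # retrain starting from quantized weights
    w = quantize(w, n_levels=16)        # final low-precision weights for deployment
    acc = np.mean(((X @ w) > 0) == y)
    print(f"accuracy with 4-bit-style weights: {acc:.3f}")
    ```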

  7. Cupola Furnace Computer Process Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society, and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  8. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.

  9. Validity and reliability of an in-training evaluation report to measure the CanMEDS roles in emergency medicine residents.

    PubMed

    Kassam, Aliya; Donnon, Tyrone; Rigby, Ian

    2014-03-01

    There is a question of whether a single assessment tool can assess the key competencies of residents as mandated by the Royal College of Physicians and Surgeons of Canada CanMEDS roles framework. The objective of the present study was to investigate the reliability and validity of an emergency medicine (EM) in-training evaluation report (ITER). ITER data from 2009 to 2011 were combined for residents across the 5 years of the EM residency training program. An exploratory factor analysis with varimax rotation was used to explore the construct validity of the ITER. A total of 172 ITERs were completed on residents across their first to fifth year of training. The combined 24-item ITER yielded a five-factor solution measuring the CanMEDS Medical Expert/Scholar, Communicator/Collaborator, Professional, Health Advocate, and Manager subscales. The factor solution accounted for 79% of the variance, and reliability coefficients (Cronbach alpha) ranged from α = 0.90 to 0.95 for each subscale, with α = 0.97 overall. The combined 24-item ITER used to assess residents' competencies in the EM residency program showed strong reliability and evidence of construct validity for assessment of the CanMEDS roles. Further research is needed to develop and test ITER items that will differentiate each CanMEDS role exclusively.

  10. A Symmetric Positive Definite Formulation for Monolithic Fluid Structure Interaction

    DTIC Science & Technology

    2010-08-09

    more likely to converge than simply iterating the partitioned approach to convergence in a simple Gauss-Seidel manner. Our approach allows the use of...conditions in a second step. These approaches can also be iterated within a given time step for increased stability, noting that in the limit if one... converges one obtains a monolithic (albeit expensive) approach. Other approaches construct strongly coupled systems and then solve them in one of several

  11. NASA's Zero-g aircraft operations

    NASA Technical Reports Server (NTRS)

    Williams, R. K.

    1988-01-01

    NASA's Zero-g aircraft, operated by the Johnson Space Center, provides the unique weightless or zero-g environment of space flight for hardware development and test and astronaut training purposes. The program, which began in 1959, uses a slightly modified Boeing KC-135A aircraft, flying a parabolic trajectory, to produce weightless periods of 20 to 25 seconds. The program has supported the Mercury, Gemini, Apollo, Skylab, Apollo-Soyuz and Shuttle programs as well as a number of unmanned space operations. Typical experiments for flight in the aircraft have included materials processing experiments, welding, fluid manipulation, cryogenics, propellant tankage, satellite deployment dynamics, planetary sciences research, crew training with weightless indoctrination, space suits, tethers, etc., and medical studies including vestibular research. The facility is available to microgravity research organizations on a cost-reimbursable basis, providing a large, hands-on test area for diagnostic and support equipment for the Principal Investigators and providing an iterative-type design approach to microgravity experiment development. The facility allows concepts to be proven and baseline experimentation to be accomplished relatively inexpensively prior to committing to the large expense of a space flight.

  12. Large-scale expensive black-box function optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Bailey, William; Couët, Benoît

    2012-09-01

    This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset net present value (NPV). The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
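
    A minimal sketch of an adaptive RBF proxy loop of this general kind is shown below, with SciPy's RBFInterpolator as the surrogate and a cheap analytic function standing in for the expensive simulator; the sampling rule and all settings are illustrative assumptions, not the authors' scheme.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)

    def expensive_simulator(x):
        """Stand-in for a costly reservoir simulation returning an NPV-like value."""
        return -np.sum((x - 0.3) ** 2) + 0.05 * np.sin(10 * x).sum()

    dim, lo, hi = 4, -1.0, 1.0
    X = rng.uniform(lo, hi, size=(10, dim))            # initial design of experiments
    y = np.array([expensive_simulator(x) for x in X])

    for it in range(15):
        # Light smoothing keeps the interpolation stable as points cluster.
        proxy = RBFInterpolator(X, y, kernel="thin_plate_spline", smoothing=1e-6)
        # Maximize the proxy (minimize its negative), starting from the best point so far.
        x0 = X[np.argmax(y)]
        res = minimize(lambda x: -proxy(x.reshape(1, -1))[0], x0,
                       bounds=[(lo, hi)] * dim, method="L-BFGS-B")
        x_new = np.clip(res.x + 0.01 * rng.normal(size=dim), lo, hi)  # small jitter
        y_new = expensive_simulator(x_new)             # one true (expensive) evaluation
        X, y = np.vstack([X, x_new]), np.append(y, y_new)

    print("best NPV-like value found:", round(y.max(), 4),
          "at", np.round(X[np.argmax(y)], 3))
    ```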

  13. Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.

    PubMed

    Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris

    2010-07-15

    The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
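
    The iterate-align-correct idea can be caricatured with a toy pileup-and-majority-vote loop. This is not iCORN itself (which drives a real short-read aligner, models indels, and evaluates accuracy); it is only a sketch of repeatedly mapping reads onto the current reference and correcting it until no further changes are made.

    ```python
    from collections import Counter

    def correct_reference(reference, reads, max_iters=10):
        """Toy iterative correction: reads are error-free substrings of the true
        genome with known start positions; a per-column majority vote fixes the
        reference, and the process repeats until no position changes."""
        ref = list(reference)
        for iteration in range(max_iters):
            pileup = [Counter() for _ in ref]
            for start, read in reads:
                for offset, base in enumerate(read):
                    pileup[start + offset][base] += 1
            changes = 0
            for pos, counts in enumerate(pileup):
                if counts:
                    best, _ = counts.most_common(1)[0]
                    if best != ref[pos]:
                        ref[pos] = best
                        changes += 1
            print(f"iteration {iteration + 1}: corrected {changes} positions")
            if changes == 0:
                break
        return "".join(ref)

    true_genome = "ACGTACGTTTACGGATCCA"
    draft = "ACGTACGATTACGGATCCA"     # one error at position 7
    reads = [(i, true_genome[i:i + 6]) for i in range(len(true_genome) - 5)]
    print(correct_reference(draft, reads))
    ```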

  14. 76 FR 8699 - Reporting Requirements for Positive Train Control Expenses and Investments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-15

    ... DEPARTMENT OF TRANSPORTATION Surface Transportation Board 49 CFR Part 1201 [Docket No. EP 706] Reporting Requirements for Positive Train Control Expenses and Investments AGENCY: Surface Transportation... Train Control, a federally mandated safety system that will automatically stop or slow a train before an...

  15. Implicit methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Yoon, S.; Kwak, D.

    1990-01-01

    Numerical solutions of the Navier-Stokes equations using explicit schemes can be obtained at the expense of efficiency. Conventional implicit methods, which often achieve fast convergence rates, suffer a high cost per iteration. A new implicit scheme based on lower-upper factorization and symmetric Gauss-Seidel relaxation offers very low cost per iteration as well as fast convergence. High efficiency is achieved by accomplishing the complete vectorizability of the algorithm on oblique planes of sweep in three dimensions.
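
    For reference, a symmetric Gauss-Seidel relaxation of the kind mentioned above (a forward sweep followed by a backward sweep) looks as follows on a generic linear system; this is a stand-alone illustration of the relaxation only, not the LU-factored flow solver.

    ```python
    import numpy as np

    def symmetric_gauss_seidel(A, b, x0=None, sweeps=50, tol=1e-10):
        """One 'iteration' = a forward sweep followed by a backward sweep."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float)
        for k in range(sweeps):
            for i in range(n):                       # forward sweep
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            for i in reversed(range(n)):             # backward sweep
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
                return x, k + 1
        return x, sweeps

    # Diagonally dominant test system, for which the sweeps converge quickly.
    rng = np.random.default_rng(2)
    A = rng.normal(size=(50, 50)) + 50 * np.eye(50)
    b = rng.normal(size=50)
    x, iters = symmetric_gauss_seidel(A, b)
    print("converged in", iters, "symmetric sweeps; residual =",
          np.linalg.norm(b - A @ x))
    ```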

  16. Simulation and Analysis of Launch Teams (SALT)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    A SALT effort was initiated in late 2005 with seed funding from the Office of Safety and Mission Assurance Human Factors organization. Its objectives included demonstrating human behavior and performance modeling and simulation technologies for launch team analysis, training, and evaluation. The goal of the research is to improve future NASA operations and training. The project employed an iterative approach, with the first iteration focusing on the last 70 minutes of a nominal-case Space Shuttle countdown, the second iteration focusing on aborts and launch commit criteria violations, the third iteration focusing on Ares I-X communications, and the fourth iteration focusing on Ares I-X Firing Room configurations. SALT applied new commercial off-the-shelf technologies from industry and the Department of Defense in the spaceport domain.

  17. Pediatric faculty and residents’ perspectives on In-Training Evaluation Reports (ITERs)

    PubMed Central

    Patel, Rikin; Drover, Anne; Chafe, Roger

    2015-01-01

    Background In-training evaluation reports (ITERs) are used by over 90% of postgraduate medical training programs in Canada for resident assessment. Our study examined the perspectives of faculty and residents in one pediatric program as a means to improve the ITER as an evaluation tool. Method Two separate focus groups were conducted, one with eight pediatric residents and one with nine clinical faculty within the pediatrics program of Memorial University’s Faculty of Medicine, to discuss their perceptions of, and suggestions for improving, the use of ITERs. Results Residents and faculty shared many similar suggestions for improving the ITER as an evaluation tool. Both the faculty and residents emphasized the importance of written feedback, contextualizing the evaluation and timely follow-up. The biggest challenge appears to be the discrepancy between the quality of feedback sought by the residents and the faculty members’ ability to provide it in a time-effective manner. Other concerns related to the need for better engagement in setting rotation objectives and more direct observation by the faculty member completing the ITER. Conclusions The ITER is a useful tool in resident evaluations, but addressing a number of issues relating to its actual use could improve the quality of feedback which residents receive. PMID:27004076

  18. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.

  19. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    PubMed Central

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  20. Are early summer wildfires an opportunity to revegetate medusahead-invaded rangelands?

    USDA-ARS?s Scientific Manuscript database

    Successful revegetation of medusahead-invaded plant communities can be prohibitively expensive, because it often requires iterative applications of integrated control and revegetation treatments. Prescribed burning has been used to control medusahead and prepare seedbeds for revegetation, but burni...

  1. Electromagnetic scattering of large structures in layered earths using integral equations

    NASA Astrophysics Data System (ADS)

    Xiong, Zonghou; Tripp, Alan C.

    1995-07-01

    An electromagnetic scattering algorithm for large conductivity structures in stratified media has been developed and is based on the method of system iteration and spatial symmetry reduction using volume electric integral equations. The method of system iteration divides a structure into many substructures and solves the resulting matrix equation using a block iterative method. The block submatrices usually need to be stored on disk in order to save computer core memory. However, this requires a large disk for large structures. If the body is discretized into equal-size cells it is possible to use the spatial symmetry relations of the Green's functions to regenerate the scattering impedance matrix in each iteration, thus avoiding expensive disk storage. Numerical tests show that the system iteration converges much faster than the conventional point-wise Gauss-Seidel iterative method. The number of cells does not significantly affect the rate of convergence. Thus the algorithm effectively reduces the solution of the scattering problem to an order of O(N^2), instead of O(N^3) as with direct solvers.

  2. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    PubMed

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1-regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance for noise-corrupted data. Therefore, it has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice, the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances. Therefore, semi-supervised learning is very suitable for clinical CAD. The iterated Laplacian regularization (Iter-LR) is a new regularization method, which has been proved to outperform the traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to augment the classification accuracy of breast ultrasound CAD based on texture features, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm, and then apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared the Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all other algorithms.

  3. Iterative framework radiation hybrid mapping

    USDA-ARS?s Scientific Manuscript database

    Building comprehensive radiation hybrid maps for large sets of markers is a computationally expensive process, since the basic mapping problem is equivalent to the traveling salesman problem. The mapping problem is also susceptible to noise, and as a result, it is often beneficial to remove markers ...

  4. Reinforcement learning produces dominant strategies for the Iterated Prisoner's Dilemma.

    PubMed

    Harper, Marc; Knight, Vincent; Jones, Martin; Koutsovoulos, Georgios; Glynatsi, Nikoleta E; Campbell, Owen

    2017-01-01

    We present tournament results and several powerful strategies for the Iterated Prisoner's Dilemma created using reinforcement learning techniques (evolutionary and particle swarm algorithms). These strategies are trained to perform well against a corpus of over 170 distinct opponents, including many well-known and classic strategies. All the trained strategies win standard tournaments against the total collection of other opponents. The trained strategies and one particular human-designed strategy are also the top performers in noisy tournaments.
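
    A heavily simplified sketch of training an Iterated Prisoner's Dilemma strategy by evolutionary search: a memory-one strategy (four cooperation probabilities conditioned on the previous round) is evolved against a tiny fixed opponent corpus. The payoff matrix is the standard one, but the strategy class, opponents, and search settings are toy assumptions far smaller than the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def memory_one(probs):
        """Strategy that cooperates with probability probs[last (my, their) moves]."""
        index = {("C", "C"): 0, ("C", "D"): 1, ("D", "C"): 2, ("D", "D"): 3}
        def play(my_hist, their_hist):
            if not my_hist:
                return "C"
            p = probs[index[(my_hist[-1], their_hist[-1])]]
            return "C" if rng.random() < p else "D"
        return play

    def tit_for_tat(my, their): return their[-1] if their else "C"
    def always_defect(my, their): return "D"
    def always_cooperate(my, their): return "C"
    OPPONENTS = [tit_for_tat, always_defect, always_cooperate]

    def score(strategy, rounds=60):
        """Average payoff per round against the fixed opponent corpus."""
        total = 0.0
        for opp in OPPONENTS:
            h1, h2 = [], []
            for _ in range(rounds):
                m1, m2 = strategy(h1, h2), opp(h2, h1)
                total += PAYOFF[(m1, m2)]
                h1.append(m1); h2.append(m2)
        return total / (rounds * len(OPPONENTS))

    # (1+5) evolution strategy on the four cooperation probabilities.
    best = rng.uniform(size=4)
    best_score = score(memory_one(best))
    for gen in range(40):
        for _ in range(5):
            child = np.clip(best + 0.1 * rng.normal(size=4), 0.0, 1.0)
            s = score(memory_one(child))
            if s > best_score:
                best, best_score = child, s
    print("trained probabilities (CC, CD, DC, DD):", np.round(best, 2),
          "mean payoff:", round(best_score, 2))
    ```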

  5. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.

  6. Preparing your organization's training program for ICD-10.

    PubMed

    Carolan, Katie; Reitzel, David

    2011-10-01

    Training for ICD-10 is going to be expensive, though predictions of how expensive vary widely. Healthcare finance executives should create a flexible, multiyear capital and operating budget to prepare for ICD-10 conversion and the training and support that will be required. Healthcare organizations also should assess staff knowledge in the critical ICD-10 areas and begin training now to be ready for go-live by early 2013.

  7. Reinforcement learning produces dominant strategies for the Iterated Prisoner’s Dilemma

    PubMed Central

    Jones, Martin; Koutsovoulos, Georgios; Glynatsi, Nikoleta E.; Campbell, Owen

    2017-01-01

    We present tournament results and several powerful strategies for the Iterated Prisoner’s Dilemma created using reinforcement learning techniques (evolutionary and particle swarm algorithms). These strategies are trained to perform well against a corpus of over 170 distinct opponents, including many well-known and classic strategies. All the trained strategies win standard tournaments against the total collection of other opponents. The trained strategies and one particular human-designed strategy are also the top performers in noisy tournaments. PMID:29228001

  8. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods will not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularization solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that generally is totally corrupted by errors on the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR (LSQR) and the recently proposed Hybrid method. A discussion and comparison of the available stopping rules will be included. A vibrating plate is considered as an example to validate our results.
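
    The semi-convergence behaviour can be demonstrated with SciPy's LSQR on a synthetic ill-conditioned problem: the error against the true solution first drops and then grows as iterations accumulate noise, so the iteration count acts as the regularization parameter. The test matrix and noise level below are arbitrary and unrelated to the holography application.

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(4)

    # Ill-conditioned forward operator with rapidly decaying singular values.
    n = 100
    U, _ = np.linalg.qr(rng.normal(size=(n, n)))
    V, _ = np.linalg.qr(rng.normal(size=(n, n)))
    s = 10.0 ** np.linspace(0, -8, n)
    A = U @ np.diag(s) @ V.T

    x_true = V @ (s ** 0.5)                       # a mildly smooth true solution
    b = A @ x_true + 1e-4 * rng.normal(size=n)    # noisy measurements

    for iters in (2, 5, 10, 20, 50, 200):
        # iteration count = regularization parameter (tolerances disabled)
        x = lsqr(A, b, atol=0, btol=0, conlim=1e12, iter_lim=iters)[0]
        err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        print(f"{iters:4d} iterations: relative error {err:.3e}")
    ```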

  9. Virtual reality cataract surgery training: learning curves and concurrent validity.

    PubMed

    Selvander, Madeleine; Åsman, Peter

    2012-08-01

    To investigate initial learning curves on a virtual reality (VR) eye surgery simulator and whether achieved skills are transferable between tasks. Thirty-five medical students were randomized to complete ten iterations on either the VR Caspulorhexis module (group A) or the Cataract navigation training module (group B) and then two iterations on the other module. Learning curves were compared between groups. The second Capsulorhexis video was saved and evaluated with the performance rating tool Objective Structured Assessment of Cataract Surgical Skill (OSACSS). The students' stereoacuity was examined. Both groups demonstrated significant improvements in performance over the 10 iterations: group A for all parameters analysed including score (p < 0.0001), time (p < 0.0001) and corneal damage (p = 0.0003), group B for time (p < 0.0001), corneal damage (p < 0.0001) but not for score (p = 0.752). Training on one module did not improve performance on the other. Capsulorhexis score correlated significantly with evaluation of the videos using the OSACSS performance rating tool. For stereoacuity < and ≥120 seconds of arc, sum of both modules' second iteration score was 73.5 and 41.0, respectively (p = 0.062). An initial rapid improvement in performance on a simulator with repeated practice was shown. For capsulorhexis, 10 iterations with only simulator feedback are not enough to reach a plateau for overall score. Skills transfer between modules was not found suggesting benefits from training on both modules. Stereoacuity may be of importance in the recruitment and training of new cataract surgeons. Additional studies are needed to investigate this further. Concurrent validity was found for Capsulorhexis module. © 2010 The Authors. Acta Ophthalmologica © 2010 Acta Ophthalmologica Scandinavica Foundation.

  10. How do IMGs compare with Canadian medical school graduates in a family practice residency program?

    PubMed Central

    Andrew, Rodney F.

    2010-01-01

    ABSTRACT OBJECTIVE To compare international medical graduates (IMGs) with Canadian medical school graduates in a family practice residency program. DESIGN Analysis of the results of the in-training evaluation reports (ITERs) and the Certification in Family Medicine (CCFP) examination results for 2 cohorts of IMGs and Canadian-trained graduates between the years 2006 and 2008. SETTING St Paul’s Hospital (SPH) in Vancouver, BC, a training site of the University of British Columbia (UBC) Family Practice Residency Program. PARTICIPANTS In-training evaluation reports were examined for 12 first-year and 9 second-year Canadian-trained residents at the SPH site, and 12 first-year and 12 second-year IMG residents at the IMG site at SPH; CCFP examination results were reviewed for all UBC family practice residents who took the May 2008 examination and disclosed their results. MAIN OUTCOME MEASURES Pass or fail rates on the CCFP examination; proportions of evaluations in each group of residents given each of the following designations: exceeds expectations, meets expectations, or needs improvement. The May 2008 CCFP examination results were reviewed. RESULTS The second-year SPH Canadian-trained residents had a greater proportion of exceeds expectations designations than the second-year IMGs. For the first-year residents, both the SPH Canadian graduates and IMGs had similar results in all 3 categories. Combining the results of the 2 cohorts, the Canadian-trained residents had 310 (99%) ITERs that were designated as either exceeds expectations or meets expectations, and only 3 (1%) ITERs were in the needs improvement category. The IMG results were 362 (97.6%) ITERs in the exceeds expectations or meets expectations categories; 9 (2%) were in the needs improvement category. Statistically these are not significant differences. Seven of the 12 (58%) IMG candidates passed the CCFP examination compared with 59 of 62 (95%) of the UBC family practice residents. CONCLUSION The IMG residents compared favourably with their Canadian-trained colleagues when comparing ITERs but not in passing the CCFP examination. Further research is needed to elucidate these results. PMID:20841570

  11. Photoacoustic image reconstruction via deep learning

    NASA Astrophysics Data System (ADS)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms which allow prior knowledge such as smoothness, total variation (TV) or sparsity constraints to be included. These algorithms tend to be time-consuming as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network, whose parameters are trained before the reconstruction process based on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our presented numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.

  12. Do in-training evaluation reports deserve their bad reputations? A study of the reliability and predictive ability of ITER scores and narrative comments.

    PubMed

    Ginsburg, Shiphra; Eva, Kevin; Regehr, Glenn

    2013-10-01

    Although scores on in-training evaluation reports (ITERs) are often criticized for poor reliability and validity, ITER comments may yield valuable information. The authors assessed across-rotation reliability of ITER scores in one internal medicine program, ability of ITER scores and comments to predict postgraduate year three (PGY3) performance, and reliability and incremental predictive validity of attendings' analysis of written comments. Numeric and narrative data from the first two years of ITERs for one cohort of residents at the University of Toronto Faculty of Medicine (2009-2011) were assessed for reliability and predictive validity of third-year performance. Twenty-four faculty attendings rank-ordered comments (without scores) such that each resident was ranked by three faculty. Mean ITER scores and comment rankings were submitted to regression analyses; dependent variables were PGY3 ITER scores and program directors' rankings. Reliabilities of ITER scores across nine rotations for 63 residents were 0.53 for both postgraduate year one (PGY1) and postgraduate year two (PGY2). Interrater reliabilities across three attendings' rankings were 0.83 for PGY1 and 0.79 for PGY2. There were strong correlations between ITER scores and comments within each year (0.72 and 0.70). Regressions revealed that PGY1 and PGY2 ITER scores collectively explained 25% of variance in PGY3 scores and 46% of variance in PGY3 rankings. Comment rankings did not improve predictions. ITER scores across multiple rotations showed decent reliability and predictive validity. Comment ranks did not add to the predictive ability, but correlation analyses suggest that trainee performance can be measured through these comments.

  13. Real-time Adaptive Control Using Neural Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Haley, Pam; Soloway, Don; Gold, Brian

    1999-01-01

    The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. In using a neural network, the control laws are nonlinear and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration rate makes this a viable algorithm for real-time control.
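
    Schematically, the Newton-Raphson minimization referred to above looks like the following, applied here to a generic short-horizon cost with finite-difference derivatives; the real controller minimizes a receding-horizon cost through the neural network plant model, which is not reproduced, so every name and number below is illustrative only.

    ```python
    import numpy as np

    def numerical_grad(f, u, eps=1e-6):
        g = np.zeros_like(u)
        for i in range(len(u)):
            du = np.zeros_like(u); du[i] = eps
            g[i] = (f(u + du) - f(u - du)) / (2 * eps)
        return g

    def numerical_hessian(f, u, eps=1e-4):
        n = len(u)
        H = np.zeros((n, n))
        for i in range(n):
            du = np.zeros(n); du[i] = eps
            H[:, i] = (numerical_grad(f, u + du) - numerical_grad(f, u - du)) / (2 * eps)
        return 0.5 * (H + H.T)

    def newton_raphson(cost, u0, iters=10):
        """A few Newton steps u <- u - H^{-1} g; the small problem size is what
        keeps the per-iteration expense low enough for real-time use."""
        u = u0.astype(float)
        for _ in range(iters):
            g = numerical_grad(cost, u)
            H = numerical_hessian(cost, u)
            u -= np.linalg.solve(H + 1e-8 * np.eye(len(u)), g)
        return u

    # Toy quadratic-plus-nonlinear cost over a short horizon of control moves.
    def cost(u):
        return np.sum((u - 0.5) ** 2) + 0.1 * np.sum(np.sin(3 * u) ** 2)

    u_opt = newton_raphson(cost, np.zeros(3))
    print("optimal control moves:", np.round(u_opt, 4), "cost:", round(cost(u_opt), 6))
    ```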

  14. Australian Vocational Education and Training Statistics: Financial Information 2007

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research (NCVER), 2008

    2008-01-01

    This publication details the financial operations of Australia's public vocational education and training (VET) system for 2007. The information presented covers revenues and expenses; assets, liabilities and equities; cash flows; and trends in total revenues and expenses. The scope of the financial data collection covers all transactions that…

  15. How does culture affect experiential training feedback in exported Canadian health professional curricula?

    PubMed Central

    Mousa Bacha, Rasha; Abdelaziz, Somaia

    2017-01-01

    Objectives To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. Methods This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Results Document analysis found all programs’ ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar.  Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Conclusions Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences. PMID:28315858

  16. How does culture affect experiential training feedback in exported Canadian health professional curricula?

    PubMed

    Wilbur, Kerry; Mousa Bacha, Rasha; Abdelaziz, Somaia

    2017-03-17

    To explore feedback processes of Western-based health professional student training curricula conducted in an Arab clinical teaching setting. This qualitative study employed document analysis of in-training evaluation reports (ITERs) used by Canadian nursing, pharmacy, respiratory therapy, paramedic, dental hygiene, and pharmacy technician programs established in Qatar. Six experiential training program coordinators were interviewed between February and May 2016 to explore how national cultural differences are perceived to affect feedback processes between students and clinical supervisors. Interviews were recorded, transcribed, and coded according to a priori cultural themes. Document analysis found all programs' ITERs outlined competency items for students to achieve. Clinical supervisors choose a response option corresponding to their judgment of student performance and may provide additional written feedback in spaces provided. Only one program required formal face-to-face feedback exchange between students and clinical supervisors. Experiential training program coordinators identified that no ITER was expressly culturally adapted, although in some instances, modifications were made for differences in scopes of practice between Canada and Qatar.  Power distance was recognized by all coordinators who also identified both student and supervisor reluctance to document potentially negative feedback in ITERs. Instances of collectivism were described as more lenient student assessment by clinical supervisors of the same cultural background. Uncertainty avoidance did not appear to impact feedback processes. Our findings suggest that differences in specific cultural dimensions between Qatar and Canada have implications on the feedback process in experiential training which may be addressed through simple measures to accommodate communication preferences.

  17. Mapping raised bogs with an iterative one-class classification approach

    NASA Astrophysics Data System (ADS)

    Mack, Benjamin; Roscher, Ribana; Stenzel, Stefanie; Feilhauer, Hannes; Schmidtlein, Sebastian; Waske, Björn

    2016-10-01

    Land use and land cover maps are one of the most commonly used remote sensing products. In many applications the user only requires a map of one particular class of interest, e.g. a specific vegetation type or an invasive species. One-class classifiers are appealing alternatives to common supervised classifiers because they can be trained with labeled training data of the class of interest only. However, training an accurate one-class classification (OCC) model is challenging, particularly when facing a large image, a small class and few training samples. To tackle these problems we propose an iterative OCC approach. The presented approach uses a biased Support Vector Machine as core classifier. In an iterative pre-classification step a large part of the pixels not belonging to the class of interest is classified. The remaining data is classified by a final classifier with a novel model and threshold selection approach. The specific objective of our study is the classification of raised bogs in a study site in southeast Germany, using multi-seasonal RapidEye data and a small number of training samples. Results demonstrate that the iterative OCC outperforms other state-of-the-art one-class classifiers and approaches for model selection. The study highlights the potential of the proposed approach for an efficient and improved mapping of small classes such as raised bogs. Overall, the proposed approach constitutes a feasible and useful modification of a regular one-class classifier.
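
    A rough sketch of the iterative pre-classification idea on synthetic data, using an ordinary scikit-learn SVC with asymmetric class weights to stand in for the biased Support Vector Machine: in each pass, pixels scored as confident background are discarded, and the final model is applied to the surviving candidates. Thresholds, weights, and data are invented; this is not the authors' model or threshold selection procedure.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)

    # Synthetic "image": a small class of interest embedded in a large background.
    positives = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(40, 2))       # labeled
    background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(5000, 2))    # unlabeled
    targets = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(100, 2))        # unlabeled
    candidates = np.vstack([background, targets])

    for it in range(3):
        # Sample provisional negatives from the remaining pixels and bias the SVM
        # (via class weights) towards not missing the labeled class of interest.
        n_neg = min(200, len(candidates))
        negatives = candidates[rng.choice(len(candidates), size=n_neg, replace=False)]
        X = np.vstack([positives, negatives])
        y = np.array([1] * len(positives) + [0] * len(negatives))
        clf = SVC(kernel="rbf", gamma="scale", class_weight={1: 10.0, 0: 1.0}).fit(X, y)
        # Pre-classification: drop pixels confidently assigned to the background.
        scores = clf.decision_function(candidates)
        candidates = candidates[scores > -0.5]
        print(f"pass {it + 1}: {len(candidates)} candidate pixels remain")

    mapped = candidates[clf.predict(candidates) == 1]
    print("pixels finally mapped to the class of interest:", len(mapped))
    ```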

  18. An empirical study of ensemble-based semi-supervised learning approaches for imbalanced splice site datasets.

    PubMed

    Stanescu, Ana; Caragea, Doina

    2015-01-01

    Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework.
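
    The self-training loop at the core of such ensembles can be sketched as follows on synthetic, imbalanced data: a classifier trained on the small labeled set promotes its most confident unlabeled predictions, balanced across classes, to pseudo-labels and is then retrained. The dataset, base learner, and fixed per-class quota are simplifications of the paper's dynamically balanced ensembles.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(6)

    # Imbalanced synthetic data (roughly 1 positive to 19 negatives).
    X, y = make_classification(n_samples=4000, n_features=20, weights=[0.95, 0.05],
                               random_state=0)

    # A tiny labeled seed set containing both classes; the rest is "unlabeled".
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    labeled = np.concatenate([rng.choice(pos, 4, replace=False),
                              rng.choice(neg, 36, replace=False)])
    unlabeled = np.setdiff1d(np.arange(len(X)), labeled)
    X_lab, y_lab, X_unl = X[labeled], y[labeled], X[unlabeled]

    for iteration in range(10):
        clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_lab, y_lab)
        if len(X_unl) < 20:
            break
        proba = clf.predict_proba(X_unl)
        picked = []
        for cls in (0, 1):                            # promote the 10 most confident
            order = np.argsort(-proba[:, cls])[:10]   # predictions of each class,
            picked.extend(order)                      # keeping pseudo-labels balanced
            X_lab = np.vstack([X_lab, X_unl[order]])
            y_lab = np.concatenate([y_lab, np.full(10, cls)])
        X_unl = np.delete(X_unl, np.unique(picked), axis=0)

    print("final labeled-set size:", len(y_lab),
          "| pseudo-labeled positives:", int(y_lab.sum()) - 4)
    ```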

  19. An empirical study of ensemble-based semi-supervised learning approaches for imbalanced splice site datasets

    PubMed Central

    2015-01-01

    Background Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Results Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. Conclusions In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework. PMID:26356316

  20. Adaptable Iterative and Recursive Kalman Filter Schemes

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato

    2014-01-01

    Nonlinear filters are often very computationally expensive and usually not suitable for real-time applications. Real-time navigation algorithms are typically based on linear estimators, such as the extended Kalman filter (EKF) and, to a much lesser extent, the unscented Kalman filter. The Iterated Kalman filter (IKF) and the Recursive Update Filter (RUF) are two algorithms that reduce the consequences of the linearization assumption of the EKF by performing N updates for each new measurement, where N is the number of recursions, a tuning parameter. This paper introduces an adaptable RUF algorithm to calculate N on the go; a similar technique can be used for the IKF as well.
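
    A minimal sketch of the recursive-update idea for one nonlinear measurement: instead of a single EKF update, the measurement is applied in N partial steps, relinearizing the measurement model each time (here by inflating the noise covariance by N in every step). This illustrates the general mechanism only; it is not the specific RUF or IKF formulation, nor the adaptive choice of N introduced in the paper.

    ```python
    import numpy as np

    def recursive_update(x, P, z, h, H_jac, R, N=5):
        """Apply one measurement in N partial EKF-style updates, relinearizing the
        measurement model at each step. With N = 1 this reduces to the standard
        EKF update; larger N reduces the effect of linearization error."""
        x, P = x.astype(float), P.astype(float)
        for _ in range(N):
            H = H_jac(x)
            # Inflate the noise covariance so the N partial updates together carry
            # roughly one measurement's worth of information.
            S = H @ P @ H.T + N * R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - h(x))
            P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Toy range measurement of a 2-D position, a common EKF test nonlinearity.
    h = lambda x: np.array([np.hypot(x[0], x[1])])
    H_jac = lambda x: np.array([[x[0] / np.hypot(x[0], x[1]),
                                 x[1] / np.hypot(x[0], x[1])]])

    x0 = np.array([3.5, 1.0])                  # poor prior estimate
    P0 = np.diag([4.0, 4.0])
    R = np.array([[0.01]])
    true_pos = np.array([1.0, 3.0])
    z = h(true_pos)                            # noiseless range for illustration

    for N in (1, 3, 10):
        x_upd, _ = recursive_update(x0, P0, z, h, H_jac, R, N=N)
        print(f"N = {N:2d}: updated estimate {np.round(x_upd, 3)}, "
              f"range error {abs(h(x_upd)[0] - z[0]):.5f}")
    ```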

  1. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  2. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at the cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.

  3. Using In-Training Evaluation Report (ITER) Qualitative Comments to Assess Medical Students and Residents: A Systematic Review.

    PubMed

    Hatala, Rose; Sawatsky, Adam P; Dudek, Nancy; Ginsburg, Shiphra; Cook, David A

    2017-06-01

    In-training evaluation reports (ITERs) constitute an integral component of medical student and postgraduate physician trainee (resident) assessment. ITER narrative comments have received less attention than the numeric scores. The authors sought both to determine what validity evidence informs the use of narrative comments from ITERs for assessing medical students and residents and to identify evidence gaps. Reviewers searched for relevant English-language studies in MEDLINE, EMBASE, Scopus, and ERIC (last search June 5, 2015), and in reference lists and author files. They included all original studies that evaluated ITERs for qualitative assessment of medical students and residents. Working in duplicate, they selected articles for inclusion, evaluated quality, and abstracted information on validity evidence using Kane's framework (inferences of scoring, generalization, extrapolation, and implications). Of 777 potential articles, 22 met inclusion criteria. The scoring inference is supported by studies showing that rich narratives are possible, that changing the prompt can stimulate more robust narratives, and that comments vary by context. Generalization is supported by studies showing that narratives reach thematic saturation and that analysts make consistent judgments. Extrapolation is supported by favorable relationships between ITER narratives and numeric scores from ITERs and non-ITER performance measures, and by studies confirming that narratives reflect constructs deemed important in clinical work. Evidence supporting implications is scant. The use of ITER narratives for trainee assessment is generally supported, except that evidence is lacking for implications and decisions. Future research should seek to confirm implicit assumptions and evaluate the impact of decisions.

  4. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)

  5. Super-resolution Time-Lapse Seismic Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Ovcharenko, O.; Kazei, V.; Peter, D. B.; Alkhalifah, T.

    2017-12-01

    Time-lapse seismic waveform inversion is a technique that allows tracking changes in reservoirs over time. Such monitoring is computationally intensive, and it is therefore barely feasible to perform it on the fly. Most of the expense comes from the numerous FWI iterations at high temporal frequencies, which are unavoidable because the low-frequency components cannot resolve fine-scale features of a velocity model. Inverted velocity changes are also blurred when there is noise in the data, so the problem of low-resolution images is well known. One of the problems intensively tackled by the computer vision research community is recovering high-resolution images from their low-resolution versions. Using artificial neural networks to achieve super-resolution from a single downsampled image is one of the leading solutions to this problem. Each pixel of the upscaled image is affected by all the pixels of its low-resolution version, which enables the workflow to recover features that are likely to occur in the corresponding environment. In the present work, we adopt a machine learning image enhancement technique to improve the resolution of time-lapse full-waveform inversion. We first invert the baseline model with conventional FWI. Then we run a few iterations of FWI on a set of the monitoring data to find the desired model changes. These changes are blurred, and we enhance their resolution by using a deep neural network. The network is trained to map low-resolution model updates predicted by FWI into the real perturbations of the baseline model. For supervised training of the network, we generate a set of random perturbations in the baseline model and perform FWI on the noisy data from the perturbed models. We test the approach on a realistic perturbation of the Marmousi II model and demonstrate that it outperforms conventional convolution-based deblurring techniques.
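    A minimal sketch of the enhancement step is given below, assuming a paired training set of blurred FWI updates and true perturbations; the tiny convolutional network, its hyperparameters, and the placeholder tensors are assumptions, not the authors' architecture or data.

    ```python
    # A minimal sketch: a small CNN trained to map low-resolution FWI model
    # updates to the "true" perturbations obtained from randomly perturbed
    # baseline models (placeholder tensors stand in for the real training set).
    import torch
    import torch.nn as nn

    net = nn.Sequential(                      # toy enhancement network
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # shape (N, 1, nz, nx): a few FWI iterations on data from perturbed models
    blurred_updates = torch.randn(32, 1, 64, 64)        # placeholder inputs
    true_perturbations = torch.randn(32, 1, 64, 64)     # placeholder targets

    for epoch in range(200):
        opt.zero_grad()
        loss = loss_fn(net(blurred_updates), true_perturbations)
        loss.backward()
        opt.step()

    # at monitoring time: enhanced = net(low_res_fwi_update)
    ```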

  6. Iterative dataset optimization in automated planning: Implementation for breast and rectal cancer radiotherapy.

    PubMed

    Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang

    2017-06-01

    To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of the iteratively optimized training dataset, dose volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The iteratively optimized training dataset is selected by an iterative optimization from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (referred to as the two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional left-breast treatment plans were re-planned using the Pinnacle 3 Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with the objective functions derived from the predicted DVH curves. The automatically generated re-optimized treatment plans were compared with the original manually optimized plans. By combining the iteratively optimized training dataset methodology and the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using the dose objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.
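    A rough sketch of the DVH-prediction idea is shown below, simplified to a single geometric feature; SciPy's gaussian_kde, the synthetic feature/dose values, and the conditional-CDF construction are assumptions for illustration, not the paper's two-parameter model.

    ```python
    # Simplified sketch: a 2-D Gaussian KDE over (geometric feature, dose) pairs
    # from prior plans is used to predict the dose distribution, hence a DVH,
    # for a new OAR voxel population described only by its geometric feature.
    import numpy as np
    from scipy.stats import gaussian_kde

    # training data: per-voxel (distance-to-target, received dose), placeholders
    feat_train = np.random.rand(5000) * 5.0                       # cm
    dose_train = 50.0 * np.exp(-feat_train) + np.random.randn(5000)
    kde = gaussian_kde(np.vstack([feat_train, dose_train]))

    def predicted_dvh(feat_new, dose_grid):
        """Expected fraction of new-plan voxels receiving at least each dose."""
        cond = []
        for f in feat_new:
            pts = np.vstack([np.full_like(dose_grid, f), dose_grid])
            pdf = kde(pts)
            cdf = np.cumsum(pdf) / pdf.sum()
            cond.append(1.0 - cdf)          # P(dose >= d | feature f)
        return np.mean(cond, axis=0)

    # usage: dvh = predicted_dvh(np.random.rand(200) * 5.0, np.linspace(0, 60, 61))
    ```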

  7. 42 CFR 61.11 - Payments: Tuition and other expenses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Payments: Tuition and other expenses. 61.11 Section 61.11 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.11 Payments: Tuition and other expenses. (a...

  8. 42 CFR 61.10 - Benefits: Tuition and other expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Benefits: Tuition and other expenses. 61.10 Section 61.10 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.10 Benefits: Tuition and other expenses. The...

  9. 42 CFR 61.11 - Payments: Tuition and other expenses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Payments: Tuition and other expenses. 61.11 Section 61.11 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.11 Payments: Tuition and other expenses. (a...

  10. 42 CFR 61.10 - Benefits: Tuition and other expenses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Benefits: Tuition and other expenses. 61.10 Section 61.10 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.10 Benefits: Tuition and other expenses. The...

  11. 42 CFR 61.11 - Payments: Tuition and other expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Payments: Tuition and other expenses. 61.11 Section 61.11 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.11 Payments: Tuition and other expenses. (a...

  12. 42 CFR 61.11 - Payments: Tuition and other expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Payments: Tuition and other expenses. 61.11 Section 61.11 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.11 Payments: Tuition and other expenses. (a...

  13. 42 CFR 61.10 - Benefits: Tuition and other expenses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false Benefits: Tuition and other expenses. 61.10 Section 61.10 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.10 Benefits: Tuition and other expenses. The...

  14. 42 CFR 61.11 - Payments: Tuition and other expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Payments: Tuition and other expenses. 61.11 Section 61.11 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.11 Payments: Tuition and other expenses. (a...

  15. 42 CFR 61.10 - Benefits: Tuition and other expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Benefits: Tuition and other expenses. 61.10 Section 61.10 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.10 Benefits: Tuition and other expenses. The...

  16. 42 CFR 61.10 - Benefits: Tuition and other expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Benefits: Tuition and other expenses. 61.10 Section 61.10 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES FELLOWSHIPS, INTERNSHIPS, TRAINING FELLOWSHIPS Regular Fellowships § 61.10 Benefits: Tuition and other expenses. The...

  17. A statistically valid method for using FIA plots to guide spectral class rejection in producing stratification maps

    Treesearch

    Michael L. Hoppus; Andrew J. Lister

    2002-01-01

    A Landsat TM classification method (iterative guided spectral class rejection) produced a forest cover map of southern West Virginia that provided the stratification layer for producing estimates of timberland area from Forest Service FIA ground plots using a stratified sampling technique. These same high quality and expensive FIA ground plots provided ground reference...

  18. Computational Short-cutting the Big Data Classification Bottleneck: Using the MODIS Land Cover Product to Derive a Consistent 30 m Landsat Land Cover Product of the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Roy, D. P.

    2016-12-01

    Classification is a fundamental process in remote sensing used to relate pixel values to land cover classes present on the surface. The state of the practice for large area land cover classification is to classify satellite time series metrics with a supervised (i.e., training data dependent) non-parametric classifier. Classification accuracy generally increases with training set size. However, training data collection is expensive and the optimal training distribution over large areas is unknown. The MODIS 500 m land cover product is available globally on an annual basis and so provides a potentially very large source of land cover training data. A novel methodology to classify large volume Landsat data using high quality training data derived automatically from the MODIS land cover product is demonstrated for all of the Conterminous United States (CONUS). The known misclassification accuracy of the MODIS land cover product and the scale difference between the 500 m MODIS and 30 m Landsat data are accommodated by a novel MODIS product filtering, Landsat pixel selection, and iterative training approach to balance the proportion of local and CONUS training data used. Three years of global Web-Enabled Landsat Data (WELD) for all of the CONUS are classified using a random forest classifier and the results assessed using random forest `out-of-bag' training samples. The global WELD data are corrected to surface nadir BRDF-Adjusted Reflectance and are defined in 158 × 158 km tiles in the same projection and nested to the MODIS land cover products. This reduces the need to pre-process the considerable Landsat data volume (more than 14,000 Landsat 5 and 7 scenes per year over the CONUS covering 11,000 million 30 m pixels). The methodology is implemented in a parallel manner on a WELD tile-by-tile basis but provides a wall-to-wall seamless 30 m land cover product. Detailed tile and CONUS results are presented and the potential for global production using the recently available global WELD products is discussed.
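    The sketch below illustrates, with placeholder arrays, the kind of loop the abstract describes: a random forest trained on MODIS-derived labels, an adjustable mix of local and CONUS-wide training pixels, and out-of-bag samples for assessment. The balancing rule and the data are illustrative assumptions, not the authors' exact filtering and selection logic.

    ```python
    # Schematic sketch: random forest trained on MODIS-derived labels with an
    # iteratively adjustable mix of local-tile and CONUS-wide training pixels,
    # assessed with out-of-bag samples.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # X: Landsat time-series metrics per pixel, y: labels from the MODIS product
    X = np.random.rand(10000, 12)
    y = np.random.randint(0, 5, size=10000)
    is_local = np.random.rand(10000) < 0.5        # pixels from the local tile

    def balanced_sample(local_frac, n=4000):
        """Mix of local-tile and CONUS-wide training pixels."""
        n_local = int(n * local_frac)
        loc = np.random.choice(np.where(is_local)[0], n_local, replace=False)
        glob = np.random.choice(np.where(~is_local)[0], n - n_local, replace=False)
        return np.concatenate([loc, glob])

    for local_frac in (0.25, 0.5, 0.75):          # iterate to find a good balance
        idx = balanced_sample(local_frac)
        rf = RandomForestClassifier(n_estimators=200, oob_score=True, n_jobs=-1)
        rf.fit(X[idx], y[idx])
        print(local_frac, rf.oob_score_)          # 'out-of-bag' accuracy
    ```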

  19. Co-Labeling for Multi-View Weakly Labeled Learning.

    PubMed

    Xu, Xinxing; Li, Wen; Xu, Dong; Tsang, Ivor W

    2016-06-01

    It is often expensive and time consuming to collect labeled training samples in many real-world applications. To reduce human effort on annotating training samples, many machine learning techniques (e.g., semi-supervised learning (SSL), multi-instance learning (MIL), etc.) have been studied to exploit weakly labeled training samples. Meanwhile, when the training data is represented with multiple types of features, many multi-view learning methods have shown that classifiers trained on different views can help each other to better utilize the unlabeled training samples for the SSL task. In this paper, we study a new learning problem called multi-view weakly labeled learning, in which we aim to develop a unified approach to learn robust classifiers by effectively utilizing different types of weakly labeled multi-view data from a broad range of tasks including SSL, MIL and relative outlier detection (ROD). We propose an effective approach called co-labeling to solve the multi-view weakly labeled learning problem. Specifically, we model the learning problem on each view as a weakly labeled learning problem, which aims to learn an optimal classifier from a set of pseudo-label vectors generated by using the classifiers trained from other views. Unlike traditional co-training approaches using a single pseudo-label vector for training each classifier, our co-labeling approach explores different strategies to utilize the predictions from different views, biases and iterations for generating the pseudo-label vectors, making our approach more robust for real-world applications. Moreover, to further improve the weakly labeled learning on each view, we also exploit the inherent group structure in the pseudo-label vectors generated from different strategies, which leads to a new multi-layer multiple kernel learning problem. Promising results for text-based image retrieval on the NUS-WIDE dataset as well as news classification and text categorization on several real-world multi-view datasets clearly demonstrate that our proposed co-labeling approach achieves state-of-the-art performance for various multi-view weakly labeled learning problems including multi-view SSL, multi-view MIL and multi-view ROD.
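    A much-simplified, co-training-style sketch of the pseudo-labeling idea is given below; the paper's full co-labeling formulation (multiple labeling strategies, multi-layer multiple kernel learning) is not reproduced, and the synthetic two-view data and confidence threshold are assumptions.

    ```python
    # Simplified co-training-style sketch: classifiers on two views alternately
    # add confident pseudo-labels to a shared pool over several iterations.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, nl = 1000, 50                                  # total and labeled samples
    y_true = rng.integers(0, 2, n)
    X1 = y_true[:, None] + 0.8 * rng.standard_normal((n, 5))   # view 1
    X2 = y_true[:, None] + 0.8 * rng.standard_normal((n, 5))   # view 2
    labeled = np.zeros(n, dtype=bool); labeled[:nl] = True

    y_pseudo = np.full(n, -1); y_pseudo[labeled] = y_true[labeled]
    for it in range(5):                               # iterative pseudo-labeling
        for X_view in (X1, X2):
            mask = y_pseudo >= 0
            clf = LogisticRegression().fit(X_view[mask], y_pseudo[mask])
            proba = clf.predict_proba(X_view)
            confident = (proba.max(axis=1) > 0.95) & ~mask
            y_pseudo[confident] = proba.argmax(axis=1)[confident]
    ```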

  20. 41 CFR 302-5.14 - What transportation expenses will my agency pay?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... authorize you to travel by the transportation mode(s) (e.g., airline, train, or privately owned automobile... mode(s) (e.g., common carrier or POV) that it determines to be advantageous to the Government. Your... SUBSISTENCE AND TRANSPORTATION EXPENSES 5-ALLOWANCE FOR HOUSEHUNTING TRIP EXPENSES Employee's Allowance For...

  1. Left ventricle segmentation via graph cut distribution matching.

    PubMed

    Ben Ayed, Ismail; Punithakumar, Kumaradevan; Li, Shuo; Islam, Ali; Chong, Jaron

    2009-01-01

    We present a discrete kernel density matching energy for segmenting the left ventricle cavity in cardiac magnetic resonance sequences. The energy and its graph cut optimization based on an original first-order approximation of the Bhattacharyya measure have not been proposed previously, and yield competitive results in nearly real-time. The algorithm seeks a region within each frame by optimization of two priors, one geometric (distance-based) and the other photometric, each measuring a distribution similarity between the region and a model learned from the first frame. Based on global rather than pixelwise information, the proposed algorithm does not require complex training and optimization with respect to geometric transformations. Unlike related active contour methods, it does not compute iterative updates of computationally expensive kernel densities. Furthermore, the proposed first-order analysis can be used for other intractable energies and, therefore, can lead to segmentation algorithms which share the flexibility of active contours and computational advantages of graph cuts. Quantitative evaluations over 2280 images acquired from 20 subjects demonstrated that the results correlate well with independent manual segmentations by an expert.

  2. Enhancing public health outcomes in developing countries: from good policies and best practices to better implementation.

    PubMed

    Woolcock, Michael

    2018-06-01

    In rich and poor countries alike, a core challenge is building the state's capability for policy implementation. Delivering high-quality public health and health care-affordably, reliably and at scale, for all-exemplifies this challenge, since doing so requires deftly integrating refined technical skills (surgery), broad logistics management (supply chains, facilities maintenance), adaptive problem solving (curative care), and resolving ideological differences (who pays? who provides?), even as the prevailing health problems themselves only become more diverse, complex, and expensive as countries become more prosperous. However, the current state of state capability in developing countries is demonstrably alarming, with the strains and demands only likely to intensify in the coming decades. Prevailing "best practice" strategies for building implementation capability-copying and scaling putative successes from abroad-are too often part of the problem, while individual training ("capacity building") and technological upgrades (e.g. new management information systems) remain necessary but deeply insufficient. An alternative approach is outlined, one centered on building implementation capability by working iteratively to solve problems nominated and prioritized by local actors.

  3. An intelligent tutoring system for the investigation of high performance skill acquisition

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.; Herren, L. Tandy; Regian, J. Wesley

    1991-01-01

    The issue of training high performance skills is of increasing concern. These skills include tasks such as driving a car, playing the piano, and flying an aircraft. Traditionally, the training of high performance skills has been accomplished through the use of expensive, high-fidelity, 3-D simulators and/or on-the-job training using the actual equipment. Such an approach to training is quite expensive. This paper describes the design, implementation, and deployment of an intelligent tutoring system developed for the purpose of studying the effectiveness of skill acquisition using lower-cost, lower-physical-fidelity, 2-D simulation. Preliminary experimental results are quite encouraging, indicating that intelligent tutoring systems are a cost-effective means of training high performance skills.

  4. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    PubMed Central

    Wolverton, Christopher; Hattrick-Simpers, Jason; Mehta, Apurva

    2018-01-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict. PMID:29662953
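    The iterate-ML-and-experiment loop can be sketched schematically as below; the descriptors, the random forest model, and the run_experiments oracle are placeholders standing in for the physiochemical features and high-throughput measurements, not the authors' pipeline.

    ```python
    # Schematic active-learning-style loop: train a model, pick promising
    # candidates, "run" experiments on them, and retrain on the discrepancies.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def run_experiments(compositions):
        """Placeholder for a high-throughput synthesis/measurement campaign."""
        return (compositions.sum(axis=1) > 1.2).astype(int)   # fake glass labels

    X_known = np.random.rand(200, 3)                 # known composition descriptors
    y_known = run_experiments(X_known)

    for rnd in range(3):                             # ML -> experiment -> retrain
        model = RandomForestClassifier(n_estimators=300).fit(X_known, y_known)
        candidates = np.random.rand(5000, 3)
        proba = model.predict_proba(candidates)[:, 1]
        picks = candidates[np.argsort(proba)[-50:]]  # most promising candidates
        y_new = run_experiments(picks)               # HiTp experiments on the picks
        X_known = np.vstack([X_known, picks])        # discrepancies feed retraining
        y_known = np.concatenate([y_known, y_new])
    ```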

  5. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Fang; Ward, Logan; Williams, Travis

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.

  6. Accelerated discovery of metallic glasses through iteration of machine learning and high-throughput experiments

    DOE PAGES

    Ren, Fang; Ward, Logan; Williams, Travis; ...

    2018-04-01

    With more than a hundred elements in the periodic table, a large number of potential new materials exist to address the technological and societal challenges we face today; however, without some guidance, searching through this vast combinatorial space is frustratingly slow and expensive, especially for materials strongly influenced by processing. We train a machine learning (ML) model on previously reported observations, parameters from physiochemical theories, and make it synthesis method–dependent to guide high-throughput (HiTp) experiments to find a new system of metallic glasses in the Co-V-Zr ternary. Experimental observations are in good agreement with the predictions of the model, but there are quantitative discrepancies in the precise compositions predicted. We use these discrepancies to retrain the ML model. The refined model has significantly improved accuracy not only for the Co-V-Zr system but also across all other available validation data. We then use the refined model to guide the discovery of metallic glasses in two additional previously unreported ternaries. Although our approach of iterative use of ML and HiTp experiments has guided us to rapid discovery of three new glass-forming systems, it has also provided us with a quantitatively accurate, synthesis method–sensitive predictor for metallic glasses that improves performance with use and thus promises to greatly accelerate discovery of many new metallic glasses. We believe that this discovery paradigm is applicable to a wider range of materials and should prove equally powerful for other materials and properties that are synthesis path–dependent and that current physiochemical theories find challenging to predict.

  7. Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation

    NASA Astrophysics Data System (ADS)

    Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia

    2013-08-01

    Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
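    A generic sketch of the iterative-solver step is shown below: one complex banded system solved with SciPy's BICGSTAB and an incomplete-LU preconditioner. The tridiagonal test matrix and the ILU choice are assumptions used only to illustrate preconditioned BICGSTAB, not the migration operator or the complex Padé preconditioning itself.

    ```python
    # Generic sketch: solve one complex banded system with BICGSTAB and an
    # incomplete-LU preconditioner playing the stabilizing/preconditioning role.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 2000
    main = (4.0 + 0.5j) * np.ones(n)                    # complex diagonal
    off = -1.0 * np.ones(n - 1)
    A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
    b = np.random.randn(n) + 1j * np.random.randn(n)

    ilu = spla.spilu(A, drop_tol=1e-4)                  # incomplete LU factors
    M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=A.dtype)

    x, info = spla.bicgstab(A, b, M=M)
    print("converged" if info == 0 else f"info={info}",
          "residual:", np.linalg.norm(A @ x - b))
    ```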

  8. Terminal iterative learning control based station stop control of a train

    NASA Astrophysics Data System (ADS)

    Hou, Zhongsheng; Wang, Yi; Yin, Chenkun; Tang, Tao

    2011-07-01

    The terminal iterative learning control (TILC) method is introduced for the first time into the field of train station stop control, and three TILC-based algorithms are proposed in this study. The TILC-based train station stop control approach utilises the terminal stop position error of the previous braking process to update the current control profile. The initial braking position, the braking force, or their combination is chosen as the control input, and a corresponding learning law is developed. The terminal stop position error of each algorithm is rigorously shown to converge to a small region related to the initial offset of the braking position. The validity of the proposed algorithms is verified by illustrative numerical examples.
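    A toy sketch of a terminal ILC update of the kind described above is given below; the point-mass braking model, the learning gain, and the noise level are illustrative assumptions, not the paper's train model or learning laws.

    ```python
    # Toy terminal ILC: only the terminal stop-position error of the previous
    # run is used to update the control input (here, the braking position).
    import numpy as np

    def stop_position(braking_position, v0=20.0, decel=1.0):
        """Simple point-mass model of where the train actually stops."""
        noise = 0.5 * np.random.randn()                  # run-to-run variation
        return braking_position + v0**2 / (2 * decel) + noise

    target = 1000.0                                      # desired stop position (m)
    u = 750.0                                            # initial braking position (m)
    gain = 0.8                                           # terminal learning gain
    for k in range(15):
        e = target - stop_position(u)                    # terminal error of run k
        u = u + gain * e                                 # update the control input
        print(f"iteration {k}: braking position {u:.1f} m, terminal error {e:.2f} m")
    ```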

  9. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

    The proximity effect, caused by electron beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software based neural network resulted in a decrease in the computation time by a factor of 30, and a hardware based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to include patterns not contained in its training set.

  10. Improving Access to Care for Warfighters: Virtual Worlds Technology to Enhance Primary Care Training in Post-Traumatic Stress and Motivational Interviewing

    DTIC Science & Technology

    2017-10-01

    ... result in missed opportunities to intervene to prevent chronic mental and physical health problems. The project aims to: (1) iteratively design a new web-based PTS and Motivational Interviewing ...

  11. 18 CFR 367.83 - Training costs.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Training costs. 367.83... Expense Instructions § 367.83 Training costs. When it is necessary that employees be trained to... functional accounts currently as they are incurred. However, when the training costs involved relate to...

  12. LAVA: Large scale Automated Vulnerability Addition

    DTIC Science & Technology

    2016-05-23

    memory copy, e.g., are reasonable attack points. If the goal is to inject divide-by-zero, then arithmetic operations involving division will be... ways. First, it introduces deterministic record and replay, which can be used for iterated and expensive analyses that cannot be performed online... memory. Since our approach records the correspondence between source lines and program basic block execution, it would be just as easy to figure out

  13. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.
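    The sketch below illustrates only the mechanical idea of injecting noise at the output neurons during training, with a plain softmax regression standing in for the CNN; the noise placement and scale are assumptions, and no claim is made that this toy reproduces the reported speed-ups or the hyperplane condition.

    ```python
    # Simplified sketch: zero-mean noise is added to the output-layer activations
    # during training of a softmax classifier (a stand-in for the CNN).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 20))
    W_true = rng.standard_normal((20, 3))
    y = (X @ W_true).argmax(axis=1)                       # synthetic 3-class labels
    Y = np.eye(3)[y]                                      # one-hot targets

    W = np.zeros((20, 3))
    lr, noise_scale = 0.1, 0.05
    for it in range(300):
        logits = X @ W + noise_scale * rng.standard_normal((1000, 3))  # noisy outputs
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        grad = X.T @ (P - Y) / len(X)                     # cross-entropy gradient
        W -= lr * grad
    ```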

  14. The Effect of Study Group Participation on Student Naval Aviator Persistence

    ERIC Educational Resources Information Center

    Sheppard, Thomas H.

    2010-01-01

    Training naval combat pilots is expensive and time consuming; it follows that attrition from flight training is costly to the government and traumatic for the individual. Flight training is considered high-risk training, and is therefore strictly voluntary. Voluntary withdrawal from naval flight training is the largest unexplained reason for Student…

  15. Device-Task Fidelity and Transfer of Training: Aircraft Cockpit Procedures Training.

    ERIC Educational Resources Information Center

    Prophet, Wallace W.; Boyd, H. Alton

    An evaluation was made of the training effectiveness of two cockpit procedures training devices, differing greatly in physical fidelity and cost, for use on the ground for a twin-engine, turboprop, fixed-wing aircraft. One group of students received training in cockpit procedures in a relatively expensive, sophisticated, computerized trainer,…

  16. 5 CFR 410.404 - Determining if a conference is a training activity.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... organizational performance, and (d) Development benefits will be derived through the employee's attendance. ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Determining if a conference is a training... REGULATIONS TRAINING Paying for Training Expenses § 410.404 Determining if a conference is a training activity...

  17. Hybrid analysis of multiaxis electromagnetic data for discrimination of munitions and explosives of concern

    USGS Publications Warehouse

    Friedel, M.J.; Asch, T.H.; Oden, C.

    2012-01-01

    The remediation of land containing munitions and explosives of concern, otherwise known as unexploded ordnance, is an ongoing problem facing the U.S. Department of Defense and similar agencies worldwide that have used or are transferring training ranges or munitions disposal areas to civilian control. The expense associated with cleanup of land previously used for military training and war provides impetus for research towards enhanced discrimination of buried unexploded ordnance. Towards reducing that expense, a multiaxis electromagnetic induction data collection and software system, called ALLTEM, was designed and tested with support from the U.S. Department of Defense Environmental Security Technology Certification Program. ALLTEM is an on-time time-domain system that uses a continuous triangle-wave excitation to measure the target-step response rather than traditional impulse response. The system cycles through three orthogonal transmitting loops and records a total of 19 different transmitting and receiving loop combinations with a nominal spatial data sampling interval of 20 cm. Recorded data are pre-processed and then used in a hybrid discrimination scheme involving both data-driven and numerical classification techniques. The data-driven classification scheme is accomplished in three steps. First, field observations are used to train a type of unsupervised artificial neural network, a self-organizing map (SOM). Second, the SOM is used to simultaneously estimate target parameters (depth, azimuth, inclination, item type and weight) by iterative minimization of the topographic error vectors. Third, the target classification is accomplished by evaluating histograms of the estimated parameters. The numerical classification scheme is also accomplished in three steps. First, the Biot–Savart law is used to model the primary magnetic fields from the transmitter coils and the secondary magnetic fields generated by currents induced in the target materials in the ground. Second, the target response is modelled by three orthogonal dipoles from prolate, oblate and triaxial ellipsoids with one long axis and two shorter axes. Each target consists of all three dipoles. Third, unknown target parameters are determined by comparing modelled to measured target responses. By comparing the rms error among the self-organizing map and numerical classification results, we achieved greater than 95 per cent detection and correct classification of the munitions and explosives of concern at the direct fire and indirect fire test areas at the UXO Standardized Test Site at the Aberdeen Proving Ground, Maryland in 2010.
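    The data-driven step above relies on a self-organizing map; a minimal from-scratch SOM in NumPy is sketched below. The grid size, learning schedule, and synthetic 19-channel responses are illustrative assumptions, not the ALLTEM processing chain.

    ```python
    # Minimal self-organizing map: each input is assigned a best-matching unit
    # and the surrounding neighbourhood of weight vectors is pulled toward it.
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.standard_normal((500, 19))          # e.g. 19 transmit/receive combos

    rows, cols, dim = 10, 10, data.shape[1]
    weights = rng.standard_normal((rows, cols, dim))
    grid = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))

    n_iter, lr0, sigma0 = 5000, 0.5, 3.0
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(dists.argmin(), dists.shape)    # best-matching unit
        lr = lr0 * (1 - t / n_iter)
        sigma = sigma0 * (1 - t / n_iter) + 0.5
        h = np.exp(-np.linalg.norm(grid - np.array(bmu), axis=2) ** 2 / (2 * sigma**2))
        weights += lr * h[..., None] * (x - weights)           # pull neighbourhood
    ```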

  18. Hybrid analysis of multiaxis electromagnetic data for discrimination of munitions and explosives of concern

    NASA Astrophysics Data System (ADS)

    Friedel, M. J.; Asch, T. H.; Oden, C.

    2012-08-01

    The remediation of land containing munitions and explosives of concern, otherwise known as unexploded ordnance, is an ongoing problem facing the U.S. Department of Defense and similar agencies worldwide that have used or are transferring training ranges or munitions disposal areas to civilian control. The expense associated with cleanup of land previously used for military training and war provides impetus for research towards enhanced discrimination of buried unexploded ordnance. Towards reducing that expense, a multiaxis electromagnetic induction data collection and software system, called ALLTEM, was designed and tested with support from the U.S. Department of Defense Environmental Security Technology Certification Program. ALLTEM is an on-time time-domain system that uses a continuous triangle-wave excitation to measure the target-step response rather than traditional impulse response. The system cycles through three orthogonal transmitting loops and records a total of 19 different transmitting and receiving loop combinations with a nominal spatial data sampling interval of 20 cm. Recorded data are pre-processed and then used in a hybrid discrimination scheme involving both data-driven and numerical classification techniques. The data-driven classification scheme is accomplished in three steps. First, field observations are used to train a type of unsupervised artificial neural network, a self-organizing map (SOM). Second, the SOM is used to simultaneously estimate target parameters (depth, azimuth, inclination, item type and weight) by iterative minimization of the topographic error vectors. Third, the target classification is accomplished by evaluating histograms of the estimated parameters. The numerical classification scheme is also accomplished in three steps. First, the Biot-Savart law is used to model the primary magnetic fields from the transmitter coils and the secondary magnetic fields generated by currents induced in the target materials in the ground. Second, the target response is modelled by three orthogonal dipoles from prolate, oblate and triaxial ellipsoids with one long axis and two shorter axes. Each target consists of all three dipoles. Third, unknown target parameters are determined by comparing modelled to measured target responses. By comparing the rms error among the self-organizing map and numerical classification results, we achieved greater than 95 per cent detection and correct classification of the munitions and explosives of concern at the direct fire and indirect fire test areas at the UXO Standardized Test Site at the Aberdeen Proving Ground, Maryland in 2010.

  19. 7 CFR 1775.36 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (CONTINUED) TECHNICAL ASSISTANCE GRANTS Technical Assistance and Training Grants § 1775.36 Purpose. Grants... water and/or waste disposal loan/grant applications. (d) Provide technical assistance/training to... facilities. (e) Pay the expenses associated with providing the technical assistance and/or training...

  20. 49 CFR 1242.57 - Dispatching trains (account XX-51-58).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Dispatching trains (account XX-51-58). 1242.57 Section 1242.57 Transportation Other Regulations Relating to Transportation (Continued) SURFACE...-Transportation § 1242.57 Dispatching trains (account XX-51-58). Separate common expenses on the basis of train...

  1. Distributed Simulation as a modelling tool for the development of a simulation-based training programme for cardiovascular specialties.

    PubMed

    Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando

    2017-01-01

    Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.

  2. Off-site training of laparoscopic skills, a scoping review using a thematic analysis.

    PubMed

    Thinggaard, Ebbe; Kleif, Jakob; Bjerrum, Flemming; Strandbygaard, Jeanett; Gögenur, Ismail; Matthew Ritter, E; Konge, Lars

    2016-11-01

    The focus of research in simulation-based laparoscopic training has changed from examining whether simulation training works to examining how best to implement it. In laparoscopic skills training, portable and affordable box trainers allow for off-site training. Training outside simulation centers and hospitals can increase access to training, but also poses new challenges to implementation. This review aims to guide implementation of off-site training of laparoscopic skills by critically reviewing the existing literature. An iterative systematic search was carried out in MEDLINE, EMBASE, ERIC, Scopus, and PsychINFO, following a scoping review methodology. The included literature was analyzed iteratively using a thematic analysis approach. The study was reported in accordance with the STructured apprOach to the Reporting In healthcare education of Evidence Synthesis statement. From the search, 22 records were identified and included for analysis. A thematic analysis revealed the themes: access to training, protected training time, distribution of training, goal setting and testing, task design, and unsupervised training. The identified themes were based on learning theories including proficiency-based learning, deliberate practice, and self-regulated learning. Methods of instructional design vary widely in off-site training of laparoscopic skills. Implementation can be facilitated by organizing courses and training curricula following sound education theories such as proficiency-based learning and deliberate practice. Directed self-regulated learning has the potential to improve off-site laparoscopic skills training; however, further studies are needed to demonstrate the effect of this type of instructional design.

  3. 32 CFR 202.12 - Administrative support and eligible expenses.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... expense of a RAB: (1) RAB establishment. (2) Membership selection. (3) Training if it is: (i) Site... availability of funds, administrative support to RABs may be funded as follows: (1) At active installations... Restoration account for the Formerly Used Defense Sites program. ...

  4. 32 CFR 202.12 - Administrative support and eligible expenses.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... expense of a RAB: (1) RAB establishment. (2) Membership selection. (3) Training if it is: (i) Site... availability of funds, administrative support to RABs may be funded as follows: (1) At active installations... Restoration account for the Formerly Used Defense Sites program. ...

  5. 32 CFR 202.12 - Administrative support and eligible expenses.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... expense of a RAB: (1) RAB establishment. (2) Membership selection. (3) Training if it is: (i) Site... availability of funds, administrative support to RABs may be funded as follows: (1) At active installations... Restoration account for the Formerly Used Defense Sites program. ...

  6. 32 CFR 202.12 - Administrative support and eligible expenses.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... expense of a RAB: (1) RAB establishment. (2) Membership selection. (3) Training if it is: (i) Site... availability of funds, administrative support to RABs may be funded as follows: (1) At active installations... Restoration account for the Formerly Used Defense Sites program. ...

  7. 32 CFR 202.12 - Administrative support and eligible expenses.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... expense of a RAB: (1) RAB establishment. (2) Membership selection. (3) Training if it is: (i) Site... availability of funds, administrative support to RABs may be funded as follows: (1) At active installations... Restoration account for the Formerly Used Defense Sites program. ...

  8. Investigation of a Parabolic Iterative Solver for Three-dimensional Configurations

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Watson, Willie R.; Mani, Ramani

    2007-01-01

    A parabolic iterative solution procedure is investigated that seeks to extend the parabolic approximation used within the internal propagation module of the duct noise propagation and radiation code CDUCT-LaRC. The governing convected Helmholtz equation is split into a set of coupled equations governing propagation in the positive and negative directions. The proposed method utilizes an iterative procedure to solve the coupled equations in an attempt to account for possible reflections from internal bifurcations, impedance discontinuities, and duct terminations. A geometry consistent with the NASA Langley Curved Duct Test Rig is considered and the effects of acoustic treatment and non-anechoic termination are included. Two numerical implementations are studied and preliminary results indicate that improved accuracy in predicted amplitude and phase can be obtained for modes at a cut-off ratio of 1.7. Further predictions for modes at a cut-off ratio of 1.1 show improvement in predicted phase at the expense of increased amplitude error. Possible methods of improvement are suggested based on analytic and numerical analysis. It is hoped that coupling the parabolic iterative approach with less efficient, high fidelity finite element approaches will ultimately provide the capability to perform efficient, higher fidelity acoustic calculations within complex 3-D geometries for impedance eduction and noise propagation and radiation predictions.

  9. Topology Optimization for Reducing Additive Manufacturing Processing Distortions

    DTIC Science & Technology

    2017-12-01

    features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and... was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion... the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a

  10. The Essential Components of Coach Training for Mental Health Professionals: A Delphi Study

    ERIC Educational Resources Information Center

    Moriarity, Marlene Therese

    2010-01-01

    Purpose. The purpose of this study was to discover how coach training experts define coaching and what they would identify to be the essential components of a coach training program for mental health professionals. Methods. A panel of nine experts, through an iterative Delphi process of responding to three rounds of questionnaires, provided…

  11. 38 CFR 21.370 - Intraregional travel at government expense.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... facility or sheltered workshop; (v) To return to his or her home from the training or rehabilitation...) To report to the chosen school or training facility for the purpose of starting training; (ii) To report to a prospective employer-trainer for an interview prior to induction into training, when there is...

  12. 38 CFR 21.370 - Intraregional travel at government expense.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... facility or sheltered workshop; (v) To return to his or her home from the training or rehabilitation...) To report to the chosen school or training facility for the purpose of starting training; (ii) To report to a prospective employer-trainer for an interview prior to induction into training, when there is...

  13. 38 CFR 21.370 - Intraregional travel at government expense.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... facility or sheltered workshop; (v) To return to his or her home from the training or rehabilitation...) To report to the chosen school or training facility for the purpose of starting training; (ii) To report to a prospective employer-trainer for an interview prior to induction into training, when there is...

  14. 38 CFR 21.370 - Intraregional travel at government expense.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... facility or sheltered workshop; (v) To return to his or her home from the training or rehabilitation...) To report to the chosen school or training facility for the purpose of starting training; (ii) To report to a prospective employer-trainer for an interview prior to induction into training, when there is...

  15. SimCenter Hawaii Technology Enabled Learning and Intervention Systems

    DTIC Science & Technology

    2008-01-01

    manikin training in acquiring triage skills and self-efficacy. Phase II includes the development of the VR training scenarios, which includes iterative... Task A5. Skills acquisition relative to self-efficacy study. See Appendix F, Mass Casualty Triage Training using Human Patient Simulators Improves Speed and Accuracy of First

  16. Designing a composite correlation filter based on iterative optimization of training images for distortion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Elbouz, M.; Alfalou, A.; Brosseau, C.

    2017-06-01

    We present a novel method to optimize the discrimination ability and noise robustness of composite filters. The method is based on the iterative preprocessing of training images, which extracts boundary and detailed feature information from authentic training faces, thereby improving the peak-to-correlation energy (PCE) ratio of authentic faces and making the filter immune to intra-class variance and noise interference. By adding the preprocessed training images directly, one can obtain a composite template with high discrimination ability and robustness for the face recognition task. The proposed composite correlation filter does not involve the complicated mathematical analysis and computation that are often required in the design of correlation algorithms. Simulation tests have been conducted to check the effectiveness and feasibility of our proposal. Moreover, to assess the robustness of composite filters using receiver operating characteristic (ROC) curves, we devise a new method to count true positive and false positive rates based on the difference between the PCE value and a threshold.
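    A basic sketch of a linear-composite correlation filter and the PCE metric it is judged by appears below; the iterative boundary/detail preprocessing that the paper adds is not reproduced, and plain frequency-domain averaging of placeholder images is an assumption for illustration.

    ```python
    # Basic sketch: build a composite filter by averaging training images in the
    # frequency domain, correlate a probe against it, and compute the PCE ratio.
    import numpy as np

    def make_composite_filter(train_imgs):
        """Average the training images in the frequency domain (linear composite)."""
        return np.mean([np.fft.fft2(im) for im in train_imgs], axis=0)

    def correlate(filt, img):
        return np.real(np.fft.ifft2(np.conj(filt) * np.fft.fft2(img)))

    def pce(plane):
        """Peak-to-correlation-energy ratio of a correlation plane."""
        peak = plane.max()
        return peak**2 / np.sum(plane**2)

    train = [np.random.rand(64, 64) for _ in range(5)]      # placeholder face images
    probe = train[0] + 0.1 * np.random.rand(64, 64)
    H = make_composite_filter(train)
    print("PCE of probe:", pce(correlate(H, probe)))
    ```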

  17. Distance Metric Learning via Iterated Support Vector Machines.

    PubMed

    Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei

    2017-07-11

    Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.

  18. A Capital Idea.

    ERIC Educational Resources Information Center

    Geber, Beverly

    1992-01-01

    The Department of Labor is considering changing accounting rules to allow companies to treat training expenditures as investments rather than expenses. Human asset accounting could be a powerful incentive to increase business investment in training. (SK)

  19. New perspectives in face correlation: discrimination enhancement in face recognition based on iterative algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-04-01

    Here, we report a brief review of recent developments in correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that an all-numerical implementation is the most practical choice for applying the correlation method, because the advantages of optical processing cannot compensate for the technical and/or financial cost of an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. With three or four iterations, the peak-to-correlation energy (PCE) value of the correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this approach. Our method can serve as an optimization tool for many composite filters based on a linear composition of training images.

  20. Fostering Self-Regulated Learning in a Blended Environment Using Group Awareness and Peer Assistance as External Scaffolds

    ERIC Educational Resources Information Center

    Lin, J-W.; Lai, Y-C.; Lai, Y-C.; Chang, L-C.

    2016-01-01

    Most systems for training self-regulated learning (SRL) behaviour focus on the provision of a learner-centred environment. Such systems repeat the training process and place learners alone to experience that process iteratively. According to the relevant literature, external scaffolds are more promising for effective SRL training. In this work,…

  1. Ethics creep or governance creep? Challenges for Australian Human Research Ethics Committees (HRECS).

    PubMed

    Gorman, Susanna M

    2011-09-01

    Australian Human Research Ethics Committees (HRECs) have to contend with ever-increasing workloads and responsibilities which go well beyond questions of mere ethics. In this article, I shall examine how the roles of HRECs have changed, and show how this is reflected in the iterations of the National Statement on Ethical Conduct in Human Research 2007 (NS). In particular I suggest that the focus of the National Statement has shifted to concentrate on matters of research governance at the expense of research ethics, compounded by its linkage to the Australian Code for the Responsible Conduct of Research (2007) in its most recent iteration. I shall explore some of the challenges this poses for HRECs and institutions and the risks it poses to ensuring that Australian researchers receive clear ethical guidance and review.

  2. Formulation for Simultaneous Aerodynamic Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.

    1993-01-01

    An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
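
    The adjoint-variable idea can be illustrated on a toy linear residual R(u, x) = A(x)u - b = 0 with objective f = c^T u: one adjoint solve replaces one flow solve per design variable. The sketch below is purely illustrative and bears no relation to the paper's nozzle-flow equations.

        # Toy adjoint-variable sensitivity: linear "flow" residual R(u, x) = A(x) u - b
        # with objective f = c^T u. The single adjoint solve A(x)^T lam = c gives the
        # whole gradient df/dx without one extra flow solve per design variable.
        import numpy as np

        n, m = 5, 3                                  # state size, design variables
        rng = np.random.default_rng(1)
        A0 = rng.random((n, n)) + n * np.eye(n)
        dA = rng.random((m, n, n))                   # dA[k] = dA/dx_k
        b, c = rng.random(n), rng.random(n)

        def A(x):
            return A0 + np.tensordot(x, dA, axes=1)

        x = rng.random(m)
        u = np.linalg.solve(A(x), b)                 # "flow" solution
        lam = np.linalg.solve(A(x).T, c)             # single adjoint solve
        grad = np.array([-lam @ (dA[k] @ u) for k in range(m)])   # df/dx_k = -lam^T dA_k u

        # Finite-difference check
        eps = 1e-6
        fd = np.array([(c @ np.linalg.solve(A(x + eps * np.eye(m)[k]), b) - c @ u) / eps
                       for k in range(m)])
        print("adjoint vs finite differences agree:", np.allclose(grad, fd, rtol=1e-3))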

  3. Digital micromirror device as amplitude diffuser for multiple-plane phase retrieval

    NASA Astrophysics Data System (ADS)

    Abregana, Timothy Joseph T.; Hermosa, Nathaniel P.; Almoro, Percival F.

    2017-06-01

    Previous implementations of the phase diffuser used in the multiple-plane phase retrieval method included a diffuser glass plate with fixed optical properties or a programmable yet expensive spatial light modulator. Here, a model for phase retrieval based on a digital micromirror device as amplitude diffuser is presented. The technique offers a programmable, convenient and low-cost amplitude diffuser for non-stagnating iterative phase retrieval. The technique is demonstrated in the reconstructions of smooth object wavefronts.

  4. Improving Defense Acquisition Management and Policy Through a Life-Cycle Affordability Framework

    DTIC Science & Technology

    2014-02-04

    substrates based on gender, culture, and propensity. Four: Design a neurofeedback-based training program that will produce changes in neuronal substrates...Validate the training program by iterating Step 3 until the desired behavioral outcome is achieved. Confirm that the neurofeedback creates desired

  5. Remote sensing training needs in professional forest and range resource management curricula

    NASA Technical Reports Server (NTRS)

    Meyer, M. P.

    1981-01-01

    The status of remote sensing training in accredited U.S. forestry schools is reviewed. It is noted that there is a serious lack of emphasis on aerial photography and aerial photointerpretation in the current curricula. This lack of training at the professional-school level limits the capability of entering employees and necessitates expensive on-the-job training.

  6. 25 CFR 39.603 - Is school board training required for all Bureau-funded schools?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false Is school board training required for all Bureau-funded schools? 39.603 Section 39.603 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION THE INDIAN SCHOOL EQUALIZATION PROGRAM School Board Training Expenses § 39.603 Is school board training...

  7. 25 CFR 39.603 - Is school board training required for all Bureau-funded schools?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Is school board training required for all Bureau-funded schools? 39.603 Section 39.603 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION THE INDIAN SCHOOL EQUALIZATION PROGRAM School Board Training Expenses § 39.603 Is school board training...

  8. 75 FR 39619 - Proposed Information Collection (Quarterly Report of State Approving Agency) Activities Activity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-09

    ... information needed to accurately reimburse State Approving Agencies (SAAs) for expenses incurred in the... reimburses SAAs for expenses incurred in the approval and supervision of education and training programs. SAAs are required to report their activities to VA quarterly and provide notices regarding which...

  9. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity.

    PubMed

    Dummer, Benjamin; Wieland, Stefan; Lindner, Benjamin

    2014-01-01

    A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from Poissonian. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study the self-consistent statistics of the input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime, close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of spike trains in the recurrent network.
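
    A compact sketch of the Gaussian variant of this iteration is given below: a leaky integrate-and-fire neuron is driven by Gaussian noise whose power spectrum is taken from the spike train it produced in the previous generation. All parameter values are placeholders rather than those used in the study.

        # Gaussian-surrogate iteration for a leaky integrate-and-fire (LIF) neuron:
        # each generation is driven by Gaussian noise shaped to the spike-train power
        # spectrum of the previous generation. All parameters are placeholders.
        import numpy as np

        dt, T = 5e-4, 20.0
        n = int(T / dt)
        freqs = np.fft.rfftfreq(n, dt)
        tau, v_th, v_reset, mu, sigma = 0.02, 1.0, 0.0, 1.2, 0.5

        def colored_noise(spectrum, rng):
            """Zero-mean, unit-variance Gaussian noise with roughly the given spectrum."""
            amp = np.sqrt(np.maximum(spectrum, 0.0))
            phases = np.exp(2j * np.pi * rng.random(len(spectrum)))
            x = np.fft.irfft(amp * phases, n)
            return (x - x.mean()) / (x.std() + 1e-12)

        def lif_spikes(noise):
            v, spikes = 0.0, np.zeros(n)
            for i in range(n):
                v += dt / tau * (-v + mu + sigma * noise[i])
                if v >= v_th:
                    v, spikes[i] = v_reset, 1.0
            return spikes

        def power_spectrum(spikes):
            x = spikes / dt - spikes.mean() / dt     # delta train, mean removed
            return np.abs(np.fft.rfft(x)) ** 2 * dt / n

        rng = np.random.default_rng(0)
        spec = np.ones_like(freqs)                   # generation 0: flat surrogate input
        for generation in range(4):
            spikes = lif_spikes(colored_noise(spec, rng))
            spec = power_spectrum(spikes)
            print(f"generation {generation}: firing rate {spikes.sum() / T:.1f} Hz")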

  10. The development and preliminary testing of a multimedia patient-provider survivorship communication module for breast cancer survivors.

    PubMed

    Wen, Kuang-Yi; Miller, Suzanne M; Stanton, Annette L; Fleisher, Linda; Morra, Marion E; Jorge, Alexandra; Diefenbach, Michael A; Ropka, Mary E; Marcus, Alfred C

    2012-08-01

    This paper describes the development of a theory-guided and evidence-based multimedia training module to facilitate breast cancer survivors' preparedness for effective communication with their health care providers after active treatment. The iterative developmental process used included: (1) theory and evidence-based content development and vetting; (2) user testing; (3) usability testing; and (4) participant module utilization. Formative evaluation of the training module prototype occurred through user testing (n = 12), resulting in modification of the content and layout. Usability testing (n = 10) was employed to improve module functionality. Preliminary web usage data (n = 256, mean age = 53, 94.5% White, 75% college graduate and above) showed that 59% of the participants accessed the communication module, for an average of 7 min per login. The iterative developmental process was informative in enhancing the relevance of the communication module. Preliminary web usage results demonstrate the potential feasibility of such a program. Our study demonstrates survivors' openness to the use of a web-based communication skills training module and outlines a systematic iterative user and interface program development and testing process, which can serve as a prototype for others considering such an approach. Copyright © 2012. Published by Elsevier Ireland Ltd.

  11. The development and preliminary testing of a multimedia patient–provider survivorship communication module for breast cancer survivors

    PubMed Central

    Wen, Kuang-Yi; Miller, Suzanne M.; Stanton, Annette L.; Fleisher, Linda; Morra, Marion E.; Jorge, Alexandra; Diefenbach, Michael A.; Ropka, Mary E.; Marcus, Alfred C.

    2012-01-01

    Objective: This paper describes the development of a theory-guided and evidence-based multimedia training module to facilitate breast cancer survivors’ preparedness for effective communication with their health care providers after active treatment. Methods: The iterative developmental process used included: (1) theory and evidence-based content development and vetting; (2) user testing; (3) usability testing; and (4) participant module utilization. Results: Formative evaluation of the training module prototype occurred through user testing (n = 12), resulting in modification of the content and layout. Usability testing (n = 10) was employed to improve module functionality. Preliminary web usage data (n = 256, mean age = 53, 94.5% White, 75% college graduate and above) showed that 59% of the participants accessed the communication module, for an average of 7 min per login. Conclusion: The iterative developmental process was informative in enhancing the relevance of the communication module. Preliminary web usage results demonstrate the potential feasibility of such a program. Practice implications: Our study demonstrates survivors’ openness to the use of a web-based communication skills training module and outlines a systematic iterative user and interface program development and testing process, which can serve as a prototype for others considering such an approach. PMID:22770812

  12. Multidisciplinary optimization of an HSCT wing using a response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giunta, A.A.; Grossman, B.; Mason, W.H.

    1994-12-31

    Aerospace vehicle design is traditionally divided into three phases: conceptual, preliminary, and detailed. Each of these design phases entails a particular level of accuracy and computational expense. While there are several computer programs which perform inexpensive conceptual-level aircraft multidisciplinary design optimization (MDO), aircraft MDO remains prohibitively expensive using preliminary- and detailed-level analysis tools. This occurs due to the expense of computational analyses and because gradient-based optimization requires the analysis of hundreds or thousands of aircraft configurations to estimate design sensitivity information. A further hindrance to aircraft MDO is the problem of numerical noise which occurs frequently in engineering computations. Computer models produce numerical noise as a result of the incomplete convergence of iterative processes, round-off errors, and modeling errors. Such numerical noise is typically manifested as a high frequency, low amplitude variation in the results obtained from the computer models. Optimization attempted using noisy computer models may result in the erroneous calculation of design sensitivities and may slow or prevent convergence to an optimal design.
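
    The response-surface remedy sketched in this record can be illustrated in a few lines: sample a noisy analysis, fit a smooth quadratic surface by least squares, and optimize the cheap surrogate instead. The analysis function below is an artificial stand-in, not an aerodynamic code.

        # Quadratic response surface over a noisy "analysis": fit by least squares on
        # sampled designs, then optimize the smooth surrogate.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        def noisy_analysis(x):
            return (x[0] - 1.5) ** 2 + (x[1] + 0.5) ** 2 + 1e-3 * np.sin(400.0 * x[0])

        def quad_features(x):
            x1, x2 = x
            return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

        samples = rng.uniform(-3.0, 3.0, size=(30, 2))
        y = np.array([noisy_analysis(s) for s in samples])
        coeff, *_ = np.linalg.lstsq(np.array([quad_features(s) for s in samples]), y, rcond=None)

        surrogate = lambda x: quad_features(x) @ coeff
        res = minimize(surrogate, x0=np.zeros(2), method="Nelder-Mead")
        print("surrogate optimum:", res.x)            # near the true optimum (1.5, -0.5)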

  13. Training vegetable parenting practices through a mobile game: Iterative qualitative alpha test

    USDA-ARS?s Scientific Manuscript database

    Vegetable consumption protects against chronic diseases, but many young children do not eat vegetables. One quest within the mobile application Mommio was developed to train mothers of preschoolers in effective vegetable parenting practices, or ways to approach getting their child to eat and enjoy v...

  14. 25 CFR 39.604 - Is there a separate weight for school board training at Bureau-operated schools?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Is there a separate weight for school board training at... INTERIOR EDUCATION THE INDIAN SCHOOL EQUALIZATION PROGRAM School Board Training Expenses § 39.604 Is there a separate weight for school board training at Bureau-operated schools? Yes. There is an ISEP weight...

  15. 5 CFR 410.307 - Training for promotion or placement in other positions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... capability to learn skills and acquire knowledge and abilities needed in the new position; and (C) The... employees in job search skills, techniques, and strategies; and (D) Pay for training related expenses as...

  16. 5 CFR 410.307 - Training for promotion or placement in other positions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... capability to learn skills and acquire knowledge and abilities needed in the new position; and (C) The... employees in job search skills, techniques, and strategies; and (D) Pay for training related expenses as...

  17. 5 CFR 410.307 - Training for promotion or placement in other positions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... capability to learn skills and acquire knowledge and abilities needed in the new position; and (C) The... employees in job search skills, techniques, and strategies; and (D) Pay for training related expenses as...

  18. 41 CFR 304-2.1 - What definitions apply to this chapter?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... promotional vendor training or other meetings held for the primary purpose of marketing the non-Federal... Regulation System PAYMENT OF TRAVEL EXPENSES FROM A NON-FEDERAL SOURCE EMPLOYEE'S ACCEPTANCE OF PAYMENT FROM A NON-FEDERAL SOURCE FOR TRAVEL EXPENSES 2-DEFINITIONS § 304-2.1 What definitions apply to this...

  19. 41 CFR 304-2.1 - What definitions apply to this chapter?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... promotional vendor training or other meetings held for the primary purpose of marketing the non-Federal... Regulation System PAYMENT OF TRAVEL EXPENSES FROM A NON-FEDERAL SOURCE EMPLOYEE'S ACCEPTANCE OF PAYMENT FROM A NON-FEDERAL SOURCE FOR TRAVEL EXPENSES 2-DEFINITIONS § 304-2.1 What definitions apply to this...

  20. 41 CFR 304-2.1 - What definitions apply to this chapter?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... promotional vendor training or other meetings held for the primary purpose of marketing the non-Federal... Regulation System PAYMENT OF TRAVEL EXPENSES FROM A NON-FEDERAL SOURCE EMPLOYEE'S ACCEPTANCE OF PAYMENT FROM A NON-FEDERAL SOURCE FOR TRAVEL EXPENSES 2-DEFINITIONS § 304-2.1 What definitions apply to this...

  1. The impact of turnover among respiratory care practitioners in a health care system: frequency and associated costs.

    PubMed

    Stoller, J K; Orens, D K; Kester, L

    2001-03-01

    Retention of respiratory therapists (RTs) is a desired institutional goal that reflects department loyalty and RTs' satisfaction. When RTs leave a department, services are disrupted and new therapists must undergo orientation and training, which requires time and expense. Despite the widely shared goal of minimal turnover, neither the annual rate nor the associated expense of turnover for RTs has been described. Determine the rate of RT turnover and the costs related to training new staff members. The Cleveland Clinic Health System is composed of 9 participating hospitals, which range from small, community-based institutions to large, tertiary care institutions. To elicit information about annual turnover among RTs throughout the system, we conducted a survey of key personnel in each of the hospitals' respiratory therapy departments. To calculate the costs of training, we reviewed the training schedule for an RT joining the Respiratory Therapy Section at the Cleveland Clinic Hospital. Cost estimates reflect the duration of training by various supervisory RTs, their respective wages (including benefit costs), and educational materials used in training and orientation. Turnover rates ranged from 3% to 18% per year. Five of the 8 institutions from which rates were available reported rates greater than 8% per year. The rate of annual turnover correlated significantly with the ratio of hospital beds to RT staff (Pearson r = 0.784, r^2 = 0.61, p = 0.02). The cost of training an RT at the Cleveland Clinic Hospital totaled $3,447.11. Turnover among respiratory therapists poses a substantial problem because of its frequency and expense. Greater attention to issues affecting turnover and to enhancing retention of RTs is warranted.

  2. Reasons Why Training and Development Fails...and What You Can Do about It.

    ERIC Educational Resources Information Center

    Phillips, Jack L.; Phillips, Patricia P.

    2002-01-01

    Among the reasons why training and development fail are lack of alignment with needs, failure to recognize nontraining solutions, lack of objectives, expensive solutions, lack of accountability for results, failure to prepare for transfer, lack of management support, failure to isolate the effects of training, lack of executive commitment and…

  3. 49 CFR 1242.59 - Train inspection and lubrication (account XX-51-62).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Train inspection and lubrication (account XX-51-62). 1242.59 Section 1242.59 Transportation Other Regulations Relating to Transportation (Continued) SURFACE...-Transportation § 1242.59 Train inspection and lubrication (account XX-51-62). Separate common expenses on basis...

  4. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE PAGES

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin; ...

    2016-04-01

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.
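
    A toy comparison of Picard iteration with Anderson acceleration on an invented two-field power/temperature feedback problem is sketched below; only the coupling structure mirrors the paper's setting, and scipy's generic anderson solver stands in for the production implementation.

        # Picard (block Gauss-Seidel) iteration vs Anderson acceleration on an
        # invented power/temperature feedback problem; scipy's generic anderson
        # solver is applied to the residual F(x) = G(x) - x.
        import numpy as np
        from scipy.optimize import anderson

        def G(x):
            """One pass of the coupled solves: power update, then temperature update."""
            p, t = x
            p_new = 1000.0 / (1.0 + 0.002 * t)        # stand-in neutronics with feedback
            t_new = 300.0 + 0.5 * p_new               # stand-in thermal solve
            return np.array([p_new, t_new])

        x = np.array([1000.0, 300.0])
        for k in range(50):                           # Picard iteration
            x_new = G(x)
            if np.linalg.norm(x_new - x) < 1e-8:
                break
            x = x_new
        print("Picard  :", x_new, "after", k + 1, "iterations")

        sol = anderson(lambda x: G(x) - x, np.array([1000.0, 300.0]), f_tol=1e-8)
        print("Anderson:", sol)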

  5. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven; Berrill, Mark; Clarno, Kevin

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Furthermore, numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  6. An assessment of coupling algorithms for nuclear reactor core physics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven, E-mail: hamiltonsp@ornl.gov; Berrill, Mark, E-mail: berrillma@ornl.gov; Clarno, Kevin, E-mail: clarnokt@ornl.gov

    This paper evaluates the performance of multiphysics coupling algorithms applied to a light water nuclear reactor core simulation. The simulation couples the k-eigenvalue form of the neutron transport equation with heat conduction and subchannel flow equations. We compare Picard iteration (block Gauss–Seidel) to Anderson acceleration and multiple variants of preconditioned Jacobian-free Newton–Krylov (JFNK). The performance of the methods is evaluated over a range of energy group structures and core power levels. A novel physics-based approximation to a Jacobian-vector product has been developed to mitigate the impact of expensive on-line cross section processing steps. Numerical simulations demonstrating the efficiency of JFNK and Anderson acceleration relative to standard Picard iteration are performed on a 3D model of a nuclear fuel assembly. Both criticality (k-eigenvalue) and critical boron search problems are considered.

  7. Permittivity and conductivity parameter estimations using full waveform inversion

    NASA Astrophysics Data System (ADS)

    Serrano, Jheyston O.; Ramirez, Ana B.; Abreo, Sergio A.; Sadler, Brian M.

    2018-04-01

    Full waveform inversion of Ground Penetrating Radar (GPR) data is a promising strategy to estimate quantitative characteristics of the subsurface such as permittivity and conductivity. In this paper, we propose a methodology that uses Full Waveform Inversion (FWI) in the time domain of 2D GPR data to obtain highly resolved images of the permittivity and conductivity parameters of the subsurface. FWI is an iterative method that requires a cost function to measure the misfit between observed and modeled data, a wave propagator to compute the modeled data and an initial velocity model that is updated at each iteration until an acceptable decrease of the cost function is reached. The use of FWI with GPR is computationally expensive because it is based on the computation of the full electromagnetic wave propagation. Also, the commercially available acquisition systems use only one transmitter and one receiver antenna at zero offset, requiring a large number of shots to scan a single line.
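
    A heavily simplified FWI-style loop is sketched below: a toy forward model maps a permittivity profile to a synthetic trace, and the profile is updated by gradient descent on the least-squares misfit. A real GPR inversion would use a full electromagnetic propagator and adjoint-state gradients; every quantity here is a placeholder.

        # Toy FWI-style loop: a made-up forward model maps a permittivity profile to a
        # synthetic trace, and plain gradient descent reduces the least-squares misfit.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 20
        true_eps = 4.0 + 2.0 * np.exp(-((np.arange(n) - 10) ** 2) / 8.0)  # "subsurface"
        A = rng.random((40, n))                                           # toy propagator

        def forward(eps):
            return A @ np.sqrt(eps)                   # mildly nonlinear forward model

        def misfit(eps, d_obs):
            r = forward(eps) - d_obs
            return 0.5 * float(r @ r)

        d_obs = forward(true_eps)
        eps = np.full(n, 4.0)                         # initial model
        for it in range(500):
            r = forward(eps) - d_obs
            grad = (A.T @ r) * 0.5 / np.sqrt(eps)     # chain rule through sqrt(eps)
            eps = np.maximum(eps - 0.05 * grad, 0.5)  # keep permittivity physical
        print("misfit before / after:", misfit(np.full(n, 4.0), d_obs), misfit(eps, d_obs))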

  8. A Parallel Numerical Algorithm To Solve Linear Systems Of Equations Emerging From 3D Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.

    2016-10-01

    Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ* , which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.

  9. Clinical Training at Remote Sites Using Mobile Technology: An India-USA Partnership

    ERIC Educational Resources Information Center

    Vyas, R.; Albright, S.; Walker, D.; Zachariah, A.; Lee, M. Y.

    2010-01-01

    Christian Medical College (CMC), India, and Tufts University School of Medicine, USA, have developed an "institutional hub and spokes" model (campus-based e-learning supporting m-learning in the field) to facilitate clinical education and training at remote secondary hospital sites across India. Iterative research, design, development,…

  10. A Preliminary Examination of the In-Training Evaluation Report

    ERIC Educational Resources Information Center

    Skakun, Ernest N.; And Others

    1975-01-01

    The In-Training Evaluation Report (ITER), in use by the Royal College of Physicians and Surgeons of Canada for examining the competencies of candidates eligible for the certifying examination, was tested for validity and reliability. This analysis suggests revisions but declares the ITER a useful instrument to aid in candidate assessment. (JT)

  11. Training Social Justice Journalists: A Case Study

    ERIC Educational Resources Information Center

    Nelson, Jacob L.; Lewis, Dan A.

    2015-01-01

    Journalism schools are in the midst of sorting through what it means to prepare journalists for a rapidly transitioning field. In this article, we describe an effort to train students in "social justice journalism" at an elite school of journalism. In our ethnographic analysis of its first iteration, we found that this effort failed to…

  12. Design of 4D x-ray tomography experiments for reconstruction using regularized iterative algorithms

    NASA Astrophysics Data System (ADS)

    Mohan, K. Aditya

    2017-10-01

    4D X-ray computed tomography (4D-XCT) is widely used to perform non-destructive characterization of time varying physical processes in various materials. The conventional approach to improving temporal resolution in 4D-XCT involves the development of expensive and complex instrumentation that acquire data faster with reduced noise. It is customary to acquire data with many tomographic views at a high signal to noise ratio. Instead, temporal resolution can be improved using regularized iterative algorithms that are less sensitive to noise and limited views. These algorithms benefit from optimization of other parameters such as the view sampling strategy while improving temporal resolution by reducing the total number of views or the detector exposure time. This paper presents the design principles of 4D-XCT experiments when using regularized iterative algorithms derived using the framework of model-based reconstruction. A strategy for performing 4D-XCT experiments is presented that allows for improving the temporal resolution by progressively reducing the number of views or the detector exposure time. Theoretical analysis of the effect of the data acquisition parameters on the detector signal to noise ratio, spatial reconstruction resolution, and temporal reconstruction resolution is also presented in this paper.

  13. Crowd-sourced assessment of technical skills: an adjunct to urology resident surgical simulation training.

    PubMed

    Holst, Daniel; Kowalewski, Timothy M; White, Lee W; Brand, Timothy C; Harper, Jonathan D; Sorenson, Mathew D; Kirsch, Sarah; Lendvay, Thomas S

    2015-05-01

    Crowdsourcing is the practice of obtaining services from a large group of people, typically an online community. Validated methods of evaluating surgical video are time-intensive, expensive, and involve participation of multiple expert surgeons. We sought to obtain valid performance scores of urologic trainees and faculty on a dry-laboratory robotic surgery task module by using crowdsourcing through a web-based grading tool called Crowd Sourced Assessment of Technical Skill (CSATS). IRB approval was granted to test the technical skills grading accuracy of Amazon.com Mechanical Turk™ crowd-workers compared to three expert faculty surgeon graders. The two groups assessed dry-laboratory robotic surgical suturing performances of three urology residents (PGY-2, -4, -5) and two faculty using three performance domains from the validated Global Evaluative Assessment of Robotic Skills assessment tool. After an average of 2 hours 50 minutes, each of the five videos received 50 crowd-worker assessments. The inter-rater reliability (IRR) between the surgeons and crowd was 0.91 using Cronbach's alpha statistic (confidence intervals=0.20-0.92), indicating an agreement level between the two groups of "excellent." The crowds were able to discriminate surgical skill level, and both the crowds and the expert faculty surgeon graders scored one senior trainee's performance above a faculty member's performance. Surgery-naive crowd-workers can rapidly assess varying levels of surgical skill accurately relative to a panel of faculty raters. The crowds provided rapid feedback and were inexpensive. CSATS may be a valuable adjunct to surgical simulation training as requirements for more granular and iterative performance tracking of trainees become mandated and commonplace.
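
    For reference, the agreement statistic quoted above can be computed with a few lines of Python; the helper below implements the standard Cronbach's alpha formula on a raters-by-videos score matrix, with made-up scores solely to exercise the function.

        # Cronbach's alpha from a raters-by-videos score matrix; the numbers are
        # invented solely to exercise the helper.
        import numpy as np

        def cronbach_alpha(scores):
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[0]                              # number of raters
            item_vars = scores.var(axis=1, ddof=1).sum()     # per-rater score variances
            total_var = scores.sum(axis=0).var(ddof=1)       # variance of summed scores
            return k / (k - 1) * (1.0 - item_vars / total_var)

        faculty = [[12, 9, 15, 8, 14], [11, 10, 14, 9, 15]]
        crowd_mean = [[12.5, 9.4, 14.2, 8.8, 14.6]]
        print("alpha:", round(cronbach_alpha(np.vstack([faculty, crowd_mean])), 2))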

  14. Improved Neural Signal Classification in a Rapid Serial Visual Presentation Task Using Active Learning.

    PubMed

    Marathe, Amar R; Lawhern, Vernon J; Wu, Dongrui; Slayback, David; Lance, Brent J

    2016-03-01

    The application space for brain-computer interface (BCI) technologies is rapidly expanding with improvements in technology. However, most real-time BCIs require extensive individualized calibration prior to use, and systems often have to be recalibrated to account for changes in the neural signals due to a variety of factors including changes in human state, the surrounding environment, and task conditions. Novel approaches to reduce calibration time or effort will dramatically improve the usability of BCI systems. Active Learning (AL) is an iterative semi-supervised learning technique for learning in situations in which data may be abundant, but labels for the data are difficult or expensive to obtain. In this paper, we apply AL to a simulated BCI system for target identification using data from a rapid serial visual presentation (RSVP) paradigm to minimize the amount of training samples needed to initially calibrate a neural classifier. Our results show AL can produce similar overall classification accuracy with significantly less labeled data (in some cases less than 20%) when compared to alternative calibration approaches. In fact, AL classification performance matches performance of 10-fold cross-validation (CV) in over 70% of subjects when training with less than 50% of the data. To our knowledge, this is the first work to demonstrate the use of AL for offline electroencephalography (EEG) calibration in a simulated BCI paradigm. While AL itself is not often amenable for use in real-time systems, this work opens the door to alternative AL-like systems that are more amenable for BCI applications and thus enables future efforts for developing highly adaptive BCI systems.
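
    A generic pool-based uncertainty-sampling loop, the flavour of Active Learning the abstract describes, is sketched below on synthetic stand-in data: start from a few labelled samples, repeatedly query the example the classifier is least confident about, and retrain.

        # Pool-based uncertainty sampling on synthetic stand-in data: query the sample
        # the classifier is least confident about, add its label, retrain.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)
        labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
        pool = [i for i in range(len(X)) if i not in labeled]

        clf = LogisticRegression(max_iter=1000)
        for _ in range(40):                                   # 40 queries to the "oracle"
            clf.fit(X[labeled], y[labeled])
            proba = clf.predict_proba(X[pool])
            uncertainty = 1.0 - proba.max(axis=1)             # least-confident criterion
            labeled.append(pool.pop(int(np.argmax(uncertainty))))

        clf.fit(X[labeled], y[labeled])
        print("accuracy with", len(labeled), "labels:", round(clf.score(X, y), 3))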

  15. An Open-Source Toolbox for Surrogate Modeling of Joint Contact Mechanics

    PubMed Central

    Eskinazi, Ilan

    2016-01-01

    Goal: Incorporation of elastic joint contact models into simulations of human movement could facilitate studying the interactions between muscles, ligaments, and bones. Unfortunately, elastic joint contact models are often too expensive computationally to be used within iterative simulation frameworks. This limitation can be overcome by using fast and accurate surrogate contact models that fit or interpolate input-output data sampled from existing elastic contact models. However, construction of surrogate contact models remains an arduous task. The aim of this paper is to introduce an open-source program called Surrogate Contact Modeling Toolbox (SCMT) that facilitates surrogate contact model creation, evaluation, and use. Methods: SCMT interacts with the third party software FEBio to perform elastic contact analyses of finite element models and uses Matlab to train neural networks that fit the input-output contact data. SCMT features sample point generation for multiple domains, automated sampling, sample point filtering, and surrogate model training and testing. Results: An overview of the software is presented along with two example applications. The first example demonstrates creation of surrogate contact models of artificial tibiofemoral and patellofemoral joints and evaluates their computational speed and accuracy, while the second demonstrates the use of surrogate contact models in a forward dynamic simulation of an open-chain leg extension-flexion motion. Conclusion: SCMT facilitates the creation of computationally fast and accurate surrogate contact models. Additionally, it serves as a bridge between FEBio and OpenSim musculoskeletal modeling software. Significance: Researchers may now create and deploy surrogate models of elastic joint contact with minimal effort. PMID:26186761
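
    The fit-and-reuse pattern behind such surrogates can be sketched in Python as below: sample an elastic contact model (here a cheap analytic stand-in), train a small neural network on the input-output pairs, and evaluate the network in place of the expensive solver. SCMT itself drives FEBio and Matlab; none of that tooling appears here.

        # Fit-and-reuse surrogate pattern: sample a cheap analytic stand-in for an
        # elastic contact solve, train a small neural network on the pairs, and use
        # the network for fast repeated evaluations.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        def contact_model(pose):                  # stand-in for an expensive FE solve
            flexion, load = pose[:, 0], pose[:, 1]
            return load * (1.0 + 0.3 * np.sin(flexion)) + 0.05 * load ** 2

        poses = rng.uniform([-1.0, 0.0], [1.0, 10.0], size=(2000, 2))
        forces = contact_model(poses)

        X_tr, X_te, y_tr, y_te = train_test_split(poses, forces, random_state=0)
        surrogate = make_pipeline(StandardScaler(),
                                  MLPRegressor(hidden_layer_sizes=(32, 32),
                                               max_iter=3000, random_state=0))
        surrogate.fit(X_tr, y_tr)
        print("surrogate R^2 on held-out samples:", round(surrogate.score(X_te, y_te), 3))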

  16. Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.

    PubMed

    Bae, Seung-Hwan; Yoon, Kuk-Jin

    2018-03-01

    Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to the moment. It still remains a difficult problem in complex scenes, because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between object appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since the conventional appearance learning methods do not provide rich representations that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning for improving appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvement over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.

  17. Technical Performance Measures and Distributed-Simulation Training Systems

    DTIC Science & Technology

    2000-01-01

    and Salas (1995) indicate that “free play” training exercises...“The use of both process measures to...performance change from training period to training period, whereas the alternative to “free play”—a structured exercise—was expensive to build and...semiautomated-force operators used their “free play” prerogative in the second run. Specifically, units typically train against a less capable opposing force

  18. 41 CFR 301-11.3 - Must my agency pay an allowance (either a per diem allowance or actual expense)?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 41 Public Contracts and Property Management 4 2011-07-01 2011-07-01 false Must my agency pay an... Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES ALLOWABLE... per diem allowance or actual expense)? Yes, unless: (a) You perform travel to a training event under...

  19. 41 CFR 301-11.3 - Must my agency pay an allowance (either a per diem allowance or actual expense)?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false Must my agency pay an... Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES ALLOWABLE... per diem allowance or actual expense)? Yes, unless: (a) You perform travel to a training event under...

  20. MACBETH: Development of a Training Game for the Mitigation of Cognitive Bias

    ERIC Educational Resources Information Center

    Dunbar, Norah E.; Wilson, Scott N.; Adame, Bradley J.; Elizondo, Javier; Jensen, Matthew L.; Miller, Claude H.; Kauffman, Abigail Allums; Seltsam, Toby; Bessarabova, Elena; Vincent, Cindy; Straub, Sara K.; Ralston, Ryan; Dulawan, Christopher L.; Ramirez, Dennis; Squire, Kurt; Valacich, Joseph S.; Burgoon, Judee K.

    2013-01-01

    This paper describes the process of rapid iterative prototyping used by a research team developing a training video game for the Sirius program funded by the Intelligence Advanced Research Projects Activity (IARPA). Described are three stages of development, including a paper prototype, and builds for alpha and beta testing. Game development is…

  1. A successive overrelaxation iterative technique for an adaptive equalizer

    NASA Technical Reports Server (NTRS)

    Kosovych, O. S.

    1973-01-01

    An adaptive strategy for the equalization of pulse-amplitude-modulated signals in the presence of intersymbol interference and additive noise is reported. The successive overrelaxation iterative technique is used as the algorithm for the iterative adjustment of the equalizer coefficients during a training period for the minimization of the mean square error. With 2-cyclic and nonnegative Jacobi matrices, substantial improvement is demonstrated in the rate of convergence over the commonly used gradient techniques. The Jacobi theorems are also extended to nonpositive Jacobi matrices. Numerical examples strongly indicate that the improvements obtained for the special cases are possible for general channel characteristics. The technique is analytically demonstrated to decrease the mean square error at each iteration for a large range of parameter values for light or moderate intersymbol interference and for small intervals for general channels. Analytically, convergence of the relaxation algorithm was proven in a noisy environment and the coefficient variance was demonstrated to be bounded.
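
    A minimal sketch of tap adaptation by successive overrelaxation follows: the minimum mean-square-error taps satisfy the normal equations R w = p, and SOR sweeps update one coefficient at a time using correlations estimated from a training sequence. The channel, relaxation factor and other settings are invented for illustration.

        # Successive overrelaxation (SOR) sweeps solving the normal equations R w = p
        # for the equalizer taps, with R and p estimated from a PAM training sequence
        # passed through an invented channel.
        import numpy as np

        rng = np.random.default_rng(0)
        channel = np.array([0.1, 0.9, 0.25])          # mild intersymbol interference
        symbols = rng.choice([-1.0, 1.0], size=5000)  # PAM training sequence
        received = np.convolve(symbols, channel, mode="same") + 0.05 * rng.standard_normal(5000)

        taps, delay = 7, 3
        X = np.array([received[i - delay:i - delay + taps]
                      for i in range(delay, len(received) - taps)])
        d = symbols[delay:len(received) - taps]
        R, p = X.T @ X / len(X), X.T @ d / len(X)     # correlation estimates

        def sor_solve(R, p, omega=1.2, sweeps=50):
            w = np.zeros_like(p)
            for _ in range(sweeps):
                for i in range(len(p)):               # one Gauss-Seidel-style sweep
                    sigma = R[i] @ w - R[i, i] * w[i]
                    w[i] = (1 - omega) * w[i] + omega * (p[i] - sigma) / R[i, i]
            return w

        w = sor_solve(R, p)
        print("residual mean-square error:", round(float(np.mean((X @ w - d) ** 2)), 4))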

  2. A regressive storm model for extreme space weather

    NASA Astrophysics Data System (ADS)

    Terkildsen, Michael; Steward, Graham; Neudegg, Dave; Marshall, Richard

    2012-07-01

    Extreme space weather events, while rare, pose significant risk to society in the form of impacts on critical infrastructure such as power grids, and the disruption of high end technological systems such as satellites and precision navigation and timing systems. There has been an increased focus on modelling the effects of extreme space weather, as well as improving the ability of space weather forecast centres to identify, with sufficient lead time, solar activity with the potential to produce extreme events. This paper describes the development of a data-based model for predicting the occurrence of extreme space weather events from solar observation. The motivation for this work was to develop a tool to assist space weather forecasters in early identification of solar activity conditions with the potential to produce extreme space weather, and with sufficient lead time to notify relevant customer groups. Data-based modelling techniques were used to construct the model, and an extensive archive of solar observation data was used to train, optimise and test the model. The optimisation of the base model aimed to eliminate false negatives (missed events) at the expense of a tolerable increase in false positives, under the assumption of an iterative improvement in forecast accuracy during progression of the solar disturbance, as subsequent data becomes available.
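
    The 'no missed events' tuning described above can be illustrated in miniature: fit a classifier to labelled features and lower its decision threshold until every event in the archive is caught, accepting the resulting rise in false positives. The data below are synthetic placeholders, not solar observations.

        # Threshold tuned so that no labelled event is missed, at the cost of extra
        # false alarms; features and labels are synthetic placeholders.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=2000, weights=[0.97], flip_y=0.02, random_state=1)
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        scores = clf.predict_proba(X)[:, 1]

        threshold = scores[y == 1].min()              # lowest score of any true event
        pred = scores >= threshold
        false_neg = int(np.sum((y == 1) & ~pred))
        false_pos = int(np.sum((y == 0) & pred))
        print(f"threshold={threshold:.3f}  false negatives={false_neg}  false positives={false_pos}")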

  3. 49 CFR 1242.17 - Signals and interlockers (accounts XX-17-19 and XX-18-19).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Signals and interlockers (accounts XX-17-19 and XX... RAILROADS 1 Operating Expenses-Way and Structures § 1242.17 Signals and interlockers (accounts XX-17-19 and XX-18-19). Separate common expenses on the basis of the total train-hours in running service, and/or...

  4. States Eyeing Expense of Hand-Scored Tests in Light of NCLB Rules

    ERIC Educational Resources Information Center

    Archer, Jeff

    2005-01-01

    When students put down their pencils at the end of Connecticut's testing each year, another intensive process begins. Hundreds of trained evaluators work day and night for about a month to score the written responses. Although expensive, the use of open-ended questions drives the kind of instruction that state leaders say they want in their…

  5. A qualitative analysis of bus simulator training on transit incidents : a case study in Florida. [Summary].

    DOT National Transportation Integrated Search

    2013-01-01

    The simulator was once a very expensive, large-scale mechanical device for training military pilots or astronauts. Modern computers, linking sophisticated software and large-screen displays, have yielded simulators for the desktop or configured as sm...

  6. 48 CFR 831.7001-1 - Tuition.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... other on-the-job training. VA may elect to pay charges or expenses that fall into either of the following categories: (1) Charges customarily made by a nonprofit workshop or similar establishment for providing work adjustment training to similarly circumstanced nonveterans even if the trainee receives an...

  7. 48 CFR 831.7001-1 - Tuition.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... other on-the-job training. VA may elect to pay charges or expenses that fall into either of the following categories: (1) Charges customarily made by a nonprofit workshop or similar establishment for providing work adjustment training to similarly circumstanced nonveterans even if the trainee receives an...

  8. 48 CFR 831.7001-1 - Tuition.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... other on-the-job training. VA may elect to pay charges or expenses that fall into either of the following categories: (1) Charges customarily made by a nonprofit workshop or similar establishment for providing work adjustment training to similarly circumstanced nonveterans even if the trainee receives an...

  9. 48 CFR 831.7001-1 - Tuition.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... other on-the-job training. VA may elect to pay charges or expenses that fall into either of the following categories: (1) Charges customarily made by a nonprofit workshop or similar establishment for providing work adjustment training to similarly circumstanced nonveterans even if the trainee receives an...

  10. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is designed using neural network technology. The artificial intelligence approach of the neural network does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner sphere system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using fluence-to-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in the neural network approach it is possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer package called Neutron Spectrometry and Dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
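
    The flavour of iterative unfolding referred to here can be sketched with a generic multiplicative (MLEM-style) update, which is not the SPUNIT algorithm itself: given count rates C = R·phi from a Bonner sphere system, a guess spectrum is refined until the re-folded counts match the measurements. The response matrix and counts below are random placeholders.

        # Generic multiplicative (MLEM-style) unfolding, not SPUNIT itself: refine a
        # guess spectrum until the re-folded counts match the measured count rates.
        import numpy as np

        rng = np.random.default_rng(0)
        n_spheres, n_bins = 7, 60
        R = rng.random((n_spheres, n_bins))           # response matrix (placeholder)
        counts = R @ rng.random(n_bins)               # "measured" count rates

        phi = np.ones(n_bins)                         # initial guess spectrum
        for _ in range(500):
            ratio = counts / (R @ phi)
            phi *= (R.T @ ratio) / R.sum(axis=0)      # multiplicative correction per bin
        print("max relative count-rate error:", float(np.max(np.abs(R @ phi - counts) / counts)))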

  11. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is designed using neural network technology. The artificial intelligence approach of the neural network does not solve mathematical equations. By using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input data only the count rates measured with a Bonner sphere system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between these codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure. In NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using fluence-to-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, in the neural network approach it is possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer package called Neutron Spectrometry and Dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.

  12. 49 CFR 1242.56 - Engine crews and train crews (accounts XX-51-56 and XX-51-57).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Engine crews and train crews (accounts XX-51-56 and XX-51-57). 1242.56 Section 1242.56 Transportation Other Regulations Relating to Transportation... RAILROADS 1 Operating Expenses-Transportation § 1242.56 Engine crews and train crews (accounts XX-51-56 and...

  13. Pro: pediatric anesthesia training in developing countries is best achieved by selective out of country scholarships.

    PubMed

    Gathuya, Zipporah N

    2009-01-01

    Pediatric anesthesia training in developing countries is best achieved by out of country scholarships rather than structured outreach visits by teams of specialists from the developed world. Although this may seem an expensive option with slow return, it is the only sustainable way to train future generations of specialized pediatric anesthetists in developing countries.

  14. [Training in iterative hypothesis testing as part of psychiatric education. A randomized study].

    PubMed

    Lampen-Imkamp, S; Alte, C; Sipos, V; Kordon, A; Hohagen, F; Schweiger, U; Kahl, K G

    2012-01-01

    The improvement of medical education is at the center of efforts to reform the studies of medicine. Furthermore, an excellent teaching program for students is a quality feature of medical universities. Besides teaching of disease-specific contents, the acquisition of interpersonal and decision-making skills is important. However, the cognitive style of senior physicians leading to a diagnosis cannot easily be taught. Therefore, the following study aimed at examining whether specific training in iterative hypothesis testing (IHT) may improve the correctness of the diagnostic process. Seventy-one medical students in their 9th-11th terms were randomized to medical teaching as usual or to IHT training for 4 weeks. The intervention group received specific training according to the method of IHT. All students were examined by a multiple choice (MC) exam and additionally by simulated patients (SP). The SPs were instructed to represent either a patient with depression and comorbid anxiety and substance use disorder (SP1) or to represent a patient with depression, obsessive-compulsive disorder and acute suicidal tendencies (SP2). All students identified the diagnosis of major depression in the SPs, but IHT-trained students recognized more diagnostic criteria. Furthermore, IHT-trained students recognized acute suicide tendencies in SP2 more often and identified more comorbid psychiatric disorders. The results of the MC exam were comparable in both groups. An analysis of the satisfaction with the different training programs revealed that the IHT training received a better appraisal. Our results point to the role of IHT in teaching diagnostic skills. However, the results of the MC exam were not influenced by IHT training. Furthermore, our results show that students are in need of training in practical clinical skills.

  15. F-16 Instructional Sequencing Plan Report.

    DTIC Science & Technology

    1981-03-01

    information). 2. Interference (learning of some tasks interferes with the learning of other tasks when they possess similar but confusing differences...profound effect on the total training expense. This increases the desirability of systematic, precise methods of syllabus generation. Inherent in a given...Least cost: the syllabus must make maximum use of the expensive-to-acquire resource; select sequences which provide a least-total-cost method of

  16. Joint Sparse Recovery With Semisupervised MUSIC

    NASA Astrophysics Data System (ADS)

    Wen, Zaidao; Hou, Biao; Jiao, Licheng

    2017-05-01

    Discrete multiple signal classification (MUSIC), with its low computational cost and mild condition requirement, has become a significant noniterative algorithm for joint sparse recovery (JSR). However, it fails in rank-defective problems caused by coherent or limited numbers of multiple measurement vectors (MMVs). In this letter, we provide a novel perspective to address this problem by interpreting JSR as a binary classification problem with respect to atoms. Meanwhile, MUSIC essentially constructs a supervised classifier based on the labeled MMVs, so its performance depends heavily on the quality and quantity of these training samples. From this viewpoint, we develop a semisupervised MUSIC (SS-MUSIC) in the spirit of machine learning, which holds that the insufficient supervised information in the training samples can be compensated by the unlabeled atoms. Instead of constructing a classifier in a fully supervised manner, we iteratively refine a semisupervised classifier by exploiting the labeled MMVs and some reliable unlabeled atoms simultaneously. In this way, the required conditions and iterations can be greatly relaxed and reduced. Numerical experimental results demonstrate that SS-MUSIC can achieve much better recovery performance than other MUSIC-extended algorithms as well as some typical greedy algorithms for JSR, in terms of iterations and recovery probability.
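
    The core MUSIC scoring step for joint sparse recovery is easy to sketch: estimate the signal subspace from the MMVs and rank dictionary atoms by how well they align with it. The snippet below implements only this plain, fully supervised scoring, not the semisupervised refinement proposed in the letter.

        # Plain MUSIC scoring for joint sparse recovery: estimate the signal subspace
        # from the MMVs and rank atoms by their alignment with it. Dictionary and
        # signals are random placeholders.
        import numpy as np

        rng = np.random.default_rng(0)
        m, n, k, L = 30, 100, 5, 20                   # measurements, atoms, sparsity, MMVs
        A = rng.standard_normal((m, n))
        A /= np.linalg.norm(A, axis=0)                # unit-norm atoms
        support = rng.choice(n, size=k, replace=False)
        Xrow = np.zeros((n, L))
        Xrow[support] = rng.standard_normal((k, L))   # row-sparse coefficients
        Y = A @ Xrow                                  # multiple measurement vectors

        U, s, _ = np.linalg.svd(Y, full_matrices=False)
        signal_subspace = U[:, :k]                    # k dominant left singular vectors
        scores = np.linalg.norm(signal_subspace.T @ A, axis=0)
        estimated = np.sort(np.argsort(scores)[-k:])
        print("true support :", np.sort(support))
        print("MUSIC support:", estimated)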

  17. Patch-based iterative conditional geostatistical simulation using graph cuts

    NASA Astrophysics Data System (ADS)

    Li, Xue; Mariethoz, Gregoire; Lu, DeTang; Linde, Niklas

    2016-08-01

    Training image-based geostatistical methods are increasingly popular in groundwater hydrology even if existing algorithms present limitations that often make real-world applications difficult. These limitations include a computational cost that can be prohibitive for high-resolution 3-D applications, the presence of visual artifacts in the model realizations, and a low variability between model realizations due to the limited pool of patterns available in a finite-size training image. In this paper, we address these issues by proposing an iterative patch-based algorithm which adapts a graph cuts methodology that is widely used in computer graphics. Our adapted graph cuts method optimally cuts patches of pixel values borrowed from the training image and assembles them successively, each time accounting for the information of previously stitched patches. The initial simulation result might display artifacts, which are identified as regions of high cost. These artifacts are reduced by iteratively placing new patches in high-cost regions. In contrast to most patch-based algorithms, the proposed scheme can also efficiently address point conditioning. An advantage of the method is that the cut process results in the creation of new patterns that are not present in the training image, thereby increasing pattern variability. To quantify this effect, a new measure of variability, the merging index, is developed that quantifies the pattern variability in the realizations with respect to the training image. A series of sensitivity analyses demonstrates the stability of the proposed graph cuts approach, which produces satisfying simulations for a wide range of parameter values. Applications to 2-D and 3-D cases are compared to state-of-the-art multiple-point methods. The results show that the proposed approach obtains significant speedups and increases variability between realizations. Connectivity functions applied to the 2-D models and transport simulations in the 3-D models are used to demonstrate that pattern continuity is preserved.
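
    The core graph-cut operation can be sketched compactly: over the overlap between the already-simulated canvas and an incoming training-image patch, a minimum cut decides, pixel by pixel, which source to keep, with the cut cost taken from local mismatch as in graph-cut texture synthesis. The sketch below uses networkx on a small toy overlap; the cost definition and boundary conditions are illustrative assumptions, not the paper's exact formulation (which also iterates over high-cost regions and honours point conditioning).

        # Minimal sketch of a graph-cut seam between an existing canvas and a new
        # training-image patch over their overlap region. Names and the Kwatra-style
        # mismatch cost are illustrative assumptions.
        import numpy as np
        import networkx as nx

        def graph_cut_seam(canvas, patch):
            """canvas, patch: equal-shape 2-D arrays covering the overlap region.
            Returns a boolean mask: True -> keep canvas value, False -> take patch."""
            h, w = canvas.shape
            diff = np.abs(canvas - patch)
            G = nx.DiGraph()
            # Neighbouring pixels are linked with a capacity equal to the cost of
            # cutting the seam between them.
            for i in range(h):
                for j in range(w):
                    for di, dj in ((0, 1), (1, 0)):
                        ni, nj = i + di, j + dj
                        if ni < h and nj < w:
                            c = float(diff[i, j] + diff[ni, nj])
                            G.add_edge((i, j), (ni, nj), capacity=c)
                            G.add_edge((ni, nj), (i, j), capacity=c)
            # Boundary conditions: left column stays canvas, right column is patch.
            for i in range(h):
                G.add_edge('src', (i, 0), capacity=float('inf'))
                G.add_edge((i, w - 1), 'snk', capacity=float('inf'))
            _, (keep_canvas, _) = nx.minimum_cut(G, 'src', 'snk')
            mask = np.zeros((h, w), dtype=bool)
            for node in keep_canvas:
                if node != 'src':
                    mask[node] = True
            return mask

        # Toy overlap in which the two images agree on the left half: the optimal
        # seam then runs through the zero-cost (agreeing) region.
        rng = np.random.default_rng(1)
        canvas = rng.random((8, 8))
        patch = canvas.copy()
        patch[:, 4:] = rng.random((8, 4))
        mask = graph_cut_seam(canvas, patch)
        stitched = np.where(mask, canvas, patch)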

  18. Identifying Critical Manned-Unmanned Teaming Skills for Unmanned Aircraft System Operators

    DTIC Science & Technology

    2012-09-01

    require expensive training device support, could be trained at home station on PC-based media. However, training resources was regarded simply as an... Contact; Perform BDA; Prioritize the engagement of targets; Provide accurate description of the target to support... informal BDA to firing unit. Determine target effects requirements. Determine risk for collateral damage. Determine...

  19. Creating Diverse Ensemble Classifiers to Reduce Supervision

    DTIC Science & Technology

    2005-12-01

    artificial examples. Quite often training with noise improves network generalization (Bishop, 1995; Raviv & Intrator, 1996). Adding noise to training...full training set, as seen by comparing to the total dataset sizes. Hence, improving on the data utilization of DECORATE is a fairly difficult task...prohibitively expensive, except (perhaps) with an incremental learner such as Naive Bayes. Our AFA framework is significantly more efficient because

  20. THERMAL DESIGN OF THE ITER VACUUM VESSEL COOLING SYSTEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbajo, Juan J; Yoder Jr, Graydon L; Kim, Seokho H

    RELAP5-3D models of the ITER Vacuum Vessel (VV) Primary Heat Transfer System (PHTS) have been developed. The design of the cooling system is described in detail, and RELAP5 results are presented. Two parallel pump/heat exchanger trains comprise the design: one train is for full-power operation and the other is for emergency operation or operation at decay heat levels. All the components are located inside the Tokamak building (a significant change from the original configurations). The results presented include operation at full power, decay heat operation, and baking operation. The RELAP5-3D results confirm that the design can operate satisfactorily during both normal pulsed power operation and decay heat operation. All the temperatures in the coolant and in the different system components are maintained within acceptable operating limits.

  1. Beryllium fabrication/cost assessment for ITER (International Thermonuclear Experimental Reactor)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeston, J.M.; Longhurst, G.R.; Parsonage, T.

    1990-06-01

    A fabrication and cost estimate of three possible beryllium shapes for the International Thermonuclear Experimental Reactor (ITER) blanket is presented. The fabrication method by hot pressing (HP), cold isostatic pressing plus sintering (CIP+S), cold isostatic pressing plus sintering plus hot isostatic pressing (CIP+S+HIP), and sphere production by atomization or rotary electrode will be discussed. Conventional hot pressing blocks of beryllium with subsequent machining to finished shapes can be more expensive than production of a net shape by cold isostatic pressing and sintering. The three beryllium shapes to be considered here and proposed for ITER are: (1) cubic blocks (3 to 17 cm on an edge), (2) tubular cylinders (33 to 50 mm i.d. by 62 mm o.d. by 8 m long), and (3) spheres (1--5 mm dia.). A rough cost estimate of the basic shape is presented which would need to be refined if the surface finish and tolerances required are better than the sintering process produces. The final cost of the beryllium in the blanket will depend largely on the machining and recycling of beryllium required to produce the finished product. The powder preparation will be discussed before shape fabrication. 10 refs., 6 figs.

  2. Turbulence Enhancement by Fractal Square Grids: Effects of the Number of Fractal Scales

    NASA Astrophysics Data System (ADS)

    Omilion, Alexis; Ibrahim, Mounir; Zhang, Wei

    2017-11-01

    Fractal square grids offer a unique solution for passive flow control as they can produce wakes with a distinct turbulence intensity peak and a prolonged turbulence decay region at the expense of only minimal pressure drop. While previous studies have solidified this characteristic of fractal square grids, how the number of scales (or fractal iterations N) affects turbulence production and decay of the induced wake is still not well understood. The focus of this research is to determine the relationship between the fractal iteration N and the turbulence produced in the wake flow using well-controlled water-tunnel experiments. Particle Image Velocimetry (PIV) is used to measure the instantaneous velocity fields downstream of four different fractal grids with increasing number of scales (N = 1, 2, 3, and 4) and a conventional single-scale grid. By comparing the turbulent scales and statistics of the wake, we are able to determine how each iteration affects the peak turbulence intensity and the production/decay of turbulence from the grid. In light of the ability of these fractal grids to increase turbulence intensity with low pressure drop, this work can potentially benefit a wide variety of applications where energy efficient mixing or convective heat transfer is a key process.
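
    For readers who want to reproduce the basic wake statistics, the sketch below shows how local turbulence intensity and its streamwise decay profile can be extracted from a stack of PIV velocity snapshots. The array layout and the synthetic data are assumptions for illustration; real PIV fields would replace them.

        # Minimal sketch: turbulence intensity and its streamwise decay from PIV
        # snapshots (axes: snapshot, y, x). The synthetic data are placeholders.
        import numpy as np

        def turbulence_intensity(u_snapshots):
            """u_snapshots: (n_snapshots, ny, nx) streamwise velocity fields.
            Returns the local turbulence intensity u_rms / |U_mean|."""
            u_mean = u_snapshots.mean(axis=0)          # time-averaged field
            u_rms = u_snapshots.std(axis=0)            # r.m.s. of the fluctuations
            return u_rms / np.abs(u_mean)

        rng = np.random.default_rng(0)
        u = 1.0 + 0.1 * rng.standard_normal((500, 64, 128))   # stand-in snapshots
        ti = turbulence_intensity(u)
        decay_profile = ti.mean(axis=0)                # cross-stream average vs. x
        peak_location = int(np.argmax(decay_profile))  # turbulence-intensity peak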

  3. 45 CFR 2510.20 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... lands means any real property owned by an Indian tribe, any real property held in trust by the United States for an Indian or Indian tribe, and any real property held by an Indian or Indian tribe that is... expenses for training and travel. (2) Costs (including salary, benefits, training, travel) attributable to...

  4. Australian Vocational Education and Training Statistics, 2001: Financial Data.

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research, Leabrook (Australia).

    In presenting highlights of vocational education and training (VET) finances for 2001, this publication provides insight into how publicly funded VET in Australia is financed and where the money is spent. Information includes primary summaries focusing on revenues and expenses (to show financial performance); assets and liabilities (to show…

  5. 5 CFR 410.405 - Protection of Government interest.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Protection of Government interest. 410... TRAINING Paying for Training Expenses § 410.405 Protection of Government interest. The head of an agency shall establish such procedures as he or she considers necessary to protect the Government's interest...

  6. 20 CFR 632.255 - Program planning.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... that may be characterized as planning and design but not program operation. (c) Expenses incurred in... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Program planning. 632.255 Section 632.255... EMPLOYMENT AND TRAINING PROGRAMS Summer Youth Employment and Training Programs § 632.255 Program planning. (a...

  7. Training Research: Practical Recommendations for Maximum Impact

    PubMed Central

    Beidas, Rinad S.; Koerner, Kelly; Weingardt, Kenneth R.; Kendall, Philip C.

    2011-01-01

    This review offers practical recommendations regarding research on training in evidence-based practices for mental health and substance abuse treatment. When designing training research, we recommend: (a) aligning with the larger dissemination and implementation literature to consider contextual variables and clearly defining terminology, (b) critically examining the implicit assumptions underlying the stage model of psychotherapy development, (c) incorporating research methods from other disciplines that embrace the principles of formative evaluation and iterative review, and (d) thinking about how technology can be used to take training to scale throughout all stages of a training research project. An example demonstrates the implementation of these recommendations. PMID:21380792

  8. Remote experimental site concept development

    NASA Astrophysics Data System (ADS)

    Casper, Thomas A.; Meyer, William; Butner, David

    1995-01-01

    Scientific research is now often conducted on large and expensive experiments that utilize collaborative efforts on a national or international scale to explore physics and engineering issues. This is particularly true for the current US magnetic fusion energy program where collaboration on existing facilities has increased in importance and will form the basis for future efforts. As fusion energy research approaches reactor conditions, the trend is towards fewer large and expensive experimental facilities, leaving many major institutions without local experiments. Since the expertise of various groups is a valuable resource, it is important to integrate these teams into an overall scientific program. To sustain continued involvement in experiments, scientists are now often required to travel frequently, or to move their families, to the new large facilities. This problem is common to many other different fields of scientific research. The next-generation tokamaks, such as the Tokamak Physics Experiment (TPX) or the International Thermonuclear Experimental Reactor (ITER), will operate in steady-state or long pulse mode and produce fluxes of fusion reaction products sufficient to activate the surrounding structures. As a direct consequence, remote operation requiring robotics and video monitoring will become necessary, with only brief and limited access to the vessel area allowed. Even the on-site control room, data acquisition facilities, and work areas will be remotely located from the experiment, isolated by large biological barriers, and connected with fiber-optics. Current planning for the ITER experiment includes a network of control room facilities to be located in the countries of the four major international partners; USA, Russian Federation, Japan, and the European Community.

  9. Retrieval of cloud properties from POLDER-3 data using the neural network approach

    NASA Astrophysics Data System (ADS)

    Di Noia, A.; Hasekamp, O. P.

    2017-12-01

    Satellite multi-angle spectropolarimetry is a useful technique for observing the microphysical properties of clouds and aerosols. Most of the algorithms for the retrieval of cloud and aerosol properties from satellite measurements require multiple calls to radiative transfer models, which makes the retrieval computationally expensive. A traditional alternative to these schemes is represented by lookup tables (LUTs), where the retrieval is performed by choosing, within a predefined database of combinations of cloud or aerosol properties, the combination that best fits the measurements. LUT retrievals are quicker than full-physics, iterative retrievals, but their accuracy is limited by the number of entries stored in the LUT. Another retrieval method capable of producing very quick retrievals without a big sacrifice in accuracy is the neural network method. Neural network methods are routinely applied to several types of satellite measurements, but their application to multi-angle spectropolarimetric data is still in its early stage, because of the difficulty of accounting for the angular variability of the measurements in the training process. We have recently developed a neural network scheme for the retrieval of cloud properties from POLDER-3 data. The neural network retrieval is trained using synthetic measurements performed for realistic combinations of cloud properties and measurement angles, and is able to process an entire orbit in about 20 seconds. Comparisons of the retrieved cloud properties with Moderate Resolution Imaging Spectroradiometer (MODIS) gridded products during one year show encouraging retrieval performance for cloud optical thickness and effective radius. A discussion of the setup of the neural network and of the validation results will be the main topic of our presentation.
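
    The general shape of such a scheme (train a network on synthetic measurement/property pairs generated by a forward model, then invert new measurements with a single cheap forward pass) can be sketched as follows. The toy forward model, property ranges, and network size are placeholders, not the POLDER-3 radiative-transfer setup or the authors' network.

        # Minimal sketch of retrieval by regression: train on synthetic
        # (measurement, cloud-property) pairs from a toy forward model, then
        # apply the network to an unseen scene. All names are placeholders.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def toy_forward_model(props):
            """Map (optical thickness, effective radius) to a 10-element signal."""
            tau, reff = props[:, :1], props[:, 1:]
            angles = np.linspace(0.2, 1.0, 10)         # stand-in viewing angles
            return np.log(tau) * np.sin(3 * angles) + 0.1 * reff * np.cos(3 * angles)

        rng = np.random.default_rng(0)
        props = np.column_stack([rng.uniform(1, 50, 20000),    # optical thickness
                                 rng.uniform(5, 25, 20000)])   # effective radius (um)
        meas = toy_forward_model(props) + 0.01 * rng.standard_normal((20000, 10))

        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
        net.fit(meas, props)

        # "Retrieval" of an unseen scene is a single cheap forward pass.
        new_scene = toy_forward_model(np.array([[20.0, 12.0]]))
        print(net.predict(new_scene))                  # roughly recovers [20, 12]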

  10. Osteoarthritis classification using self organizing map based on gabor kernel and contrast-limited adaptive histogram equalization.

    PubMed

    Anifah, Lilik; Purnama, I Ketut Eddy; Hariadi, Mochamad; Purnomo, Mauridhi Hery

    2013-01-01

    Localization is the first step in osteoarthritis (OA) classification. Manual classification, however, is time-consuming, tedious, and expensive. The proposed system is designed as a decision support system for medical doctors to classify the severity of knee OA. A method is proposed here to localize the joint space area for OA and then classify it into KL-Grade 0, KL-Grade 1, KL-Grade 2, KL-Grade 3 or KL-Grade 4 in four steps: preprocessing, segmentation, feature extraction, and classification. In this proposed system, right and left knee detection was performed by employing Contrast-Limited Adaptive Histogram Equalization (CLAHE) and template matching. The Gabor kernel, row sum graph and moment methods were used to localize the joint space area of the knee. CLAHE is used in the preprocessing step, i.e., to normalize the varied intensities. The segmentation process was conducted using the Gabor kernel, template matching, row sum graph and gray level center of mass method. Here GLCM features (contrast, correlation, energy, and homogeneity) were employed as training data. Overall, 50 data were evaluated for training and 258 data for testing. Experimental results showed the best performance using the Gabor kernel with parameters α=8, θ=0, Ψ=[0 π/2], γ=0.8, N=4 and with the number of iterations being 5000, momentum value 0.5 and α0=0.6 for the classification process. The run gave classification accuracy rates of 93.8% for KL-Grade 0, 70% for KL-Grade 1, 4% for KL-Grade 2, 10% for KL-Grade 3 and 88.9% for KL-Grade 4.
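
    The two image-processing steps named in the abstract, CLAHE normalization followed by Gabor filtering, can be sketched with OpenCV as below. The mapping of the reported parameters (α=8, θ=0, Ψ=[0 π/2], γ=0.8) onto OpenCV's Gabor arguments and the synthetic stand-in radiograph are assumptions for illustration only.

        # Minimal sketch of the preprocessing described in the abstract: CLAHE to
        # normalize intensities, then Gabor responses at two phase offsets.
        import cv2
        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # stand-in radiograph

        # Contrast-Limited Adaptive Histogram Equalization (preprocessing step).
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        normalized = clahe.apply(img)

        # Gabor responses for psi = 0 and pi/2, theta = 0, gamma = 0.8, loosely
        # following the parameter set reported in the abstract.
        responses = []
        for psi in (0.0, np.pi / 2):
            # arguments: ksize, sigma, theta, lambd, gamma, psi
            kernel = cv2.getGaborKernel((31, 31), 8.0, 0.0, 10.0, 0.8, psi)
            responses.append(cv2.filter2D(normalized, cv2.CV_32F, kernel))

        # GLCM-style features (contrast, correlation, energy, homogeneity) would
        # then be computed on the localized joint space area and fed to the SOM.
        feature_stack = np.stack(responses, axis=-1)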

  11. Osteoarthritis Classification Using Self Organizing Map Based on Gabor Kernel and Contrast-Limited Adaptive Histogram Equalization

    PubMed Central

    Anifah, Lilik; Purnama, I Ketut Eddy; Hariadi, Mochamad; Purnomo, Mauridhi Hery

    2013-01-01

    Localization is the first step in osteoarthritis (OA) classification. Manual classification, however, is time-consuming, tedious, and expensive. The proposed system is designed as a decision support system for medical doctors to classify the severity of knee OA. A method is proposed here to localize the joint space area for OA and then classify it into KL-Grade 0, KL-Grade 1, KL-Grade 2, KL-Grade 3 or KL-Grade 4 in four steps: preprocessing, segmentation, feature extraction, and classification. In this proposed system, right and left knee detection was performed by employing Contrast-Limited Adaptive Histogram Equalization (CLAHE) and template matching. The Gabor kernel, row sum graph and moment methods were used to localize the joint space area of the knee. CLAHE is used in the preprocessing step, i.e., to normalize the varied intensities. The segmentation process was conducted using the Gabor kernel, template matching, row sum graph and gray level center of mass method. Here GLCM features (contrast, correlation, energy, and homogeneity) were employed as training data. Overall, 50 data were evaluated for training and 258 data for testing. Experimental results showed the best performance using the Gabor kernel with parameters α=8, θ=0, Ψ=[0 π/2], γ=0.8, N=4 and with the number of iterations being 5000, momentum value 0.5 and α0=0.6 for the classification process. The run gave classification accuracy rates of 93.8% for KL-Grade 0, 70% for KL-Grade 1, 4% for KL-Grade 2, 10% for KL-Grade 3 and 88.9% for KL-Grade 4. PMID:23525188

  12. Phase I Forest Area Estimation Using Landsat TM and Iterative Guided Spectral Class Rejection: Assessment of Possible Training Data Protocols

    Treesearch

    John A. Scrivani; Randolph H. Wynne; Christine E. Blinn; Rebecca F. Musy

    2001-01-01

    Two methods of training data collection for automated image classification were tested in Virginia as part of a larger effort to develop an objective, repeatable, and low-cost method to provide forest area classification from satellite imagery. The derived forest area estimates were compared to estimates derived from a traditional photo-interpreted, double sample. One...

  13. Cost analysis of objective resident cataract surgery assessments.

    PubMed

    Nandigam, Kiran; Soh, Jonathan; Gensheimer, William G; Ghazi, Ahmed; Khalifa, Yousuf M

    2015-05-01

    To compare 8 ophthalmology resident surgical training tools to determine which is most cost effective. University of Rochester Medical Center, Rochester, New York, USA. Retrospective evaluation of technology. A cost-analysis model was created to compile all relevant costs of running each tool in a medium-sized ophthalmology program. Quantitative cost estimates were obtained based on the cost of the tools, the cost of time spent in evaluations, and supply and maintenance costs. For wet laboratory simulation, Eyesi was the least expensive cataract surgery simulation method; however, it is only capable of evaluating simulated cataract surgery rehearsal and requires supplementation with other evaluative methods for operating room performance and for noncataract wet lab training and evaluation. The most expensive training tool was the Eye Surgical Skills Assessment Test (ESSAT). The 2 most affordable methods for resident evaluation of operating room performance were the Objective Assessment of Skills in Intraocular Surgery (OASIS) and Global Rating Assessment of Skills in Intraocular Surgery (GRASIS). Cost-based analysis of ophthalmology resident surgical training tools is needed so residency programs can implement tools that are valid, reliable, objective, and cost effective. There is no perfect training system at this time. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  14. GWASinlps: Nonlocal prior based iterative SNP selection tool for genome-wide association studies.

    PubMed

    Sanyal, Nilotpal; Lo, Min-Tzu; Kauppi, Karolina; Djurovic, Srdjan; Andreassen, Ole A; Johnson, Valen E; Chen, Chi-Hua

    2018-06-19

    Multiple marker analysis of genome-wide association study (GWAS) data has gained ample attention in recent years. However, because of the ultra high-dimensionality of GWAS data, such analysis is challenging. Frequently used penalized regression methods often lead to a large number of false positives, whereas Bayesian methods are computationally very expensive. Motivated to ameliorate these issues simultaneously, we consider the novel approach of using nonlocal priors in an iterative variable selection framework. We develop a variable selection method, named GWASinlps (iterative nonlocal prior based selection for GWAS), that combines, in an iterative variable selection framework, the computational efficiency of the screen-and-select approach based on some association learning and the parsimonious uncertainty quantification provided by the use of nonlocal priors. The hallmark of our method is the introduction of a 'structured screen-and-select' strategy that considers hierarchical screening, based not only on response-predictor associations but also on response-response associations, and concatenates variable selection within that hierarchy. Extensive simulation studies with SNPs having realistic linkage disequilibrium structures demonstrate the advantages of our computationally efficient method compared to several frequentist and Bayesian variable selection methods, in terms of true positive rate, false discovery rate, mean squared error, and effect size estimation error. Further, we provide an empirical power analysis useful for study design. Finally, a real GWAS data application was considered with human height as the phenotype. An R package for implementing the GWASinlps method is available at https://cran.r-project.org/web/packages/GWASinlps/index.html. Supplementary data are available at Bioinformatics online.

  15. Competencies "plus": the nature of written comments on internal medicine residents' evaluation forms.

    PubMed

    Ginsburg, Shiphra; Gold, Wayne; Cavalcanti, Rodrigo B; Kurabi, Bochra; McDonald-Blumer, Heather

    2011-10-01

    Comments on residents' in-training evaluation reports (ITERs) may be more useful than scores in identifying trainees in difficulty. However, little is known about the nature of comments written by internal medicine faculty on residents' ITERs. Comments on 1,770 ITERs (from 180 residents in postgraduate years 1-3) were analyzed using constructivist grounded theory beginning with an existing framework. Ninety-three percent of ITERs contained comments, which were frequently easy to map onto traditional competencies, such as knowledge base (n = 1,075 comments) to the CanMEDs Medical Expert role. Many comments, however, could be linked to several overlapping competencies. Also common were comments completely unrelated to competencies, for instance, the resident's impact on staff (813), or personality issues (450). Residents' "trajectory" was a major theme (performance in relation to expected norms [494], improvement seen [286], or future predictions [286]). Faculty's assessments of residents are underpinned by factors related and unrelated to traditional competencies. Future evaluations should attempt to capture these holistic, integrated impressions.

  16. Teaching Citizen Science Skills Online: Implications for Invasive Species Training Programs

    ERIC Educational Resources Information Center

    Newman, Greg; Crall, Alycia; Laituri, Melinda; Graham, Jim; Stohlgren, Tom; Moore, John C.; Kodrich, Kris; Holfelder, Kirstin A.

    2010-01-01

    Citizen science programs are emerging as an efficient way to increase data collection and help monitor invasive species. Effective invasive species monitoring requires rigid data quality assurances if expensive control efforts are to be guided by volunteer data. To achieve data quality, effective online training is needed to improve field skills…

  17. When Should You Offer Online Training?

    ERIC Educational Resources Information Center

    Bandy, Jim

    2010-01-01

    Many companies struggle with the question of when to take a manual training process online. For most, the internal conflict comes from a belief that the course development costs will be astronomical, making the process cost-prohibitive. However, the facilities expense and that of hiring qualified professors, combined with the rising trend of busy…

  18. Loans for Learning. Briefing Note

    ERIC Educational Resources Information Center

    Cedefop - European Centre for the Development of Vocational Training, 2011

    2011-01-01

    A good loan scheme must balance costs with coverage. If loans are too expensive then people will not borrow. Governments are not banks, but they provide or support loans for many things, including education and training. Governments too need to get the balance right. Cedefop surveyed 35 education and training loan schemes in Europe, examining…

  19. 38 CFR 21.370 - Intraregional travel at government expense.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for a personal interview prior to induction into training when: (A) The school requires the interview... transportation for the veteran's dependents, or for moving personal effects. (Authority: 38 U.S.C. 111, 3104(a... report to a prospective employer-trainer for an interview prior to induction into training, when there is...

  20. Adaptive artificial neural network for autonomous robot control

    NASA Technical Reports Server (NTRS)

    Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.

    1992-01-01

    The topics are presented in viewgraph form and include: neural network controller for robot arm positioning with visual feedback; initial training of the arm; automatic recovery from cumulative fault scenarios; and error reduction by iterative fine movements.

  1. Field tests of a participatory ergonomics toolkit for Total Worker Health

    PubMed Central

    Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert

    2018-01-01

    Growing interest in Total Worker Health® (TWH) programs to advance worker safety, health and well-being motivated development of a toolkit to guide their implementation. Iterative design of a program toolkit occurred in which participatory ergonomics (PE) served as the primary basis to plan integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and team-work skills of participants. PMID:28166897

  2. A retrospective review of general surgery training outcomes at the University of Toronto

    PubMed Central

    Compeau, Christopher; Tyrwhitt, Jessica; Shargall, Yaron; Rotstein, Lorne

    2009-01-01

    Background Surgical educators have struggled with achieving an optimal balance between the service workload and education of surgical residents. In Ontario, a variety of factors during the past 12 years have had the net impact of reducing the clinical training experience of general surgery residents. We questioned what impact the reductions in trainee workload have had on general surgery graduates at the University of Toronto. Methods We evaluated graduates from the University of Toronto general surgery training program from 1995 to 2006. We compared final-year In-Training Evaluation Reports (ITERs) of trainees during this interval. For purposes of comparison, we subdivided residents into 4 groups according to year of graduation (1995–1997, 1998–2000, 2001–2003 and 2004–2006). We evaluated postgraduate “performance” by categorizing residents into 1 of 4 groups: first, residents who entered directly into general surgery practice after graduation; second, residents who entered into a certification subspecialty program of the Royal College of Physicians and Surgeons of Canada (RCPSC); third, residents who entered into a noncertification program of the RCPSC; and fourth, residents who entered into a variety of nonregulated “clinical fellowships.” Results We assessed and evaluated 118 of 134 surgical trainees (88%) in this study. We included in the study graduates for whom completed ITER records were available and postgraduate training records were known and validated. The mean scores for each of the 5 evaluated residency training parameters included in the ITER (technical skills, professional attitudes, application of knowledge, teaching performance and overall performance) were not statistically different for each of the 4 graduating groups from 1995 to 2006. However, we determined that there were statistically fewer general surgery graduates (p < 0.05) who entered directly into general surgery practice in the 2004–2006 group compared with the 1998–2000 and 2001–2003 groups. The graduates from 2004 to 2006 who did not enter into general surgery practice appeared to choose a clinical fellowship. Conclusion These observations may indicate that recent surgical graduates possess an acceptable skill set but may lack the clinical confidence and experience to enter directly into general surgery practice. Evidence seems to indicate that the clinical fellowship has become an unregulated surrogate extension of the training program whereby surgeons can gain additional clinical experience and surgical expertise. PMID:19865542

  3. Balance maintenance as an acquired motor skill: Delayed gains and robust retention after a single session of training in a virtual environment.

    PubMed

    Elion, Orit; Sela, Itamar; Bahat, Yotam; Siev-Ner, Itzhak; Weiss, Patrice L Tamar; Karni, Avi

    2015-06-03

    Does the learning of a balance and stability skill exhibit time-course phases and transfer limitations characteristic of the acquisition and consolidation of voluntary movement sequences? Here we followed the performance of young adults trained in maintaining balance while standing on a moving platform synchronized with a virtual reality road travel scene. The training protocol included eight 3 min long iterations of the road scene. Center of Pressure (CoP) displacements were analyzed for each task iteration within the training session, as well as during tests at 24h, 4 weeks and 12 weeks post-training to test for consolidation phase ("offline") gains and assess retention. In addition, CoP displacements in reaction to external perturbations were assessed before and after the training session and in the 3 subsequent post-training assessments (stability tests). There were significant reductions in CoP displacements as experience accumulated within session, with performance stabilizing by the end of the session. However, CoP displacements were further reduced at 24h post-training (delayed "offline" gains) and these gains were robustly retained. There was no transfer of the practice-related gains to performance in the stability tests. The time-course of learning the balance maintenance task, as well as the limitation on generalizing the gains to untrained conditions, are in line with the results of studies of manual movement skill learning. The current results support the conjecture that a similar repertoire of basic neuronal mechanisms of plasticity may underlie skill (procedural, "how to" knowledge) acquisition and skill memory consolidation in voluntary and balance maintenance tasks. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Improving the Emergency Manager’s Hurricane Evacuation Decision Making Through Serious Gaming

    DTIC Science & Technology

    2016-06-17

    Hayley J. Davison Reynolds, Maxwell H. Perlman, Darren P. Wilson (MIT Lincoln Laboratory; DHS Science and Technology Directorate) ...transfer it to an actual evacuation event. Through this work, a web-based 'serious gaming' approach was used to develop hurricane evacuation decision...training for the emergency manager. This paper describes the iterative design approach to developing a training game and collecting initial feedback

  5. An Annotated Bibliography of the Manned Systems Measurement Literature

    DTIC Science & Technology

    1985-02-01

    designs that are considered applicable to assessment of training effectiveness include the classic Solomon four-group design; iterative adaptation to...element (analogue computer) were used for this study. Operators were taken from 3 groups: (1) persons with both licensed flying and driving...conclusions are that the classic four-group design is impractical for most training evaluation; that "adaptive research for big effects" is apt to be

  6. Efficient Kriging via Fast Matrix-Vector Products

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.

    2008-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used methods based on the fast multipole method and nearest neighbor searching techniques for implementations of the fast matrix-vector products.
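
    The structure of such a solver is easy to sketch: the kriging weights are obtained from a Krylov iteration whose only access to the covariance matrix is a matrix-vector product, which is precisely the operation that fast multipole or nearest-neighbour schemes accelerate. The sketch below uses simple (zero-mean) kriging, an exponential covariance, and a dense product as a stand-in for the fast one; these choices are illustrative assumptions, not the authors' implementation.

        # Minimal sketch of kriging driven by an iterative solver: each CG step
        # needs only one covariance matrix-vector product (the dense product here
        # is where a fast scheme would plug in).
        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        rng = np.random.default_rng(0)
        pts = rng.random((500, 2))                         # scattered sample locations
        vals = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1])

        def cov(a, b, length=0.2):
            d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
            return np.exp(-d / length)                     # exponential covariance

        K = cov(pts, pts) + 1e-4 * np.eye(len(pts))        # small nugget for stability
        Kop = LinearOperator(K.shape, matvec=lambda v: K @ v)

        weights, info = cg(Kop, vals)                      # info == 0 means converged
        query = rng.random((10, 2))
        pred = cov(query, pts) @ weights                   # kriging prediction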

  7. Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed

    NASA Astrophysics Data System (ADS)

    Arif, N.; Danoedoro, P.; Hartono

    2017-12-01

    Erosion modeling is an important measuring tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents the actual reality. Erosion models are complex because of uncertain data with different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the value of each network input parameter, i.e., hidden layers, learning rate, momentum, and RMS. This study tested the capability of an artificial neural network application in the prediction of erosion risk with several input parameters through multiple simulations to get good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to other parameters. A small number of iterations can produce good accuracy if the combination of other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, namely the input dataset (erosion factors) or the data dimensions; rather it was determined by changes in network parameters.
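
    The single-hidden-layer network and training parameters reported above (one hidden layer, learning rate 0.01, momentum 0.5, about 15000 iterations, an RMS-style threshold of 0.0001) map approximately onto a standard multilayer perceptron, as sketched below with scikit-learn. The synthetic erosion-factor data and the exact argument mapping are assumptions for illustration.

        # Minimal sketch of an ANN classifier with the hyperparameters quoted in
        # the abstract, mapped approximately onto scikit-learn arguments.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.random((2000, 6))                          # e.g. slope, rainfall, cover, ...
        y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)    # toy erosion-risk classes

        net = MLPClassifier(hidden_layer_sizes=(10,),      # one hidden layer
                            solver="sgd",
                            learning_rate_init=0.01,       # LR 0.01
                            momentum=0.5,                  # M 0.5
                            max_iter=15000,                # number of iterations
                            tol=1e-4,                      # plays the role of the RMS cutoff
                            random_state=0)
        net.fit(X, y)
        print(f"training accuracy: {net.score(X, y):.3f}")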

  8. ISS Double-Gimbaled CMG Subsystem Simulation Using the Agile Development Method

    NASA Technical Reports Server (NTRS)

    Inampudi, Ravi

    2016-01-01

    This paper presents an evolutionary approach in simulating a cluster of 4 Control Moment Gyros (CMG) on the International Space Station (ISS) using a common sense approach (the agile development method) for concurrent mathematical modeling and simulation of the CMG subsystem. This simulation is part of Training systems for the 21st Century simulator which will provide training for crew members, instructors, and flight controllers. The basic idea of how the CMGs on the space station are used for its non-propulsive attitude control is briefly explained to set up the context for simulating a CMG subsystem. Next different reference frames and the detailed equations of motion (EOM) for multiple double-gimbal variable-speed control moment gyroscopes (DGVs) are presented. Fixing some of the terms in the EOM becomes the special case EOM for ISS's double-gimbaled fixed speed CMGs. CMG simulation development using the agile development method is presented in which customer's requirements and solutions evolve through iterative analysis, design, coding, unit testing and acceptance testing. At the end of the iteration a set of features implemented in that iteration are demonstrated to the flight controllers thus creating a short feedback loop and helping in creating adaptive development cycles. The unified modeling language (UML) tool is used in illustrating the user stories, class designs and sequence diagrams. This incremental development approach of mathematical modeling and simulating the CMG subsystem involved the development team and the customer early on, thus improving the quality of the working CMG system in each iteration and helping the team to accurately predict the cost, schedule and delivery of the software.

  9. Fostering Earth Observation Regional Networks - Integrative and iterative approaches to capacity building

    NASA Astrophysics Data System (ADS)

    Habtezion, S.

    2015-12-01

    Senay Habtezion (shabtezion@start.org) / Hassan Virji (hvirji@start.org), Global Change SySTem for Analysis, Training and Research (START) (www.start.org). As part of the Global Observation of Forest and Land Cover Dynamics (GOFC-GOLD) project partnership effort to promote the use of earth observations in advancing scientific knowledge, START works to bridge capacity needs related to earth observations (EOs) and their applications in the developing world. GOFC-GOLD regional networks, fostered through the support of regional and thematic workshops, have been successful in (1) enabling scientists from developing countries and from the US to collaborate on key GOFC-GOLD and Land Cover and Land Use Change (LCLUC) issues, including NASA Global Data Set validation, and (2) training young developing-country scientists in key skills in EOs data management and analysis. Members of the regional networks are also engaged and re-engaged in other EOs programs (e.g. the visiting scientists program; data initiative fellowship programs at the USGS EROS Center and Boston University), which has helped strengthen these networks. The presentation draws from these experiences in advocating for integrative and iterative approaches to capacity building through the lens of the GOFC-GOLD partnership effort. Specifically, this presentation describes the role of the GOFC-GOLD partnership in nurturing organic networks of scientists and EOs practitioners in Asia, Africa, Eastern Europe and Latin America.

  10. 49 CFR 1242.58 - Operating signals and interlockers, operating drawbridges, highway crossing protection (accounts...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... drawbridges, highway crossing protection (accounts XX-51-59, XX-51-60 and XX-51-61). 1242.58 Section 1242.58... Operating signals and interlockers, operating drawbridges, highway crossing protection (accounts XX-51-59, XX-51-60 and XX-51-61). Separate common expenses on the basis of total train hours (including train...

  11. What if Best Practice Is Too Expensive? Feedback on Oral Presentations and Efficient Use of Resources

    ERIC Educational Resources Information Center

    Leger, Lawrence A.; Glass, Karligash; Katsiampa, Paraskevi; Liu, Shibo; Sirichand, Kavita

    2017-01-01

    We evaluate feedback methods for oral presentations used in training non-quantitative research skills (literature review and various associated tasks). Training is provided through a credit-bearing module taught to MSc students of banking, economics and finance in the UK. Monitoring oral presentations and providing "best practice"…

  12. Application of E-Learning to Pilot Training at TransAsia Airways in Taiwan

    ERIC Educational Resources Information Center

    Chuang, Chi-Kuo; Chang, Maiga; Wang, Chin-Yeh; Chung, Wen-Cheng; Chen, Gwo-Dong

    2008-01-01

    TransAsia Airways is one of the four domestic airlines in Taiwan. Taiwan has 13 domestic airports, with the longest distance between two airports being about 400 kilometers. The domestic airline market is highly competitive. TransAsia decided to apply e-learning within its organization to reduce training expenses and improve service quality. This…

  13. Low-cost phantom for stereotactic breast biopsy training.

    PubMed

    Larrison, Matthew; DiBona, Alex; Hogg, David E

    2006-10-01

    This article reports on the construction of a low-cost phantom to be used for training technologists, residents, and radiologists to perform stereotactic breast biopsy. The model is adaptable to a variety of biopsy devices and realistically simulates the aspects of stereotactic breast biopsy. We believe our model provides an excellent alternative to more expensive commercial products.

  14. Small-Scale Smart Grid Construction and Analysis

    NASA Astrophysics Data System (ADS)

    Surface, Nicholas James

    The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. The objectives and most useful concepts have been investigated extensively in economic, environmental and engineering research by applying statistical knowledge and established theories to develop simulations without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. Results of construction show that data acquisition was three times more expensive than the grid itself, mainly because 70% of data acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified sine wave power, significant enough to recommend pure sine wave investment in future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing average US household peak daily load. However, this exposes disproportionalities in the SSSG compared with previous SG investigations, and recommended changes for future iterations are established to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for SSSG incorporation. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations and pumped hydroelectric storage can also be researched on future iterations of the SSSG.

  15. A Multi-Fidelity Surrogate Model for the Equation of State for Mixtures of Real Gases

    NASA Astrophysics Data System (ADS)

    Ouellet, Frederick; Park, Chanyoung; Koneru, Rahul; Balachandar, S.; Rollin, Bertrand

    2017-11-01

    The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the products of detonated explosives must be treated as real gases while the ideal gas equation of state is used for the ambient air. As the products expand outward, they mix with the air and create a region where both state equations must be satisfied. One of the most accurate, yet expensive, methods to handle this problem is an algorithm that iterates between both state equations until both pressure and thermal equilibrium are achieved inside of each computational cell. This work creates a multi-fidelity surrogate model to replace this process. This is achieved by using a Kriging model to produce a curve fit which interpolates selected data from the iterative algorithm. The surrogate is optimized for computing speed and model accuracy by varying the number of sampling points chosen to construct the model. The performance of the surrogate with respect to the iterative method is tested in simulations using a finite volume code. The model's computational speed and accuracy are analyzed to show the benefits of this novel approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
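
    The surrogate idea itself is compact: sample the expensive iterative equilibrium routine offline, fit a Kriging (Gaussian-process) model to the samples, and query the surrogate inside the flow solver. The sketch below does this for a toy two-equation-of-state pressure equilibrium found by bisection; the toy EOS, constants, and sampling plan are illustrative assumptions, not the explosive-products model or the paper's multi-fidelity construction.

        # Minimal sketch: replace an "expensive" iterative equilibrium solve with
        # a Kriging curve fit trained on sampled solutions.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def equilibrium_pressure(mass_frac, temperature):
            """Bisection on total volume: stand-in for the costly per-cell iteration."""
            R, b, m_total, v_total = 287.0, 1e-3, 1.0, 0.8
            m_prod = mass_frac * m_total
            m_air = m_total - m_prod
            def volume_misfit(p):
                v_air = m_air * R * temperature / p                 # ideal gas
                v_prod = m_prod * R * temperature / p + m_prod * b  # covolume gas
                return v_air + v_prod - v_total
            lo, hi = 1e3, 1e8
            for _ in range(100):                                    # the "expensive" loop
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if volume_misfit(mid) > 0 else (lo, mid)
            return 0.5 * (lo + hi)

        rng = np.random.default_rng(0)
        X = np.column_stack([rng.uniform(0, 1, 200), rng.uniform(300, 3000, 200)])
        y = np.array([equilibrium_pressure(f, T) for f, T in X])
        surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=[0.2, 500.0]),
                                             normalize_y=True).fit(X, y)

        # Inside the flow solver the curve fit replaces the per-cell iteration.
        print(surrogate.predict([[0.3, 1200.0]]), equilibrium_pressure(0.3, 1200.0))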

  16. Analysis of a Multi-Fidelity Surrogate for Handling Real Gas Equations of State

    NASA Astrophysics Data System (ADS)

    Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S.

    2017-06-01

    The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the detonation products of the explosive must be treated as real gas while the ideal gas equation of state is used for the surrounding air. As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the most accurate, yet computationally expensive, methods to handle this problem is an algorithm that iterates between both equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work aims to use a multi-fidelity surrogate model to replace this process. A Kriging model is used to produce a curve fit which interpolates selected data from the iterative algorithm using Bayesian statistics. We study the model performance with respect to the iterative method in simulations using a finite volume code. The model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel approach. Also, optimizing the combination of model accuracy and computational speed through the choice of sampling points is explained. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program as a Cooperative Agreement under the Predictive Science Academic Alliance Program under Contract No. DE-NA0002378.

  17. Surface heat loads on the ITER divertor vertical targets

    NASA Astrophysics Data System (ADS)

    Gunn, J. P.; Carpentier-Chouchana, S.; Escourbiac, F.; Hirai, T.; Panayotis, S.; Pitts, R. A.; Corre, Y.; Dejarnac, R.; Firdaouss, M.; Kočan, M.; Komm, M.; Kukushkin, A.; Languille, P.; Missirlian, M.; Zhao, W.; Zhong, G.

    2017-04-01

    The heating of tungsten monoblocks at the ITER divertor vertical targets is calculated using the heat flux predicted by three-dimensional ion orbit modelling. The monoblocks are beveled to a depth of 0.5 mm in the toroidal direction to provide magnetic shadowing of the poloidal leading edges within the range of specified assembly tolerances, but this increases the magnetic field incidence angle resulting in a reduction of toroidal wetted fraction and concentration of the local heat flux to the unshadowed surfaces. This shaping solution successfully protects the leading edges from inter-ELM heat loads, but at the expense of (1) temperatures on the main loaded surface that could exceed the tungsten recrystallization temperature in the nominal partially detached regime, and (2) melting and loss of margin against critical heat flux during transient loss of detachment control. During ELMs, the risk of monoblock edge melting is found to be greater than the risk of full surface melting on the plasma-wetted zone. Full surface and edge melting will be triggered by uncontrolled ELMs in the burning plasma phase of ITER operation if current models of the likely ELM ion impact energies at the divertor targets are correct. During uncontrolled ELMs in pre-nuclear deuterium or helium plasmas at half the nominal plasma current and magnetic field, full surface melting should be avoided, but edge melting is predicted.

  18. Quantifying Impacts of Urban Growth Potential on Army Training Capabilities

    DTIC Science & Technology

    2017-09-12

    Building on previous studies of urban growth and population effects on U.S. military installations and...combat team studies. CAA has developed an iterative process that builds on Military Value Analysis (MVA) models that include a set of attributes that...Methods and tools were developed to support a nationwide analysis. This study focused on installations operating training areas that were high

  19. Lack of training threatening drilling talent supply

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Von Flatern, R.

    When oil prices crashed in the mid-1980s, the industry tightened budgets. Among the austerity measures taken to survive the consequences of low product prices was an end to expensive, long-term investment in the training of drilling engineers. In the absence of traditional sources of trained drilling talent, forward-looking contractors are creating their own training programs. The paper describes the activities of some companies who are setting up their own training programs, and an alliance being set up by Chevron and Amoco for training. The paper also discusses training drilling managers, third-party trainers, and the consequences for an industry that does not renew its inventory of people.

  20. Using Functional Electrical Stimulation Mediated by Iterative Learning Control and Robotics to Improve Arm Movement for People With Multiple Sclerosis.

    PubMed

    Sampson, Patrica; Freeman, Chris; Coote, Susan; Demain, Sara; Feys, Peter; Meadmore, Katie; Hughes, Ann-Marie

    2016-02-01

    Few interventions address multiple sclerosis (MS) arm dysfunction but robotics and functional electrical stimulation (FES) appear promising. This paper investigates the feasibility of combining FES with passive robotic support during virtual reality (VR) training tasks to improve upper limb function in people with multiple sclerosis (pwMS). The system assists patients in following a specified trajectory path, employing an advanced model-based paradigm termed iterative learning control (ILC) to adjust the FES to improve accuracy and maximise voluntary effort. Reaching tasks were repeated six times with ILC learning the optimum control action from previous attempts. A convenience sample of five pwMS was recruited from local MS societies, and the intervention comprised 18 one-hour training sessions over 10 weeks. The accuracy of tracking performance without FES and the amount of FES delivered during training were analyzed using regression analysis. Clinical functioning of the arm was documented before and after treatment with standard tests. Statistically significant results following training included: improved accuracy of tracking performance both when assisted and unassisted by FES; reduction in maximum amount of FES needed to assist tracking; and less impairment in the proximal arm that was trained. The system was well tolerated by all participants with no increase in muscle fatigue reported. This study confirms the feasibility of FES combined with passive robot assistance as a potentially effective intervention to improve arm movement and control in pwMS and provides the basis for a follow-up study.
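
    The ILC update at the heart of such training is simple to state: the stimulation profile applied on the next attempt equals the previous profile plus a learning gain times the previous tracking error. The sketch below shows a phase-lead P-type version of this update on a toy first-order plant; the plant, gain, and trajectory are illustrative assumptions, not the model-based ILC design used in the study.

        # Minimal sketch of an iterative learning control (ILC) update across
        # repeated attempts at the same reference trajectory.
        import numpy as np

        def ilc_trial(u, plant, reference, gain=0.5):
            """Run one attempt with input profile u, then update u from the error."""
            y = plant(u)                          # trajectory measured on this attempt
            error = reference - y
            u_next = u.copy()
            u_next[:-1] += gain * error[1:]       # phase-lead update (one-sample plant delay)
            return u_next, error

        def plant(u, a=0.8, b=0.4):
            """Toy first-order plant: a smoothed, attenuated copy of the input."""
            y = np.zeros_like(u)
            for k in range(1, len(u)):
                y[k] = a * y[k - 1] + b * u[k - 1]
            return y

        reference = np.sin(np.linspace(0, np.pi, 100))    # desired reach trajectory
        u = np.zeros_like(reference)                      # start with no stimulation
        for attempt in range(6):                          # six repetitions, as in the protocol
            u, error = ilc_trial(u, plant, reference)
            print(f"attempt {attempt + 1}: RMS error = {np.sqrt(np.mean(error ** 2)):.4f}")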

  1. 42 CFR 68.9 - What loans qualify for repayment?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., TRAINING NATIONAL INSTITUTES OF HEALTH (NIH) LOAN REPAYMENT PROGRAMS (LRPs) § 68.9 What loans qualify for...) Undergraduate, graduate, and health professional school tuition expenses; (b) Other reasonable educational...

  2. 42 CFR 68.9 - What loans qualify for repayment?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., TRAINING NATIONAL INSTITUTES OF HEALTH (NIH) LOAN REPAYMENT PROGRAMS (LRPs) § 68.9 What loans qualify for...) Undergraduate, graduate, and health professional school tuition expenses; (b) Other reasonable educational...

  3. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
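
    The infill loop that sequential sampling methods share can be sketched with the generic single-point expected-improvement criterion: fit a Gaussian-process metamodel, pick the candidate that maximizes expected improvement, evaluate the expensive model there, and refit. This is shown only to illustrate the loop; the paper's multi-point, constrained robust-design criterion is different, and all names below are assumptions.

        # Minimal sketch of objective-oriented sequential sampling with a
        # metamodel and the generic expected-improvement (EI) infill criterion.
        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def expensive_simulation(x):
            return np.sin(3 * x) + 0.5 * x ** 2            # stand-in for the true model

        rng = np.random.default_rng(0)
        X = rng.uniform(-2, 2, 5).reshape(-1, 1)           # small initial design
        y = expensive_simulation(X).ravel()
        candidates = np.linspace(-2, 2, 401).reshape(-1, 1)

        for _ in range(10):                                # sequential infill iterations
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
            gp.fit(X, y)
            mu, sigma = gp.predict(candidates, return_std=True)
            sigma = np.maximum(sigma, 1e-12)
            z = (y.min() - mu) / sigma
            ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
            x_new = candidates[np.argmax(ei)].reshape(1, -1)
            X = np.vstack([X, x_new])
            y = np.append(y, expensive_simulation(x_new).ravel())

        print(f"best sampled value: {y.min():.4f} at x = {X[np.argmin(y)][0]:.3f}")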

  4. P-CSI v1.0, an accelerated barotropic solver for the high-resolution ocean model component in the Community Earth System Model v2.0

    NASA Astrophysics Data System (ADS)

    Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen

    2016-11-01

    In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
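
    The key property of a Chebyshev-type iteration is that, unlike conjugate gradients, its recurrence uses no inner products, so the only global communication per step is the matrix-vector (halo-exchange) product itself; the price is that eigenvalue bounds of the operator must be supplied. A minimal serial sketch of the classical Chebyshev iteration for a symmetric positive-definite system is given below; the small dense test matrix is illustrative only, and the sketch omits the block preconditioner used in P-CSI.

        # Minimal sketch of the classical Chebyshev iteration for an SPD system
        # with spectrum contained in [lmin, lmax]; no inner products are needed.
        import numpy as np

        def chebyshev_solve(A, b, lmin, lmax, iters=200):
            d = 0.5 * (lmax + lmin)                  # centre of the spectrum
            c = 0.5 * (lmax - lmin)                  # half-width of the spectrum
            x = np.zeros_like(b)
            r = b.copy()                             # residual for x = 0
            p = r.copy()
            alpha = 1.0 / d
            for k in range(iters):
                if k > 0:
                    beta = 0.5 * (c * alpha) ** 2 if k == 1 else (0.5 * c * alpha) ** 2
                    alpha = 1.0 / (d - beta / alpha)
                    p = r + beta * p
                x = x + alpha * p
                r = r - alpha * (A @ p)              # the only communication-heavy step
            return x

        rng = np.random.default_rng(0)
        M = rng.standard_normal((100, 100))
        A = M @ M.T + 100 * np.eye(100)              # SPD test matrix
        b = rng.standard_normal(100)
        evals = np.linalg.eigvalsh(A)
        x = chebyshev_solve(A, b, evals[0], evals[-1])
        print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))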

  5. Application of iterative robust model-based optimal experimental design for the calibration of biocatalytic models.

    PubMed

    Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar

    2017-09-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data followed by performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase catalyzed reaction in a more accurate way. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also a computationally more expensive method. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1278-1293, 2017. © 2017 American Institute of Chemical Engineers.
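
    The design step itself, comparing candidate experiments through the Fisher Information Matrix built from local parameter sensitivities, can be sketched briefly. Below, the FIM is approximated by finite-difference sensitivities of a toy Michaelis-Menten-type rate model and candidate substrate designs are ranked by the D-criterion (log-determinant of the FIM); the model, noise level, and candidate sets are illustrative assumptions, not the ω-transaminase model or the robust criterion used in the paper.

        # Minimal sketch of FIM-based (D-optimal) comparison of candidate designs.
        import numpy as np

        def rate_model(params, substrate):
            vmax, km = params
            return vmax * substrate / (km + substrate)     # initial reaction rate

        def sensitivities(params, substrate, h=1e-6):
            """Finite-difference derivatives of the outputs w.r.t. each parameter."""
            base = rate_model(params, substrate)
            cols = []
            for i in range(len(params)):
                p = np.array(params, dtype=float)
                p[i] += h
                cols.append((rate_model(p, substrate) - base) / h)
            return np.column_stack(cols)                   # (n_measurements, n_params)

        def fim(params, substrate, sigma=0.05):
            S = sensitivities(params, substrate)
            return S.T @ S / sigma ** 2                    # FIM for i.i.d. Gaussian noise

        theta = np.array([1.2, 0.8])                       # current parameter estimates
        candidates = [np.linspace(0.1, 0.5, 8),            # candidate substrate designs
                      np.linspace(0.1, 5.0, 8),
                      np.linspace(2.0, 10.0, 8)]
        scores = [np.linalg.slogdet(fim(theta, s))[1] for s in candidates]
        best = int(np.argmax(scores))
        print("D-optimal candidate:", best, "log det FIM:", scores[best])

        # The FIM inverse is the linearized parameter covariance whose ellipsoid
        # approximates the confidence region compared against the likelihood region.
        cov = np.linalg.inv(fim(theta, candidates[best]))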

  6. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, this method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal - in some cases causing spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process, and will not necessarily design an optimal filter for the posed problem. Additionally, the problem goal in MED prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output goal, and should directly solve for the optimal filter in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, and the optimal filter solution can be solved for directly. From experimental data on a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra according to the period between the impulses can be used to detect faults and study the health of rotating machine elements effectively.
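
    A sketch of the direct, non-iterative filter solve the abstract describes is given below: a delayed-signal matrix is built from the vibration record, the target is a periodic impulse train, and the FIR filter that maximizes the correlation of its output with that train is obtained from a single linear solve. The windowing and normalization details of the published MOMEDA may differ; the synthetic gear-like signal is purely illustrative.

```python
import numpy as np

def momeda_like_filter(x, period, filter_len=100):
    """Non-iterative deconvolution filter aimed at a periodic impulse train.

    Builds the delayed-signal matrix X0, defines a target impulse train t with
    the given period, and solves directly for the FIR filter whose output is
    maximally correlated with t (a sketch of the MOMEDA idea; the published
    method's exact windowing and normalization may differ).
    """
    N, L = len(x), filter_len
    # rows are delayed copies of the signal: X0[l, k] = x[k + l]
    X0 = np.array([x[l:N - L + 1 + l] for l in range(L)])
    t = np.zeros(X0.shape[1])
    t[::period] = 1.0                        # target: one impulse per period
    f = np.linalg.solve(X0 @ X0.T, X0 @ t)   # direct (non-iterative) solution
    y = X0.T @ f                             # deconvolved output
    return f, y, t

# Synthetic usage: periodic impulses convolved with a decaying response + noise.
rng = np.random.default_rng(0)
n, period = 2000, 100
impulses = np.zeros(n); impulses[::period] = 1.0
h = np.exp(-np.arange(50) / 5.0) * np.sin(np.arange(50))
x = np.convolve(impulses, h, mode="same") + 0.05 * rng.standard_normal(n)
f, y, t = momeda_like_filter(x, period)
```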

  7. Developing European guidelines for training care professionals in mental health promotion.

    PubMed

    Greacen, Tim; Jouet, Emmanuelle; Ryan, Peter; Cserhati, Zoltan; Grebenc, Vera; Griffiths, Chris; Hansen, Bettina; Leahy, Eithne; da Silva, Ksenija Maravic; Sabić, Amra; De Marco, Angela; Flores, Paz

    2012-12-27

    Although mental health promotion is a priority mental health action area for all European countries, high level training resources and high quality skills acquisition in mental health promotion are still relatively rare. The aim of the current paper is to present the results of the DG SANCO-funded PROMISE project concerning the development of European guidelines for training social and health care professionals in mental health promotion. The PROMISE project brought together a multidisciplinary scientific committee from eight European sites representing a variety of institutions including universities, mental health service providers and public health organisations. The committee used thematic content analysis to filter and analyse European and international policy documents, scientific literature reviews on mental health promotion and existing mental health promotion programmes with regard to identifying quality criteria for training care professionals on this subject. The resulting PROMISE Guidelines quality criteria were then subjected to an iterative feedback procedure with local steering groups and training professionals at all sites with the aim of developing resource kits and evaluation tools for using the PROMISE Guidelines. Scientific committees also collected information from European, national and local stakeholder groups and professional organisations on existing training programmes, policies and projects. The process identified ten quality criteria for training care professionals in mental health promotion: embracing the principle of positive mental health; empowering community stakeholders; adopting an interdisciplinary and intersectoral approach; including people with mental health problems; advocating; consulting the knowledge base; adapting interventions to local contexts; identifying and evaluating risks; using the media; evaluating training, implementation processes and outcomes. The iterative feedback process produced resource kits and evaluation checklists linked with each of these quality criteria in all PROMISE languages. The development of generic guidelines based on key quality criteria for training health and social care professionals in mental health promotion should contribute in a significant way to implementing policy in this important area.

  8. Cost Analysis and Effectiveness of Using the Indoor Simulated Marksmanship Trainer (ISMT) for United States Marine Corps (USMC) Marksmanship Training

    DTIC Science & Technology

    2011-06-01

    training continuum. Each table of training requires a minimum amount of ammunition and targets. All of these materials are expensive for the Marine...charge by weight to prevent damage due to overloading. Damage by overloading is still possible with black powder. In the 1300s, handguns from...portable firearm and a forerunner of the handgun , are from several 14th Century Arabic manuscripts (Wuxia Society, n.d.). Today, modern warfare relies

  9. Recognition of genetically modified product based on affinity propagation clustering and terahertz spectroscopy

    NASA Astrophysics Data System (ADS)

    Liu, Jianjun; Kan, Jianquan

    2018-04-01

    In this paper, a new method for identifying genetically modified material from terahertz spectra is proposed, combining a support vector machine (SVM) with affinity propagation clustering. The affinity propagation algorithm clusters and labels the unlabeled training samples, and the SVM training data are continuously updated during the iterative process. Because the identification model is built without manually labeling the training samples, the error introduced by human labeling is reduced and the identification accuracy of the model is greatly improved.
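
    A minimal sklearn-style sketch of this pipeline, as read from the abstract, is shown below: the remaining unlabeled samples are clustered with affinity propagation, the current SVM labels each cluster exemplar, the most confidently labeled cluster is pseudo-labeled and added to the training set, and the SVM is retrained. Feature extraction from the terahertz spectra is omitted, and the confidence rule and stopping criterion are assumptions.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import SVC

def ap_svm_self_label(X_lab, y_lab, X_unlab, n_rounds=5):
    """Iteratively grow the SVM training set using affinity-propagation clusters.

    Each round: cluster the remaining unlabeled samples, let the current SVM
    label each cluster exemplar, propagate the most confident exemplar's label
    to its whole cluster, add that cluster to the training set, and retrain.
    """
    X_train, y_train = X_lab.copy(), y_lab.copy()
    remaining = X_unlab.copy()
    svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
    for _ in range(n_rounds):
        if len(remaining) < 2:
            break
        ap = AffinityPropagation(random_state=0).fit(remaining)
        exemplars = remaining[ap.cluster_centers_indices_]
        proba = svm.predict_proba(exemplars)
        best = int(np.argmax(proba.max(axis=1)))          # most confident exemplar
        members = ap.labels_ == best
        pseudo_label = svm.classes_[np.argmax(proba[best])]
        X_train = np.vstack([X_train, remaining[members]])
        y_train = np.concatenate([y_train, np.full(members.sum(), pseudo_label)])
        remaining = remaining[~members]
        svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
    return svm
```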

  10. Analysis Resilient Algorithm on Artificial Neural Network Backpropagation

    NASA Astrophysics Data System (ADS)

    Saputra, Widodo; Tulus; Zarlis, Muhammad; Widia Sembiring, Rahmat; Hartama, Dedy

    2017-12-01

    Decision makers require predictions to support future planning, and backpropagation-trained Artificial Neural Networks (ANN) are one common method. Backpropagation, however, still suffers from long training times, which motivates methods that accelerate training. The resilient variant of backpropagation changes the network weights and biases through a direct adaptation of each weight's step size based on local gradient information from every learning iteration. Applied to Istanbul Stock Exchange data, training with the resilient algorithm improved the predictions: the Mean Square Error (MSE) decreased and the accuracy increased.
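
    The resilient (Rprop) update can be summarized by the generic sketch below, in which each weight keeps its own step size that grows while the local gradient keeps its sign and shrinks when it flips; only the sign of the gradient, not its magnitude, drives the update. This is a textbook Rprop- variant, not the authors' implementation.

```python
import numpy as np

def rprop_update(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                 step_max=50.0, step_min=1e-6):
    """One resilient-backpropagation (Rprop-) update for a weight array.

    The per-weight step size grows when the gradient keeps its sign between
    iterations and shrinks when it flips; the weight then moves by the step
    size in the direction opposite to the gradient sign, which makes training
    largely insensitive to the gradient magnitude.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # suppress flipped gradients
    delta_w = -np.sign(grad) * step
    return delta_w, grad, step

# Usage inside a training loop (one weight array per layer):
#   delta_w, prev_grad, step = rprop_update(grad_w, prev_grad, step)
#   weights += delta_w
```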

  11. 38 CFR 21.154 - Special transportation assistance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... rehabilitation facility or sheltered workshop, and other reasonable expenses which may be incurred in local... incurred or one-half of the subsistence allowance of a single veteran in full-time institutional training...

  12. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate [Formula: see text] (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
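
    The flavor of an optimal first-order method with l1 regularization can be conveyed by the FISTA-style sketch below: each step combines the current gradient with momentum built from previous iterates, the Lipschitz constant of the gradient fixes the step size, and soft-thresholding handles the l1 term, giving the O(1/k²) rate characteristic of accelerated methods (presumably what the abstract contrasts with O(1/k)). This is a generic sketch on a toy sparse least-squares problem, not the paper's OGM for CRF training.

```python
import numpy as np

def accelerated_l1_descent(grad_f, lipschitz, w0, lam, n_iter=200):
    """Accelerated first-order minimization of f(w) + lam * ||w||_1."""
    def soft_threshold(v, tau):
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    w = w0.copy()
    z = w0.copy()
    t = 1.0
    for _ in range(n_iter):
        # gradient step with 1/L step size, then prox of the l1 regularizer
        w_next = soft_threshold(z - grad_f(z) / lipschitz, lam / lipschitz)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        # momentum: combine the new iterate with the previous one
        z = w_next + ((t - 1.0) / t_next) * (w_next - w)
        w, t = w_next, t_next
    return w

# Toy usage: sparse least squares, grad f(w) = A.T @ (A w - b).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20)); b = rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
w = accelerated_l1_descent(lambda w: A.T @ (A @ w - b), L, np.zeros(20), lam=0.1)
```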

  13. A self-adapting heuristic for automatically constructing terrain appreciation exercises

    NASA Astrophysics Data System (ADS)

    Nanda, S.; Lickteig, C. L.; Schaefer, P. S.

    2008-04-01

    Appreciating terrain is a key to success in both symmetric and asymmetric forms of warfare. Training to enable Soldiers to master this vital skill has traditionally required their translocation to a selected number of areas, each affording a desired set of topographical features, albeit with limited breadth of variety. As a result, the use of such methods has proved to be costly and time consuming. To counter this, new computer-aided training applications permit users to rapidly generate and complete training exercises in geo-specific open and urban environments rendered by high-fidelity image generation engines. The latter method is not only cost-efficient, but allows any given exercise and its conditions to be duplicated or systematically varied over time. However, even such computer-aided applications have shortcomings. One of the principal ones is that they usually require all training exercises to be painstakingly constructed by a subject matter expert. Furthermore, exercise difficulty is usually subjectively assessed and frequently ignored thereafter. As a result, such applications lack the ability to grow and adapt to the skill level and learning curve of each trainee. In this paper, we present a heuristic that automatically constructs exercises for identifying key terrain. Each exercise is created and administered in a unique iteration, with its level of difficulty tailored to the trainee's ability based on the correctness of that trainee's responses in prior iterations.

  14. Developing effective worker health and safety training materials: hazard awareness, identification, recognition, and control for the salon industry.

    PubMed

    Mayer, Annyce S; Brazile, William J; Erb, Samantha; Autenrieth, Daniel A; Serrano, Katherine; Van Dyke, Michael V

    2015-05-01

    In addition to formaldehyde, workers in salons can be exposed to other chemical irritants, sensitizers, carcinogens, reproductive hazards, infectious agents, ergonomic, and other physical hazards. Worker health and safety training is challenging because of current product labeling practices and the myriad of hazards portending risk for a wide variety of health effects. Through a Susan B. Harwood Targeted Topic Training grant from the Occupational Safety and Health Administration and assistance from salon development and training partners, we developed, delivered, and validated a health and safety training program using an iterative five-pronged approach. The training was well received and resulted in knowledge gain, improved workplace safety practices, and increased communication about health and safety. These training materials are available for download from the Occupational Safety and Health Administration's Susan B. Harwood Training Grant Program Web site.

  15. Developing effective health and safety training materials for workers in beryllium-using industries.

    PubMed

    Mayer, A S; Brazile, W J; Erb, S A; Barker, E A; Miller, C M; Mroz, M M; Maier, L A; Van Dyke, M V

    2013-07-01

    Despite reduced workplace exposures, beryllium sensitization and chronic beryllium disease still occur. Effective health and safety training is needed. Through an Occupational Safety and Health Administration (OSHA) Targeted Topic Training grant and company partners, we developed a training program. Evaluation and validation included knowledge and training reaction assessments and a training impact survey. We describe herein the iterative, five-pronged approach: (1) needs assessment; (2) materials development; (3) pilot-testing, evaluation, and material revisions; (4) worker training; and (5) evaluation and validation. Mean posttraining test scores increased 14% (82% to 96%; P < 0.005) and were unchanged at 90-day follow-up (94%; P = 0.744). In addition, 49% reported making changes in work practices. The use of a five-pronged training program was effective and well received and resulted in improved work practices. These materials are available on the OSHA Web site.

  16. Flexible binding simulation by a novel and improved version of virtual-system coupled adaptive umbrella sampling

    NASA Astrophysics Data System (ADS)

    Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi

    2016-10-01

    Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, both VAUS and regular adaptive umbrella sampling (AUS) remain computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit-solvent simulation are presented here. The improvements include calculation of the probability distribution by a Markov approximation, parameterization of the biasing forces by iterative polynomial fitting, and force scaling. When applied to the study of Ala-pentapeptide dimerization in explicit solvent, the improved method showed an advantage over regular AUS and makes larger biological systems amenable to such simulations.
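
    One ingredient listed above, parameterization of the biasing force by iterative polynomial fitting, can be sketched as below: each round fits a polynomial to the current mean-force estimate along the reaction coordinate, and in a real run the refined polynomial would bias the next round of sampling. The double-well data are synthetic, and the VAUS bookkeeping (virtual system, Markov approximation, force scaling) is not shown.

```python
import numpy as np

def refine_bias_polynomial(rc_samples, mean_force_samples, degree=6, n_rounds=3):
    """Iteratively fit a polynomial biasing force along a reaction coordinate.

    Each round fits the residual of the current polynomial against the sampled
    mean force; in a real AUS/VAUS run, new samples collected under the updated
    bias would replace these arrays before the next round.
    """
    coeffs = np.zeros(degree + 1)
    for _ in range(n_rounds):
        residual = mean_force_samples - np.polyval(coeffs, rc_samples)
        coeffs = coeffs + np.polyfit(rc_samples, residual, degree)
    return np.poly1d(coeffs)

# Synthetic usage: noisy mean force from a double-well free-energy profile.
rc = np.linspace(-2, 2, 200)
mean_force = -(4 * rc**3 - 4 * rc) + 0.2 * np.random.default_rng(2).standard_normal(200)
bias_force = refine_bias_polynomial(rc, mean_force)
```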

  17. Digital Inject Book v. 1.7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldridge, Bryce

    2016-10-05

    Digital Inject Book is a software program designed to generate and manage simulated data for radiation detectors, used to increase the realism of training where real radiation sources are impractical, expensive, or simply not available.

  18. Costs of an ostomy self-management training program for cancer survivors.

    PubMed

    Hornbrook, Mark C; Cobb, Martha D; Tallman, Nancy J; Colwell, Janice; McCorkle, Ruth; Ercolano, Elizabeth; Grant, Marcia; Sun, Virginia; Wendel, Christopher S; Hibbard, Judith H; Krouse, Robert S

    2018-03-01

    To measure incremental expenses to an oncologic surgical practice for delivering a community-based, ostomy nurse-led, small-group, behavior skills-training intervention to help bladder and colorectal cancer survivors understand and adjust to their ostomies and improve their health-related quality of life, as well as assist family caregivers to understand survivors' needs and provide appropriate supportive care. The intervention was a 5-session group behavior skills training in ostomy self-management following the principles of the Chronic Care Model. Faculty included Wound, Ostomy, and Continence Nurses (WOCNs) using an ostomy care curriculum. A gender-matched peer-in-time buddy was assigned to each ostomy survivor. The 4-session survivor curriculum included the following: self-management practice and solving immediate ostomy concerns; social well-being; healthy lifestyle; and a booster session. The single family caregiver session was co-led by a WOCN and an ostomy peer staff member and covered relevant caregiver and ostomate support issues. Each cohort required 8 weeks to complete the intervention. Nonlabor inputs included ostomy supplies, teaching materials, automobile mileage for WOCNs, mailing, and meeting space rental. Intervention personnel were employed by the University of Arizona. Labor expenses included salaries and fringe benefits. The total incremental expense per intervention cohort of 4 survivors was $7246 or $1812 per patient. A WOCN-led group self-help ostomy survivorship intervention provided affordable, effective care to cancer survivors with ostomies. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Remarks to Eighth Annual State of Modeling and Simulation

    DTIC Science & Technology

    1999-06-04

    organization, training as well as materiel Discovery vice Verification Tolerance for Surprise Free play Red Team Iterative Process Push to failure...Account for responsive & innovative future adversaries – free play , adaptive strategies and tactics by professional red teams • Address C2 issues & human

  20. Field tests of a participatory ergonomics toolkit for Total Worker Health.

    PubMed

    Nobrega, Suzanne; Kernan, Laura; Plaku-Alakbarova, Bora; Robertson, Michelle; Warren, Nicholas; Henning, Robert

    2017-04-01

    Growing interest in Total Worker Health ® (TWH) programs to advance worker safety, health and well-being motivated development of a toolkit to guide their implementation. Iterative design of a program toolkit occurred in which participatory ergonomics (PE) served as the primary basis to plan integrated TWH interventions in four diverse organizations. The toolkit provided start-up guides for committee formation and training, and a structured PE process for generating integrated TWH interventions. Process data from program facilitators and participants throughout program implementation were used for iterative toolkit design. Program success depended on organizational commitment to regular design team meetings with a trained facilitator, the availability of subject matter experts on ergonomics and health to support the design process, and retraining whenever committee turnover occurred. A two committee structure (employee Design Team, management Steering Committee) provided advantages over a single, multilevel committee structure, and enhanced the planning, communication, and teamwork skills of participants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Gastric precancerous diseases classification using CNN with a concise model.

    PubMed

    Zhang, Xu; Hu, Weiling; Chen, Fei; Liu, Jiquan; Yang, Yuanhang; Wang, Liangjing; Duan, Huilong; Si, Jianmin

    2017-01-01

    Gastric precancerous diseases (GPD) may deteriorate into early gastric cancer if misdiagnosed, so it is important to help doctors recognize GPD accurately and quickly. In this paper, we realize the classification of 3-class GPD, namely, polyp, erosion, and ulcer using convolutional neural networks (CNN) with a concise model called the Gastric Precancerous Disease Network (GPDNet). GPDNet introduces fire modules from SqueezeNet to reduce the model size and parameter count by about a factor of 10 while improving speed for quick classification. To maintain classification accuracy with fewer parameters, we propose an innovative method called iterative reinforced learning (IRL). After training GPDNet from scratch, we apply IRL to fine-tune the parameters whose values are close to 0, and then we take the modified model as a pretrained model for the next training. The result shows that IRL can improve the accuracy by about 9% after 6 iterations. The final classification accuracy of our GPDNet was 88.90%, which is promising for clinical GPD recognition.
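
    The IRL step, as read from the abstract, can be sketched as a masked fine-tuning round: parameters whose magnitude is near zero are selected and updated by gradient steps while the rest are frozen, and the result is used as the pretrained model for the next round. The threshold, learning rate, and grad_fn below are placeholders, not the GPDNet training code.

```python
import numpy as np

def irl_round(weights, grad_fn, threshold=1e-3, lr=1e-2, n_steps=100):
    """One masked fine-tuning round in the spirit of the abstract's IRL.

    Parameters whose magnitude is close to zero are selected and fine-tuned
    with masked gradient steps while the remaining parameters stay frozen; the
    updated weights then serve as the pretrained model for the next round.
    """
    mask = np.abs(weights) < threshold      # near-zero parameters to revive
    w = weights.copy()
    for _ in range(n_steps):
        g = grad_fn(w)                      # stand-in for backprop gradients
        w -= lr * g * mask                  # only masked parameters move
    return w

# Usage over several rounds (6 in the paper):
#   for _ in range(6):
#       weights = irl_round(weights, grad_fn)
```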

  2. The Experience of a Randomized Clinical Trial of Closed-Circuit Television versus Eccentric Viewing Training for People with Age-Related Macular Degeneration

    ERIC Educational Resources Information Center

    Leat, Susan J.; Si, Francis Fengqin; Gold, Deborah; Pickering, Dawn; Gordon, Keith; Hodge, William

    2017-01-01

    Introduction: In addition to optical devices, closed-circuit televisions (CCTVs) and eccentric viewing training are both recognized interventions to improve reading performance in individuals with vision loss secondary to age-related macular degeneration. Both are relatively expensive, however, either in the cost of the device or in the amount of…

  3. Using Colleges and Universities to Meet your Training Department Needs.

    ERIC Educational Resources Information Center

    Broderick, Richard

    1982-01-01

    Industries are turning to higher education to deliver programs that would be prohibitively expensive to develop and academic institutions are responding with a willingness to shape a program tailored to industry's needs. (JOW)

  4. Developing a World-Class Workforce: Transformation, Not Iteration

    ERIC Educational Resources Information Center

    Mosier, Jerrilee K.; Richey, Michael C.; McPherson, Kenneth B.; Eckhol, John O.; Cox, Frank Z.

    2006-01-01

    This article features a "Triad" partnership of a group of Snohomish County organizations representing education, government and industry. Recognizing the need for a training and workforce development effort to address the aerospace manufacturing employers' needs, Triad views themselves as the pivotal cornerstone for deployment of complex…

  5. Active learning for solving the incomplete data problem in facial age classification by the furthest nearest-neighbor criterion.

    PubMed

    Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun

    2011-07-01

    Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progress, privacy, human time, and effort. Although unlabeled face images can be obtained easily, it would be expensive to manually label them on a large scale and obtain the ground truth. The frugal selection of the unlabeled data for labeling to quickly reach high classification performance with minimal labeling efforts is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimension linear discriminant analysis (IB2DLDA) which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to increasingly improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes the margin-based uncertainty to the multiclass case and which is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on FG-NET and Morph databases together with a large unlabeled data set for age categorization problems show that the proposed approach can achieve results comparable to, or even better than, those of a conventionally trained active classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm can achieve similar results much faster than random selection and with fewer samples for age categorization. It also can achieve comparable results with active SVM but is much faster than active SVM in terms of training because kernel methods are not needed. The results on the face recognition database and palmprint/palm vein database showed that our approach can handle problems with a large number of classes. Our contributions in this paper are twofold. First, we proposed the IB2DLDA-FNN, the FNN being our novel idea, as a generic on-line or active learning paradigm. Second, we showed that it can be another viable tool for active learning of facial age range classification.
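
    The FNN criterion itself is simple to state: for each unlabeled sample, compute the distance to its nearest labeled neighbor, and query the samples for which this distance is largest, since they are least represented by the current labeled pool. The generic sketch below works on raw feature vectors; the paper applies the criterion to IB2DLDA features.

```python
import numpy as np
from scipy.spatial.distance import cdist

def furthest_nearest_neighbor(X_unlabeled, X_labeled, batch_size=10):
    """Select unlabeled samples by the furthest nearest-neighbor (FNN) criterion.

    For every unlabeled sample, find the distance to its nearest labeled
    neighbor; the samples whose nearest labeled neighbor is furthest away are
    the least represented by the labeled pool and are queried next.
    """
    d = cdist(X_unlabeled, X_labeled)          # pairwise distances
    nearest = d.min(axis=1)                    # distance to closest labeled sample
    return np.argsort(nearest)[-batch_size:]   # indices of the furthest ones

# Usage: query = furthest_nearest_neighbor(X_pool, X_train); label X_pool[query].
```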

  6. Impurity re-distribution in the corner regions of the JET divertor

    NASA Astrophysics Data System (ADS)

    Widdowson, A.; Coad, J. P.; Alves, E.; Baron-Wiechec, A.; Barradas, N. P.; Catarino, N.; Corregidor, V.; Heinola, K.; Krat, S.; Likonen, J.; Matthews, G. F.; Mayer, M.; Petersson, P.; Rubel, M.; Contributors, JET

    2017-12-01

    The International Thermonuclear Experimental Reactor (ITER) will use a mixture of deuterium (D) and tritium (T) as the fuel to generate power. Since T is both radioactive and expensive, the Joint European Torus (JET) has been at the forefront of research to discover how much T is used and where it may be retained within the main reaction chamber. Until the year 2010 the JET plasma facing components were constructed of carbon fibre composites. During the JET carbon (C) phases impurities accumulated at the corners of the divertor located towards the bottom of the chamber in regions shadowed from the plasma where they are very difficult to reach and remove. This build-up of C and the associated H-isotope (including T) retention were of particular concern for future fusion reactors; therefore, in 2010 JET changed the wall protection to (mainly) beryllium (Be) and the divertor to tungsten (W)—the JET ITER-like wall (ILW)—the choice of materials for ITER. This paper reveals that with the JET ILW impurities are still accumulating in the shadowed regions, with Be being the majority element, though the overall quantities are very much reduced from those in the C phases. Material will be transported into the shadowed regions principally when the plasma strike points are on the corner tiles, but particles typically have about a 75% probability of reflection from line-of-sight surfaces, and multiple reflection/scattering results in deposition over all surfaces.

  7. Part-task vs. whole-task training on a supervisory control task

    NASA Technical Reports Server (NTRS)

    Battiste, Vernol

    1987-01-01

    The efficacy of a part-task training for the psychomotor portion of a supervisory control simulation was compared to that of the whole-task training, using six subjects in each group, who were asked to perform a task as quickly as possible. Part-task training was provided with the cursor-control device prior to transition to the whole-task. The analysis of both the training and experimental trials demonstrated a significant performance advantage for the part-task group: the tasks were performed better and at higher speed. Although the subjects finally achieved the same level of performance in terms of score, the part-task method was preferable for economic reasons, since simple pretraining systems are significantly less expensive than the whole-task training systems.

  8. Target discrimination method for SAR images based on semisupervised co-training

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Du, Lan; Dai, Hui

    2018-01-01

    Synthetic aperture radar (SAR) target discrimination is usually performed in a supervised manner. However, supervised methods for SAR target discrimination may need lots of labeled training samples, whose acquirement is costly, time consuming, and sometimes impossible. This paper proposes an SAR target discrimination method based on semisupervised co-training, which utilizes a limited number of labeled samples and an abundant number of unlabeled samples. First, Lincoln features, widely used in SAR target discrimination, are extracted from the training samples and partitioned into two sets according to their physical meanings. Second, two support vector machine classifiers are iteratively co-trained with the extracted two feature sets based on the co-training algorithm. Finally, the trained classifiers are exploited to classify the test data. The experimental results on real SAR images data not only validate the effectiveness of the proposed method compared with the traditional supervised methods, but also demonstrate the superiority of co-training over self-training, which only uses one feature set.
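
    A minimal sketch of two-view co-training with SVMs is shown below: two classifiers are trained on disjoint feature sets, and in each round the most confident predictions on the unlabeled pool are pseudo-labeled and added to the labeled set before retraining. The Lincoln-feature extraction and the paper's exact confidence handling are not reproduced; this variant shares one labeled pool between the views.

```python
import numpy as np
from sklearn.svm import SVC

def co_train(X1, X2, y, X1_u, X2_u, n_rounds=10, n_add=5):
    """Semi-supervised co-training with two feature views (a sketch)."""
    X1, X2, y = X1.copy(), X2.copy(), y.copy()
    for _ in range(n_rounds):
        if len(X1_u) == 0:
            break
        clf1 = SVC(probability=True).fit(X1, y)
        clf2 = SVC(probability=True).fit(X2, y)
        # confidence of each view's classifier on the unlabeled pool
        conf1 = clf1.predict_proba(X1_u).max(axis=1)
        conf2 = clf2.predict_proba(X2_u).max(axis=1)
        pick = np.argsort(np.maximum(conf1, conf2))[-n_add:]
        # label each picked sample with the more confident view's prediction
        pred1, pred2 = clf1.predict(X1_u[pick]), clf2.predict(X2_u[pick])
        new_y = np.where(conf1[pick] >= conf2[pick], pred1, pred2)
        X1 = np.vstack([X1, X1_u[pick]]); X2 = np.vstack([X2, X2_u[pick]])
        y = np.concatenate([y, new_y])
        keep = np.setdiff1d(np.arange(len(X1_u)), pick)
        X1_u, X2_u = X1_u[keep], X2_u[keep]
    return SVC(probability=True).fit(X1, y), SVC(probability=True).fit(X2, y)
```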

  9. Mentored Discussions of Teaching: An Introductory Teaching Development Program for Future STEM Faculty

    ERIC Educational Resources Information Center

    Baiduc, Rachael R.; Linsenmeier, Robert A.; Ruggeri, Nancy

    2016-01-01

    Today's science, technology, engineering, and mathematics (STEM) graduate students and postdoctoral fellows are tomorrow's new faculty members; but these junior academicians often receive limited pedagogical training. We describe four iterations of an entry-level program with a low time commitment, Mentored Discussions of Teaching (MDT). The…

  10. Promoting Parent Engagement in Behavioral Intervention for Young Children with ADHD: Iterative Treatment Development

    ERIC Educational Resources Information Center

    DuPaul, George J.; Kern, Lee; Belk, Georgia; Custer, Beth; Hatfield, Andrea; Daffner, Molly; Peek, Daniel

    2018-01-01

    The most efficacious psychosocial intervention for reducing attention-deficit/hyperactivity disorder (ADHD) symptoms in young children is behavioral parent training (BPT). Potential benefits are hindered by limited accessibility, low session attendance, and poor implementation of prescribed strategies. As a result, only approximately half of…

  11. Towards a Professionalization of Pedagogical Improvisation in Teacher Education

    ERIC Educational Resources Information Center

    Ben-Horin, Oded

    2016-01-01

    The aim of this study is to provide theoretical and practical knowledge about strategies and techniques for training primary school education pre-service teachers (PSTs) for Pedagogical Improvisation (PI). Data was collected during two iterations of cross-disciplinary art/science school interventions in Norwegian 3rd-grade classes, which provided…

  12. Evaluating the iterative development of VR/AR human factors tools for manual work.

    PubMed

    Liston, Paul M; Kay, Alison; Cromie, Sam; Leva, Chiara; D'Cruz, Mirabelle; Patel, Harshada; Langley, Alyson; Sharples, Sarah; Aromaa, Susanna

    2012-01-01

    This paper outlines the approach taken to iteratively evaluate a set of VR/AR (virtual reality / augmented reality) applications for five different manual-work applications - terrestrial spacecraft assembly, assembly-line design, remote maintenance of trains, maintenance of nuclear reactors, and large-machine assembly process design - and examines the evaluation data for evidence of the effectiveness of the evaluation framework as well as the benefits to the development process of feedback from iterative evaluation. ManuVAR is an EU-funded research project that is working to develop an innovative technology platform and a framework to support high-value, high-knowledge manual work throughout the product lifecycle. The results of this study demonstrate the iterative improvements reached throughout the design cycles, observable through the trending of the quantitative results from three successive trials of the applications and the investigation of the qualitative interview findings. The paper discusses the limitations of evaluation in complex, multi-disciplinary development projects and finds evidence of the effectiveness of the use of the particular set of complementary evaluation methods incorporating a common inquiry structure used for the evaluation - particularly in facilitating triangulation of the data.

  13. Augmented reality application for industrial non-destructive inspection training

    NASA Astrophysics Data System (ADS)

    Amza, Catalin Gheorghe; Zapciu, Aurelian; Teodorescu, Octav

    2018-02-01

    Augmented Reality (AR) technology has great potential, especially for training new operators to use expensive equipment. In this context, the paper presents an augmented reality training system developed for phased-array ultrasonic non-destructive testing (NDT) equipment. The application was developed on the Unity 5.6.0 game-engine platform integrated with the Vuforia SDK for devices running the Android operating system. Tests performed by several NDT operators showed good results, demonstrating the potential of using the application in the industrial field.

  14. 41 CFR 301-11.300 - When is actual expense reimbursement warranted?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... meals are procured at a prearranged place such as a hotel where a meeting, conference or training session is held; (b) Costs have escalated because of special events (e.g., missile launching periods...

  15. Heat-shrinkable film improves adhesive bonds

    NASA Technical Reports Server (NTRS)

    Johns, J. M.; Reed, M. W.

    1980-01-01

    Pressure is applied during adhesive bonding by wrapping parts in heat-shrinkable plastic film. The film eliminates the need to vacuum-bag parts or heat them in an expensive autoclave. With this procedure, operators are trained quickly and no special skills are required.

  16. 41 CFR 301-10.162 - When may I use other than coach-class train accommodations?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES ALLOWABLE TRAVEL EXPENSES...-class accommodations would endanger your life or Government property; (2) You are an agent on protective...

  17. 78 FR 18625 - Call for Nominations for the California Desert District Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-27

    ... without compensation other than travel expenses. Members serve 3-year terms and may be nominated for.... Any group or individual may nominate a qualified person, based upon education, training, and knowledge...

  18. 11 CFR 106.1 - Allocation of expenses between candidates.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... compared to the total receipts by all candidates. In the case of a phone bank, the attribution shall be... campaign seminars, for training of campaign workers, and for registration or get-out-the-vote drives of...

  19. Affordable and personalized lighting using inverse modeling and virtual sensors

    NASA Astrophysics Data System (ADS)

    Basu, Chandrayee; Chen, Benjamin; Richards, Jacob; Dhinakaran, Aparna; Agogino, Alice; Martin, Rodney

    2014-03-01

    Wireless sensor networks (WSN) have great potential to enable personalized intelligent lighting systems while reducing building energy use by 50%-70%. As a result, WSN systems are being increasingly integrated into state-of-the-art intelligent lighting systems. In the future these systems will enable participation of lighting loads as ancillary services. However, such systems can be expensive to install and lack the plug-and-play quality necessary for user-friendly commissioning. In this paper we present an integrated system of wireless sensor platforms and modeling software to enable affordable and user-friendly intelligent lighting. It requires ~60% fewer sensor deployments compared to current commercial systems. Reduction in sensor deployments has been achieved by optimally replacing the actual photo-sensors with real-time discrete predictive inverse models. Spatially sparse and clustered sub-hourly photo-sensor data captured by the WSN platforms are used to develop and validate a piece-wise linear regression of indoor light distribution. This deterministic data-driven model accounts for sky conditions and solar position. The optimal placement of photo-sensors is performed iteratively to achieve the best predictability of the light field desired for indoor lighting control. Using two weeks of daylight and artificial light training data acquired at the Sustainability Base at NASA Ames, the model was able to predict the light level at seven monitored workstations with 80%-95% accuracy. We estimate that 10% adoption of this intelligent wireless sensor system in commercial buildings could save 0.2-0.25 quadrillion BTU of energy nationwide.
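
    The virtual-sensor idea can be sketched with a simple piecewise-linear model: the day is split into solar-position segments and a separate linear regression predicts a workstation's light level from the remaining sensors within each segment. The segmentation by solar angle and the feature choice below are simplifying assumptions; the paper's model conditions on sky state and optimizes sensor placement iteratively.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_piecewise_light_model(solar_angle, features, lux, n_segments=4):
    """Piecewise-linear virtual photosensor model (a sketch of the idea).

    The day is split into solar-angle segments and a separate linear model of
    the target workstation's light level is fitted per segment from the other
    sensors' readings.
    """
    edges = np.quantile(solar_angle, np.linspace(0, 1, n_segments + 1))
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (solar_angle >= lo) & (solar_angle <= hi)
        models.append(((lo, hi), LinearRegression().fit(features[sel], lux[sel])))
    return models

def predict_light(models, solar_angle, features):
    for (lo, hi), m in models:
        if lo <= solar_angle <= hi:
            return float(m.predict(features.reshape(1, -1)))
    return float(models[-1][1].predict(features.reshape(1, -1)))
```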

  20. Generation method of synthetic training data for mobile OCR system

    NASA Astrophysics Data System (ADS)

    Chernyshova, Yulia S.; Gayer, Alexander V.; Sheshkus, Alexander V.

    2018-04-01

    This paper addresses one of the fundamental problems of machine learning - training data acquisition. Obtaining enough natural training data is rather difficult and expensive. In recent years the use of synthetic images has become more attractive, as it saves human time and provides a huge number of images that would otherwise be difficult to obtain. However, for successful learning on an artificial dataset one should try to reduce the gap between the natural and synthetic data distributions. In this paper we describe an algorithm for creating artificial training datasets for OCR systems, using the Russian passport as a case study.
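
    A generic sketch of synthetic text-image generation is shown below: a string is rendered on a light background, then blurred and corrupted with Gaussian noise to push the synthetic distribution toward camera-captured documents. The passport-specific layouts, fonts, and distortions of the paper are not reproduced; the font, noise level, and blur radius are arbitrary choices.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFilter, ImageFont

def synth_text_image(text, size=(220, 40), noise_std=12, blur_radius=0.8):
    """Render a synthetic training image for an OCR system (a generic sketch)."""
    img = Image.new("L", size, color=235)            # light grayscale background
    draw = ImageDraw.Draw(img)
    draw.text((8, 10), text, fill=20, font=ImageFont.load_default())
    img = img.filter(ImageFilter.GaussianBlur(blur_radius))
    arr = np.asarray(img, dtype=np.float32)
    arr += np.random.default_rng().normal(0, noise_std, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

# Usage: samples = [synth_text_image(f"1234 56789{i}") for i in range(10)]
```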

  1. An accelerated training method for back propagation networks

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O. (Inventor)

    1993-01-01

    The principal objective is to provide a training procedure for a feed-forward, back-propagation neural network that greatly accelerates the training process. A set of orthogonal singular vectors is determined from the input matrix such that the standard deviations of the projections of the input vectors along these singular vectors, as a set, are substantially maximized, thus providing an optimal means of presenting the input data. Novelty exists in the method of extracting, from the set of input data, a set of features that can represent the input data in a simplified manner, thus greatly reducing the time and expense of training the system.
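
    The core idea, re-presenting the inputs along orthogonal singular vectors that maximize the spread of the projections, can be sketched with a plain SVD as below; the patent's specific procedure for choosing the vectors and the initial weights is not reproduced.

```python
import numpy as np

def project_onto_singular_vectors(X, n_components):
    """Project training inputs onto their leading singular vectors.

    The singular vectors of the centered input matrix maximize the standard
    deviation of the projections, giving a compact re-presentation of the
    inputs that can shorten back-propagation training.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]            # orthogonal singular vectors
    return Xc @ basis.T, basis           # reduced features + projection basis

# Usage: X_reduced, basis = project_onto_singular_vectors(X_train, 10)
#        (apply the same basis to validation/test inputs before the network)
```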

  2. A qualitative study of the perspectives of key stakeholders on the delivery of clinical academic training in the East Midlands.

    PubMed

    Green, Ruth H; Evans, Val; MacLeod, Sheona; Barratt, Jonathan

    2018-02-01

    Major changes in the design and delivery of clinical academic training in the United Kingdom have occurred, yet there has been little exploration of the perceptions of integrated clinical academic trainees or educators. We obtained the views of a range of key stakeholders involved in clinical academic training in the East Midlands. A qualitative study with inductive iterative thematic content analysis of findings from trainee surveys and facilitated focus groups. The East Midlands School of Clinical Academic Training. Integrated Clinical Academic Trainees, clinical and academic educators involved in clinical academic training. The experience, opinions and beliefs of key stakeholders about barriers and enablers in the delivery of clinical academic training. We identified key themes, many shared by both trainees and educators. These highlighted issues in the systems and process of the integrated academic pathways, career pathways, supervision and support, the assessment process and the balance between clinical and academic training. Our findings help inform the future development of integrated academic training programmes.

  3. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    PubMed

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted [Formula: see text] minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal [Formula: see text] minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence, excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly lower computational complexity compared to existing greedy recovery algorithms. It is even superior to [Formula: see text] minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
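
    The general shape of such a greedy recovery, with several correlation values admitted per iteration and a pruning step that discards incorrectly selected entries, is sketched below; the rule RMP uses to pick "the most sufficient number" of values per iteration is not reproduced, so the selection and pruning parameters should be treated as placeholders.

```python
import numpy as np

def greedy_recover(A, y, sparsity, n_select=4, max_iter=50, tol=1e-8):
    """Greedy sparse recovery selecting several atoms per iteration (a sketch)."""
    m, n = A.shape
    support = np.array([], dtype=int)
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(max_iter):
        corr = np.abs(A.T @ residual)
        new = np.argsort(corr)[-n_select:]                 # multiple atoms at once
        support = np.union1d(support, new)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        # prune: keep only the largest coefficients (target sparsity)
        support = np.sort(support[np.argsort(np.abs(coef))[-sparsity:]])
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n); x[support] = coef
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

# Toy usage: recover a 5-sparse vector from 40 random measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 120)) / np.sqrt(40)
x_true = np.zeros(120); x_true[rng.choice(120, 5, replace=False)] = rng.standard_normal(5)
x_hat = greedy_recover(A, A @ x_true, sparsity=5)
```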

  4. Method for training honeybees to respond to olfactory stimuli and enhancement of memory retention therein

    DOEpatents

    McCade, Kirsten J.; Wingo, Robert M.; Haarmann, Timothy K.; Sutherland, Andrew; Gubler, Walter D.

    2015-12-15

    A specialized conditioning protocol for honeybees that is designed for use within a complex agricultural ecosystem. This method ensures that the conditioned bees will be less likely to exhibit a conditioned response to uninfected plants, a false positive response that would render such a biological sensor unreliable for agricultural decision support. Also described are a superboosting training regime that allows training without the aid of expensive equipment, and protocols for training out in the field. Also described is a memory-enhancing cocktail that aids in long-term memory retention of a vapor signature. This allows the bees to be used in the field for longer durations and with fewer bees trained overall.

  5. Theorising Teaching and Learning: Pre-Service Teachers' Theoretical Awareness of Learning

    ERIC Educational Resources Information Center

    Brante, Göran; Holmqvist Olander, Mona; Holmquist, Per-Ola; Palla, Marta

    2015-01-01

    We examine pre-service teachers' theoretical learning during one five-week training module, and their educators' learning about better lecture design to foster student learning. The study is iterative: interventions (one per group) were implemented sequentially in student groups A-C, the results of the previous intervention serving as the baseline…

  6. Mission-Driven Adaptability in a Changing National Training System

    ERIC Educational Resources Information Center

    Zoellner, Don; Stephens, Anne; Joseph, Victor; Monro, Davena

    2017-01-01

    This case study of an adult and community education provider based in far north Queensland describes its capacity to balance various iterations of public policy against its vision for the future of Aboriginal and Torres Straits Islanders. Community-controlled organisations wanting to contribute to economic and social development in regional/remote…

  7. Artificial neural network prediction of aircraft aeroelastic behavior

    NASA Astrophysics Data System (ADS)

    Pesonen, Urpo Juhani

    An Artificial Neural Network that predicts aeroelastic behavior of aircraft is presented. The neural net was designed to predict the shape of a flexible wing in static flight conditions using results from a structural analysis and an aerodynamic analysis performed with traditional computational tools. To generate reliable training and testing data for the network, an aeroelastic analysis code using these tools as components was designed and validated. To demonstrate the advantages and reliability of Artificial Neural Networks, a network was also designed and trained to predict airfoil maximum lift at low Reynolds numbers where wind tunnel data was used for the training. Finally, a neural net was designed and trained to predict the static aeroelastic behavior of a wing without the need to iterate between the structural and aerodynamic solvers.

  8. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
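
    The alternating structure described above can be sketched as follows: the object and the PSF are updated in turn by a few truncated, non-negativity-projected gradient steps on the least-squares misfit, with the two inner iteration counts playing the role of the regularization parameters. The +SOR scheme, the adaptive relaxation schedule, and the fixed-point selection strategy of the paper are not reproduced; the step size and iteration counts are illustrative, and all arrays are assumed to share one 2-D shape.

```python
import numpy as np
from scipy.signal import fftconvolve

def alternating_blind_deconv(data, obj0, psf0, n_outer=20, n_inner=10, lr=1e-3):
    """Alternating approximation for blind deconvolution (a structural sketch).

    data, obj0, psf0 are 2-D arrays of the same shape; each inner loop is a
    truncated, projected (non-negative) gradient descent on ||obj * psf - data||^2
    with the other factor held fixed.
    """
    obj, psf = obj0.copy(), psf0.copy()
    for _ in range(n_outer):
        for _ in range(n_inner):                 # truncated descent in object space
            r = fftconvolve(obj, psf, mode="same") - data
            grad = fftconvolve(r, psf[::-1, ::-1], mode="same")
            obj = np.maximum(obj - lr * grad, 0.0)
        for _ in range(n_inner):                 # truncated descent in PSF space
            r = fftconvolve(obj, psf, mode="same") - data
            grad = fftconvolve(r, obj[::-1, ::-1], mode="same")
            psf = np.maximum(psf - lr * grad, 0.0)
        psf /= psf.sum() + 1e-12                 # keep the PSF normalized
    return obj, psf
```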

  9. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    USGS Publications Warehouse

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  10. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    PubMed

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  11. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  12. Artificial intelligence in medicine: humans need not apply?

    PubMed

    Diprose, William; Buist, Nicholas

    2016-05-06

    Artificial intelligence (AI) is a rapidly growing field with a wide range of applications. Driven by economic constraints and the potential to reduce human error, we believe that over the coming years AI will perform a significant amount of the diagnostic and treatment decision-making traditionally performed by the doctor. Humans would continue to be an important part of healthcare delivery, but in many situations, less expensive fit-for-purpose healthcare workers could be trained to 'fill the gaps' where AI are less capable. As a result, the role of the doctor as an expensive problem-solver would become redundant.

  13. Supplying of Assembly Lines Using Train of Trucks

    NASA Astrophysics Data System (ADS)

    Čujan, Zdeněk; Fedorko, Gabriel

    2016-11-01

    The typical supply-system concepts, i.e. "Just-in-time" (JIT) and "Just-in-sequence" (JIS), are very important factors for the fluent operation of assembly lines. Contemporary intra-plant transport systems are therefore being replaced by a new kind of transportation technology, namely trains of trucks. The trains of trucks are used in two possible operational modes: with a driver or driverless (fully automated). The trucks of the logistic trains are also cheaper and can carry a larger volume and mass of material at once. In this way not only the investment costs but also the operational expenses are reduced.

  14. Continuity vs. the Crowd-Tradeoffs Between Continuous and Intermittent Citizen Hydrology Streamflow Observations.

    PubMed

    Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine

    2017-07-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
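
    The subsampling experiment can be sketched as below: from a 15-minute record, a fixed number of readings per day is drawn with replacement in each iteration to mimic intermittent citizen-science visits, and minimum flow, maximum flow, and runoff are recomputed so their spread can be compared with the full-record values. The daily visit schedule here is only one of the four frequencies the paper tested (down to monthly), and flows are assumed to be in m³/s.

```python
import numpy as np

def subsample_stats(flow_15min, samples_per_day, n_iterations=50, seed=0):
    """Bootstrap subsampling of a continuous streamflow record (a sketch)."""
    rng = np.random.default_rng(seed)
    per_day = 96                                   # 15-minute readings per day
    days = len(flow_15min) // per_day
    record = flow_15min[:days * per_day].reshape(days, per_day)
    results = []
    for _ in range(n_iterations):
        # draw samples_per_day readings per day, with replacement
        cols = rng.integers(0, per_day, size=(days, samples_per_day))
        sub = np.take_along_axis(record, cols, axis=1)
        daily_mean = sub.mean(axis=1)
        results.append({
            "min_flow": sub.min(),
            "max_flow": sub.max(),
            "runoff": daily_mean.sum() * 86400.0,  # m^3 if flow is in m^3/s
        })
    return results
```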

  15. Adaptive statistical iterative reconstruction use for radiation dose reduction in pediatric lower-extremity CT: impact on diagnostic image quality.

    PubMed

    Shah, Amisha; Rees, Mitchell; Kar, Erica; Bolton, Kimberly; Lee, Vincent; Panigrahy, Ashok

    2018-06-01

    For the past several years, increased levels of imaging radiation and cumulative radiation to children have been a significant concern. Although several measures have been taken to reduce radiation dose during computed tomography (CT) scans, the newer dose reduction software adaptive statistical iterative reconstruction (ASIR) has been an effective technique in reducing radiation dose. To our knowledge, no studies have been published that assess the effect of ASIR on extremity CT scans in children. To compare radiation dose, image noise, and subjective image quality in pediatric lower extremity CT scans acquired with and without ASIR. The study group consisted of 53 patients imaged on a CT scanner equipped with ASIR software. The control group consisted of 37 patients whose CT images were acquired without ASIR. Image noise, Computed Tomography Dose Index (CTDI) and dose length product (DLP) were measured. Two pediatric radiologists rated the studies in subjective categories: image sharpness, noise, diagnostic acceptability, and artifacts. The CTDI (p value = 0.0184) and DLP (p value < 0.0002) were significantly decreased with the use of ASIR compared with non-ASIR studies. However, the subjective ratings for sharpness (p < 0.0001) and diagnostic acceptability of the ASIR images (p < 0.0128) were decreased compared with standard, non-ASIR CT studies. Adaptive statistical iterative reconstruction reduces radiation dose for lower extremity CTs in children, but at the expense of diagnostic imaging quality. Further studies are warranted to determine the specific utility of ASIR for pediatric musculoskeletal CT imaging.

  16. Prospects for steady-state scenarios on JET

    NASA Astrophysics Data System (ADS)

    Litaudon, X.; Bizarro, J. P. S.; Challis, C. D.; Crisanti, F.; DeVries, P. C.; Lomas, P.; Rimini, F. G.; Tala, T. J. J.; Akers, R.; Andrew, Y.; Arnoux, G.; Artaud, J. F.; Baranov, Yu F.; Beurskens, M.; Brix, M.; Cesario, R.; DeLa Luna, E.; Fundamenski, W.; Giroud, C.; Hawkes, N. C.; Huber, A.; Joffrin, E.; Pitts, R. A.; Rachlew, E.; Reyes-Cortes, S. D. A.; Sharapov, S. E.; Zastrow, K. D.; Zimmermann, O.; JET EFDA contributors, the

    2007-09-01

    In the 2006 experimental campaign, progress has been made on JET to operate non-inductive scenarios at higher applied powers (31 MW) and density (n_l ~ 4 × 10^19 m^-3), with ITER-relevant safety factor (q_95 ~ 5) and plasma shaping, taking advantage of the new divertor capabilities. The extrapolation of the performance using transport modelling benchmarked on the experimental database indicates that the foreseen power upgrade (~45 MW) will allow the development of non-inductive scenarios where the bootstrap current is maximized together with the fusion yield and not, as in present-day experiments, at its expense. The tools for the long-term JET programme are the new ITER-like ICRH antenna (~15 MW), an upgrade of the NB power (35 MW/20 s or 17.5 MW/40 s), a new ITER-like first wall, a new pellet injector for edge localized mode control together with improved diagnostic and control capability. Operation with the new wall will set new constraints on non-inductive scenarios that are already addressed experimentally and in the modelling. The fusion performance and driven current that could be reached at high density and power have been estimated using either 0D or 1-1/2D validated transport models. In the high power case (45 MW), the calculations indicate the potential for the operational space of the non-inductive regime to be extended in terms of current (~2.5 MA) and density (n_l > 5 × 10^19 m^-3), with high β_N (β_N > 3.0) and a fraction of the bootstrap current within 60-70% at high toroidal field (~3.5 T).

  17. An improved parallel fuzzy connected image segmentation method based on CUDA.

    PubMed

    Wang, Liansheng; Li, Dong; Huang, Shaohui

    2016-05-12

    The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very long. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed by adding a correction step on the edge points; this greatly enhances the calculation accuracy. The improved method is iterative: in the first iteration, the affinity computation strategy is changed and a look-up table is employed to reduce memory use; in the second iteration, the voxels miscalculated because of asynchronism are updated again. Three hepatic vascular CT sequences of different sizes were used in the experiments, each with three different seeds, and an NVIDIA Tesla C2075 was used to evaluate the improved method on these data sets. Experimental results show that the improved algorithm achieves a faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the method corrects the edge-point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and fewer errors compared to the original CUDA-kFOE, as demonstrated in the experimental results. In the future, we will focus on automatic acquisition methods and automatic processing.
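    The fuzzy connectedness computation that CUDA-kFOE parallelises can be sketched on the CPU; the 2-D example below (Gaussian intensity affinity, 4-neighbourhood, plain value iteration) is a simplified illustration, not the authors' GPU implementation, and it omits the block-edge correction that the paper addresses:

```python
import numpy as np

def fuzzy_connectedness(image, seed, sigma=10.0, max_iter=200):
    """Propagate fuzzy connectedness from a seed pixel on a 2-D float image.

    The affinity between 4-neighbours is a Gaussian of their intensity difference;
    the connectedness of a pixel is the strength of its best path to the seed,
    where a path's strength is the weakest affinity along it.
    """
    fc = np.zeros(image.shape, dtype=float)
    fc[seed] = 1.0
    for _ in range(max_iter):
        updated = fc.copy()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb_fc = np.roll(fc, (dy, dx), axis=(0, 1))
            nb_im = np.roll(image, (dy, dx), axis=(0, 1))
            # discard the rows/columns that np.roll wrapped around the border
            if dy == 1:  nb_fc[0, :] = 0.0
            if dy == -1: nb_fc[-1, :] = 0.0
            if dx == 1:  nb_fc[:, 0] = 0.0
            if dx == -1: nb_fc[:, -1] = 0.0
            affinity = np.exp(-(image - nb_im) ** 2 / (2.0 * sigma ** 2))
            updated = np.maximum(updated, np.minimum(nb_fc, affinity))
        updated[seed] = 1.0
        if np.allclose(updated, fc):
            break
        fc = updated
    return fc

rng = np.random.default_rng(0)
img = rng.normal(0.0, 2.0, (64, 64))
img[20:40, 20:40] += 100.0                      # bright object on a dark background
fc_map = fuzzy_connectedness(img, seed=(30, 30))
print("segmented pixels:", int((fc_map > 0.5).sum()))
```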

  18. What's fair is fair

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nachtrieb, R.; Freidberg, J.P.

    The newly elucidated strategy for the magnetic fusion program set forth by the Department of Energy calls for increased emphasis on alternate concepts. This strategy is motivated by the recognition that in spite of its many attractive features, a tokamak tends to be a low power density device, ultimately translating into a large and correspondingly expensive reactor. ITER, as it is currently envisaged, is a good example of a large, expensive, plain vanilla tokamak. In its defense, ITER rightly claims that its base design is very conservative in order to minimize the risk of failure. In order to increase power density and reduce cost there are two qualitatively different approaches that one can follow: discover advanced modes of tokamak operation or develop near alternate concepts. To decide which path to follow is a difficult task because of the uncertainties involved in making accurate comparisons between different concepts at different stages of development. One area, however, that most would agree is meaningful is ideal MHD stability. For any given concept to be credible as a reactor, it must at least be stable against macroscopic ideal MHD modes. The TPX design, for instance, goes to considerable trouble to obtain stability against external kinks: a close fitting metallic cage, rotation to stabilize the resistive wall version of the external kink, and, if all else fails, feedback. For credibility any other advanced tokamak or alternate concept should be held to the same standards of ideal MHD stability. As a first step in addressing this requirement we have investigated the stability of the RFP since it can be simply and accurately modeled as a straight cylinder. The RFP is well known to have good stability at high beta against internal modes but is very unstable to external modes. We have developed a linear stability code which treats the plasma as an ideal compressible fluid, and includes longitudinal flow and a resistive wall.

  19. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries.

    PubMed

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-15

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42 - 74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
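    The flavour of a matrix-free Newton-Krylov solve preconditioned with a cheap analytical (here diagonal) Jacobian can be sketched with SciPy; the 1-D nonlinear boundary-value problem below is an illustrative stand-in for the Navier-Stokes momentum equations, not the paper's solver:

```python
import numpy as np
from scipy.optimize import newton_krylov
from scipy.sparse.linalg import LinearOperator

n = 200
h = 1.0 / (n + 1)

def residual(u):
    """Residual of u'' = exp(u) on (0, 1) with u(0) = u(1) = 0, second-order differences."""
    d2u = np.empty_like(u)
    d2u[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    d2u[0] = (u[1] - 2.0 * u[0]) / h**2          # boundary values are zero
    d2u[-1] = (u[-2] - 2.0 * u[-1]) / h**2
    return d2u - np.exp(u)

def diag_preconditioner(u):
    """Analytical Jacobian diagonal: d(residual_i)/du_i = -2/h^2 - exp(u_i)."""
    d = -2.0 / h**2 - np.exp(u)
    return LinearOperator((n, n), matvec=lambda v: v / d)

u0 = np.zeros(n)
# Jacobian-vector products are approximated matrix-free inside the Krylov solver,
# while the cheap analytical diagonal (frozen at the initial guess) preconditions it.
sol = newton_krylov(residual, u0, method='lgmres',
                    inner_M=diag_preconditioner(u0), f_tol=1e-8)
print("max |residual| =", np.abs(residual(sol)).max())
```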

  20. A Newton–Krylov method with an approximate analytical Jacobian for implicit solution of Navier–Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    PubMed Central

    Asgharzadeh, Hafez; Borazjani, Iman

    2016-01-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42 – 74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80–90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future. PMID:28042172

  1. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.

  2. 17 CFR 202.190 - Public Company Accounting Oversight Board budget approval process.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... information technology projects; and (viii) A statement that the PCAOB has considered relative costs and..., processes, staff skills, information and other technologies, human resources, capital assets, and other... include, among others: personnel, training, recruiting and relocation expenses, information technology...

  3. 17 CFR 202.190 - Public Company Accounting Oversight Board budget approval process.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... information technology projects; and (viii) A statement that the PCAOB has considered relative costs and..., processes, staff skills, information and other technologies, human resources, capital assets, and other... include, among others: personnel, training, recruiting and relocation expenses, information technology...

  4. 48 CFR 237.7202 - Limitations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Limitations. 237.7202 Section 237.7202 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT... funds for tuition or other expenses for training in any legal profession, except in connection with the...

  5. 47 CFR 27.1164 - The cost-sharing formula.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... control equipment; engineering costs (design/path survey); installation; systems testing; FCC filing costs; site acquisition and civil works; zoning costs; training; disposal of old equipment; test equipment... a replacement system, such as equipment and engineering expenses. C may not exceed $250,000 per...

  6. Upper limb stroke rehabilitation: the effectiveness of Stimulation Assistance through Iterative Learning (SAIL).

    PubMed

    Meadmore, Katie L; Cai, Zhonglun; Tong, Daisy; Hughes, Ann-Marie; Freeman, Chris T; Rogers, Eric; Burridge, Jane H

    2011-01-01

    A novel system has been developed which combines robotic therapy with electrical stimulation (ES) for upper limb stroke rehabilitation. This technology, termed SAIL: Stimulation Assistance through Iterative Learning, employs advanced model-based iterative learning control (ILC) algorithms to precisely assist participants' completion of 3D tracking tasks with their impaired arm. Data are reported from a preliminary study with unimpaired participants, and also from a single hemiparetic stroke participant with reduced upper limb function who has used the system in a clinical trial. All participants completed tasks which involved moving their (impaired) arm to follow an image of a slowly moving sphere along a trajectory. Each participant's arm was supported by a robot and ES was applied to the triceps brachii and anterior deltoid muscles. During each task, the same tracking trajectory was repeated 6 times and ILC was used to compute the stimulation signals to be applied on the next iteration. Unimpaired participants took part in a single one-hour training session and the stroke participant undertook 18 one-hour treatment sessions composed of tracking tasks varying in length, orientation and speed. The results reported describe changes in tracking ability and demonstrate feasibility of the SAIL system for upper limb rehabilitation. © 2011 IEEE
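    The ILC update at the heart of such a system can be illustrated on a toy first-order plant; the plant parameters, learning gain, and number of repetitions below are assumptions for illustration, not the SAIL model:

```python
import numpy as np

# Discrete first-order plant x[t+1] = a*x[t] + b*u[t]; the output should track r.
a, b, T, trials = 0.9, 0.5, 60, 6
r = np.sin(np.linspace(0.0, np.pi, T + 1))    # slowly varying reference trajectory
L = 0.8 / b                                   # learning gain (assumed); |1 - L*b| < 1

u = np.zeros(T)                               # input (stimulation) signal, refined per trial
for k in range(trials):
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = a * x[t] + b * u[t]
    e = r[1:] - x[1:]                         # tracking error recorded over the repetition
    print(f"trial {k + 1}: RMS tracking error = {np.sqrt(np.mean(e**2)):.4f}")
    u = u + L * e                             # ILC: reuse this repetition's error next time
```

    Because the same trajectory is repeated, the error measured on one attempt is fed forward to correct the next, which is why the RMS error shrinks from trial to trial.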

  7. Iterative learning-based decentralized adaptive tracker for large-scale systems: a digital redesign approach.

    PubMed

    Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua

    2011-07-01

    In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference among sub-systems and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupled property but also possesses good tracking performance in both the transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Learning Efficient Sparse and Low Rank Models.

    PubMed

    Sprechmann, P; Bronstein, A M; Sapiro, G

    2015-09-01

    Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
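    The data-dependent iterative optimization that such learned pursuit processes unroll can be sketched with the classical ISTA algorithm for sparse coding; the random dictionary, step size, and regularisation weight below are illustrative:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    A learned pursuit network replaces this loop with a fixed, small number of
    layers whose weights are trained to approximate the converged solution.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, y, lam=0.1)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```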

  9. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
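    A toy version of such a merit function, trading the surrogate's predicted objective against a reward for sampling far from already-evaluated points (and thereby improving the approximation), might look as follows; the quadratic surrogate, the weight rho, the grid search, and the 1-D test function are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def fit_quadratic_surrogate(X, F):
    """Least-squares fit of f(x) ~ c0 + c1*x + c2*x^2 to the evaluated 1-D points."""
    coeffs, *_ = np.linalg.lstsq(np.vander(X, 3, increasing=True), F, rcond=None)
    return lambda x: coeffs[0] + coeffs[1] * x + coeffs[2] * x**2

def merit(x, surrogate, X_evaluated, rho=0.5):
    """Predicted objective minus a reward for distance to known points; minimising it
    balances exploiting the surrogate and improving the approximation."""
    return surrogate(x) - rho * np.min(np.abs(x - X_evaluated))

f_true = lambda x: np.sin(3.0 * x) + 0.5 * x**2     # stand-in for the expensive objective

X = np.array([-2.0, 0.0, 2.0])                      # initial expensive evaluations
F = f_true(X)
for it in range(5):
    surrogate = fit_quadratic_surrogate(X, F)
    cand = np.linspace(-3.0, 3.0, 601)              # cheap search over the surrogate
    x_next = cand[np.argmin([merit(c, surrogate, X) for c in cand])]
    X = np.append(X, x_next)
    F = np.append(F, f_true(x_next))                # one expensive evaluation per iteration
    print(f"iteration {it}: evaluated x = {x_next:.3f}, f = {F[-1]:.3f}")
```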

  10. Towards development and validation of an intraoperative assessment tool for robot-assisted radical prostatectomy training: results of a Delphi study.

    PubMed

    Morris, Christopher; Hoogenes, Jen; Shayegan, Bobby; Matsumoto, Edward D

    2017-01-01

    As urology training shifts toward competency-based frameworks, the need for tools for high-stakes assessment of trainees is crucial. Validated assessment metrics are lacking for robot-assisted radical prostatectomy (RARP). As it is quickly becoming the gold standard for treatment of localized prostate cancer, the development and validation of a RARP assessment tool for training is timely. We recruited 13 expert RARP surgeons from the United States and Canada to serve as our Delphi panel. Using an initial inventory developed via a modified Delphi process with urology residents, fellows, and staff at our institution, panelists iteratively rated each step and sub-step on a 5-point Likert scale of agreement for inclusion in the final assessment tool. Qualitative feedback was elicited for each item to determine proper step placement, wording, and suggestions. Panelists' responses were compiled and the inventory was edited through three iterations, after which 100% consensus was achieved. The initial inventory steps were decreased by 13% and a skip pattern was incorporated. The final RARP stepwise inventory comprised 13 critical steps with 52 sub-steps. There was no attrition throughout the Delphi process. Our Delphi study resulted in a comprehensive inventory of intraoperative RARP steps with excellent consensus. This final inventory will be used to develop a valid and psychometrically sound intraoperative assessment tool for use during RARP training and evaluation, with the aim of increasing competency of all trainees. Copyright® by the International Brazilian Journal of Urology.

  11. Towards development and validation of an intraoperative assessment tool for robot-assisted radical prostatectomy training: results of a Delphi study

    PubMed Central

    Morris, Christopher; Hoogenes, Jen; Shayegan, Bobby; Matsumoto, Edward D.

    2017-01-01

    ABSTRACT Introduction As urology training shifts toward competency-based frameworks, the need for tools for high-stakes assessment of trainees is crucial. Validated assessment metrics are lacking for robot-assisted radical prostatectomy (RARP). As it is quickly becoming the gold standard for treatment of localized prostate cancer, the development and validation of a RARP assessment tool for training is timely. Materials and methods We recruited 13 expert RARP surgeons from the United States and Canada to serve as our Delphi panel. Using an initial inventory developed via a modified Delphi process with urology residents, fellows, and staff at our institution, panelists iteratively rated each step and sub-step on a 5-point Likert scale of agreement for inclusion in the final assessment tool. Qualitative feedback was elicited for each item to determine proper step placement, wording, and suggestions. Results Panelists' responses were compiled and the inventory was edited through three iterations, after which 100% consensus was achieved. The initial inventory steps were decreased by 13% and a skip pattern was incorporated. The final RARP stepwise inventory comprised 13 critical steps with 52 sub-steps. There was no attrition throughout the Delphi process. Conclusions Our Delphi study resulted in a comprehensive inventory of intraoperative RARP steps with excellent consensus. This final inventory will be used to develop a valid and psychometrically sound intraoperative assessment tool for use during RARP training and evaluation, with the aim of increasing competency of all trainees. PMID:28379668

  12. Using Quality Rating Scales for Professional Development: Experiences from the UK

    ERIC Educational Resources Information Center

    Mathers, Sandra; Linskey, Faye; Seddon, Judith; Sylva, Kathy

    2007-01-01

    The ECERS-R and ITERS-R are among two of the most widely used observational measures for describing the characteristics of early childhood education and care. This paper describes a professional development programme currently taking place in seven regions across England, designed to train local government staff in the application of the scales as…

  13. No More "Magic Aprons": Longitudinal Assessment and Continuous Improvement of Customer Service at the University of North Dakota Libraries

    ERIC Educational Resources Information Center

    Clark, Karlene T.; Walker, Stephanie R.

    2017-01-01

    The University of North Dakota (UND) Libraries have developed a multi-award winning Customer Service Program (CSP) involving longitudinal assessment and continuous improvement. The CSP consists of iterative training modules; constant reinforcement of Customer Service Principles with multiple communication strategies and tools, and incentives that…

  14. Iterations of the SafeCare Model: An Evidence-Based Child Maltreatment Prevention Program

    ERIC Educational Resources Information Center

    Edwards, Anna; Lutzker, John R.

    2008-01-01

    SafeCare is an evidenced-based parenting program for at-risk and maltreating parents that addresses the social and family ecology in which child maltreatment occurs. SafeCare home visitors focus on behavioral skills that are trained to predetermined performance criteria. Recent research has stressed the importance of successful dissemination and…

  15. The Grades That Clinical Teachers Give Students Modifies the Grades They Receive

    ERIC Educational Resources Information Center

    Paget, Michael; Brar, Gurbir; Veale, Pamela; Busche, Kevin; Coderre, Sylvain; Woloschuk, Wayne; McLaughlin, Kevin

    2018-01-01

    Prior studies have shown a correlation between the grades students receive and how they rate their teacher in the classroom. In this study, the authors probe this association on clinical rotations and explore potential mechanisms. All In-Training Evaluation Reports (ITERs) for students on mandatory clerkship rotations from April 1, 2013 to January…

  16. Training Final Year Students in Data Presentation Skills with an Iterative Report-Feedback Cycle

    ERIC Educational Resources Information Center

    Verkade, Heather

    2015-01-01

    Although practical laboratory activities are often considered the linchpin of science education, asking students to produce many large practical reports can be problematic. Practical reports require diverse skills, and therefore do not focus the students' attention on any one skill where specific skills need to be enhanced. They are also…

  17. Transcending the Quantitative-Qualitative Divide with Mixed Methods Research: A Multidimensional Framework for Understanding Congruence and Completeness in the Study of Values

    ERIC Educational Resources Information Center

    McLafferty, Charles L., Jr.; Slate, John R.; Onwuegbuzie, Anthony J.

    2010-01-01

    Quantitative research dominates published literature in the helping professions. Mixed methods research, which integrates quantitative and qualitative methodologies, has received a lukewarm reception. The authors address the iterative separation that infuses theory, praxis, philosophy, methodology, training, and public perception and propose a…

  18. Testing the Usability of a Portable DVD Player and Tailored Photo Instructions with Older Adult Veterans

    ERIC Educational Resources Information Center

    Gould, Christine E.; Zapata, Aimee Marie L.; Shinsky, Deanna N.; Goldstein, Mary K.

    2018-01-01

    DVD-delivered behavioral skills training may help disseminate efficacious treatments to older adults independent of internet access. The present study examined the usability of a portable DVD player alongside iterative revisions of accompanying instructions to be used by older adults in a DVD-delivered behavioral skills treatment study. The sample…

  19. Expanding the Role of School Psychologists to Support Early Career Teachers: A Mixed-Method Study

    ERIC Educational Resources Information Center

    Shernoff, Elisa S.; Frazier, Stacy L.; Maríñez-Lora, Ané M.; Lakind, Davielle; Atkins, Marc S.; Jakobsons, Lara; Hamre, Bridget K.; Bhaumik, Dulal K.; Parker-Katz, Michelle; Neal, Jennifer Watling; Smylie, Mark A.; Patel, Darshan A.

    2016-01-01

    School psychologists have training and expertise in consultation and evidence-based interventions that position them well to support early career teachers (ECTs). The current study involved iterative development and pilot testing of an intervention to help ECTs become more effective in classroom management and engaging learners, as well as more…

  20. The Cost of Family Medicine Residency Training: Impacts of Federal and State Funding.

    PubMed

    Pauwels, Judith; Weidner, Amanda

    2018-02-01

    Numerous organizations are calling for the expansion of graduate medical education (GME) positions nationally. Developing new residency programs and expanding existing programs can only happen if financial resources are available to pay for the expenses of training beyond what can be generated in direct clinical income by the residents and faculty in the program. The goal of this study was to evaluate trended data regarding the finances of family medicine residency programs to identify what financial resources are needed to sustain graduate medical education programs. A group of family medicine residency programs have shared their financial data since 2002 through a biennial survey of program revenues, expenses, and staffing. Data sets over 12 years were collected and analyzed, and results compared to analyze trends. Overall expenses increased 70.4% during this period. Centers for Medicare and Medicaid Services (CMS) GME revenue per resident increased by 15.7% for those programs receiving these monies. Overall, total revenue per resident, including clinical revenues, state funding, and any other revenue stream, increased 44.5% from 2006 to 2016. The median cost per resident among these programs, excluding federal GME funds, is currently $179,353; this amount has increased over the 12 years by 93.7%. For this study group of family medicine programs, data suggests a cost per resident per year, excluding federal and state GME funding streams, of about $180,000. This excess expense compared to revenue must be met by other agencies, whether from CMS, the Health Resources and Services Administration (HRSA), state expenditures or other sources, through stable long-term commitments to these funding mechanisms to ensure program viability for these essential family medicine programs in the future.

  1. The State of Sensor Technology and Air Quality Monitoring

    EPA Science Inventory

    Produces data of known value and highly reliable
    Stationary - cannot be easily relocated
    Instruments are often large and require a building to support their operation
    Expensive to purchase and operate (typically > $20K each)
    Requires frequent visits by highly trained staff to check on...

  2. Developing an Online Certification Program for Nutrition Education Assistants

    ERIC Educational Resources Information Center

    Christofferson, Debra; Christensen, Nedra; LeBlanc, Heidi; Bunch, Megan

    2012-01-01

    Objective: To develop an online certification program for nutrition education paraprofessionals to increase knowledge and confidence and to overcome training barriers of programming time and travel expenses. Design: An online interactive certification course based on Supplemental Nutrition Assistance Program-Education and Expanded Food and…

  3. Neural Network for Positioning Space Station Solar Arrays

    NASA Technical Reports Server (NTRS)

    Graham, Ronald E.; Lin, Paul P.

    1994-01-01

    As a shuttle approaches the Space Station Freedom for a rendezvous, the shuttle's reaction control jet firings pose a risk of excessive plume impingement loads on Freedom solar arrays. The current solution to this problem, in which the arrays are locked in a feathered position prior to the approach, may be neither accurate nor robust, and is also expensive. An alternative solution is proposed here: the active control of Freedom's beta gimbals during the approach, positioning the arrays dynamically in such a way that they remain feathered relative to the shuttle jet most likely to cause an impingement load. An artificial neural network is proposed as a means of determining the gimbal angles that would drive plume angle of attack to zero. Such a network would be both accurate and robust, and could be less expensive to implement than the current solution. A network was trained via backpropagation, and results, which compare favorably to the current solution as well as to some other alternatives, are presented. Other training options are currently being evaluated.

  4. An Iterative Learning Algorithm to Map Oil Palm Plantations from Synthetic Aperture Radar and Crowdsourcing

    NASA Astrophysics Data System (ADS)

    Pinto, N.; Zhang, Z.; Perger, C.; Aguilar-Amuchastegui, N.; Almeyda Zambrano, A. M.; Broadbent, E. N.; Simard, M.; Banerjee, S.

    2017-12-01

    The oil palm Elaeis spp. grows exclusively in the tropics and provides 30% of the world's vegetable oil. While oil palm-derived biodiesel can reduce carbon emissions from fossil fuels, plantation establishment may be associated with peat fires and deforestation. The ability to monitor plantation establishment and expansion over carbon-rich tropical forests is critical for quantifying the net impact of oil palm commodities on carbon fluxes. Our objective is to develop a robust methodology to map oil palm plantations in tropical biomes, based on Synthetic Aperture Radar (SAR) from Sentinel-1, ALOS/PALSAR2, and UAVSAR. The C- and L-band signals from these instruments are sensitive to vegetation parameters such as canopy volume, trunk shape, and trunk spatial arrangement, which are critical for differentiating crops from forests and native palms. Based on Bayesian statistics, the learning algorithm employed here adapts to growing knowledge as sites and training points are added. We will present an iterative approach wherein a model is initially built at the site with the most training points - in our case, Costa Rica. Model posteriors from Costa Rica, depicting polarimetric signatures of oil palm plantations, are then used as priors in a classification exercise taking place in South Kalimantan. Results are evaluated by local researchers using the LACO Wiki interface. All validation points, including misclassified sites, are used in an additional iteration to improve model results to >90% overall accuracy. We report on the impact of plantation age on polarimetric signatures, and we also compare model performance with and without L-band data.
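    The posterior-becomes-prior mechanism can be illustrated with a conjugate Gaussian update of a single polarimetric backscatter feature for the oil-palm class; the feature values, variances, and sample sizes below are simulated purely for illustration:

```python
import numpy as np

def update_gaussian_mean(prior_mean, prior_var, obs, obs_var):
    """Conjugate update of an unknown Gaussian mean with known observation variance."""
    n = len(obs)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
    return post_mean, post_var

rng = np.random.default_rng(1)
obs_var = 4.0                               # assumed within-class variance (dB^2)

# Site 1 (many training points): start from a vague prior on the class mean
mean, var = 0.0, 100.0
site1 = rng.normal(-12.0, 2.0, size=200)    # simulated oil-palm backscatter, site 1
mean, var = update_gaussian_mean(mean, var, site1, obs_var)
print(f"posterior after site 1: mean = {mean:.2f} dB, var = {var:.4f}")

# Site 2 (few training points): the site-1 posterior becomes the prior here
site2 = rng.normal(-11.0, 2.0, size=10)
mean, var = update_gaussian_mean(mean, var, site2, obs_var)
print(f"posterior after site 2: mean = {mean:.2f} dB, var = {var:.4f}")
```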

  5. Academic requirements for Certificate of Completion of Training in surgical training: Consensus recommendations from the Association of Surgeons in Training/National Research Collaborative Consensus Group.

    PubMed

    Lee, Mathew J; Bhangu, A; Blencowe, Natalie S; Nepogodiev, D; Gokani, Vimal J; Harries, Rhiannon L; Akinfala, M; Ali, O; Allum, W; Bosanquet, D C; Boyce, K; Bradburn, M; Chapman, S J; Christopher, E; Coulter, I; Dean, B J F; Dickfos, M; El Boghdady, M; Elmasry, M; Fleming, S; Glasbey, J; Healy, C; Kasivisvanathan, V; Khan, K S; Kolias, A G; Lee, S M; Morton, D; O'Beirne, J; Sinclair, P; Sutton, P A

    2016-11-01

    Surgical trainees are expected to demonstrate academic achievement in order to obtain their certificate of completion of training (CCT). These standards are set by the Joint Committee on Surgical Training (JCST) and specialty advisory committees (SAC). The standards are not equivalent across all surgical specialties and recognise different achievements as evidence. They do not recognise changes in models of research and focus on outcomes rather than process. The Association of Surgeons in Training (ASiT) and National Research Collaborative (NRC) set out to develop a progressive, consistent and flexible evidence set for academic requirements at CCT. A modified-Delphi approach was used. An expert group consisting of representatives from the ASiT and the NRC undertook iterative review of a document proposing changes to requirements. This was circulated amongst wider stakeholders. After ten iterations, an open meeting was held to discuss these proposals. Voting on statements was performed using a 5-point Likert Scale. Each statement was voted on twice, with ≥80% of votes in agreement meaning the statement was approved. The results of this vote were used to propose core and optional academic requirements for CCT. Online discussion concluded after ten rounds. At the consensus meeting, statements were voted on by 25 delegates from across surgical specialties and training-grades. The group strongly favoured acquisition of 'Good Clinical Practice' training and research methodology training as CCT requirements. The group agreed that higher degrees, publications in any author position (including collaborative authorship), recruiting patients to a study or multicentre audit and presentation at a national or international meeting could be used as evidence for the purpose of CCT. The group agreed on two essential 'core' requirements (GCP and methodology training) and two of a menu of four 'additional' requirements (publication with any authorship position, presentation, recruitment of patients to a multicentre study and completion of a higher degree), which should be completed in order to attain CCT. This approach has engaged stakeholders to produce a progressive set of academic requirements for CCT, which are applicable across surgical specialties. Flexibility in requirements whilst retaining a high standard of evidence is desirable. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  6. A qualitative study of the perspectives of key stakeholders on the delivery of clinical academic training in the East Midlands

    PubMed Central

    Evans, Val; MacLeod, Sheona

    2018-01-01

    Objective Major changes in the design and delivery of clinical academic training in the United Kingdom have occurred, yet there has been little exploration of the perceptions of integrated clinical academic trainees or educators. We obtained the views of a range of key stakeholders involved in clinical academic training in the East Midlands. Design A qualitative study with inductive iterative thematic content analysis of findings from trainee surveys and facilitated focus groups. Setting The East Midlands School of Clinical Academic Training. Participants Integrated Clinical Academic Trainees, clinical and academic educators involved in clinical academic training. Main outcome measures The experience, opinions and beliefs of key stakeholders about barriers and enablers in the delivery of clinical academic training. Results We identified key themes, many shared by both trainees and educators. These highlighted issues in the systems and processes of the integrated academic pathways, career pathways, supervision and support, the assessment process and the balance between clinical and academic training. Conclusions Our findings help inform the future development of integrated academic training programmes. PMID:29487745

  7. Study on longitudinal force simulation of heavy-haul train

    NASA Astrophysics Data System (ADS)

    Chang, Chongyi; Guo, Gang; Wang, Junbiao; Ma, Yingming

    2017-04-01

    The longitudinal dynamics model of heavy-haul trains and the air brake model used in longitudinal train dynamics (LTDs) are established. The dry friction damping hysteretic characteristic of steel friction draft gears is simulated by the equation which describes the suspension forces in truck leaf springs. The draft gear model introduces the dynamic loading force, the viscous friction of the steel friction elements and the damping force; consequently, a numerical model of the draft gears is brought forward. The equation of LTDs is strongly non-linear. In order to solve the response of the strongly non-linear system, a high-precision equilibrium-iteration method based on the Newmark-β method is presented and a numerical analysis is made. Longitudinal dynamic forces of a 20,000-tonne heavy-haul train were tested, and the models and solution method provided are verified by the test results.
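    The structure of the solution method, Newmark-β time stepping with a Newton-type equilibrium iteration at every step, can be sketched for a single degree of freedom with a non-linear spring; the mass, damping, spring law, and forcing below are illustrative, not the paper's draft-gear model:

```python
import numpy as np

# Single-DOF analogue of one coupler: m*u'' + c*u' + fs(u) = p(t), fs non-linear.
m, c = 1.0e5, 2.0e4                                  # mass (kg), viscous damping (N s/m)
fs = lambda u: 5.0e6 * u + 2.0e8 * u**3              # non-linear spring force (N)
kt = lambda u: 5.0e6 + 6.0e8 * u**2                  # tangent stiffness dfs/du (N/m)
p = lambda t: 1.0e5 * np.sin(2.0 * np.pi * t)        # external longitudinal force (N)

beta, gamma, dt, nsteps = 0.25, 0.5, 1.0e-3, 2000    # average-acceleration Newmark-β
u, v = 0.0, 0.0
acc = (p(0.0) - c * v - fs(u)) / m                   # consistent initial acceleration

for n in range(nsteps):
    t1 = (n + 1) * dt
    u1 = u                                           # predictor for the new displacement
    for _ in range(20):                              # equilibrium (Newton) iteration
        a1 = (u1 - u - dt * v - dt**2 * (0.5 - beta) * acc) / (beta * dt**2)
        v1 = v + dt * ((1.0 - gamma) * acc + gamma * a1)
        res = m * a1 + c * v1 + fs(u1) - p(t1)       # dynamic equilibrium residual (N)
        if abs(res) < 1.0e-3:
            break
        k_eff = m / (beta * dt**2) + c * gamma / (beta * dt) + kt(u1)
        u1 -= res / k_eff                            # Newton correction
    u, v, acc = u1, v1, a1

print(f"displacement after {nsteps * dt:.1f} s: u = {u:.5f} m")
```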

  8. Human Factors Assessment and Redesign of the ISS Respiratory Support Pack (RSP) Cue Card

    NASA Technical Reports Server (NTRS)

    Byrne, Vicky; Hudy, Cynthia; Whitmore, Mihriban; Smith, Danielle

    2007-01-01

    The Respiratory Support Pack (RSP) is a medical pack onboard the International Space Station (ISS) that contains much of the necessary equipment for providing aid to a conscious or unconscious crewmember in respiratory distress. Inside the RSP lid pocket is a 5.5 by 11 inch paper procedural cue card, which is used by a Crew Medical Officer (CMO) to set up the equipment and deliver oxygen to a crewmember. In training, crewmembers expressed concerns about the readability and usability of the cue card; consequently, updating the cue card was prioritized as an activity to be completed. The Usability Testing and Analysis Facility at the Johnson Space Center (JSC) evaluated the original layout of the cue card, and proposed several new cue card designs based on human factors principles. The approach taken for the assessment was an iterative process. First, in order to completely understand the issues with the RSP cue card, crewmember post training comments regarding the RSP cue card were taken into consideration. Over the course of the iterative process, the procedural information was reorganized into a linear flow after the removal of irrelevant (non-emergency) content. Pictures, color coding, and borders were added to highlight key components in the RSP to aid in quickly identifying those components. There were minimal changes to the actual text content. Three studies were conducted using non-medically trained JSC personnel (total of 34 participants). Non-medically trained personnel participated in order to approximate a scenario of limited CMO exposure to the RSP equipment and training (which can occur six months prior to the mission). In each study, participants were asked to perform two respiratory distress scenarios using one of the cue card designs to simulate resuscitation (using a mannequin along with the hardware). Procedure completion time, errors, and subjective ratings were recorded. The last iteration of the cue card featured a schematic of the RSP, colors, borders, and simplification of the flow of information. The time to complete the RSP procedure was reduced by approximately three minutes with the new design. In an emergency situation, three minutes significantly increases the probability of saving a life. In addition, participants showed the highest preference for this design. The results of the studies and the new design were presented to a focus group of astronauts, flight surgeons, medical trainers, and procedures personnel. The final cue card was presented to a medical control board and approved for flight. The revised RSP cue card is currently onboard ISS.

  9. Computer Conferencing and Electronic Mail.

    ERIC Educational Resources Information Center

    Kaye, Tony

    This paper discusses a number of problems associated with distance education methods used in adult education and training fields, including limited opportunities for dialogue and group interaction among students and between students and tutors; the expense of updating and modifying mass-produced print and audiovisual materials; and the relative…

  10. The Role of Federal Tax Policy in Employment Policy.

    ERIC Educational Resources Information Center

    Hovey, Harold A.

    Federal tax policy could affect employment policy through the following four provisions: the Targeted Jobs Tax Credit (TJTC), employment incentives in enterprise zone legislation, individual training accounts (ITAs), and employer reimbursement of employee educational expenses. The TJTC and enterprise zone proposals are both attempts to cause…

  11. English in the Workplace at Mohawk College.

    ERIC Educational Resources Information Center

    Jones, Jim

    1982-01-01

    Two different projects in ESL training for immigrant workers, one for a garment company and one for metal workers in a turbine plant, are described and compared. The programs were for Indochinese and Eastern Europeans, respectively, and involved no unions. Extensive preparation made both programs expensive. (MSE)

  12. 20 CFR 655.65 - Remedies for violations.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Remedies for violations. 655.65 Section 655.65 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR TEMPORARY... employees to pay for fees or expenses prohibited by § 655.22(j), or willfully made impermissible deductions...

  13. 28 CFR 0.76 - Specific functions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., promulgation of policies for travel, transportation, and relocation expenses, and issuance of necessary...) Approving per diem allowances for travel by airplane, train or boat outside the continental United States in accordance with paragraph 1-7.2 of the Federal Travel Regulations (FPMR 101-7). (d) Exercising the claims...

  14. Multi-Protocol LAN Design and Implementation: A Case Study.

    ERIC Educational Resources Information Center

    Hazari, Sunil

    1995-01-01

    Reports on the installation of a local area network (LAN) at East Carolina University. Topics include designing the network; computer labs and electronic mail; Internet connectivity; LAN expenses; and recommendations on planning, equipment, administration, and training. A glossary of networking terms is also provided. (AEF)

  15. Resources for global risk assessment: the International Toxicity Estimates for Risk (ITER) and Risk Information Exchange (RiskIE) databases.

    PubMed

    Wullenweber, Andrea; Kroner, Oliver; Kohrman, Melissa; Maier, Andrew; Dourson, Michael; Rak, Andrew; Wexler, Philip; Tomljanovic, Chuck

    2008-11-15

    The rate of chemical synthesis and use has outpaced the development of risk values and the resolution of risk assessment methodology questions. In addition, available risk values derived by different organizations may vary due to scientific judgments, mission of the organization, or use of more recently published data. Further, each organization derives values for a unique chemical list so it can be challenging to locate data on a given chemical. Two Internet resources are available to address these issues. First, the International Toxicity Estimates for Risk (ITER) database (www.tera.org/iter) provides chronic human health risk assessment data from a variety of organizations worldwide in a side-by-side format, explains differences in risk values derived by different organizations, and links directly to each organization's website for more detailed information. It is also the only database that includes risk information from independent parties whose risk values have undergone independent peer review. Second, the Risk Information Exchange (RiskIE) is a database of in progress chemical risk assessment work, and includes non-chemical information related to human health risk assessment, such as training modules, white papers and risk documents. RiskIE is available at http://www.allianceforrisk.org/RiskIE.htm, and will join ITER on National Library of Medicine's TOXNET (http://toxnet.nlm.nih.gov/). Together, ITER and RiskIE provide risk assessors essential tools for easily identifying and comparing available risk data, for sharing in progress assessments, and for enhancing interaction among risk assessment groups to decrease duplication of effort and to harmonize risk assessment procedures across organizations.

  16. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems are usually handled by removing the occluded part of both query samples and training samples before performing the recognition process. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitation of local features. Considering the aforementioned drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, an image processing and intersection-based clustering combination method is used for occlusion FR; (2) according to an accurate occlusion map, the new integrated facial images are recovered iteratively and put into a recognition process; and (3) the effectiveness on recognition accuracy of our method is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.

  17. Improving medical stores management through automation and effective communication.

    PubMed

    Kumar, Ashok; Cariappa, M P; Marwaha, Vishal; Sharma, Mukti; Arora, Manu

    2016-01-01

    Medical stores management in hospitals is a tedious and time-consuming chore, with limited resources tasked for the purpose and poor penetration of Information Technology. The process of automation is slow-paced due to various inherent factors and is being challenged by increasing inventory loads and escalating budgets for procurement of drugs. We carried out an in-depth case study at the Medical Stores of a tertiary care health care facility. An iterative six-step Quality Improvement (QI) process was implemented based on the Plan-Do-Study-Act (PDSA) cycle. The QI process was modified as per requirement to fit the medical stores management model. The results were evaluated after six months. After the implementation of the QI process, 55 drugs of the medical store inventory which had expired from 2009 onwards were replaced with fresh stock by the suppliers as a result of effective communication through upgraded database management. Various pending audit objections were dropped due to the streamlined documentation and processes. Inventory management improved drastically due to automation, with disposal orders being initiated four months prior to the expiry of drugs and correct demands being generated two months prior to depletion of stocks. The monthly expense summary of drugs was now being done within ten days of the closing month. Improving communication systems within the hospital with vendor database management and reaching out to clinicians is important. Automation of inventory management needs to be simple and user-friendly, utilizing existing hardware. Physical stores monitoring is indispensable, especially due to the scattered nature of stores. Staff training and standardized documentation protocols are the other keystones for optimal medical store management.

  18. A flexible, small positron emission tomography prototype for resource-limited laboratories

    NASA Astrophysics Data System (ADS)

    Miranda-Menchaca, A.; Martínez-Dávalos, A.; Murrieta-Rodríguez, T.; Alva-Sánchez, H.; Rodríguez-Villafuerte, M.

    2015-05-01

    Modern small-animal PET scanners typically consist of a large number of detectors along with complex electronics to provide tomographic images for research in the preclinical sciences that use animal models. These systems can be expensive, especially for resource-limited educational and academic institutions in developing countries. In this work we show that a small-animal PET scanner can be built with a relatively reduced budget while, at the same time, achieving relatively high performance. The prototype consists of four detector modules each composed of LYSO pixelated crystal arrays (individual crystal elements of dimensions 1 × 1 × 10 mm3) coupled to position-sensitive photomultiplier tubes. Tomographic images are obtained by rotating the subject to complete enough projections for image reconstruction. Image quality was evaluated for different reconstruction algorithms including filtered back-projection and iterative reconstruction with maximum likelihood-expectation maximization and maximum a posteriori methods. The system matrix was computed both with geometric considerations and by Monte Carlo simulations. Prior to image reconstruction, Fourier data rebinning was used to increase the number of lines of response used. The system was evaluated for energy resolution at 511 keV (best 18.2%), system sensitivity (0.24%), spatial resolution (best 0.87 mm), scatter fraction (4.8%) and noise equivalent count-rate. The system can be scaled-up to include up to 8 detector modules, increasing detection efficiency, and its price may be reduced as newer solid state detectors become available replacing the traditional photomultiplier tubes. Prototypes like this may prove to be very valuable for educational, training, preclinical and other biological research purposes.
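    The iterative reconstruction mentioned above follows, in the MLEM case, a simple multiplicative update; a toy sketch on a small random system matrix (the matrix, image size, and count levels are illustrative):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM update: x <- x / (A^T 1) * A^T (y / (A x)).

    A : system matrix (probability that a decay in voxel j is detected in LOR i)
    y : measured counts per line of response (LOR)
    """
    x = np.ones(A.shape[1])                    # uniform initial image
    sensitivity = A.sum(axis=0)                # A^T 1
    for _ in range(n_iter):
        forward = A @ x
        ratio = np.where(forward > 0.0, y / forward, 0.0)
        x = x / sensitivity * (A.T @ ratio)
    return x

rng = np.random.default_rng(0)
A = rng.random((64, 16)) * 0.1                 # toy system matrix: 64 LORs, 16 voxels
x_true = rng.random(16) * 100.0
y = rng.poisson(A @ x_true)                    # Poisson-distributed measured counts
x_hat = mlem(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```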

  19. Web-based therapist training on cognitive behavior therapy for anxiety disorders: a pilot study.

    PubMed

    Kobak, Kenneth A; Craske, Michelle G; Rose, Raphael D; Wolitsky-Taylor, Kate

    2013-06-01

    The need for clinicians to use evidence-based practices (such as cognitive behavior therapy [CBT]) is now well recognized. However, a gap exists between the need for empirically based treatments and their availability. This is due, in part, to a shortage of clinicians formally trained on CBT. To address this problem, we developed a Web-based therapist CBT training program to increase accessibility to this training. The program uses a two-step approach: an interactive multimedia online tutorial for didactic training on CBT concepts, followed by live remote observation through a videoconference of trainees conducting CBT, with immediate feedback in real time during critical moments to enhance learning through iterative guidance and practice. Thirty-nine clinicians from around the country completed the online didactic training and 22 completed the live remote training. Results found a significant increase in knowledge of CBT concepts and a significant increase in clinical skills, as judged by a blind rater. User satisfaction was high for both the online tutorial and the videoconference training. Utilization of CBT by trainees increased after training. Results support the acceptability and effectiveness of this Web-based approach to training.

  20. Committee-Based Active Learning for Surrogate-Assisted Particle Swarm Optimization of Expensive Problems.

    PubMed

    Wang, Handing; Jin, Yaochu; Doherty, John

    2017-09-01

    Function evaluations (FEs) of many real-world optimization problems are time or resource consuming, posing a serious challenge to the application of evolutionary algorithms (EAs) to solve these problems. To address this challenge, the research on surrogate-assisted EAs has attracted increasing attention from both academia and industry over the past decades. However, most existing surrogate-assisted EAs (SAEAs) either still require thousands of expensive FEs to obtain acceptable solutions, or are only applied to very low-dimensional problems. In this paper, a novel surrogate-assisted particle swarm optimization (PSO) inspired from committee-based active learning (CAL) is proposed. In the proposed algorithm, a global model management strategy inspired from CAL is developed, which searches for the best and most uncertain solutions according to a surrogate ensemble using a PSO algorithm and evaluates these solutions using the expensive objective function. In addition, a local surrogate model is built around the best solution obtained so far. Then, a PSO algorithm searches on the local surrogate to find its optimum and evaluates it. The evolutionary search using the global model management strategy switches to the local search once no further improvement can be observed, and vice versa. This iterative search process continues until the computational budget is exhausted. Experimental results comparing the proposed algorithm with a few state-of-the-art SAEAs on both benchmark problems up to 30 decision variables as well as an airfoil design problem demonstrate that the proposed algorithm is able to achieve better or competitive solutions with a limited budget of hundreds of exact FEs.
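
    The committee-based model management loop described above can be sketched in a few dozen lines. The toy below is not the paper's CAL-SAPSO algorithm: it uses a small random-forest committee as the surrogate ensemble, a bare-bones PSO over the surrogate, and an illustrative quadratic in place of the expensive function; all names and settings are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_f(x):                       # stand-in for a costly simulation
    return np.sum((x - 0.3) ** 2, axis=-1)

def pso_minimize(obj, dim, lb, ub, n_particles=20, iters=50):
    """Bare-bones particle swarm optimizer over a cheap surrogate objective."""
    rng = np.random.default_rng(1)
    x = rng.uniform(lb, ub, (n_particles, dim)); v = np.zeros_like(x)
    pbest, pval = x.copy(), obj(x)
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        val = obj(x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g

dim, lb, ub = 5, 0.0, 1.0
rng = np.random.default_rng(0)
X = rng.uniform(lb, ub, (20, dim)); y = expensive_f(X)       # initial expensive samples
for _ in range(10):
    committee = [RandomForestRegressor(n_estimators=30, random_state=s).fit(X, y)
                 for s in range(3)]
    mean_pred = lambda q: np.mean([m.predict(np.atleast_2d(q)) for m in committee], axis=0)
    disagree = lambda q: -np.std([m.predict(np.atleast_2d(q)) for m in committee], axis=0)
    x_best = pso_minimize(mean_pred, dim, lb, ub)            # most promising point
    x_uncertain = pso_minimize(disagree, dim, lb, ub)        # point the committee disagrees on most
    for xq in (x_best, x_uncertain):                         # spend two real evaluations per round
        X = np.vstack([X, xq]); y = np.append(y, expensive_f(xq))
print("best expensive value found:", y.min())
```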

  1. In-training assessment for specialist registrars: views of trainees and trainers in the Mersey Deanery

    PubMed Central

    Bache, John; Brown, Jeremy; Graham, David

    2002-01-01

    Annual review of specialist registrars and production of a record of in-training assessment (RITA) is a mandatory component of training that has attracted criticism. Mersey Deanery has established a system of review that includes wider evaluation of the trainee's needs and of training requirements. We conducted a survey to ascertain whether this broadened review process was thought beneficial. In one year 1093 questionnaires were distributed to trainees and trainers. 605 (81%) of 744 trainees and 309 (89%) of 349 trainers responded. At least 89% of both groups said that the procedure had been effective in reviewing the previous year and the most recent post and in identifying training requirements. More than 90% rated the overall process positively. Trainees particularly appreciated the advice on future training, on careers and on research. This form of review is expensive in consultant time but was valued by both trainees and trainers. PMID:12461150

  2. Pilot training: What can surgeons learn from it?

    PubMed

    Sommer, Kai-Jörg

    2014-03-01

    To provide healthcare professionals with an insight into training in aviation and its possible transfer into surgery. From research online and into company archives, relevant publications and information were identified. Current airline pilot training consists of two categories, basic training and type-rating. Training methods comprise classroom instruction, computer-based training and practical training, in either the aircraft or a flight-training device, which ranges from a fixed-base flight-training device to a full flight simulator. Pilot training not only includes technical and procedural instruction, but also training in non-technical skills like crisis management, decision-making, leadership and communication. Training syllabuses, training devices and instructors are internationally standardized and these standards are legally binding. Re-qualification and recurrent training are mandatory at all stages of a pilot's and instructor's career. Surgeons and pilots have much in common, i.e., they work in a 'real-time' three-dimensional environment under high physiological and psychological stress, operating expensive equipment, and the ultimate cost for error is measured in human lives. However, their training differs considerably. Transferring these well-tried aviation methods into healthcare will make surgical training more efficient, more effective and ultimately safer.

  3. Training Effectiveness and Cost Iterative Technique (TECIT). Volume 2. Cost Effectiveness Analysis

    DTIC Science & Technology

    1988-07-01

    Moving Tank in a Field Exercise. The task cluster identified as tank commander’s station/tank gunnery and the sub-task of firing an M250 grenade launcher...Firing Procedures, Task Number 171-126-1028. OBJECTIVE: Given an M1 tank with crew, a loaded M250 grenade launcher, and the commander’s station powered up

  4. An IEP for Me: Program Improvement for Rural Teachers of Students with Moderate to Severe Disability and Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Pennington, Robert C.

    2017-01-01

    Developing high-quality programming for students with moderate to severe disability (MSD) and/or autism spectrum disorder (ASD) can be challenging for teachers across the range of experience and training including those in rural contexts. This article outlines a process for the iterative refinement of teaching programs comprised of an evaluation…

  5. Mentoring for Innovation: Key Factors Affecting Participant Satisfaction in the Process of Collaborative Knowledge Construction in Teacher Training

    ERIC Educational Resources Information Center

    Dorner, Helga; Karpati, Andrea

    2010-01-01

    This paper presents data about the successful use of the Mentored Innovation Model for professional development for a group of Hungarian teachers (n = 23, n = 20 in two iterations), which was employed in the CALIBRATE project in order to enhance their ICT skills and pedagogical competences needed for participation in a multicultural, multilingual…

  6. Telehealth innovations in health education and training.

    PubMed

    Conde, José G; De, Suvranu; Hall, Richard W; Johansen, Edward; Meglan, Dwight; Peng, Grace C Y

    2010-01-01

    Telehealth applications are increasingly important in many areas of health education and training. In addition, they will play a vital role in biomedical research and research training by facilitating remote collaborations and providing access to expensive/remote instrumentation. In order to fulfill their true potential to leverage education, training, and research activities, innovations in telehealth applications should be fostered across a range of technology fronts, including online, on-demand computational models for simulation; simplified interfaces for software and hardware; software frameworks for simulations; portable telepresence systems; artificial intelligence applications to be applied when simulated human patients are not options; and the development of more simulator applications. This article presents the results of discussion on potential areas of future development, barriers to overcome, and suggestions to translate the promise of telehealth applications into a transformed environment of training, education, and research in the health sciences.

  7. A Cost-Effective Virtual Environment for Simulating and Training Powered Wheelchairs Manoeuvres.

    PubMed

    Headleand, Christopher J; Day, Thomas; Pop, Serban R; Ritsos, Panagiotis D; John, Nigel W

    2016-01-01

    Control of a powered wheelchair is often not intuitive, making training of new users a challenging and sometimes hazardous task. Collisions due to a lack of experience can result in injury for the user and other individuals. By conducting training activities in virtual reality (VR), we can potentially improve driving skills whilst avoiding the risks inherent to the real world. However, until recently VR technology has been expensive, limiting the commercial feasibility of a general training solution. We describe Wheelchair-Rift, a cost-effective prototype simulator that makes use of the Oculus Rift head mounted display and the Leap Motion hand tracking device. It has been assessed for face validity by a panel of experts from a local Posture and Mobility Service. Initial results augur well for our cost-effective training solution.

  8. Possibilities for measuring cotton in the field and outside the laboratory: for breeding, production, ginning, the warehouse

    USDA-ARS?s Scientific Manuscript database

    Cotton is often classified using high volume instrumentation. Although accurate, these laboratory systems require strict laboratory conditions and well-trained operators, and are expensive. Much interest has been shown in non-laboratory measurements in situations not related to classing or commercial...

  9. 75 FR 17930 - Privacy Act of 1974; Report of an Altered System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-08

    ... Training Program; Section 409(b) of the Health Professions Educational Assistance Act of 1976, (42 U.S.C..., performance awards, and adverse or disciplinary actions); commercial credit reports, educational data including tuition and other related education expenses; educational data including academic program and...

  10. 20 CFR 655.65 - Remedies for violations.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Remedies for violations. 655.65 Section 655.65 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR TEMPORARY... § 655.22(e) or willfully required employees to pay for fees or expenses prohibited by § 655.22(j), or...

  11. Contract Education: A Background Paper.

    ERIC Educational Resources Information Center

    California Community Colleges, Sacramento. Academic Senate.

    Today contract education is generally thought of as a program or course for which an employer is paying the full cost of instruction for customized training. Contract education can help faculty remain current, encourage industry to make equipment available to the college that might otherwise be too expensive, and provide employment opportunities…

  12. A DIY Ultrasonic Signal Generator for Sound Experiments

    ERIC Educational Resources Information Center

    Riad, Ihab F.

    2018-01-01

    Many physics departments around the world have electronic and mechanical workshops attached to them that can help build experimental setups and instruments for research and the training of undergraduate students. The workshops are usually run by experienced technicians and equipped with expensive lathing, computer numerical control (CNC) machines,…

  13. The Tax Treatment of Training and Educational Expenses. Background Paper No. 14.

    ERIC Educational Resources Information Center

    Quigley, John M.; Smolensky, Eugene

    For those students incurring direct educational expenditures that are high enough, the current personal income tax will discourage investment in human capital, assuming tax rates are essentially proportional over the relevant range. In all probability, however, any distortion between investment in human and physical capital is quantitatively…

  14. 20 CFR 617.28 - Transportation payments.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Transportation payments. 617.28 Section 617... ASSISTANCE FOR WORKERS UNDER THE TRADE ACT OF 1974 Reemployment Services § 617.28 Transportation payments. (a... transportation expenses if the training is outside the commuting area, but may not receive such assistance if...

  15. 20 CFR 617.28 - Transportation payments.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 3 2012-04-01 2012-04-01 false Transportation payments. 617.28 Section 617... ASSISTANCE FOR WORKERS UNDER THE TRADE ACT OF 1974 Reemployment Services § 617.28 Transportation payments. (a... transportation expenses if the training is outside the commuting area, but may not receive such assistance if...

  16. 20 CFR 617.28 - Transportation payments.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 3 2013-04-01 2013-04-01 false Transportation payments. 617.28 Section 617... ASSISTANCE FOR WORKERS UNDER THE TRADE ACT OF 1974 Reemployment Services § 617.28 Transportation payments. (a... transportation expenses if the training is outside the commuting area, but may not receive such assistance if...

  17. 20 CFR 617.28 - Transportation payments.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 3 2014-04-01 2014-04-01 false Transportation payments. 617.28 Section 617... ASSISTANCE FOR WORKERS UNDER THE TRADE ACT OF 1974 Reemployment Services § 617.28 Transportation payments. (a... transportation expenses if the training is outside the commuting area, but may not receive such assistance if...

  18. 20 CFR 617.28 - Transportation payments.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Transportation payments. 617.28 Section 617... ASSISTANCE FOR WORKERS UNDER THE TRADE ACT OF 1974 Reemployment Services § 617.28 Transportation payments. (a... transportation expenses if the training is outside the commuting area, but may not receive such assistance if...

  19. We Care for Kids: A Handbook for Foster Parents.

    ERIC Educational Resources Information Center

    Illinois State Dept. of Children and Family Services, Springfield.

    This handbook outlines essential information for foster parents under these basic headings: (1) legal rights and responsibilities of children, parents and foster parents; (2) recruitment, licensing, training, and evaluation of foster homes; (3) placement and removal of foster children; (4) payments and expenses; (5) medical care; (6)…

  20. Financial Decision Making during Economic Contraction: The Special Case of Community Colleges.

    ERIC Educational Resources Information Center

    Seater, Barbara

    Although faced with declining revenues and increasing enrollments, community colleges have also traditionally provided expensive support services for nontraditional students and maintained costly technological capacities to respond to the training needs of business. Financial decision-makers face unsettling questions as they attempt to achieve…

  1. 36 CFR 1210.25 - Revision of budget and program plans.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... may, at its option, restrict the transfer of funds among direct cost categories or programs, functions... or the objective of the project or program (even if there is no associated budget revision requiring... funds allotted for training allowances (direct payment to trainees) to other categories of expense. (8...

  2. 76 FR 63582 - Reporting Requirements for Positive Train Control Expenses and Investments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-13

    .... We propose to adopt supplemental schedules to the R-1 to require financial disclosure with respect to... request. In Class I Railroad & Financial Reporting--Transportation of Hazardous Materials, EP 681 (STB... Board's accounting and reporting requirements, including some of the same schedules raised by UP in this...

  3. Project Newgate: Morehead State University and Federal Youth Center Institutional Coordination and Cooperation.

    ERIC Educational Resources Information Center

    Norfleet, Morris L.

    An experimental prison program on a college campus is discussed. The purpose of the project, Project Newgate, is to find innovative ways of helping society's wrongdoers. Problems discussed are: salaries, travel expenses, communications, supplies, personnel training, admission, staff recruitment, and policy formation. (CK)

  4. 75 FR 56663 - Agency Information Collection (Quarterly Report of State Approving Agency Activities); Activity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-16

    ... INFORMATION CONTACT: Denise McLamb, Enterprise Records Service (005R1B), Department of Veterans Affairs, 810... a currently approved collection. Abstract: VA reimburses State Approving Agencies (SAAs) for expenses incurred in the approval and supervision of education and training programs. SAAs are required to...

  5. 48 CFR 17.106-1 - General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., labor learning, and other nonrecurring costs to be incurred by an “average” prime contractor or..., training, and transportation to and from the job site of a specialized work force, and unrealized labor learning. They shall not include any costs of labor or materials, or other expenses (except as indicated...

  6. Assessor Training: Its Effects on Criterion-Based Assessment in a Medical Context

    ERIC Educational Resources Information Center

    Pell, Godfrey; Homer, Matthew S.; Roberts, Trudie E.

    2008-01-01

    Increasingly, academic institutions are being required to improve the validity of the assessment process; unfortunately, often this is at the expense of reliability. In medical schools (such as Leeds), standardized tests of clinical skills, such as "Objective Structured Clinical Examinations" (OSCEs) are widely used to assess clinical…

  7. New Technologies Extend the Reach of Many College Fund Raisers.

    ERIC Educational Resources Information Center

    Nicklin, Julie L.

    1992-01-01

    Increasingly, colleges are using new technologies, often expensive, to improve fund-raising capacity among small-scale donors. Techniques include computerized screening of prospective donors based on personal information, automatic dialing and phone-bank worker training, and sophisticated direct-mail tactics. Concern about privacy and loss of the…

  8. 5 CFR 410.402 - Paying premium pay.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Paying premium pay. 410.402 Section 410... for Training Expenses § 410.402 Paying premium pay. (a) Prohibitions. Except as provided by paragraph (b) of this section, an agency may not use its funds, appropriated or otherwise available, to pay...

  9. Fetal head detection and measurement in ultrasound images by an iterative randomized Hough transform

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Tan, Jinglu; Floyd, Randall C.

    2004-05-01

    This paper describes an automatic method for measuring the biparietal diameter (BPD) and head circumference (HC) in ultrasound fetal images. A total of 217 ultrasound images were segmented using a K-means classifier, and the head skull was detected in 214 of the 217 cases by an iterative randomized Hough transform developed for detection of incomplete curves in images with strong noise, without user intervention. The automatic measurements were compared with conventional manual measurements by sonographers and a trained panel. The inter-run variations and differences between the automatic and conventional measurements were small compared with published inter-observer variations. The results showed that the automated measurements were as reliable as the expert measurements and more consistent. This method has great potential in clinical applications.
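
    As a compact illustration of the randomized Hough transform idea used above (here for circles rather than the elliptical skull cross-sections of the record, and without the iterative curve-removal step), the sketch below samples triples of edge points, fits the circle they determine, and votes in a coarse accumulator; the synthetic data and bin sizes are assumptions for the example.

```python
import numpy as np
from collections import Counter

def circle_from_3pts(p1, p2, p3):
    # Solve |c - p1| = |c - p2| = |c - p3| as a 2x2 linear system for the centre c.
    A = 2 * np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    if abs(np.linalg.det(A)) < 1e-9:          # nearly collinear sample, skip
        return None
    c = np.linalg.solve(A, b)
    return c[0], c[1], np.linalg.norm(c - p1)

def randomized_hough_circle(edge_pts, n_samples=5000, rng=np.random.default_rng(0)):
    votes = Counter()
    for _ in range(n_samples):
        idx = rng.choice(len(edge_pts), 3, replace=False)
        fit = circle_from_3pts(*edge_pts[idx])
        if fit is not None:
            votes[tuple(np.round(fit).astype(int))] += 1   # coarse accumulator bin
    return votes.most_common(1)[0]

# Noisy synthetic edge points on a circle of radius 30 centred at (50, 40)
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)
pts = np.c_[50 + 30 * np.cos(theta), 40 + 30 * np.sin(theta)] + rng.normal(0, 0.3, (300, 2))
print(randomized_hough_circle(pts))   # expect a winning bin near (50, 40, 30)
```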

  10. Nonlinear Motion Tracking by Deep Learning Architecture

    NASA Astrophysics Data System (ADS)

    Verma, Arnav; Samaiya, Devesh; Gupta, Karunesh K.

    2018-03-01

    In the world of Artificial Intelligence, object motion tracking is one of the major problems. Extensive research is being carried out to track people in crowds. This paper presents a unique technique for nonlinear motion tracking in the absence of prior knowledge of the nature of the nonlinear path that the object being tracked may follow. We achieve this by first obtaining the centroid of the object and then using the centroid as the current example for a recurrent neural network trained using real-time recurrent learning. We have tweaked the standard algorithm slightly and have accumulated the gradient over a few previous iterations instead of using just the current iteration, as is the norm. We show that for a single object, such a recurrent neural network is highly capable of approximating the nonlinearity of its path.
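
    The tweak described above, accumulating the gradient over the last few steps before each weight update, can be sketched with a small recurrent network on a synthetic nonlinear path. This is an illustrative PyTorch toy, not the authors' real-time recurrent learning implementation; the window size K, network sizes and trajectory are assumptions.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
K = 5                                   # number of recent transitions whose gradients are accumulated
rnn = nn.RNN(input_size=2, hidden_size=16, batch_first=True)
head = nn.Linear(16, 2)
opt = torch.optim.SGD(list(rnn.parameters()) + list(head.parameters()), lr=1e-2)

t = torch.linspace(0, 6 * math.pi, 400)
path = torch.stack([torch.sin(t), torch.sin(2 * t) * torch.cos(t)], dim=1)  # nonlinear 2-D "centroid" path

h = torch.zeros(1, 1, 16)
buffer = []                             # (centroid, next centroid, hidden state) history
for step in range(len(path) - 1):
    x, target = path[step].view(1, 1, 2), path[step + 1].view(1, 2)
    buffer = (buffer + [(x, target, h.detach())])[-K:]
    opt.zero_grad()
    for xb, tb, hb in buffer:           # accumulate gradients over the last K transitions
        out, _ = rnn(xb, hb)
        loss = nn.functional.mse_loss(head(out[:, -1]), tb)
        loss.backward()
    opt.step()
    with torch.no_grad():               # advance the hidden state along the real trajectory
        _, h = rnn(x, h)
print("final one-step prediction error:", loss.item())
```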

  11. Cognitive representation of "musical fractals": Processing hierarchy and recursion in the auditory domain.

    PubMed

    Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh

    2017-04-01

    The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somehow independent from melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  12. An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level

    PubMed Central

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios in which each approach is suitable are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352

  13. An iterative approach for the optimization of pavement maintenance management at the network level.

    PubMed

    Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor

    2014-01-01

    Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios in which each approach is suitable are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.

  14. Wearable Sensor Data Classification for Human Activity Recognition Based on an Iterative Learning Framework.

    PubMed

    Davila, Juan Carlos; Cretu, Ana-Maria; Zaremba, Marek

    2017-06-07

    The design of multiple human activity recognition applications in areas such as healthcare, sports and safety relies on wearable sensor technologies. However, when making decisions based on the data acquired by such sensors in practical situations, several factors related to sensor data alignment, data losses, and noise, among other experimental constraints, deteriorate data quality and model accuracy. To tackle these issues, this paper presents a data-driven iterative learning framework to classify human locomotion activities such as walk, stand, lie, and sit, extracted from the Opportunity dataset. Data acquired by twelve 3-axial acceleration sensors and seven inertial measurement units are initially de-noised using a two-stage consecutive filtering approach combining a band-pass Finite Impulse Response (FIR) and a wavelet filter. A series of statistical parameters are extracted from the kinematical features, including the principal components and singular value decomposition of roll, pitch, yaw and the norm of the axial components. The novel interactive learning procedure is then applied in order to minimize the number of samples required to classify human locomotion activities. Only those samples that are most distant from the centroids of data clusters, according to a measure presented in the paper, are selected as candidates for the training dataset. The newly built dataset is then used to train an SVM multi-class classifier. The latter will produce the lowest prediction error. The proposed learning framework ensures a high level of robustness to variations in the quality of input data, while only using a much lower number of training samples and therefore a much shorter training time, which is an important consideration given the large size of the dataset.
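
    The sample-selection step described above (keeping only the points most distant from the cluster centroids as training candidates for the SVM) can be illustrated with scikit-learn on synthetic features; the feature dimensions, cluster count and sample budget below are assumptions, not the Opportunity-dataset pipeline of the record.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic 6-D features for four "activities" stand in for the filtered kinematic features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 1.0, (300, 6)) for c in (0, 3, 6, 9)])
y = np.repeat([0, 1, 2, 3], 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_tr)
dist = np.linalg.norm(X_tr - km.cluster_centers_[km.labels_], axis=1)
far_idx = np.argsort(dist)[-200:]                 # keep only the samples farthest from their centroid
clf = SVC(kernel="rbf", C=10).fit(X_tr[far_idx], y_tr[far_idx])
print("test accuracy with 200 selected training samples:", clf.score(X_te, y_te))
```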

  15. Assisted annotation of medical free text using RapTAT

    PubMed Central

    Gobbel, Glenn T; Garvin, Jennifer; Reeves, Ruth; Cronin, Robert M; Heavirland, Julia; Williams, Jenifer; Weaver, Allison; Jayaramaraja, Shrimalini; Giuse, Dario; Speroff, Theodore; Brown, Steven H; Xu, Hua; Matheny, Michael E

    2014-01-01

    Objective To determine whether assisted annotation using interactive training can reduce the time required to annotate a clinical document corpus without introducing bias. Materials and methods A tool, RapTAT, was designed to assist annotation by iteratively pre-annotating probable phrases of interest within a document, presenting the annotations to a reviewer for correction, and then using the corrected annotations for further machine learning-based training before pre-annotating subsequent documents. Annotators reviewed 404 clinical notes either manually or using RapTAT assistance for concepts related to quality of care during heart failure treatment. Notes were divided into 20 batches of 19–21 documents for iterative annotation and training. Results The number of correct RapTAT pre-annotations increased significantly and annotation time per batch decreased by ∼50% over the course of annotation. Annotation rate increased from batch to batch for assisted but not manual reviewers. Pre-annotation F-measure increased from 0.5 to 0.6 to >0.80 (relative to both assisted reviewer and reference annotations) over the first three batches and more slowly thereafter. Overall inter-annotator agreement was significantly higher between RapTAT-assisted reviewers (0.89) than between manual reviewers (0.85). Discussion The tool reduced workload by decreasing the number of annotations needing to be added and helping reviewers to annotate at an increased rate. Agreement between the pre-annotations and reference standard, and agreement between the pre-annotations and assisted annotations, were similar throughout the annotation process, which suggests that pre-annotation did not introduce bias. Conclusions Pre-annotations generated by a tool capable of interactive training can reduce the time required to create an annotated document corpus by up to 50%. PMID:24431336
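
    The assist-and-retrain loop described above can be skeletonized as follows. This is not RapTAT: a linear classifier with incremental updates stands in for its machine-learning component, the documents are synthetic, and the reviewer's corrections are simulated with gold labels; only the batch-wise pre-annotate / correct / retrain structure mirrors the record.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier

# Synthetic "clinical notes" and gold labels stand in for the annotated corpus.
docs = [f"note {i} mentions ace inhibitor" if i % 2 else f"note {i} mentions beta blocker"
        for i in range(200)]
gold = [1 if i % 2 else 0 for i in range(200)]

vec = TfidfVectorizer().fit(docs)
clf = SGDClassifier(random_state=0)
batch_size, seen = 20, 0
for start in range(0, len(docs), batch_size):
    batch = docs[start:start + batch_size]
    X = vec.transform(batch)
    if seen:                                             # pre-annotate once the model has seen data
        pre = clf.predict(X)
        acc = (pre == gold[start:start + batch_size]).mean()
        print(f"batch {start // batch_size}: pre-annotation accuracy {acc:.2f}")
    corrected = gold[start:start + batch_size]           # reviewer-corrected labels (simulated)
    clf.partial_fit(X, corrected, classes=[0, 1])        # incremental retraining on the corrections
    seen += len(batch)
```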

  16. Development and evaluation of a wheelchair service provision training of trainers programme

    PubMed Central

    2017-01-01

    Background In many countries, availability of basic training and continued professional development programmes in wheelchair services is limited. Therefore, many health professionals lack access to formal training opportunities and new approaches to improve wheelchair service provision. To address this need, the World Health Organization (WHO) developed the WHO Wheelchair Service Training of Trainers Programme (WSTPt), aiming to increase the number of trainers who are well prepared to deliver the WHO Wheelchair Service Training Packages. Despite these efforts, there was no recognised method to prepare trainers to facilitate these training programmes in a standardised manner. Objectives To understand if the WSTPt is an effective mechanism to train aspiring wheelchair service provision trainers. Method An action research study was conducted using a mixed-methods approach to data collection and analysis to integrate feedback from questionnaires and focus groups from three WHO WSTPt pilots. Results Trainees were satisfied with the WHO WSTPt and the iterative process appears to have helped to improve each subsequent pilot and the final training package. Conclusion The WHO WSTPt is an effective mechanism to train wheelchair service provision trainers. This programme has potential to increase the number of trainees and may increase the number of qualified service providers. PMID:28936423

  17. Using a virtual training program to train community neurologist on EEG reading skills.

    PubMed

    Ochoa, Juan; Naritoku, Dean K

    2012-01-01

    EEG training requires iterative exposure to different patterns with continuous feedback from the instructor. This training is traditionally acquired through a fellowship program, but only 28% of neurologists in training plan to do a fellowship in EEG. The purpose of this study was to determine the value of online EEG training to improve EEG knowledge among general neurologists. The participants were general neurologists invited through bulk e-mail who paid a fee to enroll in the virtual EEG program. A 40-question pretest exam was performed before training. The training included 4 online learning units about basic EEG principles and 40 online clinical EEG tutorials. In addition, there were weekly live teleconferences for Q&A sessions. At the end of the program, the participants were asked to complete a posttest exam. Fifteen of 20 participants successfully completed the program and took both the pre- and posttest exams. All the subjects scored significantly higher in the posttest compared to their baseline score. The average score in the pretest evaluation was 61.7% and the posttest average was 87.8% (p = .0002, two-tailed). Virtual EEG training can improve EEG knowledge among community neurologists.

  18. Availability and Diversity of Training Programs for Responders to International Disasters and Complex Humanitarian Emergencies

    PubMed Central

    Jacquet, Gabrielle A.; Obi, Chioma C.; Chang, Mary P.; Bayram, Jamil D.

    2014-01-01

    Introduction: Volunteers and members of relief organizations increasingly seek formal training prior to international field deployment. This paper identifies training programs for personnel responding to international disasters and complex humanitarian emergencies (CHE), and provides concise information, where available, regarding the founding organization, year established, location, cost, duration of training, participants targeted, and the content of each program. Methods: An environmental scan was conducted through a combination of a peer-reviewed literature search and an open Internet search for the training programs. Literature search engines included the EMBASE, Cochrane, Scopus, PubMed, and Web of Science databases, using the search terms "international," "disaster," "complex humanitarian emergencies," "training," and "humanitarian response". Both searches were conducted between January 2, 2013 and September 12, 2013. Results: 14 peer-reviewed articles mentioned or described eight training programs, while the open Internet search revealed 13 additional programs. In total, twenty-one training programs were identified as currently available for responders to international disasters and CHE. Each of the programs identified has different goals and objectives, duration, expenses, targeted trainees and modules. Seven programs (33%) are free of charge and four programs (19%) focus on the mental aspects of disasters. The mean duration of each training program is 5 to 7 days. Fourteen of the trainings are conducted in multiple locations (66%), two in Cuba (9%) and two in Australia (9%). The cost, reported in US dollars, ranges from $100 to $2,400, with a mean cost of $480 and a median cost of $135. Most of the programs are open to the public, but some are available by invitation only, such as the International Mobilization Preparation for Action (IMPACT) and the United Nations Humanitarian Civil-Military Coordination (UN-CMCoord) Field Course. Conclusions: A variety of training programs are available for responders to disasters and complex humanitarian emergencies. These programs vary in their objectives, audiences, modules, geographical locations, eligibility and financial cost. This paper presents an overview of available programs and serves as a resource for potential responders interested in capacity-building training prior to deployment. PMID:24987573

  19. Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency

    NASA Astrophysics Data System (ADS)

    Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu

    2018-03-01

    Various computational approaches from rule-based to model-based methods exist to place Sub-Resolution Assist Features (SRAF) in order to increase process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between time of development, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model which is sensitive to the pattern placement on the native simulation grid, and can be impacted by such grid dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve the required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full-chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural-network-generated SRAF guidance map is then used to place SRAFs on the full chip. This is different from our existing full-chip MB-SRAF approach, which utilizes an SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges. In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining consistency of SRAF placement. We describe the current status of this machine learning assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.
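
    A toy version of the image-to-image learning step (target layout in, continuous guidance map out) is sketched below. It is not the Tachyon/DCNN model: the three-layer network, random "layouts" and smoothed stand-in CTM targets are all assumptions, intended only to show the kind of training setup the record describes.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(                                 # tiny fully-convolutional layout-to-map model
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),    # guidance map values in [0, 1]
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

layouts = (torch.rand(32, 1, 64, 64) > 0.8).float()             # stand-in binary target layouts
ctm = torch.nn.functional.avg_pool2d(layouts, 5, stride=1, padding=2)  # stand-in "CTM" targets

for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(layouts), ctm)
    loss.backward()
    opt.step()
print("training loss:", loss.item())
sraf_guidance = net(layouts[:1])                     # guidance map for one clip
```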

  20. Indirect iterative learning control for a discrete visual servo without a camera-robot model.

    PubMed

    Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan

    2007-08-01

    This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
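
    For readers unfamiliar with iterative learning control, the toy below shows the basic idea of refining a control signal over repeated trials, here a plain P-type ILC update on a scalar discrete plant. The record's controller is considerably more elaborate (indirect ILC with neural-network estimates of an unknown, time-varying image Jacobian); the plant, gain and trial count here are illustrative choices.

```python
import numpy as np

A, B, C = 0.2, 0.5, 1.0                  # plant: x[t+1] = A x[t] + B u[t],  y[t] = C x[t+1]
T, gamma = 50, 1.0                       # trial length and ILC learning gain
y_ref = np.sin(np.linspace(0, 2 * np.pi, T))   # desired output profile, identical every trial

u = np.zeros(T)                          # control input, refined from trial to trial
for trial in range(1, 16):
    x, y = 0.0, np.zeros(T)
    for t in range(T):                   # run one complete repetition of the task
        x = A * x + B * u[t]
        y[t] = C * x
    e = y_ref - y
    u = u + gamma * e                    # P-type ILC update: reuse the whole error profile
    if trial in (1, 5, 15):
        print(f"trial {trial:2d}: max tracking error = {np.abs(e).max():.5f}")
```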

  1. Integrating musculoskeletal sonography into rehabilitation: Therapists’ experiences with training and implementation

    PubMed Central

    Gray, Julie McLaughlin; Frank, Gelya; Roll, Shawn C.

    2018-01-01

    Musculoskeletal sonography is rapidly extending beyond radiology; however, best practices for successful integration into new practice contexts are unknown. This study explored non-physician experiences with the processes of training and integration of musculoskeletal sonography into rehabilitation. Qualitative data were captured through multiple sources and iterative thematic analysis was used to describe two occupational therapists’ experiences. The dominant emerging theme was competency, in three domains: technical, procedural and analytical. Additionally, three practice considerations were illuminated: (1) understanding imaging within the dynamics of rehabilitation, (2) navigating nuances of interprofessional care, and (3) implications for post-professional training. Findings indicate that sonography training for rehabilitation providers requires multi-level competency development and consideration of practice complexities. These data lay a foundation on which to explore and develop best practices for incorporating sonographic imaging into the clinic as a means for engaging clients as active participants in the rehabilitation process to improve health and rehabilitation outcomes. PMID:28830315

  2. ANNETTE Project: Contributing to The Nuclearization of Fusion

    NASA Astrophysics Data System (ADS)

    Ambrosini, W.; Cizelj, L.; Dieguez Porras, P.; Jaspers, R.; Noterdaeme, J.; Scheffer, M.; Schoenfelder, C.

    2018-01-01

    The ANNETTE Project (Advanced Networking for Nuclear Education and Training and Transfer of Expertise) is well underway, and one of its work packages addresses the design, development and implementation of nuclear fusion training. A systematic approach is used that leads to the development of new training courses, based on identified nuclear competences needs of the work force of (future) fusion reactors and on the current availability of suitable training courses. From interaction with stakeholders involved in the ITER design and construction or the JET D-T campaign, it became clear that the lack of nuclear safety culture awareness already has an impact on current projects. Through the collaboration between the European education networks in fission (ENEN) and fusion (FuseNet) in the ANNETTE project, this project is well positioned to support the development of nuclear competences for ongoing and future fusion projects. Thereby it will make a clear contribution to the realization of fusion energy.

  3. Not only what you do, but how you do it: working with health care practitioners on gender equality.

    PubMed

    Fonn, Sharon

    2003-01-01

    The Women's Health Project, School of Public Health, Johannesburg, South Africa, has for more than the past decade been running various gender and health training courses for participants from at least 20 different countries. In this paper I interrogate the motivation behind and methods of the gender training and offer three prompts that assist facilitators in promoting participants' understanding of gender theory. (1) Does this program/action take gender into account? (2) Does this program/action challenge gender norms? (3) Does this program/action promote women's autonomy? Examples of training sessions are described to illustrate how our methods iterate with the content of the courses and, in particular, how the training links to actions practitioners may engage in to redress gender inequalities at work. I go on to argue that both structural and inter-relational aspects of health programs are important in addressing gender and health concerns and discuss the impact of such training on participants and health services.

  4. Determination of an effective scoring function for RNA-RNA interactions with a physics-based double-iterative method.

    PubMed

    Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You

    2018-05-18

    RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical to determine their complex structures and understand the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvented the reference state problem in knowledge-based scoring functions by updating the potentials through iteration and also overcame the decoy-dependent limitation in previous iterative methods by constructing the decoys iteratively. The derived scoring function, which is referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. It was shown that for bound docking, our scoring function DITScoreRR obtained the excellent success rates of 90% and 98.3% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved the good success rates of 53.3% and 71.7% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for our ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.

  5. A computer-based training system combining virtual reality and multimedia

    NASA Technical Reports Server (NTRS)

    Stansfield, Sharon A.

    1993-01-01

    Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment. The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.

  6. Sea ice classification using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error, while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.

  7. A Priori Estimation of Organic Reaction Yields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emami, Fateme S.; Vahid, Amir; Wylie, Elizabeth K.

    2015-07-21

    A thermodynamically guided calculation of free energies of substrate and product molecules allows for the estimation of the yields of organic reactions. The non-ideality of the system and the solvent effects are taken into account through the activity coefficients calculated at the molecular level by perturbed-chain statistical associating fluid theory (PC-SAFT). The model is iteratively trained using a diverse set of reactions with yields that have been reported previously. This trained model can then estimate a priori the yields of reactions not included in the training set with an accuracy of ca. ±15 %. This ability has the potential to translate into significant economic savings through the selection and then execution of only those reactions that can proceed in good yields.
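
    The link between computed reaction free energies and predicted yields can be illustrated with a generic ideal-solution estimate (not PC-SAFT, and without the activity-coefficient and iterative-training machinery of the record): for A + B ⇌ C starting from 1 M of each reactant, the equilibrium extent follows from K = exp(-ΔG/RT).

```python
import numpy as np
from scipy.optimize import brentq

R, T = 8.314, 298.15                     # gas constant in J/(mol K), temperature in K

def equilibrium_yield(dG_kJ_per_mol):
    """Equilibrium yield of A + B <=> C from the reaction free energy (ideal activities)."""
    K = np.exp(-dG_kJ_per_mol * 1e3 / (R * T))
    # Concentrations relative to a 1 M standard state, so K is dimensionless and the
    # extent of reaction x in (0, 1) satisfies x / (1 - x)^2 = K.
    return brentq(lambda x: x - K * (1 - x) ** 2, 0.0, 1.0 - 1e-12)

for dG in (-20, -5, 0, 5):               # reaction free energies in kJ/mol
    print(f"dG = {dG:+3d} kJ/mol -> predicted yield ~ {100 * equilibrium_yield(dG):.1f}%")
```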

  8. Virtual reality and gaming systems to improve walking and mobility for people with musculoskeletal and neuromuscular conditions.

    PubMed

    Deutsch, Judith E

    2009-01-01

    Improving walking for individuals with musculoskeletal and neuromuscular conditions is an important aspect of rehabilitation. The capabilities of clinicians who address these rehabilitation issues could be augmented with innovations such as virtual reality gaming based technologies. The chapter provides an overview of virtual reality gaming based technologies currently being developed and tested to improve motor and cognitive elements required for ambulation and mobility in different patient populations. Included as well is a detailed description of a single VR system, consisting of the rationale for development and iterative refinement of the system based on clinical science. These concepts include: neural plasticity, part-task training, whole task training, task specific training, principles of exercise and motor learning, sensorimotor integration, and visual spatial processing.

  9. Perceptions of the roles of social networking in simulation augmented medical education and training.

    PubMed

    Martin, Rob; Rojas, David; Cheung, Jeffrey J H; Weber, Bryce; Kapralos, Bill; Dubrowski, Adam

    2013-01-01

    Simulation-augmented education and training (SAET) is an expensive educational tool that may be facilitated through social networking technologies or Computer Supported Collaborative Learning (CSCL). This study examined the perceptions of medical undergraduates participating in SAET for knot tying skills to identify perceptions and barriers to implementation of social networking technologies within a broader medical education curriculum. The majority of participants (89%) found CSCL aided their learning of the technical skill and identified privacy and accessibility as major barriers to the tools implementation.

  10. Virtual worlds and team training.

    PubMed

    Dev, Parvati; Youngblood, Patricia; Heinrichs, W Leroy; Kusumoto, Laura

    2007-06-01

    An important component of all emergency medicine residency programs is managing trauma effectively as a member of an emergency medicine team, but practice on live patients is often impractical and mannequin-based simulators are expensive and require all trainees to be physically present at the same location. This article describes a project to develop and evaluate a computer-based simulator (the Virtual Emergency Department) for distance training in teamwork and leadership in trauma management. The virtual environment provides repeated practice opportunities with life-threatening trauma cases in a safe and reproducible setting.

  11. The international charter for space and major disasters--project manager training

    USGS Publications Warehouse

    Jones, Brenda

    2011-01-01

    Regional Project Managers for the Charter are developed through training courses, which typically last between 3 and 5 days and are held in a central location for participants. These courses have resulted in increased activations and broader use of Charter data and information by local emergency management authorities. Project Managers are nominated according to either their regional affiliation or their specific areas of expertise. A normal activation takes 2 to 3 weeks to complete, with all related expenses the responsibility of the PM's home agency.

  12. Building an intelligent tutoring system for procedural domains

    NASA Technical Reports Server (NTRS)

    Warinner, Andrew; Barbee, Diann; Brandt, Larry; Chen, Tom; Maguire, John

    1990-01-01

    Jobs that require complex skills that are too expensive or dangerous to develop often use simulators in training. The strength of a simulator is its ability to mimic the 'real world', allowing students to explore and experiment. A good simulation helps the student develop a 'mental model' of the real world. The closer the simulation is to 'real life', the less difficulties there are transferring skills and mental models developed on the simulator to the real job. As graphics workstations increase in power and become more affordable they become attractive candidates for developing computer-based simulations for use in training. Computer based simulations can make training more interesting and accessible to the student.

  13. Towards an open, collaborative, reusable framework for sharing hands-on bioinformatics training workshops

    PubMed Central

    Revote, Jerico; Suchecki, Radosław; Tyagi, Sonika; Corley, Susan M.; Shang, Catherine A.; McGrath, Annette

    2017-01-01

    Abstract There is a clear demand for hands-on bioinformatics training. The development of bioinformatics workshop content is both time-consuming and expensive. Therefore, enabling trainers to develop bioinformatics workshops in a way that facilitates reuse is becoming increasingly important. The most widespread practice for sharing workshop content is through making PDF, PowerPoint and Word documents available online. While this effort is to be commended, such content is usually not so easy to reuse or repurpose and does not capture all the information required for a third party to rerun a workshop. We present an open, collaborative framework for developing and maintaining, reusable and shareable hands-on training workshop content. PMID:26984618

  14. Moving beyond Smile Sheets: A Case Study on the Evaluation and Iterative Improvement of an Online Faculty Development Program

    ERIC Educational Resources Information Center

    Chen, Ken-Zen; Lowenthal, Patrick R.; Bauer, Christine; Heaps, Allan; Nielsen, Crystal

    2017-01-01

    Institutions of higher education are struggling to meet the growing demand for online courses and programs, partly because many faculty lack experience teaching online. The eCampus Quality Instruction Program (eQIP) is an online faculty development program developed to train faculty to design and teach fully online courses. The purpose of this…

  15. Operational Evaluation of Self-Paced Instruction in U.S. Army Training.

    DTIC Science & Technology

    1979-01-01

    one iteration of each course, and the on-going refinement and adjustment of managerial techniques. Research Approach A quasi-experimental approach was...research design employed experimental and control groups, posttest only with non-random groups. The design dealt with the six major areas identified as...course on Interpersonal Communications were conducted in the conventional, group-paced manner. Experimental course materials. Wherever possible, existing

  16. How to Motivate Students to Work in the Laboratory: A New Approach for an Electrical Machines Laboratory

    ERIC Educational Resources Information Center

    Saavedra Montes, A. J.; Botero Castro, H. A.; Hernandez Riveros, J. A.

    2010-01-01

    Many laboratory courses have become iterative processes in which students only seek to meet the requirements and pass the course. Some students believe these courses are boring and do not give them training as engineers. To provide a solution to the poor motivation of students in laboratories with few resources, this work proposes the method…

  17. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

    Studying numerically the steady state of a nuclear core reactor is expensive, in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method, implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous to the discrete point of view, and we give some numerical results in a realistic highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.

  18. Squeeze film dampers with oil hole feed

    NASA Technical Reports Server (NTRS)

    Chen, P. Y. P.; Hahn, E. J.

    1994-01-01

    To improve the damping capability of squeeze film dampers, oil hole feed rather than circumferential groove feed is a practical proposition. However, circular orbit response can no longer be assumed, significantly complicating the design analysis. This paper details a feasible transient solution procedure for such dampers, with particular emphasis on the additional difficulties due to the introduction of oil holes. It is shown how a cosine power series solution may be utilized to evaluate the oil hole pressure contributions, enabling appropriate tabular data to be compiled. The solution procedure is shown to be applicable even in the presence of flow restrictors, albeit at the expense of introducing an iteration at each time step. Though not of primary interest, the procedure is also applicable to dynamically loaded journal bearings with oil hole feed.

  19. User interface support

    NASA Technical Reports Server (NTRS)

    Lewis, Clayton; Wilde, Nick

    1989-01-01

    Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.

  20. The iterative thermal emission method: A more implicit modification of IMC

    DOE PAGES

    Long, A. R.; Gentile, N. A.; Palmer, T. S.

    2014-08-19

    For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of “pseudo-scattering” introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem.

  1. A novel and generalized approach in the inversion of geoelectrical resistivity data using Artificial Neural Networks (ANN)

    NASA Astrophysics Data System (ADS)

    Raj, A. Stanley; Srinivas, Y.; Oliver, D. Hudson; Muthuraj, D.

    2014-03-01

    The non-linear apparent resistivity problem in subsurface studies of the earth recovers the model parameters, the resistivity and thickness of the individual subsurface layers, from synthetic data used to train Artificial Neural Networks (ANN). Here we used a single-layer feed-forward neural network with a fast back-propagation learning algorithm. Once the back-propagation network is properly trained on the synthetic data, it yields the resistivity and thickness of the subsurface layer model for field resistivity data. During training, the weights and biases of the network are iteratively adjusted to improve the network performance function. With adequate training, errors are minimized and the best result is obtained from the artificial neural network. The network is trained with a large number of VES data sets, and the trained network is demonstrated on field data. The accuracy of the inversion depends upon the amount of training data. With this novel and specially designed algorithm, the interpretation of vertical electrical soundings has been carried out successfully, yielding a more accurate layer model.
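    As a rough illustration of the kind of network described above, the sketch below trains a single-hidden-layer feed-forward regressor to map apparent-resistivity curves to layer parameters. It uses scikit-learn's MLPRegressor as a stand-in for the custom back-propagation network in the paper, and random placeholder arrays where the authors would use forward-modelled synthetic soundings; the array shapes, parameter ranges and variable names are illustrative assumptions only.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Placeholder synthetic training set: each row of X is an apparent-resistivity
    # sounding curve sampled at 20 electrode spacings; each row of y holds the
    # layer parameters [rho1, rho2, rho3, h1, h2] of a three-layer model.
    # In practice X would come from a 1-D resistivity forward model, not random draws.
    rng = np.random.default_rng(0)
    X = rng.lognormal(mean=3.0, sigma=0.5, size=(500, 20))
    y = rng.uniform([10, 50, 100, 2, 10], [100, 500, 1000, 20, 80], size=(500, 5))

    # Single hidden layer trained by back-propagation, as in the abstract.
    net = MLPRegressor(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
    net.fit(np.log10(X), y)

    # "Field" curve: here just another random draw standing in for measured data.
    field_curve = rng.lognormal(mean=3.0, sigma=0.5, size=(1, 20))
    print(net.predict(np.log10(field_curve)))  # estimated resistivities and thicknesses
    ```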

  2. Patient adaptive control of end-effector based gait rehabilitation devices using a haptic control framework.

    PubMed

    Hussein, Sami; Kruger, Jörg

    2011-01-01

    Robot-assisted training has proven beneficial as an extension of conventional therapy to improve rehabilitation outcome. Further facilitation of this positive impact is expected from the application of cooperative control algorithms that increase the patient's contribution to the training effort according to his or her level of ability. This paper presents an approach to cooperative training for end-effector-based gait rehabilitation devices, and thereby provides a basis for establishing sophisticated cooperative control methods in this class of devices. It uses a haptic control framework to synthesize and render complex, task-specific training environments, which are composed of polygonal primitives. Training assistance is integrated as part of the environment into the haptic control framework. A compliant window is moved along a nominal training trajectory, compliantly guiding and supporting the foot motion. The level of assistance is adjusted via the stiffness of the moving window. Further, an iterative learning algorithm is used to automatically adjust this assistance level. Stable haptic rendering of the dynamic training environments and adaptive movement assistance have been evaluated in two example training scenarios: treadmill walking and stair climbing. Data from preliminary trials with one healthy subject are provided in this paper. © 2011 IEEE
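    The abstract does not give the adaptation law, so the following is only a minimal sketch of the idea of adjusting the assistance level (the stiffness of the moving window) from trial to trial; the error threshold, gain and stiffness limits are invented for illustration.

    ```python
    def update_assistance(stiffness, mean_tracking_error, target_error=0.02,
                          gain=500.0, k_min=0.0, k_max=2000.0):
        """Hypothetical iterative-learning rule: after each trial, raise the guidance
        stiffness (N/m) when the foot deviated from the nominal trajectory by more
        than the target error (m), and lower it when the patient tracked well, so
        the assistance level follows the patient's ability."""
        stiffness += gain * (mean_tracking_error - target_error)
        return min(max(stiffness, k_min), k_max)

    # Example: a trial with 5 cm mean deviation increases the assistance.
    k = update_assistance(stiffness=300.0, mean_tracking_error=0.05)
    print(k)  # 315.0
    ```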

  3. Why Training Fails (And What to Do about It).

    ERIC Educational Resources Information Center

    Hendon, David H.; Barlow, Judith L.

    1985-01-01

    Presents (with tongue in cheek) a four-style behavior model guaranteed to produce excellence in the four types of trainees: nematodes (docile but learn nothing), gerbils (fond of noncompetitive games and hugging), warthogs (like to attend expensive seminars), and Cro-Magnon (like to interpret to others as opposed to actually learning anything…

  4. 49 CFR 1242.54 - Other and casualties and insurance (accounts XX-27-99 and 50-27-00).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Other and casualties and insurance (accounts XX-27... RAILROADS 1 Operating Expenses-Equipment § 1242.54 Other and casualties and insurance (accounts XX-27-99 and... administration (account XX-27-01). Operating Expenses—Transportation train operations ...

  5. 49 CFR 1242.72 - Other and casualties and insurance (accounts XX-52-99 and 50-52-00).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 9 2010-10-01 2010-10-01 false Other and casualties and insurance (accounts XX-52... RAILROADS 1 Operating Expenses-Transportation § 1242.72 Other and casualties and insurance (accounts XX-52... separation of administration (account XX-52-01). train and yard operations common ...

  6. The Sixth Bracey Report on the Condition of Public Education.

    ERIC Educational Resources Information Center

    Bracey, Gerald W.

    1996-01-01

    American youngsters could beat the socks off Asian kids if they too, studied constantly. Charter schools' ability to boost student achievement is unproven, and choice programs benefit some clients at others' expense. Schools should stress civic responsibility, not vocational training and the work ethic. Scholastic Aptitude Test scores rose in…

  7. 38 CFR 17.154 - Dog-guides and equipment for blind.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Dog-guides and equipment... AFFAIRS MEDICAL Prosthetic, Sensory, and Rehabilitative Aids § 17.154 Dog-guides and equipment for blind... disability may be furnished a trained dog-guide. In addition, they may be furnished necessary travel expense...

  8. 78 FR 51078 - Reporting Requirements for Positive Train Control Expenses and Investments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-20

    ... useful in making regulatory policy and business decisions. The new rule will require a PTC Supplement \\5... interested parties with data useful in making regulatory policy and business decisions. PTC grants. AAR and... useful in regulatory decision making.\\45\\ They also argue that the burden will be on the carriers to...

  9. 78 FR 36248 - Appendix B Guidelines for Reviewing Applications for Compensation and Reimbursement of Expenses...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-17

    ... recruiting and training. k. Non-working travel: Whether the application includes time billed for non-working... professional's full rate for time spent traveling without actively working on the bankruptcy case or while... $50 million or more in liabilities, aggregated for jointly administered cases. Single asset real...

  10. 12 CFR 608.836 - Applicability of regulations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Applicability of regulations. 608.836 Section 608.836 Banks and Banking FARM CREDIT ADMINISTRATION ADMINISTRATIVE PROVISIONS COLLECTION OF CLAIMS... (e.g., travel advances in 5 U.S.C. 5705 and employee training expenses in 5 U.S.C. 4108). (2) Any...

  11. 12 CFR 1408.36 - Applicability of regulations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Applicability of regulations. 1408.36 Section 1408.36 Banks and Banking FARM CREDIT SYSTEM INSURANCE CORPORATION COLLECTION OF CLAIMS OWED THE UNITED... another statute (e.g., travel advances in 5 U.S.C. 5705 and employee training expenses in 5 U.S.C. 4108...

  12. 38 CFR 17.154 - Dog-guides and equipment for blind.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Dog-guides and equipment... AFFAIRS MEDICAL Prosthetic, Sensory, and Rehabilitative Aids § 17.154 Dog-guides and equipment for blind... disability may be furnished a trained dog-guide. In addition, they may be furnished necessary travel expense...

  13. Don't Let College Costs Rain on Your Retirement.

    ERIC Educational Resources Information Center

    Spiers, Joseph

    1995-01-01

    Discusses strategies to keep down the cost of sending children to college and increase retirement savings. Suggestions include looking for any kind of scholarship funds such as academic, athletic, or music; staying in one's own state; joining the Reserve Officer Training Corps (ROTC); haggling; and requiring student employment to defray expenses.…

  14. Helping Doctoral Students Teach: Transitioning to Early Career Academia through Cognitive Apprenticeship

    ERIC Educational Resources Information Center

    Greer, Dominique A.; Cathcart, Abby; Neale, Larry

    2016-01-01

    Doctoral training is strongly focused on honing research skills at the expense of developing teaching competency. As a result, emerging academics are unprepared for the pedagogical requirements of their early-career academic roles. Employing an action research approach, this study investigates the effectiveness of a competency-based teaching…

  15. Beyond Learning by Doing: An Exploration of Critical Incidents in Outdoor Leadership Education

    ERIC Educational Resources Information Center

    Hickman, Mark; Stokes, Peter

    2016-01-01

    This paper argues that outdoor leader education and training is characterized by the development of procedural skills at the expense of crucial but usually ignored non-technical skills (e.g. contextualized decision-making and reflection). This risks producing practitioners with a potentially unsophisticated awareness of the holistic outdoor…

  16. 38 CFR 17.154 - Dog-guides and equipment for blind.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Dog-guides and equipment... AFFAIRS MEDICAL Prosthetic, Sensory, and Rehabilitative Aids § 17.154 Dog-guides and equipment for blind... disability may be furnished a trained dog-guide. In addition, they may be furnished necessary travel expense...

  17. Telehealth Innovations in Health Education and Training

    PubMed Central

    De, Suvranu; Hall, Richard W.; Johansen, Edward; Meglan, Dwight; Peng, Grace C.Y.

    2010-01-01

    Abstract Telehealth applications are increasingly important in many areas of health education and training. In addition, they will play a vital role in biomedical research and research training by facilitating remote collaborations and providing access to expensive/remote instrumentation. In order to fulfill their true potential to leverage education, training, and research activities, innovations in telehealth applications should be fostered across a range of technology fronts, including online, on-demand computational models for simulation; simplified interfaces for software and hardware; software frameworks for simulations; portable telepresence systems; artificial intelligence applications to be applied when simulated human patients are not options; and the development of more simulator applications. This article presents the results of discussion on potential areas of future development, barriers to overcome, and suggestions to translate the promise of telehealth applications into a transformed environment of training, education, and research in the health sciences. PMID:20155874

  18. Plils: A Practical Indoor Localization System through Less Expensive Wireless Chips via Subregion Clustering

    PubMed Central

    Cai, Jun; Deng, Yun; Yang, Junfeng; Zhou, Xinmin; Tan, Lina

    2018-01-01

    Reducing costs is a pragmatic method for promoting the widespread usage of indoor localization technology. Conventional indoor localization systems (ILSs) exploit relatively expensive wireless chips to measure received signal strength for positioning. Our work is based on a cheap and widely-used commercial off-the-shelf (COTS) wireless chip, i.e., the Nordic Semiconductor nRF24LE1, which has only several output power levels, and proposes a new power-level-based ILS, called Plils. The localization procedure incorporates two phases: an offline training phase and an online localization phase. In the offline training phase, a self-organizing map (SOM) is utilized for dividing a target area into k subregions, wherein grids in the same subregion have similar fingerprints. In the online localization phase, the support vector machine (SVM) and back propagation (BP) neural network methods are adopted to identify which subregion a tagged object is located in and to calculate its exact location, respectively. The reasonable value for k is discussed as well. Our experiments show that Plils achieves 75 cm accuracy on average, and is robust to indoor obstacles. PMID:29329226
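    A compact sketch of the two-phase pipeline described above is given below. Because scikit-learn ships no SOM implementation, k-means stands in for the SOM clustering stage, while SVC and MLPRegressor play the roles of the SVM classifier and BP network; the fingerprint and position arrays are random placeholders for surveyed data, and all names and sizes are assumptions rather than details from the paper.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans              # stand-in for the SOM stage
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    fingerprints = rng.integers(0, 4, size=(400, 8)).astype(float)  # coarse power-level readings
    positions = rng.uniform(0, 20, size=(400, 2))                   # grid coordinates in metres

    # Offline phase: split the area into k subregions with similar fingerprints,
    # then train one subregion classifier and one per-subregion position regressor.
    k = 5
    subregion = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(fingerprints)
    clf = SVC().fit(fingerprints, subregion)
    regs = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
                .fit(fingerprints[subregion == j], positions[subregion == j])
            for j in range(k)]

    # Online phase: classify the tag's fingerprint to a subregion, then regress its position.
    tag = fingerprints[:1]
    print(regs[int(clf.predict(tag)[0])].predict(tag))
    ```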

  19. Plils: A Practical Indoor Localization System through Less Expensive Wireless Chips via Subregion Clustering.

    PubMed

    Li, Xiaolong; Yang, Yifu; Cai, Jun; Deng, Yun; Yang, Junfeng; Zhou, Xinmin; Tan, Lina

    2018-01-12

    Reducing costs is a pragmatic method for promoting the widespread usage of indoor localization technology. Conventional indoor localization systems (ILSs) exploit relatively expensive wireless chips to measure received signal strength for positioning. Our work is based on a cheap and widely-used commercial off-the-shelf (COTS) wireless chip, i.e., the Nordic Semiconductor nRF24LE1, which has only several output power levels, and proposes a new power-level-based ILS, called Plils. The localization procedure incorporates two phases: an offline training phase and an online localization phase. In the offline training phase, a self-organizing map (SOM) is utilized for dividing a target area into k subregions, wherein grids in the same subregion have similar fingerprints. In the online localization phase, the support vector machine (SVM) and back propagation (BP) neural network methods are adopted to identify which subregion a tagged object is located in and to calculate its exact location, respectively. The reasonable value for k is discussed as well. Our experiments show that Plils achieves 75 cm accuracy on average, and is robust to indoor obstacles.

  20. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    The sparse representation can deal with the lack-of-samples problem because it utilizes all the training samples. However, the discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that seeks a compromise between discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select representative training samples according to their contributions. We then further refine the training samples by an iterative procedure that excludes, at each step, the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and a two-dimensional and three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we explore the principles governing the key parameters of the proposed algorithm, which facilitates obtaining high recognition accuracy.
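    The iterative refinement step can be sketched as follows: represent the test sample as a least-squares combination of the currently retained training samples, score each sample's contribution, and drop the weakest one until a target number remains. The scoring rule and stopping size below are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def refine_representatives(X_train, y_test, keep=10):
        """X_train: (d, n) matrix whose columns are training samples;
        y_test: (d,) test sample. Returns indices of the retained samples."""
        idx = list(range(X_train.shape[1]))
        while len(idx) > keep:
            A = X_train[:, idx]
            coef, *_ = np.linalg.lstsq(A, y_test, rcond=None)
            # Rough contribution score: magnitude of each sample's weighted term.
            contrib = np.abs(coef) * np.linalg.norm(A, axis=0)
            idx.pop(int(np.argmin(contrib)))   # exclude the least useful sample, then re-fit
        return idx

    # Example with random data standing in for palmprint feature vectors.
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(64, 40)), rng.normal(size=64)
    print(refine_representatives(X, y))
    ```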

  1. Design of Neural Networks for Fast Convergence and Accuracy

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1998-01-01

    A novel procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed to provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component spacecraft design changes and measures of its performance. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence, a new network is trained to minimize the error of the previous network. The design algorithm attempts to avoid the local minima phenomenon that hampers traditional network training. A numerical example is performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.
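    The sequential idea, in which each new network is trained against the error left by the networks built so far until a designer-specified accuracy is met, can be sketched as a simple residual-fitting loop; MLPRegressor, the tolerance and the network sizes below are placeholders, and the details differ from the paper's algorithm.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def sequential_fit(X, y, tol=1e-3, max_nets=5):
        """Train two-layer networks one after another, each fitting the residual
        error of the ensemble built so far, until the mean squared residual drops
        below the requested tolerance (a sketch of the sequential design idea)."""
        nets, residual = [], y.astype(float).copy()
        for _ in range(max_nets):
            net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
            net.fit(X, residual)
            residual = residual - net.predict(X)
            nets.append(net)
            if np.mean(residual ** 2) < tol:
                break
        return nets

    def ensemble_predict(nets, X):
        return sum(net.predict(X) for net in nets)

    # Toy example: approximate a smooth scalar map of two design variables.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
    nets = sequential_fit(X, y)
    print(len(nets), np.mean((ensemble_predict(nets, X) - y) ** 2))
    ```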

  2. Fast in-memory elastic full-waveform inversion using consumer-grade GPUs

    NASA Astrophysics Data System (ADS)

    Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge

    2017-04-01

    Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space the runtime increases dramatically, due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU-code by a factor of about 75. Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today, when performing large scale modeling and inversion in geophysics.

  3. A survey of food safety training in small food manufacturers.

    PubMed

    Worsfold, Denise

    2005-08-01

    A survey of food safety training was conducted in small food manufacturing firms in South Wales. Structured interviews with managers were used to collect information on the extent and level of food hygiene and HACCP training and the manager's perceptions of and attitude towards training. All the businesses surveyed had undertaken some hygiene training. Hygiene induction programmes were often unstructured and generally unrecorded. Low-risk production workers were usually trained on the job whilst high-care production staff were trained in hygiene to Level 1. Part-time and temporary staff received less training than full-timers. Regular refresher training was undertaken by less than half of the sample. None of the businesses made use of National Vocational Qualification (NVQ) qualifications. Over half of the managers/senior staff had undertaken higher levels of hygiene training and half had attended a HACCP course. Managers trained the workforce to operate the HACCP system. Formal training-related activities were generally only found in the larger businesses. Few of the manufacturers had made use of training consultants. Managers held positive attitudes towards training but most regarded it as an operating expense rather than an investment. Resource poverty, in terms of time and money, was perceived to be a major inhibiting factor to continual, systematic training.

  4. Creation of a novel simulator for minimally invasive neurosurgery: fusion of 3D printing and special effects.

    PubMed

    Weinstock, Peter; Rehder, Roberta; Prabhu, Sanjay P; Forbes, Peter W; Roussin, Christopher J; Cohen, Alan R

    2017-07-01

    OBJECTIVE Recent advances in optics and miniaturization have enabled the development of a growing number of minimally invasive procedures, yet innovative training methods for the use of these techniques remain lacking. Conventional teaching models, including cadavers and physical trainers as well as virtual reality platforms, are often expensive and ineffective. Newly developed 3D printing technologies can recreate patient-specific anatomy, but the stiffness of the materials limits fidelity to real-life surgical situations. Hollywood special effects techniques can create ultrarealistic features, including lifelike tactile properties, to enhance accuracy and effectiveness of the surgical models. The authors created a highly realistic model of a pediatric patient with hydrocephalus via a unique combination of 3D printing and special effects techniques and validated the use of this model in training neurosurgery fellows and residents to perform endoscopic third ventriculostomy (ETV), an effective minimally invasive method increasingly used in treating hydrocephalus. METHODS A full-scale reproduction of the head of a 14-year-old adolescent patient with hydrocephalus, including external physical details and internal neuroanatomy, was developed via a unique collaboration of neurosurgeons, simulation engineers, and a group of special effects experts. The model contains "plug-and-play" replaceable components for repetitive practice. The appearance of the training model (face validity) and the reproducibility of the ETV training procedure (content validity) were assessed by neurosurgery fellows and residents of different experience levels based on a 14-item Likert-like questionnaire. The usefulness of the training model for evaluating the performance of the trainees at different levels of experience (construct validity) was measured by blinded observers using the Objective Structured Assessment of Technical Skills (OSATS) scale for the performance of ETV. RESULTS A combination of 3D printing technology and casting processes led to the creation of realistic surgical models that include high-fidelity reproductions of the anatomical features of hydrocephalus and allow for the performance of ETV for training purposes. The models reproduced the pulsations of the basilar artery, ventricles, and cerebrospinal fluid (CSF), thus simulating the experience of performing ETV on an actual patient. The results of the 14-item questionnaire showed limited variability among participants' scores, and the neurosurgery fellows and residents gave the models consistently high ratings for face and content validity. The mean score for the content validity questions (4.88) was higher than the mean score for face validity (4.69) (p = 0.03). On construct validity scores, the blinded observers rated performance of fellows significantly higher than that of residents, indicating that the model provided a means to distinguish between novice and expert surgical skills. CONCLUSIONS A plug-and-play lifelike ETV training model was developed through a combination of 3D printing and special effects techniques, providing both anatomical and haptic accuracy. Such simulators offer opportunities to accelerate the development of expertise with respect to new and novel procedures as well as iterate new surgical approaches and innovations, thus allowing novice neurosurgeons to gain valuable experience in surgical techniques without exposing patients to risk of harm.

  5. Trained standardized patients can train their peers to provide well-rated, cost-effective physical exam skills training to first-year medical students.

    PubMed

    Aamodt, Carla B; Virtue, David W; Dobbie, Alison E

    2006-05-01

    Teaching physical examination skills effectively, consistently, and cost-effectively is challenging. Faculty time is the most expensive resource. One solution is to train medical students using lay physical examination teaching associates. In this study, we investigated the feasibility, acceptability, and cost-effectiveness of training medical students using teaching associates trained by a lay expert instead of a clinician. We used teaching associates to instruct students about techniques of physical examination. We measured students' satisfaction with this teaching approach. We also monitored the financial cost of this approach compared to the previously used approach in which faculty physicians taught physical examination skills. Our program proved practical to accomplish and acceptable to students. Students rated the program highly, and we saved approximately $9,100, compared with our previous faculty-intensive teaching program. We believe that our program is popular with students, cost-effective, and generalizable to other institutions.

  6. Integration of laparoscopic virtual-reality simulation into gynaecology training.

    PubMed

    Burden, C; Oestergaard, J; Larsen, C R

    2011-11-01

    Surgery carries the risk of serious harm, as well as benefit, to patients. For healthcare organisations, theatre time is an expensive commodity and litigation costs for surgical specialities are very high. Advanced laparoscopic surgery, now widely used in gynaecology for improved outcomes and reduced length of stay, involves longer operation times and a higher rate of complications for surgeons in training. Virtual-reality (VR) simulation is a relatively new training method that has the potential to promote surgical skill development before advancing to surgery on patients themselves. VR simulators have now been on the market for more than 10 years and, yet, few countries in the world have fully integrated VR simulation training into their gynaecology surgical training programmes. In this review, we aim to summarise the VR simulators currently available together with evidence of their effectiveness in gynaecology, to understand their limitations and to discuss their incorporation into national training curricula. © 2011 The Authors BJOG An International Journal of Obstetrics and Gynaecology © 2011 RCOG.

  7. Quickprop method to speed up learning process of Artificial Neural Network in money's nominal value recognition case

    NASA Astrophysics Data System (ADS)

    Swastika, Windra

    2017-03-01

    A money's nominal value recognition system has been developed using an Artificial Neural Network (ANN). An ANN with Back Propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the number of iterations, weights and samples is large. One way to speed up the learning process is the Quickprop method. Quickprop is based on Newton's method and speeds up learning by assuming that the error E is approximately a parabolic function of each weight; the goal is to minimize the error gradient E'. In our system, we use 5 money nominal values, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each nominal was scanned and digitally processed, giving 40 patterns used as the training set for the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy in predicting nominal values from the input. Our results show that Quickprop successfully shortens the learning process compared to the Back Propagation method. For 40 input patterns, Quickprop reached an error below 0.1 in only 20 iterations, while the Back Propagation method required 2000 iterations. The prediction accuracy for both methods is higher than 90%.
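    A minimal sketch of the Quickprop weight update, under the usual parabolic-error assumption, is shown below; the variable names, the growth limit mu and the fallback learning rate are generic choices rather than values taken from the paper.

    ```python
    import numpy as np

    def quickprop_step(w, grad, prev_grad, prev_step, lr=0.01, mu=1.75):
        """One Quickprop update: along each weight, fit a parabola through the
        current and previous gradient and jump toward its minimum, capping the
        step at mu times the previous step; where that secant step is unusable
        (first iteration or a vanishing denominator) fall back to gradient descent."""
        step = np.zeros_like(w)
        denom = prev_grad - grad
        ok = (np.abs(denom) > 1e-12) & (np.abs(prev_step) > 1e-12)
        step[ok] = np.clip(grad[ok] / denom[ok] * prev_step[ok],
                           -mu * np.abs(prev_step[ok]), mu * np.abs(prev_step[ok]))
        step[~ok] = -lr * grad[~ok]
        return w + step, step

    # Toy usage on a quadratic error E(w) = 0.5 * w**2 (gradient = w).
    w = np.array([2.0, -3.0])
    prev_grad, prev_step = np.zeros_like(w), np.zeros_like(w)
    for _ in range(20):
        grad = w
        w, prev_step = quickprop_step(w, grad, prev_grad, prev_step)
        prev_grad = grad
    print(w)   # approaches the minimum at zero
    ```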

  8. A multiscale restriction-smoothed basis method for high contrast porous media represented on unstructured grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Møyner, Olav, E-mail: olav.moyner@sintef.no; Lie, Knut-Andreas, E-mail: knut-andreas.lie@sintef.no

    2016-01-01

    A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restricted-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell-centered, conservative, finite-volume method, it is applicable to any flow model in which one can isolate a pressure equation. Herein, we only discuss single and two-phase incompressible models. Compressible flow, e.g., as modeled by the black-oil equations, is discussed in a separate paper.
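    A stripped-down sketch of the restricted-smoothing construction reads as follows: start from the indicator functions of the coarse blocks and apply damped Jacobi iterations of the fine-scale matrix, renormalizing so the columns remain a partition of unity. The full MsRSB method additionally confines each basis function to a local support region and uses tolerance-based updates, which this sketch omits; the function and argument names are invented.

    ```python
    import numpy as np

    def smoothed_basis(A, partition, n_smooth=100, omega=2.0 / 3.0):
        """A: (n, n) fine-scale system matrix (dense here for simplicity);
        partition: length-n array giving the coarse block of each fine cell.
        Returns an (n, n_blocks) prolongation operator P."""
        blocks = np.unique(partition)
        P = np.zeros((A.shape[0], len(blocks)))
        for j, b in enumerate(blocks):
            P[partition == b, j] = 1.0                  # constant initial guess per block
        d_inv = 1.0 / A.diagonal()
        for _ in range(n_smooth):
            P = P - omega * (d_inv[:, None] * (A @ P))  # damped Jacobi sweep on each column
            P = P / P.sum(axis=1, keepdims=True)        # keep the columns a partition of unity
        return P

    # Toy usage: 1-D Poisson matrix on 30 cells split into 3 coarse blocks.
    n = 30
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    P = smoothed_basis(A, np.repeat([0, 1, 2], n // 3))
    print(P.shape, np.allclose(P.sum(axis=1), 1.0))   # (30, 3) True
    ```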

  9. Changes in dynamics of accommodation after accommodative facility training in myopes and emmetropes.

    PubMed

    Allen, Peter M; Charman, W Neil; Radhakrishnan, Hema

    2010-05-12

    This study evaluates the effect of accommodative facility training in myopes and emmetropes. Monocular accommodative facility was measured in nine myopes and nine emmetropes for distance and near. Subjective facility was recorded with automated flippers and objective measurements were simultaneously taken with a PowerRefractor. Accommodative facility training (a sequence of 5 min monocular right eye, 5 min monocular left eye, 5 min binocular) was given on three consecutive days and facility was re-assessed on the fifth day. The results showed that training improved the facility rate in both groups. The improvement in facility rate was linked to the time constants and peak velocity of accommodation. Some changes in amplitude seen in emmetropes indicate an improvement in facility rate at the expense of an accurate accommodation response. Copyright 2010 Elsevier Ltd. All rights reserved.

  10. Can YAG screen accept LEReC bunch train?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seletskiy, S.; Thieberger, P.; Miller, T.

    2016-05-18

    The LEReC RF diagnostic beamline is supposed to accept 250 µs long pulse trains of 1.6 MeV – 2.6 MeV (kinetic energy) electrons. This beamline is equipped with a YAG profile monitor. Since we are interested in observing only the last macro bunch in the train, one of the possibilities is to install a fast kicker and a dedicated dump upstream of the YAG screen (and related diagnostics equipment). This approach is expensive and challenging from an engineering point of view. Another possibility is to send the whole pulse train to the YAG screen and to use a fast gated camera (such as the Imperex B0610, with trigger jitter under 60 ns) to observe the image from the last pulse only. In this paper we study the feasibility of the latter approach.

  11. A roadmap for acute care training of frontline Healthcare workers in LMICs.

    PubMed

    Shah, Nirupa; Bhagwanjee, Satish; Diaz, Janet; Gopalan, P D; Appiah, John Adabie

    2017-10-01

    This 10-step roadmap outlines explicit procedures for developing, implementing and evaluating short focused training programs for acute care in low and middle income countries (LMICs). A roadmap is necessary to develop resilient training programs that achieve equivalent outcomes despite regional variability in human capacity and infrastructure. Programs based on the roadmap should address shortfalls in human capacity and access to care in the short term and establish the ground work for health systems strengthening in the long term. The primary targets for acute care training are frontline healthcare workers at the clinic level. The programs will differ from others currently available with respect to the timelines, triage method, therapeutic interventions and potential for secondary prevention. The roadmap encompasses multiple iterative cycles of the Plan-Do-Study-Act framework. Core features are integration of frontline trainees with the referral system while promoting research, quality improvement and evaluation from the bottom-up. Training programs must be evidence based, developed along action timelines and use adaptive training methods. A systems approach is essential because training programs that take cognizance of all factors that influence health care delivery have the potential to produce health systems strengthening (HSS). Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1997-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence, a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  13. Design of neural networks for fast convergence and accuracy: dynamics and control.

    PubMed

    Maghami, P G; Sparks, D R

    2000-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence, a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  14. VINE: A Variational Inference -Based Bayesian Neural Network Engine

    DTIC Science & Technology

    2018-01-01

    networks are trained using the same dataset and hyperparameter settings as discussed. Table 1 Performance evaluation of the proposed transfer learning...multiplication/addition/subtraction. These operations can be implemented using nested loops in which various iterations of a loop are independent of...each other. This introduces an opportunity for optimization where a loop may be unrolled fully or partially to increase parallelism at the cost of

  15. Robust High Data Rate MIMO Underwater Acoustic Communications

    DTIC Science & Technology

    2010-12-31

    algorithm is referred to as periodic CAN (PeCAN). Unlike most existing sequence construction methods which are algebraic and deterministic in nature, we...start the iteration of PeCAN from random phase initializations and then proceed to cyclically minimize the desired metric. In this way, through...by the foe and hence are especially useful as training sequences or as spreading sequences for UAC applications. We will use PeCAN sequences for

  16. System Development and Evaluation Technology: State of the Art of Manned System Measurement

    DTIC Science & Technology

    1985-02-01

    considered applicable to the assessment of training effectiveness. They include the classic Solomon four-group design; iterative adaptation to...evaluate the performance of infantrymen using small arms weapons (Klein, 1969) were grouped into four areas for purposes of this evaluation: accuracy...developed for four naval ratings. This checklist was a detailed comprehensive checklist of the tasks performed in that rating. For this study

  17. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  18. User-Driven Sampling Strategies in Image Exploitation

    DOE PAGES

    Harvey, Neal R.; Porter, Reid B.

    2013-12-23

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. Furthermore, in preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  19. Hedging to save face: a linguistic analysis of written comments on in-training evaluation reports.

    PubMed

    Ginsburg, Shiphra; van der Vleuten, Cees; Eva, Kevin W; Lingard, Lorelei

    2016-03-01

    Written comments on residents' evaluations can be useful, yet the literature suggests that the language used by assessors is often vague and indirect. The branch of linguistics called pragmatics argues that much of our day to day language is not meant to be interpreted literally. Within pragmatics, the theory of 'politeness' suggests that non-literal language and other strategies are employed in order to 'save face'. We conducted a rigorous, in-depth analysis of a set of written in-training evaluation report (ITER) comments using Brown and Levinson's influential theory of 'politeness' to shed light on the phenomenon of vague language use in assessment. We coded text from 637 comment boxes from first year residents in internal medicine at one institution according to politeness theory. Non-literal language use was common and 'hedging', a key politeness strategy, was pervasive in comments about both high and low rated residents, suggesting that faculty may be working to 'save face' for themselves and their residents. Hedging and other politeness strategies are considered essential to smooth social functioning; their prevalence in our ITERs may reflect the difficult social context in which written assessments occur. This research raises questions regarding the 'optimal' construction of written comments by faculty.

  20. User-driven sampling strategies in image exploitation

    NASA Astrophysics Data System (ADS)

    Harvey, Neal; Porter, Reid

    2013-12-01

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  1. Using partially labeled data for normal mixture identification with application to class definition

    NASA Technical Reports Server (NTRS)

    Shahshahani, Behzad M.; Landgrebe, David A.

    1992-01-01

    The problem of estimating the parameters of a normal mixture density when, in addition to the unlabeled samples, sets of partially labeled samples are available is addressed. The density of the multidimensional feature space is modeled with a normal mixture. It is assumed that the set of components of the mixture can be partitioned into several classes and that training samples are available from each class. Since for any training sample the class of origin is known but the exact component of origin within the corresponding class is unknown, the training samples are considered to be partially labeled. The EM iterative equations are derived for estimating the parameters of the normal mixture in the presence of partially labeled samples. These equations can be used to combine the supervised and nonsupervised learning processes.
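    A compact sketch of the resulting EM iteration for a one-dimensional mixture is given below: a labeled sample's responsibilities are confined to the components of its class, unlabeled samples use all components, and the M-step is the usual weighted update. Variable names, the class-to-component map and the initialization are illustrative choices, not the paper's notation.

    ```python
    import numpy as np
    from scipy.stats import norm

    def em_partial_labels(x, labels, class_of_comp, n_iter=100):
        """x: (N,) observations; labels: length-N list with a class label or None;
        class_of_comp: length-K list giving the class each mixture component belongs to."""
        K = len(class_of_comp)
        pi, mu, sigma = np.full(K, 1.0 / K), np.linspace(x.min(), x.max(), K), np.full(K, x.std())
        for _ in range(n_iter):
            # E-step: responsibilities, zeroed outside a labeled sample's class.
            r = pi * norm.pdf(x[:, None], mu, sigma)
            for i, lab in enumerate(labels):
                if lab is not None:
                    r[i, [class_of_comp[k] != lab for k in range(K)]] = 0.0
            r /= r.sum(axis=1, keepdims=True)
            # M-step: weighted maximum-likelihood updates.
            nk = r.sum(axis=0)
            pi, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
            sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        return pi, mu, sigma

    # Toy usage: two classes, one component each; half the samples carry their class label.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(3, 1, 100)])
    labels = ["a"] * 50 + [None] * 50 + ["b"] * 50 + [None] * 50
    print(em_partial_labels(x, labels, class_of_comp=["a", "b"]))
    ```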

  2. Screening the High-Risk Newborn for Hearing Loss: The Crib-O-Gram v the Auditory Brainstem Response.

    ERIC Educational Resources Information Center

    Cox, L. Clarke

    1988-01-01

    Presented are a rationale for identifying hearing loss in infancy and a history of screening procedures. The Crib-O-Gram and auditory brainstem response (ABR) tests are evaluated for reliability, validity, and cost-effectiveness. The ABR is recommended, and fully automated ABR instrumentation, which lowers expenses for trained personnel and…

  3. Efforts To Solve Quality Problems. Background Paper No. 36.

    ERIC Educational Resources Information Center

    Smith, Michael J.; And Others

    Producing goods and services of high quality is not expensive, but correcting poor quality costs U.S. companies as much as 20 percent of sales revenues annually. One survey reported that only 1 out of 300 U.S. companies involved management and engineering staff in quality training. The tendency is to have a quality control department, separate…

  4. Interrater Reliability in Large-Scale Assessments--Can Teachers Score National Tests Reliably without External Controls?

    ERIC Educational Resources Information Center

    Pantzare, Anna Lind

    2015-01-01

    In most large-scale assessment systems a set of rather expensive external quality controls are implemented in order to guarantee the quality of interrater reliability. This study empirically examines if teachers' ratings of national tests in mathematics can be reliable without using monitoring, training, or other methods of external quality…

  5. A Case Study for Teaching Quantitative Biochemical Buffer Problems Using Group Work and "Khan Style" Videos

    ERIC Educational Resources Information Center

    Barreto, Jose; Reilly, John; Brown, David; Frost. Laura; Coticone, Sulekha Rao; Dubetz, Terry Ann; Beharry, Zanna; Davis-McGibony, C. Michele; Ramoutar, Ria; Rudd, Gillian

    2014-01-01

    New technological developments have minimized training, hardware expense, and distribution problems for the production and use of instructional videos, and any science instructor can now make instructional videos for their classes. We created short "Khan style" videos for the topic of buffers in biochemistry and assigned them as…

  6. The Expectation Performance Gap in Accounting Education: A Review of Generic Skills Development in UK Accounting Degrees

    ERIC Educational Resources Information Center

    Webb, Jill; Chaffer, Caroline

    2016-01-01

    Accounting educators are criticised for a focus on the development of technical skills at the expense of generic employability skills. This study considers the perspective of UK graduates training for the CIMA professional accountancy qualification and examines their perceptions of the extent to which opportunities for generic skills development…

  7. 41 CFR 301-74.25 - May we reimburse travelers for an advanced payment of a conference or training registration fee?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 41 Public Contracts and Property Management 4 2011-07-01 2011-07-01 false May we reimburse... Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL... have approved their travel to that event, and they submit a proper claim for the expenses incurred...

  8. Exploring the Efficacy of Behavioral Skills Training to Teach Basic Behavior Analytic Techniques to Oral Care Providers

    ERIC Educational Resources Information Center

    Graudins, Maija M.; Rehfeldt, Ruth Anne; DeMattei, Ronda; Baker, Jonathan C.; Scaglia, Fiorella

    2012-01-01

    Performing oral care procedures with children with autism who exhibit noncompliance can be challenging for oral care professionals. Previous research has elucidated a number of effective behavior analytic procedures for increasing compliance, but some procedures are likely to be too time consuming and expensive for community-based oral care…

  9. 41 CFR 301-74.25 - May we reimburse travelers for an advanced payment of a conference or training registration fee?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false May we reimburse... Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL... have approved their travel to that event, and they submit a proper claim for the expenses incurred...

  10. Simulation Training in Health Care

    DTIC Science & Technology

    2015-06-01

    Harvey simulates cardiac and lung disease, including blood pressure, breathing, pulses, heart sounds, and murmurs. A significant step toward the...fluoroscopy, cardiovascular and pulmonary disease diagnosis and treatment, anesthesia, patient-centered programs, basic vital signs emergency care, infant...effectiveness. However, in health care, ethical considerations and the quantification of the expenses of clinician “learning” on patients is a challenge to

  11. HR Technology Tools: Less Time on Paper and More on People

    ERIC Educational Resources Information Center

    Tillman, Tom

    2009-01-01

    Many human resource managers face a dilemma. They would like to spend more time improving the overall work environment for employees. They want to help their executives save on workforce-related expenses, find and hire better talent, and improve existing talent through training and development. Unfortunately, most days, HR managers are stuck doing…

  12. Modification and optimization of the united-residue (UNRES) potential-energy function for canonical simulations. I. Temperature dependence of the effective energy function and tests of the optimization method with single training proteins

    PubMed Central

    Liwo, Adam; Khalili, Mey; Czaplewski, Cezary; Kalinowski, Sebastian; Ołdziej, Stanisław; Wachucik, Katarzyna; Scheraga, Harold A.

    2011-01-01

    We report the modification and parameterization of the united-residue (UNRES) force field for energy-based protein-structure prediction and protein-folding simulations. We tested the approach on three training proteins separately: 1E0L (β), 1GAB (α), and 1E0G (α + β). Heretofore, the UNRES force field had been designed and parameterized to locate native-like structures of proteins as global minima of their effective potential-energy surfaces, which largely neglected the conformational entropy because decoys composed of only lowest-energy conformations were used to optimize the force field. Recently, we developed a mesoscopic dynamics procedure for UNRES, and applied it with success to simulate protein folding pathways. However, the force field turned out to be largely biased towards α-helical structures in canonical simulations because the conformational entropy had been neglected in the parameterization. We applied the hierarchical optimization method developed in our earlier work to optimize the force field, in which the conformational space of a training protein is divided into levels each corresponding to a certain degree of native-likeness. The levels are ordered according to increasing native-likeness; level 0 corresponds to structures with no native-like elements and the highest level corresponds to the fully native-like structures. The aim of optimization is to achieve an ordering of the free energies of the levels that decreases as their native-likeness increases. The procedure is iterative, and decoys of the training protein(s) generated with the energy-function parameters of the preceding iteration are used to optimize the force field in the current iteration. We applied the multiplexing replica exchange molecular dynamics (MREMD) method, recently implemented in UNRES, to generate decoys; with this modification, conformational entropy is taken into account. Moreover, we optimized the free-energy gaps between levels at temperatures corresponding to a predominance of folded or unfolded structures, as well as to structures at the putative folding-transition temperature, changing the sign of the gaps at the transition temperature. This enabled us to obtain force fields characterized by a single peak in the heat capacity at the transition temperature. Furthermore, we introduced temperature dependence to the UNRES force field; this is consistent with the fact that it is a free-energy and not a potential-energy function. PMID:17201450

  13. Improving medical stores management through automation and effective communication

    PubMed Central

    Kumar, Ashok; Cariappa, M.P.; Marwaha, Vishal; Sharma, Mukti; Arora, Manu

    2016-01-01

    Background Medical stores management in hospitals is a tedious and time-consuming chore with limited resources tasked for the purpose and poor penetration of Information Technology. The process of automation is slow paced due to various inherent factors and is being challenged by the increasing inventory loads and escalating budgets for procurement of drugs. Methods We carried out an in-depth case study at the Medical Stores of a tertiary care health care facility. An iterative six-step Quality Improvement (QI) process was implemented based on the Plan–Do–Study–Act (PDSA) cycle. The QI process was modified as per requirement to fit the medical stores management model. The results were evaluated after six months. Results After the implementation of the QI process, 55 drugs of the medical store inventory which had expired since 2009 onwards were replaced with fresh stock by the suppliers as a result of effective communication through upgraded database management. Various pending audit objections were dropped due to the streamlined documentation and processes. Inventory management improved drastically due to automation, with disposal orders being initiated four months prior to the expiry of drugs and correct demands being generated two months prior to depletion of stocks. The monthly expense summary of drugs was now being done within ten days of the closing month. Conclusion Improving communication systems within the hospital with vendor database management and reaching out to clinicians is important. Automation of inventory management needs to be simple and user-friendly, utilizing existing hardware. Physical stores monitoring is indispensable, especially due to the scattered nature of stores. Staff training and standardized documentation protocols are the other keystones for optimal medical store management. PMID:26900225

  14. Calibration of ITER Instant Power Neutron Monitors: Recommended Scenario of Experiments at the Reactor

    NASA Astrophysics Data System (ADS)

    Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.

    2017-12-01

    Instant power is a key parameter of the ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging aspect of neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, the step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of the ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of 235U and 238U detector responses to the point sources of 14-MeV neutrons. This approach allows controlling the integration accuracy in relation to the number of coordinate mesh points and thus minimizing the number of irradiations at the given uncertainty of the full monitor response. In the previous works, responses of the divertor and blanket monitors to the isotropic point sources of DT and DD neutrons in the plasma profile and to the models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses have allowed formulating the basic principles of calibration that are optimal for having the maximum accuracy at the minimum duration of in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and use correction factors to the DT mode calibration for the DD mode. It is reasonable to perform full calibration only with 235U chambers and calibrate 238U chambers by responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated using both direct measurement of responses at the Gauss positions of a point source and simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the amount of measurements. It is shown that the monitor based on the average responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the staff monitor at neutron fluxes in the detectors four orders of magnitude lower than on the first wall, where staff detectors are located. Owing to low background, detectors of neutron chambers do not need calibration in the reactor because it is actually determination of the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.
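    The abstract's Gauss quadrature scheme for integrating detector responses over point-source positions can be illustrated generically. The sketch below integrates a purely hypothetical response function over a one-dimensional source coordinate with Gauss-Legendre nodes, so that only a few "expensive" irradiation positions are needed; it is not an ITER monitor model.

    ```python
    # Generic sketch: integrate a detector response over a source coordinate with
    # Gauss-Legendre quadrature, so only a handful of "expensive" source positions
    # are needed. The response function is illustrative, not an ITER monitor model.
    import numpy as np

    def detector_response(z):
        # hypothetical response of a monitor to a point source at coordinate z
        return np.exp(-0.5 * z**2) * (1.0 + 0.1 * z)

    a, b = -2.0, 2.0                                       # extent of the source region
    nodes, weights = np.polynomial.legendre.leggauss(6)    # 6 irradiation positions
    z = 0.5 * (b - a) * nodes + 0.5 * (b + a)              # map [-1, 1] -> [a, b]
    integral = 0.5 * (b - a) * np.sum(weights * detector_response(z))
    print("integrated response with 6 nodes:", integral)
    ```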

  15. [Cost analysis of home care with activity-based costing (ABC)].

    PubMed

    Lee, Su-Jeong

    2004-10-01

    This study was carried out to substantiate the application process of activity-based costing on the current cost of hospital home care (HHC) service. The study materials were documents, 120 client charts, health insurance demand bills, salary of 215 HHC nurses, operating expense, 6 HHC agencies, and 31 HHC nurses. The research was carried out by analyzing the HHC activities and then collecting labor and operating expenses. For resource drivers, HHC activity performance time and workload were studied. For activity drivers, the number of HHC activity performances and the activity number of visits were studied. The HHC activities were classified into 70 activities. In resource, the labor cost was 245 won per minute, operating cost was 9,570 won per visit and traffic expense was an average of 12,750 won. In resource drivers, education and training had the longest time of 67 minutes. Average length of performance for activities was 13.7 minutes. The workload was applied as a relative value. The average cost of HHC was 62,741 won and the cost ranged from 55,560 won to 74,016 won. The fixed base rate for a visit in the current HHC medical fee should be increased. Exclusion from the current fee structure or flexible operation of traveling expenses should be reviewed.

  16. Start-up and incremental practice expenses for behavior change interventions in primary care.

    PubMed

    Dodoo, Martey S; Krist, Alex H; Cifuentes, Maribel; Green, Larry A

    2008-11-01

    If behavior-change services are to be offered routinely in primary care practices, providers must be appropriately compensated. Estimating what is spent by practices in providing such services is a critical component of establishing appropriate payment and was the objective of this study. In-practice expenditure data were collected for ten different interventions, using a standardized instrument in 29 practices nested in ten practice-based research networks across the U.S. during 2006-2007. The data were analyzed using standard templates to create credible estimates of the expenses incurred for both the start-up period and the implementation phase of the interventions. Average monthly start-up expenses were $1860 per practice (SE=$455). Most start-up expenditures were for staff training. Average monthly incremental costs were $58 ($15 for provision of direct care [SE=$5]; $43 in overhead [SE=$17]) per patient participant. The bulk of the intervention expenditures was spent on the recruitment and screening of patient participants. Primary care practices must spend money to address their patients' unhealthy behaviors--at least $1860 to initiate systematic approaches and $58 monthly per participating patient to implement the approaches routinely. Until primary care payment systems incorporate these expenses, it is unlikely that these services will be readily available.

  17. A new analytical method for characterizing nonlinear visual processes with stimuli of arbitrary distribution: Theory and applications.

    PubMed

    Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya

    2017-06-01

    Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms and its calculation costs become very expensive when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet also not requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.

  18. Effects of tryptophan depletion on the performance of an iterated Prisoner's Dilemma game in healthy adults.

    PubMed

    Wood, Richard M; Rilling, James K; Sanfey, Alan G; Bhagwagar, Zubin; Rogers, Robert D

    2006-05-01

    Adaptive social behavior often necessitates choosing to cooperate with others for long-term gains at the expense of noncooperative behaviors giving larger immediate gains. Although little is known about the neural substrates that support cooperative over noncooperative behaviors, recent research has shown that mutually cooperative behavior in the context of a mixed-motive game, the Prisoner's Dilemma (PD), is associated with increased neural activity within reinforcement circuitry. Other research attests to a role for serotonin in the modulation of social behavior and in reward processing. In this study, we used a within-subject, crossover, double-blind design to investigate performance of an iterated, sequential PD game for monetary reward by healthy human adult participants following ingestion of an amino-acid drink that either did (T+) or did not (T-) contain l-tryptophan. Tryptophan depletion produced significant reductions in the level of cooperation shown by participants when playing the game on the first, but not the second, study day. This effect was accompanied by a significantly diminished probability of cooperative responding given previous mutually cooperative behavior. These data suggest that serotonin plays a significant role in the acquisition of socially cooperative behavior in human adult participants, and suggest novel hypotheses concerning the serotonergic modulation of reward information in socially cooperative behavior in both health and psychiatric illness.

  19. Empirical OPC rule inference for rapid RET application

    NASA Astrophysics Data System (ADS)

    Kulkarni, Anand P.

    2006-10-01

    A given technological node (45 nm, 65 nm) can be expected to process thousands of individual designs. Iterative methods applied at the node consume valuable days in determining proper placement of OPC features, and manufacturing and testing mask correspondence to wafer patterns in a trial-and-error fashion for each design. Repeating this fabrication process for each individual design is a time-consuming and expensive process. We present a novel technique which sidesteps the requirement to iterate through the model-based OPC analysis and pattern verification cycle on subsequent designs at the same node. Our approach relies on the inference of rules from a correct pattern at the wafer surface as it relates to the OPC and pre-OPC pattern layout files. We begin with an offline phase where we obtain a "gold standard" design file that has been fab-tested at the node with a prepared, post-OPC layout file that corresponds to the intended on-wafer pattern. We then run an offline analysis to infer rules to be used in this method. During the analysis, our method implicitly identifies contextual OPC strategies for optimal placement of RET features on any design at that node. Using these strategies, we can apply OPC to subsequent designs at the same node with accuracy comparable to the original design file but significantly smaller expected runtimes. The technique promises to offer a rapid and accurate complement to existing RET application strategies.

  20. Uncertainty Quantification in CO 2 Sequestration Using Surrogate Models from Polynomial Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Sahinidis, Nikolaos V.

    2013-03-06

    In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
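    As a rough illustration of the surrogate-plus-Monte-Carlo idea (not the paper's MIP-based basis selection, and using a plain monomial basis rather than an orthogonal PCE basis), the sketch below fits a degree-2 polynomial surrogate to a stand-in "simulator" over two uncertain inputs and then runs cheap Monte Carlo on the surrogate.

    ```python
    # Minimal sketch: fit a degree-2 polynomial surrogate to an expensive simulator
    # over uncertain inputs, then run cheap Monte Carlo on the surrogate.
    # The "simulator" below is a synthetic stand-in.
    import numpy as np
    from itertools import combinations_with_replacement

    rng = np.random.default_rng(0)

    def simulator(x):                       # placeholder for the expensive model
        return np.exp(-x[0]) * np.sin(3 * x[1]) + 0.5 * x[0] * x[1]

    def basis(X, degree=2):
        """Monomial basis up to `degree` (a simple stand-in for orthogonal PCE terms)."""
        n, d = X.shape
        cols = [np.ones(n)]
        for deg in range(1, degree + 1):
            for idx in combinations_with_replacement(range(d), deg):
                cols.append(np.prod(X[:, list(idx)], axis=1))
        return np.column_stack(cols)

    # Small design of expensive runs
    X_train = rng.uniform(-1, 1, size=(40, 2))
    y_train = np.array([simulator(x) for x in X_train])

    # Least-squares surrogate coefficients
    A = basis(X_train)
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

    # Monte Carlo on the surrogate: 10^5 samples cost almost nothing
    X_mc = rng.uniform(-1, 1, size=(100_000, 2))
    y_mc = basis(X_mc) @ coef
    print("surrogate mean %.4f, std %.4f" % (y_mc.mean(), y_mc.std()))
    ```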

  1. A novel technique to solve nonlinear higher-index Hessenberg differential-algebraic equations by Adomian decomposition method.

    PubMed

    Benhammouda, Brahim

    2016-01-01

    Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple powerful tool that applies directly to solve different kinds of nonlinear equations including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM is applied only in four earlier works. There, the DAEs are first pre-processed by some transformations like index reductions before applying the ADM. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAEs systems efficiently. The main advantages of this technique are that, firstly, it avoids complex transformations like index reductions and leads to a simple general algorithm. Secondly, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAEs system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.

  2. Intelligent multi-spectral IR image segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert

    2017-05-01

    This article presents a neural network-based multi-spectral image segmentation method. A neural network is trained on the selected features of both the objects and background in the longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects the local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network-based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
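    A minimal sketch of the general approach, assuming synthetic data and simple per-pixel features rather than the article's LW/MW/SW IR imagery: train a small neural network to label object versus background pixels in one band, and reuse the learned mask.

    ```python
    # Minimal sketch of NN-based pixel segmentation: train a small MLP on labelled
    # pixel features from one band and apply the learned mask. Feature choice and
    # data are synthetic stand-ins, not the article's multi-spectral IR pipeline.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)

    # Synthetic "LW band": bright square object on a noisy background
    img = rng.normal(0.2, 0.05, size=(64, 64))
    img[20:40, 20:40] += 0.6
    labels = np.zeros_like(img, dtype=int)
    labels[20:40, 20:40] = 1

    # Per-pixel features: intensity and a 3x3 local mean
    pad = np.pad(img, 1, mode="edge")
    local_mean = sum(pad[i:i+64, j:j+64] for i in range(3) for j in range(3)) / 9.0
    X = np.column_stack([img.ravel(), local_mean.ravel()])
    y = labels.ravel()

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(X, y)
    mask = clf.predict(X).reshape(img.shape)    # segmentation mask for this band
    print("pixel accuracy:", (mask == labels).mean())
    ```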

  3. Real time groove characterization combining partial least squares and SVR strategies: application to eddy current testing

    NASA Astrophysics Data System (ADS)

    Ahmed, S.; Salucci, M.; Miorelli, R.; Anselmi, N.; Oliveri, G.; Calmon, P.; Reboud, C.; Massa, A.

    2017-10-01

    A quasi real-time inversion strategy is presented for groove characterization of a conductive non-ferromagnetic tube structure by exploiting the eddy current testing (ECT) signal. The inversion problem is formulated within a non-iterative Learning-by-Examples (LBE) strategy. Within the LBE framework, an efficient training strategy combines feature extraction with a customized version of output space filling (OSF) adaptive sampling in order to obtain an optimal training set during the offline phase. Partial Least Squares (PLS) and Support Vector Regression (SVR) are exploited for feature extraction and prediction, respectively, to achieve robust and accurate real-time inversion during the online phase.
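    A hedged sketch of the offline/online split described above, with synthetic signals standing in for simulated ECT data: PLS compresses each raw signal into a few latent features, and an SVR maps those features to the groove parameter.

    ```python
    # Hedged sketch of an LBE-style offline/online split: PLS feature extraction
    # followed by SVR regression. The "signals" below are synthetic stand-ins for
    # simulated ECT responses parameterized by a hidden groove depth d.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)

    # Offline phase: 200 simulated signals of 300 samples each
    d = rng.uniform(0.5, 3.0, size=200)
    t = np.linspace(0, 1, 300)
    signals = d[:, None] * np.exp(-((t - 0.5) ** 2) / 0.02) + 0.05 * rng.normal(size=(200, 300))

    pls = PLSRegression(n_components=4).fit(signals, d)   # feature extraction
    features = pls.transform(signals)                      # latent features, shape (200, 4)
    svr = SVR(C=10.0, epsilon=0.01).fit(features, d)       # nonlinear regression

    # Online phase: quasi real-time inversion of a new signal
    new_signal = 1.7 * np.exp(-((t - 0.5) ** 2) / 0.02)
    est = svr.predict(pls.transform(new_signal[None, :]))[0]
    print("estimated groove depth:", est)
    ```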

  4. Tractable Experiment Design via Mathematical Surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
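    One generic reading of the first application, sketched below under invented assumptions (a toy limit-state function and Gaussian inputs): estimate a small failure probability by shifting the sampling distribution toward the failure region in one refinement step and reweighting by the density ratio.

    ```python
    # Generic sketch of importance sampling for a small failure probability
    # P[g(X) > t], with one crude refinement of the sampling distribution toward
    # the failure region. The limit-state g and threshold t are illustrative only.
    import numpy as np
    from math import erfc

    rng = np.random.default_rng(3)
    g = lambda x: x.sum(axis=1)               # toy stand-in for the computational model
    t = 6.0                                    # failure threshold; inputs X ~ N(0, I_2)

    # Pilot stage: crude sampling; shift the proposal toward the largest responses
    x0 = rng.normal(size=(5_000, 2))
    top = np.argsort(g(x0))[-100:]
    mu = x0[top].mean(axis=0)

    # Refined stage: sample from N(mu, I) and reweight by the density ratio
    x1 = rng.normal(loc=mu, size=(50_000, 2))
    log_w = -0.5 * (x1 ** 2).sum(axis=1) + 0.5 * ((x1 - mu) ** 2).sum(axis=1)
    p_fail = np.mean((g(x1) > t) * np.exp(log_w))
    print("importance-sampling estimate:", p_fail)
    print("exact value:", 0.5 * erfc(t / 2.0))   # since g(X) ~ N(0, 2)
    ```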

  5. Deep Convolutional Framelet Denosing for Low-Dose CT via Wavelet Residual Network.

    PubMed

    Kang, Eunhee; Chang, Won; Yoo, Jaejun; Ye, Jong Chul

    2018-06-01

    Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT and won the second place in 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this problem, here we propose a novel framelet-based denoising algorithm using wavelet residual network which synergistically combines the expressive power of deep learning and the performance guarantee from the framelet-based denoising algorithms. The new algorithms were inspired by the recent interpretation of the deep CNN as a cascaded convolution framelet signal representation. Extensive experimental results confirm that the proposed networks have significantly improved performance and preserve the detail texture of the original images.
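    A toy sketch of the residual-learning idea behind such denoisers (the network predicts the noise and subtracts it), written in PyTorch on synthetic data; the wavelet/framelet-domain processing of the proposed network is not reproduced.

    ```python
    # Toy residual-learning denoiser: the CNN predicts the noise, which is then
    # subtracted from the input. Generic sketch on synthetic data, not the
    # authors' wavelet residual network.
    import torch
    import torch.nn as nn

    class ResidualDenoiser(nn.Module):
        def __init__(self, channels=1, width=32, depth=4):
            super().__init__()
            layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(width, channels, 3, padding=1)]
            self.net = nn.Sequential(*layers)

        def forward(self, x):
            return x - self.net(x)          # residual connection: estimate and remove noise

    model = ResidualDenoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for step in range(50):                   # toy training loop on synthetic images
        clean = torch.rand(8, 1, 32, 32)
        noisy = clean + 0.1 * torch.randn_like(clean)
        loss = loss_fn(model(noisy), clean)
        opt.zero_grad(); loss.backward(); opt.step()
    print("final training loss:", float(loss))
    ```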

  6. Efficient convolutional sparse coding

    DOEpatents

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M.sup.3N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
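    The frequency-domain linear solve that makes this approach cheap can be illustrated for a single filter, where the ADMM x-update reduces to an elementwise division after an FFT; the multi-filter Sherman-Morrison solve is omitted. The dense solve at the end is only a check on the algebra.

    ```python
    # Simplified single-filter illustration of the frequency-domain linear solve:
    #   argmin_x 0.5*||d * x - s||^2 + (rho/2)*||x - v||^2
    # becomes an elementwise division after an FFT (circular convolution assumed).
    import numpy as np

    rng = np.random.default_rng(4)
    N = 256
    d = np.zeros(N); d[:8] = rng.normal(size=8)                    # small zero-padded filter
    x_true = np.zeros(N); x_true[rng.choice(N, 5)] = rng.normal(size=5)
    s = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(x_true)))   # s = d * x (circular)

    rho = 0.5
    v = np.zeros(N)                                                # stand-in for z - u in ADMM
    D, S, V = np.fft.fft(d), np.fft.fft(s), np.fft.fft(v)
    X = (np.conj(D) * S + rho * V) / (np.abs(D) ** 2 + rho)        # closed-form frequency solve
    x = np.real(np.fft.ifft(X))

    # Check against a direct dense solve of the same normal equations (O(N^3))
    C = np.column_stack([np.roll(d, j) for j in range(N)])         # circulant operator
    x_dense = np.linalg.solve(C.T @ C + rho * np.eye(N), C.T @ s + rho * v)
    print("max |FFT solve - dense solve|:", np.max(np.abs(x - x_dense)))
    ```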

  7. Colonoscopy Quality: Metrics and Implementation

    PubMed Central

    Calderwood, Audrey H.; Jacobson, Brian C.

    2013-01-01

    Synopsis Colonoscopy is an excellent area for quality improvement 1 because it is high volume, has significant associated risk and expense, and there is evidence that variability in its performance affects outcomes. The best endpoint for validation of quality metrics in colonoscopy is colorectal cancer incidence and mortality, but because of feasibility issues, a more readily accessible metric is the adenoma detection rate (ADR). Fourteen quality metrics were proposed by the joint American Society of Gastrointestinal Endoscopy/American College of Gastroenterology Task Force on “Quality Indicators for Colonoscopy” in 2006, which are described in further detail below. Use of electronic health records and quality-oriented registries will facilitate quality measurement and reporting. Unlike traditional clinical research, implementation of quality improvement initiatives involves rapid assessments and changes on an iterative basis, and can be done at the individual, group, or facility level. PMID:23931862

  8. GPU-accelerated phase extraction algorithm for interferograms: a real-time application

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoqiang; Wu, Yongqian; Liu, Fengwei

    2016-11-01

    Optical testing, having the merits of being non-destructive and highly sensitive, provides a vital guideline for optical manufacturing. But the testing process is often computationally intensive and expensive, usually taking up to a few seconds, which is too slow for dynamic testing. In this paper, a GPU-accelerated phase extraction algorithm is proposed, which is based on the advanced iterative algorithm. The accelerated algorithm can extract the correct phase distribution from thirteen 1024x1024 fringe patterns with arbitrary phase shifts in 233 milliseconds on average using an NVIDIA Quadro 4000 graphics card, achieving a 12.7x speedup over the same algorithm executed on the CPU and a 6.6x speedup over a Matlab implementation on a DWANING W5801 workstation.
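    For context, the sketch below implements the classical non-iterative least-squares phase-shifting step (known phase shifts, NumPy on the CPU) that iterative algorithms of this kind build on; the paper's GPU kernels and the advanced iterative algorithm itself are not reproduced.

    ```python
    # Classical least-squares phase extraction from phase-shifted fringe patterns
    #   I_k = A + B*cos(phi + delta_k)
    # with known shifts delta_k; a CPU baseline, not the paper's GPU/iterative code.
    import numpy as np

    rng = np.random.default_rng(5)
    H = W = 128
    deltas = np.array([0.0, 0.7, 1.9, 3.1, 4.4])         # arbitrary known phase shifts (rad)

    # Synthetic fringe patterns
    yy, xx = np.mgrid[0:H, 0:W]
    phi_true = 2 * np.pi * (xx / W) + 0.5 * np.sin(2 * np.pi * yy / H)
    frames = np.stack([1.0 + 0.8 * np.cos(phi_true + dl) for dl in deltas])
    frames += 0.01 * rng.normal(size=frames.shape)

    # Per-pixel linear least squares for [A, B*cos(phi), -B*sin(phi)]
    M = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])
    coef, *_ = np.linalg.lstsq(M, frames.reshape(len(deltas), -1), rcond=None)
    phi = np.arctan2(-coef[2], coef[1]).reshape(H, W)

    err = np.angle(np.exp(1j * (phi - phi_true)))         # wrap-aware phase error
    print("RMS phase error (rad):", np.sqrt(np.mean(err ** 2)))
    ```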

  9. A workforce in crisis: a case study to expand allied ophthalmic personnel.

    PubMed

    Astle, William; Simms, Craig; Anderson, Lynn

    2016-08-01

    To examine how the development of allied ophthalmic personnel training programs affects human resource capacity. Using a qualitative case study method conducted at a single Ontario institution, this article describes 6 years of establishing a 2-tiered allied ophthalmic personnel training program. The Kingston Ophthalmic Training Centre participated in the study with 8 leadership and program graduate interviews. To assess regional eye health workforce needs, a case study and iterative process used triangulations of the literature, case study, and qualitative interviews with stakeholders. This research was used to develop a model for establishing allied ophthalmic personnel training programs that would result in expanding human resource capacity. Current human resource capacity development and deployment is inadequate to provide the needed eye care services in Canada. A competency-based curriculum and accreditation model as the platform to develop formal academic training programs is essential. Access to quality eye care and patient services can be met by task-shifting from ophthalmologists to appropriately trained allied ophthalmic personnel. Establishing formal training programs is one important strategy to supplying a well-skilled, trained, and qualified ophthalmic workforce. This initiative meets the criteria required for quality, relevance, equity, and cost-effectiveness to meet the future demands for ophthalmic patient care. Copyright © 2016 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  10. A linear recurrent kernel online learning algorithm with sparse updates.

    PubMed

    Fan, Haijin; Song, Qing

    2014-02-01

    In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term in the estimation of the current output. This makes the past information reusable for updating of the algorithm in the form of a recurrent gradient term. To ensure that the reuse of this recurrent gradient indeed accelerates the convergence speed, a novel hybrid recurrent training is proposed to switch on or off learning the recurrent information according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate which can provide guaranteed system weight convergence at each training iteration. The learning rate is set as zero when the training violates the derived convergence conditions, which makes the algorithm updating process sparse. Theoretical analyses of the weight convergence are presented and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy. Copyright © 2013 Elsevier Ltd. All rights reserved.
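    A generic sketch of online kernel learning with sparse, error-gated updates (grow an RBF expansion only when the instantaneous error exceeds a threshold); the paper's recurrent gradient term and convergence-based adaptive learning rate are not reproduced.

    ```python
    # Generic online kernel learning with sparse (error-gated) updates: a KLMS-style
    # learner that adds a new RBF center only when the current error is large.
    import numpy as np

    rng = np.random.default_rng(6)

    def rbf(x, c, gamma=2.0):
        return np.exp(-gamma * np.sum((x - c) ** 2, axis=-1))

    centers, alphas = [], []
    eta, tol = 0.3, 0.05                        # learning rate, update threshold

    def predict(x):
        if not centers:
            return 0.0
        return float(np.dot(alphas, rbf(x, np.array(centers))))

    errors = []
    for _ in range(500):                        # stream from an unknown nonlinear system
        x = rng.uniform(-1, 1, size=1)
        y = np.sin(3 * x[0]) + 0.05 * rng.normal()
        e = y - predict(x)
        errors.append(abs(e))
        if abs(e) > tol:                        # sparse update: skip small errors
            centers.append(x)
            alphas.append(eta * e)
    print("dictionary size:", len(centers), " mean |error| over last 100:", np.mean(errors[-100:]))
    ```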

  11. Multi-Innovation Gradient Iterative Locally Weighted Learning Identification for A Nonlinear Ship Maneuvering System

    NASA Astrophysics Data System (ADS)

    Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan

    2018-06-01

    This paper explores a highly accurate identification modeling approach for the ship maneuvering motion with full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The proposed method's advantages are as follows: first, it can avoid the unmodeled dynamics and multicollinearity inherent to the conventional parametric model; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, the MIGI is not sensitive to the initial parameter value and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as the model platforms to study the ship maneuvering.
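    As a baseline for the locally weighted learning component, the sketch below performs plain locally weighted linear regression with a fixed Gaussian kernel on synthetic data; the multi-innovation gradient optimization of the distance metric is not reproduced.

    ```python
    # Plain locally weighted linear regression with a fixed Gaussian kernel, as a
    # baseline for the LWL part of the method (distance-metric optimization omitted).
    import numpy as np

    rng = np.random.default_rng(7)
    X = rng.uniform(-3, 3, size=(300, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)      # stand-in for trial data

    def lwl_predict(xq, X, y, h=0.5):
        """Predict at query xq by weighted least squares on nearby samples."""
        w = np.exp(-np.sum((X - xq) ** 2, axis=1) / (2 * h ** 2))
        A = np.column_stack([np.ones(len(X)), X])         # local linear model
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        return beta[0] + beta[1:] @ xq

    xq = np.array([1.2])
    print("LWL prediction:", lwl_predict(xq, X, y), " true:", np.sin(1.2))
    ```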

  12. Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution

    NASA Astrophysics Data System (ADS)

    Wymeersch, Henk; Moeneclaey, Marc

    2005-12-01

    As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
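    A simplified illustration of the code-aided idea for BPSK, with a tanh soft-symbol rule standing in for the MAP decoder's soft output: alternate between soft symbol estimates and the ML carrier-phase estimate given those symbols.

    ```python
    # Simplified iterative ML phase estimation for BPSK: alternate soft symbol
    # estimates (tanh rule standing in for decoder soft output) and the ML phase
    # estimate. No actual turbo decoder or ambiguity-resolution logic is included.
    import numpy as np

    rng = np.random.default_rng(8)
    n, snr_db, theta_true = 512, 3.0, 0.6
    sigma2 = 10 ** (-snr_db / 10)
    bits = rng.integers(0, 2, n)
    s = 1.0 - 2.0 * bits                                   # BPSK symbols +/-1
    noise = np.sqrt(sigma2 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    r = s * np.exp(1j * theta_true) + noise

    theta = 0.0                                            # initial phase estimate
    for it in range(10):
        z = np.real(r * np.exp(-1j * theta))               # de-rotated matched-filter output
        soft = np.tanh(2 * z / sigma2)                     # soft symbol estimates E[s|z]
        theta = np.angle(np.sum(r * soft))                 # ML phase given soft symbols
    print("estimated phase %.3f rad (true %.3f)" % (theta, theta_true))
    ```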

  13. Single image super-resolution via an iterative reproducing kernel Hilbert space method.

    PubMed

    Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu

    2016-11-01

    Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from solely one low-resolution image without using a training data set. We solve the problem from image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.

  14. Ground Truth Creation for Complex Clinical NLP Tasks - an Iterative Vetting Approach and Lessons Learned.

    PubMed

    Liang, Jennifer J; Tsou, Ching-Huei; Devarakonda, Murthy V

    2017-01-01

    Natural language processing (NLP) holds the promise of effectively analyzing patient record data to reduce cognitive load on physicians and clinicians in patient care, clinical research, and hospital operations management. A critical need in developing such methods is the "ground truth" dataset needed for training and testing the algorithms. Beyond localizable, relatively simple tasks, ground truth creation is a significant challenge because medical experts, just as physicians in patient care, have to assimilate vast amounts of data in EHR systems. To mitigate potential inaccuracies of the cognitive challenges, we present an iterative vetting approach for creating the ground truth for complex NLP tasks. In this paper, we present the methodology, and report on its use for an automated problem list generation task, its effect on the ground truth quality and system accuracy, and lessons learned from the effort.

  15. Cost of illness among patients with diabetic foot ulcer in Turkey

    PubMed Central

    Oksuz, Ergun; Malhan, Simten; Sonmez, Bilge; Numanoglu Tekin, Rukiye

    2016-01-01

    AIM To evaluate the annual cost of patients with Wagner grade 3-4-5 diabetic foot ulcer (DFU) from the public payer’s perspective in Turkey. METHODS This study was conducted over a time frame of one year from the public payer’s perspective. Cost-of-illness (COI) methodology, which was developed by the World Health Organization, was used in the generation of cost data. By following a clinical path with the COI method, the main total expenses were reached by multiplying the number of uses of each expense item, the percentage of cases that used them and unit costs. Clinical guidelines and real data specific to Turkey were used in the calculation of the direct costs. Monte Carlo Simulation was used in the study as a sensitivity analysis. RESULTS The following were calculated in DFU treatment from the public payer’s perspective: The annual average per patient outpatient costs $579.5 (4.1%), imaging test costs $283.2 (2.0%), laboratory test costs $284.8 (2.0%), annual average per patient cost of intervention, rehabilitation and training $2291.7 (16.0%), annual average per patient cost of drugs used $2545.8 (17.8%) and annual average per patient cost of medical materials used in DFU treatment $735.0 (5.1%). The average annual per patient cost for hospital admission is $7357.4 (51.5%). The average per patient complication cost for DFU is $210.3 (1.5%). The average annual per patient cost of DFU treatment in Turkey is $14287.70. As a result of the sensitivity analysis, the standard deviation of the analysis was $5706.60 (n = 5000, mean = $14146.8, 95% CI: $13988.6-$14304.9). CONCLUSION The health expenses per person are PPP$ 1045 in 2014 in Turkey and the average annual per patient cost for DFU is about 14 times that amount. The total health expense in 2014 in Turkey is PPP$ 80.3 billion and the total DFU cost has a 3% share in the total annual health expenses for Turkey. Hospital costs are the highest component in DFU disease costs. In order to prevent DFU, training of the patients at risk and raising consciousness in patients with diabetes mellitus (DM) will provide benefits in terms of economy. Appropriate and efficient treatment of DM is a health intervention that can prevent complications. PMID:27795820
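    The per-patient cost components quoted above sum to the reported annual total, which the following short check confirms.

    ```python
    # Quick check that the quoted per-patient cost components sum to the reported
    # annual total of $14,287.70 (percentage shares are given in the abstract).
    components = {
        "outpatient": 579.5, "imaging": 283.2, "laboratory": 284.8,
        "intervention/rehab/training": 2291.7, "drugs": 2545.8,
        "medical materials": 735.0, "hospital admission": 7357.4, "complications": 210.3,
    }
    total = sum(components.values())
    print(f"total = ${total:,.2f}")        # -> $14,287.70
    ```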

  16. Food hygiene training in the UK: time for a radical re-think?

    PubMed

    MacAuslan, E

    2001-12-01

    Training food handlers in the hospitality industry has been recommended by various organisations as a means of improving food handling practices and thus the safety of food for consumers. It is nearly 20 years since the first examinations for basic level food hygiene certificates were made available to food handlers in the UK. Since then little has changed in the syllabuses and in the way the questions are worded. However, the range of languages spoken by food handlers working in the UK has increased substantially since more employers are recruiting those who speak English as a second language. Training can be an unwelcome expense for managers where there is a high turnover of employees, especially amongst those for whom English is not a first language. To improve practical implementation of food hygiene theory it is time to develop a radical strategy concerning the way training is targeted and delivered in the UK, and perhaps Europe.

  17. Virtual reality-based medical training and assessment: The multidisciplinary relationship between clinicians, educators and developers.

    PubMed

    Lövquist, Erik; Shorten, George; Aboulafia, Annette

    2012-01-01

    The current focus on patient safety and evidence-based medical education has led to an increased interest in utilising virtual reality (VR) for medical training. The development of VR-based systems require experts from different disciplines to collaborate with shared and agreed objectives throughout a system's development process. Both the development of technology as well as the incorporation and evaluation of relevant training have to be given the appropriate attention. The aim of this article is to illustrate how constructive relationships can be established between stakeholders to develop useful and usable VR-based medical training systems. This article reports a case study of two research projects that developed and evaluated a VR-based training system for spinal anaesthesia. The case study illustrates how close relationships can be established by champion clinicians leading research in this area and by closely engaging clinicians and educators in iterative prototype design throughout a system's development process. Clinicians and educators have to strive to get more involved (ideally as champions of innovation) and actively guide the development of VR-based training and assessment systems. System developers have to strive to ensure that clinicians and educators are participating constructively in the developments of such systems.

  18. Assessing the limitations of the Banister model in monitoring training

    PubMed Central

    Hellard, Philippe; Avalos, Marta; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude; Millet, Grégoire P.

    2006-01-01

    The aim of this study was to carry out a statistical analysis of the Banister model to verify how useful it is in monitoring the training programmes of elite swimmers. The accuracy, the ill-conditioning and the stability of this model were thus investigated. Training loads of nine elite swimmers, measured over one season, were related to performances with the Banister model. Firstly, to assess accuracy, the 95% bootstrap confidence interval (95% CI) of parameter estimates and modelled performances were calculated. Secondly, to study ill-conditioning, the correlation matrix of parameter estimates was computed. Finally, to analyse stability, iterative computation was performed with the same data but minus one performance, chosen randomly. Performances were significantly related to training loads in all subjects (R² = 0.79 ± 0.13, P < 0.05) and the estimation procedure seemed to be stable. Nevertheless, the 95% CI of the most useful parameters for monitoring training were wide: τa = 38 (17, 59), τf = 19 (6, 32), tn = 19 (7, 35), tg = 43 (25, 61). Furthermore, some parameters were highly correlated, making their interpretation worthless. The study suggested possible ways to deal with these problems and reviewed alternative methods to model the training-performance relationships. PMID:16608765
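    For reference, the classical two-component impulse-response (Banister) form can be fitted to training loads as sketched below on synthetic data. The gain parameters ka and kf and the baseline p0 are assumptions of this sketch; tn and tg in the abstract are quantities derived from the fitted time constants rather than fitted directly here.

    ```python
    # Hedged sketch of the classical two-component Banister impulse-response model
    #   p(t) = p0 + ka * sum_{s<t} w(s) e^{-(t-s)/tau_a} - kf * sum_{s<t} w(s) e^{-(t-s)/tau_f}
    # fitted to synthetic daily training loads w(s).
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(9)
    days = np.arange(1, 181)
    w = rng.uniform(0, 1, size=days.size) * 100            # synthetic daily training loads

    def banister(params, w, days):
        p0, ka, kf, tau_a, tau_f = params
        p = np.full(days.size, p0, dtype=float)
        for i, t in enumerate(days):
            past = days[:i]
            p[i] += ka * np.sum(w[:i] * np.exp(-(t - past) / tau_a))
            p[i] -= kf * np.sum(w[:i] * np.exp(-(t - past) / tau_f))
        return p

    true = (500.0, 0.10, 0.25, 40.0, 12.0)
    perf = banister(true, w, days) + rng.normal(0, 5, size=days.size)

    fit = least_squares(lambda th: banister(th, w, days) - perf,
                        x0=(480.0, 0.05, 0.15, 30.0, 10.0),
                        bounds=([0, 0, 0, 1, 1], [1000, 1, 1, 100, 100]))
    print("fitted p0, ka, kf, tau_a, tau_f:", np.round(fit.x, 3))
    ```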

  19. Shooting with sound: optimizing an affordable ballistic gelatin recipe in a graded ultrasound phantom education program.

    PubMed

    Tanious, Shariff F; Cline, Jamie; Cavin, Jennifer; Davidson, Nathan; Coleman, J Keegan; Goodmurphy, Craig W

    2015-06-01

    The goal of this study was to investigate the durability and longevity of gelatin formulas for the production of staged ultrasound phantoms for education. Gelatin phantoms were prepared from Knox gelatin (Kraft Foods, Northfield, IL) and a standard 10%-by-mass ordnance gelatin solution. Phantoms were durability tested by compressing to a 2-cm depth until cracking was visible. Additionally, 16 containers with varying combinations of phenol, container type, and storage location were tested for longevity against desiccation and molding. Once formulation was determined, 4 stages of phantoms from novice to clinically relevant were poured, and clinicians with ultrasound training ranked them on a 7-point Likert scale based on task difficulty, phantom suitability, and fidelity. On durability testing, the ballistic gelatin outperformed the Knox gelatin by more than 200 compressions. On longevity testing, gelatin with a 0.5% phenol concentration stored with a lid and refrigeration lasted longest, whereas containers without a lid had desiccation within 1 month, and those without phenol became moldy within 6 weeks. Ballistic gelatin was more expensive when buying in small quantities but was 7.4% less expensive when buying in bulk. The staged phantoms were deemed suitable for training, but clinicians did not consistently rank the phantoms in the intended order of 1 to 4 (44%). Refrigerated and sealed ballistic gelatin with phenol was a cost-effective method for creating in-house staged ultrasound phantoms suitable for large-scale ultrasound educational training needs. Clinician ranking of phantoms may be influenced by current training methods that favor biological tissue scanning as easier. © 2015 by the American Institute of Ultrasound in Medicine.

  20. Nurse extenders offer a way to trim staff expenses.

    PubMed

    Eastaugh, S R; Regan-Donovan, M

    1990-04-01

    Troubles confronting hospital nursing--from a national shortage of nurses to low morale, high turnover, and rising costs of replacing and retaining staff members--require creative approaches and a rethinking of traditional primary care nursing. Nurse extender programs place non-nursing tasks in the hands of technicians trained to deliver meals, transport patients, take vital signs, and perform other patient care tasks.

  1. Industrial Arts: Call It What You Want, the Need Still Exists

    ERIC Educational Resources Information Center

    Howlett, James

    2008-01-01

    In this article, the author argues that teaching "technological literacy" at the expense of hands-on skills training is wrong for the students, wrong for the economy, and wrong for the nation. Students need not only the opportunity to explore a variety of trade skills but also the opportunity to learn a skill well. It is in the teaching…

  2. Assessing the Impact of Local Agency Traffic Safety Training Using Ethnographic Techniques

    ERIC Educational Resources Information Center

    Colling, Timothy K.

    2010-01-01

    Traffic crashes are a significant source of loss of life, personal injury and financial expense in the United States. In 2008 there were 37,261 people killed and an estimated 2,346,000 people injured nationwide in motor vehicle traffic crashes. State and federal agencies are beginning to focus traffic safety improvement effort on local agency…

  3. Joint Distributed Regional Training Capacity: A Scoping Study

    DTIC Science & Technology

    2007-12-01

    use management mechanisms 4. Develop assessment tools to rapidly quantify temporary land-use disturbance risks. The development of such...the Army Environmental Requirements and Technology Assessments (AERTA) process to develop validated requirements upon which to base more focused...conducting a large environmental assessment study each time an exercise is planned is needlessly expensive and does not give the flexibility to

  4. A Rapid and Inexpensive Bioassay to Evaluate the Decontamination of Organophosphates

    DTIC Science & Technology

    2012-01-01

    weather naturally over time. Actual chemical degradation of the toxin often relied on harsh chemicals such as calcium oxide and chlorine dioxide...New decontaminating compounds have been developed that are more effective or more environmentally friendly, including organophosphorous acid ...requires sophisticated instrumental analytical techniques such as liquid or gas chromatography, which involves expensive equipment and trained personnel

  5. Brief Report: The Effect of Delayed Matching to Sample on Stimulus Over-Selectivity

    ERIC Educational Resources Information Center

    Reed, Phil

    2012-01-01

    Stimulus over-selectivity occurs when one aspect of the environment controls behavior at the expense of other equally salient aspects. Participants were trained on a match-to-sample (MTS) discrimination task. Levels of over-selectivity in a group of children (4-18 years) with Autism Spectrum Disorders (ASD) were compared with a mental-aged matched…

  6. Contextual Factors that Foster or Inhibit Para-Teacher Professional Development: The Case of an Indian, Non-Governmental Organization

    ERIC Educational Resources Information Center

    Raval, Harini; McKenney, Susan; Pieters, Jules

    2012-01-01

    The appointment of para-professionals to overcome skill shortages and/or make efficient use of expensive resources is well established in both developing and developed countries. The present research concerns para-teachers in India. The literature on para-teachers is dominated by training for special needs settings, largely in developed societies.…

  7. Effects of Peer Assisted Communication Application Training on the Communicative and Social Behaviors of Children with Autism

    ERIC Educational Resources Information Center

    Strasberger, Sean

    2013-01-01

    Non-verbal children with autism are candidates for augmentative and alternative communication (AAC). One type of AAC device is a voice output communication aid (VOCA). The primary drawbacks of past VOCAs were their expense and portability. Newer iPod-based VOCAs alleviate these concerns. This dissertation sought to extend the iPod-based VOCA…

  8. From Volunteering to Paid Employment: Skills Transfer in the South Australian Country Fire Service. Occasional Paper

    ERIC Educational Resources Information Center

    Keough, Mark

    2015-01-01

    A common complaint from business and industry is that employees entering the workforce are not "job ready." They often lack the practical skills, maturity, and workplace experience to perform well in their roles, leaving employers to fill the gap by providing training either at their own expense or with public funding. In contrast, a new…

  9. Gaussian process regression of chirplet decomposed ultrasonic B-scans of a simulated design case

    NASA Astrophysics Data System (ADS)

    Wertz, John; Homa, Laura; Welter, John; Sparkman, Daniel; Aldrin, John

    2018-04-01

    The US Air Force seeks to implement damage-tolerant lifecycle management of composite structures. Nondestructive characterization of damage is a key input to this framework. One approach to characterization is model-based inversion of the ultrasonic response from damage features; however, the computational expense of modeling the ultrasonic waves within composites is a major hurdle to implementation. A surrogate forward model with sufficient accuracy and greater computational efficiency is therefore critical to enabling model-based inversion and damage characterization. In this work, a surrogate model is developed on the simulated ultrasonic response from delamination-like structures placed at different locations within a representative composite layup. The resulting B-scans are decomposed via the chirplet transform, and a Gaussian process model is trained on the chirplet parameters. The quality of the surrogate is tested by comparing the B-scan for a delamination configuration not represented within the training data set. The estimated B-scan has a maximum error of ~15% for an estimated reduction in computational runtime of ~95% for 200 function calls. This considerable reduction in computational expense makes full 3D characterization of impact damage tractable.
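    A hedged sketch of the Gaussian process surrogate step, with a one-dimensional damage parameter and a synthetic response feature standing in for a chirplet parameter; the chirplet decomposition and the 3D ultrasound simulation are not reproduced.

    ```python
    # Hedged sketch: a Gaussian process surrogate mapping a damage parameter
    # (e.g., delamination depth) to a response feature, trained on a handful of
    # "expensive" simulated runs. Data below are synthetic stand-ins.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(10)

    depth = np.linspace(0.5, 4.0, 12)[:, None]             # sparse expensive runs (mm)
    feature = np.cos(2.0 * depth[:, 0]) + 0.02 * rng.normal(size=depth.shape[0])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                                  normalize_y=True)
    gp.fit(depth, feature)

    # Cheap surrogate evaluations with uncertainty, e.g. for model-based inversion
    query = np.linspace(0.5, 4.0, 200)[:, None]
    mean, std = gp.predict(query, return_std=True)
    print("max predictive std over the query grid:", std.max())
    ```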

  10. Multi-media authoring - Instruction and training of air traffic controllers based on ASRS incident reports

    NASA Technical Reports Server (NTRS)

    Armstrong, Herbert B.; Roske-Hofstrand, Renate J.

    1989-01-01

    This paper discusses the use of computer-assisted instructions and flight simulations to enhance procedural and perceptual motor task training. Attention is called to the fact that incorporating the accident and incident data contained in reports filed with the Aviation Safety Reporting System (ASRS) would be a valuable training tool which the learner could apply for other situations. The need to segment the events is emphasized; this would make it possible to modify events in order to suit the needs of the training environment. Methods were developed for designing meaningful scenario development on runway incursions on the basis of analysis of ASRS reports. It is noted that, while the development of interactive training tools using the ASRS and other data bases holds much promise, the design and production of interactive video programs and laser disks are very expensive. It is suggested that this problem may be overcome by sharing the costs of production to develop a library of materials available to a broad range of users.

  11. Transfer of training for aerospace operations: How to measure, validate, and improve it

    NASA Technical Reports Server (NTRS)

    Cohen, Malcolm M.

    1993-01-01

    It has been a commonly accepted practice to train pilots and astronauts in expensive, extremely sophisticated, high fidelity simulators, with as much of the real-world feel and response as possible. High fidelity and high validity have often been assumed to be inextricably interwoven, although this assumption may not be warranted. The Project Mercury rate-damping task on the Naval Air Warfare Center's Human Centrifuge Dynamic Flight Simulator, the shuttle landing task on the NASA-ARC Vertical Motion Simulator, and the almost complete acceptance by the airline industry of full-up Boeing 767 flight simulators, are just a few examples of this approach. For obvious reasons, the classical models of transfer of training have never been adequately evaluated in aerospace operations, and there have been few, if any, scientifically valid replacements for the classical models. This paper reviews some of the earlier work involving transfer of training in aerospace operations, and discusses some of the methods by which appropriate criteria for assessing the validity of training may be established.

  12. Parent Management Training-Oregon Model (PMTO™) in Mexico City: Integrating Cultural Adaptation Activities in an Implementation Model

    PubMed Central

    Baumann, Ana A.; Domenech Rodríguez, Melanie M.; Amador, Nancy G.; Forgatch, Marion S.; Parra-Cardona, J. Rubén

    2015-01-01

    This article describes the process of cultural adaptation at the start of the implementation of the Parent Management Training intervention-Oregon model (PMTO) in Mexico City. The implementation process was guided by the model, and the cultural adaptation of PMTO was theoretically guided by the cultural adaptation process (CAP) model. During the process of the adaptation, we uncovered the potential for the CAP to be embedded in the implementation process, taking into account broader training and economic challenges and opportunities. We discuss how cultural adaptation and implementation processes are inextricably linked and iterative and how maintaining a collaborative relationship with the treatment developer has guided our work and has helped expand our research efforts, and how building human capital to implement PMTO in Mexico supported the implementation efforts of PMTO in other places in the United States. PMID:26052184

  13. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    In order to solve the problem of medical image segmentation, a wavelet neural network medical image segmentation algorithm based on a combined maximum entropy criterion is proposed. Firstly, we use a bee colony algorithm to optimize the network parameters of the wavelet neural network, obtaining the network structure, initial weights, threshold values, and so on; this allows training to converge quickly to higher precision and avoids falling into relative extrema. Then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, so as to achieve automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm can reduce sample training time effectively and improve convergence precision, and its segmentation is more accurate and effective than a traditional BP neural network (back-propagation neural network: a multilayer feed-forward neural network trained according to the error back-propagation algorithm).
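    As one concrete interpretation of a maximum-entropy segmentation criterion (not the paper's wavelet network or bee colony optimization, and applied here to a grey-level threshold rather than an iteration count), the sketch below selects a Kapur-style maximum-entropy threshold from an image histogram.

    ```python
    # Kapur-style maximum-entropy threshold selection on an image histogram, as a
    # concrete example of a maximum-entropy segmentation criterion.
    import numpy as np

    rng = np.random.default_rng(11)
    # Synthetic bimodal "medical image" intensities
    img = np.concatenate([rng.normal(60, 10, 4000), rng.normal(160, 15, 2000)])
    img = np.clip(img, 0, 255).astype(np.uint8)

    hist = np.bincount(img, minlength=256).astype(float)
    p = hist / hist.sum()

    def kapur_entropy(p, t):
        """Sum of background and foreground entropies for threshold t."""
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            return -np.inf
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        return h0 + h1

    best_t = max(range(1, 256), key=lambda t: kapur_entropy(p, t))
    print("maximum-entropy threshold:", best_t)      # expected between the two modes
    ```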

  14. Parent Management Training-Oregon Model (PMTO™) in Mexico City: Integrating Cultural Adaptation Activities in an Implementation Model.

    PubMed

    Baumann, Ana A; Domenech Rodríguez, Melanie M; Amador, Nancy G; Forgatch, Marion S; Parra-Cardona, J Rubén

    2014-03-01

    This article describes the process of cultural adaptation at the start of the implementation of the Parent Management Training intervention-Oregon model (PMTO) in Mexico City. The implementation process was guided by the model, and the cultural adaptation of PMTO was theoretically guided by the cultural adaptation process (CAP) model. During the process of the adaptation, we uncovered the potential for the CAP to be embedded in the implementation process, taking into account broader training and economic challenges and opportunities. We discuss how cultural adaptation and implementation processes are inextricably linked and iterative and how maintaining a collaborative relationship with the treatment developer has guided our work and has helped expand our research efforts, and how building human capital to implement PMTO in Mexico supported the implementation efforts of PMTO in other places in the United States.

  15. M-DAS: System for multispectral data analysis. [in Saginaw Bay, Michigan

    NASA Technical Reports Server (NTRS)

    Johnson, R. H.

    1975-01-01

    M-DAS is a ground data processing system designed for analysis of multispectral data. M-DAS operates on multispectral data from LANDSAT, S-192, M2S and other sources in CCT form. Interactive training by operator-investigators using a variable cursor on a color display was used to derive optimum processing coefficients and data on cluster separability. An advanced multivariate normal-maximum likelihood processing algorithm was used to produce output in various formats: color-coded film images, geometrically corrected map overlays, moving displays of scene sections, coverage tabulations and categorized CCTs. The analysis procedure for M-DAS involves three phases: (1) screening and training, (2) analysis of training data to compute performance predictions and processing coefficients, and (3) processing of multichannel input data into categorized results. Typical M-DAS applications involve iteration between each of these phases. A series of photographs of the M-DAS display are used to illustrate M-DAS operation.
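
    The multivariate normal maximum-likelihood classification step described above can be sketched as follows: class means and covariances are estimated from operator-selected training samples, and each pixel vector is assigned to the class with the highest Gaussian log-likelihood. The four-band synthetic data and class names below are illustrative assumptions, not M-DAS output.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Class statistics come from operator-selected training samples, then each pixel vector is
# assigned to the class with the highest Gaussian log-likelihood.
rng = np.random.default_rng(0)
train = {
    "water":  rng.normal([20, 15, 10, 5], 2.0, (100, 4)),
    "forest": rng.normal([30, 40, 35, 60], 3.0, (100, 4)),
}
stats = {c: (s.mean(axis=0), np.cov(s, rowvar=False)) for c, s in train.items()}

def classify(pixels):
    labels = list(stats.keys())
    scores = np.column_stack(
        [multivariate_normal(mean=m, cov=C).logpdf(pixels) for m, C in stats.values()]
    )
    return [labels[i] for i in scores.argmax(axis=1)]

print(classify(rng.normal([29, 39, 34, 59], 3.0, (3, 4))))
```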

  16. Use of Simulation-Based Training to Aid in Implementing Complex Health Technology.

    PubMed

    Devers, Veffa

    2018-01-01

    Clinicians are adult learners in a complex environment that historically does not invest in training in a way that is conducive to these types of learners. Adult learners are independent, self-directed, and goal oriented. In today's fast-paced clinical setting, a practical need exists for nurses and clinicians to master the technology they use on a daily basis, especially as medical devices have become more interconnected and complex. As hospitals look to embrace new technologies, medical device companies must provide clinical end-user training. This should be a required part of the selection process when considering the purchase of any complex medical technology. However, training busy clinicians in a traditional classroom setting can be difficult and costly. A simple, less expensive solution is online simulation training. This interactive training provides a virtual, "hands-on" end-user experience in advance of implementing new equipment. Online simulation training ensures knowledge retention and comprehension and, most importantly, that the training leads to end-user satisfaction and the ability to confidently operate new equipment. A review of the literature revealed that online simulation, coupled with the use of adult learning principles and experiential learning, may enhance the experience of clinical end users.

  17. High school music classes enhance the neural processing of speech.

    PubMed

    Tierney, Adam; Krizman, Jennifer; Skoe, Erika; Johnston, Kathleen; Kraus, Nina

    2013-01-01

    Should music be a priority in public education? One argument for teaching music in school is that private music instruction relates to enhanced language abilities and neural function. However, the directionality of this relationship is unclear and it is unknown whether school-based music training can produce these enhancements. Here we show that 2 years of group music classes in high school enhance the neural encoding of speech. To tease apart the relationships between music and neural function, we tested high school students participating in either music or fitness-based training. These groups were matched at the onset of training on neural timing, reading ability, and IQ. Auditory brainstem responses were collected to a synthesized speech sound presented in background noise. After 2 years of training, the neural responses of the music training group were earlier than at pre-training, while the neural timing of students in the fitness training group was unchanged. These results represent the strongest evidence to date that in-school music education can cause enhanced speech encoding. The neural benefits of musical training are, therefore, not limited to expensive private instruction early in childhood but can be elicited by cost-effective group instruction during adolescence.

  18. Building Training Curricula for Accelerating the Use of NOAA Climate Products and Tools

    NASA Astrophysics Data System (ADS)

    Timofeyeva-Livezey, M. M.; Meyers, J. C.; Stevermer, A.; Abshire, W. E.; Beller-Simms, N.; Herring, D.

    2016-12-01

    The National Oceanic and Atmospheric Administration (NOAA) plays a leading role in U.S. intergovernmental efforts on the Climate Data Initiative and the Climate Resilience Toolkit (CRT). CRT (http://toolkit.climate.gov/) is a valuable resource that provides tools, information, and subject matter expertise to decision makers in various sectors, such as agriculture, water resources and transportation, to help them build resilience to our changing climate. In order to make best use of the toolkit and all the resources within it, a training component is critical. The training section helps build users' understanding of the data, science, and impacts of climate variability and change. CRT identifies five steps in building resilience that include the use of appropriate tools to support decision makers depending on their needs. One tool that can be potentially integrated into CRT is NOAA's Local Climate Analysis Tool (LCAT), which provides access to trusted NOAA data and scientifically-sound analysis techniques for doing regional and local climate studies on climate variability and climate change. However, in order for LCAT to be used effectively, we have found that an iterative learning approach using specific examples is needed to train users. For example, for LCAT application in analysis of water resources, we use existing CRT case studies for Arizona and Florida water supply users. The Florida example demonstrates primary sensitivity to climate variability impacts, whereas the Arizona example takes into account longer-term climate change. The types of analyses included in LCAT are time series analysis of local climate and the estimated rate of change in the local climate. It also provides a composite analysis to evaluate the relationship between local climate and climate variability events such as El Niño Southern Oscillation, the Pacific North American Index, and other modes of climate variability. This paper will describe the development of a training module for use of LCAT and its integration into CRT. An iterative approach was used that incorporates specific examples of decision making while working with subject matter experts within the water supply community. The recommended strategy is to use a "stepping stone" learning structure to build users' knowledge of best practices for use of LCAT.

  19. Cultural competency training of GP Registrars-exploring the views of GP Supervisors.

    PubMed

    Watt, Kelly; Abbott, Penny; Reath, Jenny

    2015-10-06

    An equitable multicultural society requires General Practitioners (GPs) to be proficient in providing health care to patients from diverse backgrounds. This requires a certain set of attitudes, knowledge and skills known as cultural competence. While training in cultural competence is an important part of the Australian GP Registrar training curriculum, it is unclear who provides this training apart from in Aboriginal and Torres Strait Islander training posts. The majority of Australian GP Registrar training takes place in a workplace setting facilitated by the GP Supervisor. In view of the central role of GP Supervisors, their views on culturally competent practice, and their role in its development in Registrars, are important to ascertain. We conducted 14 semi-structured interviews with GP Supervisors. These were audiotaped, transcribed verbatim and thematically analyzed using an iterative approach. The Supervisors interviewed frequently viewed cultural competence as adequately covered by using patient-centered approaches. The Supervisor role in promoting cultural competence of Registrars was affirmed, though training was noted to occur opportunistically and focused largely on patient-centered care rather than health disparities. Formal training for both Registrars and Supervisors may be beneficial not only to develop a deeper understanding of cultural competence and its relevance to practice but also to promote more consistency in training from Supervisors in the area, particularly with respect to self-reflection, non-conscious bias and utilizing appropriate cultural knowledge without stereotyping and assumption-making.

  20. Spatial frequency domain spectroscopy of two layer media

    NASA Astrophysics Data System (ADS)

    Yudovsky, Dmitry; Durkin, Anthony J.

    2011-10-01

    Monitoring of tissue blood volume and oxygen saturation using biomedical optics techniques has the potential to inform the assessment of tissue health, healing, and dysfunction. These quantities are typically estimated from the contribution of oxyhemoglobin and deoxyhemoglobin to the absorption spectrum of the dermis. However, estimation of blood-related absorption in superficial tissue such as the skin can be confounded by the strong absorption of melanin in the epidermis. Furthermore, epidermal thickness and pigmentation vary with anatomic location, race, gender, and degree of disease progression. This study describes a technique for decoupling the effect of melanin absorption in the epidermis from blood absorption in the dermis for a large range of skin types and thicknesses. An artificial neural network was used to map input optical properties to spatial frequency domain diffuse reflectance of two-layer media. Then, iterative fitting was used to determine the optical properties from simulated spatial frequency domain diffuse reflectance. Additionally, an artificial neural network was trained to directly map spatial frequency domain reflectance to sets of optical properties of a two-layer medium, thus bypassing the need for iteration. In both cases, the optical thickness of the epidermis and the absorption and reduced scattering coefficients of the dermis were determined independently. The accuracy and efficiency of the iterative fitting approach were compared with the direct neural network inversion.
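
    A minimal sketch of the iterative-fitting route described above: a forward model maps candidate two-layer optical properties to spatial frequency domain reflectance, and a nonlinear least-squares solver inverts it. The closed-form `forward_model` is a hypothetical stand-in for the trained network, and the parameter bounds and starting point are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(props, fx):
    """Hypothetical stand-in for the trained forward ANN: maps (dermal absorption, dermal
       reduced scattering, epidermal optical thickness) to SFD diffuse reflectance."""
    mua, musp, tau_epi = props
    return np.exp(-tau_epi) / (1.0 + fx * (mua + musp))   # toy closed form, placeholder only

def invert(measured, fx, x0=(0.05, 1.0, 0.5)):
    """Iterative least-squares fit of the two-layer optical properties to measured reflectance."""
    residual = lambda p: forward_model(p, fx) - measured
    fit = least_squares(residual, x0, bounds=([1e-4, 0.1, 0.0], [1.0, 5.0, 5.0]))
    return fit.x

fx = np.linspace(0.0, 0.3, 10)                            # spatial frequencies [1/mm]
measured = forward_model((0.02, 1.2, 0.8), fx)            # synthetic "measurement"
print(invert(measured, fx))
```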

  1. Deep learning methods to guide CT image reconstruction and reduce metal artifacts

    NASA Astrophysics Data System (ADS)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge

    2017-03-01

    The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
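
    The learned stopping rule described above can be sketched as a reconstruction loop that halts once a quality score stops improving. Here `quality_score` is a hypothetical placeholder for the trained CNN observer, and the update step, patience, and tolerance are illustrative choices, not the authors' settings.

```python
import numpy as np

def quality_score(image):
    """Hypothetical stand-in for a trained CNN observer returning a scalar quality score."""
    return -np.var(np.diff(image, axis=0))   # placeholder: penalize high-frequency content

def reconstruct(data, update_step, max_iters=200, patience=5, tol=1e-4):
    """Iterative reconstruction that stops once the learned quality score stops improving."""
    image = np.zeros((64, 64))
    best, stale = -np.inf, 0
    for _ in range(max_iters):
        image = update_step(image, data)     # one iteration of the chosen update (e.g. SART)
        score = quality_score(image)
        if score > best + tol:
            best, stale = score, 0
        else:
            stale += 1
            if stale >= patience:
                break                        # the observer sees no further quality gain
    return image

# Toy usage: "data" is just a target image and the update relaxes the estimate toward it.
target = np.random.rand(64, 64)
result = reconstruct(target, lambda im, d: im + 0.3 * (d - im))
print(result.shape)
```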

  2. International standards for programmes of training in intensive care medicine in Europe.

    PubMed

    2011-03-01

    To develop internationally harmonised standards for programmes of training in intensive care medicine (ICM). Standards were developed by using consensus techniques. A nine-member nominal group of European intensive care experts developed a preliminary set of standards. These were revised and refined through a modified Delphi process involving 28 European national coordinators representing national training organisations using a combination of moderated discussion meetings, email, and a Web-based tool for determining the level of agreement with each proposed standard, and whether the standard could be achieved in the respondent's country. The nominal group developed an initial set of 52 possible standards which underwent four iterations to achieve maximal consensus. All national coordinators approved a final set of 29 standards in four domains: training centres, training programmes, selection of trainees, and trainers' profiles. Only three standards were considered immediately achievable by all countries, demonstrating a willingness to aspire to quality rather than merely setting a minimum level. Nine proposed standards which did not achieve full consensus were identified as potential candidates for future review. This preliminary set of clearly defined and agreed standards provides a transparent framework for assuring the quality of training programmes, and a foundation for international harmonisation and quality improvement of training in ICM.

  3. Data-driven train set crash dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2017-02-01

    Traditional finite element (FE) methods are computationally expensive for simulating train crashes. Their high computational cost limits their direct application to investigating the dynamic behaviours of an entire train set for crashworthiness design and structural optimisation. In contrast, multi-body modelling is widely used because of its low computational cost, at the expense of accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved by a parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts a force-displacement relation for a given collision condition from a collection of offline FE simulation data covering various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods; the results show that our data-driven method improves accuracy over traditional multi-body models in train crash simulation while running at the same level of efficiency.
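
    A minimal sketch of the surrogate idea follows: a random forest is fitted offline to (collision condition, displacement) → force samples and is then queried during the multi-body simulation instead of running FE. The synthetic force law and parameter ranges are assumptions, standing in for the paper's offline FE database.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Surrogate trained offline on (crash velocity, crush displacement) -> interface force samples.
# The force law below is a synthetic stand-in for the offline FE simulation database.
rng = np.random.default_rng(0)
v = rng.uniform(5.0, 25.0, 2000)             # crash velocity [m/s]
d = rng.uniform(0.0, 0.5, 2000)              # crush displacement [m]
force = 1.0e5 * d * (1.0 + 0.02 * v) + rng.normal(0.0, 1.0e3, 2000)

X = np.column_stack([v, d])
surrogate = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0).fit(X, force)

# During the multi-body time stepping, query the surrogate instead of re-running FE:
print(surrogate.predict([[15.0, 0.25]]))     # predicted force for a new collision state
```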

  4. Virtual Reality Glasses and "Eye-Hands Blind Technique" for Microsurgical Training in Neurosurgery.

    PubMed

    Choque-Velasquez, Joham; Colasanti, Roberto; Collan, Juhani; Kinnunen, Riina; Rezai Jahromi, Behnam; Hernesniemi, Juha

    2018-04-01

    Microsurgical skills and eye-hand coordination need continuous training to be developed and refined. However, well-equipped microsurgical laboratories are not widespread because their setup is expensive. Herein, we present a novel microsurgical training system that requires a high-resolution personal computer screen, smartphones, and virtual reality glasses. A smartphone placed on a holder at a height of about 15-20 cm above the surgical target field is used as the webcam of the computer. Dedicated software is used to duplicate the video camera image. The video may be transferred from the computer to another smartphone, which may be connected to virtual reality glasses. Using this training model, we progressively performed increasingly complex microsurgical exercises. It did not take long to set up the system, saving time for the training sessions. Our proposed training model may represent an affordable and efficient system for improving eye-hand coordination and dexterity in using not only the operating microscope but also endoscopes and exoscopes. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT

    PubMed Central

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-01-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids, combined with selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremity CBCT volume size, this downsampling makes the reconstruction more than five times faster than a brute-force solution that applies the fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701
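
    For orientation, the sketch below minimizes a small dense PWLS objective, 0.5*(Ax - y)' W (Ax - y) + beta*||Dx||^2, by gradient descent with a Lipschitz step size. It illustrates only the basic PWLS structure; the multiresolution voxel grids, detector binning, and boundary penalty of the paper are not reproduced, and the toy system matrix and penalty weight are assumptions.

```python
import numpy as np

def pwls(A, y, w, beta=0.1, iters=500):
    """Gradient descent on 0.5*(Ax - y)' W (Ax - y) + beta*||Dx||^2 with a first-difference D.
       A small dense sketch of the PWLS structure only (no multiresolution grids or binning)."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)                     # quadratic roughness penalty operator
    H = A.T @ (w[:, None] * A) + 2.0 * beta * D.T @ D  # Hessian, used only for the step size
    step = 1.0 / np.linalg.eigvalsh(H).max()
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (w * (A @ x - y)) + 2.0 * beta * (D.T @ (D @ x))
        x -= step * grad
    return x

A = np.random.rand(40, 20)
x_true = np.linspace(0.0, 1.0, 20)
y = A @ x_true + 0.01 * np.random.randn(40)
print(pwls(A, y, w=np.ones(40))[:5])
```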

  6. Project APhiD: A Lorenz-gauged A-Φ decomposition for parallelized computation of ultra-broadband electromagnetic induction in a fully heterogeneous Earth

    NASA Astrophysics Data System (ADS)

    Weiss, Chester J.

    2013-08-01

    An essential element for computational hypothesis testing, data inversion and experiment design for electromagnetic geophysics is a robust forward solver, capable of easily and quickly evaluating the electromagnetic response of arbitrary geologic structure. The usefulness of such a solver hinges on the balance among competing desires like ease of use, speed of forward calculation, scalability to large problems or compute clusters, parsimonious use of memory access, accuracy and by necessity, the ability to faithfully accommodate a broad range of geologic scenarios over extremes in length scale and frequency content. This is indeed a tall order. The present study addresses recent progress toward the development of a forward solver with these properties. Based on the Lorenz-gauged Helmholtz decomposition, a new finite volume solution over Cartesian model domains endowed with complex-valued electrical properties is shown to be stable over the frequency range 10^-2 to 10^10 Hz and the length-scale range 10^-3 to 10^5 m. Benchmark examples are drawn from magnetotellurics, exploration geophysics, geotechnical mapping and laboratory-scale analysis, showing excellent agreement with reference analytic solutions. Computational efficiency is achieved through use of a matrix-free implementation of the quasi-minimum-residual (QMR) iterative solver, which eliminates explicit storage of finite volume matrix elements in favor of "on the fly" computation as needed by the iterative Krylov sequence. Further efficiency is achieved through sparse coupling matrices between the vector and scalar potentials whose non-zero elements arise only in those parts of the model domain where the conductivity gradient is non-zero. Multi-thread parallelization in the QMR solver through OpenMP pragmas is used to reduce the computational cost of its most expensive step: the single matrix-vector product at each iteration. High-level MPI communicators farm independent processes to available compute nodes for simultaneous computation of multi-frequency or multi-transmitter responses.
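
    The matrix-free QMR idea can be sketched with SciPy's `LinearOperator`, whose matvec computes the operator action on the fly instead of storing matrix entries. A symmetric 1D finite-difference stencil stands in for the finite-volume operator, and the transpose action is supplied explicitly because QMR requires it; the operator, size, and right-hand side are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, qmr

n = 200

def apply_A(x):
    """Action of a symmetric tridiagonal (2, -1) stencil, computed on the fly."""
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

# No matrix is ever stored; QMR only needs the forward and transpose actions.
A = LinearOperator((n, n), matvec=apply_A, rmatvec=apply_A, dtype=float)
b = np.ones(n)
x, info = qmr(A, b)                              # info == 0 means convergence was reached
print(info, np.linalg.norm(apply_A(x) - b))
```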

  7. What's Cooler Than Being Cool? Ice-Sheet Models Using a Fluidity-Based FOSLS Approach to Nonlinear-Stokes Flow

    NASA Astrophysics Data System (ADS)

    Allen, Jeffery M.

    This research involves a few First-Order System Least Squares (FOSLS) formulations of a nonlinear-Stokes flow model for ice sheets. In Glen's flow law, a commonly used constitutive equation for ice rheology, the viscosity becomes infinite as the velocity gradients approach zero. This typically occurs near the ice surface or where there is basal sliding. The computational difficulties associated with the infinite viscosity are often overcome by an arbitrary modification of Glen's law that bounds the maximum viscosity. The FOSLS formulations developed in this thesis are designed to overcome this difficulty. The first FOSLS formulation is just the first-order representation of the standard nonlinear full-Stokes equations; it is known as the viscosity formulation and suffers from the problem above. To overcome the problem of infinite viscosity, two new formulations exploit the fact that the deviatoric stress, the product of viscosity and strain rate, approaches zero as the viscosity goes to infinity. Using the deviatoric stress as the basis for a first-order system results in the basic fluidity system. Augmenting the basic fluidity system with a curl-type equation results in the augmented fluidity system, which is more amenable to the iterative solver, Algebraic MultiGrid (AMG). A Nested Iteration (NI) Newton-FOSLS-AMG approach is used to solve the nonlinear-Stokes problems. Several test problems from the ISMIP set of benchmarks are examined to test the effectiveness of the various formulations. These tests show that the viscosity-based method is more expensive and less accurate. The basic fluidity system shows optimal finite-element convergence. However, there is not yet an efficient iterative solver for this type of system and this is the topic of future research. Alternatively, AMG performs better on the augmented fluidity system when using specific scaling. Unfortunately, this scaling results in reduced finite-element convergence.
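
    The viscosity/fluidity distinction above can be seen numerically: in Glen's law the effective viscosity eta = 0.5 * A^(-1/n) * e^((1-n)/n) diverges as the effective strain rate e goes to zero, while the deviatoric stress 2*eta*e tends to zero. The rate factor and exponent below are generic illustrative values, not tied to any specific ISMIP benchmark.

```python
import numpy as np

# Glen's law: eta = 0.5 * A**(-1/n) * e**((1 - n)/n) with effective strain rate e.
# As e -> 0 the viscosity diverges, but the deviatoric stress tau = 2 * eta * e -> 0.
A, n = 1.0e-16, 3.0                          # illustrative rate factor [Pa^-3 a^-1] and exponent
e = np.logspace(-8, -1, 8)                   # effective strain rates [1/a]
eta = 0.5 * A ** (-1.0 / n) * e ** ((1.0 - n) / n)
tau = 2.0 * eta * e
for ei, etai, taui in zip(e, eta, tau):
    print(f"e = {ei:.1e}  eta = {etai:.3e}  tau = {taui:.3e}")
```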

  8. Automated detection using natural language processing of radiologists recommendations for additional imaging of incidental findings.

    PubMed

    Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T

    2013-08-01

    As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
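
    The study's natural language processing algorithm is not reproduced here, but a pattern-based flagger of the same general shape is easy to sketch; the regular expression, keyword list, and gap sizes below are assumptions, and a production system would add negation handling and learned scoring.

```python
import re

# Illustrative pattern-based detector; keywords, gap sizes, and modalities are assumptions.
RECOMMEND = re.compile(
    r"\b(recommend(s|ed)?|suggest(s|ed)?|advis(e|ed))\b"
    r".{0,80}\b(follow[- ]up|repeat|dedicated)\b"
    r".{0,40}\b(CT|MRI|ultrasound|imaging)\b",
    re.IGNORECASE | re.DOTALL,
)

def flags_additional_imaging(report_text: str) -> bool:
    """True if the radiology report appears to recommend additional follow-up imaging."""
    return bool(RECOMMEND.search(report_text))

print(flags_additional_imaging(
    "8 mm lung nodule. Recommend follow-up chest CT in 6-12 months per Fleischner criteria."))
print(flags_additional_imaging("No acute intracranial abnormality."))
```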

  9. Assessing the health care needs of women in rural British Columbia

    PubMed Central

    Guy, Meghan; Norman, Wendy V.; Malhotra, Unjali

    2013-01-01

    Objective To design reliable survey instruments to evaluate needs and expectations for provision of women's health services in rural communities in British Columbia (BC). These tools will aim to plan programming for, and evaluate effectiveness of, a women's health enhanced skills residency program at the University of British Columbia. Design A qualitative design that included administration of written surveys and on-site interviews in several rural communities. Setting Three communities participated in initial questionnaire and interview administration. A fourth community participated in the second interview iteration. Participating communities did not have obstetrician-gynecologists but did have hospitals capable of supporting outpatient specialized women's health procedural care. Participants Community physicians, leaders of community groups serving women, and allied health providers, in Vancouver Island, Southeast Interior BC, and Northern BC. Methods Two preliminary questionnaires were developed to assess local specialized women's health services based on the curriculum of the enhanced skills training program; one was designed for physicians and the other for women's community group leaders and aboriginal health and community group leaders. Interview questions were designed to ensure the survey could be understood and to identify important areas of women's health not included on the initial questionnaires. Results were analyzed using quantitative and qualitative methods, and a second draft of the questionnaires was developed for a second iteration of interviews. Main findings Clarity and comprehension of questionnaires were good; however, nonphysician participants answered that they were unsure on many questions pertaining to specific services. Topics identified as important and missing from questionnaires included violence and mental health. A second version of the questionnaires was shown to have addressed these concerns. Conclusion Through iterations of pilot testing, we created 2 validated survey instruments for implementation as a component of program evaluation. Testing in remote locations highlighted unique rural concerns, such that University of British Columbia health care professional training will now better serve BC community needs. PMID:23418251

  10. Assessing the health care needs of women in rural British Columbia: development and validation of a survey tool.

    PubMed

    Guy, Meghan; Norman, Wendy V; Malhotra, Unjali

    2013-02-01

    To design reliable survey instruments to evaluate needs and expectations for provision of women's health services in rural communities in British Columbia (BC). These tools will aim to plan programming for, and evaluate effectiveness of, a women's health enhanced skills residency program at the University of British Columbia. A qualitative design that included administration of written surveys and on-site interviews in several rural communities. Three communities participated in initial questionnaire and interview administration. A fourth community participated in the second interview iteration. Participating communities did not have obstetrician-gynecologists but did have hospitals capable of supporting outpatient specialized women's health procedural care. Community physicians, leaders of community groups serving women, and allied health providers, in Vancouver Island, Southeast Interior BC, and Northern BC. Two preliminary questionnaires were developed to assess local specialized women's health services based on the curriculum of the enhanced skills training program; one was designed for physicians and the other for women's community group leaders and aboriginal health and community group leaders. Interview questions were designed to ensure the survey could be understood and to identify important areas of women's health not included on the initial questionnaires. Results were analyzed using quantitative and qualitative methods, and a second draft of the questionnaires was developed for a second iteration of interviews. Clarity and comprehension of questionnaires were good; however, nonphysician participants answered that they were unsure on many questions pertaining to specific services. Topics identified as important and missing from questionnaires included violence and mental health. A second version of the questionnaires was shown to have addressed these concerns. Through iterations of pilot testing, we created 2 validated survey instruments for implementation as a component of program evaluation. Testing in remote locations highlighted unique rural concerns, such that University of British Columbia health care professional training will now better serve BC community needs.

  11. Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning

    PubMed Central

    Wang, Zhengxia; Zhu, Xiaofeng; Adeli, Ehsan; Zhu, Yingying; Nie, Feiping; Munsell, Brent

    2018-01-01

    Graph-based transductive learning (GTL) is a powerful machine learning technique that is used when sufficient training data is not available. In particular, conventional GTL approaches first construct a fixed inter-subject relation graph that is based on similarities in voxel intensity values in the feature domain, which can then be used to propagate the known phenotype data (i.e., clinical scores and labels) from the training data to the testing data in the label domain. However, this type of graph is exclusively learned in the feature domain, and primarily due to outliers in the observed features, may not be optimal for label propagation in the label domain. To address this limitation, a progressive GTL (pGTL) method is proposed that gradually finds an intrinsic data representation that more accurately aligns imaging features with the phenotype data. In general, optimal feature-to-phenotype alignment is achieved using an iterative approach that: (1) refines inter-subject relationships observed in the feature domain by using the learned intrinsic data representation in the label domain, (2) updates the intrinsic data representation from the refined inter-subject relationships, and (3) verifies the intrinsic data representation on the training data to guarantee an optimal classification when applied to testing data. Additionally, the iterative approach is extended to multi-modal imaging data to further improve pGTL classification accuracy. Using Alzheimer’s disease and Parkinson’s disease study data, the classification accuracy of the proposed pGTL method is compared to several state-of-the-art classification methods, and the results show pGTL can more accurately identify subjects, even at different progression stages, in these two study data sets. PMID:28551556
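
    As a point of reference for the conventional GTL baseline that the paper improves upon, the sketch below propagates labels from a few labelled subjects to unlabelled ones over an RBF similarity graph using scikit-learn. The synthetic features, kernel width, and label fraction are assumptions, and the progressive feature-to-phenotype refinement of pGTL is not implemented.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

# Conventional graph-based transductive baseline: known labels propagate to unlabeled
# (test) subjects over an RBF similarity graph built from the imaging features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (30, 10)), rng.normal(1.5, 1.0, (30, 10))])  # toy features
y_true = np.array([0] * 30 + [1] * 30)
y_obs = y_true.copy()
y_obs[rng.choice(60, 40, replace=False)] = -1      # -1 marks unlabeled (test) subjects

model = LabelSpreading(kernel="rbf", gamma=0.5).fit(X, y_obs)
print("transductive accuracy:", (model.transduction_ == y_true).mean())
```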

  12. Automatic recognition of holistic functional brain networks using iteratively optimized convolutional neural networks (IO-CNN) with weak label initialization.

    PubMed

    Zhao, Yu; Ge, Fangfei; Liu, Tianming

    2018-07-01

    fMRI data decomposition techniques have advanced significantly from shallow models such as Independent Component Analysis (ICA) and Sparse Coding and Dictionary Learning (SCDL) to deep learning models such as Deep Belief Networks (DBN) and Deep Convolutional Autoencoders (DCAE). However, interpretations of those decomposed networks are still open questions due to the lack of functional brain atlases, the absence of correspondence among decomposed or reconstructed networks across different subjects, and significant individual variability. Recent studies showed that deep learning, especially deep convolutional neural networks (CNN), has an extraordinary ability to accommodate spatial object patterns; e.g., our recent works using 3D CNN for fMRI-derived network classification achieved high accuracy with a remarkable tolerance for mistakenly labelled training brain networks. However, training data preparation is one of the biggest obstacles in these supervised deep learning models for functional brain network map recognition, since manual labelling requires tedious and time-consuming labour and sometimes even introduces labelling mistakes. Especially for mapping functional networks in large-scale datasets, such as the hundreds of thousands of brain networks used in this paper, the manual labelling method becomes almost infeasible. In response, in this work we tackled both the network recognition and training data labelling tasks by proposing a new iteratively optimized deep learning CNN (IO-CNN) framework with an automatic weak label initialization, which turns functional brain network recognition into a fully automatic large-scale classification procedure. Our extensive experiments based on ABIDE-II 1099 brains' fMRI data showed the great promise of our IO-CNN framework. Copyright © 2018 Elsevier B.V. All rights reserved.
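
    The weak-label, iteratively re-trained flavour of IO-CNN can be gestured at with a generic self-training loop: a classifier trained on a small set of initial labels repeatedly pseudo-labels confident unlabeled samples and retrains. The linear classifier, synthetic features, and confidence threshold below are stand-ins for the paper's 3D CNN on fMRI-derived network maps.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Start from a small set of (weak) labels, then iteratively pseudo-label confident samples
# and retrain -- the same loop structure, on synthetic tabular data instead of fMRI maps.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
y_weak = y.copy()
rng = np.random.default_rng(0)
y_weak[rng.choice(500, 450, replace=False)] = -1   # 90% of samples start unlabeled

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9, max_iter=10)
model.fit(X, y_weak)
print("accuracy on all samples:", model.score(X, y))
```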

  13. Implementing Embedded Training (ET): Volume 4. Identifying ET Requirements

    DTIC Science & Technology

    1988-11-01

    procedures that support the effective consideration, definition, development, and integration of embedded training (ET) capabilities...an effective ET component would be impossible or have undesired schedule or cost impacts. Iteration Two: Early System Development. Once the new...ET. These needs can sometimes have a significant effect on the design of the prime item system. It is critical that materiel developers be made aware

  14. Use of LANDSAT imagery for wildlife habitat mapping in northeast and east central Alaska

    NASA Technical Reports Server (NTRS)

    Lent, P. C. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Two scenes were analyzed by applying an iterative cluster analysis to a 2% random data sample and then using the resulting clusters as a training set basis for maximum likelihood classification. Twenty-six and twenty-seven categorical classes, respectively resulted from this process. The majority of classes in each case were quite specific vegetation types; each of these types has specific value as moose habitat.

  15. Lessons from a Space Analog on Adaptation for Long-Duration Exploration Missions.

    PubMed

    Anglin, Katlin M; Kring, Jason P

    2016-04-01

    Exploration missions to asteroids and Mars will bring new challenges associated with communication delays and more autonomy for crews. Mission safety and success will rely on how well the entire system, from technology to the human elements, is adaptable and resilient to disruptive, novel, or potentially catastrophic events. The recent NASA Extreme Environment Missions Operations (NEEMO) 20 mission highlighted this need and produced valuable "lessons learned" that will inform future research on team adaptation and resilience. A team of NASA, industry, and academic members used an iterative process to design a tripod shaped structure, called the CORAL Tower, for two astronauts to assemble underwater with minimal tools. The team also developed assembly procedures, administered training to the crew, and provided support during the mission. During the design, training, and assembly of the Tower, the team learned first-hand how adaptation in extreme environments depends on incremental testing, thorough procedures and contingency plans that predict possible failure scenarios, and effective team adaptation and resiliency for the crew and support personnel. Findings from NEEMO 20 provide direction on the design and testing process for future space systems and crews to maximize adaptation. This experience also underscored the need for more research on team adaptation, particularly how input and process factors affect adaption outcomes, the team adaptation iterative process, and new ways to measure the adaptation process.

  16. Factors influencing the occupational injuries of physical therapists in Taiwan: A hierarchical linear model approach.

    PubMed

    Tao, Yu-Hui; Wu, Yu-Lung; Huang, Wan-Yun

    2017-01-01

    The literature suggests that physical therapy practitioners are subject to a high probability of acquiring work-related injuries, but only a few studies have specifically investigated Taiwanese physical therapy practitioners. This study was conducted to determine the relationships among individual-level and group (hospital)-level factors that contribute to the medical expenses for the occupational injuries of physical therapy practitioners in Taiwan. Physical therapy practitioners in Taiwan with occupational injuries were selected from the 2013 National Health Insurance Research Databases (NHIRD). The age, gender, job title, hospital attributes, and outpatient data of physical therapy practitioners who sustained an occupational injury in 2013 were obtained with SAS 9.3. SPSS 20.0 and HLM 7.01 were used to conduct descriptive and hierarchical linear model analyses, respectively. The job title of physical therapy practitioners at the individual level and the hospital type at the group level exert positive effects on per-person medical expenses. Hospital hierarchy moderates the individual-level relationships of age and job title with the per-person medical expenses. Considering that age, job title, and hospital hierarchy affect medical expenses for the occupational injuries of physical therapy practitioners, we suggest strengthening related safety education and training and elevating the self-awareness of the risk of occupational injuries of physical therapy practitioners to reduce and prevent the occurrence of such injuries.
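
    A hierarchical (mixed-effects) analysis of this kind can be sketched with a random intercept for hospitals at the group level and practitioner-level covariates at the individual level; the synthetic data frame and variable names below are illustrative placeholders, not NHIRD fields, and the cross-level moderation terms of the study are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Random-intercept model: practitioners (level 1) nested within hospitals (level 2).
rng = np.random.default_rng(0)
n, n_hospitals = 300, 30
df = pd.DataFrame({
    "hospital": rng.integers(0, n_hospitals, n),
    "age": rng.normal(40, 10, n),
    "therapist": rng.integers(0, 2, n),            # 1 = physical therapist, 0 = assistant
})
hospital_effect = rng.normal(0, 50, n_hospitals)   # group-level variation
df["expense"] = (200 + 3 * df["age"] + 80 * df["therapist"]
                 + hospital_effect[df["hospital"]] + rng.normal(0, 30, n))

fit = smf.mixedlm("expense ~ age + therapist", df, groups=df["hospital"]).fit()
print(fit.summary())
```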

  17. Results-driven approach to improving quality and productivity

    Treesearch

    John Dramm

    2000-01-01

    Quality control (QC) programs do not often realize their full potential. Elaborate and expensive QC programs can easily get side tracked by the process of building a program with promises of “Someday, this will all pay off.” Training employees in QC methods is no guarantee that quality will improve. Several documented cases show that such activity-centered efforts...

  18. Moving Target Information Exploitation Electronic Learning

    DTIC Science & Technology

    2005-10-01

    ...than on a specific schedule, so learners can take the curriculum when needed and at their own pace. E-Learning is a form of instructional authoring...can save training expenses, because it can be used over and over again. E-Learning also allows the user to study at his or her own convenience

  19. Modeling Sustainment Investment

    DTIC Science & Technology

    2015-05-01

    Causal-loop diagram labels: sustainment capacity, sustainment performance gap, Bandwagon Effect (R1), Limits to Growth (B1), Work Smarter (B3), Work Bigger (B2). The simulation shows the effects of decisions and suggests how to prevent problems before they become too expensive. Next: a sample piece of the simulation model

  20. Child Nutrition Programs: Reauthorization and Budget Issues. Hearing before the Subcommittee on Nutrition of the Committee on Agriculture, Nutrition, and Forestry. United States Senate, Ninety-Ninth Congress, First Session.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Senate Committee on Agriculture, Nutrition, and Forestry.

    Witnesses offered testimony bearing on budget issues and the reauthorization of the Women, Infants, and Children (WIC) Programs; the Special Supplemental Summer Food Program; the State Administrative Expense Program; the Commodity Distribution Program; and the Nutrition Education and Training Program. Testimony concerning permanently authorized…

  1. 40 CFR 35.4075 - Are there things my group can't spend TAG money for?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Are there things my group can't spend... Can Pay For § 35.4075 Are there things my group can't spend TAG money for? Your TAG funds cannot be... other training expenses for your group's members or your technical advisor except as § 35.4070(b)(3...

  2. Sprint Interval Training Induces A Sexual Dimorphism but does not Improve Peak Bone Mass in Young and Healthy Mice

    PubMed Central

    Koenen, Kathrin; Knepper, Isabell; Klodt, Madlen; Osterberg, Anja; Stratos, Ioannis; Mittlmeier, Thomas; Histing, Tina; Menger, Michael D.; Vollmar, Brigitte; Bruhn, Sven; Müller-Hilke, Brigitte

    2017-01-01

    Elevated peak bone mass in early adulthood reduces the risk for osteoporotic fractures at old age. As sports participation has been correlated with elevated peak bone masses, we aimed to establish a training program that would efficiently stimulate bone accrual in healthy young mice. We combined voluntary treadmill running with sprint interval training modalities that were tailored to the individual performance limits and were of either high or intermediate intensity. Adolescent male and female STR/ort mice underwent 8 weeks of training before the hind legs were analyzed for cortical and trabecular bone parameters and biomechanical strength. Sprint interval training led to increased running speeds, confirming an efficient training. However, males and females responded differently. The males improved their running speeds in response to intermediate intensities only and accrued cortical bone at the expense of mechanical strength. High training intensities induced a significant loss of trabecular bone. The female bones showed neither adverse nor beneficial effects in response to either training intensity. Speculations about the failure to improve geometric alongside mechanical bone properties include the possibility that our training lacked sufficient axial loading, that high cardio-vascular strains adversely affect bone growth and that there are physiological limits to bone accrual. PMID:28303909

  3. SYMPTEK homemade foam models for client education and emergency obstetric care skills training in low-resource settings.

    PubMed

    Deganus, Sylvia A

    2009-10-01

    Clinical training for health care workers using anatomical models and simulation has become an established norm. A major requirement for this approach is the availability of lifelike training models or simulators for skills practice. Manufactured sophisticated human models such as the resuscitation neonatal dolls, the Zoë gynaecologic simulator, and other pelvic models are very expensive, and are beyond the budgets of many training programs or activities in low-resource countries. Clinical training programs in many low-resource countries suffer greatly because of this cost limitation. Yet it is also in these same poor countries that the need for skilled human resources in reproductive health is greatest. The SYMPTEK homemade models were developed in response to the need for cheaper, more readily available humanistic models for training in emergency obstetric skills and also for client education. With minimal training, a variety of cheap SYMPTEK models can easily be made, by both trainees and facilitators, from high-density latex foam material commonly used for furnishings. The models are reusable, durable, portable, and easily maintained. The uses, advantages, disadvantages, and development of the SYMPTEK foam models are described in this article.

  4. A methodology for analysing lateral coupled behavior of high speed railway vehicles and structures

    NASA Astrophysics Data System (ADS)

    Antolín, P.; Goicolea, J. M.; Astiz, M. A.; Alonso, A.

    2010-06-01

    The continuous increase in the speed of high-speed trains entails a corresponding increase in their kinetic energy. The main goal of this article is to study the coupled lateral behavior of vehicle-structure systems for high-speed trains. Nonlinear finite element methods are used for the structures, whereas multibody dynamics methods are employed for the vehicles. Special attention must be paid to the rolling contact constraints that couple bridge decks and train wheels: the dynamic models must include mixed variables (displacements and creepages), and the contact algorithms must be adequate for wheel-rail contact. The coupled vehicle-structure system is studied in an implicit dynamic framework. Because very different subsystems (trains and bridges) are present, different frequencies are involved in the problem, leading to stiff systems. Regarding contact methods, normal contact between train wheels and bridge decks is handled with a penalty method. For tangential contact, the FastSim algorithm solves the contact at each time step by solving a differential equation involving relative displacements and creepage variables. Integration to compute the total forces over the contact ellipse domain is performed for each train wheel at each solver iteration. Coupling between trains and bridges requires special treatment according to the kinematic constraints imposed at the wheel-rail pair and the load transmission. A numerical example is presented.
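
    Two of the ingredients named above can be sketched compactly: a penalty-type normal contact law that produces force only under penetration, and a saturated linear creep-force law standing in for the tangential solution (the full FastSim discretization of the contact ellipse is not reproduced). The stiffness, exponent, and friction limit below are illustrative values, not calibrated wheel-rail parameters.

```python
import numpy as np

def penalty_normal_force(penetration, k=1.0e8, exponent=1.5):
    """Penalty-type normal contact: no force when separated, Hertz-like stiffness when penetrating."""
    return k * np.maximum(penetration, 0.0) ** exponent

def saturated_creep_force(creepage, f_coulomb, k_lin=2.0e6):
    """Linear creep force saturated at the Coulomb limit -- a crude stand-in for FastSim."""
    return np.clip(k_lin * creepage, -f_coulomb, f_coulomb)

print(penalty_normal_force(np.array([-1e-4, 0.0, 2e-4])))          # [0, 0, ~283] N
print(saturated_creep_force(np.array([-0.05, 0.001, 0.05]), 5e4))  # saturates at +/- 50 kN
```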

  5. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE PAGES

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    2018-01-09

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  6. Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.

    PubMed

    Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G

    2014-07-01

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
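
    Structurally, the model extends the classical radiosity fixed point B = E + diag(rho) F B with an additional subsurface-scattering matrix S, so the iteration becomes B = E + (diag(rho) F + S) B. The Jacobi-style sketch below uses toy form factors and a uniform illustrative S; it shows the matrix structure only, not the paper's construction of the scattering matrix.

```python
import numpy as np

def solve_radiosity(E, F, rho, S=None, iters=200):
    """Jacobi-style iteration for B = E + (diag(rho) @ F + S) @ B, i.e. classical radiosity
       with an optional subsurface-scattering coupling matrix S added to the transport operator."""
    T = np.diag(rho) @ F
    if S is not None:
        T = T + S
    B = E.copy()
    for _ in range(iters):
        B = E + T @ B
    return B

n = 4
E = np.array([1.0, 0.0, 0.0, 0.0])                    # only patch 0 emits
F = np.full((n, n), 0.2)
np.fill_diagonal(F, 0.0)                               # toy form factors, rows sum to 0.6
rho = np.full(n, 0.6)                                  # diffuse reflectances
S = 0.05 * (np.ones((n, n)) - np.eye(n))               # weak illustrative subsurface coupling
print(solve_radiosity(E, F, rho))                      # opaque diffuse case
print(solve_radiosity(E, F, rho, S))                   # with the subsurface term
```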

  7. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.

  8. Numerical optimization in Hilbert space using inexact function and gradient evaluations

    NASA Technical Reports Server (NTRS)

    Carter, Richard G.

    1989-01-01

    Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in trust region literature. The conditions concerning allowable error are remarkably relaxed: for example, the gradient error condition is automatically satisfied whenever the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
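
    For readers unfamiliar with the basic machinery, the sketch below runs a plain trust-region loop with a Cauchy-point step and the usual accept/expand/shrink logic on exact function and gradient values; the paper's contribution, tolerating inexact evaluations, is not implemented, and the unit-Hessian model and update thresholds are conventional illustrative choices.

```python
import numpy as np

def trust_region_cauchy(f, grad, x0, delta=1.0, eta=0.1, max_iter=200, tol=1e-6):
    """Plain trust-region loop with a Cauchy-point step on a unit-Hessian model.
       Accepts the step when the actual/predicted reduction ratio exceeds eta,
       then grows or shrinks the trust radius with conventional thresholds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        step = -min(delta / gnorm, 1.0) * g            # steepest descent, clipped to the region
        pred = -(g @ step) - 0.5 * (step @ step)       # model decrease, model Hessian = I
        rho = (f(x) - f(x + step)) / max(pred, 1e-16)
        if rho > eta:
            x = x + step                               # accept the trial point
        if rho > 0.75:
            delta *= 2.0
        elif rho < 0.25:
            delta *= 0.5
    return x

print(trust_region_cauchy(lambda x: float((x**2).sum()), lambda x: 2.0 * x, [3.0, -4.0]))
```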

  9. Higher-Order Extended Lagrangian Born-Oppenheimer Molecular Dynamics for Classical Polarizable Models.

    PubMed

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M N

    2018-02-13

    Generalized extended Lagrangian Born-Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate "shadow" potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  10. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signalization enables better spectral efficiency at the expense of increased computational complexity. In multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption of the interference, which is used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.
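
    The normality assumption mentioned above leads to the textbook-style approximation in which residual faster-than-Nyquist interference is lumped with the Gaussian noise; the sketch below evaluates the resulting BPSK bit-error probability Q(1/sqrt(sigma_noise^2 + sigma_ISI^2)). This is a generic Gaussian approximation, not the paper's exact closed form, and the interference variance is an assumed input.

```python
import numpy as np
from math import erfc, sqrt

def q_function(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ftn_bpsk_ber(ebn0_db, interference_var):
    """Gaussian-approximation BER for BPSK: residual FTN interference of variance
       `interference_var` (normalized to unit symbol energy) is lumped with the AWGN."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    noise_var = 1.0 / (2.0 * ebn0)                     # per-dimension noise variance, Eb = 1
    return q_function(1.0 / np.sqrt(noise_var + interference_var))

for snr_db in (0, 4, 8):
    print(snr_db, "dB ->", ftn_bpsk_ber(snr_db, interference_var=0.05))
```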

  11. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
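
    One trust-region subproblem of an RBF-based method can be sketched as follows: fit RBF surrogates to the sampled objective and constraint, then minimize the objective surrogate inside a box trust region subject to the constraint surrogate plus a small margin. The toy objective/constraint, trust-region radius, and margin below are assumptions; this is not the CONORBIT implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

# Fit surrogates to the sampled expensive functions, then minimize the objective surrogate
# in a box around the best feasible point, with a small margin to favour feasible iterates.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, (30, 2))
f = (X**2).sum(axis=1)                       # "expensive" objective samples (toy stand-in)
g = X.sum(axis=1) - 1.0                      # "expensive" constraint samples, g(x) <= 0

f_hat = RBFInterpolator(X, f, kernel="cubic")
g_hat = RBFInterpolator(X, g, kernel="cubic")

x_best = X[np.argmin(np.where(g <= 0.0, f, np.inf))]
delta, margin = 0.5, 1e-3
res = minimize(lambda x: f_hat(x[None])[0], x_best,
               constraints=[{"type": "ineq", "fun": lambda x: -(g_hat(x[None])[0] + margin)}],
               bounds=[(xi - delta, xi + delta) for xi in x_best])
print(res.x)   # candidate point to evaluate with the true expensive functions next
```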

  12. Higher-Order Extended Lagrangian Born–Oppenheimer Molecular Dynamics for Classical Polarizable Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albaugh, Alex; Head-Gordon, Teresa; Niklasson, Anders M. N.

    Generalized extended Lagrangian Born−Oppenheimer molecular dynamics (XLBOMD) methods provide a framework for fast iteration-free simulations of models that normally require expensive electronic ground state optimizations prior to the force evaluations at every time step. XLBOMD uses dynamically driven auxiliary degrees of freedom that fluctuate about a variationally optimized ground state of an approximate “shadow” potential which approximates the true reference potential. While the requirements for such shadow potentials are well understood, constructing such potentials in practice has previously been ad hoc, and in this work, we present a systematic development of XLBOMD shadow potentials that match the reference potential to any order. We also introduce a framework for combining friction-like dissipation for the auxiliary degrees of freedom with general-order integration, a combination that was not previously possible. These developments are demonstrated with a simple fluctuating charge model and point induced dipole polarization models.

  13. A Framework to Debug Diagnostic Matrices

    NASA Technical Reports Server (NTRS)

    Kodal, Anuradha; Robinson, Peter; Patterson-Hine, Ann

    2013-01-01

    Diagnostics is an important concept in system health and monitoring of space operations. Many existing diagnostic algorithms utilize system knowledge in the form of a diagnostic matrix (D-matrix, also known as a diagnostic dictionary, fault signature matrix, or reachability matrix) gleaned from physical models. However, this matrix is sometimes not accurate enough to yield high diagnostic performance. In such cases, it is important to modify the D-matrix based on knowledge obtained from other sources, such as time-series data streams (simulated or maintenance data), within the context of a framework that includes the diagnostic/inference algorithm. A systematic and sequential update procedure, the diagnostic modeling evaluator (DME), is proposed to modify the D-matrix and wrapper logic, considering the least expensive solution first. This iterative procedure covers modifications ranging from flipping 0s and 1s in the matrix to adding or removing rows (failure sources) and columns (tests). We evaluate this framework on datasets from the DX Challenge 2009.
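
    To make the idea concrete, here is a hedged, much-simplified sketch of a DME-like repair step (ours, not the proposed framework; the Hamming-distance diagnoser and the toy data are assumptions): given a D-matrix with rows as failure sources and columns as tests, plus labelled pass/fail observations, the cheapest class of fix, flipping a single 0/1 entry, is tried first and kept only if it improves diagnostic accuracy.

```python
# Hedged sketch of a cheapest-first D-matrix repair against labelled data.
import numpy as np

def diagnose(d_matrix, observation):
    """Failure source whose expected pass(0)/fail(1) signature best matches the
    observed test vector (minimum Hamming distance)."""
    return int(np.argmin(np.abs(d_matrix - observation).sum(axis=1)))

def accuracy(d_matrix, observations, true_faults):
    hits = sum(diagnose(d_matrix, o) == t for o, t in zip(observations, true_faults))
    return hits / len(true_faults)

def repair_one_entry(d_matrix, observations, true_faults):
    """Try every single 0/1 flip and keep the one that helps accuracy most."""
    best, best_acc = d_matrix, accuracy(d_matrix, observations, true_faults)
    for i in range(d_matrix.shape[0]):
        for j in range(d_matrix.shape[1]):
            trial = d_matrix.copy()
            trial[i, j] = 1 - trial[i, j]
            acc = accuracy(trial, observations, true_faults)
            if acc > best_acc:
                best, best_acc = trial, acc
    return best, best_acc

# Toy data: row 1 of the D-matrix is inaccurate, so both fault-1 cases are
# misdiagnosed; a single 0/1 flip is enough to recover them.
d = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
obs = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
faults = [0, 1, 1, 2]
print(repair_one_entry(d, obs, faults))
```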

  14. The Zernike expansion--an example of a merit function for 2D/3D registration based on orthogonal functions.

    PubMed

    Dong, Shuo; Kettenbach, Joachim; Hinterleitner, Isabella; Bergmann, Helmar; Birkfellner, Wolfgang

    2008-01-01

    Current merit functions for 2D/3D registration usually rely on comparing pixels or small regions of images using some sort of statistical measure. A problem connected to this paradigm is the sometimes erratic behaviour of the method if noise or artefacts (for instance a guide wire) are present in the projective image. We present a merit function for 2D/3D registration which utilizes the decomposition of the X-ray and the DRR under comparison into orthogonal Zernike moments; the quality of the match is assessed by an iterative comparison of expansion coefficients. Results of an imaging study on a physical phantom show that, compared to standard cross-correlation, the Zernike-moment-based merit function is more robust when the histogram content of the images under comparison differs, and that its time expense is comparable if the merit function is constructed from only a few significant moments.
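
    As a hedged illustration of the moment-based comparison (a generic discrete Zernike computation on the unit disc, not the authors' implementation, normalization, or choice of orders), the sketch below expands two images in a handful of low-order Zernike moments and scores the match by the summed coefficient difference.

```python
# Illustrative Zernike-moment merit function for comparing two grayscale images.
import math
import numpy as np

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_n^|m|(rho)."""
    m = abs(m)
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + m) // 2 - k)
                * math.factorial((n - m) // 2 - k)))
        out += c * rho ** (n - 2 * k)
    return out

def zernike_moment(img, n, m):
    """Discrete Zernike moment of a square image restricted to the unit disc."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    basis = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / math.pi * np.sum(img[mask] * np.conj(basis[mask])) / mask.sum()

def zernike_merit(img_a, img_b, orders=((0, 0), (1, 1), (2, 0), (2, 2), (3, 1))):
    """Summed absolute difference of a few significant expansion coefficients."""
    return sum(abs(zernike_moment(img_a, n, m) - zernike_moment(img_b, n, m))
               for n, m in orders)

rng = np.random.default_rng(0)
xray = rng.random((64, 64))
drr = xray + 0.01 * rng.random((64, 64))          # nearly identical 'DRR'
print(zernike_merit(xray, drr), zernike_merit(xray, rng.random((64, 64))))
```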

  15. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines, and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation, and step-size control are all performed in physical space, which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme regulated by the path-line curvature. The problems of cell searching, point location, and interpolation in physical space are simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show that this algorithm is up to six times faster than particle tracers that operate on hexahedral cells, yet produces almost identical particle trajectories.
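
    To make the analytic point-location step concrete, here is a hedged sketch reflecting the standard barycentric-coordinate construction rather than the paper's code: a point lies inside a tetrahedron exactly when all four barycentric coordinates are non-negative, and the same weights give the linear velocity interpolation, with no Newton-Raphson iteration required.

```python
# Analytic point location and linear velocity interpolation in one tetrahedron.
import numpy as np

def barycentric(p, verts):
    """Barycentric coordinates of point p w.r.t. a tetrahedron (4x3 vertices)."""
    t = np.column_stack([verts[1] - verts[0], verts[2] - verts[0], verts[3] - verts[0]])
    l123 = np.linalg.solve(t, p - verts[0])
    return np.concatenate([[1.0 - l123.sum()], l123])

def inside(p, verts, tol=1e-12):
    return bool(np.all(barycentric(p, verts) >= -tol))

def interpolate_velocity(p, verts, nodal_velocities):
    """Linear interpolation of the (4x3) nodal velocities at point p."""
    return barycentric(p, verts) @ nodal_velocities

verts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
vels = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
p = np.array([0.25, 0.25, 0.25])
print(inside(p, verts), interpolate_velocity(p, verts, vels))
```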

  16. A Well-Tempered Hybrid Method for Solving Challenging Time-Dependent Density Functional Theory (TDDFT) Systems.

    PubMed

    Kasper, Joseph M; Williams-Young, David B; Vecharynski, Eugene; Yang, Chao; Li, Xiaosong

    2018-04-10

    The time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) equations allow one to probe electronic resonances of a system quickly and inexpensively. However, the iterative solution of the eigenvalue problem can be difficult or impossible to converge with standard methods such as the Davidson algorithm when targeting spectrally dense regions in the interior of the spectrum, as are common in X-ray absorption spectroscopy (XAS). More robust solvers, such as the generalized preconditioned locally harmonic residual (GPLHR) method, can alleviate this problem, but at the expense of a higher average computational cost. A hybrid method is proposed which adapts to the problem in order to maximize computational performance while providing the superior convergence of GPLHR. In addition, a modification to the GPLHR algorithm is proposed to choose the shift parameter adaptively so as to enforce convergence of states above a predefined energy threshold.
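
    The hybrid strategy can be pictured as a cheap-first dispatch with a robust fallback. In the hedged stand-in below, SciPy's Lanczos and shift-invert solvers play the roles of the cheap Davidson-like and robust GPLHR-like solvers respectively (the paper's actual algorithms are not reproduced), and the fallback is triggered when the cheap attempt fails to converge or misses the requested interior window.

```python
# Stand-in hybrid eigensolver dispatch: cheap attempt first, robust shift-invert
# fallback near a target interior energy. Only a schematic analogy to the paper.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def hybrid_interior_eigs(a, target_energy, k=4):
    try:
        vals, vecs = spla.eigsh(a, k=k, which="SA")           # cheap attempt
        converged = True
    except spla.ArpackNoConvergence:
        converged = False
    if not converged or np.all(vals < target_energy):         # missed the window
        vals, vecs = spla.eigsh(a, k=k, sigma=target_energy, which="LM")
    return vals, vecs

a = sp.diags(np.linspace(0.0, 10.0, 200)).tocsc()             # toy Hamiltonian
print(hybrid_interior_eigs(a, target_energy=5.0)[0])
```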

  17. A hybrid formalism of aerosol gas phase interaction for 3-D global models

    NASA Astrophysics Data System (ADS)

    Benduhn, F.

    2009-04-01

    Aerosol chemical composition is a relevant factor in the global climate system with respect to both atmospheric chemistry and the aerosol direct and indirect effects. Aerosol chemical composition determines the capacity of aerosol particles to act as cloud condensation nuclei, both explicitly via particle size and implicitly via the aerosol hygroscopic properties. Owing to the primary role of clouds in the climate system and the sensitivity of cloud formation and radiative properties to the cloud droplet number, the chemical composition of the aerosol must be determined accurately. Dissolution, although formally a fairly well understood process, may exhibit numerically prohibitive behaviour resulting from the chemical interaction of the species involved. Approaches to date for modelling the dissolution of inorganics into the aerosol liquid phase within a 3-D global model have been based on an equilibrium, transient, or hybrid equilibrium-transient approach. All of these methods have the disadvantage of a priori assumptions about the mechanism and/or are numerically not manageable in the context of a global climate system model. In this paper a new hybrid formalism for aerosol gas phase interaction is presented within the framework of the H2SO4/HNO3/HCl/NH3 system and a modal approach to aerosol size discretisation. The formalism is distinct from prior hybrid approaches inasmuch as no a priori assumption is made about the regime a particular aerosol mode is in. Whether a particular mode is placed in the equilibrium or the transient regime is determined continuously during each time increment against relevant criteria, considering the estimated equilibration time interval and the interdependence of the aerosol modes with respect to the partitioning of the dissolving species. In this way the range of aerosol compositions over which species interaction makes transient dissolution numerically stiff is effectively avoided, and the numerical expense of dissolution in the transient regime is reduced by minimising the number of modes in this regime and by using a larger time step. The numerical expense of the modes in the equilibrium regime is contained through the use of either an analytical equilibrium solver that requires iteration among the equilibrium modes, or a simple numerical solver based on a differential approach that requires iteration among the chemical species. Both equilibrium solvers require iteration over the water content and the activity coefficients. The decision to use one solver or the other is made by considering the actual equilibrating mechanism, either chemical interaction or gas phase partial pressure variation, respectively. The formalism should thus combine appropriate process simplification, yielding reasonable computation times, with a high degree of fidelity to the real process, as ensured by a transient representation of dissolution. The effectiveness and limits of the formalism are illustrated with numerical examples.
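
    The per-mode regime decision can be sketched as follows (a schematic reading of the abstract only; the actual criteria also weigh the interdependence of the modes with respect to the partitioning species, and the `safety` factor is an assumption): at each time increment, a mode whose estimated equilibration time is short relative to the model time step is handled by an equilibrium solver, and only the remaining modes are integrated transiently.

```python
# Schematic per-time-step regime assignment for aerosol modes.
from dataclasses import dataclass

@dataclass
class Mode:
    name: str
    equilibration_time: float   # seconds, re-estimated each time increment

def assign_regimes(modes, dt, safety=0.1):
    """Split modes into equilibrium and transient sets for the current step."""
    equilibrium = [m for m in modes if m.equilibration_time <= safety * dt]
    transient = [m for m in modes if m.equilibration_time > safety * dt]
    return equilibrium, transient

modes = [Mode("nucleation", 5.0), Mode("accumulation", 600.0), Mode("coarse", 9000.0)]
print(assign_regimes(modes, dt=1800.0))
```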

  18. Decoding the attended speech stream with multi-channel EEG: implications for online, daily-life applications

    NASA Astrophysics Data System (ADS)

    Mirkovic, Bojana; Debener, Stefan; Jaeger, Manuela; De Vos, Maarten

    2015-08-01

    Objective. Recent studies have provided evidence that temporal envelope driven speech decoding from high-density electroencephalography (EEG) and magnetoencephalography recordings can identify the attended speech stream in a multi-speaker scenario. The present work replicated the previous high density EEG study and investigated the necessary technical requirements for practical attended speech decoding with EEG. Approach. Twelve normal hearing participants attended to one out of two simultaneously presented audiobook stories, while high density EEG was recorded. An offline iterative procedure eliminating those channels contributing the least to decoding provided insight into the necessary channel number and optimal cross-subject channel configuration. Aiming towards the future goal of near real-time classification with an individually trained decoder, the minimum duration of training data necessary for successful classification was determined by using a chronological cross-validation approach. Main results. Close replication of the previously reported results confirmed the method robustness. Decoder performance remained stable from 96 channels down to 25. Furthermore, for less than 15 min of training data, the subject-independent (pre-trained) decoder performed better than an individually trained decoder did. Significance. Our study complements previous research and provides information suggesting that efficient low-density EEG online decoding is within reach.
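
    A hedged sketch of the channel-elimination loop (ours; the study's decoder and chronological cross-validation scoring are not reproduced, and `score` below is a placeholder for any such evaluation): repeatedly drop the channel whose removal degrades decoding performance the least until the target channel count is reached.

```python
# Backward channel elimination driven by a user-supplied scoring function.
def eliminate_channels(channels, score, keep=25):
    channels = list(channels)
    while len(channels) > keep:
        # channel whose removal costs the least decoding accuracy
        worst = max(channels, key=lambda ch: score([c for c in channels if c != ch]))
        channels.remove(worst)
    return channels

# Toy usage: pretend accuracy depends only on how many of the first 30 channels
# survive (purely illustrative).
result = eliminate_channels(range(96),
                            score=lambda chs: sum(1 for c in chs if c < 30),
                            keep=25)
print(sorted(result))
```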

  19. Training models of anatomic shape variability

    PubMed Central

    Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang

    2008-01-01

    Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, non-self-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919
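
    As a very rough, hedged caricature of the alternating scheme (all penalties quadratic, a fixed `template` standing in for the geometric legality constraints, and weights chosen arbitrarily; none of this is taken from the paper): each round re-estimates the population statistics from the current fits while the weight on the purely geometric term is annealed toward zero.

```python
# Caricature of jointly fitting shapes and a population model with an annealed
# geometric penalty; each per-case update has a closed form because every term
# is quadratic.
import numpy as np

def train_shape_model(targets, template, n_iters=20, w_geom0=10.0, w_stat=1.0):
    """targets: (n_cases, n_params) per-case segmentation targets;
    template: (n_params,) 'legal' reference shape standing in for geometric
    constraints."""
    shapes = np.tile(template, (len(targets), 1)).astype(float)
    for it in range(n_iters):
        w_geom = w_geom0 * (1.0 - it / n_iters)    # relax geometry over iterations
        mean = shapes.mean(axis=0)                 # evolving population statistics
        # minimizer of ||s - t||^2 + w_geom*||s - template||^2 + w_stat*||s - mean||^2
        shapes = (targets + w_geom * template + w_stat * mean) / (1.0 + w_geom + w_stat)
    return shapes, shapes.mean(axis=0), np.cov(shapes, rowvar=False)

targets = np.random.default_rng(1).normal(size=(5, 3))
print(train_shape_model(targets, template=np.zeros(3))[1])
```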

  20. FM-MAP: A Novel In-Training Examination Predicts Success on Family Medicine Certification Examination.

    PubMed

    Iglar, Karl; Leung, Fok-Han; Moineddin, Rahim; Herold, Jodi

    2017-05-01

    The objective of our study was to assess the correlation between a locally developed In-Training Examination (ITE) and the certification examination in family medicine in Canada. The ITE was taken twice yearly, which for most residents corresponded to the fifth, ninth, 17th, and 21st month of training. The ITE results were correlated with the CFPC certification examination taken in the 23rd month of residency. The scores on each of the four iterations of the ITE correlated moderately well with performance on the problem-solving skills and knowledge components of the certification examination. The ITE showed a trend toward increased correlation with time in the training program, with the Spearman correlation coefficient increasing from 0.45 on the first test to 0.54 on the fourth test. The correlation of the ITE with performance on the certification examination component assessing the doctor-patient relationship was poor (r=0.26 on the last test). Our in-training examination is a useful predictor of performance in the problem-solving and knowledge domains of the family medicine expert role on the certification examination.
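
    For readers unfamiliar with the statistic, the reported values are Spearman rank correlations; a minimal sketch with purely hypothetical scores (not the study's data) is shown below.

```python
# Spearman rank correlation between in-training and certification exam scores.
from scipy.stats import spearmanr

ite_scores = [61, 74, 58, 80, 69, 77, 66, 71]    # hypothetical resident ITE scores
cert_scores = [63, 78, 55, 83, 70, 72, 68, 75]   # hypothetical certification scores

rho, p_value = spearmanr(ite_scores, cert_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```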
