Sample records for reduced processing time

  1. DART system analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.

    2005-08-01

    The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the "once-through" time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the "once-through" time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase "inner loop" iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.

  2. Effects of computerized prescriber order entry on pharmacy order-processing time.

    PubMed

    Wietholter, Jon; Sitterson, Susan; Allison, Steven

    2009-08-01

    The effect of computerized prescriber order entry (CPOE) on the efficiency of medication-order-processing time was evaluated. This study was conducted at a 761-bed, tertiary care hospital. A total of 2988 medication orders were collected and analyzed before (n = 1488) and after CPOE implementation (n = 1500). Data analyzed included the time the prescriber ordered the medication, the time the pharmacy received the order, and the time the order was completed by a pharmacist. The mean order-processing time before CPOE implementation was 115 minutes from prescriber composition to pharmacist verification. After CPOE implementation, the mean order-processing time was reduced to 3 minutes (p < 0.0001). The time that an order was received by the pharmacy to the time it was verified by a pharmacist was reduced from 31 minutes before CPOE implementation to 3 minutes after CPOE implementation (p < 0.0001). The implementation of CPOE reduced the order-processing time (from order composition to verification) by 97%. Additionally, pharmacy-specific order-processing time (from order receipt in the pharmacy to pharmacist verification) was reduced by 90%. This reduction in order-processing time improves patient care by shortening the interval between physician prescribing and medication availability and may allow pharmacists to explore opportunities for enhanced clinical activities that will further positively impact patient care. CPOE implementation reduced the mean pharmacy order-processing time from composition to verification by 97%. After CPOE implementation, a new medication order was verified as appropriate by a pharmacist in three minutes, on average.

  3. Logistics Control Facility: A Normative Model for Total Asset Visibility in the Air Force Logistics System

    DTIC Science & Technology

    1994-09-01

    Issue: Computers, information systems, and communication systems are being increasingly used in transportation, warehousing, order processing, materials...inventory levels, reduced order processing times, reduced order processing costs, and increased customer satisfaction. While purchasing and transportation...process, the speed at which orders are processed would increase significantly. Lowering the order processing time in turn lowers the lead time, which in...

  4. Reducing Design Cycle Time and Cost Through Process Resequencing

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    2004-01-01

    In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
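
    To make the resequencing idea concrete, the sketch below (not the DeMAID tool itself) brute-forces the ordering of a tiny, hypothetical coupling matrix to minimize feedback couplings, the dependencies that point backwards and create iterative subcycles.

    ```python
    from itertools import permutations

    def feedback_count(order, C):
        """Count couplings that flow from a later process back to an earlier one."""
        pos = {p: k for k, p in enumerate(order)}
        return sum(
            1
            for i in range(len(C))
            for j in range(len(C))
            if C[i][j] and pos[j] > pos[i]   # process i needs j, but j is scheduled later
        )

    # Hypothetical 4-process coupling matrix: C[i][j] = 1 means process i needs output of j.
    C = [
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 0],
        [0, 0, 0, 0],
    ]

    best = min(permutations(range(len(C))), key=lambda order: feedback_count(order, C))
    print(best, feedback_count(best, C))   # ordering with the fewest feedback couplings
    ```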

  5. Voronoi Tessellation for reducing the processing time of correlation functions

    NASA Astrophysics Data System (ADS)

    Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio

    2018-01-01

    The increase of data volume in Cosmology is motivating the search for new solutions to the difficulties associated with the large processing time and precision of calculations. This is especially true in the case of several relevant statistics of the galaxy distribution of the Large Scale Structure of the Universe, namely the two- and three-point angular correlation functions. For these, the processing time has grown critically with the increase of the size of the data sample. Beyond parallel implementations to overcome the barrier of processing time, space partitioning algorithms are necessary to reduce the computational load. These can delimit the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi Tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof-of-concept show a significant reduction of the processing time when preprocessing the galaxy positions with Voronoi Tessellation.
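
    A minimal sketch of the space-partitioning idea follows, using SciPy's k-d tree query_pairs as a stand-in for the Voronoi preprocessing described in the abstract: only pairs closer than the maximum separation of interest are counted. The point set, scale, and bin choices are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    points = rng.uniform(0.0, 1.0, size=(10_000, 2))   # mock projected galaxy positions

    r_max = 0.05                        # largest separation contributing to the estimator
    tree = cKDTree(points)
    pairs = tree.query_pairs(r=r_max)   # candidate pairs only, instead of all ~5e7 pairs

    # Histogram of pair separations: the raw DD counts of a two-point estimator.
    seps = np.array([np.linalg.norm(points[i] - points[j]) for i, j in pairs])
    dd, edges = np.histogram(seps, bins=20, range=(0.0, r_max))
    print(len(pairs), dd[:5])
    ```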

  6. Process improvement to enhance existing stroke team activity toward more timely thrombolytic treatment.

    PubMed

    Cho, Han-Jin; Lee, Kyung Yul; Nam, Hyo Suk; Kim, Young Dae; Song, Tae-Jin; Jung, Yo Han; Choi, Hye-Yeon; Heo, Ji Hoe

    2014-10-01

    Process improvement (PI) is an approach for enhancing the existing quality improvement process by making changes while keeping the existing process. We have shown that implementation of a stroke code program using a computerized physician order entry system is effective in reducing the in-hospital time delay to thrombolysis in acute stroke patients. We investigated whether implementation of this PI could further reduce the time delays by continuous improvement of the existing process. After determining a key indicator [time interval from emergency department (ED) arrival to intravenous (IV) thrombolysis] and conducting data analysis, the target time from ED arrival to IV thrombolysis in acute stroke patients was set at 40 min. The key indicator was monitored continuously at a weekly stroke conference. The possible reasons for the delay were determined in cases for which IV thrombolysis was not administered within the target time and, where possible, the problems were corrected. The time intervals from ED arrival to the various evaluation steps and treatment before and after implementation of the PI were compared. The median time interval from ED arrival to IV thrombolysis in acute stroke patients was significantly reduced after implementation of the PI (from 63.5 to 45 min, p=0.001). The variation in the time interval was also reduced. A reduction in the evaluation time intervals was achieved after the PI [from 23 to 17 min for computed tomography scanning (p=0.003) and from 35 to 29 min for complete blood counts (p=0.006)]. PI is effective for continuous improvement of the existing process by reducing the time delays between ED arrival and IV thrombolysis in acute stroke patients.

  7. Innovating the Standard Procurement System Through Electronic Commerce Technologies

    DTIC Science & Technology

    1999-12-01

    commerce are emerging almost daily as businesses continue to realize the overwhelming ability of agent applications to reduce costs and improve...processed using the SPS. The result may reduce cycle time, assist contracting professionals, improve the acquisition process, save money and aid...of innovation processes, and it offers enormous potential for helping organizations achieve major improvements in terms of process cost, time...

  8. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  9. The use of discrete-event simulation modelling to improve radiation therapy planning processes.

    PubMed

    Werker, Greg; Sauré, Antoine; French, John; Shechter, Steven

    2009-07-01

    The planning portion of the radiation therapy treatment process at the British Columbia Cancer Agency is efficient but nevertheless contains room for improvement. The purpose of this study is to show how a discrete-event simulation (DES) model can be used to represent this complex process and to suggest improvements that may reduce the planning time and ultimately reduce overall waiting times. A simulation model of the radiation therapy (RT) planning process was constructed using the Arena simulation software, representing the complexities of the system. Several types of inputs feed into the model; these inputs come from historical data, a staff survey, and interviews with planners. The simulation model was validated against historical data and then used to test various scenarios to identify and quantify potential improvements to the RT planning process. Simulation modelling is an attractive tool for describing complex systems, and can be used to identify improvements to the processes involved. It is possible to use this technique in the area of radiation therapy planning with the intent of reducing process times and subsequent delays for patient treatment. In this particular system, reducing the variability and length of oncologist-related delays contributes most to improving the planning time.
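
    The sketch below illustrates the discrete-event approach with SimPy as an open-source stand-in for the Arena model described; the stage names, resource capacities, and durations are illustrative assumptions, not values from the study.

    ```python
    import random
    import simpy

    random.seed(1)

    def plan(env, oncologists, planners, delays):
        """One treatment plan flowing through two resource-limited stages."""
        arrive = env.now
        with oncologists.request() as req:                   # contouring / approval stage
            yield req
            yield env.timeout(random.expovariate(1 / 2.0))   # assumed mean of 2 days
        with planners.request() as req:                      # dosimetry / planning stage
            yield req
            yield env.timeout(random.expovariate(1 / 3.0))   # assumed mean of 3 days
        delays.append(env.now - arrive)

    def arrivals(env, oncologists, planners, delays):
        while True:
            yield env.timeout(random.expovariate(1 / 1.5))   # a new plan every ~1.5 days
            env.process(plan(env, oncologists, planners, delays))

    env = simpy.Environment()
    oncologists = simpy.Resource(env, capacity=2)
    planners = simpy.Resource(env, capacity=3)
    delays = []
    env.process(arrivals(env, oncologists, planners, delays))
    env.run(until=365)                                       # simulate one year
    print(f"mean planning time: {sum(delays) / len(delays):.1f} days over {len(delays)} plans")
    ```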

  10. Reducing lumber thickness variation using real-time statistical process control

    Treesearch

    Thomas M. Young; Brian H. Bond; Jan Wiedenbeck

    2002-01-01

    A technology feasibility study for reducing lumber thickness variation was conducted from April 2001 until March 2002 at two sawmills located in the southern U.S. A real-time statistical process control (SPC) system was developed that featured Wonderware human machine interface technology (HMI) with distributed real-time control charts for all sawing centers and...
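
    As a concrete illustration of the real-time SPC idea, the sketch below computes X-bar and R control-chart limits for simulated thickness subgroups; the Wonderware HMI and live charting described in the abstract are not modeled, and the data are assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Simulated thickness measurements (inches): 50 subgroups of 5 boards each.
    subgroups = rng.normal(loc=1.020, scale=0.008, size=(50, 5))

    xbar = subgroups.mean(axis=1)                  # subgroup means
    ranges = subgroups.max(axis=1) - subgroups.min(axis=1)

    xbar_bar, r_bar = xbar.mean(), ranges.mean()
    A2, D3, D4 = 0.577, 0.0, 2.114                 # standard constants for subgroup size n = 5

    ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
    ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

    out_of_control = np.where((xbar > ucl_x) | (xbar < lcl_x))[0]
    print(f"X-bar limits: {lcl_x:.4f} .. {ucl_x:.4f}, R chart UCL: {ucl_r:.4f}")
    print("subgroups signaling a thickness shift:", out_of_control)
    ```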

  11. Manufacturing Enhancement through Reduction of Cycle Time using Different Lean Techniques

    NASA Astrophysics Data System (ADS)

    Suganthini Rekha, R.; Periyasamy, P.; Nallusamy, S.

    2017-08-01

    In a modern manufacturing system, the most important production-line parameters are work in process, takt time, and line balancing. In this article, lean tools and techniques were implemented to reduce the cycle time. The aim was to enhance the productivity of the water pump pipe line by identifying bottleneck stations and non-value-added activities. From an initial time study the bottleneck processes were identified, and the process steps requiring improvement were then identified for each bottleneck. Subsequently, improvement actions were established and implemented using lean tools such as value stream mapping, 5S, and line balancing. A current-state value stream map was developed to describe the existing status and to identify problem areas. 5S was used to implement steps that reduce the process cycle time and unnecessary movement of man and material. The suggested improvement activities were implemented and a future-state value stream map was developed. From the results it was concluded that the total cycle time was reduced by about 290.41 seconds and the output available to meet customer demand increased by about 760 units.
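
    The sketch below illustrates the takt-time and line-balancing arithmetic the abstract refers to; the shift length, demand, and station times are illustrative assumptions, not figures from the study.

    ```python
    # Takt time = available production time / customer demand.
    available_time_s = 8 * 60 * 60          # one 8-hour shift in seconds (assumed)
    daily_demand = 760                       # units per day (assumed)

    takt_time = available_time_s / daily_demand           # seconds available per unit
    station_times = [34.0, 41.0, 28.0, 37.5]              # assumed station cycle times (s)
    cycle_time = max(station_times)                        # bottleneck station sets the pace

    # Line balance efficiency = total work content / (stations x bottleneck cycle time).
    balance_efficiency = sum(station_times) / (len(station_times) * cycle_time)
    print(f"takt time: {takt_time:.1f} s/unit, bottleneck: {cycle_time} s")
    print(f"line balance efficiency: {balance_efficiency:.0%}")
    ```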

  12. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest neighbor searching among a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accurate prediction afforded by a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the processing time of the nearest neighbor search used for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
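
    A minimal sketch of the speed-up mechanism described follows: nearest-neighbour lookup over a large database of feature vectors with a k-d tree versus brute force. The feature dimensionality and database size are illustrative, not the Gutenberg Algorithm's actual feature set.

    ```python
    import time
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    database = rng.normal(size=(200_000, 9))   # stored filter-bank feature vectors (mock)
    query = rng.normal(size=(1, 9))            # features from an incoming waveform (mock)

    # Brute force: distance to every record.
    t0 = time.perf_counter()
    brute_idx = np.argmin(np.linalg.norm(database - query, axis=1))
    t_brute = time.perf_counter() - t0

    # k-d tree: built once offline, queried quickly online.
    tree = cKDTree(database)
    t0 = time.perf_counter()
    _, tree_idx = tree.query(query, k=1)
    t_tree = time.perf_counter() - t0

    assert brute_idx == tree_idx[0]
    print(f"brute force: {t_brute * 1e3:.1f} ms, kd-tree query: {t_tree * 1e3:.1f} ms")
    ```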

  13. Six Sigma process utilization in reducing door-to-balloon time at a single academic tertiary care center.

    PubMed

    Kelly, Elizabeth W; Kelly, Jonathan D; Hiestand, Brian; Wells-Kiser, Kathy; Starling, Stephanie; Hoekstra, James W

    2010-01-01

    Rapid reperfusion in patients with ST-elevation myocardial infarction (STEMI) is associated with lower mortality. Reduction in door-to-balloon (D2B) time for percutaneous coronary intervention requires multidisciplinary cooperation, process analysis, and quality improvement methodology. Six Sigma methodology was used to reduce D2B times in STEMI patients presenting to a tertiary care center. Specific steps in STEMI care were determined, time goals were established, and processes were changed to reduce each step's duration. Outcomes were tracked, and timely feedback was given to providers. After process analysis and implementation of improvements, mean D2B times decreased from 128 to 90 minutes. Improvement has been sustained; as of June 2010, the mean D2B was 56 minutes, with 100% of patients meeting the 90-minute window for the year. Six Sigma methodology and immediate provider feedback result in significant reductions in D2B times. The lessons learned may be extrapolated to other primary percutaneous coronary intervention centers. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Lean manufacturing analysis to reduce waste on production process of fan products

    NASA Astrophysics Data System (ADS)

    Siregar, I.; Nasution, A. A.; Andayani, U.; Sari, R. M.; Syahputri, K.; Anizar

    2018-02-01

    This research is based on a case study at an electrical company. One of the products studied is a fan; its production process contains time that is not value-added, including inefficient movement of material among the raw materials and the fan's moulded components. This study aims to reduce waste (non-value-added activities) and shorten the total lead time using Value Stream Mapping. The lean manufacturing methods used to analyze and reduce the non-value-added activities were value stream mapping analysis tools, process activity mapping with 5W1H, and the 5-whys tool. The analysis found that non-value-added activities in the fan production process account for 647.94 minutes of the total lead time of 725.68 minutes, so the process cycle efficiency of the production line is still very low, at 11%. Estimates for the improved process show a decrease in total lead time to 340.9 minutes and a higher process cycle efficiency of 24%, indicating that the production process has improved.
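
    The process-cycle-efficiency figures quoted above follow from a simple ratio of value-added time to total lead time; the sketch below reproduces that arithmetic with the reported numbers (the future-state figure assumes the value-added content is roughly unchanged).

    ```python
    # Process cycle efficiency = value-added time / total lead time.
    total_lead_time = 725.68            # minutes, current state (from the abstract)
    non_value_added = 647.94            # minutes of non-value-added activity
    value_added = total_lead_time - non_value_added   # ~77.7 minutes

    pce_current = value_added / total_lead_time
    pce_future = value_added / 340.9    # future-state lead time, assuming VA time is unchanged

    print(f"current PCE:      {pce_current:.0%}")   # ~11%, as reported
    print(f"future-state PCE: {pce_future:.0%}")    # ~23%, close to the reported 24%
    ```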

  15. Activating clinical trials: a process improvement approach.

    PubMed

    Martinez, Diego A; Tsalatsanis, Athanasios; Yalcin, Ali; Zayas-Castro, José L; Djulbegovic, Benjamin

    2016-02-24

    The administrative process associated with clinical trial activation has been criticized as costly, complex, and time-consuming. Prior research has concentrated on identifying administrative barriers and proposing various solutions to reduce activation time and, consequently, associated costs. Here, we expand on previous research by incorporating social network analysis and discrete-event simulation to support process improvement decision-making. We searched for all operational data associated with the administrative process of activating industry-sponsored clinical trials at the Office of Clinical Research of the University of South Florida in Tampa, Florida. We limited the search to those trials initiated and activated between July 2011 and June 2012. We described the process using value stream mapping, studied the interactions of the various process participants using social network analysis, and modeled potential process modifications using discrete-event simulation. The administrative process comprised 5 sub-processes, 30 activities, 11 decision points, 5 loops, and 8 participants. The mean activation time was 76.6 days. Rate-limiting sub-processes were those of contract and budget development. Key participants during contract and budget development were the Office of Clinical Research, sponsors, and the principal investigator. Simulation results indicate that slight increases in the number of trials arriving at the Office of Clinical Research would increase activation time by 11%. Also, increasing the efficiency of contract and budget development would reduce the activation time by 28%. Finally, better synchronization between contract and budget development would reduce time spent on batching documentation; however, no improvements would be attained in total activation time. The presented process improvement analytic framework not only identifies administrative barriers, but also helps to devise and evaluate potential improvement scenarios. The strength of our framework lies in its system analysis approach, which recognizes the stochastic duration of the activation process and the interdependence between process activities and entities.

  16. Improvement for enhancing effectiveness of universal power system (UPS) continuous testing process

    NASA Astrophysics Data System (ADS)

    Sriratana, Lerdlekha

    2018-01-01

    This experiment aims to enhance the effectiveness of the Universal Power System (UPS) continuous testing process at the Electrical and Electronic Institute by applying work scheduling and time study methods. Initially, the standard time of the testing process had not been established, which resulted in inaccurate testing targets and observable time waste. To monitor and reduce waste time and improve the efficiency of the testing process, a Yamazumi chart and job scheduling theory (the North West Corner Rule) were applied to develop a new work process. After the improvements, the overall efficiency of the process could increase from 52.8% to 65.6%, a gain of 12.7 percentage points. Moreover, waste time could be reduced from 828.3 minutes to 653.6 minutes, or 21%, while the number of units tested per batch could increase from 3 to 4. The number of units tested per month would therefore increase from 12 to 20, which would also raise the net income of the UPS testing process by 72%.
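
    The North West Corner Rule named in the abstract is a standard way to build an initial allocation for a transportation-type scheduling problem; the sketch below shows the rule on hypothetical capacities and requirements, not the study's data.

    ```python
    def north_west_corner(supply, demand):
        """Fill the allocation table from the top-left, exhausting rows and columns in turn."""
        supply, demand = supply[:], demand[:]
        alloc = [[0] * len(demand) for _ in supply]
        i = j = 0
        while i < len(supply) and j < len(demand):
            qty = min(supply[i], demand[j])
            alloc[i][j] = qty
            supply[i] -= qty
            demand[j] -= qty
            if supply[i] == 0:
                i += 1
            else:
                j += 1
        return alloc

    # e.g. testing-bay capacities vs. batch requirements (hypothetical numbers)
    print(north_west_corner([20, 30, 25], [10, 35, 30]))
    ```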

  17. Time phased alternate blending of feed coals for liquefaction

    DOEpatents

    Schweigharett, Frank; Hoover, David S.; Garg, Diwaker

    1985-01-01

    The present invention is directed to a method for reducing process performance excursions during feed coal or process solvent changeover in a coal hydroliquefaction process by blending of feedstocks or solvents over time.

  18. A Controlled Agitation Process for Improving Quality of Canned Green Beans during Agitation Thermal Processing.

    PubMed

    Singh, Anika; Pratap Singh, Anubhav; Ramaswamy, Hosahalli S

    2016-06-01

    This work introduces the concept of a controlled agitation thermal process to reduce quality damage in liquid-particulate products during agitation thermal processing. Reciprocating agitation thermal processing (RA-TP) was used as the agitation thermal process. In order to reduce the impact of agitation, a new concept of "stopping agitations after sufficient development of cold-spot temperature" was proposed. Green beans were processed in No. 2 (307×409) cans filled with liquids of various consistency (0% to 2% CMC) at various frequencies (1 to 3 Hz) of RA-TP using a full-factorial design, and heat penetration results were collected. The corresponding operator's process time to impart a 10-min process lethality (Fo) and agitation time (AT) were calculated using the heat penetration results. Accordingly, products were processed again by stopping agitations under 3 agitation regimes, namely: full-time agitation, equilibration-time agitation, and partial-time agitation. Processed products were photographed and tested for visual quality, color, texture, breakage of green beans, turbidity, and percentage of insoluble solids in can liquid. Results showed that stopping agitations after sufficient development of cold-spot temperatures is an effective way of reducing product damage caused by agitation (for example, breakage of beans and their leaching into the liquid). Agitation until a one-log temperature difference gave the best color, texture, and visual product quality for the low-viscosity liquid-particulate mixture, and extended agitation until the equilibration time was best for high-viscosity products. Thus, it was shown that a controlled agitation thermal process is more effective in obtaining high product quality as compared to a regular agitation thermal process. © 2016 Institute of Food Technologists®

  19. A case study of printing industry plant layout for effective production

    NASA Astrophysics Data System (ADS)

    Viswajit, T.; Teja, T. Ravi; Deepthi, Y. P.

    2017-07-01

    This paper presents an overall picture of the processes in a printing plant. The research aims to improve the layout of the existing plant. Travel time was reduced by relocating machinery, with relocation based on systematic layout planning (SLP). The complete process, from raw material entering the plant to dispatch of the finished product, is shown in a 3-D flow diagram, and the processes on each floor are explained in detail using flow process charts. Travel time was reduced by 25% after modifying the existing plant layout.

  20. Near Real-Time Processing of Proteomics Data Using Hadoop.

    PubMed

    Hillman, Chris; Ahmad, Yasmeen; Whitehorn, Mark; Cobley, Andy

    2014-03-01

    This article presents a near real-time processing solution using MapReduce and Hadoop. The solution is aimed at some of the data management and processing challenges facing the life sciences community. Research into genes and their product proteins generates huge volumes of data that must be extensively preprocessed before any biological insight can be gained. In order to carry out this processing in a timely manner, we have investigated the use of techniques from the big data field. These are applied specifically to process data resulting from mass spectrometers in the course of proteomic experiments. Here we present methods of handling the raw data in Hadoop, and then we investigate a process for preprocessing the data using Java code and the MapReduce framework to identify 2D and 3D peaks.
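
    The sketch below illustrates the MapReduce pattern on a single machine in plain Python, with a map step that bins m/z values and a reduce step that sums intensities per bin; it is a stand-in for the authors' Hadoop/Java implementation, and the bin width and spectra are assumptions.

    ```python
    from collections import defaultdict

    def map_step(scan):
        """Map: emit (m/z bin index, intensity) pairs for one spectrum."""
        bin_width = 0.1                               # assumed binning resolution
        for mz, intensity in scan:
            yield round(mz / bin_width), intensity

    def reduce_step(pairs):
        """Reduce: sum intensities per m/z bin, a crude stand-in for peak picking."""
        totals = defaultdict(float)
        for key, value in pairs:
            totals[key] += value
        return dict(totals)

    # Two hypothetical (m/z, intensity) spectra standing in for raw instrument output.
    scans = [
        [(300.12, 1500.0), (300.18, 900.0), (512.44, 300.0)],
        [(300.15, 1200.0), (512.40, 450.0)],
    ]
    pairs = (pair for scan in scans for pair in map_step(scan))
    print(reduce_step(pairs))
    ```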

  1. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    NASA Astrophysics Data System (ADS)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, many cloud storage systems process files by packaging them only after all packets have been received. In this stored procedure from the local transmitter to the server, packing and unpacking consume a great deal of time, and transmission efficiency is low. A new parallel processing algorithm is proposed to optimize the transmission mode. Following the MapReduce operating model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. Simulation experiments on a Hadoop cloud computing platform show that this algorithm can not only accelerate the file transfer rate but also shorten the waiting time of the Reducer mechanism. It breaks through the constraints of traditional sequential transmission and reduces storage coupling to improve transmission efficiency.
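
    A minimal scatter/map/gather sketch with mpi4py in the spirit of the Mapper/Reducer parallelism the abstract proposes is shown below; the chunk contents are mock data and the script is not the authors' implementation.

    ```python
    # Run with e.g. `mpirun -np 4 python this_script.py`.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        # Root splits the workload (here, mock byte counts of hypothetical file chunks).
        chunks = [list(range(i * 4, i * 4 + 4)) for i in range(size)]
    else:
        chunks = None

    local = comm.scatter(chunks, root=0)          # "Mapper": each rank receives one chunk
    local_result = sum(local)                     # process the chunk in parallel
    results = comm.gather(local_result, root=0)   # "Reducer": root combines partial results

    if rank == 0:
        print("combined:", sum(results))
    ```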

  2. Administrative Preparedness Strategies: Expediting Procurement and Contracting Cycle Times During an Emergency.

    PubMed

    Hurst, David; Sharpe, Sharon; Yeager, Valerie A

    We assessed whether administrative preparedness processes that were intended to expedite the acquisition of goods and services during a public health emergency affect estimated procurement and contracting cycle times. We obtained data from 2014-2015 applications to the Hospital Preparedness Program and Public Health Emergency Preparedness (HPP-PHEP) cooperative agreements. We compared the estimated procurement and contracting cycle times of 61 HPP-PHEP awardees that did and did not have certain administrative processes in place. Certain processes, such as statutes allowing for procuring and contracting on the open market, had an effect on reducing the estimated cycle times for obtaining goods and services. Other processes, such as cooperative purchasing agreements, also had an effect on estimated procurement time. For example, awardees with statutes that permitted them to obtain goods and services in the open market had an average procurement cycle time of 6 days; those without such statutes had a cycle time of 17 days ( P = .04). PHEP awardees should consider adopting these or similar processes in an effort to reduce cycle times.

  3. SU-E-I-37: Low-Dose Real-Time Region-Of-Interest X-Ray Fluoroscopic Imaging with a GPU-Accelerated Spatially Different Bilateral Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, H; Lee, J; Pua, R

    2014-06-01

    Purpose: The purpose of our study is to reduce imaging radiation dose while maintaining image quality of region of interest (ROI) in X-ray fluoroscopy. A low-dose real-time ROI fluoroscopic imaging technique which includes graphics-processing-unit- (GPU-) accelerated image processing for brightness compensation and noise filtering was developed in this study. Methods: In our ROI fluoroscopic imaging, a copper filter is placed in front of the X-ray tube. The filter contains a round aperture to reduce radiation dose to outside of the aperture. To equalize the brightness difference between inner and outer ROI regions, brightness compensation was performed by use of a simple weighting method that applies selectively to the inner ROI, the outer ROI, and the boundary zone. A bilateral filtering was applied to the images to reduce relatively high noise in the outer ROI images. To speed up the calculation of our technique for real-time application, the GPU-acceleration was applied to the image processing algorithm. We performed a dosimetric measurement using an ion-chamber dosimeter to evaluate the amount of radiation dose reduction. The reduction of calculation time compared to a CPU-only computation was also measured, and the assessment of image quality in terms of image noise and spatial resolution was conducted. Results: More than 80% of dose was reduced by use of the ROI filter. The reduction rate depended on the thickness of the filter and the size of ROI aperture. The image noise outside the ROI was remarkably reduced by the bilateral filtering technique. The computation time for processing each frame image was reduced from 3.43 seconds with single CPU to 9.85 milliseconds with GPU-acceleration. Conclusion: The proposed technique for X-ray fluoroscopy can substantially reduce imaging radiation dose to the patient while maintaining image quality particularly in the ROI region in real-time.
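
    The sketch below reproduces the two image-processing steps described, brightness compensation between the inner and outer ROI and bilateral filtering of the noisier outer region, on the CPU with OpenCV as a stand-in for the GPU implementation; the frame, ROI geometry, and gain rule are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    frame = np.random.randint(0, 255, (512, 512), dtype=np.uint8)   # mock fluoroscopy frame
    cy, cx, radius = 256, 256, 120                                   # assumed circular ROI

    yy, xx = np.ogrid[:frame.shape[0], :frame.shape[1]]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

    # Brightness compensation: scale the attenuated outer region toward the inner mean.
    gain = frame[inside].mean() / max(frame[~inside].mean(), 1.0)
    compensated = frame.astype(np.float32)
    compensated[~inside] *= gain
    compensated = np.clip(compensated, 0, 255).astype(np.uint8)

    # Edge-preserving noise reduction, kept only where the dose (and SNR) is low.
    filtered = cv2.bilateralFilter(compensated, 9, 50, 50)
    output = np.where(inside, compensated, filtered)
    print(output.shape, output.dtype)
    ```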

  4. Eye Movements while Reading Biased Homographs: Effects of Prior Encounter and Biasing Context on Reducing the Subordinate Bias Effect

    PubMed Central

    Leinenger, Mallorie; Rayner, Keith

    2013-01-01

    Readers experience processing difficulties when reading biased homographs preceded by subordinate-biasing contexts. Attempts to overcome this processing deficit have often failed to reduce the subordinate bias effect (SBE). In the present studies, we examined the processing of biased homographs preceded by single-sentence, subordinate-biasing contexts, and varied whether this preceding context contained a prior instance of the homograph or a control word/phrase. Having previously encountered the homograph earlier in the sentence reduced the SBE for the subsequent encounter, while simply instantiating the subordinate meaning produced processing difficulty. We compared these reductions in reading times to differences in processing time between dominant-biased repeated and non-repeated conditions in order to verify that the reductions observed in the subordinate cases did not simply reflect a general repetition benefit. Our results indicate that a strong, subordinate-biasing context can interact during lexical access to overcome the activation from meaning frequency and reduce the SBE during reading. PMID:24073328

  5. Reducing RN Vacancy Rate: A Nursing Recruitment Office Process Improvement Project.

    PubMed

    Hisgen, Stephanie A; Page, Nancy E; Thornlow, Deirdre K; Merwin, Elizabeth I

    2018-06-01

    The aim of this study was to reduce the RN vacancy rate at an academic medical center by improving the hiring process in the Nursing Recruitment Office. Inability to fill RN positions can lead to higher vacancy rates and negatively impact staff and patient satisfaction, quality outcomes, and the organization's bottom line. The Model for Improvement was used to design and implement a process improvement project to improve the hiring process from time of interview through the position being filled. Number of days to interview and check references decreased significantly, but no change in overall time to hire and time to fill positions was noted. RN vacancy rate also decreased significantly. Nurse manager satisfaction with the hiring process increased significantly. Redesigning the recruitment process supported operational efficiencies of the organization related to RN recruitment.

  6. Surgical scheduling: a lean approach to process improvement.

    PubMed

    Simon, Ross William; Canacari, Elena G

    2014-01-01

    A large teaching hospital in the northeast United States had an inefficient, paper-based process for scheduling orthopedic surgery that caused delays and contributed to site/side discrepancies. The hospital's leaders formed a team with the goals of developing a safe, effective, patient-centered, timely, efficient, and accurate orthopedic scheduling process; smoothing the schedule so that block time was allocated more evenly; and ensuring correct site/side. Under the resulting process, real-time patient information is entered into a database during the patient's preoperative visit in the surgeon's office. The team found the new process reduced the occurrence of site/side discrepancies to zero, reduced instances of changing the sequence of orthopedic procedures by 70%, and increased patient satisfaction. Copyright © 2014 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  7. FT-NIR: A Tool for Process Monitoring and More.

    PubMed

    Martoccia, Domenico; Lutz, Holger; Cohen, Yvan; Jerphagnon, Thomas; Jenelten, Urban

    2018-03-30

    With ever-increasing pressure to optimize product quality, to reduce cost and to safely increase production output from existing assets, all combined with regular changes in terms of feedstock and operational targets, process monitoring with traditional instruments reaches its limits. One promising answer to these challenges is in-line, real time process analysis with spectroscopic instruments, and above all Fourier-Transform Near Infrared spectroscopy (FT-NIR). Its potential to afford decreased batch cycle times, higher yields, reduced rework and minimized batch variance is presented and application examples in the field of fine chemicals are given. We demonstrate that FT-NIR can be an efficient tool for improved process monitoring and optimization, effective process design and advanced process control.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaf, S.; APS Engineering Support Division

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  9. Analysis of holding time variations to Ni and Fe content and morphology in nickel laterite limonitic reduction process by using coal-dolomite bed

    NASA Astrophysics Data System (ADS)

    Abdul, Fakhreza; Pintowantoro, Sungging; Yuwandono, Ridwan Bagus

    2018-04-01

    With the depletion of nickel sulfide ore resources, laterite nickel processing has attracted attention as a way to meet world nickel demand. Reducing laterite nickel ore using a low-cost carbonaceous reductant has been shown to produce a high-grade ferronickel alloy. In this research, reduction was carried out on low-grade laterite nickel ore (limonite) with 1.25% nickel content using a CO gas reductant formed by the reaction between coal and dolomite. The reduction process began by forming briquettes from a mixture of limonite ore, coal, and Na2SO4; the briquettes were then placed inside a crucible bed together with dolomite and reduced at a temperature of 1400 °C with holding time variations of 4, 6, and 8 hours. EDX, XRD, and SEM tests were carried out to determine the Ni and Fe grades after reduction, the phases formed, and the morphology of the briquettes after reduction. The results show that the highest increase in nickel grade was obtained with the 8-hour holding time, an increase of 5.84% from the initial grade, while the highest Ni recovery (88.51%) was obtained with the 6-hour holding time. The highest increase in Fe grade and the highest Fe recovery (85.41%) were both obtained with the 4-hour holding time.

  10. Information Technology Project Processes: Understanding the Barriers to Improvement and Adoption

    ERIC Educational Resources Information Center

    Williams, Bernard L.

    2009-01-01

    Every year, organizations lose millions of dollars due to IT (Information Technology) project failures. Over time, organizations have developed processes and procedures to help reduce the incidence of challenged IT projects. Research has shown that IT project processes can work to help reduce the number of challenged projects. The research in this…

  11. Applying industrial process improvement techniques to increase efficiency in a surgical practice.

    PubMed

    Reznick, David; Niazov, Lora; Holizna, Eric; Siperstein, Allan

    2014-10-01

    The goal of this study was to examine how industrial process improvement techniques could help streamline the preoperative workup. Lean process improvement was used to streamline patient workup at an endocrine surgery service at a tertiary medical center utilizing multidisciplinary collaboration. The program consisted of several major changes in how patients are processed in the department. The goal was to shorten the wait time between initial call and consult visit and between consult and surgery. We enrolled 1,438 patients in the program. The wait time from the initial call until consult was reduced from 18.3 ± 0.7 to 15.4 ± 0.9 days. Wait time from consult until operation was reduced from 39.9 ± 1.5 to 33.9 ± 1.3 days for the overall practice and to 15.0 ± 4.8 days for low-risk patients. Patient cancellations were reduced from 27.9 ± 2.4% to 17.3 ± 2.5%. Overall patient flow increased from 30.9 ± 5.1 to 52.4 ± 5.8 consults per month (all P < .01). Utilizing process improvement methodology, surgery patients can benefit from an improved, streamlined process with significant reduction in wait time from call to initial consult and initial consult to surgery, with reduced cancellations. This generalized process has resulted in increased practice throughput and efficiency and is applicable to any surgery practice. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anameric, B.; Kawatra, S.K.

    The pig iron nugget process is gaining in importance as an alternative to the traditional blast furnace. Throughout the process, self-reducing-fluxing dried greenballs composed of iron ore concentrate, reducing-carburizing agent (coal), flux (limestone) and binder (bentonite) are heat-treated. During the heat treatment, dried greenballs are first transformed into direct reduced iron (DRI), then to transition direct reduced iron (TDRI) and finally to pig iron nuggets. The furnace temperature and/or residence time and the corresponding levels of carburization, reduction and metallization dictate these transformations. This study involved the determination of threshold furnace temperatures and residence times for completion of all of the transformation reactions and pig iron nugget production. The experiments involved the heat treatment of self-reducing-fluxing dried greenballs at various furnace temperatures and residence times. The products of these heat treatments were identified by utilizing optical microscopy, apparent density and microhardness measurements.

  13. ED Triage Process Improvement: Timely Vital Signs for Less Acute Patients.

    PubMed

    Falconer, Stella S; Karuppan, Corinne M; Kiehne, Emily; Rama, Shravan

    2018-06-13

    Vital signs can result in an upgrade of patients' Emergency Severity Index (ESI) levels. It is therefore preferable to obtain vital signs early in the triage process, particularly for ESI level 3 patients. Emergency departments have an opportunity to redesign triage processes to meet required protocols while enhancing the quality and experience of care. We performed process analyses to redesign the door-to-vital signs process. We also developed spaghetti diagrams to reconfigure the patient arrival area. The door-to-vital signs time was reduced from 43.1 minutes to 6.44 minutes. Both patients and triage staff seemed more satisfied with the new process. The patient arrival area was less congested and more welcoming. Performing activities in parallel reduces flow time with no additional resources. Staff involvement in process planning, redesign, and control ensures engagement and early buy-in. One should anticipate how changes to one process might affect other processes. Copyright © 2018. Published by Elsevier Inc.

  14. Design of forging process variables under uncertainties

    NASA Astrophysics Data System (ADS)

    Repalle, Jalaja; Grandhi, Ramana V.

    2005-02-01

    Forging is a complex nonlinear process that is vulnerable to various manufacturing anomalies, such as variations in billet geometry, billet/die temperatures, material properties, and workpiece and forging equipment positional errors. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion, and reduced productivity. Identifying, quantifying, and controlling the uncertainties will reduce variability risk in a manufacturing environment, which will minimize the overall production cost. In this article, various uncertainties that affect the forging process are identified, and their cumulative effect on the forging tool life is evaluated. Because the forging process simulation is time-consuming, a response surface model is used to reduce computation time by establishing a relationship between the process performance and the critical process variables. A robust design methodology is developed by incorporating reliability-based optimization techniques to obtain sound forging components. A case study of an automotive-component forging-process design is presented to demonstrate the applicability of the method.
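
    The surrogate-modelling idea, fitting a cheap response surface to a handful of expensive forging simulations so that the optimization loop queries the surrogate instead, can be sketched as below; the input variables, the stand-in "simulation", and the sample sizes are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)

    def expensive_simulation(x):
        """Stand-in for a forging FE run: die life vs. billet temperature and friction."""
        temp, friction = x[:, 0], x[:, 1]
        return 1e4 - 3.0 * (temp - 1100) ** 2 - 5e4 * (friction - 0.3) ** 2 + rng.normal(0, 50, len(x))

    # A small design of experiments over the assumed variable ranges.
    X_train = np.column_stack([rng.uniform(1000, 1200, 30), rng.uniform(0.1, 0.5, 30)])
    y_train = expensive_simulation(X_train)

    # Quadratic response surface: cheap to evaluate inside the robust-design loop.
    surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surrogate.fit(X_train, y_train)

    X_new = np.array([[1100.0, 0.30], [1150.0, 0.40]])
    print(surrogate.predict(X_new))
    ```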

  15. The Real-Time IRB: A Collaborative Innovation to Decrease IRB Review Time.

    PubMed

    Spellecy, Ryan; Eve, Ann Marie; Connors, Emily R; Shaker, Reza; Clark, David C

    2018-06-01

    Lengthy review times for institutional review boards (IRBs) are a well-known barrier to research. In response to numerous calls to reduce review times, we devised "Real-Time IRB," a process that drastically reduces IRB review time. In this, investigators and study staff attend the IRB meeting and make changes to the protocol while the IRB continues its meeting, so that final approval can be issued at the meeting. This achieved an overall reduction in time from submission to the IRB to final approval of 40%. While this process is time and resource intensive, and cannot address all delays in research, it shows great promise for increasing the pace by which research is translated to patient care.

  16. Effect of nucleation time on bending response of ionic polymer–metal composite actuators

    DOE PAGES

    Kim, Suran; Hong, Seungbum; Choi, Yoon-Young; ...

    2013-07-02

    We attempted an autocatalytic electroless plating of nickel in order to replace an electroless impregnation-reduction (IR) method in ionic polymer–metal composite (IPMC) actuators to reduce cost and processing time. Because the nucleation time of Pd–Sn colloids is the determining factor of overall processing time, we used the nucleation time as our control parameter. In order to optimize the nucleation time and investigate its effect on the performance of IPMC actuators, we analyzed the relationship between the nucleation time, interface morphology and electrical properties. The optimized nucleation time was 10 h. Furthermore, the trends of the performance and electrical properties as a function of nucleation time were attributed to the fact that the Ni penetration depth was determined by the minimum diffusion length of either Pd–Sn colloids or reducing agent ions. The Ni-IPMC actuators can be fabricated in less than 14 h of processing time without deteriorating actuator performance, which is comparable to Pt-IPMC prepared by the IR method.

  17. Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.

    PubMed

    Pendola, Martin; Saha, Subrata

    2015-01-01

    Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.K. Kawatra; B. Anamerie; T.C. Eisele

    The pig iron nugget process was developed by Kobe Steel as an alternative to the traditional blast furnace process. The process aims to produce pig iron nuggets, which have chemical and physical properties similar to blast furnace pig iron, in a single step. The pig iron nugget process utilizes coal instead of coke and self-reducing, self-fluxing dried green balls instead of pellets and sinter. In this process, the environmental emissions caused by coke and sinter production, and the energy lost between pellet induration (heat hardening) and transportation to the blast furnace, can be eliminated. The objectives of this research were to (1) produce pig iron nuggets in the laboratory, (2) characterize the pig iron nuggets produced and compare them with blast furnace pig iron, (3) investigate the effects of furnace temperature and residence time on pig iron nugget production, and (4) optimize the operational furnace temperatures and residence times. The experiments involved heat treatment of self-reducing, self-fluxing dried green balls at various furnace temperatures and residence times. Depending on the operational furnace temperatures and/or residence times, three chemically and physically different products were obtained after the complete reduction of iron oxides to iron: direct reduced iron (DRI), transition direct reduced iron (TDRI), and pig iron nuggets. The increase in the carbon content of the system as a function of furnace temperature and/or residence time dictated the formation of these products. The direct reduced iron, transition direct reduced iron, and pig iron nuggets produced were analyzed for their chemical composition, degree of metallization, apparent density, microstructure, and microhardness. In addition, the change in the carbon content of the system with changing furnace temperature and/or residence time was detected by optical microscopy and microhardness measurements. The carbon dissolution required for the production of pig iron nuggets was determined. The pig iron nuggets produced had a high apparent density (6.7-7.2 g/cm3), a highly metallized, slag-free structure, high iron content (95-97%), high microhardness values (> 325 HVN), and a microstructure similar to white cast iron. These properties make them a competitive alternative to blast furnace pig iron.

  19. Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey

    NASA Astrophysics Data System (ADS)

    Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Maire, Jérôme; Marchis, Franck; Graham, James R.; Macintosh, Bruce; Ammons, S. Mark; Bailey, Vanessa P.; Barman, Travis S.; Bruzzone, Sebastian; Bulger, Joanna; Cotten, Tara; Doyon, René; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Goodsell, Stephen; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn M.; Larkin, James E.; Marley, Mark S.; Metchev, Stanimir; Nielsen, Eric L.; Oppenheimer, Rebecca; Palmer, David W.; Patience, Jennifer; Poyneer, Lisa A.; Pueyo, Laurent; Rajan, Abhijith; Rantakyrö, Fredrik T.; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane J.

    2018-01-01

    The Gemini Planet Imager Exoplanet Survey (GPIES) is a multiyear direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines (DRPs) together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our DRPs. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.

  20. Process for using surface strain measurements to obtain operational loads for complex structures

    NASA Technical Reports Server (NTRS)

    Ko, William L. (Inventor); Richards, William Lance (Inventor)

    2010-01-01

    The invention is an improved process for using surface strain data to obtain real-time, operational loads data for complex structures that significantly reduces the time and cost versus current methods.

  1. Effects of extrusion temperature and dwell time on aflatoxin levels in cottonseed.

    PubMed

    Buser, Michael D; Abbas, Hamed K

    2002-04-24

    Cottonseed is an economical source of protein and is commonly used in balancing livestock rations; however, its use is typically limited by protein, fat, gossypol, and aflatoxin contents. Whole cottonseed was extruded to determine if the temperature and dwell time (multiple stages of processing) associated with the process affected aflatoxin levels. The extrusion temperature study showed that aflatoxin levels were reduced by an additional 33% when the cottonseed was extruded at 160 degrees C as compared to 104 degrees C. Furthermore, the multiple-pass extrusion study indicated that aflatoxin levels were reduced by an additional 55% when the cottonseed was extruded four times as compared to one time. To estimate the aflatoxin reductions due to extrusion temperature and dwell time, the least mean fits obtained for the individual studies were combined. Total estimated reductions of 55% (three stages of processing at 104 degrees C), 50% (two stages of processing at 132 degrees C), and 47% (one stage of processing at 160 degrees C) were obtained from the combined equations. If the extreme conditions (four stages of processing at 160 degrees C) of the evaluation studies are applied to the combined temperature and processing equation, the resulting aflatoxin reduction would be 76%.

  2. Using a Cloud Computing System to Reduce Door-to-Balloon Time in Acute ST-Elevation Myocardial Infarction Patients Transferred for Percutaneous Coronary Intervention.

    PubMed

    Ho, Chi-Kung; Chen, Fu-Cheng; Chen, Yung-Lung; Wang, Hui-Ting; Lee, Chien-Ho; Chung, Wen-Jung; Lin, Cheng-Jui; Hsueh, Shu-Kai; Hung, Shin-Chiang; Wu, Kuan-Han; Liu, Chu-Feng; Kung, Chia-Te; Cheng, Cheng-I

    2017-01-01

    This study evaluated the impact on clinical outcomes of using a cloud computing system to reduce percutaneous coronary intervention hospital door-to-balloon (DTB) time for ST segment elevation myocardial infarction (STEMI). A total of 369 patients before and after implementation of the transfer protocol were enrolled. Of these patients, 262 were transferred through the protocol while the other 107 patients were transferred through the traditional referral process. There were no significant differences in DTB time, pain to door of STEMI receiving center arrival time, or pain to balloon time between the two groups. Pain to electrocardiography time in patients with Killip I/II and catheterization laboratory to balloon time in patients with Killip III/IV were significantly reduced in the group transferred through the protocol compared with the traditional referral group (both p < 0.05). There were also no remarkable differences in the complication rate or 30-day mortality between the two groups. The multivariate analysis revealed that the independent predictors of 30-day mortality were advanced age, higher Killip score, and higher troponin-I level. This study showed that patients transferred through the present protocol had reduced pain to electrocardiography time and catheterization laboratory to balloon time in Killip I/II and III/IV patients, respectively. However, using the cloud computing system in the present protocol did not reduce DTB time.

  3. Ultrasound assisted chrome tanning: Towards a clean leather production technology.

    PubMed

    Mengistie, Embialle; Smets, Ilse; Van Gerven, Tom

    2016-09-01

    Nowadays, there is a growing demand for cleaner but still effective alternatives to production processes such as those in the leather industry. Ultrasound (US)-assisted processing of leather might be promising in this sense. In the present paper, the use of US in the conventional chrome tanning process has been studied at different pH values, temperatures, tanning times, chrome doses, and US exposure times, by exposing the skin before tanning and during the tanning operation. Both prior exposure of the skin to US and US during tanning improve chrome uptake and reduce shrinkage significantly. Prior exposure of the skin to US increases chrome uptake by 13.8% or reduces the chrome dose from 8% to 5% (% based on skin weight) and shortens the process time by half, while US during tanning increases chrome uptake by 28.5% or reduces the chrome dose from 8% to 4% (half) and the tanning time to one third compared to the control without US. Concomitantly, the resulting leather quality (measured as skin shrinkage) improved from 5.2% to 3.2% shrinkage in the skin exposed to US prior to tanning and to 1.3% in the skin exposed to US during tanning. This study confirms that US chrome tanning is an effective and eco-friendly tanning process which can produce a better-quality leather product in a shorter process time with a lower chromium dose. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Effectiveness of employer financial incentives in reducing time to report worker injury: an interrupted time series study of two Australian workers' compensation jurisdictions.

    PubMed

    Lane, Tyler J; Gray, Shannon; Hassani-Mahmooei, Behrooz; Collie, Alex

    2018-01-05

    Early intervention following occupational injury can improve health outcomes and reduce the duration and cost of workers' compensation claims. Financial early reporting incentives (ERIs) for employers may shorten the time between injury and access to compensation benefits and services. We examined the effect of ERIs on time spent in the claim lodgement process in two Australian states: South Australia (SA), which introduced them in January 2009, and Tasmania (TAS), which introduced them in July 2010. Using administrative records of 1.47 million claims lodged between July 2006 and June 2012, we conducted an interrupted time series study of the ERIs' impact on monthly median days in the claim lodgement process. Time periods included claim reporting, insurer decision, and total time. The 18-month gap in implementation between the states allowed for a multiple baseline design. In SA, we analysed periods within claim reporting: worker and employer reporting times (similar data were not available in TAS). To account for external threats to validity, we examined impact in reference to a comparator of other Australian workers' compensation jurisdictions. Total time in the process did not immediately change, though the trend significantly decreased in both jurisdictions (SA: -0.36 days per month, 95% CI -0.63 to -0.09; TAS: -0.35, -0.50 to -0.20). Claim reporting time also decreased in both (SA: -1.6 days, -2.4 to -0.8; TAS: -5.4, -7.4 to -3.3). In TAS, there was a significant increase in insurer decision time (4.6, 3.9 to 5.4) and a similar but non-significant pattern in SA. In SA, worker reporting time significantly decreased (-4.7, -5.8 to -3.5), but employer reporting time did not (-0.3, -0.8 to 0.2). The results suggest that ERIs reduced claim lodgement time and, in the long term, reduced total time in the claim lodgement process. However, only worker reporting time significantly decreased in SA, indicating that ERIs may not have shortened the process through the intended target of employer reporting time. Lack of similar data in Tasmania limited our ability to determine whether this was a result of ERIs or another component of the legislative changes. Further, increases in insurer decision time highlight possible unintended negative effects.

  5. Reducing Bottlenecks to Improve the Efficiency of the Lung Cancer Care Delivery Process: A Process Engineering Modeling Approach to Patient-Centered Care.

    PubMed

    Ju, Feng; Lee, Hyo Kyung; Yu, Xinhua; Faris, Nicholas R; Rugless, Fedoria; Jiang, Shan; Li, Jingshan; Osarogiagbon, Raymond U

    2017-12-01

    The process of lung cancer care from initial lesion detection to treatment is complex, involving multiple steps, each introducing the potential for substantial delays. Identifying the steps with the greatest delays enables a focused effort to improve the timeliness of care delivery without sacrificing quality. We retrospectively reviewed clinical events from initial detection, through histologic diagnosis, radiologic and invasive staging, and medical clearance, to surgery for all patients who had an attempted resection of a suspected lung cancer in a community healthcare system. We used a computer process modeling approach to evaluate delays in care delivery, in order to identify potential 'bottlenecks' in waiting time, the reduction of which could produce greater care efficiency. We also conducted 'what-if' analyses to predict the relative impact of simulated changes in the care delivery process and thereby determine the most efficient pathways to surgery. The waiting time between radiologic lesion detection and diagnostic biopsy, and the waiting time from radiologic staging to surgery, were the two most critical bottlenecks impeding efficient care delivery (with an effect more than 3 times larger than that of reducing other waiting times). Additionally, instituting surgical consultation prior to cardiac consultation for medical clearance, and decreasing the waiting time between CT scans and diagnostic biopsies, were potentially the most impactful measures to reduce care delays before surgery. Rigorous computer simulation modeling, using clinical data, can provide useful information to identify areas for improving the efficiency of care delivery by process engineering, for patients who receive surgery for lung cancer.
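
    A simple Monte Carlo sketch of the kind of 'what-if' analysis described above: waiting times for each care step are drawn from assumed distributions and the biopsy bottleneck is halved to estimate its relative impact. The step names, the exponential waiting-time assumption and all numbers are illustrative placeholders, not values from the study.

      import random

      # Hypothetical median waiting times (days) between care steps; the study's
      # actual estimates are not reproduced here, so these values are placeholders.
      BASELINE = {
          "detection_to_biopsy": 16.0,
          "biopsy_to_staging": 7.0,
          "staging_to_clearance": 6.0,
          "clearance_to_surgery": 14.0,
      }

      def mean_time_to_surgery(waits, n_patients=10_000, seed=0):
          """Monte Carlo estimate of mean days from lesion detection to surgery."""
          rng = random.Random(seed)
          total = 0.0
          for _ in range(n_patients):
              # Exponential waits are a simplifying assumption, not the study's model.
              total += sum(rng.expovariate(1.0 / w) for w in waits.values())
          return total / n_patients

      baseline = mean_time_to_surgery(BASELINE)
      what_if = dict(BASELINE, detection_to_biopsy=8.0)   # halve the biopsy bottleneck
      print(f"baseline: {baseline:.1f} days, what-if: {mean_time_to_surgery(what_if):.1f} days")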

  6. Processes of Overall Similarity Sorting in Free Classification

    ERIC Educational Resources Information Center

    Milton, Fraser; Longmore, Christopher A.; Wills, A. J.

    2008-01-01

    The processes of overall similarity sorting were investigated in 5 free classification experiments. Experiments 1 and 2 demonstrated that increasing time pressure can reduce the likelihood of overall similarity categorization. Experiment 3 showed that a concurrent load also reduced overall similarity sorting. These findings suggest that overall…

  7. The concept of value stream mapping to reduce of work-time waste as applied the smart construction management

    NASA Astrophysics Data System (ADS)

    Elizar, Suripin, Wibowo, Mochamad Agung

    2017-11-01

    Delays on construction sites occur due to the systematic accumulation of time waste in the various activities that make up the construction process. Work-time waste is non-value-adding activity; the term is used to distinguish it from the physical construction waste found on site and from other waste that occurs during the construction process. The aim of this study is to identify, using the concept of Value Stream Mapping (VSM), ways to reduce work-time waste as applied to smart construction management. VSM analysis is a business process improvement method whose application began in the manufacturing community. The research method is based on a theoretically informed case study and a literature review. The data were collected by questionnaire through personal interviews with 383 respondents on construction projects in Indonesia. The results show that the concept of VSM can identify causes of work-time waste. Based on the questionnaire results and a quantitative analysis, 29 variables that influence work-time waste, or non-value-adding activities, were obtained. In three construction project cases, an average of 14.88% of working time was classified as waste. Finally, the VSM concept provides a systematic way to reveal current practices and identify opportunities for improvement in the face of global challenges. Value stream mapping can help reduce work-time waste and improve the quality standard of construction management, and it can help managers make decisions to reduce work-time waste so as to achieve more efficient performance and a more sustainable construction project.

  8. Ninety to Nothing: a PDSA quality improvement project.

    PubMed

    Prybutok, Gayle Linda

    2018-05-14

    Purpose The purpose of this paper is to present a case study of a successful quality improvement project in an acute care hospital focused on reducing the time of the total patient visit in the emergency department. Design/methodology/approach A multidisciplinary quality improvement team, using the PDSA (Plan, Do, Study, Act) Cycle, analyzed the emergency department care delivery process and sequentially made process improvements that contributed to project success. Findings The average turnaround time goal of 90 minutes or less per visit was achieved in four months, and the organization enjoyed significant collateral benefits both internal to the organization and for its customers. Practical implications This successful PDSA process can be duplicated by healthcare organizations of all sizes seeking to improve a process related to timely, high-quality patient care delivery. Originality/value Extended wait time in hospital emergency departments is a universal problem in the USA that reduces the quality of the customer experience and that delays necessary patient care. This case study demonstrates that a structured quality improvement process implemented by a multidisciplinary team with the authority to make necessary process changes can successfully redefine the norm.

  9. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further impede rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology—the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement diminishes because of the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
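
    A minimal master/worker sketch in Python with mpi4py, illustrating the general idea of distributing per-subbasin work across processes while reserving a tunable share for the master. The actual P-SWAT implementation parallelizes the SWAT code itself, so the function names, the 10% master share and the round-robin split below are assumptions for illustration only.

      # Run with, e.g.: mpiexec -n 5 python p_swat_sketch.py  (file name is illustrative)
      from mpi4py import MPI

      def run_subbasin(sub_id):
          """Placeholder for one subbasin computation; the real SWAT kernel is external."""
          return sub_id * sub_id

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      subbasins = list(range(120))
      master_share = 0.10               # tunable fraction of the work kept on the master

      if size == 1:
          my_tasks = subbasins
      else:
          n_master = int(len(subbasins) * master_share)
          if rank == 0:
              my_tasks = subbasins[:n_master]
          else:
              workers_tasks = subbasins[n_master:]
              my_tasks = workers_tasks[rank - 1::size - 1]   # round-robin among workers

      local_results = [run_subbasin(s) for s in my_tasks]
      gathered = comm.gather(local_results, root=0)          # master collects all parts
      if rank == 0:
          flat = [r for part in gathered for r in part]
          print(f"collected {len(flat)} subbasin results from {size} process(es)")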

  10. Mistake proofing: changing designs to reduce error

    PubMed Central

    Grout, J R

    2006-01-01

    Mistake proofing uses changes in the physical design of processes to reduce human error. It can be used to change designs in ways that prevent errors from occurring, to detect errors after they occur but before harm occurs, to allow processes to fail safely, or to alter the work environment to reduce the chance of errors. Effective mistake proofing design changes should initially be effective in reducing harm, be inexpensive, and easily implemented. Over time these design changes should make life easier and speed up the process. Ideally, the design changes should increase patients' and visitors' understanding of the process. These designs should themselves be mistake proofed and follow the good design practices of other disciplines. PMID:17142609

  11. Course Development Cycle Time: A Framework for Continuous Process Improvement.

    ERIC Educational Resources Information Center

    Lake, Erinn

    2003-01-01

    Details Edinboro University's efforts to reduce the extended cycle time required to develop new courses and programs. Describes a collaborative process improvement framework, illustrated data findings, the team's recommendations for improvement, and the outcomes of those recommendations. (EV)

  12. Recent National Transonic Facility Test Process Improvements (Invited)

    NASA Technical Reports Server (NTRS)

    Kilgore, W. A.; Balakrishna, S.; Bobbitt, C. W., Jr.; Adcock, J. B.

    2001-01-01

    This paper describes the results of two recent process improvements at the National Transonic Facility: drag feed-forward Mach number control and simultaneous force/moment and pressure testing. These improvements have reduced the duration and cost of testing. The drag feed-forward Mach number control reduces the Mach number settling time by using measured model drag in the Mach number control algorithm. Simultaneous force/moment and pressure testing allows simultaneous collection of force/moment and pressure data without sacrificing data quality, thereby reducing the overall testing time. Both improvements can be implemented at any wind tunnel. Additionally, the NTF is working to develop and implement continuous pitch as a testing option and as an additional method to reduce costs and maintain data quality.

  14. Process techniques of charge transfer time reduction for high speed CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Zhongxiang, Cao; Quanliang, Li; Ye, Han; Qi, Qin; Peng, Feng; Liyuan, Liu; Nanjian, Wu

    2014-11-01

    This paper proposes pixel process techniques to reduce the charge transfer time in high speed CMOS image sensors. These techniques increase the lateral conductivity of the photo-generated carriers in a pinned photodiode (PPD) and the voltage difference between the PPD and the floating diffusion (FD) node by controlling and optimizing the N doping concentration in the PPD and the threshold voltage of the reset transistor, respectively. The techniques effectively shorten the charge transfer time from the PPD to the FD node. The proposed process techniques do not need extra masks and do not harm the fill factor. A sub-array of 32 × 64 pixels was designed and implemented in the 0.18 μm CIS process with five implantation conditions splitting the N region in the PPD. The simulation and measurement results demonstrate that the charge transfer time can be decreased by using the proposed techniques. Comparing the charge transfer time of the pixel across the different implantation conditions of the N region, a charge transfer time of 0.32 μs is achieved and image lag is reduced by 31% using the proposed process techniques.

  15. Note: Quasi-real-time analysis of dynamic near field scattering data using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.

    2012-10-01

    We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.
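
    The core of such an analysis is computing many Fourier transforms of frame differences and averaging their power spectra, which maps naturally onto a GPU. The sketch below uses CuPy as a stand-in for the authors' GPU implementation; the frame stack, the choice of lags and the data layout are illustrative assumptions, not the paper's optimized data-management scheme.

      import cupy as cp

      def structure_function(frames, lags):
          """frames: (T, N, N) stack on the GPU; returns mean |FFT(frame difference)|^2 per lag."""
          out = []
          for tau in lags:
              diff = frames[tau:] - frames[:-tau]                    # all pairs at this lag
              spectra = cp.abs(cp.fft.fft2(diff, axes=(-2, -1))) ** 2
              out.append(spectra.mean(axis=0))                       # average over pairs
          return cp.stack(out)

      frames = cp.random.random((64, 128, 128), dtype=cp.float32)    # synthetic image stack
      result = cp.asnumpy(structure_function(frames, lags=[1, 2, 4, 8]))
      print(result.shape)                                            # (4, 128, 128)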

  16. The automated data processing architecture for the GPI Exoplanet Survey

    NASA Astrophysics Data System (ADS)

    Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Graham, James R.; Macintosh, Bruce

    2017-09-01

    The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the GPIES Data Cruncher, combines multiple data reduction pipelines together to intelligently process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow-up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our data reduction pipelines. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real-time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.

  17. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
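
    The basic operation underlying any floating-to-fixed-point conversion is quantising values to a chosen word length and fractional bit count, then measuring the resulting accuracy. The toy sketch below illustrates only that trade-off; the paper's DSP-aware format optimisation and analytical accuracy evaluation are not reproduced.

      import numpy as np

      def to_fixed(x, frac_bits, word_bits=16):
          """Quantise floats to signed fixed point with `frac_bits` fractional bits."""
          scale = 2 ** frac_bits
          lo, hi = -(2 ** (word_bits - 1)), 2 ** (word_bits - 1) - 1
          return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

      def to_float(q, frac_bits):
          return q.astype(np.float64) / (2 ** frac_bits)

      signal = 0.9 * np.sin(np.linspace(0, 8 * np.pi, 1000))
      for frac_bits in (7, 11, 15):
          err = signal - to_float(to_fixed(signal, frac_bits), frac_bits)
          sqnr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(err ** 2))
          print(f"{frac_bits} fractional bits: SQNR = {sqnr_db:.1f} dB")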

  18. 77 FR 72332 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... receiving reports in a real-time, paperless environment, resulting in complete transaction visibility, fewer interest penalties and reduced processing time. WAWF provides the Department and its suppliers the single..., Travel and Miscellaneous Expenses. WAWF captures and processes invoices and vouchers. The complete list...

  19. Peering into the secrets of food and agricultural co-products

    NASA Astrophysics Data System (ADS)

    Wood, Delilah; Williams, Tina; Glenn, Gregory; Pan, Zhongli; Orts, William; McHugh, Tara

    2010-06-01

    Scanning electron microscopy is a useful tool for understanding food contamination and directing product development of food and industrial products. The current trend in food research is to produce foods that are fast to prepare and/or ready to eat. At the same time, these processed foods must be safe and of high quality and must maintain all or most of the nutritional value of the original whole foods. Minimally processed foods is the phrase used to characterize these "new" foods. New techniques are needed which take advantage of minimal processing, or of processing which enhances the fresh properties and characteristics of whole foods, while spending less time on food preparation. The added benefit of less cooking time in an individual kitchen translates to overall energy savings and reduced carbon emissions to the environment. Food processing changes the microstructure, and therefore the quality, texture and flavor, of the resulting food product. Additionally, there is the need to reduce waste, transportation costs and product loss during transportation and storage. Unlike food processing, structural changes are desirable in co-products, as function follows form for food packaging films and boxes as well as for building materials and other industrial products. Thus, standard materials testing procedures are coupled with SEM to provide direction in the development of products from agricultural residues or what would otherwise be considered waste materials. The use of agricultural residues reduces waste and adds value to a currently underutilized or unutilized product. The product might be biodegradable or compostable, thus reducing landfill requirements. Manufacturing industrial and packaging products from biological materials also reduces the amount of petroleum products currently standard in the industry.

  20. Using a Cloud Computing System to Reduce Door-to-Balloon Time in Acute ST-Elevation Myocardial Infarction Patients Transferred for Percutaneous Coronary Intervention

    PubMed Central

    Ho, Chi-Kung; Wang, Hui-Ting; Lee, Chien-Ho; Chung, Wen-Jung; Lin, Cheng-Jui; Hsueh, Shu-Kai; Hung, Shin-Chiang; Wu, Kuan-Han; Liu, Chu-Feng; Kung, Chia-Te

    2017-01-01

    Background This study evaluated the impact on clinical outcomes of using a cloud computing system to reduce percutaneous coronary intervention hospital door-to-balloon (DTB) time for ST-segment elevation myocardial infarction (STEMI). Methods A total of 369 patients before and after implementation of the transfer protocol were enrolled. Of these patients, 262 were transferred through the protocol while the other 107 patients were transferred through the traditional referral process. Results There were no significant differences in DTB time, pain to door (STEMI receiving center arrival) time, or pain to balloon time between the two groups. Pain to electrocardiography time in patients with Killip I/II and catheterization laboratory to balloon time in patients with Killip III/IV were significantly reduced in the protocol-transfer group compared with the traditional referral group (both p < 0.05). There were also no remarkable differences in the complication rate or 30-day mortality between the two groups. The multivariate analysis revealed that the independent predictors of 30-day mortality were older age, advanced Killip class, and a higher level of troponin-I. Conclusions This study showed that transfer through the present protocol reduced pain to electrocardiography time in Killip I/II patients and catheterization laboratory to balloon time in Killip III/IV patients; however, using a cloud computing system in the present protocol did not reduce DTB time. PMID:28900621

  1. A collaborative approach to lean laboratory workstation design reduces wasted technologist travel.

    PubMed

    Yerian, Lisa M; Seestadt, Joseph A; Gomez, Erron R; Marchant, Kandice K

    2012-08-01

    Lean methodologies have been applied in many industries to reduce waste. We applied Lean techniques to redesign laboratory workstations with the aim of reducing the number of times employees must leave their workstations to complete their tasks. At baseline, in the 68 workflows (aggregates or sequences of process steps) studied, 251 (38%) of 664 tasks required workers to walk away from their workstations. After analysis and redesign, only 59 (9%) of the 664 tasks required technologists to leave their workstations to complete these tasks. On average, 3.4 travel events were removed for each workstation. Time studies in a single laboratory section demonstrated that workers spend 8 to 70 seconds in travel each time they step away from the workstation. The redesigned workstations will allow employees to spend less time travelling around the laboratory. Additional benefits include employee training in waste identification, improved overall laboratory layout, and identification of other process improvement opportunities in our laboratory.

  2. Use of Six Sigma Methodology to Reduce Appointment Lead-Time in Obstetrics Outpatient Department.

    PubMed

    Ortiz Barrios, Miguel A; Felizzola Jiménez, Heriberto

    2016-10-01

    This paper focuses on the issue of long appointment lead-times in the obstetrics outpatient department of a maternal-child hospital in Colombia. Because of extended appointment lead-times, women with high-risk pregnancies could develop severe complications in their health status and put their babies at risk. This problem was detected through a project selection process explained in this article, and Six Sigma methodology was used to solve it. First, the process was defined through a SIPOC diagram to identify its input and output variables. Second, Six Sigma performance indicators were calculated to establish the process baseline. Then, a fishbone diagram was used to determine the possible causes of the problem. These causes were validated with the aid of correlation analysis and other statistical tools. Later, improvement strategies were designed to reduce appointment lead-time in this department. Project results showed that the average appointment lead-time was reduced from 6.89 days to 4.08 days and the standard deviation dropped from 1.57 days to 1.24 days. In this way, the hospital will serve pregnant women faster, which represents a reduction in the risk of perinatal and maternal mortality.
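
    For readers unfamiliar with the baseline indicators mentioned above, the sketch below computes defects per million opportunities (DPMO) and an approximate sigma level for appointment lead-times against an assumed upper specification limit; the limit, the sample data and the conventional 1.5-sigma shift are illustrative assumptions, not figures from the project.

      from statistics import NormalDist

      lead_times_days = [4.5, 7.2, 3.9, 8.1, 6.8, 4.4, 5.2, 6.1, 7.9, 4.6]   # made-up sample
      USL = 5.0                                   # assumed upper specification limit (days)
      defect_rate = sum(t > USL for t in lead_times_days) / len(lead_times_days)
      dpmo = defect_rate * 1_000_000
      # Conventional long-term to short-term conversion adds a 1.5-sigma shift.
      sigma_level = NormalDist().inv_cdf(1 - defect_rate) + 1.5
      print(f"DPMO = {dpmo:.0f}, sigma level = {sigma_level:.2f}")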

  3. Results of a 1-year quality-improvement process to reduce door-to-needle time in acute ischemic stroke with MRI screening.

    PubMed

    Sablot, D; Gaillard, N; Colas, C; Smadja, P; Gely, C; Dutray, A; Bonnec, J-M; Jurici, S; Farouil, G; Ferraro-Allou, A; Jantac, M; Allou, T; Pujol, C; Olivier, N; Laverdure, A; Fadat, B; Mas, J; Dumitrana, A; Garcia, Y; Touzani, H; Perucho, P; Moulin, T; Richard, C; Heroum, C; Bouly, S; Sagnes-Raffy, C; Heve, D

    To determine the effects of a 1-year quality-improvement (QI) process to reduce door-to-needle (DTN) time in a secondary general hospital in which multimodal MRI screening is used before tissue plasminogen activator (tPA) administration in patients with acute ischemic stroke (AIS). The QI process was initiated in January 2015. Patients who received intravenous (iv) tPA <4.5 h after AIS onset between 26 February 2015 and 25 February 2016 (during implementation of the QI process; the "2015 cohort") were identified (n=130), and their demographic and clinical characteristics and timing metrics were compared with those of patients treated with iv tPA in 2014 (the "2014 cohort", n=135). Of the 130 patients in the 2015 cohort, 120 (92.3%) were screened by MRI. The median DTN time was significantly reduced by 30% (from 84 min in 2014 to 59 min; P<0.003), while the proportion of treated patients with a DTN time ≤60 min increased from 21% to 52% (P<0.0001). Demographic and baseline characteristics did not significantly differ between cohorts, and the improvement in DTN time was associated with better outcomes after discharge (patients with a 0-2 score on the modified Rankin scale: 59% in the 2015 cohort vs 42.4% in the 2014 cohort; P<0.01). During the 1-year QI process, the median DTN time decreased by 15% (from 65 min in the first quarter to 55 min in the last quarter; P≤0.04), with a non-significant 1.5-fold increase in the proportion of treated patients with a DTN time ≤60 min (from 41% to 62%; P=0.09). It is feasible to deliver tPA to patients with AIS within 60 min in a general hospital using MRI as the routine screening modality, making this QI process to reduce DTN time widely applicable to other secondary general hospitals. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  4. [Applying dose banding to the production of antineoplastic drugs: a narrative review of the literature].

    PubMed

    Pérez Huertas, Pablo; Cueto Sola, Margarita; Escobar Cava, Paloma; Borrell García, Carmela; Albert Marí, Asunción; López Briz, Eduardo; Poveda Andrés, José Luis

    2015-07-01

    The dosage of antineoplastic drugs has historically been based on individualized prescription and preparation according to body surface area or the patient's weight. Lack of resources and an increased workload in the areas where chemotherapy is prepared are leading to the development of new systems to optimize preparation without reducing safety. One of the proposed strategies is preparation by dose banding. This new approach standardizes antineoplastic agent doses into ranges or bands, accepting a maximum percentage of variation. It aims to reduce processing time, with a consequent reduction in waiting time for patients; to reduce errors in the manufacturing process; and to promote rational drug use. In conclusion, dose banding is a suitable method for optimizing the preparation of anticancer drugs, reducing oncology patients' waiting times, although without a clearly demonstrated favorable impact on direct or indirect costs. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
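
    Dose banding amounts to rounding an individually calculated dose to the nearest pre-prepared standard dose, provided the deviation stays within an accepted percentage. A minimal sketch, assuming a hypothetical banding table and a 5% tolerance (real banding tables are drug-specific and set by the pharmacy):

      def band_dose(calculated_dose_mg, bands_mg, max_deviation=0.05):
          """Return the nearest standard band, or None if no band is within tolerance."""
          nearest = min(bands_mg, key=lambda b: abs(b - calculated_dose_mg))
          if abs(nearest - calculated_dose_mg) / calculated_dose_mg <= max_deviation:
              return nearest
          return None   # fall back to individualised preparation

      bands = [100, 110, 120, 135, 150, 165, 180, 200]   # mg, hypothetical banding table
      print(band_dose(126, bands))   # -> 120 (deviation ~4.8%, within tolerance)
      print(band_dose(128, bands))   # -> None (nearest band deviates by ~5.5%)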

  5. Napping Reduces Emotional Attention Bias during Early Childhood

    ERIC Educational Resources Information Center

    Cremone, Amanda; Kurdziel, Laura B. F.; Fraticelli-Torres, Ada; McDermott, Jennifer M.; Spencer, Rebecca M. C.

    2017-01-01

    Sleep loss alters processing of emotional stimuli in preschool-aged children. However, the mechanism by which sleep modifies emotional processing in early childhood is unknown. We tested the hypothesis that a nap, compared to an equivalent time spent awake, reduces biases in attention allocation to affective information. Children (n = 43;…

  6. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.

    2017-04-01

    To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy and retains the advantages of the recursive PLS algorithm. To address the high updating frequency of the model, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment; once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately.
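
    A compact sketch of the time-difference idea using an ordinary PLS regression from scikit-learn: the model is trained on differences between consecutive samples, which removes slow drift, and the predicted differences are added back to the last measured value. The recursive moving-window update and the confidence-based model updating described in the paper are not reproduced, and the data below are synthetic.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 6))                      # process variables (synthetic)
      drift = np.linspace(0, 2, 500)                     # slow time-varying offset
      y = X @ np.array([0.5, -0.2, 0.8, 0.0, 0.3, -0.1]) + drift

      # Model differences between consecutive samples instead of raw values,
      # which removes the slow drift before fitting.
      dX, dy = np.diff(X, axis=0), np.diff(y)
      model = PLSRegression(n_components=3).fit(dX[:400], dy[:400])

      dy_pred = np.asarray(model.predict(dX[400:])).ravel()
      y_pred = y[400:-1] + dy_pred                       # reconstruct the level from differences
      print("RMSE:", np.sqrt(np.mean((y[401:] - y_pred) ** 2)))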

  7. Reduction of aerobic and lactic acid bacteria in dairy desludge using an integrated compressed CO2 and ultrasonic process.

    PubMed

    Overton, Tim W; Lu, Tiejun; Bains, Narinder; Leeke, Gary A

    Current treatment routes are not suitable to reduce and stabilise bacterial content in some dairy process streams such as separator and bactofuge desludges which currently present a major emission problem faced by dairy producers. In this study, a novel method for the processing of desludge was developed. The new method, elevated pressure sonication (EPS), uses a combination of low frequency ultrasound (20 kHz) and elevated CO2 pressure (50 to 100 bar). Process conditions (pressure, sonicator power, processing time) were optimised for batch and continuous EPS processes to reduce viable numbers of aerobic and lactic acid bacteria in bactofuge desludge by ≥3-log fold. Coagulation of proteins present in the desludge also occurred, causing separation of solid (curd) and liquid (whey) fractions. The proposed process offers a 10-fold reduction in energy compared to high temperature short time (HTST) treatment of milk.

  8. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.
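
    As an illustration of the nonparametric filtering step, the sketch below applies a standard soft-threshold wavelet denoising to a synthetic decaying oscillation using PyWavelets. The wavelet, threshold rule and test signal are assumptions for illustration and do not reflect the actual F-18 flight-data processing or the parametric modal estimation.

      import numpy as np
      import pywt

      def wavelet_denoise(signal, wavelet="db4", level=5):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # Universal threshold estimated from the finest-scale detail coefficients.
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thresh = sigma * np.sqrt(2 * np.log(len(signal)))
          coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(signal)]

      t = np.linspace(0, 2, 2000)
      clean = np.sin(2 * np.pi * 6 * t) * np.exp(-0.5 * t)   # decaying oscillation, like a structural mode
      noisy = clean + 0.3 * np.random.default_rng(1).normal(size=t.size)
      print(np.std(noisy - clean), np.std(wavelet_denoise(noisy) - clean))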

  9. Subsidence related to groundwater pumping for breweries in Merchtem area (Belgium), highlighted by Persistent Scatterer Interferometry

    NASA Astrophysics Data System (ADS)

    Declercq, Pierre-Yves; Gerard, Pierre; Pirard, Eric; Perissin, Daniele; Walstra, Jan; Devleeschouwer, Xavier

    2017-12-01

    ERS, ENVISAT and TerraSAR-X Synthetic Aperture Radar scenes covering the time span 1992-2014 were processed using a Persistent Scatterer technique to study the ground movements in Merchtem (25 km NW of Brussels, Belgium). The processed datasets, covering three consecutive time intervals, reveal that the investigated area is affected by a global subsidence trend related to the extraction of groundwater in the deeper Cambro-Silurian aquifer. Through time the subsidence pattern is reduced and replaced by an uplift related to the rising water table attested by piezometers located in this aquifer. The subsidence is finally reduced to a zone where currently three breweries are very active and pump groundwater in the Ledo-Paniselian aquifer and in the Cambro-Silurian for process water for the production.

  10. INTEGRATION OF COST MODELS AND PROCESS SIMULATION TOOLS FOR OPTIMUM COMPOSITE MANUFACTURING PROCESS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pack, Seongchan; Wilson, Daniel; Aitharaju, Venkat

    Manufacturing cost of resin transfer molded composite parts is significantly influenced by the cycle time, which is strongly related to the time for both filling and curing of the resin in the mold. The time for filling can be optimized by various injection strategies, and by suitably reducing the length of the resin flow distance during the injection. The curing time can be reduced by using faster-curing resins, but this requires high-pressure injection equipment, which is capital intensive. Predictive manufacturing simulation tools being developed recently for composite materials are able to provide various scenarios of processing conditions virtually, well in advance of manufacturing the parts. In the present study, we integrate cost models with process simulation tools to study the influence of various parameters such as injection strategies, injection pressure, compression control to minimize high pressure injection, resin curing rate, and demold time on the manufacturing cost as affected by the annual part volume. A representative automotive component was selected for the study and the results are presented in this paper.
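
    A back-of-the-envelope illustration of how cycle time and annual volume couple into part cost in such an integrated cost model; every parameter and number below is a placeholder, not a value from the study.

      def cost_per_part(fill_min, cure_min, demold_min, annual_volume,
                        machine_rate_per_h=120.0, material_cost=35.0, tooling_cost=250_000.0):
          """Part cost = material + machine time + tooling amortised over annual volume."""
          cycle_h = (fill_min + cure_min + demold_min) / 60.0
          return material_cost + machine_rate_per_h * cycle_h + tooling_cost / annual_volume

      base = cost_per_part(fill_min=8, cure_min=20, demold_min=4, annual_volume=30_000)
      fast_cure = cost_per_part(fill_min=8, cure_min=10, demold_min=4, annual_volume=30_000)
      print(f"baseline ${base:.2f}/part, faster-curing resin ${fast_cure:.2f}/part")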

  11. Emergency medicine: an operations management view.

    PubMed

    Soremekun, Olan A; Terwiesch, Christian; Pines, Jesse M

    2011-12-01

    Operations management (OM) is the science of understanding and improving business processes. For the emergency department (ED), OM principles can be used to reduce and alleviate the effects of crowding. A fundamental principle of OM is the waiting time formula, which has clear implications in the ED given that waiting time is fundamental to patient-centered emergency care. The waiting time formula consists of the activity time (how long it takes to complete a process), the utilization rate (the proportion of time a particular resource such as staff is working), and two measures of variation: the variation in patient interarrival times and the variation in patient processing times. Understanding the waiting time formula is important because it presents the fundamental parameters that can be managed to reduce waiting times and length of stay. An additional useful OM principle that is applicable to the ED is the efficient frontier. The efficient frontier compares the performance of EDs with respect to two dimensions: responsiveness (i.e., 1/wait time) and utilization rates. Some EDs may be "on the frontier," maximizing their responsiveness at their given utilization rates. However, most EDs likely have opportunities to move toward the frontier. Increasing capacity is a movement along the frontier; to truly move toward the frontier (i.e., improving responsiveness at a fixed capacity), we articulate three possible options: eliminating waste, reducing variability, or increasing flexibility. When conceptualizing ED crowding interventions, these are the major strategies to consider. © 2011 by the Society for Academic Emergency Medicine.
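
    The waiting time formula described above is commonly written, for a single server, as the product of the activity time, a utilisation factor and a variability factor (the Kingman/VUT approximation). A small sketch follows; this particular algebraic form is a standard OM approximation and is not necessarily the exact expression the authors use.

      def expected_wait(activity_time, utilization, cv_arrival, cv_service):
          """Approximate wait = activity time x utilisation factor x variability factor."""
          if not 0 <= utilization < 1:
              raise ValueError("utilisation must be below 100% for a stable queue")
          utilisation_factor = utilization / (1 - utilization)
          variability_factor = (cv_arrival ** 2 + cv_service ** 2) / 2
          return activity_time * utilisation_factor * variability_factor

      # 30 min of provider activity per patient, coefficient of variation 1.0 for both
      # arrivals and processing; note how sensitive the wait is to utilisation.
      print(expected_wait(30, 0.85, 1.0, 1.0))   # ~170 min
      print(expected_wait(30, 0.75, 1.0, 1.0))   # ~90 min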

  12. Flat-plate solar array project process development area: Process research of non-CZ silicon material

    NASA Technical Reports Server (NTRS)

    Campbell, R. B.

    1986-01-01

    Several different techniques to simultaneously diffuse the front and back junctions in dendritic web silicon were investigated. A successful simultaneous diffusion reduces the cost of the solar cell by reducing the number of processing steps, the amount of capital equipment, and the labor cost. The three techniques studied were: (1) simultaneous diffusion at standard temperatures and times using a tube-type diffusion furnace or a belt furnace; (2) diffusion using excimer laser drive-in; and (3) simultaneous diffusion at high temperature and short times using a pulse of high intensity light as the heat source. The excimer laser and the high-temperature, short-time diffusion experiments were both more successful than diffusion at standard temperatures and times. The three techniques are described in detail and a cost analysis of the more successful techniques is provided.

  13. BIOREACTOR ECONOMICS, SIZE AND TIME OF OPERATION (BEST) COMPUTER SIMULATOR FOR DESIGNING SULFATE-REDUCING BACTERIA FIELD BIOREACTORS

    EPA Science Inventory

    BEST (bioreactor economics, size and time of operation) is an Excel™ spreadsheet-based model that is used in conjunction with the public domain geochemical modeling software, PHREEQCI. The BEST model is used in the design process of sulfate-reducing bacteria (SRB) field bioreacto...

  14. Reduced cost and improved figure of sapphire optical components

    NASA Astrophysics Data System (ADS)

    Walters, Mark; Bartlett, Kevin; Brophy, Matthew R.; DeGroote Nelson, Jessica; Medicus, Kate

    2015-10-01

    Sapphire presents many challenges to optical manufacturers due to its high hardness and anisotropic properties. Long lead times and high prices are the typical result of such challenges. The cost of even a simple 'grind and shine' process can be prohibitive. The high-precision surfaces required by optical sensor applications further exacerbate the challenge of processing sapphire, thereby increasing cost further. Optimax has demonstrated a production process for such windows that delivers over a 50% time reduction compared to traditional manufacturing processes for sapphire, while producing windows with less than 1/5 wave rms figure error. Optimax's sapphire production process achieves a significant improvement in cost by implementing a controlled grinding process to present the best possible surface to the polishing equipment. The grinding process is followed by a polishing process that takes advantage of chemical interactions between slurry and substrate to deliver excellent removal rates and surface finish. Through experiments, the mechanics of the polishing process were also optimized to produce excellent optical figure. In addition to reducing the cost of producing large sapphire sensor windows, the grinding and polishing technology Optimax has developed aids in producing spherical sapphire components with better figure quality. Through specially developed polishing slurries, the peak-to-valley figure error of spherical sapphire parts is reduced by over 80%.

  15. The effect of elevations in internal temperature on event-related potentials during a simple cognitive task in humans.

    PubMed

    Shibasaki, Manabu; Namba, Mari; Oshiro, Misaki; Crandall, Craig G; Nakata, Hiroki

    2016-07-01

    The effect of hyperthermia on cognitive function remains equivocal, perhaps because of methodological discrepancy. Using electroencephalographic event-related potentials (ERPs), we tested the hypothesis that a passive heat stress impairs cognitive processing. Thirteen volunteers performed repeated auditory oddball paradigms under two thermal conditions, normothermic time control and heat stress, on different days. For the heat stress trial, these paradigms were performed at preheat stress (i.e., normothermic) baseline, when esophageal temperature had increased by ∼0.8°C, when esophageal temperature had increased by ∼2.0°C, and during cooling following the heat stress. The reaction time and ERPs were recorded in each session. For the time control trial, subjects performed the auditory oddball paradigms at approximately the same time interval as they did in the heat stress trial. The peak latency and amplitude of an indicator of auditory processing (N100) were not altered regardless of thermal conditions. An indicator of stimulus classification/evaluation time (latency of P300) and the reaction time were shortened during heat stress; moreover an indicator of cognitive processing (the amplitude of P300) was significantly reduced during severe heat stress (8.3 ± 1.3 μV) relative to the baseline (12.2 ± 1.0 μV, P < 0.01). No changes in these indexes occurred during the time control trial. During subsequent whole body cooling, the amplitude of P300 remained reduced, and the reaction time and latency of P300 remained shortened. These results suggest that excessive elevations in internal temperature reduce cognitive processing but promote classification time. Copyright © 2016 the American Physiological Society.

  16. Reverse time migration by Krylov subspace reduced order modeling

    NASA Astrophysics Data System (ADS)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method benefits from the Krylov subspace method, computing certain mode shapes of the velocity model as an orthogonal basis for the reduced order model. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.
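
    A bare-bones Arnoldi iteration showing how an orthonormal Krylov basis is built and used to project an operator onto a small reduced space. The operator here is a random symmetric matrix standing in for a discretised wave operator, so this is only a sketch of the reduced-order-modeling idea, not the authors' formulation.

      import numpy as np

      def arnoldi(A, b, m):
          """Return Q (n x m) with orthonormal columns spanning the Krylov space K_m(A, b)."""
          n = len(b)
          Q = np.zeros((n, m))
          Q[:, 0] = b / np.linalg.norm(b)
          for j in range(m - 1):
              v = A @ Q[:, j]
              for i in range(j + 1):                    # modified Gram-Schmidt
                  v -= (Q[:, i] @ v) * Q[:, i]
              Q[:, j + 1] = v / np.linalg.norm(v)
          return Q

      rng = np.random.default_rng(0)
      A = rng.normal(size=(200, 200)); A = A + A.T       # symmetric test operator
      Q = arnoldi(A, rng.normal(size=200), m=20)
      A_reduced = Q.T @ A @ Q                            # 20 x 20 reduced operator
      print(A_reduced.shape, np.allclose(Q.T @ Q, np.eye(20)))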

  17. Stepwise drying of medicinal plants as alternative to reduce time and energy processing

    NASA Astrophysics Data System (ADS)

    Cuervo-Andrade, S. P.; Hensel, O.

    2016-07-01

    The objective of drying medicinal plants is to extend shelf life while conserving the fresh characteristics. This is achieved by reducing the water activity (aw) of the product to a value that inhibits the growth and development of pathogenic and spoilage microorganisms, significantly reduces enzyme activity and slows the rate at which undesirable chemical reactions occur. The technical drying process requires an enormous amount of thermal and electrical energy. An improvement in the quality of the dried product, together with a decrease in drying cost and time, can be achieved through a controlled conventional drying method that makes good use of renewable energy, or by looking for other alternatives that achieve lower processing times without sacrificing final product quality. In this work, stepwise drying of medicinal plants is presented as an alternative to conventional drying, which uses a constant temperature during the whole process. The objective of stepwise drying is to decrease drying time and reduce energy consumption. In this process, apart from observing the effects on the effective drying time and energy, the influence of different combinations of drying phases on several characteristics of the product is considered. The tests were carried out with Melissa officinalis L. variety citronella, grown in a greenhouse. For the stepwise drying process, different combinations of initial and final temperature (40/50°C) were evaluated, with different transition points associated with different moisture contents of the product (20%, 30%, 40% and 50%) during the process. Final quality is another important issue in food drying, since the drying process affects the quality attributes of the dried product. In this study, colour changes and essential oil losses were determined, using measurements of the colour and essential oil content of the fresh product as a reference. Drying curves were obtained to observe the dynamics of the process for the different combinations of temperature and transition points, corresponding to different moisture contents of the product.

  18. Time-based management of patient processes.

    PubMed

    Kujala, Jaakko; Lillrank, Paul; Kronström, Virpi; Peltokorpi, Antti

    2006-01-01

    The purpose of this paper is to present a conceptual framework that would enable the effective application of time-based competition (TBC) and work-in-process (WIP) concepts in the design and management of effective and efficient patient processes. This paper discusses the applicability of time-based competition and work-in-progress concepts to the design and management of healthcare service production processes. A conceptual framework is derived from the analysis of both existing research and empirical case studies. The paper finds that a patient episode is analogous to a customer order-to-delivery chain in industry. The effective application of TBC and WIP can be achieved by focusing on the throughput time of a patient episode, by reducing the non-value-adding time components and by minimizing the time categories that are the main cost drivers for all stakeholders involved in the patient episode. The paper shows that an application of TBC in managing patient processes can be limited if there is no consensus in the medical community about the optimal care episode. It is shown that managing patient processes based on time and cost analysis enables one to allocate the optimal amount of resources, which would allow a healthcare system to minimize the total cost of specific episodes of illness. Analysing the total cost of patient episodes can provide useful information for the allocation of limited resources among multiple patient processes. This paper introduces a framework for health care managers and researchers to analyze the effect of reducing throughput time on the total cost of patient episodes.

  19. Dissecting delays in trauma care using corporate lean six sigma methodology.

    PubMed

    Parks, Jennifer K; Klein, Jorie; Frankel, Heidi L; Friese, Randall S; Shafi, Shahid

    2008-11-01

    The Institute of Medicine has identified trauma center overcrowding as a crisis. We applied corporate Lean Six Sigma methodology to reduce overcrowding by quantifying patient dwell times in trauma resuscitation units (TRU) and to identify opportunities for reducing them. TRU dwell time of all patients treated at a Level I trauma center were measured prospectively during a 3-month period (n = 1,184). Delays were defined as TRU dwell time >6 hours. Using personnel trained in corporate Lean Six Sigma methodology, we created a detailed process map of patient flow through our TRU and measured time spent at each step prospectively during a 24/7 week-long time study (n = 43). Patients with TRU dwell time below the median (3 hours) were compared with those with longer dwell times to identify opportunities for improvement. TRU delays occurred in 183 of 1,184 trauma patients (15%), and peaked on days with >15 patients or with presence of five simultaneous patients. However, 135 delays (74%) occurred on days when

  20. An experimental study on providing a scientific evidence for seven-time alcohol-steaming of Rhei Rhizoma when clinically used.

    PubMed

    Sim, Yeomoon; Oh, Hyein; Oh, Dal-Seok; Kim, Namkwon; Gu, Pil Sung; Choi, Jin Gyu; Kim, Hyo Geun; Kang, Tong Ho; Oh, Myung Sook

    2015-10-27

    Rhei Rhizoma (RR) has been widely used as laxative and processed to alter its therapeutic actions or reduce its side effects. In this study, we evaluated experimentally the clinical application guideline that RR should be alcohol-steamed seven times before being used in elderly patients, as described in Dongeuibogam, the most famous book on Korean traditional medicine. Unprocessed RR (RR-U) was soaked in rice wine, steamed and then fully dried (RR-P1). The process was repeated four (RR-P4) or seven times (RR-P7). Reversed-phase high-performance liquid chromatography was used to determine the RR-U, RR-P1, RR-P4 and RR-P7 (RRs) constituents. To evaluate the effect of RRs on liver toxicity, human hepatoma cells (HepG2) were treated with RRs at 100 μg/mL for 4 h and then cell viabilities were measured using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide method. To confirm the effects in vivo, 5-week-old male Sprague-Dawley rats were treated with RRs at 3 g/kg/day for 21 days. Body weight and serum biochemical parameters were measured and liver histology was assessed. The levels of sennosides decreased in processed RRs in an iteration-dependent manner, while the emodin level was unaffected. In HepG2 cells, cell viability was reduced with RR-U, while the toxicity decreased according to the number of processing cycles. The changes in body weight, relative liver weight and liver enzymes of RR-U-treated rats were reduced in processed RRs-treated rats. Histopathological analysis indicated swelling and cholestasis improved following seven times alcohol-steaming cycles. These results provide experimental evidence that RR-P7 almost completely reduces RR hepatotoxicity.

  1. A Systematic Approach to Optimize Organizations Operating in Uncertain Environments: Design Methodology and Applications

    DTIC Science & Technology

    2002-09-01

    sub-goal can lead to achieving different goals (e.g., automation of on-line order processing may lead to both reducing the storage cost and reducing ...) ... equipment; Introduce new technology; Find cheaper supplier; Sign a contract; Introduce cheaper materials; Set up and automate on-line order processing; Integrate ... order processing with inventory and shipping; Set up company's website; Freight consolidation; Just-in-time versus pre-planned balance

  2. Additive Manufacturing Infrared Inspection

    NASA Technical Reports Server (NTRS)

    Gaddy, Darrell; Nettles, Mindy

    2015-01-01

    The Additive Manufacturing Infrared Inspection Task started the development of a real-time dimensional inspection technique and digital quality record for the additive manufacturing process using infrared camera imaging and processing techniques. This project will benefit additive manufacturing by providing real-time inspection of internal geometry that is not currently possible and by reducing the time and cost of additively manufactured parts through automated real-time dimensional inspections, which eliminate post-production inspections.

  3. Improving Service Delivery in a County Health Department WIC Clinic: An Application of Statistical Process Control Techniques

    PubMed Central

    Boe, Debra Thingstad; Parsons, Helen

    2009-01-01

    Local public health agencies are challenged to continually improve service delivery, yet they frequently operate with constrained resources. Quality improvement methods and techniques such as statistical process control are commonly used in other industries, and they have recently been proposed as a means of improving service delivery and performance in public health settings. We analyzed a quality improvement project undertaken at a local Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) clinic to reduce waiting times and improve client satisfaction with a walk-in nutrition education service. We used statistical process control techniques to evaluate initial process performance, implement an intervention, and assess process improvements. We found that implementation of these techniques significantly reduced waiting time and improved clients' satisfaction with the WIC service. PMID:19608964
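
    A minimal individuals control chart of the kind used in such statistical process control work: limits are estimated from the moving range of a baseline period and post-intervention waiting times are checked against them. All waiting-time values below are made up for illustration and are not from the WIC clinic project.

      import numpy as np

      waits_min = np.array([42, 45, 51, 47, 44, 55, 49, 46, 41, 48,   # baseline period
                            33, 29, 31, 27, 30, 25, 28, 26])          # after the intervention

      def control_limits(x):
          moving_range = np.abs(np.diff(x))          # ranges of consecutive points
          sigma_hat = moving_range.mean() / 1.128    # d2 constant for subgroups of 2
          return x.mean(), x.mean() - 3 * sigma_hat, x.mean() + 3 * sigma_hat

      center, lcl, ucl = control_limits(waits_min[:10])   # limits from the baseline only
      print(f"CL={center:.1f}  LCL={lcl:.1f}  UCL={ucl:.1f}")
      post = waits_min[10:]
      print("points signalling a shift:", post[(post < lcl) | (post > ucl)])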

  4. A new pulping process for wheat straw to reduce problems with the discharge of black liquor.

    PubMed

    Huang, Guolin; Shi, Jeffrey X; Langrish, Tim A G

    2007-11-01

    Aqueous ammonia mixed with caustic potash as a wheat straw pulping liquor was investigated. The caustic potash not only reduced the NH3 usage and cooking time, but also provided a potassium source as a fertilizer in the black liquor. Excess NH3 in the black liquor was recovered and reused by batch distillation, with a 98% recovery rate of free NH3. The black liquor was further treated for reuse by coagulation under alkaline conditions. The effects of different flocculation conditions, such as the dosage of 10% aluminium polychloride, the dosage of 0.1% polyacrylamide, the reaction temperature and the pH of the black liquor, on the flocculation process were studied. The supernatant was recycled as cooking liquor by adding extra NH4OH and KOH. The amount of delignification and the pulp yield for the process remained steady at 82-85% and 48-50%, respectively, when the supernatant was reused four times. The coagulated residues could be further processed as solid fertilizers. This study provides a new pulping process for wheat straw that reduces the problems associated with the discharge of black liquor.

  5. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation technology cannot meet the processing and storage requirements of massive remote sensing images. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds a cheap and efficient computer cluster system that uses parallel processing to implement the MeanShift remote sensing image segmentation algorithm based on the MapReduce model. This not only ensures the quality of remote sensing image segmentation, but also improves segmentation speed and better meets real-time requirements. The parallel MeanShift segmentation algorithm based on MapReduce thus shows clear significance and practical value.
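
    A toy map/reduce-style version of the idea: each image tile is clustered with mean shift in parallel (the map phase) and the per-tile modes are merged into a global set of segment colours (the reduce phase). Python multiprocessing and scikit-learn's MeanShift are used purely as stand-ins for the Hadoop/MapReduce cluster described in the article, and the image, tiling and bandwidth are invented for illustration.

      import numpy as np
      from multiprocessing import Pool
      from sklearn.cluster import MeanShift

      def map_tile(tile):
          """Map phase: cluster one tile's pixel values and return its mode centres."""
          pixels = tile.reshape(-1, tile.shape[-1]).astype(float)
          return MeanShift(bandwidth=0.15).fit(pixels).cluster_centers_

      def reduce_modes(per_tile_centers):
          """Reduce phase: merge per-tile modes into a global set of segment colours."""
          return MeanShift(bandwidth=0.15).fit(np.vstack(per_tile_centers)).cluster_centers_

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          base_colours = np.array([[0.9, 0.1, 0.1], [0.1, 0.8, 0.2], [0.2, 0.2, 0.9]])
          labels = rng.integers(0, 3, size=(128, 128))
          image = base_colours[labels] + 0.05 * rng.normal(size=(128, 128, 3))
          tiles = [image[r:r + 64, c:c + 64] for r in (0, 64) for c in (0, 64)]
          with Pool(4) as pool:
              per_tile = pool.map(map_tile, tiles)               # parallel map
          print("global segment colours:\n", reduce_modes(per_tile))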

  6. Capabilities and constraints of NASA's ground-based reduced gravity facilities

    NASA Technical Reports Server (NTRS)

    Lekan, Jack; Neumann, Eric S.; Sotos, Raymond G.

    1993-01-01

    The ground-based reduced gravity facilities of NASA have been utilized to support numerous investigations addressing various processes and phenomena in several disciplines for the past 30 years. These facilities, which include drop towers, drop tubes, aircraft, and sounding rockets, are able to provide a low gravity environment (gravitational levels that range from 10(exp -2)g to 10(exp -6)g) by creating a free fall or semi-free fall condition where the force of gravity on an experiment is offset by its linear acceleration during the 'fall' (drop or parabola). The low gravity condition obtained on the ground is the same as that of an orbiting spacecraft, which is in a state of perpetual free fall. The gravitational levels and duration times associated with the full spectrum of reduced gravity facilities, including space-based facilities, are summarized. Even though ground-based facilities offer a relatively short experiment time, this available test time has been found to be sufficient to advance the scientific understanding of many phenomena and to provide meaningful hardware tests during the flight experiment development process. Also, since experiments can be quickly repeated in these facilities, multistep phenomena that have longer characteristic times associated with them can sometimes be examined in a step-by-step process. There is a large body of literature reporting study results achieved using reduced-gravity data obtained from these facilities.

  7. Optimizing process and equipment efficiency using integrated methods

    NASA Astrophysics Data System (ADS)

    D'Elia, Michael J.; Alfonso, Ted F.

    1996-09-01

    The semiconductor manufacturing industry is continually riding the edge of technology as it tries to push toward higher design limits. Mature fabs must cut operating costs while increasing productivity to remain profitable and cannot justify large capital expenditures to improve productivity. Thus, they must push current tool production capabilities to cut manufacturing costs and remain viable. Working to continuously improve mature production methods requires innovation. Furthermore, testing and successful implementation of these ideas in modern production environments require both supporting technical data and commitment from those working with the process daily. At AMD, natural work groups (NWGs) composed of operators, technicians, engineers, and supervisors collaborate to foster innovative thinking and secure commitment. Recently, an AMD NWG improved equipment cycle time on the Genus tungsten silicide (WSi) deposition system. The team used total productive manufacturing (TPM) to identify areas for process improvement. Improved in-line equipment monitoring was achieved by constructing a real-time overall equipment effectiveness (OEE) calculator that tracked equipment downtime, idle time, qualification time, and production time. In-line monitoring results indicated that qualification time associated with slow Inspex turn-around time and machine downtime associated with manual cleans contributed greatly to reduced availability. Qualification time was reduced by 75% by implementing a new Inspex monitor pre-staging technique. Downtime associated with manual cleans was reduced by implementing an in-situ plasma etch back to extend the time between manual cleans. A designed experiment was used to optimize the process. The interval between 18-hour manual cleans was extended from every 250 cycles to every 1500 cycles. Moreover, defect density improved by 3X. Overall, the team achieved a 35% increase in tool availability. This paper details the above strategies and accomplishments.
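
    For reference, overall equipment effectiveness is conventionally the product of availability, performance, and quality. The sketch below shows that bookkeeping; the time categories and numbers are illustrative and are not AMD's actual tracker.

      # Rough sketch of an OEE calculation: OEE = availability x performance x quality.
      # All inputs below are made-up example values.
      def oee(production_min, idle_min, down_min, qual_min,
              ideal_cycle_min, units_out, units_good):
          total = production_min + idle_min + down_min + qual_min
          availability = production_min / total
          performance = (ideal_cycle_min * units_out) / production_min
          quality = units_good / units_out
          return availability * performance * quality

      print(f"OEE = {oee(900, 120, 200, 220, 1.8, 450, 440):.2%}")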

  8. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.
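
    The estimation step can be pictured as ordinary Gaussian process regression of error rate against time. The sketch below uses scikit-learn with an RBF-plus-noise kernel on synthetic data; the kernel choice and data are assumptions, not the protocol in the paper.

      # Gaussian process regression of (synthetic) error-rate estimates over time.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)
      t = np.linspace(0, 10, 30).reshape(-1, 1)              # time (arbitrary units)
      rate = 0.01 + 0.002 * np.sin(t).ravel() + rng.normal(0, 5e-4, 30)

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0)
                                    + WhiteKernel(noise_level=1e-6))
      gp.fit(t, rate)
      mean, std = gp.predict(np.array([[10.5]]), return_std=True)   # extrapolate
      print(f"predicted error rate at t=10.5: {mean[0]:.4f} +/- {std[0]:.4f}")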

  9. On Cognitive Strategies for Facilitating Acquisition, Retention, and Retrieval in Training and Education

    DTIC Science & Technology

    1976-05-01

    real time task that calls for flexibility in allocating attention to different tasks and the ability to cope with divided attention, or to time... attentional and intentional processes. Self-directional resources include self-programming and self-monitoring processes. Possibilities for teaching...students better control over attentional and intentional processes, by using neurophysiological indicators, particularly to reduce self-generated

  10. Rightsizing HVAC Systems to Reduce Capital Costs and Save Energy

    ERIC Educational Resources Information Center

    Sebesta, James

    2010-01-01

    Nearly every institution is faced with the situation of having to reduce the cost of a construction project from time to time through a process generally referred to as "value engineering." Just the mention of those words, however, gives rise to all types of connotations, thoughts, and memories (usually negative) for those in the…

  11. The design of red-blue 3D video fusion system based on DM642

    NASA Astrophysics Data System (ADS)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed using a parallel optical axis configuration. To counter the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit built around the DM642 enhances image brightness, converts the video signals from YCbCr to RGB, extracts the R component from one camera and the G and B components from the other synchronously, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components, the system reduces the loss of chrominance components and keeps the picture color saturation at more than 95% of the original. An optimized enhancement algorithm reduces the amount of data fused during video processing, shortening the fusion time and improving the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video, and present a pleasant experience to audiences wearing red-blue glasses.
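
    The red-blue fusion step itself amounts to taking the R channel from one view and the G and B channels from the other. A minimal sketch on plain RGB arrays is given below; it ignores the DM642, the YCbCr conversion, and the brightness enhancement details.

      # Toy red-blue (anaglyph-style) fusion of two RGB frames with numpy.
      import numpy as np

      def fuse_red_blue(left_rgb, right_rgb):
          fused = np.empty_like(left_rgb)
          fused[..., 0] = left_rgb[..., 0]      # R from the left view
          fused[..., 1] = right_rgb[..., 1]     # G from the right view
          fused[..., 2] = right_rgb[..., 2]     # B from the right view
          return fused

      left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
      right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
      anaglyph = fuse_red_blue(left, right)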

  12. Prospects for reducing the processing cost of lithium ion batteries

    DOE PAGES

    Wood III, David L.; Li, Jianlin; Daniel, Claus

    2014-11-06

    A detailed processing cost breakdown is given for lithium-ion battery (LIB) electrodes, which focuses on: elimination of toxic, costly N-methylpyrrolidone (NMP) dispersion chemistry; doubling the thicknesses of the anode and cathode to raise energy density; and reduction of the anode electrolyte wetting and SEI-layer formation time. These processing cost reduction technologies are generically adaptable to any anode or cathode cell chemistry and are being implemented at ORNL. This paper shows step by step how these cost savings can be realized in existing or new LIB manufacturing plants, using a baseline case of thin (power) electrodes produced with NMP processing and a standard 10-14-day wetting and formation process. In particular, it is shown that aqueous electrode processing can cut the electrode processing cost and energy consumption by an order of magnitude. Doubling the thickness of the electrodes allows for using half of the inactive current collectors and separators, contributing even further to the processing cost savings. Finally, wetting and SEI-layer formation cost savings are discussed in the context of a protocol with significantly reduced time. These three benefits collectively offer the possibility of reducing LIB pack cost from $502.8 per usable kWh to $370.3 per usable kWh, a savings of $132.5/kWh (or 26.4%).
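
    As a quick check, the quoted savings follow directly from the two pack-cost figures in the abstract:

      # Pack-cost figures quoted in the abstract, in $ per usable kWh.
      baseline, improved = 502.8, 370.3
      saving = baseline - improved
      print(f"{saving:.1f} $/kWh saved, {saving / baseline:.1%} of the baseline")
      # -> 132.5 $/kWh saved, 26.4% of the baseline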

  13. Aqua-Aura QuickDAM (QDAM) 2.0 Ops Concept

    NASA Technical Reports Server (NTRS)

    Nidhiry, John

    2015-01-01

    The presentation describes the Quick Debris Avoidance Maneuver (QDAM) 2.0 process used by the Aqua and Aura flight teams to (a) reduce the work load and the dependency on staff and systems; (b) reduce turn-around time and provide emergency last-minute capabilities; and (c) increase burn parameter flexibility. The presentation also compares the QDAM 2.0 process to previous approaches.

  14. Scalloping minimization in deep Si etching on Unaxis DSE tools

    NASA Astrophysics Data System (ADS)

    Lai, Shouliang; Johnson, Dave J.; Westerman, Russ J.; Nolan, John J.; Purser, David; Devre, Mike

    2003-01-01

    Sidewall smoothness is often a critical requirement for many MEMS devices, such as microfluidic devices and chemical, biological, and optical transducers, while a fast silicon etch rate is another. For such applications, time division multiplex (TDM) etch processes, the so-called "Bosch" processes, are widely employed. However, in conventional TDM processes, rough sidewalls result from scallop formation. To date, the amplitude of the scalloping has been directly linked to the silicon etch rate. At Unaxis USA Inc., we have developed a proprietary fast gas switching technique that is effective for scalloping minimization in deep silicon etching processes. In this technique, process cycle times can be reduced from several seconds to as little as a fraction of a second. Scallop amplitudes can be reduced with shorter process cycles. More importantly, as the scallop amplitude is progressively reduced, the silicon etch rate can be maintained relatively constant at high values. An optimized experiment showed that, at an etch rate in excess of 7 μm/min, scallops with a length of 116 nm and a depth of 35 nm were obtained. The fast gas switching approach offers an ideal manufacturing solution for MEMS applications where extremely smooth sidewalls and fast etch rates are crucial.

  15. High-throughput process development: determination of dynamic binding capacity using microtiter filter plates filled with chromatography resin.

    PubMed

    Bergander, Tryggve; Nilsson-Välimaa, Kristina; Oberg, Katarina; Lacki, Karol M

    2008-01-01

    Steadily increasing demand for more efficient and more affordable biomolecule-based therapies puts a significant burden on biopharma companies to reduce the cost of R&D activities associated with introducing a new drug to the market. Reducing the time required to develop a purification process is one option for addressing the high cost issue. The reduction in time can be accomplished if more efficient methods and tools, including high-throughput techniques, are available for process development work. This paper addresses the transition from traditional column-based process development to a modern high-throughput approach utilizing microtiter filter plates filled with a well-defined volume of chromatography resin. The approach is based on implementing the well-known batch uptake principle in a microtiter plate geometry. Two variants of the proposed approach, allowing for either qualitative or quantitative estimation of dynamic binding capacity as a function of residence time, are described. Examples are given of quantitative estimation of the dynamic binding capacity of human polyclonal IgG on MabSelect SuRe and of qualitative estimation of the dynamic binding capacity of amyloglucosidase on a prototype of the Capto DEAE weak ion exchanger. The proposed high-throughput method for determination of dynamic binding capacity significantly reduces time and sample consumption compared with a traditional method utilizing packed chromatography columns, without sacrificing the accuracy of the data obtained.

  16. Brainstem timing: implications for cortical processing and literacy.

    PubMed

    Banai, Karen; Nicol, Trent; Zecker, Steven G; Kraus, Nina

    2005-10-26

    The search for a unique biological marker of language-based learning disabilities has so far yielded inconclusive findings. Previous studies have shown a plethora of auditory processing deficits in learning disabilities at both the perceptual and physiological levels. In this study, we investigated the association among brainstem timing, cortical processing of stimulus differences, and literacy skills. To that end, brainstem timing and cortical sensitivity to acoustic change [mismatch negativity (MMN)] were measured in a group of children with learning disabilities and normal-learning children. The learning-disabled (LD) group was further divided into two subgroups with normal and abnormal brainstem timing. MMNs, literacy, and cognitive abilities were compared among the three groups. LD individuals with abnormal brainstem timing were more likely to show reduced processing of acoustic change at the cortical level compared with both normal-learning individuals and LD individuals with normal brainstem timing. This group was also characterized by a more severe form of learning disability manifested by poorer reading, listening comprehension, and general cognitive ability. We conclude that abnormal brainstem timing in learning disabilities is related to higher incidence of reduced cortical sensitivity to acoustic change and to deficient literacy skills. These findings suggest that abnormal brainstem timing may serve as a reliable marker of a subgroup of individuals with learning disabilities. They also suggest that faulty mechanisms of neural timing at the brainstem may be the biological basis of malfunction in this group.

  17. Clinical evaluation of reducing acquisition time on single-photon emission computed tomography image quality using proprietary resolution recovery software.

    PubMed

    Aldridge, Matthew D; Waddington, Wendy W; Dickson, John C; Prakash, Vineet; Ell, Peter J; Bomanji, Jamshed B

    2013-11-01

    A three-dimensional model-based resolution recovery (RR) reconstruction algorithm that compensates for collimator-detector response, resulting in an improvement in reconstructed spatial resolution and signal-to-noise ratio of single-photon emission computed tomography (SPECT) images, was tested. The software is said to retain image quality even with reduced acquisition time. Clinically, any improvement in patient throughput without loss of quality is to be welcomed. Furthermore, future restrictions in radiotracer supplies may add value to this type of data analysis. The aims of this study were to assess improvement in image quality using the software and to evaluate the potential of performing reduced time acquisitions for bone and parathyroid SPECT applications. Data acquisition was performed using the local standard SPECT/CT protocols for 99mTc-hydroxymethylene diphosphonate bone and 99mTc-methoxyisobutylisonitrile parathyroid SPECT imaging. The principal modification applied was the acquisition of an eight-frame gated data set acquired using an ECG simulator with a fixed signal as the trigger. This had the effect of partitioning the data such that the effect of reduced time acquisitions could be assessed without conferring additional scanning time on the patient. The set of summed data sets was then independently reconstructed using the RR software to permit a blinded assessment of the effect of acquired counts upon reconstructed image quality as adjudged by three experienced observers. Data sets reconstructed with the RR software were compared with the local standard processing protocols; filtered back-projection and ordered-subset expectation-maximization. Thirty SPECT studies were assessed (20 bone and 10 parathyroid). The images reconstructed with the RR algorithm showed improved image quality for both full-time and half-time acquisitions over local current processing protocols (P<0.05). The RR algorithm improved image quality compared with local processing protocols and has been introduced into routine clinical use. SPECT acquisitions are now acquired at half of the time previously required. The method of binning the data can be applied to any other camera system to evaluate the reduction in acquisition time for similar processes. The potential for dose reduction is also inherent with this approach.

  18. The Design, Development and Testing of a Multi-process Real-time Software System

    DTIC Science & Technology

    2007-03-01

    programming large systems stems from the complexity of dealing with many different details at one time. A sound engineering approach is to break...controls and 3) is portable to other OS platforms such as Microsoft Windows. Next, to reduce the complexity of the programming tasks, the system...processes depending on how often the process has to check to see if common data was modified. A good method for one process to quickly notify another

  19. Characterization of Ti and Co based biomaterials processed via laser based additive manufacturing

    NASA Astrophysics Data System (ADS)

    Sahasrabudhe, Himanshu

    Titanium- and cobalt-based metallic materials are currently the most suitable materials for load-bearing metallic biomedical applications. However, the long-term tribological degradation of these materials still remains a problem that needs a solution. To improve the tribological performance of these two metallic systems, three different research approaches were adopted, giving rise to four different research projects. First, the simplicity of laser gas nitriding was combined with modern LENS(TM) technology to form an in situ nitride-rich layer in the titanium substrate material. This nitride-rich composite coating improved the hardness by as much as fifteen times and reduced the wear rate by more than an order of magnitude. The leaching of metallic ions during wear was also reduced by four times. In the second research project, a mixture of titanium and silicon was processed on a titanium substrate in a nitrogen-rich environment. The results of this reactive, in situ additive manufacturing process were Ti-Si-Nitride coatings that were harder than the titanium substrate by more than twenty times. These coatings also reduced the wear rate by more than two orders of magnitude. In the third research approach, composites of CoCrMo alloy and calcium phosphate (CaP) bioceramic were processed using LENS(TM)-based additive manufacturing. These composites were effective in reducing the wear of the CoCrMo alloy by more than three times as well as reducing the leaching of cobalt and chromium ions during wear. The novel composite materials were found to develop a tribofilm during wear. In the final project, a combination of a hard nitride coating and the addition of CaP bioceramic was investigated by processing a mixture of Ti6Al4V alloy and CaP in a nitrogen-rich environment using the LENS(TM) technology. The resultant Ti64-CaP-Nitride coatings significantly reduced the wear damage on the substrate. There was also a drastic reduction in the metal ions leached during wear. The results indicate that the three tested approaches for reducing wear damage in Ti- and Co-based materials were successful. These approaches and the associated research investigations could pave the way for future work in alleviating wear- and corrosion-related damage, especially via the additive manufacturing route.

  20. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
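
    To make the mapping problem concrete, the sketch below gives a straightforward dynamic program that assigns a pipeline of m modules to n processors in contiguous blocks so as to minimize the bottleneck load. It is a plain O(nm²) formulation for clarity, not the improved O(nm log m) algorithm the paper describes; the module loads are made up.

      # Contiguous mapping of pipeline modules onto processors, minimizing the
      # maximum (bottleneck) processor load. Illustrative O(n*m^2) DP only.
      def map_pipeline(loads, n):
          m = len(loads)
          prefix = [0]
          for w in loads:
              prefix.append(prefix[-1] + w)
          INF = float("inf")
          # best[k][j]: minimal bottleneck using k processors for the first j modules
          best = [[INF] * (m + 1) for _ in range(n + 1)]
          best[0][0] = 0
          for k in range(1, n + 1):
              for j in range(1, m + 1):
                  for i in range(j):
                      seg = prefix[j] - prefix[i]          # load of modules i..j-1
                      best[k][j] = min(best[k][j], max(best[k - 1][i], seg))
          return best[n][m]

      print(map_pipeline([4, 2, 7, 1, 5, 3], n=3))   # -> 8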

  1. Signal quality and Bayesian signal processing in neurofeedback based on real-time fMRI.

    PubMed

    Koush, Yury; Zvyagintsev, Mikhail; Dyck, Miriam; Mathiak, Krystyna A; Mathiak, Klaus

    2012-01-02

    Real-time fMRI allows analysis and visualization of brain activity online, i.e. within one repetition time. It can be used in neurofeedback applications where subjects attempt to control the activation level in a specified region of interest (ROI) of their brain. The signal derived from the ROI is contaminated with noise and artifacts, namely physiological noise from breathing and heart beat, scanner drift, motion-related artifacts, and measurement noise. We developed a Bayesian approach to reduce noise and remove artifacts in real time using a modified Kalman filter. The system performs several signal processing operations: subtraction of constant and low-frequency signal components, spike removal, and signal smoothing. Quantitative feedback signal quality analysis was used to estimate the quality of the neurofeedback time series and the performance of the applied signal processing on different ROIs. The signal-to-noise ratio (SNR) across the entire time series and the group event-related SNR (eSNR) were significantly higher for the processed time series than for the raw data. The applied signal processing improved the t-statistic, increasing the significance of blood oxygen level-dependent (BOLD) signal changes. Accordingly, the contrast-to-noise ratio (CNR) of the feedback time series was improved as well. In addition, the data revealed an increase in localized self-control across feedback sessions. The new signal processing approach provided reliable neurofeedback, performed precise artifact removal, reduced noise, and required minimal manual adjustment of parameters. Advanced and fast online signal processing algorithms considerably increased the quality as well as the information content of the control signal, which in turn resulted in higher contingency in the neurofeedback loop. Copyright © 2011 Elsevier Inc. All rights reserved.
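
    The flavor of the online processing can be conveyed with a scalar Kalman filter on the ROI time series plus a crude innovation-based spike check; the noise parameters and the spike rule below are assumptions for illustration and are not the modified filter from the paper.

      # Scalar Kalman filter with a simple spike check for an ROI time series.
      def kalman_denoise(samples, q=0.01, r=1.0, spike_sigma=4.0):
          x, p = samples[0], 1.0              # state estimate and its variance
          out = [x]
          for z in samples[1:]:
              p_pred = p + q                  # predict (random-walk state model)
              innovation = z - x
              s = p_pred + r                  # innovation variance
              if innovation * innovation > spike_sigma ** 2 * s:
                  innovation = 0.0            # treat as spike: ignore the measurement
              k = p_pred / s                  # Kalman gain
              x = x + k * innovation          # update state
              p = (1 - k) * p_pred
              out.append(x)
          return out

      print(kalman_denoise([0.1, 0.2, 0.15, 5.0, 0.22, 0.18]))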

  2. Development of CVD Diamond for Industrial Applications Final Report CRADA No. TC-2047-02

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caplan, M.; Olstad, R.; Jory, H.

    2017-09-08

    This project was a collaborative effort to develop and demonstrate a new millimeter-wave microwave-assisted chemical vapor deposition (CVD) process for manufacturing large diamond disks with greatly reduced processing times and costs compared with those now available. In the CVD process, carbon-based gases (methane) and hydrogen are dissociated into a plasma using a microwave discharge and then deposited layer by layer as polycrystalline diamond onto a substrate. The available low-frequency (2.45 GHz) microwave sources used elsewhere (De Beers) result in low-density plasmas and low deposition rates: 4-inch diamond disks take 6-8 weeks to process. The new system developed in this project uses a high-frequency 30 GHz gyrotron as the microwave source and a quasi-optical CVD chamber, resulting in a much higher-density plasma that greatly reduced the diamond processing times (1-2 weeks).

  3. A Self-Aligned a-IGZO Thin-Film Transistor Using a New Two-Photo-Mask Process with a Continuous Etching Scheme.

    PubMed

    Fan, Ching-Lin; Shang, Ming-Chi; Li, Bo-Jyun; Lin, Yu-Zuo; Wang, Shea-Jue; Lee, Win-Der

    2014-08-11

    Minimizing the parasitic capacitance and the number of photo-masks can improve operational speed and reduce fabrication costs. Therefore, in this study, a new two-photo-mask process is proposed that exhibits a self-aligned structure without an etching-stop layer. Combining the backside-ultraviolet (BUV) exposure and backside-lift-off (BLO) schemes not only prevents damage when etching the source/drain (S/D) electrodes but also reduces the number of photo-masks required during fabrication and, at the same time, minimizes the parasitic capacitance by decreasing the gate overlap length. Compared with traditional fabrication processes, the proposed process yields thin-film transistors (TFTs) that exhibit comparable field-effect mobility (9.5 cm²/V·s), threshold voltage (3.39 V), and subthreshold swing (0.3 V/decade). The delay time of an inverter fabricated using the proposed process was considerably decreased.

  4. A Self-Aligned a-IGZO Thin-Film Transistor Using a New Two-Photo-Mask Process with a Continuous Etching Scheme

    PubMed Central

    Fan, Ching-Lin; Shang, Ming-Chi; Li, Bo-Jyun; Lin, Yu-Zuo; Wang, Shea-Jue; Lee, Win-Der

    2014-01-01

    Minimizing the parasitic capacitance and the number of photo-masks can improve operational speed and reduce fabrication costs. Therefore, in this study, a new two-photo-mask process is proposed that exhibits a self-aligned structure without an etching-stop layer. Combining the backside-ultraviolet (BUV) exposure and backside-lift-off (BLO) schemes not only prevents damage when etching the source/drain (S/D) electrodes but also reduces the number of photo-masks required during fabrication and, at the same time, minimizes the parasitic capacitance by decreasing the gate overlap length. Compared with traditional fabrication processes, the proposed process yields thin-film transistors (TFTs) that exhibit comparable field-effect mobility (9.5 cm²/V·s), threshold voltage (3.39 V), and subthreshold swing (0.3 V/decade). The delay time of an inverter fabricated using the proposed process was considerably decreased. PMID:28788159

  5. Time lens assisted photonic sampling extraction

    NASA Astrophysics Data System (ADS)

    Petrillo, Keith Gordon

    Telecommunication bandwidth demands have dramatically increased in recent years due to Internet-based services like cloud computing and storage, large file sharing, and video streaming. Additionally, sensing systems such as wideband radar and magnetic resonance imaging systems, and complex modulation formats used to handle large data transfers in telecommunications, require high-speed, high-resolution analog-to-digital converters (ADCs) to interpret the data. Accurately processing and acquiring information at next-generation data rates from these systems has become challenging for electronic systems. The largest contributors to the electronic bottleneck are bandwidth and timing jitter, which limit speed and reduce accuracy. Optical systems have been shown to offer at least three orders of magnitude more bandwidth, and state-of-the-art mode-locked lasers have reduced timing jitter to thousands of attoseconds. Such features have encouraged processing signals without the use of electronics, or using photonics to assist electronics. All-optical signal processing has allowed the processing of telecommunication line rates up to 1.28 Tb/s and high-resolution analog-to-digital converters in the 10s of gigahertz. The major drawback of these optical systems is the high cost of the components. The application of all-optical processing techniques such as a time lens and chirped processing can greatly reduce the bandwidth and cost requirements of optical serial-to-parallel converters and push photonically assisted ADCs into the 100s of gigahertz. In this dissertation, the building blocks of a high-speed photonically assisted ADC are demonstrated, each providing benefits to its own respective application. A serial-to-parallel converter using a continuously operating time lens as an optical Fourier processor is demonstrated to fully convert a 160-Gb/s optical time division multiplexed signal to 16 10-Gb/s channels with error-free operation. Using chirped processing, an optical sample-and-hold concept is demonstrated and analyzed as a resolution improvement to existing photonically assisted ADCs. Simulations indicate that the application of a continuously operating time lens to a photonically assisted sampling system can improve photonically sampled systems by an order of magnitude while acquiring properties similar to an optical sample-and-hold system.

  6. Toward zero waste to landfill: an effective method for recycling zeolite waste from refinery industry

    NASA Astrophysics Data System (ADS)

    Homchuen, K.; Anuwattana, R.; Limphitakphong, N.; Chavalparit, O.

    2017-07-01

    One-third of the landfill waste of a refinery plant in Thailand is spent chloride zeolite, which consumes a large amount of land, cost, and time for handling. Toward zero waste to landfill, this study aimed at determining an effective method for recycling zeolite waste by comparing a chemical process with an electrochemical process. To investigate the optimum conditions of both processes, the concentration of the chemical solution and the reaction time were varied for the former, while the latter was varied in terms of current density, initial pH of the water, and reaction time. The results indicate that regenerating zeolite waste from the refinery industry in Thailand should be done through the chemical process with an alkaline solution, because it provided the best chloride adsorption efficiency at the lowest cost. Successful recycling will be beneficial not only in reducing the amount of landfill waste but also in reducing material and disposal costs and the consumption of natural resources.

  7. Practical Sub-Nyquist Sampling via Array-Based Compressed Sensing Receiver Architecture

    DTIC Science & Technology

    2016-07-10

    different array elements at different sub-Nyquist sampling rates. Signal processing inspired by the sparse fast Fourier transform allows for signal...reconstruction algorithms can be computationally demanding (REF). The related sparse Fourier transform algorithms aim to reduce the processing time necessary to...compute the DFT of frequency-sparse signals [7]. In particular, the sparse fast Fourier transform (sFFT) achieves processing time better than the

  8. Application of time-variable process noise in terrestrial reference frames determined from VLBI data

    NASA Astrophysics Data System (ADS)

    Soja, Benedikt; Gross, Richard S.; Abbondanza, Claudio; Chin, Toshio M.; Heflin, Michael B.; Parker, Jay W.; Wu, Xiaoping; Balidakis, Kyriakos; Nilsson, Tobias; Glaser, Susanne; Karbon, Maria; Heinkelmann, Robert; Schuh, Harald

    2018-05-01

    In recent years, Kalman filtering has emerged as a suitable technique to determine terrestrial reference frames (TRFs), a prime example being JTRF2014. The time series approach allows variations of station coordinates that are neither reduced by observational corrections nor considered in the functional model to be taken into account. These variations are primarily due to non-tidal geophysical loading effects that are not reduced according to the current IERS Conventions (2010). It is standard practice that the process noise models applied in Kalman filter TRF solutions are derived from time series of loading displacements and account for station dependent differences. So far, it has been assumed that the parameters of these process noise models are constant over time. However, due to the presence of seasonal and irregular variations, this assumption does not truly reflect reality. In this study, we derive a station coordinate process noise model allowing for such temporal variations. This process noise model and one that is a parameterized version of the former are applied in the computation of TRF solutions based on very long baseline interferometry data. In comparison with a solution based on a constant process noise model, we find that the station coordinates are affected at the millimeter level.
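
    The essential change relative to a constant process noise model is that the random-walk variance fed to the filter's prediction step becomes a function of time, for example with an annual modulation. The sketch below illustrates this for a single station coordinate; the functional form and all numbers are assumptions for illustration, not the model derived in the study.

      # Random-walk Kalman filter for one station coordinate with a process
      # noise variance q(t) that varies seasonally instead of being constant.
      import math

      def q_of_t(day, q0=1e-2, amp=0.5, period=365.25):
          """Process noise variance (mm^2/day) with a seasonal modulation."""
          return q0 * (1.0 + amp * math.sin(2.0 * math.pi * day / period))

      def filter_coordinate(obs, obs_var=4.0):
          x, p = obs[0][1], 100.0
          for prev, (day, z) in zip(obs, obs[1:]):
              dt = day - prev[0]
              p += q_of_t(day) * dt           # prediction with time-dependent noise
              k = p / (p + obs_var)
              x, p = x + k * (z - x), (1 - k) * p
          return x, p

      observations = [(0, 12.3), (30, 12.8), (200, 11.9), (365, 12.1)]  # (day, mm)
      print(filter_coordinate(observations))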

  9. IoGET: Internet of Geophysical and Environmental Things

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar

    The objective of this project is to provide novel and fast reduced-order models for onboard computation at sensor nodes for real-time analysis. The approach will require that LANL perform high-fidelity numerical simulations, construct simple reduced-order models (ROMs) using machine learning and signal processing algorithms, and use real-time data analysis for ROMs and compressive sensing at sensor nodes.

  10. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    PubMed

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high-performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm to run on a graphics processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphics computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models of ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (ordinary differential equations) in a way that massively reduces memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
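
    The bottleneck described above is the per-candidate simulation cost. The toy sketch below parallelizes fitness evaluation across processes for a deliberately minimal genetic algorithm; the stand-in quadratic "model", population size, and mutation scheme are assumptions, not the CUDA implementation from the paper.

      # Minimal genetic algorithm with parallel fitness evaluation.
      import random
      from concurrent.futures import ProcessPoolExecutor

      TARGET = [1.5, -0.7, 2.0]          # stand-in for "true" channel parameters

      def fitness(params):
          # An expensive simulation (e.g. integrating channel ODEs) would go here.
          return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

      def evolve(pop_size=64, generations=30):
          pop = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(pop_size)]
          with ProcessPoolExecutor() as pool:
              for _ in range(generations):
                  scores = list(pool.map(fitness, pop))            # parallel step
                  ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
                  parents = ranked[: pop_size // 4]
                  pop = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                         for _ in range(pop_size)]
          return max(pop, key=fitness)

      if __name__ == "__main__":
          print(evolve())    # parameters close to TARGET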

  11. Current warming will reduce yields unless maize breeding and seed systems adapt immediately

    NASA Astrophysics Data System (ADS)

    Challinor, A. J.; Koehler, A.-K.; Ramirez-Villegas, J.; Whitfield, S.; Das, B.

    2016-10-01

    The development of crop varieties that are better suited to new climatic conditions is vital for future food production. Increases in mean temperature accelerate crop development, resulting in shorter crop durations and reduced time to accumulate biomass and yield. The process of breeding, delivery and adoption (BDA) of new maize varieties can take up to 30 years. Here, we assess for the first time the implications of warming during the BDA process by using five bias-corrected global climate models and four representative concentration pathways with realistic scenarios of maize BDA times in Africa. The results show that the projected difference in temperature between the start and end of the maize BDA cycle results in shorter crop durations that are outside current variability. Both adaptation and mitigation can reduce duration loss. In particular, climate projections have the potential to provide target elevated temperatures for breeding. Whilst options for reducing BDA time are highly context dependent, common threads include improved recording and sharing of data across regions for the whole BDA cycle, streamlining of regulation, and capacity building. Finally, we show that the results have implications for maize across the tropics, where similar shortening of duration is projected.

  12. A collaborative computing framework of cloud network and WBSN applied to fall detection and 3-D motion reconstruction.

    PubMed

    Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh

    2014-03-01

    As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can prevent accidents instantly and effectively, and can provide relevant information to reduce the related processing time and cost. This study proposes a co-processing intermediary framework integrating cloud and wireless body sensor networks, which is mainly applied to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions, and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced, enhancing the overall performance of the fall detection and 3-D motion reconstruction services.

  13. Development of materials for the rapid manufacture of die cast tooling

    NASA Astrophysics Data System (ADS)

    Hardro, Peter Jason

    The focus of this research is to develop a material composition that can be processed by rapid prototyping (RP) in order to produce tooling for the die casting process. These rapidly produced tools are intended to be superior to tools made by traditional production methods by offering one or more of the following advantages: reduced tooling cost, shortened tooling creation time, reduced man-hours for tool creation, increased tool life, and shortened die casting cycle time. By utilizing RP's additive build process and vast material selection, there was a prospect that die cast tooling could be produced more quickly and with superior material properties. To this end, the material properties that influence die life and cycle time were determined, and a list of materials that fulfill these "optimal" properties was highlighted. Physical testing was conducted in order to grade the processability of each of the material systems and to optimize the manufacturing process for the downselected material system. Sample specimens were produced, and microscopy techniques were utilized to determine a number of physical properties of the material system. Additionally, a benchmark geometry was selected, and die casting dies were produced from traditional tool materials (H13 steel) and techniques (machining) and from the newly developed materials and RP techniques (selective laser sintering (SLS) and laser engineered net shaping (LENS)). Once the tools were created, a die cast alloy was selected and a preset number of parts were shot into each tool. During tool creation, the manufacturing time and cost were closely monitored, and an economic model was developed to compare traditional tooling to RP tooling. This model allows one to determine, in the early design stages, when it is advantageous to implement RP tooling and when traditional tooling would be best. The results of the physical testing and economic analysis have shown that RP tooling is able to achieve a number of the research objectives, namely to reduce tooling cost, shorten tooling creation time, and reduce the man-hours needed for tool creation. Identifying the appropriate time to use RP tooling, however, appears to be the most important aspect of achieving successful implementation.

  14. Applying Toyota Production System principles to a psychiatric hospital: making transfers safer and more timely.

    PubMed

    Young, John Q; Wachter, Robert M

    2009-09-01

    Health care organizations have increasingly embraced industrial methods, such as the Toyota Production System (TPS), to improve quality, safety, timeliness, and efficiency. However, the use of such methods in psychiatric hospitals has been limited. A psychiatric hospital applied TPS principles to patient transfers to the outpatient medication management clinics (MMCs) from all other inpatient and outpatient services within the hospital's system. Sources of error and delay were identified, and a new process was designed to improve timely access (measured by elapsed time from request for transfer to scheduling of an appointment and to the actual visit) and patient safety by decreasing communication errors (measured by number of failed transfers). Complexity was substantially reduced, with one streamlined pathway replacing five distinct and more complicated pathways. To assess sustainability, the postintervention period was divided into Period 1 (first 12 months) and Period 2 (next 24 months). Time required to process the transfer and schedule the first appointment was reduced by 74.1% in Period 1 (p < .001) and by an additional 52.7% in Period 2 (p < .0001) for an overall reduction of 87% (p < .0001). Similarly, time to the actual appointment was reduced 31.2% in Period 1 (p < .0001), but was stable in Period 2 (p = .48). The number of transfers per month successfully processed and scheduled increased 95% in the postintervention period compared with the pre-implementation period (p = .015). Finally, data for failed transfers were only available for the postintervention period, and the rate decreased 89% in Period 2 compared with Period 1 (p = .017). The application of TPS principles enhanced access and safety through marked and sustained improvements in the transfer process's timeliness and reliability. Almost all transfer processes have now been standardized.

  15. The Application of Six Sigma Methodologies to University Processes: The Use of Student Teams

    ERIC Educational Resources Information Center

    Pryor, Mildred Golden; Alexander, Christine; Taneja, Sonia; Tirumalasetty, Sowmya; Chadalavada, Deepthi

    2012-01-01

    The first student Six Sigma team (activated under a QEP Process Sub-team) evaluated the course and curriculum approval process. The goal was to streamline the process and thereby shorten process cycle time and reduce confusion about how the process works. Members of this team developed flowcharts on how the process is supposed to work (by…

  16. Applying Systems Engineering Reduces Radiology Transport Cycle Times in the Emergency Department.

    PubMed

    White, Benjamin A; Yun, Brian J; Lev, Michael H; Raja, Ali S

    2017-04-01

    Emergency department (ED) crowding is widespread, and can result in care delays, medical errors, increased costs, and decreased patient satisfaction. Simultaneously, while capacity constraints on EDs are worsening, contributing factors such as patient volume and inpatient bed capacity are often outside the influence of ED administrators. Therefore, systems engineering approaches that improve throughput and reduce waste may hold the most readily available gains. Decreasing radiology turnaround times improves ED patient throughput and decreases patient waiting time. We sought to investigate the impact of systems engineering science targeting ED radiology transport delays and determine the most effective techniques. This prospective, before-and-after analysis of radiology process flow improvements in an academic hospital ED was exempt from institutional review board review as a quality improvement initiative. We hypothesized that reorganization of radiology transport would improve radiology cycle time and reduce waste. The intervention included systems engineering science-based reorganization of ED radiology transport processes, largely using Lean methodologies, and adding no resources. The primary outcome was average transport time between study order and complete time. All patients presenting between 8/2013-3/2016 and requiring plain film imaging were included. We analyzed electronic medical record data using Microsoft Excel and SAS version 9.4, and we used a two-sample t-test to compare data from the pre- and post-intervention periods. Following the intervention, average transport time decreased significantly and sustainably. Average radiology transport time was 28.7 ± 4.2 minutes during the three months pre-intervention. It was reduced by 15% in the first three months (4.4 minutes [95% confidence interval [CI] 1.5-7.3]; to 24.3 ± 3.3 min, P=0.021), 19% in the following six months (5.4 minutes, 95% CI [2.7-8.2]; to 23.3 ± 3.5 min, P=0.003), and 26% one year following the intervention (7.4 minutes, 95% CI [4.8-9.9]; to 21.3 ± 3.1 min, P=0.0001). This result was achieved without any additional resources, and demonstrated a continual trend towards improvement. This innovation demonstrates the value of systems engineering science to increase efficiency in ED radiology processes. In this study, reorganization of the ED radiology transport process using systems engineering science significantly increased process efficiency without additional resource use.

  17. Applying Systems Engineering Reduces Radiology Transport Cycle Times in the Emergency Department

    PubMed Central

    White, Benjamin A.; Yun, Brian J.; Lev, Michael H.; Raja, Ali S.

    2017-01-01

    Introduction Emergency department (ED) crowding is widespread, and can result in care delays, medical errors, increased costs, and decreased patient satisfaction. Simultaneously, while capacity constraints on EDs are worsening, contributing factors such as patient volume and inpatient bed capacity are often outside the influence of ED administrators. Therefore, systems engineering approaches that improve throughput and reduce waste may hold the most readily available gains. Decreasing radiology turnaround times improves ED patient throughput and decreases patient waiting time. We sought to investigate the impact of systems engineering science targeting ED radiology transport delays and determine the most effective techniques. Methods This prospective, before-and-after analysis of radiology process flow improvements in an academic hospital ED was exempt from institutional review board review as a quality improvement initiative. We hypothesized that reorganization of radiology transport would improve radiology cycle time and reduce waste. The intervention included systems engineering science-based reorganization of ED radiology transport processes, largely using Lean methodologies, and adding no resources. The primary outcome was average transport time between study order and complete time. All patients presenting between 8/2013–3/2016 and requiring plain film imaging were included. We analyzed electronic medical record data using Microsoft Excel and SAS version 9.4, and we used a two-sample t-test to compare data from the pre- and post-intervention periods. Results Following the intervention, average transport time decreased significantly and sustainably. Average radiology transport time was 28.7 ± 4.2 minutes during the three months pre-intervention. It was reduced by 15% in the first three months (4.4 minutes [95% confidence interval [CI] 1.5–7.3]; to 24.3 ± 3.3 min, P=0.021), 19% in the following six months (5.4 minutes, 95% CI [2.7–8.2]; to 23.3 ± 3.5 min, P=0.003), and 26% one year following the intervention (7.4 minutes, 95% CI [4.8–9.9]; to 21.3 ± 3.1 min, P=0.0001). This result was achieved without any additional resources, and demonstrated a continual trend towards improvement. This innovation demonstrates the value of systems engineering science to increase efficiency in ED radiology processes. Conclusion In this study, reorganization of the ED radiology transport process using systems engineering science significantly increased process efficiency without additional resource use. PMID:28435492

  18. Real time microcontroller implementation of an adaptive myoelectric filter.

    PubMed

    Bagwell, P J; Chappell, P H

    1995-03-01

    This paper describes a real-time digital adaptive filter for processing myoelectric signals. The filter time constant is automatically selected by the adaptation algorithm, giving a significant improvement over linear filters for estimating muscle force and controlling a prosthetic device. Interference from mains sources often causes problems for myoelectric processing, and so 50 Hz and all harmonic frequencies are reduced by an averaging filter and a differential process. This makes practical electrode placement and contact less critical and less time-consuming. An economical real-time implementation is essential for a prosthetic controller, and this is achieved using an Intel 80C196KC microcontroller.
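
    A rough interpretation of two of the ideas in the abstract is sketched below: a moving average spanning one 20 ms mains period (which nulls 50 Hz and its harmonics) followed by a rectified envelope whose smoothing constant is chosen adaptively. The sample rate, thresholds, and adaptation rule are assumptions, not the published filter.

      # Mains-period moving average plus an adaptively smoothed envelope.
      def mains_comb(x, fs=1000, mains_hz=50):
          n = int(fs / mains_hz)                  # 20 samples = one mains period
          return [sum(x[max(0, i - n + 1): i + 1]) / min(i + 1, n)
                  for i in range(len(x))]

      def adaptive_envelope(x, fast=0.3, slow=0.02, jump=0.5):
          y, out = 0.0, []
          for v in x:
              v = abs(v)                          # rectify
              alpha = fast if abs(v - y) > jump else slow
              y += alpha * (v - y)                # time constant picked adaptively
              out.append(y)
          return out

      emg = [0.0, 0.1, -0.2, 1.5, -1.3, 1.4, -1.2, 0.2, -0.1, 0.0]
      print(adaptive_envelope(mains_comb(emg)))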

  19. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    PubMed Central

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared with the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223

  20. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce.

    PubMed

    Idris, Muhammad; Hussain, Shujaat; Siddiqi, Muhammad Hameed; Hassan, Waseem; Syed Muhammad Bilal, Hafiz; Lee, Sungyoung

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared with the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement.

  1. Operations research methods improve chemotherapy patient appointment scheduling.

    PubMed

    Santibáñez, Pablo; Aristizabal, Ruben; Puterman, Martin L; Chow, Vincent S; Huang, Wenhai; Kollmannsberger, Christian; Nordin, Travis; Runzer, Nancy; Tyldesley, Scott

    2012-12-01

    Clinical complexity, scheduling restrictions, and outdated manual booking processes resulted in frequent clerical rework, long waitlists for treatment, and late appointment notification for patients at a chemotherapy clinic in a large cancer center in British Columbia, Canada. A 17-month study was conducted to address booking, scheduling and workload issues and to develop, implement, and evaluate solutions. A review of scheduling practices included process observation and mapping, analysis of historical appointment data, creation of a new performance metric (final appointment notification lead time), and a baseline patient satisfaction survey. Process improvement involved discrete event simulation to evaluate alternative booking practice scenarios, development of an optimization-based scheduling tool to improve scheduling efficiency, and change management for implementation of process changes. Results were evaluated through analysis of appointment data, a follow-up patient survey, and staff surveys. Process review revealed a two-stage scheduling process. Long waitlists and late notification resulted from an inflexible first-stage process. The second-stage process was time consuming and tedious. After a revised, more flexible first-stage process and an automated second-stage process were implemented, the median percentage of appointments exceeding the final appointment notification lead time target of one week was reduced by 57% and median waitlist size decreased by 83%. Patient surveys confirmed increased satisfaction while staff feedback reported reduced stress levels. Significant operational improvements can be achieved through process redesign combined with operations research methods.

  2. The Effect of Aging on the Stages of Processing in a Choice Reaction Time Task

    ERIC Educational Resources Information Center

    Simon, J. Richard; Pouraghabagher, A. Reza

    1978-01-01

    Two experiments were conducted to determine the effect of aging on encoding and response selection stages of a choice reaction time task. Results suggested reducing stimulus discriminability may affect information processing prior to the encoding stage, but the encoding stage is the primary locus of the slowing which accompanied aging. (Author)

  3. Improving operating room turnover time: a systems based approach.

    PubMed

    Bhatt, Ankeet S; Carlson, Grant W; Deckers, Peter J

    2014-12-01

    Operating room (OR) turnover time (TT) has a broad and significant impact on hospital administrators, providers, staff, and patients. Our objective was to identify current problems in TT management and implement a consistent, reproducible process to reduce average TT and process variability. Initial observations of TT were made to document the existing process at a 511-bed, 24-OR academic medical center. Three control groups, including one consisting of Orthopedic and Vascular Surgery, were used to limit potential confounders such as case acuity/duration and equipment needs. A redesigned process based on observed issues, focusing on a horizontally structured, systems-based approach, has three major interventions: developing consistent criteria for OR readiness, utilizing parallel processing for patient and room readiness, and enhancing perioperative communication. Process redesign was implemented in Orthopedics and Vascular Surgery. Comparisons of the mean and standard deviation of TT were made using an independent 2-tailed t-test. Using all surgical specialties as controls (n = 237), mean TT (hh:mm:ss) was reduced by 0:20:48 (95% CI, 0:10:46-0:30:50), from 0:44:23 to 0:23:25, a 46.9% reduction. The standard deviation of TT was reduced by 0:10:32, from 0:16:24 to 0:05:52, and the frequency of TT≥30 min was reduced from 72.5% to 11.7%. P < 0.001 for each. Using Vascular and Orthopedic surgical specialties as controls (n = 13), mean TT was reduced by 0:15:16 (95% CI, 0:07:18-0:23:14), from 0:38:51 to 0:23:35, a 39.4% reduction. The standard deviation of TT was reduced by 0:08:47, from 0:14:39 to 0:05:52, and the frequency of TT≥30 min was reduced from 69.2% to 11.7%. P < 0.001 for each. Reductions in mean TT present major efficiency, quality improvement, and cost-reduction opportunities. An OR redesign process focusing on parallel processing and enhanced communication resulted in a greater than 35% reduction in TT. A systems-based focus should drive OR TT design.

  4. GEOTAIL Spacecraft historical data report

    NASA Technical Reports Server (NTRS)

    Boersig, George R.; Kruse, Lawrence F.

    1993-01-01

    The purpose of this GEOTAIL Historical Report is to document ground processing operations information gathered on the GEOTAIL mission during processing activities at the Cape Canaveral Air Force Station (CCAFS). It is hoped that this report may aid management analysis, improve integration processing and forecasting of processing trends, and reduce real-time schedule changes. The GEOTAIL payload is the third Delta 2 Expendable Launch Vehicle (ELV) mission to document historical data. Comparisons of planned versus as-run schedule information are displayed. Information will generally fall into the following categories: (1) payload stay times (payload processing facility/hazardous processing facility/launch complex-17A); (2) payload processing times (planned, actual); (3) schedule delays; (4) integrated test times (experiments/launch vehicle); (5) unique customer support requirements; (6) modifications performed at facilities; (7) other appropriate information (Appendices A & B); and (8) lessons learned (reference Appendix C).

  5. Time Reversal Acoustic Communication Using Filtered Multitone Modulation

    PubMed Central

    Sun, Lin; Chen, Baowei; Li, Haisen; Zhou, Tian; Li, Ruo

    2015-01-01

    The multipath spread in underwater acoustic channels is severe and, therefore, when the symbol rate of time reversal (TR) acoustic communication using single-carrier (SC) modulation is high, the large intersymbol interference (ISI) span caused by multipath reduces the performance of the TR process and needs to be removed using a long adaptive equalizer as the post-processor. In this paper, a TR acoustic communication method using filtered multitone (FMT) modulation is proposed in order to reduce the residual ISI in the TR-processed signal. In the proposed method, FMT modulation is exploited to modulate information symbols onto separate subcarriers with high spectral containment, and the TR technique, together with adaptive equalization, is adopted at the receiver to suppress ISI and noise. The performance of the proposed method is assessed through simulation and real data from a trial in an experimental pool. The proposed method was compared with TR acoustic communication using SC modulation with the same spectral efficiency. Results demonstrate that the proposed method can improve the performance of the TR process and reduce the computational complexity of adaptive equalization for post-processing. PMID:26393586

  6. Time Reversal Acoustic Communication Using Filtered Multitone Modulation.

    PubMed

    Sun, Lin; Chen, Baowei; Li, Haisen; Zhou, Tian; Li, Ruo

    2015-09-17

    The multipath spread in underwater acoustic channels is severe and, therefore, when the symbol rate of time reversal (TR) acoustic communication using single-carrier (SC) modulation is high, the large intersymbol interference (ISI) span caused by multipath reduces the performance of the TR process and needs to be removed using a long adaptive equalizer as the post-processor. In this paper, a TR acoustic communication method using filtered multitone (FMT) modulation is proposed in order to reduce the residual ISI in the TR-processed signal. In the proposed method, FMT modulation is exploited to modulate information symbols onto separate subcarriers with high spectral containment, and the TR technique, together with adaptive equalization, is adopted at the receiver to suppress ISI and noise. The performance of the proposed method is assessed through simulation and real data from a trial in an experimental pool. The proposed method was compared with TR acoustic communication using SC modulation with the same spectral efficiency. Results demonstrate that the proposed method can improve the performance of the TR process and reduce the computational complexity of adaptive equalization for post-processing.

  7. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    NASA Astrophysics Data System (ADS)

    Yang, Kuojun; Tian, Shulin; Zeng, Hao; Qiu, Lei; Guo, Lianping

    2014-04-01

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal. Thus, this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO that causes the loss of measured signal needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to the displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness levels. Thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampling point in one clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used for storing sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.
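
    The core of the TWM idea is to accumulate many acquired sweeps into a (time, amplitude) hit-count grid so that the count at each cell sets display brightness, letting rare events show up as faint traces. Below is a minimal numpy sketch of that mapping; it is a software illustration of the principle, not the authors' FPGA/DDR3 pipeline, and all names and parameters are hypothetical.

    ```python
    # Minimal sketch of the waveform-mapping idea described above (not the authors'
    # FPGA implementation): many sweeps are accumulated into a (time, amplitude)
    # hit-count grid, and the count at each cell sets the display brightness.
    import numpy as np

    def map_waveforms(sweeps, n_levels=256):
        """sweeps: array of shape (n_sweeps, n_samples) with values in [0, 1)."""
        n_sweeps, n_samples = sweeps.shape
        grid = np.zeros((n_levels, n_samples), dtype=np.uint32)
        levels = np.clip((sweeps * n_levels).astype(int), 0, n_levels - 1)
        for sweep in levels:                      # parallel TWM would process points concurrently
            grid[sweep, np.arange(n_samples)] += 1
        return grid                               # higher counts -> brighter pixels

    # Example: 10,000 noisy sine sweeps with an occasional glitch
    t = np.linspace(0, 1, 500)
    sweeps = 0.5 + 0.4 * np.sin(2 * np.pi * 5 * t) + 0.01 * np.random.randn(10000, 500)
    sweeps[::997, 250] = 0.95                     # rare event, shows up as a faint trace
    intensity = map_waveforms(sweeps)
    ```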

  8. A seamless acquisition digital storage oscilloscope with three-dimensional waveform display

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Kuojun, E-mail: kuojunyang@gmail.com; Guo, Lianping; School of Electrical and Electronic Engineering, Nanyang Technological University

    In a traditional digital storage oscilloscope (DSO), sampled data need to be processed after each acquisition. During data processing, the acquisition is stopped and the oscilloscope is blind to the input signal. Thus, this duration is called dead time. With the rapid development of modern electronic systems, the effect of infrequent events becomes significant. To capture these occasional events in a shorter time, the dead time in a traditional DSO that causes the loss of measured signal needs to be reduced or even eliminated. In this paper, a seamless acquisition oscilloscope without dead time is proposed. In this oscilloscope, a three-dimensional waveform mapping (TWM) technique, which converts sampled data to the displayed waveform, is proposed. With this technique, not only is the processing speed improved, but the probability information of the waveform is also displayed with different brightness levels. Thus, a three-dimensional waveform is shown to the user. To reduce processing time further, parallel TWM, which processes several sampled points simultaneously, and a dual-port random access memory based pipelining technique, which can process one sampling point in one clock period, are proposed. Furthermore, two DDR3 (Double-Data-Rate Three Synchronous Dynamic Random Access Memory) devices are used for storing sampled data alternately, so the acquisition can continue during data processing. Therefore, the dead time of the DSO is eliminated. In addition, a double-pulse test method is adopted to test the waveform capturing rate (WCR) of the oscilloscope and a combined pulse test method is employed to evaluate the oscilloscope's capture ability comprehensively. The experimental results show that the WCR of the designed oscilloscope is 6 250 000 wfms/s (waveforms per second), the highest value among all existing oscilloscopes. The testing results also prove that there is no dead time in our oscilloscope, thus realizing seamless acquisition.

  9. DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.

    2008-12-01

    A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest thus building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and for the interaction of a Mach 12 shock on a wedge in a channel.

  10. Implementation of Testing Equipment for Asphalt Materials : Tech Summary

    DOT National Transportation Integrated Search

    2009-05-01

    Three new automated methods for related asphalt material and mixture testing were evaluated under this study. Each of these devices is designed to reduce testing time considerably and reduce operator error by automating the testing process. The Thery...

  11. Implementation of testing equipment for asphalt materials : tech summary.

    DOT National Transportation Integrated Search

    2009-05-01

    Three new automated methods for related asphalt material and mixture testing were evaluated under this study. Each of these devices is designed to reduce testing time considerably and reduce operator error by automating the testing process. The T...

  12. Additive Manufacturing of Tooling for Refrigeration Cabinet Foaming Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, Brian K; Nuttall, David; Cukier, Michael

    The primary objective of this project was to leverage the Big Area Additive Manufacturing (BAAM) process and materials into a long term, quick change tooling concept to drastically reduce product lead and development timelines and costs. Current refrigeration foam molds are complicated to manufacture, involving casting several aluminum parts in an approximate shape, machining components of the molds, and post fitting and shimming of the parts in an articulated fixture. The total process timeline can take over 6 months. The foaming process is slower than required for production, therefore multiple fixtures, 10 to 27, are required per refrigerator model. Molds are particular to a specific product configuration, making mixed model assembly challenging for sequencing, mold changes or auto changeover features. The initial goal was to create a tool leveraging the ORNL materials and additive process to build a tool in 4 to 6 weeks or less. A secondary goal was to create common fixture cores and provide lightweight fixture sections that could be revised in a very short time to increase equipment flexibility, reduce lead times, lower the barriers to first production trials, and reduce tooling costs.

  13. Heat transfer enhancement in triplex-tube latent thermal energy storage system with selected arrangements of fins

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Xing, Yuming; Liu, Xin; Rui, Zhoufeng

    2018-01-01

    The use of thermal energy storage systems can effectively reduce energy consumption and improve system performance. One promising approach to thermal energy storage is the application of phase change materials (PCMs). In this study, a two-dimensional numerical model is presented to investigate heat transfer enhancement during the melting/solidification process in a triplex tube heat exchanger (TTHX) using the FLUENT software. Both thermal conduction and natural convection are taken into account in the simulation of the melting/solidification process. With the volume fraction of the fins kept constant, the influence of the proposed fin arrangement on the temporal profile of liquid fraction over the melting process is studied and reported. By rotating the unit through different angles, the simulation shows that the melting time varies little, which means that the impact of installation error is reduced with the selected fin arrangement. Investigation of the solidification process shows that the proposed fin arrangement can also effectively reduce the solidification time of the PCM. To summarize, this work presents a shape optimization for the improvement of the thermal energy storage system by considering both the thermal energy charging and discharging processes.

  14. Evaluation of process excellence tools in improving donor flow management in a tertiary care hospital in South India

    PubMed Central

    Venugopal, Divya; Rafi, Aboobacker Mohamed; Innah, Susheela Jacob; Puthayath, Bibin T.

    2017-01-01

    BACKGROUND: Process Excellence is a value-based approach that focuses on standardizing work processes by eliminating non-value-added processes, identifying process-improving methodologies, and maximizing the capacity and expertise of the staff. AIM AND OBJECTIVES: To evaluate the utility of process excellence tools in improving donor flow management in a tertiary care hospital by studying the current state of donor movement within the blood bank and providing recommendations for eliminating wait times and improving the process and workflow. MATERIALS AND METHODS: The work was done in two phases. The first phase comprised on-site observations with the help of an expert trained in process excellence methodology, who observed and documented various aspects of donor flow, donor turnaround time, total staff details and operator process flow. The second phase comprised the constitution of a team to analyse the data collected. The analyzed data, along with the recommendations, were presented before an expert hospital committee and the management. RESULTS: Our analysis put forward our strengths and identified potential problems. Donor wait time was reduced by 50% after the lean intervention, owing to better donor management and reorganization of the infrastructure of the donor area. Receptionist tracking showed that the staff spent 62% of their total time walking and 22% on other non-value-added activities. Defining duties for each staff member reduced the time they spent on non-value-added activities. Implementation of the token system, generation of a unique identification code for donors, and bar code labeling of the tubes and bags are among the other recommendations. CONCLUSION: Process Excellence is not a programme; it is a culture that transforms an organization and improves its quality and efficiency through new attitudes, elimination of waste, and reduction in costs. PMID:28970681

  15. Evaluation of process excellence tools in improving donor flow management in a tertiary care hospital in South India.

    PubMed

    Venugopal, Divya; Rafi, Aboobacker Mohamed; Innah, Susheela Jacob; Puthayath, Bibin T

    2017-01-01

    Process Excellence is a value-based approach that focuses on standardizing work processes by eliminating non-value-added processes, identifying process-improving methodologies, and maximizing the capacity and expertise of the staff. The aim was to evaluate the utility of process excellence tools in improving donor flow management in a tertiary care hospital by studying the current state of donor movement within the blood bank and providing recommendations for eliminating wait times and improving the process and workflow. The work was done in two phases. The first phase comprised on-site observations with the help of an expert trained in process excellence methodology, who observed and documented various aspects of donor flow, donor turnaround time, total staff details and operator process flow. The second phase comprised the constitution of a team to analyse the data collected. The analyzed data, along with the recommendations, were presented before an expert hospital committee and the management. Our analysis put forward our strengths and identified potential problems. Donor wait time was reduced by 50% after the lean intervention, owing to better donor management and reorganization of the infrastructure of the donor area. Receptionist tracking showed that the staff spent 62% of their total time walking and 22% on other non-value-added activities. Defining duties for each staff member reduced the time they spent on non-value-added activities. Implementation of the token system, generation of a unique identification code for donors, and bar code labeling of the tubes and bags are among the other recommendations. Process Excellence is not a programme; it is a culture that transforms an organization and improves its quality and efficiency through new attitudes, elimination of waste, and reduction in costs.

  16. A data colocation grid framework for big data medical image processing: backend design

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering a variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results from three empirical experiments are presented and discussed: (1) load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model is empirically verified on the grid framework and compared with the same cluster deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
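
    The dataset summary statistic described above fits the MapReduce pattern: map each image to partial sums, then reduce the partials into a per-voxel mean and standard deviation. The following plain-Python sketch shows that split on toy arrays; it assumes nothing about the HadoopBase-MIP API and is not the authors' code.

    ```python
    # Minimal map/reduce sketch of a dataset summary statistic (mean and standard
    # deviation per voxel), in the spirit of the MapReduce model described above.
    # Plain Python/numpy on toy arrays, not the HadoopBase-MIP API.
    import numpy as np
    from functools import reduce

    def map_image(img):
        # Emit partial sums for one image: (count, sum, sum of squares).
        img = img.astype(np.float64)
        return (1, img, img ** 2)

    def reduce_partials(a, b):
        # Combine two partial results element-wise.
        return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

    images = [np.random.rand(4, 4) for _ in range(100)]         # stand-in for T1 volumes
    n, s, ss = reduce(reduce_partials, map(map_image, images))
    mean = s / n
    std = np.sqrt(ss / n - mean ** 2)                            # population std per voxel
    ```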

  17. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.

    PubMed

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A

    2018-03-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering a variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results from three empirical experiments are presented and discussed: (1) load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model is empirically verified on the grid framework and compared with the same cluster deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.

  18. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    PubMed Central

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated when considering a variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL query. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous cluster) and MapReduce templates. A dataset summary statistic model is discussed and implemented by the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Results from three empirical experiments are presented and discussed: (1) load balancer wall-time improvement of 1.5-fold compared with a framework with a built-in data allocation strategy, (2) a summary statistic model is empirically verified on the grid framework and compared with the same cluster deployed with a standard Sun Grid Engine (SGE), reducing wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668

  19. PACE 2: Pricing and Cost Estimating Handbook

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.; Shepherd, T.

    1977-01-01

    An automatic data processing system to be used for the preparation of industrial engineering type man-hour and material cost estimates has been established. This computer system has evolved into a highly versatile and highly flexible tool which significantly reduces computation time, eliminates computational errors, and reduces typing and reproduction time for estimators and pricers, since all mathematical and clerical functions are automatic once basic inputs are derived.

  20. Real-Time Visualization of an HPF-based CFD Simulation

    NASA Technical Reports Server (NTRS)

    Kremenetsky, Mark; Vaziri, Arsi; Haimes, Robert; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Current time-dependent CFD simulations produce very large multi-dimensional data sets at each time step. The visual analysis of computational results is traditionally performed by post-processing the static data on graphics workstations. We present results from an alternate approach in which we analyze the simulation data in situ on each processing node at the time of simulation. The locally analyzed results, usually more economical and in a reduced form, are then combined and sent back for visualization on a graphics workstation.

  1. Spaceborne Hybrid-FPGA System for Processing FTIR Data

    NASA Technical Reports Server (NTRS)

    Bekker, Dmitriy; Blavier, Jean-Francois L.; Pingree, Paula J.; Lukowiak, Marcin; Shaaban, Muhammad

    2008-01-01

    Progress has been made in a continuing effort to develop a spaceborne computer system for processing readout data from a Fourier-transform infrared (FTIR) spectrometer to reduce the volume of data transmitted to Earth. The approach followed in this effort, oriented toward reducing design time and reducing the size and weight of the spectrometer electronics, has been to exploit the versatility of recently developed hybrid field-programmable gate arrays (FPGAs) to run diverse software on embedded processors while also taking advantage of the reconfigurable hardware resources of the FPGAs.

  2. Hardware design and implementation of fast DOA estimation method based on multicore DSP

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-10-01

    In this paper, we present a high-speed real-time signal processing hardware platform based on multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics including high performance computing, low power consumption, large-capacity data storage and high speed data transmission, which make it able to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation to different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time direction of arrival (DOA) estimation.
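
    For context on the algorithmic side, the sketch below computes a standard (complex-valued) MUSIC pseudospectrum for a uniform linear array with numpy. The paper's real-valued MUSIC variant and its decomposition across DSP cores are not reproduced; the array geometry and parameters here are assumptions.

    ```python
    # Minimal numpy sketch of a standard MUSIC pseudospectrum for a uniform linear
    # array (ULA). The paper uses a real-valued MUSIC variant and splits the steps
    # across DSP cores; neither of those details is reproduced here.
    import numpy as np

    def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
        """X: (n_sensors, n_snapshots) complex array data; d: spacing in wavelengths."""
        n_sensors = X.shape[0]
        R = X @ X.conj().T / X.shape[1]                 # sample covariance matrix
        _, eigvecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
        En = eigvecs[:, :n_sensors - n_sources]         # noise subspace (smallest eigenvalues)
        spectrum = []
        for theta in np.deg2rad(angles):
            a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))  # steering vector
            spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
        return angles, np.array(spectrum)               # peaks indicate estimated DOAs
    ```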

  3. Parafoveal preview during reading: Effects of sentence position

    PubMed Central

    White, Sarah J.; Warren, Tessa; Reichle, Erik D.

    2011-01-01

    Two experiments examined parafoveal preview for words located in the middle of sentences and at sentence boundaries. Parafoveal processing was shown to occur for words at sentence-initial, mid-sentence, and sentence-final positions. Both Experiments 1 and 2 showed reduced effects of preview on regressions out for sentence-initial words. In addition, Experiment 2 showed reduced preview effects on first-pass reading times for sentence-initial words. These effects of sentence position on preview could result from reduced parafoveal processing for sentence-initial words, or other processes specific to word reading at sentence boundaries. In addition to the effects of preview, the experiments also demonstrate variability in the effects of sentence wrap-up on different reading measures, indicating that the presence and time course of wrap-up effects may be modulated by text-specific factors. We also report simulations of Experiment 2 using version 10 of E-Z Reader (Reichle, Warren, & McConnell, 2009), designed to explore the possible mechanisms underlying parafoveal preview at sentence boundaries. PMID:21500948

  4. Impact of point-of-care implementation of Xpert® MTB/RIF: product vs. process innovation.

    PubMed

    Schumacher, S G; Thangakunam, B; Denkinger, C M; Oliver, A A; Shakti, K B; Qin, Z Z; Michael, J S; Luo, R; Pai, M; Christopher, D J

    2015-09-01

    Both product innovation (e.g., more sensitive tests) and process innovation (e.g., a point-of-care [POC] testing programme) could improve patient outcomes. The objective was to study the respective contributions of product and process innovation in improving patient outcomes. We implemented a POC programme using Xpert® MTB/RIF in an out-patient clinic of a tertiary care hospital in India. We measured the impact of process innovation by comparing time to diagnosis with routine testing vs. POC testing. We measured the impact of product innovation by comparing accuracy and time to diagnosis using smear microscopy vs. POC Xpert. We enrolled 1012 patients over a 15-month period. Xpert had high accuracy, but the incremental value of one Xpert over two smears was only 6% (95%CI 3-12). Implementing Xpert as a routine laboratory test did not reduce the time to diagnosis compared to smear-based diagnosis. In contrast, the POC programme reduced the time to diagnosis by 5.5 days (95%CI 4.3-6.7), but required dedicated staff and substantial adaptation of clinic workflow. Process innovation by way of a POC Xpert programme had a greater impact on time to diagnosis than the product per se, and can yield important improvements in patient care that are complementary to those achieved by introducing innovative technologies.

  5. Stable and verifiable state estimation methods and systems with spacecraft applications

    NASA Technical Reports Server (NTRS)

    Li, Rongsheng (Inventor); Wu, Yeong-Wei Andy (Inventor)

    2001-01-01

    The stability of a recursive estimator process (e.g., a Kalman filter) is assured for long time periods by periodically resetting the error covariance P(t_n) of the system to a predetermined reset value P_r. The recursive process is thus repetitively forced to start from a selected covariance and continue for a time period that is short compared to the system's total operational time period. The time period in which the process must maintain its numerical stability is significantly reduced, as is the demand on the system's numerical stability. The process stability for an extended operational time period T_o is verified by performing the resetting step at the end of at least one reset time period T_r whose duration is less than the operational time period T_o, and then confirming stability of the process over the reset time period T_r. Because the recursive process starts from a selected covariance at the beginning of each reset time period T_r, confirming stability of the process over at least one reset time period substantially confirms stability over the longer operational time period T_o.
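
    A minimal scalar sketch of the reset idea, under an assumed random-walk model: an ordinary Kalman recursion runs for a reset period T_r, after which the error covariance P is forced back to the preset value P_r, so numerical stability only has to hold over the shorter interval. All model values below are illustrative.

    ```python
    # Minimal 1-D sketch of the covariance-reset idea described above: a scalar
    # Kalman filter whose error covariance P is forced back to a preset value P_r
    # at the end of every reset period T_r. Dynamics and noise levels are assumed.
    import numpy as np

    F, H, Q, R = 1.0, 1.0, 1e-4, 1e-2       # assumed scalar model
    P_r, T_r = 1.0, 1000                    # reset covariance and reset period (steps)

    x_hat, P = 0.0, P_r
    rng = np.random.default_rng(1)
    truth = 0.0
    for k in range(5000):
        truth = F * truth + rng.normal(scale=np.sqrt(Q))
        z = H * truth + rng.normal(scale=np.sqrt(R))
        # Predict
        x_hat, P = F * x_hat, F * P * F + Q
        # Update
        K = P * H / (H * P * H + R)
        x_hat, P = x_hat + K * (z - H * x_hat), (1 - K * H) * P
        # Periodic reset: restart the recursion from the preset covariance
        if (k + 1) % T_r == 0:
            P = P_r
    ```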

  6. Pulse transmission receiver with higher-order time derivative pulse generator

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-08-12

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission receiver includes: a front-end amplification/processing circuit; a synchronization circuit coupled to the front-end amplification/processing circuit; a clock coupled to the synchronization circuit; a trigger signal generator coupled to the clock; and at least one higher-order time derivative pulse generator coupled to the trigger signal generator. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.

  7. Temporal interpolation alters motion in fMRI scans: Magnitudes and consequences for artifact detection.

    PubMed

    Power, Jonathan D; Plitt, Mark; Kundu, Prantik; Bandettini, Peter A; Martin, Alex

    2017-01-01

    Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).
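
    As a rough illustration of why temporal smoothing shrinks motion estimates, the sketch below computes framewise displacement (FD) from six rigid-body parameters and compares it against the same trace after a simple half-frame linear resampling. Real pipelines interpolate the image data rather than the parameter traces, so this is only a surrogate for the effect described above; the motion trace is simulated.

    ```python
    # Minimal sketch: framewise displacement (FD) from six rigid-body motion
    # parameters, before and after a crude temporal interpolation of the trace.
    # This is a surrogate illustration, not the authors' processing pipeline.
    import numpy as np

    def framewise_displacement(params, radius=50.0):
        """params: (n_frames, 6) = 3 translations (mm) + 3 rotations (radians)."""
        diffs = np.abs(np.diff(params, axis=0))
        diffs[:, 3:] *= radius                      # rotations -> arc length on a 50 mm sphere
        return diffs.sum(axis=1)

    rng = np.random.default_rng(0)
    params = np.cumsum(rng.normal(scale=0.05, size=(300, 6)), axis=0)   # drifting motion trace

    # Surrogate "slice-time correction": resample each parameter at half-frame shifts.
    t = np.arange(300)
    shifted = np.stack([np.interp(t + 0.5, t, params[:, j]) for j in range(6)], axis=1)

    print(framewise_displacement(params).mean(), framewise_displacement(shifted).mean())
    ```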

  8. Temporal interpolation alters motion in fMRI scans: Magnitudes and consequences for artifact detection

    PubMed Central

    Plitt, Mark; Kundu, Prantik; Bandettini, Peter A.; Martin, Alex

    2017-01-01

    Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10–50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion). PMID:28880888

  9. Effects of De-spinning and Lithosphere Thickening on the Lunar Fossil Bulge

    NASA Astrophysics Data System (ADS)

    Zhong, S.; Qin, C.; Phillips, R. J.

    2016-12-01

    The Moon has abnormally large degree-2 anomalies in gravity and shape (or bulge). The degree-2 gravity coefficients C20 and C22 are, respectively, 22 and 7 times greater than expected from the Moon's current orbital and rotational states. One prevalent hypothesis, called the fossil bulge hypothesis, interprets the current degree-2 shape as a remnant of the bulge that froze in when the Moon was closer to the Earth with stronger tidal and rotational potentials. However, the dynamic feasibility of the freeze-in process has never been quantitatively examined. In this study, we explore, using numerical models of viscoelastic deformation with time-dependent rotational potential and lithospheric rheology, how the degree-2 bulge would evolve with time as the early Moon cools and migrates away from the Earth. Our model includes two competing effects: 1) a thickening lithosphere with time through cooling, which helps maintain the bulge, and 2) de-spinning through tidal locking, which tends to reduce the bulge. In our model, a strong lithosphere is represented by the topmost layer that is orders of magnitude more viscous than the mantle. The benchmark results show that our numerical model can compute the bulge size accurately. Our calculations start with a bulge size that is in hydrostatic equilibrium with the initial rotational rate. The bulge decreases with time as the Moon spins down, while the lithosphere can support a certain amount of bulge as it thickens. We find that the final size of the bulge is controlled by the relative time scales of the two processes. In limiting cases, if the time scale of de-spinning were much larger than that of lithosphere thickening, the bulge size would be largely maintained. Conversely, the bulge size would be reduced significantly. We will consider more realistic time scales for these two processes, as well as effects of other subsequent processes after lunar magma ocean crystallization, such as large impacts and mare volcanism.

  10. The role of business process reengineering in health care.

    PubMed

    Kohn, D

    1994-02-01

    Business process reengineering (BPR) is a management philosophy capturing attention in health care. It combines some new, old, and recycled management philosophies, and, more often than not, is yielding positive results. BPR's emphasis is on the streamlining of cross-functional processes to significantly reduce time and/or cost, increase revenue, improve quality and service, and reduce risk. Therefore, it has many applications in health care. This article provides an introduction to the concept of BPR, including the definition of BPR, its origin, its champions, and factors for its success.

  11. Process for reducing series resistance of solar-cell metal-contact systems with a soldering-flux etchant

    DOEpatents

    Coyle, R.T.; Barrett, J.M.

    1982-05-04

    Disclosed is a process for substantially reducing the series resistance of a solar cell having a thick film metal contact assembly thereon while simultaneously removing oxide coatings from the surface of the assembly prior to applying solder therewith. The process includes applying a flux to the contact assembly and heating the cell for a period of time sufficient to substantially remove the series resistance associated with the assembly by etching the assembly with the flux while simultaneously removing metal oxides from said surface of said assembly.

  12. Process for reducing series resistance of solar cell metal contact systems with a soldering flux etchant

    DOEpatents

    Coyle, R. T.; Barrett, Joy M.

    1984-01-01

    Disclosed is a process for substantially reducing the series resistance of a solar cell having a thick film metal contact assembly thereon while simultaneously removing oxide coatings from the surface of the assembly prior to applying solder therewith. The process includes applying a flux to the contact assembly and heating the cell for a period of time sufficient to substantially remove the series resistance associated with the assembly by etching the assembly with the flux while simultaneously removing metal oxides from said surface of said assembly.

  13. 75 FR 42296 - Safe, Efficient Use and Preservation of the Navigable Airspace

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-21

    ... facilitates the aeronautical study process and has reduced the overall processing time for these cases. The... cases to be processed, particularly if additional information, via public comment period, was necessary... the permit application is not necessary. There are cases where circulating the proposal for public...

  14. Environmental Data Flow Six Sigma Process Improvement Savings Overview

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paige, Karen S

    An overview of the Environmental Data Flow Six Sigma improvement project covers LANL’s environmental data processing following receipt from the analytical laboratories. The Six Sigma project identified thirty-three process improvements, many of which focused on cutting costs or reducing the time it took to deliver data to clients.

  15. Reduced order model based on principal component analysis for process simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Y.; Malacina, A.; Biegler, L.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
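
    A generic PCA/POD reduced-order model follows the recipe sketched below: collect snapshots of a high-fidelity solution over a sampled input domain, truncate the SVD to a few principal modes, and interpolate the modal coefficients for new inputs. This is an illustration with a stand-in analytic "solver", not the Aspen Plus/FLUENT workflow used in the study.

    ```python
    # Minimal sketch of a PCA-based reduced-order model: snapshots of a (stand-in)
    # high-fidelity field are collected over sampled inputs, a truncated SVD gives
    # the principal modes, and modal coefficients are interpolated for new inputs.
    import numpy as np
    from scipy.interpolate import interp1d

    def high_fidelity(mu, x=np.linspace(0, 1, 200)):
        return np.sin(np.pi * x * mu) * np.exp(-mu * x)      # stand-in for a CFD solve

    inputs = np.linspace(0.5, 3.0, 25)                        # sampled input domain
    snapshots = np.column_stack([high_fidelity(mu) for mu in inputs])

    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    r = 5                                                     # retained principal components
    modes = U[:, :r]
    coeffs = modes.T @ (snapshots - mean)                     # (r, n_samples) modal coefficients

    coeff_model = interp1d(inputs, coeffs, kind="cubic")      # cheap surrogate for coefficients
    rom = lambda mu: mean[:, 0] + modes @ coeff_model(mu)     # evaluates in microseconds

    print(np.max(np.abs(rom(1.7) - high_fidelity(1.7))))      # ROM error inside the sampled domain
    ```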

  16. [Effect of sodium carbonate assisted hydrothermal process on heavy metals stabilization in medical waste incinerator fly ash].

    PubMed

    Jin, Jian; Li, Xiao-dong; Chi, Yong; Yan, Jian-hua

    2010-04-01

    A sodium carbonate assisted hydrothermal process was introduced to stabilize the fly ash from a medical waste incinerator. The results showed that the sodium carbonate assisted hydrothermal process reduced the heavy metal leachability of the fly ash, and that the heavy metal waste water from the process would not cause secondary pollution. The leachabilities of the heavy metals studied in this paper were Cd 1.97 mg/L, Cr 1.56 mg/L, Cu 2.56 mg/L, Mn 17.30 mg/L, Ni 1.65 mg/L, Pb 1.56 mg/L and Zn 189.00 mg/L, and after the hydrothermal process under the optimal experimental conditions (Na2CO3/fly ash dosage = 5/20, reaction time = 8 h, L/S ratio = 10/1) the leachability was reduced to < 0.02 mg/L for Cd, Cr, Cu, Mn, Ni and Pb, and 0.05 mg/L for Zn, according to GB 5085.3-2007. Meanwhile, the concentrations of heavy metals in the effluent after the hydrothermal process were less than 0.8 mg/L. The heavy metal leachability and the concentrations in the effluent decreased with prolonged reaction time. Prolonged aging can affect the leachability of metals as the solids become more crystalline and heavy metals are incorporated into the crystalline phase. The mechanism of heavy metal stabilization can be attributed to co-precipitation and adsorption associated with aluminosilicate formation, crystallization and the aging process.

  17. Lead recovery from waste CRT funnel glass by high-temperature melting process.

    PubMed

    Hu, Biao; Hui, Wenlong

    2018-02-05

    In this research, a novel and effective process for waste CRT funnel glass treatment was developed. The key to this process is removal of lead from the CRT funnel glass by a high-temperature melting process. Sodium carbonate powder was used as a fusion agent, sodium sulfide served as a catalytic agent, and carbon powder acted as the reducing agent. Experimental results showed that the lead recovery rate initially increased with increases in the amounts of added sodium carbonate, sodium sulfide and carbon, and with temperature and holding time, and then reached a stable value. The maximum lead recovery rate was approximately 94%, obtained when the optimum amounts of sodium carbonate, sodium sulfide and carbon, the temperature, and the holding time were 25%, 8%, 3.6%, 1200°C and 120 min, respectively. In the high-temperature melting process, lead silicate in the funnel glass was first reduced and then removed. The glass slag can be made into sodium and potassium silicate by a hydrolysis process. This study proposed a practical and economical process for recovery of lead and utilization of the waste glass slag. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Interdisciplinary Coordination Reviews: A Process to Reduce Construction Costs.

    ERIC Educational Resources Information Center

    Fewell, Dennis A.

    1998-01-01

    Interdisciplinary Coordination design review is instrumental in detecting coordination errors and omissions in construction documents. Cleansing construction documents of interdisciplinary coordination errors reduces time extensions, the largest source of change orders, and limits exposure to liability claims. Improving the quality of design…

  19. Space system production cost benefits from contemporary philosophies in management and manufacturing

    NASA Technical Reports Server (NTRS)

    Rosmait, Russell L.

    1991-01-01

    The cost of manufacturing space system hardware has always been expensive. The Engineering Cost Group of the Program Planning office at Marshall is attempting to account for cost savings that result from new technologies in manufacturing and management. The objective is to identify and define contemporary philosophies in manufacturing and management. The seven broad categories that make up the areas where technological advances can assist in reducing space system costs are illustrated. Included within these broad categories is a list of the processes or techniques that specifically provide the cost savings within today's design, test, production and operations environments. The processes and techniques listed achieve savings in the following manner: increased productivity; reduced down time; reduced scrap; reduced rework; reduced man hours; and reduced material costs. In addition, it should be noted that cost savings from production and processing improvements affect 20 to 40 pct. of production costs, whereas savings from management improvements affect 60 to 80 pct. of production costs. This is important because most efforts at reducing costs are spent trying to reduce costs in production.

  20. The Use of Lean Six Sigma Methodology in Increasing Capacity of a Chemical Production Facility at DSM.

    PubMed

    Meeuwse, Marco

    2018-03-30

    Lean Six Sigma is an improvement method, combining Lean, which focuses on removing 'waste' from a process, with Six Sigma, which is a data-driven approach, making use of statistical tools. Traditionally it is used to improve the quality of products (reducing defects), or processes (reducing variability). However, it can also be used as a tool to increase the productivity or capacity of a production plant. The Lean Six Sigma methodology is therefore an important pillar of continuous improvement within DSM. In the example shown here a multistep batch process is improved, by analyzing the duration of the relevant process steps, and optimizing the procedures. Process steps were performed in parallel instead of sequential, and some steps were made shorter. The variability was reduced, giving the opportunity to make a tighter planning, and thereby reducing waiting times. Without any investment in new equipment or technical modifications, the productivity of the plant was improved by more than 20%; only by changing procedures and the programming of the process control system.

  1. Integrated Analysis Tools for Determination of Structural Integrity and Durability of High temperature Polymer Matrix Composites

    DTIC Science & Technology

    2008-08-18

    fidelity will be used to reduce the massive experimental testing and associated time required for qualification of new materials. Tools and... developing a model of the thermo-oxidative process for polymer systems that incorporates the effects of reaction rates, Fickian diffusion, time varying... degradation processes.

  2. Reduced-Order Kalman Filtering for Processing Relative Measurements

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    2008-01-01

    A study in Kalman-filter theory has led to a method of processing relative measurements to estimate the current state of a physical system, using less computation than has previously been thought necessary. As used here, relative measurements signifies measurements that yield information on the relationship between a later and an earlier state of the system. An important example of relative measurements arises in computer vision: Information on relative motion is extracted by comparing images taken at two different times. Relative measurements do not directly fit into standard Kalman filter theory, in which measurements are restricted to those indicative of only the current state of the system. One approach heretofore followed in utilizing relative measurements in Kalman filtering, denoted state augmentation, involves augmenting the state of the system at the earlier of two time instants and then propagating the state to the later time instant. While state augmentation is conceptually simple, it can also be computationally prohibitive because it doubles the number of states in the Kalman filter. When processing a relative measurement, if one were to follow the state-augmentation approach as practiced heretofore, one would find it necessary to propagate the full augmented state Kalman filter from the earlier time to the later time and then select out the reduced-order components. The main result of the study reported here is proof of a property called reduced-order equivalence (ROE). The main consequence of ROE is that it is not necessary to augment with the full state, but, rather, only the portion of the state that is explicitly used in the partial relative measurement. In other words, it suffices to select the reduced-order components first and then propagate the partial augmented state Kalman filter from the earlier time to the later time; the amount of computation needed to do this can be substantially less than that needed for propagating the full augmented Kalman state filter.
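
    For reference, the sketch below shows the conventional state-augmentation approach that the ROE result improves upon: the filter state is augmented with a frozen copy of the substate at the earlier epoch, and the relative measurement is modeled as the difference between the two epochs. Dimensions and matrices are illustrative assumptions, not the paper's formulation.

    ```python
    # Minimal sketch of the state-augmentation idea described above: augment the
    # state with a copy frozen at the earlier epoch, propagate to the later epoch,
    # and process a relative measurement as the difference between the two epochs.
    import numpy as np

    n = 2                                   # size of the substate used in the relative measurement
    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # assumed dynamics (position, velocity)
    Q = 1e-3 * np.eye(n)
    R = 1e-2 * np.eye(1)

    # Augmented state: current substate stacked with its value frozen at epoch t0.
    x_aug = np.zeros(2 * n)
    P_aug = np.block([[np.eye(n), np.eye(n)], [np.eye(n), np.eye(n)]])        # copies start fully correlated
    F_aug = np.block([[F, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])  # frozen copy does not evolve
    Q_aug = np.block([[Q, np.zeros((n, n))], [np.zeros((n, n)), np.zeros((n, n))]])

    # Relative measurement: change in position between the two epochs.
    H = np.array([[1.0, 0.0, -1.0, 0.0]])

    for _ in range(10):                      # propagate to the later epoch
        x_aug = F_aug @ x_aug
        P_aug = F_aug @ P_aug @ F_aug.T + Q_aug

    z = np.array([0.3])                      # e.g., displacement from comparing two images
    S = H @ P_aug @ H.T + R
    K = P_aug @ H.T @ np.linalg.inv(S)
    x_aug = x_aug + K @ (z - H @ x_aug)
    P_aug = (np.eye(2 * n) - K @ H) @ P_aug
    ```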

  3. Diazo techniques for remote sensor data analysis

    NASA Technical Reports Server (NTRS)

    Mount, S.; Whitebay, L. E.

    1979-01-01

    Cost and time to extract land use maps, natural-resource surveys, and other data from aerial and satellite photographs are reduced by diazo processing. Process can be controlled to enhance features such as vegetation, land boundaries, and bodies of water.

  4. A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times

    NASA Astrophysics Data System (ADS)

    Li, Xin; Fung, Richard Y. K.

    2018-02-01

    This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual times for processing wafers should be within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in series, based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.

  5. Systematic development of reduced reaction mechanisms for dynamic modeling

    NASA Technical Reports Server (NTRS)

    Frenklach, M.; Kailasanath, K.; Oran, E. S.

    1986-01-01

    A method for systematically developing a reduced chemical reaction mechanism for dynamic modeling of chemically reactive flows is presented. The method is based on the postulate that if a reduced reaction mechanism faithfully describes the time evolution of both thermal and chain reaction processes characteristic of a more complete mechanism, then the reduced mechanism will describe the chemical processes in a chemically reacting flow with approximately the same degree of accuracy. Here this postulate is tested by producing a series of mechanisms of reduced accuracy, which are derived from a full detailed mechanism for methane-oxygen combustion. These mechanisms were then tested in a series of reactive flow calculations in which a large-amplitude sinusoidal perturbation is applied to a system that is initially quiescent and whose temperature is high enough to start ignition processes. Comparison of the results for systems with and without convective flow show that this approach produces reduced mechanisms that are useful for calculations of explosions and detonations. Extensions and applicability to flames are discussed.

  6. Outpatient Waiting Time in Health Services and Teaching Hospitals: A Case Study in Iran

    PubMed Central

    Mohebbifar, Rafat; Hasanpoor, Edris; Mohseni, Mohammad; Sokhanvar, Mobin; Khosravizadeh, Omid; Isfahani, Haleh Mousavi

    2014-01-01

    Background: One of the most important indexes of health care quality is patient satisfaction, and it is achieved only when there is a process based on sound management. One of these processes in health care organizations is the appropriate management of the waiting time process. The aim of this study is the systematic analysis of outpatient waiting time. Methods: This descriptive cross-sectional study, conducted in 2011, was an applied study performed in the educational and health care hospitals of a medical university located in the northwest of Iran. Since the distributions of outpatients in all the months were equal, stage sampling was used. A total of 160 outpatients were studied, and the data were analyzed using SPSS software. Results: Results of the study showed that the ophthalmology clinic had the longest outpatient waiting time, with an average of 245 minutes per patient, the maximum among the clinics studied. The orthopedic clinic had the shortest waiting time, with an average of 77 minutes per patient. The total average waiting time for each patient in the educational hospitals under this study was about 161 minutes. Conclusion: By applying appropriate models, the waiting time can be reduced, especially the time and space spent before admission to the examination room. Models including pre-admission arrangements, electronic visit systems via the internet, a process model, the six sigma model, the queuing theory model and the FIFO model are the components of an intervention that reduces outpatient waiting time. PMID:24373277
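
    The queuing theory model mentioned among the interventions can be illustrated with the textbook M/M/1 result for mean time in queue, W_q = lambda / (mu (mu - lambda)). The figures in the sketch below are hypothetical and unrelated to the study's data.

    ```python
    # Minimal sketch of the queuing-theory component mentioned above: expected
    # waiting time in an M/M/1 clinic queue. Arrival and service rates are
    # illustrative assumptions, not figures from the study.
    def mm1_wait_minutes(arrivals_per_hour, served_per_hour):
        lam, mu = arrivals_per_hour, served_per_hour
        if lam >= mu:
            raise ValueError("queue is unstable: arrival rate must be below service rate")
        wq_hours = lam / (mu * (mu - lam))      # mean time in queue, W_q = lambda / (mu (mu - lambda))
        return wq_hours * 60

    # Example: 10 outpatients/hour arriving, 12 served/hour -> ~25 min average wait.
    print(round(mm1_wait_minutes(10, 12), 1))
    ```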

  7. Production Time Loss Reduction in Sauce Production Line by Lean Six Sigma Approach

    NASA Astrophysics Data System (ADS)

    Ritprasertsri, Thitima; Chutima, Parames

    2017-06-01

    Time losses incurred during processing are important in all industries, since they reduce productivity and increase cost. This research aimed to reduce the time lost in a sauce production line by using the Lean Six Sigma approach. The main objective was to reduce the sauce heating time, which caused most of the time lost in the production line and affected productivity. The methodology comprised the five-phase improvement model of Six Sigma: the define, measure, analyse, improve and control phases. A cause-and-effect matrix and failure mode and effect analysis (FMEA) were adopted to screen the factors affecting production time loss. The results showed that the time lost to sauce heating was reduced by 47.76%, increasing productivity enough to meet the plan.

  8. Elimination of water pathogens with solar radiation using an automated sequential batch CPC reactor.

    PubMed

    Polo-López, M I; Fernández-Ibáñez, P; Ubomba-Jaswa, E; Navntoft, C; García-Fernández, I; Dunlop, P S M; Schmid, M; Byrne, J A; McGuigan, K G

    2011-11-30

    Solar disinfection (SODIS) of water is a well-known, effective treatment process which is practiced at household level in many developing countries. However, this process is limited by the small volume treated and there is no indication of treatment efficacy for the user. Low cost glass tube reactors, together with compound parabolic collector (CPC) technology, have been shown to significantly increase the efficiency of solar disinfection. However, these reactors still require user input to control each batch SODIS process and there is no feedback that the process is complete. Automatic operation of the batch SODIS process, controlled by UVA-radiation sensors, can provide information on the status of the process, can ensure the required UVA dose to achieve complete disinfection is received and reduces user work-load through automatic sequential batch processing. In this work, an enhanced CPC photo-reactor with a concentration factor of 1.89 was developed. The apparatus was automated to achieve exposure to a pre-determined UVA dose. Treated water was automatically dispensed into a reservoir tank. The reactor was tested using Escherichia coli as a model pathogen in natural well water. A 6-log inactivation of E. coli was achieved following exposure to the minimum uninterrupted lethal UVA dose. The enhanced reactor decreased the exposure time required to achieve the lethal UVA dose, in comparison to a CPC system with a concentration factor of 1.0. Doubling the lethal UVA dose prevented the need for a period of post-exposure dark inactivation and reduced the overall treatment time. Using this reactor, SODIS can be automatically carried out at an affordable cost, with reduced exposure time and minimal user input. Copyright © 2011 Elsevier B.V. All rights reserved.
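
    As a hedged illustration of the dose-based automation described above, the sketch below integrates measured UVA irradiance over time and releases the batch once a preset dose is reached. The sensor read-out, valve function, sample period and threshold are assumptions, not the authors' hardware or firmware.

        import time

        SAMPLE_PERIOD_S = 60
        TARGET_DOSE_J_PER_M2 = 200_000.0     # assumed lethal-dose threshold, not the paper's value

        def read_uva_irradiance_w_per_m2():
            """Placeholder for the UVA-radiation sensor read-out (hypothetical)."""
            return 25.0

        def dispense_batch():
            """Placeholder for opening the valve to the treated-water tank (hypothetical)."""
            print("batch dispensed; next sequential batch can start")

        def run_batch():
            dose = 0.0
            while dose < TARGET_DOSE_J_PER_M2:
                dose += read_uva_irradiance_w_per_m2() * SAMPLE_PERIOD_S   # W/m2 x s = J/m2
                time.sleep(SAMPLE_PERIOD_S)
            dispense_batch()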

  9. Custom FPGA processing for real-time fetal ECG extraction and identification.

    PubMed

    Torti, E; Koliopoulos, D; Matraxia, M; Danese, G; Leporati, F

    2017-01-01

    Monitoring fetal cardiac activity during pregnancy is of crucial importance for evaluating fetal health. However, there is a lack of automatic and reliable methods for fetal ECG (FECG) monitoring that can perform this processing in real time. In this paper, we present a hardware architecture, implemented on the Altera Stratix V FPGA, capable of separating the FECG from the maternal ECG and of correctly identifying it. We evaluated our system using both synthetic and real recordings acquired from patients beyond the 20th week of pregnancy. This work is part of a project aimed at developing a portable system for continuous real-time FECG monitoring. Its reduced power consumption, real-time processing capability and small size make it suitable for embedding in the overall system, which, to the best of our knowledge, is the first proposed system exploiting Blind Source Separation with this technology. Copyright © 2016 Elsevier Ltd. All rights reserved.
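
    The separation itself relies on Blind Source Separation; as a software-level, hedged illustration only (scikit-learn's FastICA applied to two synthetic surrogate channels, not the paper's FPGA architecture), recovering two independent sources from two mixed leads might look like this:

        import numpy as np
        from sklearn.decomposition import FastICA

        fs = 500.0
        t = np.arange(0, 10, 1 / fs)                            # 10 s of synthetic data
        maternal = np.sin(2 * np.pi * 1.2 * t)                  # ~72 bpm surrogate rhythm
        fetal = 0.3 * np.sign(np.sin(2 * np.pi * 2.4 * t))      # ~144 bpm surrogate rhythm
        sources = np.c_[maternal, fetal]

        mixing = np.array([[1.0, 0.5],
                           [0.7, 0.9]])                          # assumed electrode mixing matrix
        observed = sources @ mixing.T + 0.02 * np.random.randn(len(t), 2)

        ica = FastICA(n_components=2, random_state=0)
        estimated = ica.fit_transform(observed)                  # recovered sources (up to scale and order)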

  10. Post-processing method to reduce noise while preserving high time resolution in aethalometer real-time black carbon data

    EPA Science Inventory

    Real-time aerosol black carbon (BC) data, presented at time resolutions on the order of seconds to minutes, is desirable in field and source characterization studies measuring rapidly varying concentrations of BC. The Optimized Noise-reduction Averaging (ONA) algorithm has been d...

  11. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine; Adhinarayanan, Vignesh; Turton, Terece

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results; furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we did, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.

  12. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16-channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA including PowerPC440 processors we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
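
    As a hedged illustration of extracting polarimetric parameters with a least-squares fit (a generic Stokes-parameter model solved per pixel, not JPL's actual MSPI signal model or its FPGA implementation):

        import numpy as np

        # Assume each pixel yields samples s_k = I + Q*cos(2*theta_k) + U*sin(2*theta_k)
        # at known analyzer angles theta_k; (I, Q, U) then follow from linear least squares.
        theta = np.linspace(0, np.pi, 16, endpoint=False)        # 16 samples per pixel (assumed)
        A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])

        true_params = np.array([1.0, 0.12, -0.05])
        samples = A @ true_params + 0.01 * np.random.randn(theta.size)

        (I, Q, U), *_ = np.linalg.lstsq(A, samples, rcond=None)
        dolp = np.hypot(Q, U) / I                                # degree of linear polarization
        print(I, Q, U, dolp)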

  13. Reduced-Density-Matrix Description of Decoherence and Relaxation Processes for Electron-Spin Systems

    NASA Astrophysics Data System (ADS)

    Jacobs, Verne

    2017-04-01

    Electron-spin systems are investigated using a reduced-density-matrix description. Applications of interest include trapped atomic systems in optical lattices, semiconductor quantum dots, and vacancy defect centers in solids. Complementary time-domain (equation-of-motion) and frequency-domain (resolvent-operator) formulations are self-consistently developed. The general non-perturbative and non-Markovian formulations provide a fundamental framework for systematic evaluations of corrections to the standard Born (lowest-order-perturbation) and Markov (short-memory-time) approximations. Particular attention is given to decoherence and relaxation processes, as well as spectral-line broadening phenomena, that are induced by interactions with photons, phonons, nuclear spins, and external electric and magnetic fields. These processes are treated either as coherent interactions or as environmental interactions. The environmental interactions are incorporated by means of the general expressions derived for the time-domain and frequency-domain Liouville-space self-energy operators, for which the tetradic-matrix elements are explicitly evaluated in the diagonal-resolvent, lowest-order, and Markov (short-memory time) approximations. Work supported by the Office of Naval Research through the Basic Research Program at The Naval Research Laboratory.

  14. A Coordinated Patient Transport System for ICU Patients Requiring Surgery: Impact on Operating Room Efficiency and ICU Workflow.

    PubMed

    Brown, Michael J; Kor, Daryl J; Curry, Timothy B; Marmor, Yariv; Rohleder, Thomas R

    2015-01-01

    Transfer of intensive care unit (ICU) patients to the operating room (OR) is a resource-intensive, time-consuming process that often results in patient throughput inefficiencies, deficiencies in information transfer, and suboptimal nurse to patient ratios. This study evaluates the implementation of a coordinated patient transport system (CPTS) designed to address these issues. Using data from 1,557 patient transfers covering the 2006-2010 period, interrupted time series and before and after designs were used to analyze the effect of implementing a CPTS at Mayo Clinic, Rochester. Using a segmented regression for the interrupted time series, on-time OR start time deviations were found to be significantly lower after the implementation of CPTS (p < .0001). The implementation resulted in a fourfold improvement in on-time OR starts (p < .01) while significantly reducing idle OR time (p < .01). A coordinated patient transfer process for moving patient from ICUs to ORs can significantly improve OR efficiency, reduce nonvalue added time, and ensure quality of care by preserving appropriate care provider to patient ratios.

  15. Large-scale seismic waveform quality metric calculation using Hadoop

    NASA Astrophysics Data System (ADS)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.
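
    As a hedged sketch of how one simple per-waveform metric could be computed in parallel with Spark (the metric, file locations and loader below are assumptions; the study's actual metrics and data formats are not reproduced here):

        import numpy as np
        from pyspark import SparkContext

        def rms_amplitude(path):
            """Toy quality metric: RMS amplitude of a waveform stored as one sample per line."""
            samples = np.loadtxt(path)
            return path, float(np.sqrt(np.mean(samples ** 2)))

        sc = SparkContext(appName="waveform-qc")
        paths = sc.parallelize(["/shared/waveforms/sta1.txt",    # assumed shared-filesystem paths
                                "/shared/waveforms/sta2.txt"])
        metrics = paths.map(rms_amplitude).collect()
        sc.stop()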

  16. Considering Time in Orthophotography Production: from a General Workflow to a Shortened Workflow for a Faster Disaster Response

    NASA Astrophysics Data System (ADS)

    Lucas, G.

    2015-08-01

    This article deals with the production time of orthophoto imagery acquired with a medium-size digital frame camera. The workflow examination covers two main parts: data acquisition and post-processing. The objectives of the research are fourfold: 1/ gathering time references for the most important steps of orthophoto production (it turned out that literature is missing on this topic); these figures are later used for total production time estimation; 2/ identifying levers for reducing orthophoto production time; 3/ building a simplified production workflow for emergency response, less exacting with accuracy and faster, and comparing it to a classical workflow; 4/ providing methodical elements for the estimation of production time for a custom project. In the data acquisition part a comprehensive review lists and describes all the factors that may affect acquisition efficiency. Using a simulation with different variables (average line length, time of the turns, flight speed), their effect on acquisition efficiency is quantitatively examined. Regarding post-processing, the time reference figures were collected from the processing of a 1000-frame case study with 15 cm GSD covering a rectangular area of 447 km2; the time required to achieve each step of the production is written down. When several technical options are possible, each one is tested and its time documented so that all alternatives are available. Based on a technical choice for the workflow and using the compiled time references of the elementary steps, a total time is calculated for the post-processing of the 1000 frames. Two scenarios are compared with regard to time and accuracy. The first follows the "normal" practices, comprising triangulation, orthorectification and advanced mosaicking methods (feature detection, seam line editing and seam applicator); the second is simplified and makes compromises on positional accuracy (using direct geo-referencing) and seamline preparation in order to produce the orthophoto faster. The shortened workflow reduces the production time by a factor of more than three, whereas the positional error increases from 1 GSD to 1.5 GSD. The examination of time allocation through the production process shows that it is worth saving time in the post-processing phase.

  17. AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images

    USGS Publications Warehouse

    Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.

    2017-01-01

    Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review with white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time, but even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Estimates of white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase efficiency of camera trapping surveys.
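
    A hedged Python analogue of the thresholding idea (flagging frames that differ from a background built from frequently occurring pixel values in the image subset; the background model, difference measure and thresholds are assumptions, not the MATLAB program's algorithm):

        import numpy as np

        def flag_frames(stack, pixel_step=25, threshold=0.02):
            """stack: array (n_frames, height, width) of grayscale time-lapse images from one site.
            Returns indices of frames whose fraction of changed pixels exceeds the threshold."""
            background = np.median(stack, axis=0)                     # frequently occurring pixel values
            changed = np.abs(stack.astype(float) - background) > pixel_step
            changed_fraction = changed.mean(axis=(1, 2))
            return np.flatnonzero(changed_fraction > threshold)       # candidates for manual review

        # Lower thresholds flag more frames (fewer missed animals but more manual review),
        # mirroring the sensitivity/processing-time trade-off reported above.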

  18. "I'll stop procrastinating now!" Fostering specific processes of self-regulated learning to reduce academic procrastination.

    PubMed

    Grunschel, Carola; Patrzek, Justine; Klingsieck, Katrin B; Fries, Stefan

    2018-01-01

    Academic procrastination is considered to be a result of self-regulation failure having detrimental effects on students' well-being and academic performance. In the present study, we developed and evaluated a group training that aimed to reduce academic procrastination. We based the training on a cyclical process model of self-regulated learning, thus, focusing on improving deficient processes of self-regulated learning among academic procrastinators (e.g., time management, dealing with distractions). The training comprised five sessions and took place once a week for 90 min in groups of no more than 10 students. Overall, 106 students completed the training. We evaluated the training using a comprehensive control group design with repeated measures (three points of measurement); the control group was trained after the intervention group's training. The results showed that our training was successful. The trained intervention group significantly reduced academic procrastination and improved specific processes of self-regulated learning (e.g., time management, concentration), whereas the untrained control group showed no change regarding these variables. After the control group had also been trained, the control group also showed the expected favorable changes. The students rated the training overall as good and found it recommendable for procrastinating friends. Hence, fostering self-regulatory processes in our intervention was a successful attempt to support students in reducing academic procrastination. The evaluation of the training encourages us to adapt the training for different groups of procrastinators.

  19. Reducing Patient Waiting Times for Radiation Therapy and Improving the Treatment Planning Process: a Discrete-event Simulation Model (Radiation Treatment Planning).

    PubMed

    Babashov, V; Aivas, I; Begen, M A; Cao, J Q; Rodrigues, G; D'Souza, D; Lock, M; Zaric, G S

    2017-06-01

    We analysed the radiotherapy planning process at the London Regional Cancer Program to determine the bottlenecks and to quantify the effect of specific resource levels with the goal of reducing waiting times. We developed a discrete-event simulation model of a patient's journey from the point of referral to a radiation oncologist to the start of radiotherapy, considering the sequential steps and resources of the treatment planning process. We measured the effect of several resource changes on the ready-to-treat to treatment (RTTT) waiting time and on the percentage treated within a 14 calendar day target. Increasing the number of dosimetrists by one reduced the mean RTTT by 6.55%, leading to 84.92% of patients being treated within the 14 calendar day target. Adding one more oncologist decreased the mean RTTT from 10.83 to 10.55 days, whereas a 15% increase in arriving patients increased the waiting time by 22.53%. The model was relatively robust to the changes in quantity of other resources. Our model identified sensitive and non-sensitive system parameters. A similar approach could be applied by other cancer programmes, using their respective data and individualised adjustments, which may be beneficial in making the most effective use of limited resources. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
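
    A minimal SimPy sketch of this kind of discrete-event model, measuring the waiting time through a two-stage planning queue; the stages, staffing levels, arrival rate and service times are placeholders rather than the London Regional Cancer Program data:

        import random
        import simpy

        def patient(env, oncologists, dosimetrists, waits):
            arrival = env.now
            with oncologists.request() as req:                       # plan approval by an oncologist
                yield req
                yield env.timeout(random.expovariate(1 / 0.5))       # days (assumed)
            with dosimetrists.request() as req:                      # plan built by a dosimetrist
                yield req
                yield env.timeout(random.expovariate(1 / 2.0))       # days (assumed)
            waits.append(env.now - arrival)

        def referrals(env, oncologists, dosimetrists, waits):
            while True:
                yield env.timeout(random.expovariate(2.0))           # ~2 referrals per day (assumed)
                env.process(patient(env, oncologists, dosimetrists, waits))

        random.seed(0)
        waits = []
        env = simpy.Environment()
        oncologists = simpy.Resource(env, capacity=4)
        dosimetrists = simpy.Resource(env, capacity=6)               # raise capacity to test "+1 dosimetrist"
        env.process(referrals(env, oncologists, dosimetrists, waits))
        env.run(until=365)
        print(sum(waits) / len(waits), "mean planning wait (days)")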

  20. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time three-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range. The proposed algorithm can also increase the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair using the proposed algorithm, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can also be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesis time of a reconstructed image to about 7.02 s compared with conventional algorithms.

  1. A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.

    PubMed

    O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A

    2015-02-01

    Research in understanding biofilm formation depends on accurate and representative measurements of the steric forces related to the polymer brush on bacterial surfaces. A MATLAB program to analyze AFM force curves efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the time required to process 100 force curves from several days to less than 2 min. The use of this program to crop and fit force curves to the AdG model will allow researchers to ensure proper processing of large amounts of experimental data and reduce the time required for analysis and comparison of data, thereby enabling higher quality results in a shorter period of time. Copyright © 2014 Elsevier B.V. All rights reserved.
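
    As a hedged sketch of fitting a force curve to a steric brush model with SciPy (a common single-exponential approximation of the AdG form, not the paper's modified model or its MATLAB implementation):

        import numpy as np
        from scipy.optimize import curve_fit

        kT = 4.11e-21          # J, thermal energy near room temperature
        R = 2.0e-6             # probe radius in metres (assumed)

        def steric_force(D, L0, A):
            """Exponential approximation of an AdG-type steric force, F(D) = A*exp(-2*pi*D/L0)."""
            return A * np.exp(-2 * np.pi * D / L0)

        # synthetic "measured" approach curve: separation D (m) vs force (N)
        D = np.linspace(5e-9, 120e-9, 200)
        F = steric_force(D, 60e-9, 2e-9) * (1 + 0.05 * np.random.randn(D.size))

        (L0, A), _ = curve_fit(steric_force, D, F, p0=(50e-9, 1e-9))
        # in this approximation A ~ 50*kT*R*L0*Gamma**1.5, so a grafting-density estimate follows
        Gamma = (A / (50 * kT * R * L0)) ** (2 / 3)
        print(L0, Gamma)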

  2. The effect of the natural bentonite to reduce COD in palm oil mill effluent by using a hybrid adsorption-flotation method

    NASA Astrophysics Data System (ADS)

    Dewi, Ratni; Sari, Ratna; Syafruddin

    2017-06-01

    Palm oil mill effluent is a waste produced by palm oil processing activities. This waste comes from condensate water, process water and hydrocyclone water. The high levels of contaminants in palm oil mill effluent make the waste unsuitable for discharge to a water body without treatment; among the major contaminants in the wastewater are fats, oils and COD. This study investigated the effectiveness of chemically activated bentonite as an alternative for reducing the COD in adsorption- and flotation-based treatment of palm oil mill effluent. Natural bentonite was activated using nitric acid and benzene to improve the COD-reduction capability of the adsorption material, whereas the flotation method was used to further remove the residual contaminants remaining after the adsorption process. An adsorption column operated in batch mode was used in the present study. By varying the circulation time and adsorbent treatment (activated and non-activated), it was shown that the COD reduction reached 75% at a circulation time of 180 minutes for the non-activated adsorbent. On the other hand, the COD reduction in the adsorption and flotation process using activated bentonite reached as high as 88% and 93% at a circulation time of 180 minutes.

  3. Study of residue type defect formation mechanism and the effect of advanced defect reduction (ADR) rinse process

    NASA Astrophysics Data System (ADS)

    Arima, Hiroshi; Yoshida, Yuichi; Yoshihara, Kosuke; Shibata, Tsuyoshi; Kushida, Yuki; Nakagawa, Hiroki; Nishimura, Yukio; Yamaguchi, Yoshikazu

    2009-03-01

    Residue-type defects are one of the yield detractors in the lithography process. It is known that the occurrence of residue-type defects depends on the resist development process and that the defects are reduced by optimized rinsing conditions. However, defect formation is also affected by the resist materials and substrate conditions. Therefore, it is necessary to optimize the development process conditions for each mask level, and those optimization steps require a large amount of time and effort. The formation mechanism is investigated from the viewpoint of both material and process. Defect formation is affected by the resist material type, the substrate condition and the development process conditions (D.I.W. rinse step). An optimized resist formulation and a new rinse technology significantly reduce residue-type defects.

  4. Policy Process Editor for P3BM Software

    NASA Technical Reports Server (NTRS)

    James, Mark; Chang, Hsin-Ping; Chow, Edward T.; Crichton, Gerald A.

    2010-01-01

    A computer program enables generation, in the form of graphical representations of process flows with embedded natural-language policy statements, input to a suite of policy-, process-, and performance-based management (P3BM) software. This program (1) serves as an interface between users and the Hunter software, which translates the input into machine-readable form; and (2) enables users to initialize and monitor the policy-implementation process. This program provides an intuitive graphical interface for incorporating natural-language policy statements into business-process flow diagrams. Thus, the program enables users who dictate policies to intuitively embed their intended process flows as they state the policies, reducing the likelihood of errors and reducing the time between declaration and execution of policy.

  5. Applied digital signal processing systems for vortex flowmeter with digital signal processing.

    PubMed

    Xu, Ke-Jun; Zhu, Zhi-Hai; Zhou, Yang; Wang, Xiao-Fen; Liu, San-Shan; Huang, Yun-Zhi; Chen, Zhi-Yuan

    2009-02-01

    Spectral analysis is combined with digital filtering to process the vortex sensor signal, in order to reduce the effect of low-frequency disturbances from pipe vibrations and to increase the turndown ratio. Using a digital signal processing chip, two kinds of digital signal processing systems were developed to implement these algorithms: an integrated system and a separated system. A limiting amplifier is designed into the input analog conditioning circuit to accommodate large variations in sensor signal amplitude. Several technical measures are taken to improve the accuracy of the output pulse, speed up the response time of the meter, and reduce the fluctuation of the output signal. The experimental results demonstrate the validity of the digital signal processing systems.
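
    A hedged software sketch of the same idea (high-pass filtering to suppress low-frequency pipe-vibration disturbance, then spectral estimation of the vortex shedding frequency; the sample rate, filter order and cutoff are assumptions rather than the DSP chip implementation):

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 2000.0                                            # sample rate, Hz (assumed)
        t = np.arange(0, 2, 1 / fs)
        raw = (np.sin(2 * np.pi * 180 * t)                     # vortex shedding component
               + 2.0 * np.sin(2 * np.pi * 8 * t)               # low-frequency pipe vibration
               + 0.2 * np.random.randn(t.size))

        b, a = butter(4, 30 / (fs / 2), btype="highpass")      # 4th-order high-pass at 30 Hz (assumed)
        filtered = filtfilt(b, a, raw)

        spectrum = np.abs(np.fft.rfft(filtered))
        freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
        vortex_freq = freqs[np.argmax(spectrum[1:]) + 1]       # dominant frequency, skipping DC
        print(vortex_freq)                                     # ~180 Hz; flow rate follows from the meter factor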

  6. Analysis of production flow process with lean manufacturing approach

    NASA Astrophysics Data System (ADS)

    Siregar, Ikhsan; Arif Nasution, Abdillah; Prasetio, Aji; Fadillah, Kharis

    2017-09-01

    This research was conducted in a company engaged in the production of Fast Moving Consumer Goods (FMCG). The production process in the company still contains several activities that cause waste. Non-value-added (NVA) activities are still widely found in the implementation, so the cycle time required to make the product becomes longer. One form of improvement on the production line is to apply the lean manufacturing method to identify waste along the value stream and find non-value-added activities. Non-value-added activities can be eliminated and reduced by utilizing value stream mapping and identifying them with process activity mapping. The results show that 26% of activities are value-added and 74% are non-value-added. The current state map of the production process gives a process lead time of 678.11 minutes and a processing time of 173.94 minutes. The research proposal yields a value-added time share of 41% of production process activities and a non-value-added time share of 59%. The future state map of the production process gives a process lead time of 426.69 minutes and a processing time of 173.89 minutes.

  7. Parallel hyperspectral image reconstruction using random projections

    NASA Astrophysics Data System (ADS)

    Sevilla, Jorge; Martín, Gabriel; Nascimento, José M. P.

    2016-10-01

    Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated as an effective and very lightweight way to reduce the number of measurements in hyperspectral data, thereby reducing the data to be transmitted to the Earth station. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPUs) using the compute unified device architecture (CUDA). Experimental results obtained using synthetic and real hyperspectral datasets on the NVIDIA GeForce GTX 980 GPU reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times when compared with the processing time of SpeCA running on one core of the Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
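
    A toy NumPy illustration of the underlying principle (hyperspectral vectors lying in a low-dimensional subspace can be recovered from a reduced number of random measurements); this is a hedged sketch with a known basis, not the SpeCA algorithm or its CUDA implementation:

        import numpy as np

        rng = np.random.default_rng(0)
        bands, rank, n_pixels, m = 200, 8, 1000, 30              # m << bands measurements per pixel

        E = rng.standard_normal((bands, rank))                   # assumed-known spectral subspace basis
        X = E @ rng.standard_normal((rank, n_pixels))            # pixels living in that subspace

        Phi = rng.standard_normal((m, bands)) / np.sqrt(m)       # random projection matrix
        Y = Phi @ X                                              # compressed measurements to downlink

        coeffs, *_ = np.linalg.lstsq(Phi @ E, Y, rcond=None)     # small m x rank least-squares problem
        X_rec = E @ coeffs
        print(np.linalg.norm(X - X_rec) / np.linalg.norm(X))     # ~0 when m >= rank and the basis is exact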

  8. Lossless data compression for improving the performance of a GPU-based beamformer.

    PubMed

    Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi

    2015-04-01

    The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. Data compression methods (e.g. Joint Photographic Experts Group (JPEG)) are available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm, which enables compression and decompression of data in parallel. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU-to-GPU data transfers are reduced without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires few hardware resources and reasonable latency in a field programmable gate array. In addition, the transmission time for transferring data from CPU to GPU with the parallel decoding process improved threefold, compared with transferring the original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement for transferring data from the hardware front end to the software system but also reduces the transmission time for CPU-to-GPU data transfer. © The Author(s) 2014.

  9. Lean six sigma methodologies improve clinical laboratory efficiency and reduce turnaround times.

    PubMed

    Inal, Tamer C; Goruroglu Ozturk, Ozlem; Kibar, Filiz; Cetiner, Salih; Matyar, Selcuk; Daglioglu, Gulcin; Yaman, Akgun

    2018-01-01

    Organizing work flow is a major task of laboratory management. Recently, clinical laboratories have started to adopt methodologies such as Lean Six Sigma and some successful implementations have been reported. This study used Lean Six Sigma to simplify the laboratory work process and decrease the turnaround time by eliminating non-value-adding steps. The five-stage Six Sigma system known as define, measure, analyze, improve, and control (DMAIC) is used to identify and solve problems. The laboratory turnaround time for individual tests, total delay time in the sample reception area, and percentage of steps involving risks of medical errors and biological hazards in the overall process are measured. The pre-analytical process in the reception area was improved by eliminating 3 h and 22.5 min of non-value-adding work. Turnaround time also improved for stat samples from 68 to 59 min after applying Lean. Steps prone to medical errors and posing potential biological hazards to receptionists were reduced from 30% to 3%. Successful implementation of Lean Six Sigma significantly improved all of the selected performance metrics. This quality-improvement methodology has the potential to significantly improve clinical laboratories. © 2017 Wiley Periodicals, Inc.

  10. Bipolar plates for PEM fuel cells

    NASA Astrophysics Data System (ADS)

    Middelman, E.; Kout, W.; Vogelaar, B.; Lenssen, J.; de Waal, E.

    The bipolar plates account for most of the weight and volume of the PEM fuel cell stack, and are also a significant contributor to the stack costs. The bipolar plate is therefore a key component if power density has to increase and costs must come down. Three cell plate technologies are expected to reach targeted cost price levels, all having specific advantages and drawbacks. NedStack has developed a conductive composite material and a production process for fuel cell plates (bipolar and mono-polar). The material has a high electric and thermal conductivity, and can be processed into bipolar plates by a proprietary molding process. The process cycle time has been reduced to less than 10 s, making the material and process suitable for economical mass production. Other development work to increase material efficiency resulted in thin bipolar plates with integrated cooling channels and integrated seals, and in two-component bipolar plates. The total thickness of the bipolar plates is now less than 3 mm, and will be reduced to 2 mm in the near future. With these thin integrated plates it is possible to increase power density up to 2 kW/l and 2 kW/kg, while at the same time reducing cost through the integration of other functions and lower material use.

  11. Energy efficiency of batch and semi-batch (CCRO) reverse osmosis desalination.

    PubMed

    Warsinger, David M; Tow, Emily W; Nayar, Kishor G; Maswadeh, Laith A; Lienhard V, John H

    2016-12-01

    As reverse osmosis (RO) desalination capacity increases worldwide, the need to reduce its specific energy consumption becomes more urgent. In addition to the incremental changes attainable with improved components such as membranes and pumps, more significant reduction of energy consumption can be achieved through time-varying RO processes including semi-batch processes such as closed-circuit reverse osmosis (CCRO) and fully-batch processes that have not yet been commercialized or modelled in detail. In this study, numerical models of the energy consumption of batch RO (BRO), CCRO, and the standard continuous RO process are detailed. Two new energy-efficient configurations of batch RO are analyzed. Batch systems use significantly less energy than continuous RO over a wide range of recovery ratios and source water salinities. Relative to continuous RO, models predict that CCRO and batch RO demonstrate up to 37% and 64% energy savings, respectively, for brackish water desalination at high water recovery. For batch RO and CCRO, the primary reductions in energy use stem from atmospheric pressure brine discharge and reduced streamwise variation in driving pressure. Fully-batch systems further reduce energy consumption by not mixing streams of different concentrations, which CCRO does. These results demonstrate that time-varying processes can significantly raise RO energy efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  13. Improving preanalytic processes using the principles of lean production (Toyota Production System).

    PubMed

    Persoon, Thomas J; Zaleski, Sue; Frerichs, Janice

    2006-01-01

    The basic technologies used in preanalytic processes for chemistry tests have been mature for a long time, and improvements in preanalytic processes have lagged behind improvements in analytic and postanalytic processes. We describe our successful efforts to improve chemistry test turnaround time from a central laboratory by improving preanalytic processes, using existing resources and the principles of lean production. Our goal is to report 80% of chemistry tests in less than 1 hour and to no longer recognize a distinction between expedited and routine testing. We used principles of lean production (the Toyota Production System) to redesign preanalytic processes. The redesigned preanalytic process has fewer steps and uses 1-piece flow to move blood samples through the accessioning, centrifugation, and aliquoting processes. Median preanalytic processing time was reduced from 29 to 19 minutes, and the laboratory met the goal of reporting 80% of chemistry results in less than 1 hour for 11 consecutive months.

  14. Advanced Spectroscopic and Thermal Imaging Instrumentation for Shock Tube and Ballistic Range Facilities

    DTIC Science & Technology

    2010-04-01

    the development process, increase its quality and reduce development time through automation of synthesis, analysis or verification. For this purpose...made of time-non-deterministic systems, improving efficiency and reducing complexity of formal analysis . We also show how our theory relates to, and...of the most recent investigations for Earth and Mars atmospheres will be discussed in the following sections. 2.4.1 Earth: lunar return NASA’s

  15. Soviet Civil Defense Agricultural Preparedness.

    DTIC Science & Technology

    1985-06-01

    medication reduces cicatrization time 40% in adults. "When we have proven the indisputable benefits of a new medication, we do not have to run smack into...at the capital’s William Soler children’s hospital. "EGF is a substance with magnificent qualities that stimulate the cicatrization process, and its...principal action is on the skin," the doctor explained. "Among its advantages, our group found that it greatly reduces the cicatrization time for

  16. Spacelab Mission Implementation Cost Assessment (SMICA)

    NASA Technical Reports Server (NTRS)

    Guynes, B. V.

    1984-01-01

    A total savings of approximately 20 percent is attainable if: (1) mission management and ground processing schedules are compressed; (2) the equipping, staffing, and operating of the Payload Operations Control Center is revised, and (3) methods of working with experiment developers are changed. The development of a new mission implementation technique, which includes mission definition, experiment development, and mission integration/operations, is examined. The Payload Operations Control Center is to relocate and utilize new computer equipment to produce cost savings. Methods of reducing costs by minimizing the Spacelab and payload processing time during pre- and post-mission operation at KSC are analyzed. The changes required to reduce costs in the analytical integration process are studied. The influence of time, requirements accountability, and risk on costs is discussed. Recommendation for cost reductions developed by the Spacelab Mission Implementation Cost Assessment study are listed.

  17. Simulation and Validation of Injection-Compression Filling Stage of Liquid Moulding with Fast Curing Resins

    NASA Astrophysics Data System (ADS)

    Martin, Ffion A.; Warrior, Nicholas A.; Simacek, Pavel; Advani, Suresh; Hughes, Adrian; Darlington, Roger; Senan, Eissa

    2018-03-01

    Very short manufacturing cycle times are required if continuous carbon fibre and epoxy composite components are to be economically viable solutions for high-volume composite production in the automotive industry. Here, a manufacturing process variant of resin transfer moulding (RTM) targets a reduction of in-mould manufacture time by reducing the time to inject and cure components. The process involves two stages: resin injection followed by compression. A flow simulation methodology using an RTM solver for the process has been developed. This paper compares the simulation predictions to experiments performed using industrial equipment. The issues encountered during manufacturing are included in the simulation and their sensitivity to the process is explored.

  18. DGIC Interconnection Insights | Distributed Generation Interconnection

    Science.gov Websites

    time and resources from utilities, customers, and local permitting authorities. Past research by the interconnection processes can benefit all parties by reducing the financial and time commitments involved. In this susceptible to time-consuming setbacks-for example, if an application is submitted with incomplete information

  19. RFI and SCRIMP Model Development and Verification

    NASA Technical Reports Server (NTRS)

    Loos, Alfred C.; Sayre, Jay

    2000-01-01

    Vacuum-Assisted Resin Transfer Molding (VARTM) processes are becoming promising technologies in the manufacturing of primary composite structures in the aircraft industry as well as in infrastructure. A great deal of work still needs to be done on efforts to reduce the costly trial-and-error methods of VARTM processing that are currently in practice today. A computer simulation model of the VARTM process would provide a cost-effective tool in the manufacturing of composites utilizing this technique. Therefore, the objective of this research was to modify an existing three-dimensional, Resin Film Infusion (RFI)/Resin Transfer Molding (RTM) model to include VARTM simulation capabilities and to verify this model with the fabrication of aircraft structural composites. An additional objective was to use the VARTM model as a process analysis tool, where this tool would enable the user to configure the best process for manufacturing quality composites. Experimental verification of the model was performed by processing several flat composite panels. The parameters verified included flow front patterns and infiltration times. The flow front patterns were determined to be qualitatively accurate, while the simulated infiltration times overpredicted experimental times by 8 to 10%. Capillary and gravitational forces were incorporated into the existing RFI/RTM model in order to simulate VARTM processing physics more accurately. The theoretical capillary pressure showed the capability to reduce the simulated infiltration times by as much as 6%. Gravity, on the other hand, was found to be negligible for all cases. Finally, the VARTM model was used as a process analysis tool. This enabled the user to determine such important process constraints as the location and type of injection ports and the permeability and location of the high-permeable media. A process for a three-stiffener composite panel was proposed. This configuration evolved from the variation of the process constraints in the modeling of several different composite panels. The configuration was proposed by considering such factors as infiltration time, the number of vacuum ports, and possible areas of void entrapment.

  20. Accelerated numerical processing of electronically recorded holograms with reduced speckle noise.

    PubMed

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2013-09-01

    The numerical reconstruction of digitally recorded holograms suffers from speckle noise. An accelerated method that uses general-purpose computing in graphics processing units to reduce that noise is shown. The proposed methodology utilizes parallelized algorithms to record, reconstruct, and superimpose multiple uncorrelated holograms of a static scene. For the best tradeoff between reduction of the speckle noise and processing time, the method records, reconstructs, and superimposes six holograms of 1024 × 1024 pixels in 68 ms; for this case, the methodology reduces the speckle noise by 58% compared with that exhibited by a single hologram. The fully parallelized method running on a commodity graphics processing unit is one order of magnitude faster than the same technique implemented on a regular CPU using its multithreading capabilities. Experimental results are shown to validate the proposal.
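
    A small NumPy sketch of why superimposing uncorrelated reconstructions reduces speckle: averaging N independent fully developed speckle patterns lowers the speckle contrast roughly as 1/sqrt(N). The speckle model below is a generic simulation, not the paper's GPU reconstruction code:

        import numpy as np

        rng = np.random.default_rng(1)

        def speckle_intensity(shape):
            """Fully developed speckle: intensity of a circular complex Gaussian field."""
            field = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
            return np.abs(field) ** 2

        def contrast(img):
            return img.std() / img.mean()

        single = speckle_intensity((512, 512))
        averaged = np.mean([speckle_intensity((512, 512)) for _ in range(6)], axis=0)

        print(contrast(single))     # ~1.0 for a single pattern
        print(contrast(averaged))   # ~1/sqrt(6) ~ 0.41, consistent with the ~58% reduction reported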

  1. Temporal Decompostion of a Distribution System Quasi-Static Time-Series Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Hunsberger, Randolph J

    This paper documents the first phase of an investigation into reducing the runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time-series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems, representing a mock complex PV impact study. We are able to reduce the induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
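
    A hedged sketch of temporal decomposition with Python's multiprocessing (a generic stand-in for the simulator; the window count, the overlap used for controls initialization and the run_opendss call are assumptions, not the study's code):

        from multiprocessing import Pool

        HOURS = 8760          # one simulated year at 1-hour resolution (assumed)
        N_WINDOWS = 12
        OVERLAP = 24          # hours re-simulated before each boundary to initialize controls (assumed)

        def simulate_window(bounds):
            """Placeholder for running one quasi-static time-series chunk of the model."""
            start, end = bounds
            warm_start = max(0, start - OVERLAP)    # warm-up period reduces boundary errors
            # run_opendss(model, warm_start, end)    # hypothetical call into the actual simulator
            return warm_start, start, end

        def windows(hours, n):
            step = hours // n
            return [(i * step, min((i + 1) * step, hours)) for i in range(n)]

        if __name__ == "__main__":
            with Pool(processes=N_WINDOWS) as pool:
                results = pool.map(simulate_window, windows(HOURS, N_WINDOWS))
            # runtime drops roughly in proportion to the number of parallel windows, at the cost
            # of small control/voltage errors at the window boundaries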

  2. [Application of enzymes in pulp and paper industry].

    PubMed

    Lin, Ying

    2014-01-01

    The application of enzymes has high potential in the pulp and paper industry to improve the economics of the paper production process and, at the same time, to reduce the environmental burden. Specific enzymes contribute to reducing the amount of chemicals, water and energy used in various processes. This review presents the latest progress in applying enzymes in bio-pulping, bio-bleaching, bio-deinking, enzymatic pitch control and enzymatic modification of fibers.

  3. The fastest spreader in SIS epidemics on networks

    NASA Astrophysics Data System (ADS)

    He, Zhidong; Van Mieghem, Piet

    2018-05-01

    Identifying the fastest spreaders in epidemics on a network helps to ensure an efficient spreading. By ranking the average spreading time for different spreaders, we show that the fastest spreader may change with the effective infection rate of a SIS epidemic process, which means that the time-dependent influence of a node is usually strongly coupled to the dynamic process and the underlying network. With increasing effective infection rate, we illustrate that the fastest spreader changes from the node with the largest degree to the node with the shortest flooding time. (The flooding time is the minimum time needed to reach all other nodes if the process is reduced to a flooding process.) Furthermore, by taking the local topology around the spreader and the average flooding time into account, we propose the spreading efficiency as a metric to quantify the efficiency of a spreader and identify the fastest spreader, which is adaptive to different infection rates in general networks.

  4. Using Healthcare Failure Mode and Effect Analysis to reduce medication errors in the process of drug prescription, validation and dispensing in hospitalised patients.

    PubMed

    Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa

    2013-01-01

    To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.

  5. Lead-time reduction utilizing lean tools applied to healthcare: the inpatient pharmacy at a local hospital.

    PubMed

    Al-Araidah, Omar; Momani, Amer; Khasawneh, Mohammad; Momani, Mohammed

    2010-01-01

    The healthcare arena, much like the manufacturing industry, benefits from many aspects of the Toyota lean principles. Lean thinking contributes to reducing or eliminating nonvalue-added time, money, and energy in healthcare. In this paper, we apply selected principles of lean management aiming at reducing the wasted time associated with drug dispensing at an inpatient pharmacy at a local hospital. Thorough investigation of the drug dispensing process revealed unnecessary complexities that contribute to delays in delivering medications to patients. We utilize DMAIC (Define, Measure, Analyze, Improve, Control) and 5S (Sort, Set-in-order, Shine, Standardize, Sustain) principles to identify and reduce wastes that contribute to increasing the lead-time in healthcare operations at the pharmacy understudy. The results obtained from the study revealed potential savings of > 45% in the drug dispensing cycle time.

  6. Incremental analysis of the reengineering of an outpatient billing process: an empirical study in a public hospital

    PubMed Central

    2013-01-01

    Background A smartcard is an integrated circuit card that provides identification, authentication, data storage, and application processing. Among other functions, smartcards can serve as credit and ATM cards and can be used to pay various invoices using a ‘reader’. This study looks at the unit cost and activity time of both a traditional cash billing service and a newly introduced smartcard billing service in an outpatient department in a hospital in Taipei, Taiwan. Methods The activity time required in using the cash billing service was determined via a time and motion study. A cost analysis was used to compare the unit costs of the two services. A sensitivity analysis was also performed to determine the effect of smartcard use and number of cashier windows on incremental cost and waiting time. Results Overall, the smartcard system had a higher unit cost because of the additional service fees and business tax, but it reduced patient waiting time by at least 8 minutes. Thus, it is a convenient service for patients. In addition, if half of all outpatients used smartcards to pay their invoices, along with four cashier windows for cash payments, then the waiting time of cash service users could be reduced by approximately 3 minutes and the incremental cost would be close to breaking even (even though it has a higher overall unit cost that the traditional service). Conclusions Traditional cash billing services are time consuming and require patients to carry large sums of money. Smartcard services enable patients to pay their bill immediately in the outpatient clinic and offer greater security and convenience. The idle time of nurses could also be reduced as they help to process smartcard payments. A reduction in idle time reduces hospital costs. However, the cost of the smartcard service is higher than the cash service and, as such, hospital administrators must weigh the costs and benefits of introducing a smartcard service. In addition to the obvious benefits of the smartcard service, there is also scope to extend its use in a hospital setting to include the notification of patient arrival and use in other departments. PMID:23763904

  7. Incremental analysis of the reengineering of an outpatient billing process: an empirical study in a public hospital.

    PubMed

    Chu, Kuan-Yu; Huang, Chunmin

    2013-06-13

    A smartcard is an integrated circuit card that provides identification, authentication, data storage, and application processing. Among other functions, smartcards can serve as credit and ATM cards and can be used to pay various invoices using a 'reader'. This study looks at the unit cost and activity time of both a traditional cash billing service and a newly introduced smartcard billing service in an outpatient department in a hospital in Taipei, Taiwan. The activity time required in using the cash billing service was determined via a time and motion study. A cost analysis was used to compare the unit costs of the two services. A sensitivity analysis was also performed to determine the effect of smartcard use and number of cashier windows on incremental cost and waiting time. Overall, the smartcard system had a higher unit cost because of the additional service fees and business tax, but it reduced patient waiting time by at least 8 minutes. Thus, it is a convenient service for patients. In addition, if half of all outpatients used smartcards to pay their invoices, along with four cashier windows for cash payments, then the waiting time of cash service users could be reduced by approximately 3 minutes and the incremental cost would be close to breaking even (even though it has a higher overall unit cost that the traditional service). Traditional cash billing services are time consuming and require patients to carry large sums of money. Smartcard services enable patients to pay their bill immediately in the outpatient clinic and offer greater security and convenience. The idle time of nurses could also be reduced as they help to process smartcard payments. A reduction in idle time reduces hospital costs. However, the cost of the smartcard service is higher than the cash service and, as such, hospital administrators must weigh the costs and benefits of introducing a smartcard service. In addition to the obvious benefits of the smartcard service, there is also scope to extend its use in a hospital setting to include the notification of patient arrival and use in other departments.

  8. Modeling the Technological Process for Harvesting of Agricultural Produce

    NASA Astrophysics Data System (ADS)

    Shepelev, S. D.; Shepelev, V. D.; Almetova, Z. V.; Shepeleva, N. P.; Cheskidov, M. V.

    2018-01-01

    Substantiating the efficiency and the parameters of harvesting as a technological process makes it possible to reduce production costs and increase enterprise profit. When the level of technical equipment declines, the efficiency of combine harvesters can be increased through efficient daily and seasonal operating modes. A correlation between daily operating time and the seasonal load of combine harvesters is therefore established, showing that an increase in seasonal load prolongs the daily operating time of the harvesters. The productive portion of the seasonal load can be increased by a reasonable ratio of crop varieties according to their ripening periods, thereby reducing the required number of machines by up to 40%. Timing studies and field testing define the useful-shift-time utilization factor of combine harvesters and the efficient operating modes of the machines, and identify alternatives for improving the technical readiness of combine harvesters.

  9. COMPUTER SIMULATOR (BEST) FOR DESIGNING SULFATE-REDUCING BACTERIA FIELD BIOREACTORS

    EPA Science Inventory

    BEST (bioreactor economics, size and time of operation) is a spreadsheet-based model that is used in conjunction with public domain software, PhreeqcI. BEST is used in the design process of sulfate-reducing bacteria (SRB) field bioreactors to passively treat acid mine drainage (A...

  10. EVALUATION OF LEAD AVAILABILITY IN AMENDED SOILS MONITORED OVER A LONG-TERM TIME PERIOD

    EPA Science Inventory

    Two different soil amendment processes were evaluated for reducing lead availability from a contaminated soil at a demonstration study site, to reduce potential public health and environmental concerns. A limited variety of in vitro laboratory availability tests (relativ...

  11. DESIGNING SULFATE-REDUCING BACTERIA FIELD BIOREACTORS USING THE BEST MODEL

    EPA Science Inventory

    BEST (bioreactor economics, size and time of operation) is a spreadsheet-based model that is used in conjunction with a public domain computer software package, PHREEQCI. BEST is intended to be used in the design process of sulfate-reducing bacteria (SRB)field bioreactors to pas...

  12. Recent Cycle Time Reduction at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Kegelman, Jerome T.

    2000-01-01

    The NASA Langley Research Center (LaRC) has been engaged in an effort to reduce wind tunnel test cycle time in support of Agency goals and to satisfy the wind tunnel testing needs of the commercial and military aerospace communities. LaRC has established the Wind Tunnel Enterprise (WTE), with goals of reducing wind tunnel test cycle time by an order of magnitude by 2002, and by two orders of magnitude by 2010. The WTE also plans to meet customer expectations for schedule integrity, as well as data accuracy and quality assurance. The WTE has made progress towards these goals over the last year with a focused effort on technological developments balanced by attention to process improvements. This paper presents a summary of several of the WTE activities over the last year that are related to test cycle time reductions at the Center. Reducing wind tunnel test cycle time, defined here as the time between the freezing of loft lines and delivery of test data, requires that the relationship between high productivity and data quality assurance be considered. The efforts have focused on all of the drivers for test cycle time reduction, including process centered improvements, facility upgrades, technological improvements to enhance facility readiness and productivity, as well as advanced measurement techniques. The application of internet tools and computer modeling of facilities to allow a virtual presence of the customer team is also presented.

  13. Job-shop scheduling applied to computer vision

    NASA Astrophysics Data System (ADS)

    Sebastian y Zuniga, Jose M.; Torres-Medina, Fernando; Aracil, Rafael; Reinoso, Oscar; Jimenez, Luis M.; Garcia, David

    1997-09-01

    This paper presents a method for minimizing the total elapsed time spent by n tasks running on m different processors working in parallel. The developed algorithm not only minimizes the total elapsed time but also reduces the idle time and waiting time of in-process tasks. This condition is very important in some applications of computer vision in which the time to finish the total process is particularly critical -- quality control in industrial inspection, real-time computer vision, guided robots. The scheduling algorithm is based on the use of two matrices obtained from the precedence relationships between tasks and on the data derived from those matrices. The developed scheduling algorithm has been tested in one application of quality control using computer vision. The results obtained have been satisfactory in the application of different image processing algorithms.
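
    The abstract does not reproduce the two-matrix algorithm itself, so the following is only a generic sketch of precedence-constrained list scheduling on m parallel processors, in Python, with invented task names and durations; it illustrates the kind of idle-time reduction described above rather than the authors' exact method.

        # Minimal precedence-constrained list scheduling on m identical processors.
        # NOT the paper's two-matrix algorithm (not detailed in the abstract); it only
        # illustrates packing tasks onto parallel processors to cut idle time.
        import heapq

        def schedule(durations, preds, m):
            """durations: {task: time}; preds: {task: set of prerequisites}; m: processors."""
            indeg = {t: len(preds.get(t, set())) for t in durations}
            succs = {t: [] for t in durations}
            for t, ps in preds.items():
                for p in ps:
                    succs[p].append(t)
            ready = [t for t, d in indeg.items() if d == 0]
            procs = [(0.0, i) for i in range(m)]      # (time processor becomes free, id)
            heapq.heapify(procs)
            start, finish = {}, {}
            while ready:
                # pick the ready task with the longest duration (a common list-scheduling rule)
                task = max(ready, key=lambda t: durations[t])
                ready.remove(task)
                free_at, pid = heapq.heappop(procs)
                earliest = max([finish[p] for p in preds.get(task, set())], default=0.0)
                start[task] = max(free_at, earliest)
                finish[task] = start[task] + durations[task]
                heapq.heappush(procs, (finish[task], pid))
                for s in succs[task]:
                    indeg[s] -= 1
                    if indeg[s] == 0:
                        ready.append(s)
            return start, finish, max(finish.values())

        # Example: 5 hypothetical vision tasks on 2 processors
        durations = {"grab": 2, "filter": 4, "edges": 3, "measure": 2, "report": 1}
        preds = {"filter": {"grab"}, "edges": {"grab"}, "measure": {"filter", "edges"}, "report": {"measure"}}
        print(schedule(durations, preds, 2))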

  14. Estimating replicate time shifts using Gaussian process regression

    PubMed Central

    Liu, Qiang; Andersen, Bogi; Smyth, Padhraic; Ihler, Alexander

    2010-01-01

    Motivation: Time-course gene expression datasets provide important insights into dynamic aspects of biological processes, such as circadian rhythms, cell cycle and organ development. In a typical microarray time-course experiment, measurements are obtained at each time point from multiple replicate samples. Accurately recovering the gene expression patterns from experimental observations is made challenging by both measurement noise and variation among replicates' rates of development. Prior work on this topic has focused on inference of expression patterns assuming that the replicate times are synchronized. We develop a statistical approach that simultaneously infers both (i) the underlying (hidden) expression profile for each gene, as well as (ii) the biological time for each individual replicate. Our approach is based on Gaussian process regression (GPR) combined with a probabilistic model that accounts for uncertainty about the biological development time of each replicate. Results: We apply GPR with uncertain measurement times to a microarray dataset of mRNA expression for the hair-growth cycle in mouse back skin, predicting both profile shapes and biological times for each replicate. The predicted time shifts show high consistency with independently obtained morphological estimates of relative development. We also show that the method systematically reduces prediction error on out-of-sample data, significantly reducing the mean squared error in a cross-validation study. Availability: Matlab code for GPR with uncertain time shifts is available at http://sli.ics.uci.edu/Code/GPRTimeshift/ Contact: ihler@ics.uci.edu PMID:20147305
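
    As a rough illustration of the idea (not the authors' full probabilistic model or code), the sketch below fits an RBF-kernel Gaussian process to two synthetic replicates and picks the time shift of the second replicate by a grid search over the GP log marginal likelihood; all data, kernel hyperparameters, and the grid are invented.

        # Hedged sketch: GP regression with an unknown per-replicate time shift, chosen by
        # maximizing the log marginal likelihood over a grid. Synthetic data throughout.
        import numpy as np

        def rbf(a, b, length=2.0, var=1.0):
            d = a[:, None] - b[None, :]
            return var * np.exp(-0.5 * (d / length) ** 2)

        def log_marginal(t, y, noise=0.05):
            K = rbf(t, t) + noise * np.eye(len(t))
            L = np.linalg.cholesky(K)
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
            return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(t) * np.log(2 * np.pi)

        rng = np.random.default_rng(0)
        t_grid = np.linspace(0, 10, 8)                    # nominal sampling times
        true_shift = 1.3                                  # replicate 2 develops 1.3 "days" late
        rep1 = np.sin(t_grid) + 0.05 * rng.normal(size=8)
        rep2 = np.sin(t_grid - true_shift) + 0.05 * rng.normal(size=8)

        best = None
        for shift in np.linspace(-3, 3, 61):              # candidate shifts for replicate 2
            t_all = np.concatenate([t_grid, t_grid - shift])
            y_all = np.concatenate([rep1, rep2])
            ll = log_marginal(t_all, y_all)
            if best is None or ll > best[1]:
                best = (shift, ll)
        print("estimated shift for replicate 2:", round(best[0], 2))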

  15. Design of container velocity profile for the suppression of liquid sloshing

    NASA Astrophysics Data System (ADS)

    Kim, Dongjoo

    2016-11-01

    In many industrial applications, high-speed position control of a liquid container causes undesirable liquid vibrations called 'sloshing' which poses a control challenge in fast maneuvering and accurate positioning of containers. Recently, it has been shown that a control theory called 'input shaping' is successfully applied to reduce the sloshing, but its success comes at a cost of longer process time. Therefore, we aim to minimize liquid sloshing without increasing the process time when a container moves horizontally by a target distance within a limited time. In this study, sensing and feedback actuation are not permitted but the container velocity is allowed to be modified from a given triangular profile. A new design is proposed by applying input shaping to the container velocity with carefully selected acceleration time. That is, the acceleration time is chosen to be the 1st mode natural period, and the input shaper is determined based on the 3rd mode natural frequency. The proposed approach is validated by performing numerical simulations, which show that the simple modification of container velocity reduces the sloshing significantly without additional process time in a feedforward manner. Supported by the NRF programs (NRF-2015R1D1A1A01059675) of Korean government.

  16. Facilitating Goal-Oriented Behaviour in the Stroop Task: When Executive Control Is Influenced by Automatic Processing

    PubMed Central

    Parris, Benjamin A.; Bate, Sarah; Brown, Scott D.; Hodgson, Timothy L.

    2012-01-01

    A portion of Stroop interference is thought to arise from a failure to maintain goal-oriented behaviour (or goal neglect). The aim of the present study was to investigate whether goal- relevant primes could enhance goal maintenance and reduce the Stroop interference effect. Here it is shown that primes related to the goal of responding quickly in the Stroop task (e.g. fast, quick, hurry) substantially reduced Stroop interference by reducing reaction times to incongruent trials but increasing reaction times to congruent and neutral trials. No effects of the primes were observed on errors. The effects on incongruent, congruent and neutral trials are explained in terms of the influence of the primes on goal maintenance. The results show that goal priming can facilitate goal-oriented behaviour and indicate that automatic processing can modulate executive control. PMID:23056553

  17. Classification of hyperspectral imagery using MapReduce on a NVIDIA graphics processing unit (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image provides a multidimensional, data-rich representation consisting of hundreds of spectral bands. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. To overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that helps analyze a hyperspectral image through parallel hardware and a parallel programming model that is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source implementation of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following cases: a combined CPU and GPU case, a CPU-only case, and a case where no dimensionality reduction was applied.
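
    The toy sketch below shows only the MapReduce programming pattern applied to per-pixel classification, in plain Python (no Hadoop or GPU involved); the spectra, class means, and the nearest-mean classifier are placeholders, not the study's data or algorithm.

        # Toy MapReduce pattern: map each pixel spectrum to (label, 1) using a nearest
        # class-mean rule, then reduce to per-class counts. Data are synthetic placeholders.
        import numpy as np
        from collections import Counter
        from functools import reduce

        rng = np.random.default_rng(1)
        n_bands, n_pixels = 50, 1000
        class_means = {"water": rng.normal(0, 1, n_bands),
                       "soil":  rng.normal(1, 1, n_bands),
                       "crop":  rng.normal(2, 1, n_bands)}
        pixels = rng.normal(1, 1.5, size=(n_pixels, n_bands))

        def map_fn(spectrum):
            # emit (label, 1) for the nearest class mean (minimum Euclidean distance)
            label = min(class_means, key=lambda c: np.linalg.norm(spectrum - class_means[c]))
            return Counter({label: 1})

        def reduce_fn(c1, c2):
            return c1 + c2

        counts = reduce(reduce_fn, map(map_fn, pixels))
        print(counts)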

  18. Real-time speckle reduction in optical coherence tomography using the dual window method.

    PubMed

    Zhao, Yang; Chu, Kengyeh K; Eldridge, Will J; Jelly, Evan T; Crose, Michael; Wax, Adam

    2018-02-01

    Speckle is an intrinsic noise of interferometric signals which reduces contrast and degrades the quality of optical coherence tomography (OCT) images. Here, we present a frequency compounding speckle reduction technique using the dual window (DW) method. Using the DW method, speckle noise is reduced without the need to acquire multiple frames. A ~25% improvement in the contrast-to-noise ratio (CNR) was achieved using the DW speckle reduction method with only minimal loss (~17%) in axial resolution. We also demonstrate that real-time speckle reduction can be achieved at a B-scan rate of ~21 frames per second using a graphic processing unit (GPU). The DW speckle reduction technique can work on any existing OCT instrument without further system modification or extra components. This makes it applicable both in real-time imaging systems and during post-processing.

  19. An assembly process model based on object-oriented hierarchical time Petri Nets

    NASA Astrophysics Data System (ADS)

    Wang, Jiapeng; Liu, Shaoli; Liu, Jianhua; Du, Zenghui

    2017-04-01

    In order to improve the versatility, accuracy and integrity of the assembly process model of complex products, an assembly process model based on object-oriented hierarchical time Petri Nets is presented. A complete assembly process information model including assembly resources, assembly inspection, time, structure and flexible parts is established, and this model describes the static and dynamic data involved in the assembly process. Through the analysis of three-dimensional assembly process information, the assembly information is hierarchically divided from the whole, through the local, down to the details, and subnet models of object-oriented Petri Nets are established at the different levels. The communication problem between Petri subnets is solved by using a message database, which effectively reduces the complexity of system modeling. Finally, the modeling process is presented, and a five-layer Petri Nets model is established based on the hoisting process of the engine compartment of a wheeled armored vehicle.

  20. Production of Tuber-Inducing Factor

    NASA Technical Reports Server (NTRS)

    Stutte, Gary W.; Yorio, Neil C.

    2006-01-01

    A process for making a substance that regulates the growth of potatoes and some other economically important plants has been developed. The process also yields an economically important by-product: potatoes. The particular growth-regulating substance, denoted tuber-inducing factor (TIF), is made naturally by, and acts naturally on, potato plants. The primary effects of TIF on potato plants are reducing the lengths of the main shoots, reducing the numbers of nodes on the main stems, reducing the total biomass, accelerating the initiation of potatoes, and increasing the edible fraction (potatoes) of the overall biomass. To some extent, these effects of TIF can override environmental effects that typically inhibit the formation of tubers. TIF can be used in the potato industry to reduce growth time and increase harvest efficiency. Other plants that have been observed to be affected by TIF include tomatoes, peppers, radishes, eggplants, marigolds, and morning glories. In the present process, potatoes are grown with their roots and stolons immersed in a nutrient solution in a recirculating hydroponic system. From time to time, a nutrient replenishment solution is added to the recirculating nutrient solution to maintain the required nutrient concentration, water is added to replace water lost from the recirculating solution through transpiration, and an acid or base is added, as needed, to maintain the recirculating solution at a desired pH level. The growing potato plants secrete TIF into the recirculating solution. The concentration of TIF in the solution gradually increases to a range in which the TIF regulates the growth of the plants.

  1. Masked multichannel analyzer

    DOEpatents

    Winiecki, A.L.; Kroop, D.C.; McGee, M.K.; Lenkszus, F.R.

    1984-01-01

    An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.

  2. Masked multichannel analyzer

    DOEpatents

    Winiecki, Alan L.; Kroop, David C.; McGee, Marilyn K.; Lenkszus, Frank R.

    1986-01-01

    An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.

  3. The Brot System

    NASA Astrophysics Data System (ADS)

    Lanzing, Jan

    1988-01-01

    Processing meteor data from observation notes into the format used by those who perform statistical reductions on the observations is a very time-consuming business. It cost our group in Hengelo (Buurse) so much time that the reducers complained. At that point we realized that we had to automate the formatting process. Because of our small budget we had to work with what we had: a CBM-64 home computer.

  4. Inactivation of Salmonella Enteritidis on lettuces used by minimally processed vegetable industries.

    PubMed

    Silveira, Josete Bailardi; Hessel, Claudia Titze; Tondo, Eduardo Cesar

    2017-01-30

    Washing and disinfection methods used by minimally processed vegetable industries of Southern Brazil were reproduced in the laboratory in order to verify their effectiveness in reducing Salmonella Enteritidis SE86 (SE86) on lettuce. Among the five industries investigated, four carried out washing with potable water followed by disinfection with 200 ppm sodium hypochlorite during different immersion times. The washing procedure alone decreased the SE86 population by approximately 1 log CFU/g, and immersion times of 1, 2, 5, and 15 minutes in disinfectant solution demonstrated reduction rates ranging from 2.06±0.10 log CFU/g to 3.01±0.21 log CFU/g. Rinsing alone was able to reduce counts by 0.12±0.63 log CFU/g to 1.90±1.07 log CFU/g. The most effective method was washing followed by disinfection with 200 ppm sodium hypochlorite for 15 minutes and a final rinse with potable water, reaching a 5.83 log CFU/g reduction. However, no statistical differences were observed in the reduction rates after different immersion times. A time interval of 1 to 2 minutes may therefore be an advantage to minimally processed vegetable industries, optimizing the process without putting food safety at risk.

  5. Microwave flow and conventional heating effects on the physicochemical properties, bioactive compounds and enzymatic activity of tomato puree.

    PubMed

    Arjmandi, Mitra; Otón, Mariano; Artés, Francisco; Artés-Hernández, Francisco; Gómez, Perla A; Aguayo, Encarna

    2017-02-01

    Thermal processing causes a number of undesirable changes in the physicochemical and bioactive properties of tomato products. Microwave (MW) technology is an emerging industrial thermal process that offers rapid and uniform heating, high energy efficiency and high overall quality of the final product. The main quality changes of tomato puree after pasteurization at 96 ± 2 °C for 35 s, provided by a semi-industrial continuous microwave oven (MWP) under different doses (low power/long time to high power/short time) or by the conventional method (CP), were studied. All heat treatments reduced colour quality, total antioxidant capacity and vitamin C, with a greater reduction in CP than in MWP. On the other hand, use of MWP, in particular high power/short time (1900 W/180 s, 2700 W/160 s and 3150 W/150 s), enhanced the viscosity and lycopene extraction and decreased the residual enzyme activity more effectively than in CP samples. For tomato puree, polygalacturonase was the more thermo-resistant enzyme, and could be used as an indicator of pasteurization efficiency. MWP was an excellent pasteurization technique that provided tomato puree with improved nutritional quality, reducing process times compared to the standard pasteurization process. © 2016 Society of Chemical Industry.

  6. Using Six Sigma methodology to reduce patient transfer times from floor to critical-care beds.

    PubMed

    Silich, Stephan J; Wetz, Robert V; Riebling, Nancy; Coleman, Christine; Khoueiry, Georges; Abi Rafeh, Nidal; Bagon, Emma; Szerszen, Anita

    2012-01-01

    In response to concerns regarding delays in transferring critically ill patients to intensive care units (ICU), a quality improvement project, using the Six Sigma process, was undertaken to correct issues leading to transfer delay. To test the efficacy of a Six Sigma intervention to reduce transfer time and establish a patient transfer process that would effectively enhance communication between hospital caregivers and improve the continuum of care for patients. The project was conducted at a 714-bed tertiary care hospital in Staten Island, New York. A Six Sigma multidisciplinary team was assembled to assess areas that needed improvement, manage the intervention, and analyze the results. The Six Sigma process identified eight key steps in the transfer of patients from general medical floors to critical care areas. Preintervention data and a root-cause analysis helped to establish the goal transfer-time limits of 3 h for any individual transfer and 90 min for the average of all transfers. The Six Sigma approach is a problem-solving methodology that resulted in almost a 60% reduction in patient transfer time from a general medical floor to a critical care area. The Six Sigma process is a feasible method for implementing healthcare related quality of care projects, especially those that are complex. © 2011 National Association for Healthcare Quality.

  7. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in the 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or reduce the computation time in image registration because the trade-off between the computation time and image registration accuracy can be controlled for. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.

  8. Supply and demand: application of Lean Six Sigma methods to improve drug round efficiency and release nursing time.

    PubMed

    Kieran, Maríosa; Cleary, Mary; De Brún, Aoife; Igoe, Aileen

    2017-10-01

    To improve efficiency, reduce interruptions and reduce the time taken to complete oral drug rounds. Lean Six Sigma methods were applied to improve drug round efficiency using a pre- and post-intervention design. A 20-bed orthopaedic ward in a large teaching hospital in Ireland. Pharmacy, nursing and quality improvement staff. A multifaceted intervention was designed which included changes in processes related to drug trolley organization and drug supply planning. A communications campaign aimed at reducing interruptions during nurse-led drug rounds was also developed and implemented. Average number of interruptions, average drug round time and variation in time taken to complete drug round. At baseline, the oral drug round took an average of 125 min. Following application of Lean Six Sigma methods, the average drug round time decreased by 51 min. The average number of interruptions per drug round reduced from an average of 12 at baseline to 11 following intervention, with a 75% reduction in drug supply interruptions. Lean Six Sigma methodology was successfully employed to reduce interruptions and to reduce the time taken to complete the oral drug round. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  9. Color Improves Speed of Processing But Not Perception in a Motion Illusion

    PubMed Central

    Perry, Carolyn J.; Fallah, Mazyar

    2012-01-01

    When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences did not improve direction discrimination and reduce direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored. PMID:22479255

  10. Color improves speed of processing but not perception in a motion illusion.

    PubMed

    Perry, Carolyn J; Fallah, Mazyar

    2012-01-01

    When two superimposed surfaces of dots move in different directions, the perceived directions are shifted away from each other. This perceptual illusion has been termed direction repulsion and is thought to be due to mutual inhibition between the representations of the two directions. It has further been shown that a speed difference between the two surfaces attenuates direction repulsion. As speed and direction are both necessary components of representing motion, the reduction in direction repulsion can be attributed to the additional motion information strengthening the representations of the two directions and thus reducing the mutual inhibition. We tested whether bottom-up attention and top-down task demands, in the form of color differences between the two surfaces, would also enhance motion processing, reducing direction repulsion. We found that the addition of color differences did not improve direction discrimination and reduce direction repulsion. However, we did find that adding a color difference improved performance on the task. We hypothesized that the performance differences were due to the limited presentation time of the stimuli. We tested this in a follow-up experiment where we varied the time of presentation to determine the duration needed to successfully perform the task with and without the color difference. As we expected, color segmentation reduced the amount of time needed to process and encode both directions of motion. Thus we find a dissociation between the effects of attention on the speed of processing and conscious perception of direction. We propose four potential mechanisms wherein color speeds figure-ground segmentation of an object, attentional switching between objects, direction discrimination and/or the accumulation of motion information for decision-making, without affecting conscious perception of the direction. Potential neural bases are also explored.

  11. Gold-coated nanoparticles for use in biotechnology applications

    DOEpatents

    Berning, Douglas E [Los Alamos, NM; Kraus, Jr., Robert H.; Atcher, Robert W [Los Alamos, NM; Schmidt, Jurgen G [Los Alamos, NM

    2009-07-07

    A process of preparing gold-coated magnetic nanoparticles is disclosed and includes forming a suspension of magnetic nanoparticles within a suitable liquid, adding an amount of a reducible gold compound and a reducing agent to the suspension, and, maintaining the suspension for time sufficient to form gold-coated magnetic nanoparticles.

  12. How to get parts out of prison (without paperwork).

    PubMed

    Brown, K

    1998-11-01

    This article describes the business relationship between a manufacturing company and a vendor that is a minimum-security correctional facility. In particular, it describes a set of revisions in the purchasing and delivery process that reduced the amount of paperwork substantially and also reduced the turnaround time.

  13. Gold-coated nanoparticles for use in biotechnology applications

    DOEpatents

    Berning, Douglas E [Los Alamos, NM; Kraus, Jr., Robert H.; Atcher, Robert W [Los Alamos, NM; Schmidt, Jurgen G [Los Alamos, NM

    2007-06-05

    A process of preparing gold-coated magnetic nanoparticles is disclosed and includes forming a suspension of magnetic nanoparticles within a suitable liquid, adding an amount of a reducible gold compound and a reducing agent to the suspension, and, maintaining the suspension for time sufficient to form gold-coated magnetic nanoparticles.

  14. Progressive Band Selection

    NASA Technical Reports Server (NTRS)

    Fisher, Kevin; Chang, Chein-I

    2009-01-01

    Progressive band selection (PBS) reduces spectral redundancy without significant loss of information, thereby reducing hyperspectral image data volume and processing time. Used onboard a spacecraft, it can also reduce image downlink time. PBS prioritizes an image's spectral bands according to priority scores that measure their significance to a specific application. Then it uses one of three methods to select an appropriate number of the most useful bands. Key challenges for PBS include selecting an appropriate criterion to generate band priority scores, and determining how many bands should be retained in the reduced image. The image's Virtual Dimensionality (VD), once computed, is a reasonable estimate of the latter. We describe the major design details of PBS and test PBS in a land classification experiment.
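
    A minimal sketch of the band-prioritization step, assuming band variance as the priority criterion and a fixed number of retained bands; the article's actual priority scores are application-specific, and the retained-band count would normally come from the virtual dimensionality estimate rather than being hard-coded.

        # Hedged sketch of band prioritization: score each band (here by variance, one
        # possible criterion; PBS uses application-specific scores), then keep the top k.
        import numpy as np

        def select_bands(cube, k):
            """cube: (rows, cols, bands) hyperspectral image; returns indices of k retained bands."""
            flat = cube.reshape(-1, cube.shape[-1])
            scores = flat.var(axis=0)                 # priority score per band
            order = np.argsort(scores)[::-1]          # highest-priority bands first
            return np.sort(order[:k])

        cube = np.random.default_rng(2).normal(size=(64, 64, 100))   # synthetic image
        print(select_bands(cube, k=12))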

  15. Improving surgeon utilization in an orthopedic department using simulation modeling

    PubMed Central

    Simwita, Yusta W; Helgheim, Berit I

    2016-01-01

    Purpose: Worldwide, more than two billion people lack appropriate access to surgical services due to a mismatch between existing human resources and patient demand. Improving utilization of the existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department. Our main focus is improving utilization of surgeons while minimizing patient wait time. Methods: The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that contribute to poor surgeon utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with the actual patient data that were collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results: The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be greatly improved; that is, surgeon utilization would increase and patient waiting time would be reduced. Simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion: This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. PMID:29355193

  16. Apathy and Reduced Speed of Processing Underlie Decline in Verbal Fluency following DBS.

    PubMed

    Foley, Jennifer A; Foltynie, Tom; Zrinzo, Ludvic; Hyam, Jonathan A; Limousin, Patricia; Cipolotti, Lisa

    2017-01-01

    Objective. Reduced verbal fluency is a strikingly uniform finding following deep brain stimulation (DBS) for Parkinson's disease (PD). The precise cognitive mechanism underlying this reduction remains unclear, but theories have suggested reduced motivation, linguistic skill, and/or executive function. It is of note, however, that previous reports have failed to consider the potential role of any changes in speed of processing. Thus, the aim of this study was to examine verbal fluency changes with a particular focus on the role of cognitive speed. Method. In this study, 28 patients with PD completed measures of verbal fluency, motivation, language, executive functioning, and speed of processing, before and after DBS. Results. As expected, there was a marked decline in verbal fluency but also in a timed test of executive functions and two measures of speed of processing. Verbal fluency decline was associated with markers of linguistic and executive functioning, but not after speed of processing was statistically controlled for. In contrast, greater decline in verbal fluency was associated with higher levels of apathy at baseline, which was not associated with changes in cognitive speed. Discussion. Reduced generativity and processing speed may account for the marked reduction in verbal fluency commonly observed following DBS.

  17. Method for reducing energy losses in laser crystals

    DOEpatents

    Atherton, L.J.; DeYoreo, J.J.; Roberts, D.H.

    1992-03-24

    A process for reducing energy losses in crystals is disclosed which comprises: a. heating a crystal to a temperature sufficiently high as to cause dissolution of microscopic inclusions into the crystal, thereby converting said inclusions into point-defects, and b. maintaining said crystal at a given temperature for a period of time sufficient to cause said point-defects to diffuse out of said crystal. Also disclosed are crystals treated by the process, and lasers utilizing the crystals as a source of light. 12 figs.

  18. Method for reducing energy losses in laser crystals

    DOEpatents

    Atherton, L. Jeffrey; DeYoreo, James J.; Roberts, David H.

    1992-01-01

    A process for reducing energy losses in crystals is disclosed which comprises: a. heating a crystal to a temperature sufficiently high as to cause dissolution of microscopic inclusions into the crystal, thereby converting said inclusions into point-defects, and b. maintaining said crystal at a given temperature for a period of time sufficient to cause said point-defects to diffuse out of said crystal. Also disclosed are crystals treated by the process, and lasers utilizing the crystals as a source of light.

  19. Optimization of airport security process

    NASA Astrophysics Data System (ADS)

    Wei, Jianan

    2017-05-01

    To facilitate passenger travel while ensuring public safety, the airport security process and its scheduling are optimized. A stochastic Petri net is used to simulate the single-channel security process: the reachability graph is drawn and a homogeneous Markov chain is constructed to analyze the performance of the security process network and find the bottleneck that limits passenger throughput. The initial state has one security channel open, and the passenger flow varies over time. When passengers arrive at a rate that exceeds the processing capacity of the open security channels, a queue forms. The moment the queuing time reaches an acceptable threshold is taken as the time to open (or close) the next channel, and the dynamic scheduling of the number of security channels is simulated to reduce passenger queuing time.
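
    The following is a deliberately simplified, time-stepped illustration of the dynamic channel-scheduling idea (not the paper's stochastic Petri net or Markov-chain analysis); the arrival process, service rate, and wait threshold are invented.

        # Simplified illustration (not the paper's model): a time-stepped queue where an
        # extra security channel opens when the estimated wait exceeds a threshold and
        # closes again when the queue drains. All rates and thresholds are made up.
        import random

        random.seed(0)
        SERVICE_RATE = 0.5          # passengers per minute per open channel
        WAIT_THRESHOLD = 10.0       # acceptable queuing time, minutes
        MAX_CHANNELS = 5

        queue = 0.0
        channels = 1
        for minute in range(240):
            arrivals = sum(1 for _ in range(10) if random.random() < 0.08)  # bursty arrivals
            queue += arrivals
            est_wait = queue / (SERVICE_RATE * channels)
            if est_wait > WAIT_THRESHOLD and channels < MAX_CHANNELS:
                channels += 1                       # open another channel
            elif est_wait < WAIT_THRESHOLD / 2 and channels > 1:
                channels -= 1                       # close an idle channel
            queue = max(0.0, queue - SERVICE_RATE * channels)
            if minute % 60 == 0:
                print(f"t={minute:3d} min  queue={queue:5.1f}  channels={channels}")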

  20. A Corporate-Wide Application of Organizational Behavior Management.

    ERIC Educational Resources Information Center

    Wikoff, Martin B.

    1984-01-01

    Describes a longitudinal project in which organizational behavior management (OBM) procedures have been applied to improve performance of plant employees, increase sales of contract furniture, accelerate response time to customer inquiries, increase orders processed, and reduce processing errors at Krueger, a contract and institutional furniture…

  1. Production and Characterization of Ethyl Ester from Crude Jatropha curcas Oil having High Free Fatty Acid Content

    NASA Astrophysics Data System (ADS)

    Kumar, Rajneesh; Dixit, Anoop; Singh, Shashi Kumar; Singh, Gursahib; Sachdeva, Monica

    2015-09-01

    A two-step process was carried out to produce biodiesel from crude Jatropha curcas oil. A pretreatment step of acid-catalyzed esterification was used to reduce the free fatty acid content to ≤2 %. The optimum reaction conditions for esterification were 5 % H2SO4, 20 % ethanol, and a 1 h reaction time at 65 °C; this pretreatment reduced the free fatty acid content of the oil from 7 % to 1.85 %. In the second step, alkali-catalyzed transesterification of the pretreated oil was carried out, and the effects of varying KOH concentrations and ethanol:oil ratios on percent ester recovery were investigated. The optimum reaction conditions for transesterification were 3 % KOH (w/v of oil), a 30 % (v/v) ethanol:oil ratio, and a 2 h reaction time at 65 °C. The maximum recovery of ethyl ester was 60.33 %.

  2. Electrochemical planarization

    DOEpatents

    Bernhardt, A.F.; Contolini, R.J.

    1993-10-26

    In a process for fabricating planarized thin film metal interconnects for integrated circuit structures, a planarized metal layer is etched back to the underlying dielectric layer by electropolishing, ion milling or other procedure. Electropolishing reduces processing time from hours to minutes and allows batch processing of multiple wafers. The etched back planarized thin film interconnect is flush with the dielectric layer. 12 figures.

  3. Autogen Version 2.0

    NASA Technical Reports Server (NTRS)

    Gladden, Roy

    2007-01-01

    Version 2.0 of the autogen software has been released. "Autogen" (automated sequence generation) signifies both a process and software used to implement the process of automated generation of sequences of commands in a standard format for uplink to spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes.

  4. Automated Chromium Plating Line for Gun Barrels

    DTIC Science & Technology

    1979-09-01

    consistent pretreatments and bath dwell times. Some of the advantages of automated processing include increased productivity (average of 20%) due to... when automated processing procedures are used. The current method of applying chromium electrodeposits to gun tubes is a manual, batch operation... currently practiced with rotary swaged gun tubes would substantially reduce the difficulties in automated processing.

  5. Reduction of Poisson noise in measured time-resolved data for time-domain diffuse optical tomography.

    PubMed

    Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y

    2012-01-01

    A method to reduce noise for time-domain diffuse optical tomography (DOT) is proposed. Poisson noise, which contaminates time-resolved photon counting data, is reduced by maximum a posteriori (MAP) estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson-distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing this probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in image-reconstruction experiments for time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image is smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening among other uses, is improved by the proposed noise reduction.
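
    A rough numerical sketch of the idea, assuming a Poisson negative log-likelihood plus a quadratic smoothness penalty on neighbouring time bins as a stand-in for the Markov prior; the penalty weight, synthetic data, and optimizer choice are illustrative, not the authors' formulation.

        # Rough MAP-style denoising of Poisson-counted data: minimize the Poisson negative
        # log-likelihood plus a quadratic penalty on differences between neighbouring bins.
        # The curve, weight `lam`, and optimizer are invented for illustration only.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        true = 50 * np.exp(-0.5 * ((np.arange(100) - 40) / 12.0) ** 2) + 2.0   # smooth curve
        counts = rng.poisson(true)                                             # measured photon counts

        lam = 5.0
        def objective(log_rate):
            rate = np.exp(log_rate)                        # enforce positivity
            nll = np.sum(rate - counts * log_rate)         # Poisson negative log-likelihood (up to a constant)
            smooth = lam * np.sum(np.diff(log_rate) ** 2)  # Markov-style smoothness prior
            return nll + smooth

        res = minimize(objective, x0=np.log(counts + 1.0), method="L-BFGS-B")
        denoised = np.exp(res.x)
        print("noisy RMSE:   ", round(float(np.sqrt(np.mean((counts - true) ** 2))), 2))
        print("denoised RMSE:", round(float(np.sqrt(np.mean((denoised - true) ** 2))), 2))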

  6. Critical elements in implementations of just-in-time management: empirical study of cement industry in Pakistan.

    PubMed

    Qureshi, Muhammad Imran; Iftikhar, Mehwish; Bhatti, Mansoor Nazir; Shams, Tauqeer; Zaman, Khalid

    2013-01-01

    In recent years, inventory management has been a continuous challenge for all organizations, not only because of the heavy cost associated with inventory holding but also because it has a great deal to do with an organization's production process. The cement industry is a growing sector of Pakistan's economy that is now facing problems in the capacity utilization of its plants. This study attempts to identify the key strategies for successful implementation of the just-in-time (JIT) management philosophy in the cement industry of Pakistan. The study uses survey responses from four hundred operations managers in the cement industry to learn about the advantages and benefits that the industry has experienced through JIT adoption. The results show that implementing the quality, product design, inventory management, supply chain and production plans embodied in the JIT philosophy in effect enhances the competitiveness of the cement industry in Pakistan. JIT implementation increases performance through lower inventory levels, reduced operations and inventory costs, the elimination of waste from processes, and reduced unnecessary production, which is a major challenge for manufacturers trying to maintain continuous flow processes. JIT implementation is a vital manufacturing strategy that improves capacity utilization and minimizes the defect rate in continuous flow processes. The study emphasizes the need for top management commitment in order to incorporate the necessary changes in the cement industry so that JIT implementation can take place in an effective manner.

  7. Real time gamma-ray signature identifier

    DOEpatents

    Rowland, Mark [Alamo, CA; Gosnell, Tom B [Moraga, CA; Ham, Cheryl [Livermore, CA; Perkins, Dwight [Livermore, CA; Wong, James [Dublin, CA

    2012-05-15

    A real time gamma-ray signature/source identification method and system using principal components analysis (PCA) for transforming and substantially reducing one or more comprehensive spectral libraries of nuclear materials types and configurations into a corresponding concise representation/signature(s) representing and indexing each individual predetermined spectrum in principal component (PC) space, wherein an unknown gamma-ray signature may be compared against the representative signature to find a match or at least characterize the unknown signature from among all the entries in the library with a single regression or simple projection into the PC space, so as to substantially reduce processing time and computing resources and enable real-time characterization and/or identification.
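
    A small sketch of the PCA matching step described above, using a synthetic spectral library: the library is compressed to a few principal components, and an unknown spectrum is characterized with a single projection into PC space followed by a nearest-neighbour comparison. The library contents, dimensions, and noise model are placeholders, not the patent's data.

        # Sketch of PCA-based signature matching: compress a library of spectra to a few
        # principal components, project an unknown spectrum once, and find the nearest entry.
        # Library spectra and sizes are synthetic stand-ins.
        import numpy as np

        rng = np.random.default_rng(4)
        n_lib, n_chan = 200, 1024
        library = rng.gamma(2.0, 1.0, size=(n_lib, n_chan))        # stand-in for library spectra

        mean = library.mean(axis=0)
        U, S, Vt = np.linalg.svd(library - mean, full_matrices=False)
        n_pc = 10
        components = Vt[:n_pc]                                      # PC basis (n_pc x channels)
        lib_scores = (library - mean) @ components.T                # concise signature per library entry

        unknown = library[37] + rng.normal(0, 0.1, n_chan)          # noisy copy of a known source
        unknown_score = (unknown - mean) @ components.T             # single projection into PC space
        match = int(np.argmin(np.linalg.norm(lib_scores - unknown_score, axis=1)))
        print("best match: library entry", match)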

  8. Good Housekeeping Implementation for Improving Efficiency in Cassava Starch Industry (Case Study : Margoyoso District, Pati Regency)

    NASA Astrophysics Data System (ADS)

    Aji, Wijayanto Setyo; Purwanto; Suherman, S.

    2018-02-01

    The cassava starch industry is one of the leading small and medium enterprises (SMEs) in Pati Regency. It releases waste that reduces the quantity of the final product and potentially contaminates the environment. This study was conducted to assess the feasibility of implementing good housekeeping to reduce waste and, at the same time, improve the efficiency of the production process. Good housekeeping opportunities were evaluated from three aspects: technical, economic, and environmental. They involve water conservation and waste reduction, including reuse of water in the washing process and improving worker awareness in the drying and packaging sections. Implementing these opportunities can reduce water consumption, reduce wastewater and solid waste generation, and increase the quantity of the final product.

  9. Acceleration of linear stationary iterative processes in multiprocessor computers. II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romm, Ya.E.

    1982-05-01

    For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982), or Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = Ax + b, where A = (a_ij) is a real n x n matrix and b is a real vector, with the usual Euclidean norm. Existence and uniqueness of the solution are assumed, i.e., det(E - A) ≠ 0, where E is the unit matrix. The linear iterative process converging to x is x^(k+1) = f(x^(k)), k = 0, 1, 2, ..., where the operator f maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant (various values of this number are investigated) and that the processors perform elementary binary arithmetic operations of addition and multiplication; the estimates include only the time needed to execute arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k+1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k+1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP, with, in the modification sought, a reduced number of steps executed in a time comparable to the operation time of logic elements. 6 references.
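
    For concreteness, the fixed-point iteration x^(k+1) = A x^(k) + b for the system x = Ax + b discussed above can be written out as follows; the matrix, tolerance, and convergence check are arbitrary examples, and the comment marks the per-component work that a multiprocessor could execute in parallel within one step.

        # Plain restatement of the iteration x^(k+1) = A x^(k) + b, which converges when the
        # spectral radius of A is below 1. Within one step, every component of A x^(k) + b is
        # independent work that processors could share. Matrix and tolerance are arbitrary.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 6
        A = rng.uniform(-0.1, 0.1, size=(n, n))      # small entries keep the spectral radius < 1
        b = rng.normal(size=n)

        x = np.zeros(n)
        for k in range(1000):
            x_new = A @ x + b                         # the n components here are independent work
            if np.linalg.norm(x_new - x) < 1e-12:
                break
            x = x_new
        print("sequential steps:", k + 1, " residual:", np.linalg.norm(x - (A @ x + b)))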

  10. q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans.

    PubMed

    Golkov, Vladimir; Dosovitskiy, Alexey; Sperl, Jonathan I; Menzel, Marion I; Czisch, Michael; Samann, Philipp; Brox, Thomas; Cremers, Daniel

    2016-05-01

    Numerous scientific fields rely on elaborate but partly suboptimal data processing pipelines. An example is diffusion magnetic resonance imaging (diffusion MRI), a non-invasive microstructure assessment method with a prominent application in neuroimaging. Advanced diffusion models providing accurate microstructural characterization so far have required long acquisition times and thus have been inapplicable for children and adults who are uncooperative, uncomfortable, or unwell. We show that the long scan time requirements are mainly due to disadvantages of classical data processing. We demonstrate how deep learning, a group of algorithms based on recent advances in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This modification allows obtaining scalar measures from advanced models at twelve-fold reduced scan time and detecting abnormalities without using diffusion models. We set a new state of the art by estimating diffusion kurtosis measures from only 12 data points and neurite orientation dispersion and density measures from only 8 data points. This allows unprecedentedly fast and robust protocols facilitating clinical routine and demonstrates how classical data processing can be streamlined by means of deep learning.
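
    A heavily reduced illustration of the general idea (learning a direct mapping from a handful of q-space samples to a scalar measure), using synthetic mono-exponential signals and a small scikit-learn MLP; it is not the authors' network, acquisition scheme, or diffusion model.

        # Very reduced sketch: learn a mapping from a few diffusion "signal" samples to a
        # scalar measure. Signals are synthetic mono-exponential decays, not real q-space data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)
        bvals = np.array([0, 250, 500, 750, 1000, 1500, 2000, 2500]) / 1000.0   # 8 sample points
        adc = rng.uniform(0.3, 3.0, size=5000)                                  # target scalar per voxel
        signals = np.exp(-np.outer(adc, bvals)) + 0.02 * rng.normal(size=(5000, 8))

        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        model.fit(signals[:4000], adc[:4000])
        pred = model.predict(signals[4000:])
        print("RMSE on held-out voxels:", round(float(np.sqrt(np.mean((pred - adc[4000:]) ** 2))), 3))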

  11. Elements of Designing for Cost

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.; Unal, Resit

    1992-01-01

    During recent history in the United States, government systems development has been performance driven. As a result, systems within a class have experienced exponentially increasing cost over time in fixed year dollars. Moreover, little emphasis has been placed on reducing cost. This paper defines designing for cost and presents several tools which, if used in the engineering process, offer the promise of reducing cost. Although other potential tools exist for designing for cost, this paper focuses on rules of thumb, quality function deployment, Taguchi methods, concurrent engineering, and activity based costing. Each of these tools has been demonstrated to reduce cost if used within the engineering process.

  12. Elements of designing for cost

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.; Unal, Resit

    1992-01-01

    During recent history in the United States, government systems development has been performance driven. As a result, systems within a class have experienced exponentially increasing cost over time in fixed year dollars. Moreover, little emphasis has been placed on reducing cost. This paper defines designing for cost and presents several tools which, if used in the engineering process, offer the promise of reducing cost. Although other potential tools exist for designing for cost, this paper focuses on rules of thumb, quality function deployment, Taguchi methods, concurrent engineering, and activity-based costing. Each of these tools has been demonstrated to reduce cost if used within the engineering process.

  13. Cell Structure Evolution of Aluminum Foams Under Reduced Pressure Foaming

    NASA Astrophysics Data System (ADS)

    Cao, Zhuokun; Yu, Yang; Li, Min; Luo, Hongjie

    2016-09-01

    Ti-H particles are used to increase the gas content in aluminum melts for reduced pressure foaming. This paper reports on the RPF process of AlCa alloy by adding TiH2, but in smaller amounts compared to traditional process. TiH2 is completely decomposed by stirring the melt, following which reduced pressure is applied. TiH2 is not added as the blowing agent; instead, it is added for increasing the H2 concentration in the liquid AlCa melt. It is shown that pressure change induces further release of hydrogen from Ti phase. It is also found that foam collapse is caused by the fast bubble coalescing during pressure reducing procedure, and the instability of liquid film is related to the significant increase in critical thickness of film rupture. A combination of lower amounts of TiH2, coupled with reduced pressure, is another way of increasing hydrogen content in the liquid aluminum. A key benefit of this process is that it provides time to transfer the molten metal to a mold and then apply the reduced pressure to produce net shape foam parts.

  14. Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry

    NASA Technical Reports Server (NTRS)

    Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul

    2003-01-01

    Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.

  15. Checkpoint-based forward recovery using lookahead execution and rollback validation in parallel and distributed systems. Ph.D. Thesis, 1992

    NASA Technical Reports Server (NTRS)

    Long, Junsheng

    1994-01-01

    This thesis studies a forward recovery strategy using checkpointing and optimistic execution in parallel and distributed systems. The approach uses replicated tasks executing on different processors for forward recovery and checkpoint comparison for error detection. To reduce overall redundancy, this approach employs a lower static redundancy in the common error-free situation to detect errors than the standard N Module Redundancy scheme (NMR) does to mask off errors. For the rare occurrence of an error, this approach uses some extra redundancy for recovery. To reduce the run-time recovery overhead, look-ahead processes are used to advance computation speculatively and a rollback process is used to produce a diagnosis for correct look-ahead processes without rollback of the whole system. Both analytical and experimental evaluations have shown that this strategy can provide a nearly error-free execution time even under faults with a lower average redundancy than NMR.

  16. Overview of the DART project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, K.R.; Hansen, F.R.; Napolitano, L.M.

    1992-01-01

    DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language (C or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) groundstation.

  17. Overview of the DART project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, K.R.; Hansen, F.R.; Napolitano, L.M.

    1992-01-01

    DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language (C or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) groundstation.

  18. Concreteness of idiographic worry and anticipatory processing.

    PubMed

    McGowan, Sarah Kate; Stevens, Elizabeth S; Behar, Evelyn; Judah, Matt R; Mills, Adam C; Grant, DeMond M

    2017-03-01

    Worry and anticipatory processing are forms of repetitive negative thinking (RNT) that are associated with maladaptive characteristics and negative consequences. One key maladaptive characteristic of worry is its abstract nature (Goldwin & Behar, 2012; Stöber & Borkovec, 2002). Several investigations have relied on inductions of worry that are social-evaluative in nature, which precludes distinctions between worry and RNT about social-evaluative situations. The present study examined similarities and distinctions between worry and anticipatory processing on potentially important maladaptive characteristics. Participants (N = 279) engaged in idiographic periods of uninstructed mentation, worry, and anticipatory processing and provided thought samples during each minute of each induction. Thought samples were assessed for concreteness, degree of verbal-linguistic activity, and degree of imagery-based activity. Both worry and anticipatory processing were characterized by reduced concreteness, increased abstraction of thought over time, and a predominance of verbal-linguistic activity. However, worry was more abstract, more verbal-linguistic, and less imagery-based relative to anticipatory processing. Finally, worry demonstrated reductions in verbal-linguistic activity over time, whereas anticipatory processing demonstrated reductions in imagery-based activity over time. Worry was limited to non-social topics to distinguish worry from anticipatory processing, and may not represent worry that is social in nature. Generalizability may also be limited by use of an undergraduate sample. Results from the present study provide support for Stöber's theory regarding the reduced concreteness of worry, and suggest that although worry and anticipatory processing share some features, they also contain characteristics unique to each process. Published by Elsevier Ltd.

  19. Time-based analysis of the apheresis platelet supply chain in England.

    PubMed

    Wilding, R; Cotton, S; Dobbin, J; Chapman, J; Yates, N

    2011-10-01

    During 2009/2010, the loss of platelets within NHS Blood and Transplant (NHSBT) due to time expiry was 9.3%. Hospitals remain reluctant to hold stocks of platelets because of the short shelf life remaining at issue. The purpose of this study was to identify areas for time compression in the apheresis platelet supply chain, to extend the shelf life available to hospitals and reduce wastage in NHSBT. This was done within the context of NHSBT reconfiguring its supply chain and moving towards a consolidated and centralised approach. Time-based process mapping was applied to identify value-adding and non-value-adding time in two manufacturing models. A large amount of the non-value-adding time in the apheresis platelet supply chain is due to transportation and to waiting for the next step in the manufacturing process to take place. Time-based process mapping provides an effective 'lens' for supply chain professionals to identify opportunities for improvement in the platelet supply chain. © 2011 The Author(s). Vox Sanguinis © 2011 International Society of Blood Transfusion.

  20. Effect of input data variability on estimations of the equivalent constant temperature time for microbial inactivation by HTST and retort thermal processing.

    PubMed

    Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo

    2011-08-01

    Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 sec HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the C. botulinum initial load and decimal thermal time variability was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms reducing the risk to 10⁻⁹ spores/container with a 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min) with an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over-processing and determine opportunities for improvement. This should include a systematic approach to consider variability in the parameters for the models used by food process engineers when designing a thermal process. The Monte Carlo procedure here presented is a tool to facilitate this task for the determination of process time at a constant lethal temperature. © 2011 Institute of Food Technologists®
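
    The kind of Monte Carlo calculation described above can be sketched briefly; the snippet below draws the decimal reduction time and initial spore load from assumed distributions (illustrative values, not the study's data) and compares the process time implied by mean inputs with the longer time needed for 95% confidence.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # Assumed input distributions (illustrative values only): decimal reduction
        # time D at the lethal temperature (min) and initial load (log10 spores/container).
        D = np.clip(rng.normal(loc=1.0, scale=0.2, size=n), 1e-3, None)
        log_N0 = rng.normal(loc=-2.0, scale=0.5, size=n)

        target_log_Nf = -9.0                      # desired risk: 10^-9 spores/container
        t_process = D * (log_N0 - target_log_Nf)  # constant-temperature process time per draw

        print("mean-input estimate: %.1f min" % (D.mean() * (log_N0.mean() - target_log_Nf)))
        print("95%% confidence estimate: %.1f min" % np.percentile(t_process, 95))

    Reducing the spread of the input distributions shrinks the gap between the two printed values, which is the variability-reduction benefit the abstract quantifies.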

  1. Clinical image processing engine

    NASA Astrophysics Data System (ADS)

    Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald

    2009-02-01

    Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we designed a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of the applications, and collects results from them. The engine consists of four modules: a listener, a router, a job manager, and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: it reduces the amount of time and manpower required to process medical images, and it reduces the turnaround time for returning results. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved efficiency dramatically.
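
    A minimal sketch of the routing-and-dispatch idea is given below; the ROUTES table, run_app, and the Study fields are hypothetical stand-ins for the XML template filters and application launchers described above, and a thread pool stands in for the engine's multithreaded execution.

        from concurrent.futures import ThreadPoolExecutor
        from dataclasses import dataclass

        @dataclass
        class Study:
            series_description: str
            files: list

        # Hypothetical routing table standing in for the XML template filters:
        # a keyword in the series description is mapped to the application that handles it.
        ROUTES = {"COLON": "colon_segmentation", "SPINE": "spine_labelling"}

        def route(study):
            for key, app in ROUTES.items():
                if key in study.series_description.upper():
                    return app
            return None

        def run_app(app_name, study):
            # Placeholder for launching the actual image processing program.
            return f"{app_name} processed {len(study.files)} slices"

        def process(study):
            app = route(study)
            return run_app(app, study) if app else "no matching application"

        studies = [Study("CT COLON PRONE", ["f1", "f2"]), Study("MR SPINE SAG", ["f3"])]
        with ThreadPoolExecutor(max_workers=4) as pool:   # multithreaded execution
            for result in pool.map(process, studies):
                print(result)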

  2. Improved model reduction and tuning of fractional-order PI(λ)D(μ) controllers for analytical rule extraction with genetic programming.

    PubMed

    Das, Saptarshi; Pan, Indranil; Das, Shantanu; Gupta, Amitava

    2012-03-01

    Genetic algorithm (GA) has been used in this study for a new approach of suboptimal model reduction in the Nyquist plane and optimal time domain tuning of proportional-integral-derivative (PID) and fractional-order (FO) PI(λ)D(μ) controllers. Simulation studies show that the new Nyquist-based model reduction technique outperforms the conventional H(2)-norm-based reduced parameter modeling technique. With the tuned controller parameters and reduced-order model parameter dataset, optimum tuning rules have been developed with a test-bench of higher-order processes via genetic programming (GP). The GP performs a symbolic regression on the reduced process parameters to evolve a tuning rule which provides the best analytical expression to map the data. The tuning rules are developed for a minimum time domain integral performance index described by a weighted sum of error index and controller effort. From the reported Pareto optimal front of the GP-based optimal rule extraction technique, a trade-off can be made between the complexity of the tuning formulae and the control performance. The efficacy of the single-gene and multi-gene GP-based tuning rules has been compared with the original GA-based control performance for the PID and PI(λ)D(μ) controllers, handling four different classes of representative higher-order processes. These rules are very useful for process control engineers, as they inherit the power of the GA-based tuning methodology, but can be easily calculated without the requirement for running the computationally intensive GA every time. Three-dimensional plots of the required variation in PID/fractional-order PID (FOPID) controller parameters with reduced process parameters have been shown as a guideline for the operator. Parametric robustness of the reported GP-based tuning rules has also been shown with credible simulation examples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
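
    The time-domain tuning step, minimizing a weighted sum of error index and controller effort, can be illustrated with a short sketch; here SciPy's differential evolution stands in for the paper's genetic algorithm, and the first-order process, weights, and bounds are assumptions made only for the example.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Illustrative first-order process dy/dt = (-y + K*u)/tau under a unit step setpoint.
        K, tau, dt, T = 1.0, 5.0, 0.05, 40.0
        t = np.arange(0.0, T, dt)

        def cost(params):
            kp, ki, kd = params
            y, integ, prev_err, J = 0.0, 0.0, 1.0, 0.0
            for tk in t:
                err = 1.0 - y
                integ += err * dt
                deriv = (err - prev_err) / dt
                prev_err = err
                u = kp * err + ki * integ + kd * deriv
                y += dt * (-y + K * u) / tau                # Euler step of the process
                J += (tk * abs(err) + 0.01 * u * u) * dt    # weighted error index + effort
            return J

        result = differential_evolution(cost, bounds=[(0.1, 20), (0.0, 5), (0.0, 5)],
                                        seed=1, maxiter=50)
        print("kp, ki, kd =", np.round(result.x, 3), "cost =", round(result.fun, 3))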

  3. Identifying causes of laboratory turnaround time delay in the emergency department.

    PubMed

    Jalili, Mohammad; Shalileh, Keivan; Mojtahed, Ali; Mojtahed, Mohammad; Moradi-Lakeh, Maziar

    2012-12-01

    Laboratory turnaround time (TAT) is an important determinant of patient stay and quality of care. Our objective is to evaluate laboratory TAT in our emergency department (ED) and to generate a simple model for identifying the primary causes for delay. We measured TATs of hemoglobin, potassium, and prothrombin time tests requested in the ED of a tertiary-care, metropolitan hospital during a consecutive one-week period. The time of different steps (physician order, nurse registration, blood-draw, specimen dispatch from the ED, specimen arrival at the laboratory, and result availability) in the test turnaround process were recorded and the intervals between these steps (order processing, specimen collection, ED waiting, transit, and within-laboratory time) and total TAT were calculated. Median TATs for hemoglobin and potassium were compared with those of the 1990 Q-Probes Study (25 min for hemoglobin and 36 min for potassium) and its recommended goals (45 min for 90% of tests). Intervals were compared according to the proportion of TAT they comprised. Median TATs (170 min for 132 hemoglobin tests, 225 min for 172 potassium tests, and 195.5 min for 128 prothrombin tests) were drastically longer than Q-Probes reported and recommended TATs. The longest intervals were ED waiting time and order processing.  Laboratory TAT varies among institutions, and data are sparse in developing countries. In our ED, actions to reduce ED waiting time and order processing are top priorities. We recommend utilization of this model by other institutions in settings with limited resources to identify their own priorities for reducing laboratory TAT.
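
    The interval breakdown used above can be reproduced directly from the recorded timestamps of each step; the snippet below uses hypothetical times for a single test (not the study's data) to split the total TAT into the five intervals and report each one's share of the total.

        from datetime import datetime

        # Hypothetical timestamps for one potassium test (not the study's data).
        steps = {"order": "2012-05-01 10:05", "registration": "2012-05-01 10:20",
                 "blood_draw": "2012-05-01 10:50", "dispatch": "2012-05-01 11:40",
                 "lab_arrival": "2012-05-01 11:55", "result": "2012-05-01 12:35"}
        ts = {k: datetime.strptime(v, "%Y-%m-%d %H:%M") for k, v in steps.items()}

        order = ["order", "registration", "blood_draw", "dispatch", "lab_arrival", "result"]
        labels = ["order processing", "specimen collection", "ED waiting", "transit",
                  "within-laboratory"]

        total = (ts["result"] - ts["order"]).total_seconds() / 60
        for label, a, b in zip(labels, order, order[1:]):
            minutes = (ts[b] - ts[a]).total_seconds() / 60
            print(f"{label:20s} {minutes:5.0f} min ({100 * minutes / total:4.1f}% of TAT)")
        print(f"{'total TAT':20s} {total:5.0f} min")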

  4. New algorithms for processing time-series big EEG data within mobile health monitoring systems.

    PubMed

    Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani; Harous, Saad; Navaz, Alramzana Nujum

    2017-10-01

    Recent advances in miniature biomedical sensors, mobile smartphones, wireless communications, and distributed computing technologies provide promising techniques for developing mobile health systems. Such systems are capable of reliably monitoring epileptic seizures, which are classified as chronic diseases. Three challenging issues arise in this context with regard to the transformation, compression, storage, and visualization of the big data that results from continuous recording of epileptic seizures using mobile devices. In this paper, we address these challenges by developing three new algorithms to process and analyze big electroencephalography data in a rigorous and efficient manner. The first algorithm transforms the standard European Data Format (EDF) into the standard JavaScript Object Notation (JSON) and compresses the transformed JSON data to decrease the size and transfer time and to increase the network transfer rate. The second algorithm focuses on collecting and storing the compressed files generated by the transformation and compression algorithm; the collection process is performed on the fly, after the files are decompressed. The third algorithm provides relevant real-time interaction with the signal data by prospective users. It particularly features the following capabilities: visualization of single or multiple signal channels on a smartphone device and querying of data segments. We tested and evaluated the effectiveness of our approach through a software architecture model implementing a mobile health system to monitor epileptic seizures. The experimental findings from 45 experiments are promising and efficiently satisfy the approach's objectives at the price of linearity. Moreover, the size of compressed JSON files and transfer times are reduced by 10% and 20%, respectively, while the average total time is remarkably reduced by 67% across all performed experiments. Our approach successfully develops efficient algorithms in terms of processing time, memory usage, and energy consumption while maintaining high scalability of the proposed solution. Our approach efficiently supports data partitioning and parallelism by relying on the MapReduce platform, which can help in monitoring and automatic detection of epileptic seizures. Copyright © 2017 Elsevier B.V. All rights reserved.
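
    The transform-and-compress step can be sketched with only the standard library; the snippet below assumes the channel arrays have already been read from an EDF file (a reader such as pyedflib would normally supply them), and the field names and sizes are illustrative rather than the paper's schema.

        import gzip, json
        import numpy as np

        # Hypothetical EEG segment standing in for data read from an EDF file
        # (an EDF reader such as pyedflib would supply these arrays in practice).
        fs = 256
        signals = {f"ch{i}": np.random.randn(fs * 10).round(3).tolist() for i in range(4)}
        record = {"patient": "anonymised", "fs_hz": fs, "signals": signals}

        raw = json.dumps(record).encode("utf-8")          # EDF-like content as JSON
        compressed = gzip.compress(raw, compresslevel=6)  # smaller payload for transfer
        print(f"JSON {len(raw) / 1024:.1f} kB -> gzip {len(compressed) / 1024:.1f} kB "
              f"({100 * len(compressed) / len(raw):.0f}% of original)")

        # Receiving side: decompress and restore the JSON document on the fly.
        restored = json.loads(gzip.decompress(compressed))
        assert restored["fs_hz"] == fs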

  5. Introduction to Neural Networks.

    DTIC Science & Technology

    1992-03-01

    parallel processing of information that can greatly reduce the time required to perform operations which are needed in pattern recognition. Neural network, Artificial neural network, Neural net, ANN.

  6. Tree-ring width reveals the preparation of the 1974 Mt. Etna eruption

    PubMed Central

    Seiler, Ruedi; Houlié, Nicolas; Cherubini, Paolo

    2017-01-01

    Reduced near-infrared reflectance observed in September 1973 in Skylab images of the western flank of Mt. Etna has been interpreted as an eruption precursor of the January 1974 eruption. Until now, it has been unclear when this signal started, whether it was sustained and which process(es) could have caused it. By analyzing tree-ring width time-series, we show that the reduced near-infrared precursory signal cannot be linked to a reduction in annual tree growth in the area. However, comparing the tree-ring width time-series with both remote sensing observations and volcano-seismic activity enables us to discuss the starting date of the pre-eruptive period of the 1974 eruption. PMID:28266610

  7. Resin Flow Behavior Simulation of Grooved Foam Sandwich Composites with the Vacuum Assisted Resin Infusion (VARI) Molding Process

    PubMed Central

    Zhao, Chenhui; Zhang, Guangcheng; Wu, Yibo

    2012-01-01

    The resin flow behavior in the vacuum assisted resin infusion (VARI) molding process of foam sandwich composites was studied by both visualization flow experiments and computer simulation. Both experimental and simulation results show that the distribution medium (DM) leads to a shorter mold filling time in grooved foam sandwich composites via the VARI process, and that the mold filling time decreases linearly as the DM/preform ratio increases. The pattern of the resin source has a significant influence on the filling time: a center source fills the mold faster than an edge source, a point source takes longer than a linear source, and short edge/center patterns need a longer time to fill the mold than long edge/center sources.

  8. Senescence sweetening of chip and fry processing potatoes

    USDA-ARS?s Scientific Manuscript database

    Potato storage makes the crop available over an extended time period, but increases financial risk to growers and end users. Senescence sweetening limits storage duration for chip and fry processing potatoes because it results in an unacceptable accumulation of reducing sugars that result in dark-co...

  9. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging.

    PubMed

    Tremsin, Anton S; Perrodin, Didier; Losko, Adrian S; Vogel, Sven C; Bourke, Mark A M; Bizarri, Gregory A; Bourret, Edith D

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  10. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A. M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-04-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  11. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    PubMed Central

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A.M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-01-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes. PMID:28425461

  12. Sensitive high-throughput screening for the detection of reducing sugars.

    PubMed

    Mellitzer, Andrea; Glieder, Anton; Weis, Roland; Reisinger, Christoph; Flicker, Karlheinz

    2012-01-01

    The exploitation of renewable resources for the production of biofuels relies on efficient processes for the enzymatic hydrolysis of lignocellulosic materials. The development of enzymes and strains for these processes requires reliable and fast activity-based screening assays. Additionally, these assays are also required to operate on the microscale and on the high-throughput level. Herein, we report the development of a highly sensitive reducing-sugar assay in a 96-well microplate screening format. The assay is based on the formation of osazones from reducing sugars and para-hydroxybenzoic acid hydrazide. By using this sensitive assay, the enzyme loads and conversion times during lignocellulose hydrolysis can be reduced, thus allowing higher throughput. The assay is about five times more sensitive than the widely applied dinitrosalicylic acid based assay and can reliably detect reducing sugars down to 10 μM. The assay-specific variation over one microplate was determined for three different lignocellulolytic enzymes and ranges from 2 to 8%. Furthermore, the assay was combined with a microscale cultivation procedure for the activity-based screening of Pichia pastoris strains expressing functional Thermomyces lanuginosus xylanase A, Trichoderma reesei β-mannanase, or T. reesei cellobiohydrolase 2. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A review of channel selection algorithms for EEG signal processing

    NASA Astrophysics Data System (ADS)

    Alotaiby, Turky; El-Samie, Fathi E. Abd; Alshebeili, Saleh A.; Ahmad, Ishtiaq

    2015-12-01

    Digital processing of electroencephalography (EEG) signals is now widely used in a variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed, with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise due to the utilization of unnecessary channels, for the purpose of improving the performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and wavelet transform have been used for feature extraction and hence for channel selection in most channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used for the evaluation of the selected subset of channels. In this paper, we survey the recent developments in the field of EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.
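
    As a toy illustration of the filtering style of evaluation mentioned in the survey, the sketch below ranks the channels of a synthetic recording by band power in an assumed task-relevant band and keeps the top k; the band, channel count, and criterion are illustrative choices, not recommendations from the review.

        import numpy as np

        # Hypothetical multichannel EEG segment: 32 channels x 10 s at 128 Hz.
        rng = np.random.default_rng(0)
        fs, n_ch = 128, 32
        eeg = rng.standard_normal((n_ch, fs * 10))

        # Filtering-style criterion: rank channels by power in an assumed 8-30 Hz band
        # and keep the k highest-ranked channels.
        freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
        band = (freqs >= 8) & (freqs <= 30)
        power = (np.abs(np.fft.rfft(eeg, axis=1)) ** 2)[:, band].mean(axis=1)

        k = 8
        selected = np.argsort(power)[::-1][:k]
        print("channels kept:", sorted(selected.tolist()))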

  14. Improvement of nuclear power plants within the perspective of applications of lean manufacturing practices

    NASA Astrophysics Data System (ADS)

    Malek, A. K.; Muhammad, H. I.; Rosmaini, A.; Alaa, A. S.; Falah, A. M.

    2017-09-01

    Development and process improvement are essential to companies and factories of all kinds because of the cost, time, and risk that can be avoided; these concerns apply equally to nuclear power stations, where such essential demands cannot be ignored. Lean management is one of the recent trends in management systems; it is described as a system that increases customer value and reduces waste in an industry or a power plant. There is therefore an urgent need to ensure development and improvement in nuclear power plants, whether pre-established, in the process of being established, or at the management and production stage. According to the study, all of these stages are closely tied to the need to operationalize and apply lean manufacturing practices, whose application makes an effective and clear contribution to reducing costs, controlling production processes, and reducing the future risks to which the station could be exposed.

  15. Flexibility of orthographic and graphomotor coordination during a handwritten copy task: effect of time pressure

    PubMed Central

    Sausset, Solen; Lambert, Eric; Olive, Thierry

    2013-01-01

    The coordination of the various processes involved in language production is a subject of keen debate in writing research. Some authors hold that writing processes can be flexibly coordinated according to task demands, whereas others claim that process coordination is entirely inflexible. For instance, orthographic planning has been shown to be resource-dependent during handwriting, but inflexible in typing, even under time pressure. The present study therefore went one step further in studying flexibility in the coordination of orthographic processing and graphomotor execution, by measuring the impact of time pressure during a handwritten copy task. Orthographic and graphomotor processes were observed via syllable processing. Writers copied out two- and three-syllable words three times in a row, with and without time pressure. Latencies and letter measures at syllable boundaries were analyzed. We hypothesized that if coordination is flexible and varies according to task demands, it should be modified by time pressure, affecting both latency before execution and duration of execution. We therefore predicted that the extent of syllable processing before execution would be reduced under time pressure and, as a consequence, syllable effects during execution would be more salient. Results showed, however, that time pressure interacted neither with syllable number nor with syllable structure. Accordingly, syllable processing appears to remain the same regardless of time pressure. The flexibility of process coordination during handwriting is discussed, as is the operationalization of time pressure constraints. PMID:24319435

  16. Real-time speckle reduction in optical coherence tomography using the dual window method

    PubMed Central

    Zhao, Yang; Chu, Kengyeh K.; Eldridge, Will J.; Jelly, Evan T.; Crose, Michael; Wax, Adam

    2018-01-01

    Speckle is an intrinsic noise of interferometric signals which reduces contrast and degrades the quality of optical coherence tomography (OCT) images. Here, we present a frequency compounding speckle reduction technique using the dual window (DW) method. Using the DW method, speckle noise is reduced without the need to acquire multiple frames. A ~25% improvement in the contrast-to-noise ratio (CNR) was achieved using the DW speckle reduction method with only minimal loss (~17%) in axial resolution. We also demonstrate that real-time speckle reduction can be achieved at a B-scan rate of ~21 frames per second using a graphic processing unit (GPU). The DW speckle reduction technique can work on any existing OCT instrument without further system modification or extra components. This makes it applicable both in real-time imaging systems and during post-processing. PMID:29552398

  17. Toward Implementing Patient Flow in a Cancer Treatment Center to Reduce Patient Waiting Time and Improve Efficiency.

    PubMed

    Suss, Samuel; Bhuiyan, Nadia; Demirli, Kudret; Batist, Gerald

    2017-06-01

    Outpatient cancer treatment centers can be considered as complex systems in which several types of medical professionals and administrative staff must coordinate their work to achieve the overall goals of providing quality patient care within budgetary constraints. In this article, we use analytical methods that have been successfully employed for other complex systems to show how a clinic can simultaneously reduce patient waiting times and non-value added staff work in a process that has a series of steps, more than one of which involves a scarce resource. The article describes the system model and the key elements in the operation that lead to staff rework and patient queuing. We propose solutions to the problems and provide a framework to evaluate clinic performance. At the time of this report, the proposals are in the process of implementation at a cancer treatment clinic in a major metropolitan hospital in Montreal, Canada.

  18. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
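
    The core of an exhaustive block-matching search is compact enough to sketch directly; the version below is a generic full-search SAD matcher (not the authors' implementation) and recovers the motion vector of a synthetically shifted block.

        import numpy as np

        def block_match(prev_frame, cur_frame, block=8, search=4):
            """Full-search block matching: for each block of the current frame, find the
            displacement within +/-search pixels of the previous frame minimising SAD."""
            h, w = cur_frame.shape
            motion = np.zeros((h // block, w // block, 2), dtype=int)
            for by in range(0, h - block + 1, block):
                for bx in range(0, w - block + 1, block):
                    target = cur_frame[by:by + block, bx:bx + block].astype(int)
                    best, best_v = np.inf, (0, 0)
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            y, x = by + dy, bx + dx
                            if y < 0 or x < 0 or y + block > h or x + block > w:
                                continue
                            ref = prev_frame[y:y + block, x:x + block].astype(int)
                            sad = np.abs(target - ref).sum()
                            if sad < best:
                                best, best_v = sad, (dy, dx)
                    motion[by // block, bx // block] = best_v
            return motion

        # Toy frames: shift a bright square by two pixels and recover the motion field.
        prev = np.zeros((32, 32), dtype=np.uint8)
        prev[8:16, 8:16] = 255
        cur = np.roll(prev, shift=(2, 2), axis=(0, 1))
        print(block_match(prev, cur)[1, 1])   # expect a displacement near (-2, -2)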

  19. Reducing neural network training time with parallel processing

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1995-01-01

    Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
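
    A minimal sketch of the decomposition idea, training one small network per output in parallel worker processes, is shown below; scikit-learn's MLPRegressor and a process pool stand in for the paper's neural networks and parallel hardware, and the toy two-output function replaces the expensive analysis.

        from concurrent.futures import ProcessPoolExecutor
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Toy surrogate problem: approximate two analysis outputs of a 4-variable design.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(500, 4))
        Y = np.column_stack([np.sin(X[:, 0]) + X[:, 1] ** 2,
                             np.cos(X[:, 2]) * X[:, 3]])

        def train_subnet(args):
            """Train one small network on a single output (one decomposed subnetwork)."""
            X, y, hidden = args
            net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000, random_state=0)
            net.fit(X, y)
            return net.score(X, y)   # R^2 of the trained subnetwork

        if __name__ == "__main__":
            jobs = [(X, Y[:, 0], 8), (X, Y[:, 1], 8)]
            with ProcessPoolExecutor() as pool:          # subnetworks trained in parallel
                scores = list(pool.map(train_subnet, jobs))
            print("R^2 of each subnetwork:", [round(s, 3) for s in scores])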

  20. Time Is Not on Our Side: How Radiology Practices Should Manage Customer Queues.

    PubMed

    Loving, Vilert A; Ellis, Richard L; Rippee, Robert; Steele, Joseph R; Schomer, Donald F; Shoemaker, Stowe

    2017-11-01

    As health care shifts toward patient-centered care, wait times have received increasing scrutiny as an important metric for patient satisfaction. Long queues form when radiology practices inefficiently service their customers, leading to customer dissatisfaction and a lower perception of value. This article describes a four-step framework for radiology practices to resolve problematic queues: (1) analyze factors contributing to queue formation; (2) improve processes to reduce service times; (3) reduce variability; (4) address the psychology of queues. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  1. Increasing Speed of Processing With Action Video Games

    PubMed Central

    Dye, Matthew W.G.; Green, C. Shawn; Bavelier, Daphne

    2010-01-01

    In many everyday situations, speed is of the essence. However, fast decisions typically mean more mistakes. To this day, it remains unknown whether reaction times can be reduced with appropriate training, within one individual, across a range of tasks, and without compromising accuracy. Here we review evidence that the very act of playing action video games significantly reduces reaction times without sacrificing accuracy. Critically, this increase in speed is observed across various tasks beyond game situations. Video gaming may therefore provide an efficient training regimen to induce a general speeding of perceptual reaction times without decreases in accuracy of performance. PMID:20485453

  2. Analysis of microarray leukemia data using an efficient MapReduce-based K-nearest-neighbor classifier.

    PubMed

    Kumar, Mukesh; Rath, Nitish Kumar; Rath, Santanu Kumar

    2016-04-01

    Microarray-based gene expression profiling has emerged as an efficient technique for classification, prognosis, diagnosis, and treatment of cancer. Frequent changes in the behavior of this disease generate an enormous volume of data. Microarray data satisfies both the veracity and velocity properties of big data, as it keeps changing with time. Therefore, the analysis of microarray datasets in a small amount of time is essential. They often contain a large amount of expression data, but only a fraction of it comprises genes that are significantly expressed. The precise identification of genes of interest that are responsible for causing cancer is imperative in microarray data analysis. Most existing schemes employ a two-phase process such as feature selection/extraction followed by classification. In this paper, various statistical methods (tests) based on MapReduce are proposed for selecting relevant features. After feature selection, a MapReduce-based K-nearest neighbor (mrKNN) classifier is also employed to classify microarray data. These algorithms are successfully implemented in a Hadoop framework. A comparative analysis is done on these MapReduce-based models using microarray datasets of various dimensions. From the obtained results, it is observed that these models consume much less execution time than conventional models in processing big data. Copyright © 2016 Elsevier Inc. All rights reserved.
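
    The map and reduce phases of a distance-based K-nearest-neighbor search can be expressed in a few lines; the sketch below simulates them in plain Python on toy two-class data, where the splits stand in for HDFS blocks and none of the names come from the paper.

        import heapq
        from collections import Counter
        from functools import reduce

        # Hypothetical gene-expression-style data: (features, label) pairs.
        train = [([1.0, 2.0], "ALL"), ([1.2, 1.9], "ALL"), ([7.8, 8.1], "AML"),
                 ([8.2, 7.7], "AML"), ([1.1, 2.3], "ALL"), ([7.5, 8.4], "AML")]
        query, k = [7.9, 8.0], 3

        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        def mapper(partition):
            """Map phase: each split emits its k locally nearest (distance, label) pairs."""
            return heapq.nsmallest(k, ((dist(x, query), y) for x, y in partition))

        def reducer(a, b):
            """Reduce phase: merge candidate lists, keeping only the global k nearest."""
            return heapq.nsmallest(k, a + b)

        splits = [train[:3], train[3:]]                 # stand-ins for HDFS blocks
        nearest = reduce(reducer, map(mapper, splits))
        print(Counter(label for _, label in nearest).most_common(1)[0][0])  # majority vote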

  3. Reducing the time rabbit sperm are held at 5 °C negatively affects their fertilizing ability after cryopreservation.

    PubMed

    Mocé, E; Blanch, E; Talaván, A; Viudes de Castro, M P

    2014-10-15

    Cooling sperm to and equilibrating them at 5 °C require the most time in any sperm cryopreservation protocol. Reducing the time required for these phases would simplify sperm freezing protocols and allow a greater number of ejaculates to be processed and frozen in a given time. This study determined how holding rabbit sperm at 5 °C for different lengths of time (0, 10, 15, 20, 30, or 45 minutes) affected the quality of rabbit sperm, measured by in vitro assays, and whether reducing the cooling time to only 10 minutes affected the fertilizing ability of the sperm. Reducing the time sperm were held at 5 °C to 10 minutes did not affect the in vitro quality of the sperm (percent motile and with intact plasma membranes), although eliminating the cooling phase completely (directly freezing the sperm from room temperature) decreased in vitro assessed sperm quality (P<0.01). However, reducing the time sperm were held at 5 °C from 45 to 10 minutes negatively affected the fertilizing ability of the sperm in vivo (P<0.05). In conclusion, completely eliminating the cooling of rabbit sperm to 5 °C before freezing is detrimental to rabbit sperm cryosurvival, and although shortening the time sperm are held at 5 °C to 10 minutes does not reduce in vitro sperm quality, it does reduce the fertility of rabbit sperm. Therefore, the length of time rabbit sperm equilibrate at 5 °C is crucial to their fertilizing ability and must be longer than 10 minutes. Currently, it is not known whether holding rabbit sperm at 5 °C for less than 45 minutes will affect sperm fertilizing ability. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Strategies to Reduce the Negative Effects of Spoken Explanatory Text on Integrated Tasks

    ERIC Educational Resources Information Center

    Singh, Anne-Marie; Marcus, Nadine; Ayres, Paul

    2017-01-01

    Two experiments involving 125 grade-10 students learning about commerce investigated strategies to overcome the transient information effect caused by explanatory spoken text. The transient information effect occurs when learning is reduced as a result of information disappearing before the learner has time to adequately process it, or link it…

  5. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    PubMed

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes acting on different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we consider a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with a smaller number of variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.

  6. Using lean principles to improve outpatient adult infusion clinic chemotherapy preparation turnaround times.

    PubMed

    Lamm, Matthew H; Eckel, Stephen; Daniels, Rowell; Amerine, Lindsey B

    2015-07-01

    The workflow and chemotherapy preparation turnaround times at an adult infusion clinic were evaluated to identify opportunities to optimize workflow and efficiency. A three-phase study using Lean Six Sigma methodology was conducted. In phase 1, chemotherapy turnaround times in the adult infusion clinic were examined one year after the interim goal of a 45-minute turnaround time was established. Phase 2 implemented various experiments including a five-day Kaizen event, using lean principles in an effort to decrease chemotherapy preparation turnaround times in a controlled setting. Phase 3 included the implementation of process-improvement strategies identified during the Kaizen event, coupled with a final refinement of operational processes. In phase 1, the mean turnaround time for all chemotherapy preparations decreased from 60 to 44 minutes, and a mean of 52 orders for adult outpatient chemotherapy infusions was received each day. After installing new processes, the mean turnaround time had improved to 37 minutes for each chemotherapy preparation in phase 2. In phase 3, the mean turnaround time decreased from 37 to 26 minutes. The overall mean turnaround time was reduced by 26 minutes, representing a 57% decrease in turnaround times in 19 months through the elimination of waste and the implementation of lean principles. This reduction was accomplished through increased efficiencies in the workplace, with no addition of human resources. Implementation of Lean Six Sigma principles improved workflow and efficiency at an adult infusion clinic and reduced the overall chemotherapy turnaround times from 60 to 26 minutes. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  7. Numerical Analysis of Heat Transfer During Quenching Process

    NASA Astrophysics Data System (ADS)

    Madireddi, Sowjanya; Krishnan, Krishnan Nambudiripad; Reddy, Ammana Satyanarayana

    2018-04-01

    A numerical model is developed to simulate the immersion quenching process of metals. The time of quench plays an important role if the process involves a defined step-quenching schedule to obtain the desired characteristics. The lumped heat capacity analysis used for this purpose requires the value of the heat transfer coefficient, whose evaluation requires extensive experimental data. Experimentation on a sample work piece may not represent the actual component, which may vary in dimension. A fluid-structure interaction technique with a coupled interface between the solid (metal) and the liquid (quenchant) is used for the simulations. The initial period of quenching shows boiling heat transfer with high heat transfer coefficients (5000-2.5 × 10⁵ W/m²K). For work pieces of equal dimensions, shape has little influence on the cooling rate. Non-uniformity in hardness at sharp corners can be reduced by rounding off the edges. For a square piece of 20 mm thickness with a 3 mm fillet radius, this difference is reduced by 73%. The model can be used for any metal-quenchant combination to obtain time-temperature data without the necessity of experimentation.
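
    For reference, the lumped heat capacity relation that requires the heat transfer coefficient h can be written in its standard textbook form (generic symbols, not the paper's notation); it makes clear why h must be known before the time-temperature history can be predicted:

        \frac{T(t) - T_\infty}{T_0 - T_\infty} = \exp\!\left(-\frac{h A_s}{\rho V c_p}\, t\right),
        \qquad \text{valid for } \mathrm{Bi} = \frac{h L_c}{k_s} \lesssim 0.1

    Here T_0 is the initial temperature, T_infinity the quenchant temperature, A_s the wetted surface area, and rho, V, c_p the density, volume, and specific heat of the work piece; the Biot-number condition limits the analysis to pieces with small internal temperature gradients.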

  8. SU-D-209-03: Radiation Dose Reduction Using Real-Time Image Processing in Interventional Radiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanal, K; Moirano, J; Zamora, D

    Purpose: To characterize changes in radiation dose after introducing a new real-time image processing technology in interventional radiology systems. Methods: Interventional radiology (IR) procedures are increasingly complex, at times requiring substantial time and radiation dose. The risk of inducing tissue reactions as well as long-term stochastic effects such as radiation-induced cancer is not trivial. To reduce this risk, IR systems are increasingly equipped with dose reduction technologies. Recently, ClarityIQ (Philips Healthcare) technology was installed in our existing neuroradiology IR (NIR) and vascular IR (VIR) suites. ClarityIQ includes real-time image processing that reduces noise/artifacts, enhances images, and sharpens edges while also reducing radiation dose rates. We reviewed 412 NIR (175 pre- and 237 post-ClarityIQ) procedures and 329 VIR (156 pre- and 173 post-ClarityIQ) procedures performed at our institution pre- and post-ClarityIQ implementation. NIR procedures were primarily classified as interventional or diagnostic. VIR procedures included drain port, drain placement, tube change, mesenteric, and implanted venous procedures. Air kerma (AK, in units of mGy) was documented for all the cases using a commercial radiation exposure management system. Results: When considering all NIR procedures, median AK decreased from 1194 mGy to 561 mGy. When considering all VIR procedures, median AK decreased from 49 to 14 mGy. Both NIR and VIR exhibited a decrease in AK exceeding 50% after ClarityIQ implementation, a statistically significant (p<0.05) difference. Of the 5 most common VIR procedures, all median AK values decreased, but significance (p<0.05) was only reached in venous access (N=53), angio mesenteric (N=41), and drain placement procedures (N=31). Conclusion: ClarityIQ can reduce dose significantly for both NIR and VIR procedures. Image quality was not assessed in conjunction with the dose reduction.

  9. Analysis of the United States Marine Corps Continuous Process Improvement Program Applied to the Contracting Process at Marine Corps Regional Contracting Office - Southwest

    DTIC Science & Technology

    2007-12-01

    Lean tools discussed include standard operating procedures, visual displays for workflow and communication, total productive maintenance, and poka-yoke techniques to prevent errors. Improving or eliminating non-value-added process steps and reducing the seven common wastes will decrease the total time of a process.

  10. Airport security inspection process model and optimization based on GSPN

    NASA Astrophysics Data System (ADS)

    Mao, Shuainan

    2018-04-01

    Aiming to improve the efficiency of the airport security inspection process, a Generalized Stochastic Petri Net is used to model the security inspection process. The model is then used to analyze the bottleneck of the airport security inspection process. A solution to the bottleneck is given: adding a place for passengers to remove their clothes and an additional X-ray detector can significantly improve efficiency and reduce waiting time.

  11. Optimal Control of the Valve Based on Traveling Wave Method in the Water Hammer Process

    NASA Astrophysics Data System (ADS)

    Cao, H. Z.; Wang, F.; Feng, J. L.; Tan, H. P.

    2011-09-01

    Valve regulation is an effective method for process control during water hammer. The principle of d'Alembert's traveling wave theory was used in this paper to construct the exact analytical solution of the water hammer, and the optimal speed law of the valve that reduces the water hammer pressure to the maximum extent was obtained. Combining this law with the valve characteristic curve, the relationship between valve opening and time was obtained, which can be used to guide the valve-closing process and reduce the water hammer pressure to the maximum extent.
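
    In one common sign convention, the d'Alembert traveling-wave form of the water hammer solution can be written as below (generic textbook symbols, not necessarily the paper's notation), where H is the piezometric head, V the mean velocity, g gravity, a the wave speed, and F and f the waves traveling in the +x and -x directions:

        H(x,t) = H_0 + F\!\left(t - \frac{x}{a}\right) + f\!\left(t + \frac{x}{a}\right),
        \qquad
        V(x,t) = V_0 - \frac{g}{a}\left[F\!\left(t - \frac{x}{a}\right) - f\!\left(t + \frac{x}{a}\right)\right]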

  12. Large-scale seismic waveform quality metric calculation using Hadoop

    DOE PAGES

    Magana-Zook, Steven; Gaylord, Jessie M.; Knapp, Douglas R.; ...

    2016-05-27

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data, of which 5.1 TB were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. We conducted these experiments multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.
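
    A minimal sketch of how a per-trace quality metric might be mapped over waveforms with Spark is shown below; it assumes pyspark is installed, uses an in-memory list in place of reading from HDFS, and metric_of_trace is a toy placeholder rather than one of the DMC's actual metrics.

        # Assumes pyspark is installed; run locally or on a cluster.
        import numpy as np
        from pyspark.sql import SparkSession

        def metric_of_trace(trace):
            """Toy waveform quality metric: fraction of samples that are not flat-lined."""
            station, samples = trace
            samples = np.asarray(samples, dtype=float)
            return station, float(np.mean(np.abs(np.diff(samples)) > 0))

        spark = SparkSession.builder.appName("waveform-metrics").getOrCreate()
        traces = [("STA%02d" % i, np.random.randn(1000).tolist()) for i in range(100)]

        metrics = (spark.sparkContext
                   .parallelize(traces, numSlices=8)    # stand-in for reading from HDFS
                   .map(metric_of_trace)
                   .collect())
        print(metrics[:3])
        spark.stop()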

  13. Large-scale seismic waveform quality metric calculation using Hadoop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magana-Zook, Steven; Gaylord, Jessie M.; Knapp, Douglas R.

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data, of which 5.1 TB were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of ~0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. We conducted these experiments multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.

  14. Does Living Outside of a Major City Impact on the Timeliness of Chlamydia Treatment? A Multicenter Cross-Sectional Analysis.

    PubMed

    Foster, Rosalind; Ali, Hammad; Crowley, Margaret; Dyer, Roisin; Grant, Kim; Lenton, Joanne; Little, Christine; Knight, Vickie; Read, Phillip; Donovan, Basil; McNulty, Anna; Guy, Rebecca

    2016-08-01

    Timely treatment of Chlamydia trachomatis infection reduces complications and onward transmission. We assessed client, process, and clinic factors associated with treatment delays at sexual health clinics in New South Wales, Australia. A retrospective review of 450 consecutive clients with positive chlamydia results (not treated at the time of the consultation) was undertaken at 6 clinics (1 urban, 3 regional, and 2 remote) from October 2013. Mean and median times to treatment were calculated, overall and stratified by process steps and clinic location. Nearly all clients (446, 99%) were treated, with 398 (88%) treated in ≤14 days and 277 (62%) in ≤7 days. The mean time-to-treatment was 22 days at remote clinics, 13 days at regional clinics, and 8 days at the urban clinic (P < 0.001). The mean time between laboratory receipt of the specimen and reporting of the result was 4.9 days in the remote clinics, 4.1 in the regional clinics, and 2.7 days in the urban clinic (P < 0.001); and the mean time between the clinician receiving the result and client treatment was 15, 5, and 3 days (P < 0.01), respectively. At participating clinics, treatment uptake was high; however, treatment delays were greater with increasing remoteness. Strategies to reduce the time-to-treatment should be explored, such as point-of-care testing, faster specimen processing, dedicated clinical time to follow up recalls, SMS results to clients, and taking treatment out to clients.

  15. Investigation of FPGA-Based Real-Time Adaptive Digital Pulse Shaping for High-Count-Rate Applications

    NASA Astrophysics Data System (ADS)

    Saxena, Shefali; Hawari, Ayman I.

    2017-07-01

    Digital signal processing techniques have been widely used in radiation spectrometry to provide improved stability and performance with compact physical size compared with traditional analog signal processing. In this paper, field-programmable gate array (FPGA)-based adaptive digital pulse shaping techniques are investigated for real-time signal processing. A National Instruments (NI) 5761 14-bit, 250-MS/s adapter module is used for digitizing the preamplifier pulses of a high-purity germanium (HPGe) detector. Digital pulse processing algorithms are implemented on the NI PXIe-7975R reconfigurable FPGA (Kintex-7) using the LabVIEW FPGA module. Based on the time separation between successive input pulses, the adaptive shaping algorithm selects the optimum shaping parameters (rise time and flattop time of the trapezoid-shaping filter) for each incoming signal. A digital Sallen-Key low-pass filter is implemented to enhance the signal-to-noise ratio and reduce baseline drifting in trapezoid shaping. A recursive trapezoid-shaping filter algorithm is employed for pole-zero compensation of the exponentially decaying (with two decay constants) preamplifier pulses of an HPGe detector. It allows extraction of pulse height information at the beginning of each pulse, thereby reducing pulse pileup and increasing throughput. The algorithms for the RC-CR2 timing filter, baseline restoration, pile-up rejection, and pulse height determination are digitally implemented for radiation spectroscopy. Traditionally, under high-count-rate conditions, a shorter shaping time is preferred to achieve high throughput, which degrades energy resolution. In this paper, experimental results are presented for varying count-rate and pulse shaping conditions. With adaptive shaping, increased throughput is achieved while preserving the energy resolution obtained with longer shaping times.
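
    The recursive trapezoid-shaping idea can be sketched compactly; the version below follows the widely used single-exponential recursion (finite differences, accumulation, and a pole-zero weight), with assumed parameters, and is only an illustration of the principle rather than the authors' two-decay-constant FPGA implementation.

        import numpy as np

        def trapezoid_filter(v, rise, flat, tau):
            """Recursive trapezoidal shaper for a single exponential decay; 'tau' is the
            decay constant in samples (the exact pole-zero weight is 1/(exp(1/tau)-1))."""
            k, l = rise, rise + flat
            M = tau                                  # approximate pole-zero weight
            v = np.concatenate([np.zeros(l + k), np.asarray(v, dtype=float)])
            d = v[l + k:] - v[l:-k] - v[k:-l] + v[:-(l + k)]
            p = np.cumsum(d)
            s = np.cumsum(p + M * d)
            return s / (M * k)                       # flat top ~ pulse amplitude

        # Toy preamplifier-like pulse: step of amplitude 1 decaying with tau = 2000 samples.
        n = np.arange(8000)
        pulse = np.where(n >= 1000, np.exp(-(n - 1000) / 2000.0), 0.0)
        shaped = trapezoid_filter(pulse, rise=200, flat=100, tau=2000.0)
        print("flat-top height ~", round(float(shaped.max()), 3))   # close to 1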

  16. Ramp Technology and Intelligent Processing in Small Manufacturing

    NASA Technical Reports Server (NTRS)

    Rentz, Richard E.

    1992-01-01

    To address the issues of excessive inventories and increasing procurement lead times, the Navy is actively pursuing flexible computer integrated manufacturing (FCIM) technologies, integrated by communication networks to respond rapidly to its requirements for parts. The Rapid Acquisition of Manufactured Parts (RAMP) program, initiated in 1986, is an integral part of this effort. The RAMP program's goal is to reduce the current average production lead times experienced by the Navy's inventory control points by a factor of 90 percent. The manufacturing engineering component of the RAMP architecture utilizes an intelligent processing technology built around a knowledge-based shell provided by ICAD, Inc. Rules and data bases in the software simulate an expert manufacturing planner's knowledge of shop processes and equipment. This expert system can use Product Data Exchange using STEP (PDES) data to determine what features the required part has, what material is required to manufacture it, what machines and tools are needed, and how the part should be held (fixtured) for machining, among other factors. The program's rule base then indicates, for example, how to make each feature, in what order to make it, and to which machines on the shop floor the part should be routed for processing. This information becomes part of the shop work order. The process planning function under RAMP greatly reduces the time and effort required to complete a process plan. Since the PDES file that drives the intelligent processing is 100 percent complete and accurate to start with, the potential for costly errors is greatly diminished.

  17. Ramp technology and intelligent processing in small manufacturing

    NASA Astrophysics Data System (ADS)

    Rentz, Richard E.

    1992-04-01

    To address the issues of excessive inventories and increasing procurement lead times, the Navy is actively pursuing flexible computer integrated manufacturing (FCIM) technologies, integrated by communication networks to respond rapidly to its requirements for parts. The Rapid Acquisition of Manufactured Parts (RAMP) program, initiated in 1986, is an integral part of this effort. The RAMP program's goal is to reduce the current average production lead times experienced by the Navy's inventory control points by a factor of 90 percent. The manufacturing engineering component of the RAMP architecture utilizes an intelligent processing technology built around a knowledge-based shell provided by ICAD, Inc. Rules and data bases in the software simulate an expert manufacturing planner's knowledge of shop processes and equipment. This expert system can use Product Data Exchange using STEP (PDES) data to determine what features the required part has, what material is required to manufacture it, what machines and tools are needed, and how the part should be held (fixtured) for machining, among other factors. The program's rule base then indicates, for example, how to make each feature, in what order to make it, and to which machines on the shop floor the part should be routed for processing. This information becomes part of the shop work order. The process planning function under RAMP greatly reduces the time and effort required to complete a process plan. Since the PDES file that drives the intelligent processing is 100 percent complete and accurate to start with, the potential for costly errors is greatly diminished.

  18. A Demonstration of Nitrogen Dynamics in Oxic and Hypoxic Soils and Sediments.

    ERIC Educational Resources Information Center

    Ambler, Julie; Pelovitz, Kelly; Ladd, Timothy; Steucek, Guy

    2001-01-01

    Describes an experiment in which the incubation time to observe denitrification and other processes of the nitrogen cycle is reduced from 7-14 days to 24-48 hours. Presents calculations of processes in the nitrogen cycle in the form of a dichotomous key. (SAH)

  19. Process Redesign of the Norwegian Navy Materiel Command’s Replenishment of Inventory Items

    DTIC Science & Technology

    1997-12-01

    procurement offices into one. The second proposal is to introduce and use electronic commerce in the replenishment process. It is concluded that both...redesign proposals will reduce administrative lead-time, variability and hence cost. Benefits from an introduction of electronic commerce will yield a yearly

  20. Event-Based Processing of Neutron Scattering Data

    DOE PAGES

    Peterson, Peter F.; Campbell, Stuart I.; Reuter, Michael A.; ...

    2015-09-16

    Many of the world's time-of-flight spallation neutron sources are migrating to the recording of individual neutron events. This provides new opportunities in data processing, not the least of which is to filter the events by correlating them with logs of sample environment and other ancillary equipment. This paper describes techniques for processing neutron scattering data acquired in event mode that preserve event information all the way to a final spectrum, including any necessary corrections or normalizations. This results in smaller final errors, while significantly reducing processing time and memory requirements in typical experiments. Results with traditional histogramming techniques will be shown for comparison.

  1. Time takes space: selective effects of multitasking on concurrent spatial processing.

    PubMed

    Mäntylä, Timo; Coni, Valentina; Kubik, Veit; Todorov, Ivo; Del Missier, Fabio

    2017-08-01

    Many everyday activities require coordination and monitoring of complex relations of future goals and deadlines. Cognitive offloading may provide an efficient strategy for reducing control demands by representing future goals and deadlines as a pattern of spatial relations. We tested the hypothesis that multiple-task monitoring involves time-to-space transformational processes, and that these spatial effects are selective with greater demands on coordinate (metric) than categorical (nonmetric) spatial relation processing. Participants completed a multitasking session in which they monitored four series of deadlines, running on different time scales, while making concurrent coordinate or categorical spatial judgments. We expected and found that multitasking taxes concurrent coordinate, but not categorical, spatial processing. Furthermore, males showed a better multitasking performance than females. These findings provide novel experimental evidence for the hypothesis that efficient multitasking involves metric relational processing.

  2. A single aerobic exercise session accelerates movement execution but not central processing.

    PubMed

    Beyer, Kit B; Sage, Michael D; Staines, W Richard; Middleton, Laura E; McIlroy, William E

    2017-03-27

    Previous research has demonstrated that aerobic exercise has disparate effects on speed of processing and movement execution. In simple and choice reaction tasks, aerobic exercise appears to increase speed of movement execution while speed of processing is unaffected. In the flanker task, aerobic exercise has been shown to reduce response time on incongruent trials more than congruent trials, purportedly reflecting a selective influence on speed of processing related to cognitive control. However, it is unclear how changes in speed of processing and movement execution contribute to these exercise-induced changes in response time during the flanker task. This study examined how a single session of aerobic exercise influences speed of processing and movement execution during a flanker task using electromyography to partition response time into reaction time and movement time, respectively. Movement time decreased during aerobic exercise regardless of flanker congruence but returned to pre-exercise levels immediately after exercise. Reaction time during incongruent flanker trials decreased over time in both an aerobic exercise and non-exercise control condition indicating it was not specifically influenced by exercise. This disparate influence of aerobic exercise on movement time and reaction time indicates the importance of partitioning response time when examining the influence of aerobic exercise on speed of processing. The decrease in reaction time over time independent of aerobic exercise indicates that interpreting pre-to-post exercise changes in behavior requires caution. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. High-Volume Production of Lightweight Multijunction Solar Cells

    NASA Technical Reports Server (NTRS)

    Youtsey, Christopher

    2015-01-01

    MicroLink Devices, Inc., has transitioned its 6-inch epitaxial lift-off (ELO) solar cell fabrication process into a manufacturing platform capable of sustaining large-volume production. This Phase II project improves the ELO process by reducing cycle time and increasing the yield of large-area devices. In addition, all critical device fabrication processes have transitioned to 6-inch production tool sets designed for volume production. An emphasis on automated cassette-to-cassette and batch processes minimizes operator dependence and cell performance variability. MicroLink Devices established a pilot production line capable of at least 1,500 6-inch wafers per month at greater than 80 percent yield. The company also increased the yield and manufacturability of the 6-inch reclaim process, which is crucial to reducing the cost of the cells.

  4. Overlap of movement planning and movement execution reduces reaction time.

    PubMed

    Orban de Xivry, Jean-Jacques; Legrain, Valéry; Lefèvre, Philippe

    2017-01-01

    Motor planning is the process of preparing the appropriate motor commands in order to achieve a goal. This process has largely been thought to occur before movement onset and traditionally has been associated with reaction time. However, in a virtual line bisection task we observed an overlap between movement planning and execution. In this task performed with a robotic manipulandum, we observed that participants (n = 30) made straight movements when the line was in front of them (near target) but often made curved movements when the same target was moved sideways (far target, which had the same orientation) in such a way that they crossed the line perpendicular to its orientation. Unexpectedly, movements to the far targets had shorter reaction times than movements to the near targets (mean difference: 32 ms, SE: 5 ms, max: 104 ms). In addition, the curvature of the movement modulated reaction time. A larger increase in movement curvature from the near to the far target was associated with a larger reduction in reaction time. These highly curved movements started with a transport phase during which accuracy demands were not taken into account. We conclude that an accuracy demand imposes a reaction time penalty if processed before movement onset. This penalty is reduced if the start of the movement consists of a transport phase and if the movement plan can be refined with respect to accuracy demands later in the movement, hence demonstrating an overlap between movement planning and execution. In the planning of a movement, the brain has the opportunity to delay the incorporation of accuracy requirements of the motor plan in order to reduce the reaction time by up to 100 ms (average: 32 ms). Such shortening of reaction time is observed here when the first phase of the movement consists of a transport phase. This forces us to reconsider the hypothesis that motor plans are fully defined before movement onset. Copyright © 2017 the American Physiological Society.

  5. Overlap of movement planning and movement execution reduces reaction time

    PubMed Central

    Legrain, Valéry; Lefèvre, Philippe

    2016-01-01

    Motor planning is the process of preparing the appropriate motor commands in order to achieve a goal. This process has largely been thought to occur before movement onset and traditionally has been associated with reaction time. However, in a virtual line bisection task we observed an overlap between movement planning and execution. In this task performed with a robotic manipulandum, we observed that participants (n = 30) made straight movements when the line was in front of them (near target) but often made curved movements when the same target was moved sideways (far target, which had the same orientation) in such a way that they crossed the line perpendicular to its orientation. Unexpectedly, movements to the far targets had shorter reaction times than movements to the near targets (mean difference: 32 ms, SE: 5 ms, max: 104 ms). In addition, the curvature of the movement modulated reaction time. A larger increase in movement curvature from the near to the far target was associated with a larger reduction in reaction time. These highly curved movements started with a transport phase during which accuracy demands were not taken into account. We conclude that an accuracy demand imposes a reaction time penalty if processed before movement onset. This penalty is reduced if the start of the movement consists of a transport phase and if the movement plan can be refined with respect to accuracy demands later in the movement, hence demonstrating an overlap between movement planning and execution. NEW & NOTEWORTHY In the planning of a movement, the brain has the opportunity to delay the incorporation of accuracy requirements of the motor plan in order to reduce the reaction time by up to 100 ms (average: 32 ms). Such shortening of reaction time is observed here when the first phase of the movement consists of a transport phase. This forces us to reconsider the hypothesis that motor plans are fully defined before movement onset. PMID:27733598

  6. Improving flow in the OR.

    PubMed

    Blouin-Delisle, Charles Hubert; Drolet, Renee; Gagnon, Serge; Turcotte, Stephane; Boutet, Sylvie; Coulombe, Martin; Daneau, Eric

    2018-03-12

    Purpose: The purpose of this paper is to increase efficiency in ORs without affecting quality of care by improving the workflow processes. Administrative processes independent of the surgical act can be challenging and may lead to clinical impacts such as increasing delays. The authors hypothesized that a Lean project could improve the efficiency of surgical processes by reducing the length of stays in the recovery ward. Design/methodology/approach: Two similar Lean projects were performed in the surgery departments of two hospitals of the Centre Hospitalier Universitaire de Québec: Hôtel Dieu de Quebec (HDQ) and Hôpital de l'Enfant Jesus (HEJ). The HDQ project was designed around a Define, Measure, Analyse, Improve and Control process revision and a Kaizen workshop focused on patients who were hospitalized in a specific care unit after surgery, while the HEJ project targeted patients in a post-operative ambulatory context. The recovery ward output delay was measured retrospectively before and after each project. Findings: For the HDQ Lean project, wasted time in the recovery ward was reduced by 62 minutes (a 68 percent reduction) between the two groups. The authors also observed an increase of about 25 percent in admissions made during the daytime after the project compared to the period before the project. For the HEJ Lean project, time spent in the recovery ward was reduced by 6 min (a 29 percent reduction). Originality/value: These projects produced an improvement in the flow of the OR without targeting clinical practices in the OR itself. They demonstrated that changes in administrative processes can have a great impact on the flow of clinical pathways and highlight the need for comprehensive and precise monitoring of every step of the elective surgery patient trajectory.

  7. Cost-Effectiveness of Reduced Waiting Time for Head and Neck Cancer Patients due to a Lean Process Redesign.

    PubMed

    Simons, Pascale A M; Ramaekers, Bram; Hoebers, Frank; Kross, Kenneth W; Marneffe, Wim; Pijls-Johannesma, Madelon; Vandijck, Dominique

    2015-07-01

    Compared with new technologies, the redesign of care processes is generally considered less attractive for improving patient outcomes. Nevertheless, it might result in better patient outcomes without further increasing costs. Because early initiation of treatment is of vital importance for patients with head and neck cancer (HNC), these care processes were redesigned. This study aimed to assess the patient outcomes and cost-effectiveness of this redesign. An economic (Markov) model was constructed to evaluate biopsy of suspicious lesions under local instead of general anesthesia, and the combination of computed tomography and positron emission tomography for diagnostics and radiotherapy planning. Patients treated for HNC were included in the model, stratified by disease location (larynx, oropharynx, hypopharynx, and oral cavity) and stage (I-II and III-IV). Probabilistic sensitivity analyses were performed. Waiting time before the start of treatment was reduced by 5 to 22 days for the included patient groups, resulting in 0.13 to 0.66 additional quality-adjusted life-years. The new workflow was cost-effective for all the included patient groups, using a ceiling ratio of €80,000 or €20,000. For patients treated for tumors located at the larynx and oral cavity, the new workflow resulted in additional quality-adjusted life-years, and costs decreased compared with the regular workflow. The health care payer benefited €14.1 million and €91.5 million, respectively, when individual net monetary benefits were extrapolated to an organizational level and a national level. The redesigned care process reduced the waiting time for the treatment of patients with HNC and proved cost-effective. Because care improved, implementation on a wider scale should be considered. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  8. A user-friendly tool to transform large scale administrative data into wide table format using a MapReduce program with a Pig Latin based script.

    PubMed

    Horiguchi, Hiromasa; Yasunaga, Hideo; Hashimoto, Hideki; Ohe, Kazuhiko

    2012-12-22

    Secondary use of large scale administrative data is increasingly popular in health services and clinical research, where a user-friendly tool for data management is in great demand. MapReduce technology such as Hadoop is a promising tool for this purpose, though its use has been limited by the lack of user-friendly functions for transforming large scale data into wide table format, where each subject is represented by one row, for use in health services and clinical research. Since the original specification of Pig provides very few functions for column field management, we have developed a novel system called GroupFilterFormat to handle the definition of field and data content based on a Pig Latin script. We have also developed, as an open-source project, several user-defined functions to transform the table format using GroupFilterFormat and to deal with processing that considers date conditions. Having prepared dummy discharge summary data for 2.3 million inpatients and medical activity log data for 950 million events, we used the Elastic Compute Cloud environment provided by Amazon Inc. to execute processing speed and scaling benchmarks. In the speed benchmark test, the response time was significantly reduced and a linear relationship was observed between the quantity of data and processing time in both a small and a very large dataset. The scaling benchmark test showed clear scalability. In our system, doubling the number of nodes resulted in a 47% decrease in processing time. Our newly developed system is widely accessible as an open resource. This system is very simple and easy to use for researchers who are accustomed to using declarative command syntax for commercial statistical software and Structured Query Language. Although our system needs further sophistication to allow more flexibility in scripts and to improve efficiency in data processing, it shows promise in facilitating the application of MapReduce technology to efficient data processing with large scale administrative data in health services and clinical research.
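
    The GroupFilterFormat functions described above are Pig user-defined functions and are not reproduced in the record; as a hedged illustration of the underlying long-to-wide transformation (one row per subject), the following small Python sketch performs the same pivot on an in-memory dataset, with all field names invented for the example.

        # Hedged illustration of the long-to-wide ("wide table") transformation that the
        # GroupFilterFormat UDFs perform at scale on Hadoop. Field names are invented.
        from collections import defaultdict

        # Long format: one row per (patient, event) pair, as in an activity log.
        long_rows = [
            {"patient_id": "p1", "event": "admission",   "value": "2011-04-01"},
            {"patient_id": "p1", "event": "drug_A_dose", "value": "50"},
            {"patient_id": "p2", "event": "admission",   "value": "2011-04-03"},
        ]

        def to_wide(rows, key="patient_id"):
            """Group rows by subject and spread events into columns (one row per subject)."""
            wide = defaultdict(dict)
            for row in rows:
                wide[row[key]][row["event"]] = row["value"]
            return [{key: subject, **columns} for subject, columns in wide.items()]

        print(to_wide(long_rows))
        # [{'patient_id': 'p1', 'admission': '2011-04-01', 'drug_A_dose': '50'},
        #  {'patient_id': 'p2', 'admission': '2011-04-03'}]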

  9. Mixed feed and its ingredients electron beam decontamination

    NASA Astrophysics Data System (ADS)

    Bezuglov, V. V.; Bryazgin, A. A.; Vlasov, A. Yu; Voronin, L. A.; Ites, Yu V.; Korobeynikov, M. V.; Leonov, S. V.; Leonova, M. A.; Tkachenko, V. O.; Shtarklev, E. A.; Yuskov, Yu G.

    2017-01-01

    Electron beam treatment has been used in food processing for decades to prevent or minimize food losses and prolong storage time; the process is also known as cold pasteurization. Mixed feed ingredients supplied in Russia are regularly found to be contaminated. To reduce the contamination level, samples of contaminated mixed feed ingredients were treated with an electron beam at doses from 2 to 12 kGy. The contamination levels were decreased to a level that ensures a storage time of up to 1 year.

  10. Five-Year Plan (FY04-FY-08) for the Manufacturing Technology (ManTech) Program. Supplement to the FY03 - FY07 Plan

    DTIC Science & Technology

    2003-07-01

    magnetorheological (MRF) finishing to reduce surface roughness in half the time of previous processes. Improved image quality directly supports improved...affordably polish the inside surface of small tight free-form optics to a finish on the order of 3 angstroms. • Demonstrate cycle time reduction...processes and controls for steel, titanium, and superalloys. FY2007: • Demonstrate an improved superfine finishing for optical components to

  11. Mental training enhances attentional stability: Neural and behavioral evidence

    PubMed Central

    Lutz, Antoine; Slagter, Heleen A.; Rawlings, Nancy B.; Francis, Andrew D.; Greischar, Lawrence L.; Davidson, Richard J.

    2009-01-01

    The capacity to stabilize the content of attention over time varies among individuals and its impairment is a hallmark of several mental illnesses. Impairments in sustained attention in patients with attention disorders have been associated with increased trial-to-trial variability in reaction time and event-related potential (ERP) deficits during attention tasks. At present, it is unclear whether the ability to sustain attention and its underlying brain circuitry are transformable through training. Here, we show, with dichotic listening task performance and electroencephalography (EEG), that training attention, as cultivated by meditation, can improve the ability to sustain attention. Three months of intensive meditation training reduced variability in attentional processing of target tones, as indicated by both enhanced theta-band phase consistency of oscillatory neural responses over anterior brain areas and reduced reaction time variability. Furthermore, those individuals who showed the greatest increase in neural response consistency showed the largest decrease in behavioral response variability. Notably, we also observed reduced variability in neural processing, in particular in low-frequency bands, regardless of whether the deviant tone was attended or unattended. Focused attention meditation may thus affect both distracter and target processing, perhaps by enhancing entrainment of neuronal oscillations to sensory input rhythms; a mechanism important for controlling the content of attention. These novel findings highlight the mechanisms underlying focused attention meditation, and support the notion that mental training can significantly affect attention and brain function. PMID:19846729

  12. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times when handling large-scale datasets, which demonstrates the clear superiority of our method. The proposed algorithm demonstrates both better edge detection performance and improved time performance.
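
    The exact MapReduce implementation is not given in the record; a common single-image version of Otsu-derived Canny thresholds, using OpenCV, looks roughly like the sketch below. The 0.5 ratio for the lower threshold is a conventional heuristic assumed here, not necessarily the authors' published parameter, and the parallel version would simply wrap this per-image step in a map task.

        # Minimal single-node sketch of Otsu-optimized Canny thresholds with OpenCV.
        import cv2
        import numpy as np

        def otsu_canny(gray_image):
            # Otsu's method returns the optimal global threshold for the image.
            otsu_thresh, _ = cv2.threshold(gray_image, 0, 255,
                                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # Use the Otsu threshold as the Canny high threshold and half of it as the
            # low one (the 0.5 ratio is an assumption, not the paper's parameter).
            return cv2.Canny(gray_image, 0.5 * otsu_thresh, otsu_thresh)

        # Synthetic test image: a bright square on a dark background.
        img = np.zeros((128, 128), dtype=np.uint8)
        img[32:96, 32:96] = 200
        edges = otsu_canny(img)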

  13. MRO DKF Post-Processing Tool

    NASA Technical Reports Server (NTRS)

    Ayap, Shanti; Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat

    2008-01-01

    This software tool saves time and reduces risk by automating two labor-intensive and error-prone post-processing steps required for every DKF [DSN (Deep Space Network) Keyword File] that MRO (Mars Reconnaissance Orbiter) produces, and is being extended to post-process the corresponding TSOE (Text Sequence Of Events) as well. The need for this post-processing step stems from limitations in the seq-gen modeling resulting in incorrect DKF generation that is then cleaned up in post-processing.

  14. Real-time fMRI processing with physiological noise correction - Comparison with off-line analysis.

    PubMed

    Misaki, Masaya; Barzigar, Nafise; Zotev, Vadim; Phillips, Raquel; Cheng, Samuel; Bodurka, Jerzy

    2015-12-30

    While applications of real-time functional magnetic resonance imaging (rtfMRI) are growing rapidly, there are still limitations in real-time data processing compared to off-line analysis. We developed a proof-of-concept real-time fMRI processing (rtfMRIp) system utilizing a personal computer (PC) with a dedicated graphic processing unit (GPU) to demonstrate that it is now possible to perform intensive whole-brain fMRI data processing in real-time. The rtfMRIp performs slice-timing correction, motion correction, spatial smoothing, signal scaling, and general linear model (GLM) analysis with multiple noise regressors including physiological noise modeled with cardiac (RETROICOR) and respiration volume per time (RVT). The whole-brain data analysis with more than 100,000 voxels and more than 250 volumes is completed in less than 300 ms, much faster than the time required to acquire the fMRI volume. Real-time processing implementation cannot be identical to off-line analysis when time-course information is used, such as in slice-timing correction, signal scaling, and GLM. We verified that reduced slice-timing correction for real-time analysis had comparable output with off-line analysis. The real-time GLM analysis, however, showed over-fitting when the number of sampled volumes was small. Our system implemented real-time RETROICOR and RVT physiological noise corrections for the first time and it is capable of processing these steps on all available data at a given time, without need for recursive algorithms. Comprehensive data processing in rtfMRI is possible with a PC, while the number of samples should be considered in real-time GLM. Copyright © 2015 Elsevier B.V. All rights reserved.
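
    As a hedged sketch of the GLM step only, the fragment below fits a voxel time course with a task regressor plus nuisance regressors by ordinary least squares. Construction of the actual RETROICOR and RVT regressors is omitted, and all arrays here are placeholder data, not the authors' pipeline.

        # Hedged sketch: OLS GLM on one voxel time course with task and nuisance regressors.
        import numpy as np

        def glm_fit(voxel_ts, task_regressor, nuisance_regressors):
            """Return the task beta and residual variance for one voxel time course."""
            n = len(voxel_ts)
            design = np.column_stack([np.ones(n), task_regressor] + list(nuisance_regressors))
            betas, *_ = np.linalg.lstsq(design, voxel_ts, rcond=None)
            residuals = voxel_ts - design @ betas
            return betas[1], residuals.var(ddof=design.shape[1])

        rng = np.random.default_rng(0)
        n_vol = 250
        task = np.tile([0.0] * 10 + [1.0] * 10, n_vol // 20 + 1)[:n_vol]   # toy block design
        motion = rng.standard_normal((6, n_vol))                           # stand-in nuisance set
        ts = 0.8 * task + 0.1 * motion.sum(axis=0) + rng.standard_normal(n_vol)
        beta_task, resid_var = glm_fit(ts, task, motion)
        print(beta_task, resid_var)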

  15. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculations required for full FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphic processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). In vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. Image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.

  16. Does leisure time moderate or mediate the effect of daily stress on positive affect? An examination using eight-day diary data

    PubMed Central

    Qian, Xinyi Lisa; Yarnal, Careen M.; Almeida, David M.

    2013-01-01

    This study tested the applicability of moderation and mediation models to leisure time as a stress coping resource. Analyzing eight-day diary data (N=2,022), we examined the within-person process of using leisure time to cope with daily stressors. We found that relatively high daily stress frequency, while reducing positive affect, prompted an individual to allocate more time to leisure than usual, which then increased positive affect, thus partially remedying the damage by high daily stress frequency. This within-person process, however, is significantly stronger among those with less leisure time on average than leisure-rich individuals. The findings support a partial counteractive mediation model, demonstrate between-person difference in the within-person coping process, and reveal the importance of positive affect as a coping outcome. PMID:25221350

  17. Does leisure time moderate or mediate the effect of daily stress on positive affect? An examination using eight-day diary data.

    PubMed

    Qian, Xinyi Lisa; Yarnal, Careen M; Almeida, David M

    2014-01-01

    This study tested the applicability of moderation and mediation models to leisure time as a stress coping resource. Analyzing eight-day diary data (N=2,022), we examined the within-person process of using leisure time to cope with daily stressors. We found that relatively high daily stress frequency, while reducing positive affect, prompted an individual to allocate more time to leisure than usual, which then increased positive affect, thus partially remedying the damage by high daily stress frequency. This within-person process, however, is significantly stronger among those with less leisure time on average than leisure-rich individuals. The findings support a partial counteractive mediation model, demonstrate between-person difference in the within-person coping process, and reveal the importance of positive affect as a coping outcome.

  18. Spectral analysis of temporal non-stationary rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Min; Yeh, Hund-Der

    2018-04-01

    This study treats the catchment as a black-box system, considering the rainfall input and runoff output as stochastic processes. The temporal rainfall-runoff relationship at the catchment scale is described by a convolution integral on a continuous time scale. Using the Fourier-Stieltjes representation approach, a frequency-domain solution to the convolution integral is developed for the spectral analysis of runoff processes generated by temporally non-stationary rainfall events. It is shown that the characteristic time scale of the rainfall process increases the variability of the runoff discharge, while the catchment mean travel time constant plays a role in reducing that variability. Similar to the behavior of groundwater aquifers, catchments act as a low-pass filter on the rainfall input signal in the frequency domain.
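
    For clarity, the standard frequency-domain form of such a convolution relationship is written out below; the notation is illustrative rather than copied from the paper, and the exponential (linear-reservoir) kernel with mean travel time t_c is used only as a familiar example of the low-pass behavior the abstract describes.

        \[
        Q(t) = \int_{0}^{\infty} h(\tau)\, P(t-\tau)\, d\tau ,
        \qquad
        S_{QQ}(\omega) = \lvert H(\omega) \rvert^{2}\, S_{PP}(\omega),
        \]
        \[
        H(\omega) = \frac{1}{1 + i\,\omega\, t_c}
        \quad\Rightarrow\quad
        \lvert H(\omega) \rvert^{2} = \frac{1}{1 + \omega^{2} t_c^{2}} ,
        \]
        so high-frequency rainfall fluctuations are attenuated more strongly as the mean travel time t_c grows.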

  19. Randomised controlled trial of a text messaging intervention for reducing processed meat consumption: The mediating roles of anticipated regret and intention.

    PubMed

    Carfora, V; Caso, D; Conner, M

    2017-10-01

    The present study aimed to extend the literature on text messaging interventions involved in promoting healthy eating behaviours. The theoretical framework was the Theory of Planned Behaviour (TPB). A randomized controlled trial was used to test the impact of daily text messages compared to no message (groups) for reducing processed meat consumption (PMC) over a 2 week period, testing the sequential mediation role of anticipated regret and intention on the relationship between groups and PMC reduction. PMC and TPB variables were assessed both at Time 1 and Time 2. Participants were Italian undergraduates (at Time 1 N = 124) randomly allocated to control and message condition groups. Undergraduates in the message condition group received a daily SMS, which focused on anticipated regret and urged them to self-monitor PMC. Participants in the control group did not receive any message. Those who completed all measures at both time points were included in the analyses (N = 112). Findings showed that a daily messaging intervention, controlling for participants' past behaviour, reduced self-reported consumption of PMC. Mediation analyses indicated partial serial mediation via anticipated regret and intentions. The current study provided support for the efficacy of a daily messaging intervention targeting anticipated regret and encouraging self-monitoring in decreasing PMC. Outcomes showed the important mediating role of anticipated regret and intentions for reducing PMC. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.
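
    A generic recipe for the nonparametric wavelet filtering step (soft thresholding of detail coefficients) is sketched below with PyWavelets; this is a standard denoising scheme given for illustration under assumed parameters, not the authors' flight-data pipeline.

        # Hedged sketch of nonparametric wavelet denoising via soft thresholding.
        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=4):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # Noise scale estimated from the finest-scale detail coefficients,
            # then a universal threshold applied to all detail levels.
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]

        t = np.linspace(0.0, 1.0, 1024)
        noisy = np.sin(2 * np.pi * 8 * t) + 0.4 * np.random.default_rng(1).standard_normal(t.size)
        clean = wavelet_denoise(noisy)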

  1. PROCESSES OF RECLAIMING URANIUM FROM SOLUTIONS

    DOEpatents

    Zumwalt, L.R.

    1959-02-10

    A process is described for reclaiming residual enriched uranium from calutron wash solutions containing Fe, Cr, Cu, Ni, and Mn as impurities. The solution is adjusted to a pH of between 2 and 4 and is contacted with a metallic reducing agent, such as iron or zinc, in order to reduce the copper to metal and thereby remove it from the solution. At the same time the uranium present is reduced to the uranous state. The solution is then contacted with a precipitate of zinc hydroxide or barium carbonate in order to precipitate and carry the uranium, iron, and chromium away from the nickel and manganese ions in the solution. The uranium is then recovered from this precipitate.

  2. Apathy and Reduced Speed of Processing Underlie Decline in Verbal Fluency following DBS

    PubMed Central

    Foltynie, Tom; Zrinzo, Ludvic; Hyam, Jonathan A.; Limousin, Patricia

    2017-01-01

    Objective. Reduced verbal fluency is a strikingly uniform finding following deep brain stimulation (DBS) for Parkinson's disease (PD). The precise cognitive mechanism underlying this reduction remains unclear, but theories have suggested reduced motivation, linguistic skill, and/or executive function. It is of note, however, that previous reports have failed to consider the potential role of any changes in speed of processing. Thus, the aim of this study was to examine verbal fluency changes with a particular focus on the role of cognitive speed. Method. In this study, 28 patients with PD completed measures of verbal fluency, motivation, language, executive functioning, and speed of processing, before and after DBS. Results. As expected, there was a marked decline in verbal fluency but also in a timed test of executive functions and two measures of speed of processing. Verbal fluency decline was associated with markers of linguistic and executive functioning, but not after speed of processing was statistically controlled for. In contrast, greater decline in verbal fluency was associated with higher levels of apathy at baseline, which was not associated with changes in cognitive speed. Discussion. Reduced generativity and processing speed may account for the marked reduction in verbal fluency commonly observed following DBS. PMID:28408788

  3. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
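
    The JWST models themselves are not included in the record; as a hedged toy example of the underlying "tune model parameters to match test data" step, the fragment below minimizes a least-squares discrepancy with a gradient-free optimizer. The one-node thermal model, parameter names, and numbers are invented placeholders.

        # Hedged toy example of automated model calibration against test data.
        import numpy as np
        from scipy.optimize import minimize

        test_temps = np.array([42.1, 40.3, 38.9, 37.8])   # measured temperatures, K (dummy data)
        heater_power = np.array([1.0, 0.8, 0.6, 0.5])     # corresponding test conditions (dummy)

        def model_temps(params, power):
            conductance, sink_temp = params
            return sink_temp + power / conductance        # trivial steady-state stand-in model

        def discrepancy(params):
            return np.sum((model_temps(params, heater_power) - test_temps) ** 2)

        result = minimize(discrepancy, x0=[0.1, 30.0], method="Nelder-Mead")
        print(result.x)   # calibrated (conductance, sink temperature)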

  4. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  5. Machining of AISI D2 Tool Steel with Multiple Hole Electrodes by EDM Process

    NASA Astrophysics Data System (ADS)

    Prasad Prathipati, R.; Devuri, Venkateswarlu; Cheepu, Muralimohan; Gudimetla, Kondaiah; Uzwal Kiran, R.

    2018-03-01

    In recent years, with advancing technology, the demand for machining processes for newly developed materials has been increasing. Conventional machining processes are not adequate to meet the required accuracy when machining these materials. Electrical discharge machining, a non-conventional process, is one of the most efficient machining methods and is widely used to machine high-accuracy products in various industries. The optimum selection of process parameters is very important in a machining process such as electrical discharge machining, as they determine the surface quality and dimensional precision of the obtained parts, although the machining time is long for large-dimension features. In this work, D2 high-carbon, high-chromium tool steel has been machined using electrical discharge machining with a multiple-hole electrode technique. D2 steel has several applications such as forming dies, extrusion dies and thread rolling. Machining this tool steel is difficult because of its hard alloying elements V, Cr and Mo, which enhance its strength and wear properties. However, machining is possible using the electrical discharge machining process, and the present study implemented a new technique to reduce the machining time using a multiple-hole copper electrode. In this technique, machining with the multiple-hole electrode leaves fin-like projections, which can be removed easily by chipping. The finishing is then done using a solid electrode. The machining time is reduced by around 50% when the multiple-hole electrode technique is used for electrical discharge machining.

  6. Real-time algorithm for acoustic imaging with a microphone array.

    PubMed

    Huang, Xun

    2009-05-01

    The acoustic phased array has become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line because of its cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended for array processing to the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. The expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected instantaneously.
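
    The observer formulation itself is only summarized in the abstract; the sketch below instead shows a plainly named alternative with the same block-recursive flavor: an exponentially weighted update of the cross-spectral matrix per data block, followed by conventional frequency-domain beamforming. The array size, weights, and geometry are illustrative assumptions.

        # Hedged sketch: conventional frequency-domain beamforming with a recursive
        # (exponentially weighted) cross-spectral matrix update per data block.
        import numpy as np

        def update_csm(csm, block_fft, alpha=0.05):
            """Recursively update the cross-spectral matrix with one new FFT block."""
            outer = np.outer(block_fft, np.conj(block_fft))
            return (1.0 - alpha) * csm + alpha * outer

        def beamform_power(csm, steering_vector):
            """Conventional beamformer output power for one steering direction."""
            w = steering_vector / len(steering_vector)
            return np.real(np.conj(w) @ csm @ w)

        n_mics = 8
        csm = np.zeros((n_mics, n_mics), dtype=complex)
        rng = np.random.default_rng(0)
        for _ in range(200):                      # stream of FFT blocks at one frequency bin
            block = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)
            csm = update_csm(csm, block)
        steer = np.exp(1j * np.zeros(n_mics))     # broadside steering for a toy geometry
        print(beamform_power(csm, steer))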

  7. Refractory pulse counting processes in stochastic neural computers.

    PubMed

    McNeill, Dean K; Card, Howard C

    2005-03-01

    This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time-free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
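
    A small Monte Carlo sketch of the effect is given below: a Bernoulli pulse train passed through a non-paralyzable dead time of a few time steps, showing how the recorded count rate falls below the true rate. The parameters are illustrative only and the letter's analytic treatment is not reproduced.

        # Hedged Monte Carlo sketch of a non-paralyzable dead time on a Bernoulli pulse train.
        import numpy as np

        def recorded_fraction(p, dead_time, n_steps=200_000, seed=0):
            rng = np.random.default_rng(seed)
            pulses = rng.random(n_steps) < p          # Bernoulli(p) pulse train
            recorded = 0
            blocked_until = -1
            for t in np.flatnonzero(pulses):
                if t > blocked_until:                 # processor is live: record the pulse
                    recorded += 1
                    blocked_until = t + dead_time     # then ignore pulses for dead_time steps
            return recorded / pulses.sum()

        for p in (0.01, 0.1, 0.3):
            print(p, recorded_fraction(p, dead_time=5))
        # The recorded fraction approaches 1 only when p is small compared to 1/dead_time,
        # echoing the condition stated in the abstract.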

  8. Pasteurization of shell eggs using radio frequency heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geveke, David J.; Bigley, Andrew B. W.; Brunkhorst, Christopher D.

    The USDA-FSIS estimates that pasteurization of all shell eggs in the U.S. would reduce the annual number of illnesses by more than 110,000. However, less than 3% of shell eggs are commercially pasteurized. One of the main reasons for this is that the commercial hot water process requires as much as 60 min to complete. In the present study, a radio frequency (RF) apparatus was constructed, and a two-step process was developed that uses RF energy and hot water, to pasteurize eggs in less than half the time. In order to select an appropriate RF generator, the impedance of shell eggs was measured in the frequency range of 10–70 MHz. The power density within the egg was modeled to prevent potential hotspots. Escherichia coli (ATCC 35218) was inoculated in the yolk to approximately 7.5 log CFU/ml. The combination process first heated the egg in 35.0 °C water for 3.5 min using 60 MHz RF energy. This resulted in the yolk being preferentially heated to 61 °C. Then, the egg was heated for an additional 20 min with 56.7 °C water. This two-step process reduced the population of E. coli by 6.5 log. The total time for the process was 23.5 min. By contrast, processing for 60 min was required to reduce the E. coli by 6.6 log using just hot water. The novel RF pasteurization process presented in this study was considerably faster than the existing commercial process. As a result, this should lead to an increase in the percentage of eggs being pasteurized, as well as a reduction of foodborne illnesses.

  9. Pasteurization of shell eggs using radio frequency heating

    DOE PAGES

    Geveke, David J.; Bigley, Andrew B. W.; Brunkhorst, Christopher D.

    2016-08-21

    The USDA-FSIS estimates that pasteurization of all shell eggs in the U.S. would reduce the annual number of illnesses by more than 110,000. However, less than 3% of shell eggs are commercially pasteurized. One of the main reasons for this is that the commercial hot water process requires as much as 60 min to complete. In the present study, a radio frequency (RF) apparatus was constructed, and a two-step process was developed that uses RF energy and hot water, to pasteurize eggs in less than half the time. In order to select an appropriate RF generator, the impedance of shell eggs was measured in the frequency range of 10–70 MHz. The power density within the egg was modeled to prevent potential hotspots. Escherichia coli (ATCC 35218) was inoculated in the yolk to approximately 7.5 log CFU/ml. The combination process first heated the egg in 35.0 °C water for 3.5 min using 60 MHz RF energy. This resulted in the yolk being preferentially heated to 61 °C. Then, the egg was heated for an additional 20 min with 56.7 °C water. This two-step process reduced the population of E. coli by 6.5 log. The total time for the process was 23.5 min. By contrast, processing for 60 min was required to reduce the E. coli by 6.6 log using just hot water. The novel RF pasteurization process presented in this study was considerably faster than the existing commercial process. As a result, this should lead to an increase in the percentage of eggs being pasteurized, as well as a reduction of foodborne illnesses.

  10. Confocal laser scanning microscopic photoconversion: a new method to stabilize fluorescently labeled cellular elements for electron microscopic analysis.

    PubMed

    Colello, Raymond J; Tozer, Jordan; Henderson, Scott C

    2012-01-01

    Photoconversion, the method by which a fluorescent dye is transformed into a stable, osmiophilic product that can be visualized by electron microscopy, is the most widely used method to enable the ultrastructural analysis of fluorescently labeled cellular structures. Nevertheless, the conventional method of photoconversion using widefield fluorescence microscopy requires long reaction times and results in low-resolution cell targeting. Accordingly, we have developed a photoconversion method that ameliorates these limitations by adapting confocal laser scanning microscopy to the procedure. We have found that this method greatly reduces photoconversion times, as compared to conventional wide field microscopy. Moreover, region-of-interest scanning capabilities of a confocal microscope facilitate the targeting of the photoconversion process to individual cellular or subcellular elements within a fluorescent field. This reduces the area of the cell exposed to light energy, thereby reducing the ultrastructural damage common to this process when widefield microscopes are employed. © 2012 by John Wiley & Sons, Inc.

  11. Transfer of care and offload delay: continued resistance or integrative thinking?

    PubMed

    Schwartz, Brian

    2015-11-01

    The disciplines of paramedicine and emergency medicine have evolved synchronously over the past four decades, linked by emergency physicians with expertise in prehospital care. Ambulance offload delay (OD) is an inevitable consequence of emergency department overcrowding (EDOC) and compromises the care of the patient on the ambulance stretcher in the emergency department (ED), as well as paramedic emergency medical service response in the community. Efforts to define transfer of care from paramedics to ED staff with a view to reducing offload time have met with resistance from both sides with different agendas. These include the need to return paramedics to serve the community versus the lack of ED capacity to manage the patient. Innovative solutions to other system issues, such as rapid access to trauma teams, reducing door-to-needle time, and improving throughput in the ED to reduce EDOC, have been achieved by involving all stakeholders in an integrative thinking process. Only by addressing this issue in a similar integrative process will solutions to OD be realized.

  12. Collecting verbal autopsies: improving and streamlining data collection processes using electronic tablets.

    PubMed

    Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas

    2018-02-01

    There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, this was less than 2 days. For small-scale surveys, we found that the upfront costs of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.

  13. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for Bridgman-type crystal growth, enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small-scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  14. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE PAGES

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; ...

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for Bridgman-type crystal growth, enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small-scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  15. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type. Our algorithm remarkably reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, rigorous proof of convergence is established to the global minimum of the problem considered.

  16. Automated Guidance from Physiological Sensing to Reduce Thermal-Work Strain Levels on a Novel Task

    USDA-ARS?s Scientific Manuscript database

    This experiment demonstrated that automated pace guidance generated from real-time physiological monitoring allowed least stressful completion of a timed (60 minute limit) 5 mile treadmill exercise. An optimal pacing policy was estimated from a Markov decision process that balanced the goals of the...

  17. Research on control law accelerator of digital signal process chip TMS320F28035 for real-time data acquisition and processing

    NASA Astrophysics Data System (ADS)

    Zhao, Shuangle; Zhang, Xueyi; Sun, Shengli; Wang, Xudong

    2017-08-01

    The TI C2000 series digital signal processing (DSP) chips have been widely used in electrical engineering, measurement and control, communications and other fields, and the DSP TMS320F28035 is one of the most representative of its kind. A DSP program typically requires both data acquisition and data processing; if ordinary C or assembly code is executed sequentially, the analogue-to-digital (AD) converter cannot acquire data in real time and many samples are missed. The control law accelerator (CLA) coprocessor can run in parallel with the main central processing unit (CPU), operates at the same clock frequency as the main CPU, and supports floating-point operations. Therefore, the CLA coprocessor is used in the program: the CLA core is responsible for data processing, while the main CPU is responsible for the AD conversion. The advantage of this method is that it reduces data processing time and achieves real-time data acquisition.

  18. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work, and material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process, and an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in this paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-square error between the current flange geometry and a reference geometry using a non-linear least-square algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
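
    The paper's exact update law is not reproduced in the record; the sketch below illustrates the general iterative-learning idea with a damped Gauss-Newton step that reduces the least-squares error between the measured flange edge and a reference geometry after each produced part. The linear stand-in process, sensitivity matrix, and learning gain are assumptions for illustration, not the deep-drawing or S-rail models.

        # Hedged sketch of a part-to-part iterative learning update (damped Gauss-Newton).
        import numpy as np

        rng = np.random.default_rng(0)
        n_points, n_params = 40, 3
        true_params = np.array([1.2, -0.4, 0.7])
        J = rng.standard_normal((n_points, n_params))     # sensitivity of edge points to parameters
        reference_edge = J @ true_params                  # target flange geometry

        def produce_part(params, noise=0.01):
            """Stand-in for running one forming cycle and measuring the flange edge."""
            return J @ params + noise * rng.standard_normal(n_points)

        params = np.zeros(n_params)
        for k in range(10):                               # one learning iteration per produced part
            error = produce_part(params) - reference_edge
            step, *_ = np.linalg.lstsq(J, error, rcond=None)   # Gauss-Newton direction
            params -= 0.5 * step                          # damped update (learning gain 0.5 assumed)
            print(k, np.linalg.norm(error))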

  19. The combined positive impact of Lean methodology and Ventana Symphony autostainer on histology lab workflow

    PubMed Central

    2010-01-01

    Background: Histologic samples all funnel through the H&E microtomy staining area. Here manual processes intersect with semi-automated processes, creating a bottleneck. We compare alternate work processes in anatomic pathology, primarily in the H&E staining work cell. Methods: We established a baseline measure of H&E process impact on personnel, information management and sample flow from historical workload and production data and direct observation. We compared this to performance after implementing initial Lean process modifications, including workstation reorganization, equipment relocation and workflow levelling, and the Ventana Symphony stainer, to assess the impact on productivity in the H&E staining work cell. Results: Average time from gross station to assembled case decreased by 2.9 hours (12%). Total process turnaround time (TAT) exclusive of processor schedule changes decreased 48 minutes/case (4%). Mean quarterly productivity increased 8.5% with the new methods. Process redesign reduced the number of manual steps from 219 to 182, a 17% reduction. Specimen travel distance was reduced from 773 ft/case to 395 ft/case (49%) overall, and from 92 to 53 ft/case in the H&E cell (42% improvement). Conclusions: Implementation of Lean methods in the H&E work cell of histology can result in improved productivity, improved through-put and case availability parameters including TAT. PMID:20181123

  20. Life cycle analysis within pharmaceutical process optimization and intensification: case study of active pharmaceutical ingredient production.

    PubMed

    Ott, Denise; Kralisch, Dana; Denčić, Ivana; Hessel, Volker; Laribi, Yosra; Perrichon, Philippe D; Berguerand, Charline; Kiwi-Minsker, Lioubov; Loeb, Patrick

    2014-12-01

    As the demand for new drugs rises, the pharmaceutical industry faces the challenge of shortening development time and thus reducing the time to market. Environmental aspects typically still play a minor role within the early phase of process development. Nevertheless, it is highly promising to rethink, redesign, and optimize process strategies as early as possible in active pharmaceutical ingredient (API) process development, rather than later at the stage of already established processes. The study presented herein deals with a holistic life-cycle-based process optimization and intensification of a pharmaceutical production process targeting a low-volume, high-value API. Striving for process intensification by transfer from batch to continuous processing, as well as an alternative catalytic system, different process options are evaluated with regard to their environmental impact to identify bottlenecks and improvement potentials for further process development activities. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Dynamic Resource Allocation to Improve Service Performance in Order Fulfillment Systems

    DTIC Science & Technology

    2009-01-01

    efficient system uses economies of scale at two points: orders are batched before processing, which reduces processing costs, and processed orders ...the effects of batching on order picking processes are well-researched and well-understood (van den Berg and Gademann, 1999). Because orders are...a final sojourn time distribution. Our work builds on existing research in matrix-geometric methods by Neuts (1981), Asmussen and Møller (2001

  2. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
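
    The core of STEP-type identification is fitting polynomial internal-force coefficients to sampled displacement states; the element-wise variant performs this fit per element, so each fit is small and independent of all the others, which is what makes the construction parallelisable. The sketch below shows one such per-element fit for an assumed single-degree-of-freedom element with a cubic force law; the model and data are illustrative placeholders, not the paper's formulation.

```python
import numpy as np

# Assumed single-DOF element with internal force f(u) = k1*u + k3*u**3.
# E-STEP-style identification: sample displacement states, evaluate the element's
# internal force, and fit the polynomial coefficients by linear least squares.
rng = np.random.default_rng(0)

def element_internal_force(u, k1=2.0, k3=0.5):
    """Stand-in for one element's (unknown) non-linear internal force."""
    return k1 * u + k3 * u**3

u_samples = rng.uniform(-1.0, 1.0, size=50)        # imposed displacement states
f_samples = element_internal_force(u_samples)       # evaluated internal forces

# Basis of the polynomial force model; each element is fitted independently,
# which is why the evaluation can run element-by-element in parallel.
basis = np.column_stack([u_samples, u_samples**3])
coeffs, *_ = np.linalg.lstsq(basis, f_samples, rcond=None)
print("identified [k1, k3] =", coeffs)               # ~[2.0, 0.5]
```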

  3. Remote Sensing: A valuable tool in the Forest Service decision making process. [in Utah

    NASA Technical Reports Server (NTRS)

    Stanton, F. L.

    1975-01-01

    Forest Service studies for integrating remotely sensed data into existing information systems highlight a need to: (1) re-examine present methods of collecting and organizing data, (2) develop an integrated information system for rapidly processing and interpreting data, (3) apply existing technological tools in new ways, and (4) provide accurate and timely information for making the right management decisions. The Forest Service developed an integrated information system using remote sensors, microdensitometers, computer hardware and software, and interactive accessories. Their efforts substantially reduce the time required to collect and process data.

  4. Disciplined rubidium oscillator with GPS selective availability

    NASA Technical Reports Server (NTRS)

    Dewey, Wayne P.

    1993-01-01

    A U.S. Department of Defense decision for continuous implementation of GPS Selective Availability (S/A) has made it necessary to modify Rubidium oscillator disciplining methods. One such method for reducing the effects of S/A on the oscillator disciplining process was developed which achieves results approaching pre-S/A GPS performance. The Satellite Hopping algorithm used in minimizing the effects of S/A on the oscillator disciplining process is described, and the results of using this process are compared with those obtained prior to the implementation of S/A. Test results are from a TrueTime Rubidium-based Model GPS-DC timing receiver.

  5. Advanced Decontamination Technologies: High Hydrostatic Pressure on Meat Products

    NASA Astrophysics Data System (ADS)

    Garriga, Margarita; Aymerich, Teresa

    The increasing demand for “natural” foodstuffs, free from chemical additives and preservatives, has triggered novel approaches in food technology developments. In the last decade, practical use of high-pressure processing (HPP) made this emerging non-thermal technology very attractive from a commercial point of view. Despite the fact that the investment is still high, the resulting value-added products, with an extended and safe shelf-life, will fulfil the wishes of consumers who prefer preservative-free minimally processed foods retaining the sensorial characteristics of freshness. Moreover, unlike thermal treatment, pressure treatment is not time/mass dependent, thus reducing the time of processing.

  6. Effects of Bioreactor Retention Time on Aerobic Microbial Decomposition of CELSS Crop Residues

    NASA Technical Reports Server (NTRS)

    Strayer, R. F.; Finger, B. W.; Alazraki, M. P.

    1997-01-01

    The focus of resource recovery research at the KSC-CELSS Breadboard Project has been the evaluation of microbiologically mediated biodegradation of crop residues by manipulation of bioreactor process and environmental variables. We will present results from over 3 years of studies that used laboratory- and breadboard-scale (8 and 120 L working volumes, respectively) aerobic, fed-batch, continuous stirred tank reactors (CSTR) for recovery of carbon and minerals from breadboard grown wheat and white potato residues. The paper will focus on the effects of a key process variable, bioreactor retention time, on response variables indicative of bioreactor performance. The goal is to determine the shortest retention time that is feasible for processing CELSS crop residues, thereby reducing bioreactor volume and weight requirements. Pushing the lower limits of bioreactor retention times will provide useful data for engineers who need to compare biological and physicochemical components. Bioreactor retention times were manipulated to range between 0.25 and 48 days. Results indicate that increases in retention time lead to a 4-fold increase in crop residue biodegradation, as measured by both dry weight losses and CO2 production. A similar overall trend was also observed for crop residue fiber (cellulose and hemicellulose), with a noticeable jump in cellulose degradation between the 5.3 day and 10.7 day retention times. Water-soluble organic compounds (measured as soluble TOC) were appreciably reduced by more than 4-fold at all retention times tested. Results from a study of even shorter retention times (down to 0.25 days), in progress, will also be presented.

  7. Ways to reduce patient turnaround time and improve service quality in emergency departments.

    PubMed

    Sinreich, David; Marmor, Yariv

    2005-01-01

    Recent years have witnessed a fundamental change in the function of emergency departments (EDs). The emphasis of the ED has shifted from triage to saving lives in shock-trauma rooms equipped with state-of-the-art equipment. At the same time, walk-in clinics are being set up to treat ambulatory-type patients. Simultaneously, ED overcrowding has become a common sight in many large urban hospitals. This paper recognises that in order to provide quality treatment to all these patient types, ED process operations have to be flexible and efficient. The paper aims to examine one major benchmark for measuring service quality--patient turnaround time--claiming that in order to provide the quality treatment to which EDs aspire, this time needs to be reduced. This study starts by separating the process each patient type goes through when treated at the ED into unique components. Next, using a simple model, the impact each of these components has on the total patient turnaround time is determined. This, in turn, identifies the components that need to be addressed if patient turnaround time is to be streamlined. The model was tested using data that were gathered through a comprehensive time study in six major hospitals. The analysis reveals that waiting time comprises 51-63 per cent of total patient turnaround time in the ED. Its major components are: time away for an x-ray examination; waiting time for the first physician's examination; and waiting time for blood work. The study covers several hospitals and analyses over 20,000 process components; as such the common findings may serve as guidelines to other hospitals when addressing this issue.

  8. Use of lean sigma principles in a tertiary care otolaryngology clinic to improve efficiency.

    PubMed

    Lin, Sandra Y; Gavney, Dean; Ishman, Stacey L; Cady-Reh, Julie

    2013-11-01

    To apply Lean Sigma, a quality-improvement strategy supported by tactical tools to eliminate waste and reduce variation, to improve efficiency of patient flow in a large tertiary otolaryngology clinic. The project goals were to decrease overall lead time from patient arrival to start of interaction with care provider, improve on-time starts of patient visits, and decrease excess staff/patient motion. Prospective observational study. Patient flow was mapped through the clinic, including preregistration processes. A time-stamp observation study was performed on 188 patient visits over 5 days. Using Lean Sigma principles, time stamps were analyzed to identify patient flow constraints and areas for potential interventions. Interventions were evaluated and adjusted based on feedback from stakeholders: removal of bottlenecks in clinic flow, elimination of non-value-added registration staff tasks, and alignment of staff hours to accommodate times of high patient census. A postintervention time observation study of 141 patients was performed 5 months later. Patient lead time from clinic arrival to exam start time decreased by 12.2% on average (P = .042). On-time starts for patient exams improved by 34% (χ2 = 16.091, P < .001). Excess patient motion was reduced by 74 feet per patient, which represents a 34% reduction in motion per visit. Use of Lean Sigma principles in a large tertiary otolaryngology clinic led to decreased patient wait time and significant improvements in on-time patient exam start time. Process mapping, engagement of leadership and staff, and elimination of non-value-added steps or processes were key to improvement. Copyright © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  9. Feasibility of Applying Ohmic Heating and Split-Phase Aseptic Processing for Ration Entree Preservation

    DTIC Science & Technology

    1994-08-01

    study demonstrated that either of these reduced-temperature sterilization processes will produce an acceptable product that is an alternative to thermal...and uniform heating of liquids and solids simultaneously, even of large particles, up to sterilization temperatures. Uniform heating means shorter...potential cost reduction by substitution of continuous processing of a high-temperature/short-time (HTST) nature for traditional batch retort

  10. Energy-conscious production of titania and titanium powders from slag

    NASA Astrophysics Data System (ADS)

    Middlemas, Scott C.

    Titanium dioxide (TiO2) is used as a whitening agent in numerous domestic and technological applications and is mainly produced by the high temperature chloride process. A new hydrometallurgical process for making commercially pure TiO2 pigment is described with the goal of reducing the necessary energy consumption and CO2 emissions. The process includes alkaline roasting of titania slag with subsequent washing, HCl leaching, solvent extraction, hydrolysis, and calcination stages. The thermodynamics of the roasting reaction were analyzed, and the experimental parameters for each step in the new process were optimized with respect to TiO 2 recovery, final product purity, and total energy requirements. Contacting the leach solution with a tertiary amine extractant resulted in complete Fe extraction in a single stage and proved effective in reducing the concentration of discoloring impurities in the final pigment to commercially acceptable levels. Additionally, a new method of producing Ti powders from titania slag is proposed as a potentially more energy efficient and lower cost alternative to the traditional Kroll process. Thermodynamic analysis and initial experimental results validate the concept of reducing titanium slag with a metal hydride to produce titanium hydride (TiH2) powders, which are subsequently purified by leaching and dehydrided to form Ti powders. The effects of reducing agent type, heating time and temperature, ball milling, powder compaction, and eutectic chloride salts on the conversion of slag to TiH2 powders were determined. The purification of reduced powders through NH4Cl, NaOH, and HCl leaching stages was investigated, and reagent concentration, leaching temperature, and time were varied in order to determine the best conditions for maximum impurity removal and recovery of TiH2. A model plant producing 100,000 tons TiO2 per year was designed that would employ the new method of pigment manufacture. A comparison of the new process and the chloride process indicated a 25% decrease in energy consumption and CO2 emissions. For the Ti powder making process, a 10,000 tons per year model plant employing the metal hydride reduction was designed and a comparison with the Kroll process indicated potential for over 60% less energy consumption and 50% less CO2 emission.

  11. A Time-Domain CMOS Oscillator-Based Thermostat with Digital Set-Point Programming

    PubMed Central

    Chen, Chun-Chi; Lin, Shih-Hao

    2013-01-01

    This paper presents a time-domain CMOS oscillator-based thermostat with digital set-point programming [without a digital-to-analog converter (DAC) or external resistor] to achieve on-chip thermal management of modern VLSI systems. A time-domain delay-line-based thermostat with multiplexers (MUXs) was used to substantially reduce the power consumption and chip size, and can benefit from the performance enhancement due to the scaling down of fabrication processes. For further cost reduction and accuracy enhancement, this paper proposes a thermostat using two oscillators that are suitable for time-domain curvature compensation instead of longer linear delay lines. The final time comparison was achieved using a time comparator with a built-in custom hysteresis to generate the corresponding temperature alarm and control. The chip size of the circuit was reduced to 0.12 mm2 in a 0.35-μm TSMC CMOS process. The thermostat operates from 0 to 90 °C, and achieved a fine resolution better than 0.05 °C and an improved inaccuracy of ± 0.6 °C after two-point calibration for eight packaged chips. The power consumption was 30 μW at a sample rate of 10 samples/s. PMID:23385403

  12. Designing Process Improvement of Finished Good On Time Release and Performance Indicator Tool in Milk Industry Using Business Process Reengineering Method

    NASA Astrophysics Data System (ADS)

    Dachyar, M.; Christy, E.

    2014-04-01

    To maintain its position as a major milk producer, the Indonesian milk industry should pursue business development aimed at increasing its customer service level. One strategy is to create on-time release conditions for finished goods which will be distributed to customers and distributors. To achieve this condition, the management information system for finished-goods on-time release needs to be improved. The focus of this research is to conduct business process improvement using Business Process Reengineering (BPR). The key deliverable of this study is a comprehensive business strategy that addresses the root problems. To achieve the goal, evaluation, reengineering, and improvement of the ERP system are conducted. To visualize the predicted implementation, a simulation model is built with Oracle BPM. The output of this simulation showed that the proposed solution could effectively reduce the process lead time and increase the number of quality releases.

  13. Parallel hyperspectral compressive sensing method on GPU

    NASA Astrophysics Data System (ADS)

    Bernabé, Sergio; Martín, Gabriel; Nascimento, José M. P.

    2015-10-01

    Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Because the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPU) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures by NVIDIA, GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of the Intel i7-2600 CPU (3.4GHz), with 16 Gbyte memory.
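
    The point that a low number of endmembers reduces the number of measurements needed can be illustrated with a toy reconstruction: random projections of each pixel spectrum are inverted by least squares inside a known low-dimensional endmember subspace. Everything below (endmembers, abundances, measurement matrix, sizes) is synthetic and illustrative; it is not the P-HYCA algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_endmembers, n_pixels = 200, 5, 1000

# Synthetic hyperspectral data: every pixel spectrum is a mixture of a few endmembers.
E = rng.random((n_bands, n_endmembers))                 # endmember spectra
A = rng.dirichlet(np.ones(n_endmembers), n_pixels).T    # abundances (columns sum to 1)
X = E @ A                                               # clean cube (bands x pixels)

# Compressive measurements: far fewer measurements than spectral bands per pixel.
n_meas = 20
Phi = rng.standard_normal((n_meas, n_bands)) / np.sqrt(n_meas)
Y = Phi @ X

# Reconstruction assuming the endmember subspace is known: solve for abundances
# from the compressed data, then re-project to full spectral resolution.
A_hat, *_ = np.linalg.lstsq(Phi @ E, Y, rcond=None)
X_hat = E @ A_hat
print("relative reconstruction error:",
      np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```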

  14. Improving Initiation and Tracking of Research Projects at an Academic Health Center: A Case Study.

    PubMed

    Schmidt, Susanne; Goros, Martin; Parsons, Helen M; Saygin, Can; Wan, Hung-Da; Shireman, Paula K; Gelfond, Jonathan A L

    2017-09-01

    Research service cores at academic health centers are important in driving translational advancements. Specifically, biostatistics and research design units provide services and training in data analytics, biostatistics, and study design. However, the increasing demand and complexity of assigning appropriate personnel to time-sensitive projects strains existing resources, potentially decreasing productivity and increasing costs. Improving processes for project initiation, assigning appropriate personnel, and tracking time-sensitive projects can eliminate bottlenecks and utilize resources more efficiently. In this case study, we describe our application of lean six sigma principles to our biostatistics unit to establish a systematic continual process improvement cycle for intake, allocation, and tracking of research design and data analysis projects. The define, measure, analyze, improve, and control methodology was used to guide the process improvement. Our goal was to assess and improve the efficiency and effectiveness of operations by objectively measuring outcomes, automating processes, and reducing bottlenecks. As a result, we developed a web-based dashboard application to capture, track, categorize, streamline, and automate project flow. Our workflow system resulted in improved transparency, efficiency, and workload allocation. Using the dashboard application, we reduced the average study intake time from 18 to 6 days, a 66.7% reduction over 12 months (January to December 2015).

  15. The 'pit-crew' model for improving door-to-needle times in endovascular stroke therapy: a Six-Sigma project.

    PubMed

    Rai, Ansaar T; Smith, Matthew S; Boo, SoHyun; Tarabishy, Abdul R; Hobbs, Gerald R; Carpenter, Jeffrey S

    2016-05-01

    Delays in delivering endovascular stroke therapy adversely affect outcomes. Time-sensitive treatments such as stroke interventions benefit from methodically developed protocols. Clearly defined roles in these protocols allow for parallel processing of tasks, resulting in consistent delivery of care. To present the outcomes of a quality-improvement (QI) process directed at reducing stroke treatment times in a tertiary level academic medical center. A Six-Sigma-based QI process was developed over a 3-month period. After an initial analysis, procedures were implemented and fine-tuned to identify and address rate-limiting steps in the endovascular care pathway. Prospectively recorded treatment times were then compared in two groups of patients who were treated 'before' (n=64) or 'after' (n=30) the QI process. Three time intervals were measured: emergency room (ER) to arrival for CT scan (ER-CT), CT scan to interventional laboratory arrival (CT-Lab), and interventional laboratory arrival to groin puncture (Lab-puncture). The ER-CT time was 40 (±29) min in the 'before' and 26 (±15) min in the 'after' group (p=0.008). The CT-Lab time was 87 (±47) min in the 'before' and 51 (±33) min in the 'after' group (p=0.0002). The Lab-puncture time was 24 (±11) min in the 'before' and 15 (±4) min in the 'after' group (p<0.0001). The overall ER-arrival to groin-puncture time was reduced from 2 h, 31 min (±51) min in the 'before' to 1 h, 33 min (±37) min in the 'after' group, (p<0.0001). The improved times were seen for both working hours and off-hours interventions. A protocol-driven process can significantly improve efficiency of care in time-sensitive stroke interventions. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  16. Design for Review - Applying Lessons Learned to Improve the FPGA Review Process

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Li, Kenneth E.

    2014-01-01

    Flight Field Programmable Gate Array (FPGA) designs are required to be independently reviewed. This paper provides recommendations to Flight FPGA designers to properly prepare their designs for review in order to facilitate the review process, and reduce the impact of the review time in the overall project schedule.

  17. The effects of limited bandwidth and noise on verbal processing time and word recall in normal-hearing children.

    PubMed

    McCreery, Ryan W; Stelmachowicz, Patricia G

    2013-09-01

    Understanding speech in acoustically degraded environments can place significant cognitive demands on school-age children who are developing the cognitive and linguistic skills needed to support this process. Previous studies suggest that speech understanding, word learning, and academic performance can be negatively impacted by background noise, but the effect of limited audibility on cognitive processes in children has not been directly studied. The aim of the present study was to evaluate the impact of limited audibility on speech understanding and working memory tasks in school-age children with normal hearing. Seventeen children with normal hearing between 6 and 12 years of age participated in the present study. Repetition of nonword consonant-vowel-consonant stimuli was measured under conditions with combinations of two different signal-to-noise ratios (SNRs; 3 and 9 dB) and two low-pass filter settings (3.2 and 5.6 kHz). Verbal processing time was calculated based on the time from the onset of the stimulus to the onset of the child's response. Monosyllabic word repetition and recall were also measured in conditions with a full bandwidth and 5.6 kHz low-pass cutoff. Nonword repetition scores decreased as audibility decreased. Verbal processing time increased as audibility decreased, consistent with predictions based on increased listening effort. Although monosyllabic word repetition did not vary between the full bandwidth and 5.6 kHz low-pass filter conditions, recall was significantly poorer in the condition with limited bandwidth (low pass at 5.6 kHz). Age and expressive language scores predicted performance on word recall tasks, but did not predict nonword repetition accuracy or verbal processing time. Decreased audibility was associated with reduced accuracy for nonword repetition and increased verbal processing time in children with normal hearing. Deficits in free recall were observed even under conditions where word repetition was not affected. The negative effects of reduced audibility may occur even under conditions where speech repetition is not impacted. Limited stimulus audibility may result in greater cognitive effort for verbal rehearsal in working memory and may limit the availability of cognitive resources to allocate to working memory and other processes.

  18. Konjac gel as pork backfat replacer in dry fermented sausages: processing and quality characteristics.

    PubMed

    Ruiz-Capillas, C; Triki, M; Herrero, A M; Rodriguez-Salas, L; Jiménez-Colmenero, F

    2012-10-01

    The effect of replacing animal fat (0%, 50% and 80% of pork backfat) by an equal proportion of konjac gel, on processing and quality characteristics of reduced and low-fat dry fermented sausage was studied. Weight loss, pH, and water activity of the sausage were affected (P<0.05) by fat reduction and processing time. Low lipid oxidation levels were observed during processing time irrespective of the dry sausage formulation. The fat content for normal-fat (NF), reduced-fat (RF) and low-fat (LF) sausages was 29.96%, 19.69% and 13.79%, respectively. This means an energy reduction of about 14.8% for RF and 24.5% for LF. As the fat content decreases there is an increase (P<0.05) in hardness and chewiness and a decrease (P<0.05) in cohesiveness. No differences were appreciated (P>0.05) in the presence of microorganisms as a result of the reformulation. The sensory panel considered that NF and RF products had acceptable sensory characteristics. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  19. Automated matching software for clinical trials eligibility: measuring efficiency and flexibility.

    PubMed

    Penberthy, Lynne; Brown, Richard; Puma, Federico; Dahman, Bassam

    2010-05-01

    Clinical trials (CT) serve as the medium that translates clinical research into standards of care. Low or slow recruitment leads to delays in delivery of new therapies to the public. Determination of eligibility in all patients is one of the most important factors to assure unbiased results from the clinical trials process and represents the first step in addressing the issue of underrepresentation and equal access to clinical trials. This is a pilot project evaluating the efficiency, flexibility, and generalizability of an automated clinical trials eligibility screening tool across 5 different clinical trials and clinical trial scenarios. There was a substantial total savings during the study period in research staff time spent in evaluating patients for eligibility, ranging from 165 h to 1329 h. There was a marked enhancement in efficiency with the automated system for all but one study in the pilot. The ratio of mean staff time required per eligible patient identified ranged from 0.8 to 19.4 for the manual versus the automated process. The results of this study demonstrate that automation offers an opportunity to reduce the burden of the manual processes required for CT eligibility screening and to assure that all patients have an opportunity to be evaluated for participation in clinical trials as appropriate. The automated process greatly reduces the time spent on eligibility screening compared with the traditional manual process by effectively transferring the load of the eligibility assessment process to the computer. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  20. Precise timing correlation in telemetry recording and processing systems

    NASA Technical Reports Server (NTRS)

    Pickett, R. B.; Matthews, F. L.

    1973-01-01

    Independent PCM telemetry data signals received from missiles must be correlated to within ±100 microseconds for comparison with radar data. Tests have been conducted to determine RF antenna receiving system delays; delays associated with wideband analog tape recorders used in the recording, dubbing and reproducing processes; and uncertainties associated with computer-processed time tag data. Several methods used in the recording of timing are evaluated. Through the application of a special time tagging technique, the cumulative timing bias from all sources is determined and the bias removed from final data. Conclusions show that relative time differences in receiving, recording, playback and processing of two telemetry links can be determined with ±4 microsecond accuracy. In addition, the absolute time tag error (with respect to UTC) can be reduced to less than 15 microseconds. This investigation is believed to be the first attempt to identify the individual error contributions within the telemetry system and to describe the methods of error reduction and correction.

  1. The Launch Processing System for Space Shuttle.

    NASA Technical Reports Server (NTRS)

    Springer, D. A.

    1973-01-01

    In order to reduce costs and accelerate vehicle turnaround, a single automated system will be developed to support shuttle launch site operations, replacing a multiplicity of systems used in previous programs. The Launch Processing System will provide real-time control, data analysis, and information display for the checkout, servicing, launch, landing, and refurbishment of the launch vehicles, payloads, and all ground support systems. It will also provide real-time and historical data retrieval for management and sustaining engineering (test records and procedures, logistics, configuration control, scheduling, etc.).

  2. [CMACPAR: a modified parallel neuro-controller for control processes].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems, such as control processes. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and the parallel implementation of a modified scheme of the cerebellar model CMAC for the n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.
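
    CMAC controllers of this family approximate a function by coarse-coding the input with several overlapping tilings and updating only the few memory cells a given input activates, which is what yields fast, local learning and a naturally parallel structure. The sketch below is a generic one-dimensional CMAC written for illustration; it is not the CMACPAR implementation and its parameters are arbitrary.

```python
import numpy as np

class CMAC1D:
    """Minimal 1-D CMAC: several offset tilings, local LMS weight updates."""
    def __init__(self, n_tilings=8, n_tiles=32, x_min=0.0, x_max=1.0, lr=0.2):
        self.n_tilings, self.n_tiles, self.lr = n_tilings, n_tiles, lr
        self.x_min, self.width = x_min, (x_max - x_min) / n_tiles
        self.w = np.zeros((n_tilings, n_tiles + 1))   # one weight table per tiling

    def _active_tiles(self, x):
        # Each tiling is shifted by a fraction of a tile width (coarse coding).
        for t in range(self.n_tilings):
            offset = t / self.n_tilings * self.width
            idx = int((x - self.x_min + offset) / self.width)
            yield t, min(idx, self.n_tiles)

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active_tiles(x))

    def update(self, x, target):
        err = target - self.predict(x)
        for t, i in self._active_tiles(x):            # only active cells are touched
            self.w[t, i] += self.lr * err / self.n_tilings

cmac = CMAC1D()
xs = np.random.default_rng(2).uniform(0, 1, 2000)
for x in xs:
    cmac.update(x, np.sin(2 * np.pi * x))             # learn a test function
print("prediction at 0.25:", cmac.predict(0.25))       # ~ sin(pi/2) = 1
```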

  3. Method for Reducing the Refresh Rate of Fiber Bragg Grating Sensors

    NASA Technical Reports Server (NTRS)

    Parker, Allen R., Jr. (Inventor)

    2014-01-01

    The invention provides a method of obtaining the FBG data in final form (transforming the raw data into frequency and location data) by taking the raw FBG sensor data and dividing the data into a plurality of segments over time. By transforming the raw data into a plurality of smaller segments, processing time is significantly decreased. Also, by defining the segments over time, only one processing step is required. By employing this method, the refresh rate of FBG sensor systems can be improved from about 1 scan per second to over 20 scans per second.
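
    As a generic illustration of the segment-wise idea (not the patented method), the sketch below processes a raw sensor stream in short, fixed-length segments as they arrive instead of transforming one long record at the end, so an updated result is available after every segment. The sample rate, segment length and test signal are invented for the example.

```python
import numpy as np

# Generic segment-wise processing of a raw sensor stream: transform short
# segments as they arrive so results refresh many times per acquisition.
rng = np.random.default_rng(3)
fs = 10_000                                  # assumed sample rate (Hz)
raw = (np.sin(2 * np.pi * 1200 * np.arange(10 * fs) / fs)
       + 0.1 * rng.standard_normal(10 * fs))

segment_len = fs // 20                       # 50 ms segments -> 20 refreshes per second
for start in range(0, len(raw), segment_len):
    seg = raw[start:start + segment_len]
    spectrum = np.abs(np.fft.rfft(seg))
    peak_hz = np.fft.rfftfreq(len(seg), 1 / fs)[np.argmax(spectrum[1:]) + 1]
    # Each segment yields an updated estimate (here, the dominant frequency ~1200 Hz).
print("last segment peak:", round(peak_hz), "Hz")
```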

  4. Time-course human urine proteomics in space-flight simulation experiments.

    PubMed

    Binder, Hans; Wirth, Henry; Arakelyan, Arsen; Lembcke, Kathrin; Tiys, Evgeny S; Ivanisenko, Vladimir A; Kolchanov, Nikolay A; Kononikhin, Alexey; Popov, Igor; Nikolaev, Evgeny N; Pastushkova, Lyudmila; Larina, Irina M

    2014-01-01

    Long-term space travel simulation experiments have enabled the discovery of different aspects of human metabolism, such as the complexity of NaCl salt balance. Detailed proteomics data were collected during the Mars105 isolation experiment, enabling a deeper insight into the molecular processes involved. We studied the abundance of about two thousand proteins extracted from urine samples of six volunteers collected weekly during a 105-day isolation experiment under controlled dietary conditions including progressive reduction of salt consumption. Machine learning using self-organizing maps (SOM) in combination with different analysis tools was applied to describe the time trajectories of protein abundance in urine. The method enables a personalized and intuitive view of the physiological state of the volunteers. The abundance of more than one half of the proteins measured clearly changes in the course of the experiment. The trajectory splits roughly into three time ranges: an early one (weeks 1-6), an intermediate one (weeks 7-11) and a late one (weeks 12-15). Regulatory modes associated with distinct biological processes were identified using previous knowledge by applying enrichment and pathway flow analysis. Early protein activation modes can be related to immune response and inflammatory processes, activation at intermediate times to developmental and proliferative processes, and late activations to stress and responses to chemicals. The protein abundance profiles support previous results about alternative mechanisms of salt storage in an osmotically inactive form. We hypothesize that reduced NaCl consumption of about 6 g/day will reduce or even prevent the activation of the inflammatory processes observed in the early time range of isolation. SOM machine learning in combination with analysis methods of class discovery and functional annotation enables the straightforward analysis of complex proteomics data sets generated by means of mass spectrometry.
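
    To make the SOM step concrete, the sketch below trains a minimal self-organizing map on synthetic weekly protein-abundance profiles and groups proteins with similar time trajectories onto nearby map units. The data, map size and training schedule are invented for illustration and bear no relation to the study's measurements or its SOM configuration.

```python
import numpy as np

# Minimal SOM over synthetic protein time profiles (15 weekly abundance values
# per protein); placeholder data, not the study's measurements.
rng = np.random.default_rng(5)
profiles = np.vstack([
    rng.normal(loc=np.linspace(1, 0, 15), scale=0.1, size=(200, 15)),   # "early" proteins
    rng.normal(loc=np.linspace(0, 1, 15), scale=0.1, size=(200, 15)),   # "late" proteins
])

grid = 6                                   # 6x6 map
weights = rng.random((grid * grid, 15))
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], dtype=float)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)
    radius = max(1.0, grid / 2 * (1 - epoch / 20))
    for x in rng.permutation(profiles):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))        # best-matching unit
        dist = np.linalg.norm(coords - coords[bmu], axis=1)
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))                 # neighbourhood kernel
        weights += lr * h[:, None] * (x - weights)                   # pull neighbours toward x

# Proteins mapping to nearby units share similar time trajectories.
bmus = {int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in profiles}
print("map units occupied:", len(bmus), "of", grid * grid)
```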

  5. A pattern-based method to automate mask inspection files

    NASA Astrophysics Data System (ADS)

    Kamal Baharin, Ezni Aznida Binti; Muhsain, Mohamad Fahmi Bin; Ahmad Ibrahim, Muhamad Asraf Bin; Ahmad Noorhani, Ahmad Nurul Ihsan Bin; Sweis, Jason; Lai, Ya-Chieh; Hurat, Philippe

    2017-03-01

    Mask inspection is a critical step in the mask manufacturing process in order to ensure all dimensions printed are within the needed tolerances. This becomes even more challenging as the device nodes shrink and the complexity of the tapeout increases. Thus, the amount of measurement points and their critical dimension (CD) types are increasing to ensure the quality of the mask. In addition to the mask quality, there is a significant amount of manpower needed when the preparation and debugging of this process are not automated. By utilizing a novel pattern search technology with the ability to measure and report match region scan-line (edge) measurements, we can create a flow to find, measure and mark all metrology locations of interest and provide this automated report to the mask shop for inspection. A digital library is created based on the technology product and node which contains the test patterns to be measured. This paper will discuss how these digital libraries will be generated and then utilized. As a time-critical part of the manufacturing process, this can also reduce the data preparation cycle time, minimize the amount of manual/human error in naming and measuring the various locations, reduce the risk of wrong/missing CD locations, and reduce the amount of manpower needed overall. We will also review an example pattern and how the reporting structure to the mask shop can be processed. This entire process can now be fully automated.

  6. Processing deficits and the mediation of positive affect in persuasion.

    PubMed

    Mackie, D M; Worth, L T

    1989-07-01

    Motivational and cognitive mediators of the reduced processing of persuasive messages shown by recipients in a positive mood were tested. Ss in positive or neutral moods read strong or weak counterattitudinal advocacies for either a limited time or for as long as they wanted. Under limited exposure conditions, neutral mood Ss showed attitude change indicative of systematic processing, whereas positive mood Ss showed no differentiation of strong and weak versions of the message. When message exposure was unlimited, positive mood Ss viewed the message longer than did neutral mood Ss and systematically processed it rather than relying on persuasion heuristics. These findings were replicated with 2 manipulations of mood and 2 different attitude issues. We interpret the results as providing evidence that reduced cognitive capacity to process the message contributes to the decrements shown by positive mood Ss.

  7. A vertical-energy-thresholding procedure for data reduction with multiple complex curves.

    PubMed

    Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi

    2006-10-01

    Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
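
    The thresholding idea, keeping only the highest-energy wavelet coefficients needed to bound the reconstruction error, can be sketched with PyWavelets as below. The 99% energy rule and the test curve are generic stand-ins for the paper's VET threshold selection, not a reproduction of it.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024)
signal = (np.sin(8 * np.pi * t) + 0.3 * np.sin(40 * np.pi * t)
          + 0.05 * rng.standard_normal(t.size))

# Wavelet decomposition of one curve; flatten all coefficients into one array.
coeffs = pywt.wavedec(signal, "db4", level=5)
flat, slices = pywt.coeffs_to_array(coeffs)

# Keep the smallest set of coefficients holding (say) 99% of the total energy.
order = np.argsort(np.abs(flat))[::-1]
energy = np.cumsum(flat[order] ** 2) / np.sum(flat ** 2)
keep = order[: np.searchsorted(energy, 0.99) + 1]

compressed = np.zeros_like(flat)
compressed[keep] = flat[keep]          # the "reduced-size" data for later analyses
reconstructed = pywt.waverec(
    pywt.array_to_coeffs(compressed, slices, output_format="wavedec"), "db4")

print("kept", keep.size, "of", flat.size, "coefficients")
print("relative reconstruction error:",
      np.linalg.norm(signal - reconstructed[: signal.size]) / np.linalg.norm(signal))
```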

  8. Temporal orienting precedes intersensory attention and has opposing effects on early evoked brain activity.

    PubMed

    Keil, Julian; Pomper, Ulrich; Feuerbach, Nele; Senkowski, Daniel

    2017-03-01

    Intersensory attention (IA) describes the process of directing attention to a specific modality. Temporal orienting (TO) characterizes directing attention to a specific moment in time. Previously, studies indicated that these two processes could have opposite effects on early evoked brain activity. The exact time-course and processing stages of both processes are still unknown. In this human electroencephalography study, we investigated the effects of IA and TO on visuo-tactile stimulus processing within one paradigm. IA was manipulated by presenting auditory cues to indicate whether participants should detect visual or tactile targets in visuo-tactile stimuli. TO was manipulated by presenting stimuli block-wise at fixed or variable inter-stimulus intervals. We observed that TO affects evoked activity to visuo-tactile stimuli prior to IA. Moreover, we found that TO reduces the amplitude of early evoked brain activity, whereas IA enhances it. Using beamformer source-localization, we observed that IA increases neural responses in sensory areas of the attended modality whereas TO reduces brain activity in widespread cortical areas. Based on these findings we derive an updated working model for the effects of temporal and intersensory attention on early evoked brain activity. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Reduced temporal processing in older, normal-hearing listeners evident from electrophysiological responses to shifts in interaural time difference.

    PubMed

    Ozmeral, Erol J; Eddins, David A; Eddins, Ann C

    2016-12-01

    Previous electrophysiological studies of interaural time difference (ITD) processing have demonstrated that ITDs are represented by a nontopographic population rate code. Rather than narrow tuning to ITDs, neural channels have broad tuning to ITDs in either the left or right auditory hemifield, and the relative activity between the channels determines the perceived lateralization of the sound. With advancing age, spatial perception weakens and poor temporal processing contributes to declining spatial acuity. At present, it is unclear whether age-related temporal processing deficits are due to poor inhibitory controls in the auditory system or degraded neural synchrony at the periphery. Cortical processing of spatial cues based on a hemifield code are susceptible to potential age-related physiological changes. We consider two distinct predictions of age-related changes to ITD sensitivity: declines in inhibitory mechanisms would lead to increased excitation and medial shifts to rate-azimuth functions, whereas a general reduction in neural synchrony would lead to reduced excitation and shallower slopes in the rate-azimuth function. The current study tested these possibilities by measuring an evoked response to ITD shifts in a narrow-band noise. Results were more in line with the latter outcome, both from measured latencies and amplitudes of the global field potentials and source-localized waveforms in the left and right auditory cortices. The measured responses for older listeners also tended to have reduced asymmetric distribution of activity in response to ITD shifts, which is consistent with other sensory and cognitive processing models of aging. Copyright © 2016 the American Physiological Society.

  10. Short mechanical biological treatment of municipal solid waste allows landfill impact reduction saving waste energy content.

    PubMed

    Scaglia, Barbara; Salati, Silvia; Di Gregorio, Alessandra; Carrera, Alberto; Tambone, Fulvia; Adani, Fabrizio

    2013-09-01

    The aim of this work was to evaluate the effects of a full-scale MBT process (28 d) in removing inhibition conditions for subsequent biogas (ABP) production in landfill and in reducing total waste impact. For this purpose the organic fraction of MSW was treated in a full-scale MBT plant and subsequently incubated, alongside untreated waste, in simulated landfills for one year. Results showed that untreated landfilled waste gave no reduction in total ABP. In contrast, the MBT process reduced ABP by 44%, and subsequent incubation for one year in landfill gave a total ABP reduction of 86%. This ABP reduction corresponded to an MBT process 22 weeks in length, according to the predictive regression developed for ABP reduction vs. MBT time. Therefore, a short MBT reduced landfill impact while preserving energy content (ABP) to be recovered later by bioreactor technology, since the pre-treatment's partial biostabilization of the waste avoided process inhibition. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Rhamnolipid produced by Pseudomonas aeruginosa USM-AR2 facilitates crude oil distillation.

    PubMed

    Asshifa Md Noh, Nur; Al-Ashraf Abdullah, Amirul; Nasir Mohamad Ibrahim, Mohamad; Ramli Mohd Yahya, Ahmad

    2012-01-01

    A biosurfactant-producing and hydrocarbon-utilizing bacterium, Pseudomonas aeruginosa USM-AR2, was used to assist conventional distillation. Batch cultivation in a bioreactor gave a biomass of 9.4 g L(-1) and a rhamnolipid concentration of 2.4 g L(-1) after 72 h. Biosurfactant activity (rhamnolipid) was detected by the orcinol assay, emulsification index and drop collapse test. Pretreatment of crude oil TK-1 and AG-2 with a culture of P. aeruginosa USM-AR2 containing rhamnolipid was proven to facilitate the distillation process by reducing its duration without reducing the quality of the petroleum distillate. It showed a potential for reducing the duration of the distillation process, with at least 2- to 3-fold decreases in distillation time. This is supported by GC-MS analysis of the distillate, where there was no difference between compounds detected in distillate obtained from treated or untreated crude oil. Calorimetric tests showed the calorific value of the distillate remained the same with or without treatment. These two factors confirmed that the quality of the distillate was not compromised and that the incubation process by the microbial culture did not over-degrade the oil. The rhamnolipid produced by this culture was the main factor that enhanced the distillation performance, which is related to the emulsification of hydrocarbon chains in the crude oil. This biotreatment may play an important role in improving the existing conventional refinery and distillation process. Reducing the distillation time by pretreating the crude oil with a natural biosynthetic product translates to energy and cost savings in producing petroleum products.

  12. Solvent Effects on the Photothermal Regeneration of CO2 in Monoethanolamine Nanofluids

    DOE PAGES

    Nguyen, Du; Stolaroff, Joshuah; Esser-Kahn, Aaron

    2015-11-02

    A potential approach to reduce the energy costs associated with carbon capture is to use external and renewable energy sources. The photothermal release of CO2 from monoethanolamine mediated by nanoparticles is a unique solution to this problem. When combined with light-absorbing nanoparticles, vapor bubbles form inside the capture solution and release the CO2 without heating the bulk solvent. The mechanism by which CO2 is released remained unclear, and understanding this process would improve the efficiency of photothermal CO2 release. Here we report the use of different cosolvents to improve or reduce the photothermal regeneration of CO2 captured by monoethanolamine. We found that properties that reduce the residence time of the gas bubbles (viscosity, boiling point, and convection direction) can enhance the regeneration efficiencies. The reduction of bubble residence times minimizes the reabsorption of CO2 back into the capture solvent, where bulk temperatures remain lower than in the localized area surrounding the nanoparticle. These properties shed light on the mechanism of release and indicated methods for improving the efficiency of the process. We used this knowledge to develop an improved photothermal CO2 regeneration system in a continuously flowing setup. Finally, using techniques to reduce residence time in the continuously flowing setup, such as alternative cosolvents and smaller fluid volumes, resulted in regeneration efficiency enhancements of over 200%.

  13. Solvent Effects on the Photothermal Regeneration of CO2 in Monoethanolamine Nanofluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Du; Stolaroff, Joshuah; Esser-Kahn, Aaron

    A potential approach to reduce the energy costs associated with carbon capture is to use external and renewable energy sources. The photothermal release of CO2 from monoethanolamine mediated by nanoparticles is a unique solution to this problem. When combined with light-absorbing nanoparticles, vapor bubbles form inside the capture solution and release the CO2 without heating the bulk solvent. The mechanism by which CO2 is released remained unclear, and understanding this process would improve the efficiency of photothermal CO2 release. Here we report the use of different cosolvents to improve or reduce the photothermal regeneration of CO2 captured by monoethanolamine. We found that properties that reduce the residence time of the gas bubbles (viscosity, boiling point, and convection direction) can enhance the regeneration efficiencies. The reduction of bubble residence times minimizes the reabsorption of CO2 back into the capture solvent, where bulk temperatures remain lower than in the localized area surrounding the nanoparticle. These properties shed light on the mechanism of release and indicated methods for improving the efficiency of the process. We used this knowledge to develop an improved photothermal CO2 regeneration system in a continuously flowing setup. Finally, using techniques to reduce residence time in the continuously flowing setup, such as alternative cosolvents and smaller fluid volumes, resulted in regeneration efficiency enhancements of over 200%.

  14. Cleaning conveyor belts in the chicken-cutting area of a poultry processing plant with 45°C water.

    PubMed

    Soares, V M; Pereira, J G; Zanette, C M; Nero, L A; Pinto, J P A N; Barcellos, V C; Bersot, L S

    2014-03-01

    Conveyor belts are widely used in food handling areas, especially in poultry processing plants. Because they are in direct contact with food, the Brazilian health authority requires conveyor belts to be continuously cleaned with hot water under pressure. The use of water in this procedure has been questioned based on the hypothesis that water may further disseminate microorganisms but not effectively reduce the organic material on the surface. Moreover, reducing the use of water in processing may contribute to a reduction in costs and emission of effluents. However, no consistent evidence in support of removing water during conveyor belt cleaning has been reported. Therefore, the objective of the present study was to compare the bacterial counts on conveyor belts that were or were not continuously cleaned with hot water under pressure. Surface samples from conveyor belts (cleaned or not cleaned) were collected at three different times during operation (T1, after the preoperational cleaning [5 a.m.]; T2, after the first work shift [4 p.m.]; and T3, after the second work shift [1:30 a.m.]) in a poultry meat processing facility, and the samples were subjected to mesophilic and enterobacterial counts. For Enterobacteriaceae, no significant differences were observed between the conveyor belts, independent of the time of sampling or the cleaning process. No significant differences were observed between the counts of mesophilic bacteria at the distinct times of sampling on the conveyor belt that had not been subjected to continuous cleaning with water at 45°C. When comparing similar periods of sampling, no significant differences were observed between the mesophilic counts obtained from the conveyor belts that were or were not subjected to continuous cleaning with water at 45°C. Continuous cleaning with water did not significantly reduce microorganism counts, suggesting the possibility of discarding this procedure in chicken processing.

  15. A simple and reliable method reducing sulfate to sulfide for multiple sulfur isotope analysis.

    PubMed

    Geng, Lei; Savarino, Joel; Savarino, Clara A; Caillon, Nicolas; Cartigny, Pierre; Hattori, Shohei; Ishino, Sakiko; Yoshida, Naohiro

    2018-02-28

    Precise analysis of the four sulfur isotopes of sulfate in geological and environmental samples provides the means to extract unique information in wide geological contexts. Reduction of sulfate to sulfide is the first step to access such information. The conventional reduction method suffers from a cumbersome distillation system, long reaction time and a large volume of reducing solution. We present a new and simple method enabling multiple samples to be processed at one time with a much reduced volume of reducing solution. One mL of reducing solution made of HI and NaH2PO2 was added to a septum glass tube with dry sulfate. The tube was heated at 124°C and the produced H2S was purged with inert gas (He or N2) through gas-washing tubes and then collected in NaOH solution. The collected H2S was converted into Ag2S by adding AgNO3 solution, and the co-precipitated Ag2O was removed by adding a few drops of concentrated HNO3. Within 2-3 h, a 100% yield was observed for samples with 0.2-2.5 μmol Na2SO4. The reduction rate was much slower for BaSO4 and a complete reduction was not observed. International sulfur reference materials, NBS-127, SO-5 and SO-6, were processed with this method, and the measured versus accepted δ34S values yielded a linear regression line with a slope of 0.99 ± 0.01 and an R2 value of 0.998. The new methodology is easy to handle and allows us to process multiple samples at a time. It has also demonstrated good reproducibility in terms of H2S yield and for further isotope analysis. It is thus a good alternative to the conventional manual method, especially when processing samples with a limited amount of sulfate available. © 2017 The Authors. Rapid Communications in Mass Spectrometry Published by John Wiley & Sons Ltd.

  16. Reduced laser-evoked potential habituation detects abnormal central pain processing in painful radiculopathy patients.

    PubMed

    Hüllemann, P; von der Brelie, C; Manthey, G; Düsterhöft, J; Helmers, A K; Synowitz, M; Baron, R

    2017-05-01

    Repetitive painful laser stimuli lead to physiological laser-evoked potential (LEP) habituation, measurable by a decrement of the N2/P2 amplitude. The time course of LEP-habituation is reduced in the capsaicin model for peripheral and central sensitization and in patients with migraine and fibromyalgia. In the present investigation, we aimed to assess the time course of LEP-habituation in a neuropathic pain syndrome, i.e. painful radiculopathy. At the side of radiating pain, four blocks of 25 painful laser stimuli each were applied to the ventral thigh at the L3 dermatome in 27 patients with painful radiculopathy. Inclusion criteria were (1) at least one neurological finding of radiculopathy, (2) low back pain with radiation into the foot and (3) a positive one-sided compression of the L5 and/or S1 root in the MRI. The time course of LEP-habituation was compared to 20 healthy height and age matched controls. Signs of peripheral (heat hyperalgesia) and central sensitization (dynamic mechanical allodynia and hyperalgesia) at the affected L5 or S1 dermatome were assessed with quantitative sensory testing. Painful radiculopathy patients showed decreased LEP-habituation compared to controls. Patients with signs of central sensitization showed a more prominent LEP-habituation decrease within the radiculopathy patient group. Laser-evoked potential habituation is reduced in painful radiculopathy patients, which indicates an abnormal central pain processing. Central sensitization seems to be a major contributor to abnormal LEP habituation. The LEP habituation paradigm might be useful as a clinical tool to assess central pain processing alterations in nociceptive and neuropathic pain conditions. Abnormal central pain processing in neuropathic pain conditions may be revealed with the laser-evoked potential habituation paradigm. In painful radiculopathy patients, LEP-habituation is reduced compared to healthy controls. © 2017 European Pain Federation - EFIC®.

  17. IJA: an efficient algorithm for query processing in sensor networks.

    PubMed

    Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa

    2011-01-01

    One of the main features of sensor networks is the processing of real-time state information after gathering the needed data from many domains. The component technologies of each node, called a sensor node, including physical sensors, processors, actuators and power supplies, have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes are considerably constrained: with their limited energy and memory resources, they have very little ability to process information compared to conventional computer systems. Thus, query processing over the nodes must respect these limitations. Because of these constraints, join operations in sensor networks are typically processed in a distributed manner over a set of nodes, and this has been studied. While simple queries, such as select and aggregate queries, have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper we propose and describe an Incremental Join Algorithm (IJA) for sensor networks that reduces the overhead caused by moving a join pair to the final join node and minimizes the communication cost, which is the main drain on the battery when processing distributed queries in sensor network environments. Simulation results show that the proposed IJA significantly reduces the number of bytes moved to join nodes compared to the popular synopsis join algorithm.

  18. IJA: An Efficient Algorithm for Query Processing in Sensor Networks

    PubMed Central

    Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa

    2011-01-01

    A key feature of sensor networks is the processing of real-time state information after gathering data from many domains. The component technologies of each node, called a sensor node, including physical sensors, processors, actuators and power supplies, have advanced significantly over the last decade. Thanks to this advanced technology, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes are considerably constrained: with their limited energy and memory resources, they have far less processing capability than conventional computer systems, so query processing over the nodes must respect these limitations. Because of these constraints, join operations in sensor networks are typically processed in a distributed manner over a set of nodes and have been studied in that setting. While simple queries, such as select and aggregate queries, in sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) for sensor networks that reduces the overhead caused by moving a join pair to the final join node and thereby minimizes the communication cost, which is the main consumer of the battery when processing distributed queries in sensor network environments. Simulation results show that the proposed IJA algorithm significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm. PMID:22319375
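
    The record does not give IJA's internal details, so the following is only a hedged, toy illustration of the cost metric the abstract optimizes: the number of bytes shipped to the node that performs the join. The network layout, tuple size, and the greedy choice of join site are assumptions made up for the example, not the published algorithm.

      # Toy model (not the published IJA): pick the join site that minimizes
      # the bytes shipped between two sensor nodes holding join candidates.

      def bytes_to_ship(tuples_a, tuples_b, site, tuple_size=16):
          """Bytes moved if `site` performs the join ('A', 'B', or 'C' = a third node)."""
          if site == "A":            # B's tuples travel to A
              return len(tuples_b) * tuple_size
          if site == "B":            # A's tuples travel to B
              return len(tuples_a) * tuple_size
          return (len(tuples_a) + len(tuples_b)) * tuple_size  # both travel to C

      def cheapest_join_site(tuples_a, tuples_b):
          costs = {s: bytes_to_ship(tuples_a, tuples_b, s) for s in ("A", "B", "C")}
          return min(costs, key=costs.get), costs

      # Example: node A holds 120 readings, node B holds 30.
      site, costs = cheapest_join_site(range(120), range(30))
      print(site, costs)   # -> 'A', since shipping B's 30 tuples is cheapest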

  19. Increasing Efficiency at the NTF by Optimizing Model AoA Positioning

    NASA Technical Reports Server (NTRS)

    Crawford, Bradley L.; Spells, Courtney

    2006-01-01

    The National Transonic Facility (NTF) at NASA Langley Research Center (LaRC) is a national resource for aeronautical research and development. The government, military and private industries rely on the capability of this facility for realistic flight data. Reducing the operation costs and keeping the NTF affordable is essential for aeronautics research. The NTF is undertaking an effort to reduce the time between data points during a pitch polar. This reduction is being driven by the operating costs of a cryogenic facility. If the time per data point can be reduced, a substantial cost savings can be realized from a reduction in liquid nitrogen (LN2) consumption. It is known that angle-of-attack (AoA) positioning is the longest lead-time item between points. In January 2005 a test was conducted at the NTF to determine the cause of the long lead-time so that an effort could be made to improve efficiency. The AoA signal at the NTF originates from onboard instrumentation and then travels through a number of different systems including the signal conditioner, digital voltmeter, and the data system where the AoA angle is calculated. It is then fed into a closed-loop control system that sets the model position. Each process along this path adds to the time per data point, affecting the efficiency of the data-taking process. Due to the nature of the closed-loop feedback AoA control and the signal path, it takes approximately 18 seconds to take one pitch-pause point with a typical AoA increment. Options are being investigated to reduce the time delay between points by modifying the signal path. These options include: reduced signal filtering, using analog channels instead of a digital voltmeter (DVM), re-routing the signal directly to the AoA control computer and implementing new control algorithms. Each of these has the potential to reduce the positioning time and together the savings could be significant. These time-saving efforts are essential but must be weighed against possible loss of data quality. For example, a reduction in filtering can introduce noise into the signal and using analog channels could result in some loss of accuracy. Data quality assessments need to be performed concurrently with time-saving techniques since data quality parameters are essential in maintaining facility integrity. This paper will highlight time-saving efforts being undertaken or studied at the NTF. It will outline the instrumentation and computer systems involved in setting the model pitch attitude, then suggest changes to the process and discuss how these system changes would affect the time between data points. It also discusses the issue of data quality and how the potential efficiency changes in the system could affect it. Lastly, it will discuss the possibility of using an open-loop control system and give some pros and cons of this method.
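
    As a rough back-of-the-envelope illustration of why signal filtering dominates the time per pitch-pause point, consider a single-pole low-pass filter: its settling time grows as the cutoff frequency is lowered. The cutoff values below are illustrative only and are not taken from the NTF instrumentation.

      import math

      def settling_time(cutoff_hz, tolerance=0.001):
          """Time for a first-order low-pass filter's step response to settle
          within `tolerance` of its final value (t = -tau * ln(tolerance))."""
          tau = 1.0 / (2.0 * math.pi * cutoff_hz)
          return -tau * math.log(tolerance)

      for fc in (0.1, 1.0, 10.0):          # hypothetical filter cutoffs in Hz
          print(f"{fc:5.1f} Hz cutoff -> settle in {settling_time(fc):6.2f} s")
      # Relaxing the filtering (raising the cutoff) shortens the wait per data
      # point, at the cost of letting more noise into the AoA measurement.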

  20. Multiple objects tracking with HOGs matching in circular windows

    NASA Astrophysics Data System (ADS)

    Miramontes-Jaramillo, Daniel; Kober, Vitaly; Díaz-Ramírez, Víctor H.

    2014-09-01

    In recent years, with the development of new technologies like smart TVs, Kinect, Google Glass and Oculus Rift, tracking applications have become very important. When tracking uses a matching algorithm, a good prediction algorithm is required to reduce the search area for each tracked object as well as the processing time. In this work, we analyze the performance of different tracking algorithms based on prediction and matching for real-time tracking of multiple objects. The matching algorithm employed utilizes histograms of oriented gradients; it carries out matching in circular windows and possesses rotation invariance as well as tolerance to viewpoint and scale changes. The proposed algorithm is implemented on a personal computer with a GPU, and its performance is analyzed in terms of processing time in real scenarios. Such an implementation takes advantage of current technologies and helps to process video sequences in real time for tracking several objects simultaneously.
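
    A minimal sketch of the kind of descriptor the abstract describes: an orientation histogram accumulated over a circular window and compared under circular shifts so that matching tolerates in-plane rotation. The bin count, window radius, and intersection score are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def circular_hog(image, cx, cy, radius, bins=16):
          """Histogram of gradient orientations inside a circular window."""
          gy, gx = np.gradient(image.astype(float))
          mag = np.hypot(gx, gy)
          ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
          yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
          mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
          hist, _ = np.histogram(ang[mask], bins=bins, range=(0, 2 * np.pi),
                                 weights=mag[mask])
          return hist / (hist.sum() + 1e-9)

      def rotation_invariant_match(h1, h2):
          """Best histogram-intersection score over all circular shifts of h2."""
          return max(np.minimum(h1, np.roll(h2, s)).sum() for s in range(len(h2)))

      # Usage sketch: a score close to 1.0 means the circular windows look alike.
      img = np.random.rand(120, 160)
      h_ref = circular_hog(img, 80, 60, 20)
      h_cand = circular_hog(img, 82, 61, 20)
      print(rotation_invariant_match(h_ref, h_cand))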

  1. Occupational Noise Reduction in CNC Striping Process

    NASA Astrophysics Data System (ADS)

    Mahmad Khairai, Kamarulzaman; Shamime Salleh, Nurul; Razlan Yusoff, Ahmad

    2018-03-01

    Noise-induced hearing loss from high-level exposure is a common occupational hazard. In the CNC striping process, employees exposed to high noise levels over a long period, such as an 8-hour shift, risk hearing loss as well as physical and psychological stress that reduce productivity. In this paper, the high noise levels of the CNC striping process are measured and reduced to within the permissible noise exposure. The first condition is with all machines shut down, and the second condition is with all CNC machines in operation. For both conditions, noise exposures were measured to evaluate the noise problems and sources. After improvements were made, the noise exposures were measured again to evaluate the effectiveness of the reduction. The initial average noise level in the first condition was 95.797 dB(A); after a leaking pneumatic system was repaired, the noise was reduced to 55.517 dB(A). The average noise level in the second condition was 109.340 dB(A); after six machines were gathered in one area and that area was covered with a plastic curtain, the noise was reduced to 95.209 dB(A). In conclusion, the noise exposure level at the CNC striping machine is high and exceeds the permissible noise exposure, but it can be reduced to acceptable levels. The reduction of noise levels in CNC striping processes enhanced productivity in the industry.
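
    For context on the reported dB(A) figures, sound levels from independent sources combine on a logarithmic (energy) scale rather than arithmetically. The snippet below shows the standard energy-summation formula; the machine levels used are made-up examples, not the measurements from this study.

      import math

      def combine_levels(levels_db):
          """Energy sum of independent sound pressure levels, in dB."""
          return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

      # Six hypothetical machines at 101 dB(A) each:
      print(round(combine_levels([101.0] * 6), 1))   # ~108.8 dB(A)

      # Halving the sound energy (e.g. enclosing half the sources) only
      # lowers the combined level by about 3 dB:
      print(round(combine_levels([101.0] * 3), 1))   # ~105.8 dB(A)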

  2. Fast and fully-scalable synthesis of reduced graphene oxide

    NASA Astrophysics Data System (ADS)

    Abdolhosseinzadeh, Sina; Asgharzadeh, Hamed; Seop Kim, Hyoung

    2015-05-01

    Exfoliation of graphite is a promising approach for large-scale production of graphene. Oxidation of graphite effectively facilitates the exfoliation process, yet necessitates several lengthy washing and reduction processes to convert the exfoliated graphite oxide (graphene oxide, GO) to reduced graphene oxide (RGO). Although filtration, centrifugation and dialysis have been frequently used in the washing stage, none of them is favorable for large-scale production. Here, we report the synthesis of RGO by sonication-assisted oxidation of graphite in a solution of potassium permanganate and concentrated sulfuric acid followed by reduction with ascorbic acid prior to any washing processes. GO loses its hydrophilicity during the reduction stage which facilitates the washing step and reduces the time required for production of RGO. Furthermore, simultaneous oxidation and exfoliation significantly enhance the yield of few-layer GO. We hope this one-pot and fully-scalable protocol paves the road toward out of lab applications of graphene.

  3. Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance

    NASA Astrophysics Data System (ADS)

    Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao

    2016-01-01

    Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure. Fiber bundle sensors are widely used for tip-timing measurement. However, the variation of clearance between the sensor and the blade will bring a tip-timing error to fiber bundle sensors due to the change in signal amplitude. This article presents methods based on software and hardware to reduce the error caused by the tip clearance change. The software method utilizes both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments are conducted and the results prove that both methods can effectively reduce the impact of tip clearance variation on the blade tip-timing and improve the accuracy of measurements.
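
    A compact sketch of the software idea described above: if the blade-arrival time is taken as the midpoint between the rising-edge and falling-edge threshold crossings, a change in pulse amplitude (from tip-clearance variation) shifts both crossings in opposite directions and largely cancels. The pulse shape and threshold here are assumptions for illustration, not the authors' calibration.

      import numpy as np

      def arrival_time(t, signal, threshold):
          """Midpoint of rising and falling threshold crossings (linear interp)."""
          above = signal >= threshold
          rise = np.where(~above[:-1] & above[1:])[0][0]
          fall = np.where(above[:-1] & ~above[1:])[0][0]
          def cross(i):
              x = [signal[i], signal[i + 1]]
              y = [t[i], t[i + 1]]
              if x[0] > x[1]:                      # falling edge: flip for np.interp
                  x, y = x[::-1], y[::-1]
              return np.interp(threshold, x, y)
          return 0.5 * (cross(rise) + cross(fall))

      t = np.linspace(0, 1e-3, 2000)               # 1 ms record
      pulse = np.exp(-((t - 5e-4) / 5e-5) ** 2)    # idealized tip-timing pulse
      for amp in (1.0, 0.6):                       # tip clearance changes amplitude
          print(arrival_time(t, amp * pulse, threshold=0.3))
      # Both amplitudes give (nearly) the same arrival time, ~5.0e-4 s.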

  4. Investigation on the Practicality of Developing Reduced Thermal Models

    NASA Technical Reports Server (NTRS)

    Lombardi, Giancarlo; Yang, Kan

    2015-01-01

    Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models; namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, Advanced Topography Laser Altimeter System (ATLAS) on the Ice, Cloud, and Elevation Satellite 2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture the behavior of the models at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model brought about a reduction in run time, a large time savings was not observed, nor was there a linear relationship between the percentage of nodes reduced and the time saved. However, significant losses in accuracy were observed with greater model reduction. It was found that while reduced models are useful in decreasing run time, there exists a threshold of reduction where, once exceeded, the loss in accuracy outweighs the benefit from the reduced model run time.

  5. Improving wait times to care for individuals with multimorbidities and complex conditions using value stream mapping.

    PubMed

    Sampalli, Tara; Desy, Michel; Dhir, Minakshi; Edwards, Lynn; Dickson, Robert; Blackmore, Gail

    2015-04-05

    Recognizing the significant impact of wait times for care on individuals with complex chronic conditions, we applied a LEAN methodology, namely an adaptation of Value Stream Mapping (VSM), to meet the needs of people with multiple chronic conditions and to improve wait times without additional resources or funding. Over an 18-month period, staff applied a patient-centric approach that included the LEAN methodology of VSM to improve wait times to care. Our framework of evaluation was grounded in the needs and perspectives of patients and individuals waiting to receive care. Patient-centric views were obtained through surveys such as the Patient Assessment of Chronic Illness Care (PACIC) and process-engineering-based questions. In addition, VSM was used to identify non-value-added processes contributing to wait times. The care team successfully reduced wait times to 2 months in 2014, with no wait times for care anticipated in 2015. Increased patient engagement and satisfaction are also outcomes of this innovative initiative. In addition, successful transformations and implementation have resulted in resource efficiencies without an increase in costs. Patients have shown significant improvements in functional health following the Integrated Chronic Care Service (ICCS) intervention. The methodology will be applied to other chronic disease management areas in Capital Health and the province. Wait times to care in the management of multimorbidities and other complex conditions can place a significant burden not only on the affected individuals but also on the healthcare system. In this study, a novel and modified LEAN methodology has been applied to embed the voice of the patient in care delivery processes and to reduce wait times to care in the management of complex chronic conditions. © 2015 by Kerman University of Medical Sciences.

  6. Experts in Fast-Ball Sports Reduce Anticipation Timing Cost by Developing Inhibitory Control

    ERIC Educational Resources Information Center

    Nakamoto, Hiroki; Mori, Shiro

    2012-01-01

    The present study was conducted to examine the relationship between expertise in movement correction and rate of movement reprogramming within limited time periods, and to clarify the specific cognitive processes regarding superior reprogramming ability in experts. Event-related potentials (ERPs) were recorded in baseball experts (n = 7) and…

  7. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication of the inverse. Computing the inverse would normally require an iterative operation; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
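
    The thesis's exact formulation is not reproduced here; the snippet below is only a hedged illustration of the two ideas named in the abstract: Q16.16 fixed-point arithmetic and replacing a division with a multiplication by a polynomial (here quadratic) approximation of the reciprocal. The input range and coefficients come from a simple least-squares fit chosen for the example.

      import numpy as np

      FRAC_BITS = 16
      ONE = 1 << FRAC_BITS                      # Q16.16 fixed-point "1.0"

      def to_fix(x):   return int(round(x * ONE))
      def to_float(q): return q / ONE
      def fix_mul(a, b): return (a * b) >> FRAC_BITS   # fixed-point multiply

      # Quadratic approximation of 1/x on an assumed input range, x in [0.5, 2.0]
      xs = np.linspace(0.5, 2.0, 256)
      c2, c1, c0 = np.polyfit(xs, 1.0 / xs, 2)          # 1/x ~= c2*x^2 + c1*x + c0
      C2, C1, C0 = to_fix(c2), to_fix(c1), to_fix(c0)

      def fix_reciprocal(qx):
          """Approximate 1/x entirely with integer multiplies and adds."""
          return fix_mul(fix_mul(C2, qx), qx) + fix_mul(C1, qx) + C0

      qx = to_fix(1.3)
      approx = to_float(fix_mul(to_fix(0.7), fix_reciprocal(qx)))   # 0.7 / 1.3
      print(approx, 0.7 / 1.3)    # close agreement, with no division used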

  8. Convergence Acceleration and Documentation of CFD Codes for Turbomachinery Applications

    NASA Technical Reports Server (NTRS)

    Marquart, Jed E.

    2005-01-01

    The development and analysis of turbomachinery components for industrial and aerospace applications has been greatly enhanced in recent years through the advent of computational fluid dynamics (CFD) codes and techniques. Although the use of this technology has greatly reduced the time required to perform analysis and design, there still remains much room for improvement in the process. In particular, there is a steep learning curve associated with most turbomachinery CFD codes, and the computation times need to be reduced in order to facilitate their integration into standard work processes. Two turbomachinery codes have recently been developed by Dr. Daniel Dorney (MSFC) and Dr. Douglas Sondak (Boston University). These codes are entitled Aardvark (for 2-D and quasi 3-D simulations) and Phantom (for 3-D simulations). The codes utilize the General Equation Set (GES), structured grid methodology, and overset O- and H-grids. The codes have been used with success by Drs. Dorney and Sondak, as well as others within the turbomachinery community, to analyze engine components and other geometries. One of the primary objectives of this study was to establish a set of parametric input values which will enhance convergence rates for steady state simulations, as well as reduce the runtime required for unsteady cases. The goal is to reduce the turnaround time for CFD simulations, thus permitting more design parametrics to be run within a given time period. In addition, other code enhancements to reduce runtimes were investigated and implemented. The other primary goal of the study was to develop enhanced users' manuals for Aardvark and Phantom. These manuals are intended to answer most questions for new users, as well as provide valuable detailed information for the experienced user. The existence of detailed users' manuals will enable new users to become proficient with the codes, as well as reducing the dependency of new users on the code authors. In order to achieve the objectives listed, the following tasks were accomplished: 1) Parametric Study Of Preconditioning Parameters And Other Code Inputs; 2) Code Modifications To Reduce Runtimes; 3) Investigation Of Compiler Options To Reduce Code Runtime; and 4) Development/Enhancement of Users Manuals for Aardvark and Phantom.

  9. Applying Toyota production system techniques for medication delivery: improving hospital safety and efficiency.

    PubMed

    Newell, Terry L; Steinmetz-Malato, Laura L; Van Dyke, Deborah L

    2011-01-01

    The inpatient medication delivery system used at a large regional acute care hospital in the Midwest had become antiquated and inefficient. The existing 24-hr medication cart-fill exchange process with delivery to the patients' bedside did not always provide ordered medications to the nursing units when they were needed. In 2007 the principles of the Toyota Production System (TPS) were applied to the system. Project objectives were to improve medication safety and reduce the time needed for nurses to retrieve patient medications. A multidisciplinary team was formed that included representatives from nursing, pharmacy, informatics, quality, and various operational support departments. Team members were educated and trained in the tools and techniques of TPS, and then designed and implemented a new pull system benchmarking the TPS Ideal State model. The newly installed process, providing just-in-time medication availability, has measurably improved delivery processes as well as patient safety and satisfaction. Other positive outcomes have included improved nursing satisfaction, reduced nursing wait time for delivered medications, and improved efficiency in the pharmacy. After a successful pilot on two nursing units, the system is being extended to the rest of the hospital. © 2010 National Association for Healthcare Quality.

  10. Containerless Processing in Reduced Gravity Using the TEMPUS Facility

    NASA Technical Reports Server (NTRS)

    Roger, Jan R.; Robinson, Michael B.

    1996-01-01

    Containerless processing provides a high purity environment for the study of high-temperature, very reactive materials. It is an important method which provides access to the metastable state of an undercooled melt. In the absence of container walls, the nucleation rate is greatly reduced and undercooling up to (Tm-Tn)/Tm approx. 0.2 can be obtained, where Tm and Tn are the melting and nucleation temperatures, respectively. Electromagnetic levitation represents a method particularly well-suited for the study of metallic melts. The TEMPUS facility is a research instrument designed to perform electromagnetic levitation studies in reduced gravity. It provides temperatures up to 2600 C, levitation of several grams of material and access to the undercooled state for an extended period of time (up to hours).

  11. Using data to make decisions and drive results: a LEAN implementation strategy.

    PubMed

    Panning, Rick

    2005-03-28

    During the process of facility planning, Fairview Laboratory Services utilized LEAN manufacturing to maximize efficiency, simplify processes, and improve laboratory support of patient care services. By incorporating the LEAN program's concepts in our pilot program, we were able to reduce turnaround time by 50%, improve productivity by greater than 40%, reduce costs by 31%, save more than 440 square feet of space, standardize work practices, reduce errors and error potential, continuously measure performance, eliminate excess unused inventory and visual noise, and cross-train 100% of staff in the core laboratory. In addition, we trained a core team of people that is available to coordinate future LEAN projects in the laboratory and other areas of the organization.

  12. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times.

    PubMed

    Yang, Xin; Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP, considering both the completion time and the total energy consumption under stochastic processing times. A case study on GM Corporation verifies that the NSGA-II used in this paper is effective and has advantages in solving the proposed model compared with HPSO and PSO+SA. The idea and method of the paper can be generalized widely in the manufacturing industry, because it can reduce the energy consumption of energy-intensive manufacturing enterprises with less investment when the new approach is applied to existing systems.

  13. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times

    PubMed Central

    Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP, considering both the completion time and the total energy consumption under stochastic processing times. A case study on GM Corporation verifies that the NSGA-II used in this paper is effective and has advantages in solving the proposed model compared with HPSO and PSO+SA. The idea and method of the paper can be generalized widely in the manufacturing industry, because it can reduce the energy consumption of energy-intensive manufacturing enterprises with less investment when the new approach is applied to existing systems. PMID:27907163
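
    NSGA-II itself is not reproduced here; the short sketch below only illustrates the bi-objective bookkeeping the abstract relies on: evaluating a schedule's expected makespan and energy under sampled stochastic processing times and extracting the non-dominated (Pareto) set. The single-machine evaluation model and the +/-20% perturbation are assumptions for illustration.

      import random

      def evaluate(schedule, samples=50):
          """Monte-Carlo estimate of (makespan, energy) for a toy schedule.
          Each 'operation' is (nominal_time, power); times are perturbed +/-20%."""
          mk, en = 0.0, 0.0
          for _ in range(samples):
              times = [t * random.uniform(0.8, 1.2) for t, _ in schedule]
              mk += sum(times)                                   # single-machine toy
              en += sum(t * p for t, (_, p) in zip(times, schedule))
          return mk / samples, en / samples

      def pareto_front(points):
          """Indices of non-dominated points (both objectives minimized)."""
          front = []
          for i, p in enumerate(points):
              if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
                  front.append(i)
          return front

      candidates = [[(4, 2.0), (3, 1.5)], [(5, 1.0), (2, 1.2)], [(6, 0.8), (6, 0.8)]]
      objs = [evaluate(c) for c in candidates]
      print(objs, "non-dominated:", pareto_front(objs))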

  14. A Parallel Pipelined Renderer for the Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Chiueh, Tzi-Cker; Ma, Kwan-Liu

    1997-01-01

    This paper presents a strategy for efficiently rendering time-varying volume data sets on a distributed-memory parallel computer. Time-varying volume data take large storage space and visualizing them requires reading large files continuously or periodically throughout the course of the visualization process. Instead of using all the processors to collectively render one volume at a time, a pipelined rendering process is formed by partitioning processors into groups to render multiple volumes concurrently. In this way, the overall rendering time may be greatly reduced because the pipelined rendering tasks are overlapped with the I/O required to load each volume into a group of processors; moreover, parallelization overhead may be reduced as a result of partitioning the processors. We modify an existing parallel volume renderer to exploit various levels of rendering parallelism and to study how the partitioning of processors may lead to optimal rendering performance. Two factors which are important to the overall execution time are resource utilization efficiency and pipeline startup latency. The optimal partitioning configuration is the one that balances these two factors. Tests on Intel Paragon computers show that in general optimal partitionings do exist for a given rendering task and result in 40-50% saving in overall rendering time.

  15. Effects of morphology parameters on anti-icing performance in superhydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Nguyen, Thanh-Binh; Park, Seungchul; Lim, Hyuneui

    2018-03-01

    In this paper, we report the contributions of actual ice-substrate contact area and nanopillar height to passive anti-icing performance in terms of adhesion force and freezing time. Well-textured nanopillars with various parameters were fabricated via colloidal lithography and a dry etching process. The nanostructured quartz surface was coated with low-energy material to confer water-repellent properties. These superhydrophobic surfaces were investigated to determine the parameters essential for reducing adhesion strength and delaying freezing time. A well-textured surface with nanopillars of very small top diameter, regardless of height, could reduce adhesion force and delay freezing time in a subsequent de-icing process. Small top diameters of nanopillars also ensured the metastable Cassie-Baxter state based on energy barrier calculations. The results demonstrated the important role of areal fraction in anti-icing efficiency, and the negligible contribution of texture height. This insight into icing phenomena should lead to design of improved ice-phobic surfaces in the future.

  16. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurately heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest for specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized in order to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.
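
    A brief, hedged sketch of MUSIC-style spectral estimation applied to a pulse-like signal: delay-embed the signal, eigendecompose its covariance, and scan a frequency grid for peaks of the pseudospectrum formed from the noise subspace. The window length, subspace dimension, sampling rate, and the synthetic 72 bpm test signal are illustrative choices, not the authors' settings.

      import numpy as np

      def music_peak_freq(x, fs, model_order=2, window=64, f_grid=None):
          """Estimate the dominant frequency of `x` with a MUSIC pseudospectrum."""
          # Delay-embedded snapshots and their covariance matrix.
          snaps = np.array([x[i:i + window] for i in range(len(x) - window)])
          R = snaps.T @ snaps / snaps.shape[0]
          eigval, eigvec = np.linalg.eigh(R)            # ascending eigenvalues
          noise = eigvec[:, :window - model_order]      # noise subspace
          if f_grid is None:
              f_grid = np.linspace(0.5, 3.0, 500)       # 30-180 bpm, in Hz
          n = np.arange(window)
          pseudo = []
          for f in f_grid:
              a = np.exp(2j * np.pi * f * n / fs)       # steering vector
              pseudo.append(1.0 / np.linalg.norm(noise.conj().T @ a) ** 2)
          return f_grid[int(np.argmax(pseudo))]

      fs = 20.0                                         # Hz, assumed sampling rate
      t = np.arange(0, 10, 1 / fs)                      # a 10 s observation
      x = np.cos(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(t.size)
      print(music_peak_freq(x, fs) * 60)                # ~72 beats per minute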

  17. An integrated gateway for various PHDs in U-healthcare environments.

    PubMed

    Park, KeeHyun; Pak, JuGeon

    2012-01-01

    We propose an integrated gateway for various personal health devices (PHDs). This gateway receives measurements from various PHDs and conveys them to a remote monitoring server (MS). It provides two kinds of transmission modes: immediate transmission and integrated transmission. The former mode operates if a measurement exceeds a predetermined threshold or in the case of an emergency. In the latter mode, the gateway retains the measurements instead of forwarding them. When the reporting time comes, the gateway extracts all the stored measurements, integrates them into one message, and transmits the integrated message to the MS. Through this mechanism, the transmission overhead can be reduced. On the basis of the proposed gateway, we construct a u-healthcare system comprising an activity monitor, a medication dispenser, and a pulse oximeter. The evaluation results show that the size of separate messages from various PHDs is reduced through the integration process, and the process does not require much time; the integration time is negligible.

  18. An Integrated Gateway for Various PHDs in U-Healthcare Environments

    PubMed Central

    Park, KeeHyun; Pak, JuGeon

    2012-01-01

    We propose an integrated gateway for various personal health devices (PHDs). This gateway receives measurements from various PHDs and conveys them to a remote monitoring server (MS). It provides two kinds of transmission modes: immediate transmission and integrated transmission. The former mode operates if a measurement exceeds a predetermined threshold or in the case of an emergency. In the latter mode, the gateway retains the measurements instead of forwarding them. When the reporting time comes, the gateway extracts all the stored measurements, integrates them into one message, and transmits the integrated message to the MS. Through this mechanism, the transmission overhead can be reduced. On the basis of the proposed gateway, we construct a u-healthcare system comprising an activity monitor, a medication dispenser, and a pulse oximeter. The evaluation results show that the size of separate messages from various PHDs is reduced through the integration process, and the process does not require much time; the integration time is negligible. PMID:22899891
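
    A minimal sketch of the two transmission modes the abstract describes: an immediate path for out-of-threshold or emergency readings, and a store-and-integrate path that batches everything else into one message at the reporting time. The class name, threshold scheme, and message format are assumptions, not the authors' protocol.

      import json, time

      class PHDGateway:
          def __init__(self, send, thresholds, report_interval_s=300):
              self.send = send                       # callable that delivers to the MS
              self.thresholds = thresholds           # e.g. {"spo2": (90, 100)}
              self.report_interval_s = report_interval_s
              self.buffer = []
              self.next_report = time.time() + report_interval_s

          def on_measurement(self, device, kind, value, emergency=False):
              lo, hi = self.thresholds.get(kind, (float("-inf"), float("inf")))
              record = {"device": device, "kind": kind, "value": value, "ts": time.time()}
              if emergency or not (lo <= value <= hi):
                  self.send(json.dumps(record))      # immediate transmission
              else:
                  self.buffer.append(record)         # held for integrated transmission

          def tick(self):
              # At the reporting time, merge all stored records into one message.
              if time.time() >= self.next_report and self.buffer:
                  self.send(json.dumps({"batch": self.buffer}))
                  self.buffer = []
                  self.next_report = time.time() + self.report_interval_s

      gw = PHDGateway(send=print, thresholds={"spo2": (90, 100)})
      gw.on_measurement("pulse-oximeter", "spo2", 97)    # buffered
      gw.on_measurement("pulse-oximeter", "spo2", 85)    # below threshold: sent now
      gw.tick()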

  19. Aerial somersault performance under three visual conditions.

    PubMed

    Hondzinski, J M; Darling, W G

    2001-07-01

    Experiments were designed to examine the visual contributions to performance of back aerial double somersaults by collegiate acrobats. Somersaults were performed on a trampoline under three visual conditions: (a) NORMAL acuity; (b) REDUCED acuity (subjects wore special contacts that blocked light reflected onto the central retina); and (c) NO VISION. Videotaped skill performances were rated by two NCAA judges and digitized for kinematic analyses. Subjects' performance scores were similar in NORMAL and REDUCED conditions and lowest in the NO VISION condition. Control of body movement, indicated by time-to-contact, was most variable in the NO VISION condition. Profiles of angular head and neck velocity revealed that when subjects could see, they slowed their heads prior to touchdown in time to process optical flow information and prepare for landing. There was not always enough time to process vision associated with object identification and prepare for touchdown. It was concluded that collegiate acrobats do not need to identify objects for their best back aerial double somersault performance.

  20. The impact of health information technology on disparity of process of care.

    PubMed

    Lee, Jinhyung

    2015-04-01

    Disparities in the quality of health care and treatment among racial or ethnic groups can result from unequal access to medical care, disparate treatments for similar severities of symptoms, and wide divergence in general health status among individuals. Such disparities may be eliminated through better use of health information technology (IT). Investment in health IT could foster better coordinated care, improve guideline compliance, and reduce the likelihood of redundant testing, thereby encouraging more equitable treatment for underprivileged populations. However, there is little research exploring the impact of health IT investment on disparities in process of care. This study examines the impact of health IT investment on waiting times (from admission to the date of the first principal procedure) among different racial and ethnic groups, using patient and hospital data for the state of California collected from 2001 to 2007. The final sample includes 14,056,930 patients admitted with medical diseases to 316 unique, acute-care hospitals over a seven-year period. A linear random intercept and slope model was employed to examine the impacts of health IT investment on waiting time, while controlling for patient, disease, and hospital characteristics. Greater health IT investment was associated with shorter waiting times, and the reduction in waiting times was greater for non-White than for White patients. This indicates that minority populations could benefit from health IT investment with regard to process of care. Investments in health IT may reduce disparities in process of care.
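
    The study's model specification beyond "random intercept and slope" is not given in this record, so the following is only a rough sketch of such a model in statsmodels. The column names (wait_days, it_invest, nonwhite, hospital_id) and the synthetic data are hypothetical stand-ins for the actual variables.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic stand-in data; in the study these would be patient records.
      rng = np.random.default_rng(0)
      n = 2000
      df = pd.DataFrame({
          "hospital_id": rng.integers(0, 50, n),
          "it_invest": rng.normal(1.0, 0.3, n),        # IT investment (arbitrary units)
          "nonwhite": rng.integers(0, 2, n),
      })
      df["wait_days"] = (3.0 - 0.8 * df["it_invest"] + 0.6 * df["nonwhite"]
                         - 0.4 * df["it_invest"] * df["nonwhite"]
                         + rng.normal(0, 0.5, n))

      # Linear random intercept and slope model, grouped by hospital; the
      # interaction term asks whether IT investment narrows the racial gap.
      model = smf.mixedlm("wait_days ~ it_invest * nonwhite", data=df,
                          groups=df["hospital_id"], re_formula="~it_invest")
      print(model.fit().summary())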

  1. Reducing door-to-needle times using Toyota's lean manufacturing principles and value stream analysis.

    PubMed

    Ford, Andria L; Williams, Jennifer A; Spencer, Mary; McCammon, Craig; Khoury, Naim; Sampson, Tomoko R; Panagos, Peter; Lee, Jin-Moo

    2012-12-01

    Earlier tissue-type plasminogen activator (tPA) treatment for acute ischemic stroke increases efficacy, prompting national efforts to reduce door-to-needle times. We used lean process improvement methodology to develop a streamlined intravenous tPA protocol. In early 2011, a multidisciplinary team analyzed the steps required to treat patients with acute ischemic stroke with intravenous tPA using value stream analysis (VSA). We directly compared the tPA-treated patients in the "pre-VSA" epoch with the "post-VSA" epoch with regard to baseline characteristics, protocol metrics, and clinical outcomes. The VSA revealed several tPA protocol inefficiencies: routing of patients to room, then to CT, then back to the room; serial processing of workflow; and delays in waiting for laboratory results. On March 1, 2011, a new protocol incorporated changes to minimize delays: routing patients directly to head CT before the patient room, using parallel process workflow, and implementing point-of-care laboratories. In the pre and post-VSA epochs, 132 and 87 patients were treated with intravenous tPA, respectively. Compared with pre-VSA, door-to-needle times and percent of patients treated ≤60 minutes from hospital arrival were improved in the post-VSA epoch: 60 minutes versus 39 minutes (P<0.0001) and 52% versus 78% (P<0.0001), respectively, with no change in symptomatic hemorrhage rate. Lean process improvement methodology can expedite time-dependent stroke care without compromising safety.

  2. Parametric Study of a YAV-8B Harrier in Ground Effect using Time-Dependent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A process is described which enables the generation of 35 time-dependent viscous solutions for a YAV-8B Harrier in ground effect in one week. Overset grids are used to model the complex geometry of the Harrier aircraft and the interaction of its jets with the ground plane and low-speed ambient flow. The time required to complete this parametric study is drastically reduced through the use of process automation, modern computational platforms, and parallel computing. Moreover, a dual-time-stepping algorithm is described which improves solution robustness. Unsteady flow visualization and a frequency domain analysis are also used to identify and correlated key flow structures with the time variation of lift.

  3. A Q-Ising model application for linear-time image segmentation

    NASA Astrophysics Data System (ADS)

    Bentrem, Frank W.

    2010-10-01

    A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems ( i.e., as a square lattice of numerical values). A major drawback in using Potts model methods for image segmentation is that, with conventional methods, it processes in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article contains a description of an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and processes in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
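
    A compact sketch of a direct Q-state Potts energy minimization of the kind described: each pixel's label is chosen to minimize a data term plus a smoothness penalty for disagreeing with its four neighbours, and each full sweep touches every pixel once, i.e. the work is linear in the number of pixels. The specific energy terms, the ICM-style greedy update, and the synthetic test image are illustrative assumptions, not the paper's exact technique.

      import numpy as np

      def potts_segment(img, q=4, beta=0.5, sweeps=3):
          """Greedy (ICM-style) minimization of a Q-state Potts energy on a grayscale image."""
          means = np.linspace(img.min(), img.max(), q)           # class intensities
          labels = np.abs(img[..., None] - means).argmin(-1)     # initial labels
          h, w = img.shape
          for _ in range(sweeps):
              for y in range(h):
                  for x in range(w):
                      best, best_e = labels[y, x], np.inf
                      for k in range(q):
                          e = (img[y, x] - means[k]) ** 2          # data term
                          for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                              ny, nx = y + dy, x + dx
                              if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != k:
                                  e += beta                        # Potts smoothness term
                          if e < best_e:
                              best, best_e = k, e
                      labels[y, x] = best
          return labels

      # Blocky synthetic image plus noise; the segmenter recovers the blocks.
      img = np.clip(np.kron(np.random.rand(4, 4), np.ones((16, 16)))
                    + 0.1 * np.random.randn(64, 64), 0, 1)
      print(np.bincount(potts_segment(img).ravel(), minlength=4))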

  4. Investigation of approximate models of experimental temperature characteristics of machines

    NASA Astrophysics Data System (ADS)

    Parfenov, I. V.; Polyakov, A. N.

    2018-05-01

    This work is devoted to the investigation of various approaches to the approximation of experimental data and the creation of simulation mathematical models of thermal processes in machines with the aim of finding ways to reduce the time of their field tests and reducing the temperature error of the treatments. The main methods of research which the authors used in this work are: the full-scale thermal testing of machines; realization of various approaches at approximation of experimental temperature characteristics of machine tools by polynomial models; analysis and evaluation of modelling results (model quality) of the temperature characteristics of machines and their derivatives up to the third order in time. As a result of the performed researches, rational methods, type, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
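
    A small sketch of the approach described: fit polynomial models of increasing order to an experimental temperature characteristic and inspect the fitted curve together with its time derivatives up to the third order. The synthetic exponential heating curve and the candidate orders are assumptions for illustration.

      import numpy as np

      t = np.linspace(0, 240, 49)                        # minutes of a thermal test
      temp = 20 + 15 * (1 - np.exp(-t / 60)) + np.random.normal(0, 0.05, t.size)

      for order in (2, 3, 4):                            # candidate model complexities
          poly = np.poly1d(np.polyfit(t, temp, order))
          rmse = np.sqrt(np.mean((poly(t) - temp) ** 2))
          d1, d2, d3 = poly.deriv(1), poly.deriv(2), poly.deriv(3)
          print(f"order {order}: RMSE={rmse:.3f} K, "
                f"dT/dt(0)={d1(0):+.3f}, d2T/dt2(0)={d2(0):+.5f}, d3T/dt3(0)={d3(0):+.7f}")
      # Higher orders fit the measured points better, but their higher derivatives
      # (used to judge when the machine reaches thermal steady state) become noisier.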

  5. An Effective Semantic Event Matching System in the Internet of Things (IoT) Environment.

    PubMed

    Alhakbani, Noura; Hassan, Mohammed Mehedi; Ykhlef, Mourad

    2017-09-02

    IoT sensors use the publish/subscribe model for communication to benefit from its decoupled nature with respect to space, time, and synchronization. Because of the heterogeneity of communicating parties, semantic decoupling is added as a fourth dimension. The added semantic decoupling complicates the matching process and reduces its efficiency. Our proposed algorithm clusters subscriptions and events according to topic and performs the matching process within these clusters, which increases the throughput by reducing the matching time from the range of 16-18 ms to 2-4 ms. Moreover, the accuracy of matching is improved when subscriptions must be fully approximated, as demonstrated by an over 40% increase in F-score results. This work shows the benefit of clustering, as well as the improvement in the matching accuracy and efficiency achieved using this approach.
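
    A minimal sketch of the clustering idea stated above: subscriptions are grouped by topic, so an incoming event is compared only against its own topic's cluster instead of every subscription. The predicate format and topic scheme are illustrative assumptions, not the paper's matching semantics.

      from collections import defaultdict

      class ClusteredMatcher:
          def __init__(self):
              self.clusters = defaultdict(list)        # topic -> [(sub_id, predicate)]

          def subscribe(self, sub_id, topic, predicate):
              self.clusters[topic].append((sub_id, predicate))

          def match(self, topic, event):
              # Only the event's topic cluster is scanned, not all subscriptions.
              return [sid for sid, pred in self.clusters.get(topic, []) if pred(event)]

      m = ClusteredMatcher()
      m.subscribe("s1", "temperature", lambda e: e["value"] > 30)
      m.subscribe("s2", "temperature", lambda e: e["room"] == "lab")
      m.subscribe("s3", "humidity",    lambda e: e["value"] < 20)

      print(m.match("temperature", {"value": 35, "room": "office"}))   # ['s1']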

  6. Reducing Missed Laboratory Results: Defining Temporal Responsibility, Generating User Interfaces for Test Process Tracking, and Retrospective Analyses to Identify Problems

    PubMed Central

    Tarkan, Sureyya; Plaisant, Catherine; Shneiderman, Ben; Hettinger, A. Zachary

    2011-01-01

    Researchers have conducted numerous case studies reporting the details on how laboratory test results of patients were missed by the ordering medical providers. Given the importance of timely test results in an outpatient setting, there is limited discussion of electronic versions of test result management tools to help clinicians and medical staff with this complex process. This paper presents three ideas to reduce missed results with a system that facilitates tracking laboratory tests from order to completion as well as during follow-up: (1) define a workflow management model that clarifies responsible agents and associated time frame, (2) generate a user interface for tracking that could eventually be integrated into current electronic health record (EHR) systems, (3) help identify common problems in past orders through retrospective analyses. PMID:22195201

  7. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform

    PubMed Central

    Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times on large-scale datasets, which demonstrates the clear superiority of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance. PMID:29861711
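
    Hadoop and MapReduce are not reproduced here; the sketch below only shows the per-image kernel the abstract describes, an Otsu-derived threshold pair feeding the Canny operator, and uses Python multiprocessing as a stand-in for distributing images across workers. The image file names and the 0.5x ratio for the lower threshold are assumptions.

      import cv2
      from multiprocessing import Pool

      def otsu_canny(path):
          """Edge map for one image using an Otsu-optimized Canny threshold pair."""
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          otsu_thresh, _ = cv2.threshold(gray, 0, 255,
                                         cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          edges = cv2.Canny(gray, 0.5 * otsu_thresh, otsu_thresh)
          return path, int(edges.sum() // 255)          # number of edge pixels

      if __name__ == "__main__":
          paths = ["img_000.jpg", "img_001.jpg", "img_002.jpg"]   # hypothetical files
          with Pool() as pool:                          # stand-in for map() workers
              for path, n_edges in pool.map(otsu_canny, paths):
                  print(path, n_edges)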

  8. Efficient protein structure search using indexing methods

    PubMed Central

    2013-01-01

    Understanding functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, thus finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the searching process. However, computing the similarity of two protein structures is still computationally expensive, thus it is hard to efficiently process many simultaneous requests of structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, i.e., iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve top-k similar structures, top-10 × k similar structures are first found using the reduced index, and top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns data points whose distance to the query point is less than θ. The results show that both iDistance and iKernel significantly enhance the searching speed. In top-k nearest neighbor search, the searching time is reduced 69.6%, 77%, 77.4% and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the searching time is reduced 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively. PMID:23691543

  9. Efficient protein structure search using indexing methods.

    PubMed

    Kim, Sungchul; Sael, Lee; Yu, Hwanjo

    2013-01-01

    Understanding functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, thus finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the searching process. However, computing the similarity of two protein structures is still computationally expensive, thus it is hard to efficiently process many simultaneous requests of structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, i.e., iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve top-k similar structures, top-10 × k similar structures are first found using the reduced index, and top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns data points whose distance to the query point is less than θ. The results show that both iDistance and iKernel significantly enhance the searching speed. In top-k nearest neighbor search, the searching time is reduced 69.6%, 77%, 77.4% and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the searching time is reduced 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively.
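
    A small sketch of the two-stage retrieval strategy described above: a coarse index built from only the first few 3DZD attributes returns an over-fetched candidate list (top-10·k), and the final top-k is re-ranked with the full descriptors. Brute-force NumPy distance scans stand in for iDistance/iKernel, and the descriptor dimensions and database size are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      db = rng.normal(size=(50_000, 121))       # 3DZD-like descriptors (dimension assumed)
      reduced = db[:, :8]                       # "reduced index": first few attributes only

      def top_k_similar(query, k=10, overfetch=10):
          # Stage 1: cheap scan over the reduced index, keep 10*k candidates.
          d_coarse = np.linalg.norm(reduced - query[:8], axis=1)
          cand = np.argpartition(d_coarse, overfetch * k)[:overfetch * k]
          # Stage 2: exact distances on the full descriptors, candidates only.
          d_full = np.linalg.norm(db[cand] - query, axis=1)
          return cand[np.argsort(d_full)[:k]]

      query = db[42] + 0.01 * rng.normal(size=121)
      print(top_k_similar(query)[:5])           # structure 42 should rank first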

  10. Data-driven management using quantitative metric and automatic auditing program (QMAP) improves consistency of radiation oncology processes.

    PubMed

    Yu, Naichang; Xia, Ping; Mastroianni, Anthony; Kolar, Matthew D; Chao, Samuel T; Greskovich, John F; Suh, John H

    Process consistency in planning and delivery of radiation therapy is essential to maintain patient safety and treatment quality and efficiency. Ensuring the timely completion of each critical clinical task is one aspect of process consistency. The purpose of this work is to report our experience in implementing a quantitative metric and automatic auditing program (QMAP) with a goal of improving the timely completion of critical clinical tasks. Based on our clinical electronic medical records system, we developed a software program to automatically capture the completion timestamp of each critical clinical task while providing frequent alerts of potential delinquency. These alerts were directed to designated triage teams within a time window that would offer an opportunity to mitigate the potential for late completion. Since July 2011, 18 metrics were introduced in our clinical workflow. We compared the delinquency rates for 4 selected metrics before implementation with the corresponding rates in 2016. A one-tailed Student t test was used for statistical analysis. With an average of 150 daily patients on treatment at our main campus, the late treatment plan completion rate and the late weekly physics check rate were reduced from 18.2% and 8.9% in 2011 to 4.2% and 0.1% in 2016, respectively (P < .01). The late weekly on-treatment physician visit rate was reduced from 7.2% in 2012 to <1.6% in 2016. The yearly late cone beam computed tomography review rate was reduced from 1.6% in 2011 to <0.1% in 2016. QMAP is effective in reducing late completions of critical tasks, which can positively impact treatment quality and patient safety by reducing the potential for errors resulting from distractions, interruptions, and rush in completion of critical tasks. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  11. HEVC real-time decoding

    NASA Astrophysics Data System (ADS)

    Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas

    2013-09-01

    The new High Efficiency Video Coding Standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264 / MPEG4-AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high definition video is feasible using instruction extensions of the processor while decoding 4K ultra high definition video in real-time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevik, James; Pamminger, Michael; Wallner, Thomas

    Interest in natural gas as an alternative fuel source to petroleum fuels for light-duty vehicle applications has increased due to its domestic availability and stable price compared to gasoline. With its higher hydrogen-to-carbon ratio, natural gas has the potential to reduce engine out carbon dioxide emissions, which has shown to be a strong greenhouse gas contributor. For part-load conditions, the lower flame speeds of natural gas can lead to an increased duration in the inflammation process with traditional port-injection. Direct-injection of natural gas can increase in-cylinder turbulence and has the potential to reduce problems typically associated with port-injection of natural gas, such as lower flame speeds and poor dilution tolerance. A study was designed and executed to investigate the effects of direct-injection of natural gas at part-load conditions. Steady-state tests were performed on a single-cylinder research engine representative of current gasoline direct-injection engines. Tests were performed with direct-injection in the central and side location. The start of injection was varied under stoichiometric conditions in order to study the effects on the mixture formation process. In addition, exhaust gas recirculation was introduced at select conditions in order to investigate the dilution tolerance. Relevant combustion metrics were then analyzed for each scenario. Experimental results suggest that regardless of the injector location, varying the start of injection has a strong impact on the mixture formation process. Delaying the start of injection from 300 to 120°CA BTDC can reduce the early flame development process by nearly 15°CA. While injecting into the cylinder after the intake valves have closed has shown to produce the fastest combustion process, this does not necessarily lead to the highest efficiency, due to increases in pumping and wall heat losses. When comparing the two injection configurations, the side location shows the best performance in terms of combustion metrics and efficiencies. For both systems, part-load dilution tolerance is affected by the injection timing, due to the induced turbulence from the gaseous injection event. CFD simulation results have shown that there is a fundamental difference in how the two injection locations affect the mixture formation process. Delayed injection timing increases the turbulence level in the cylinder at the time of the spark, but reduces the available time for proper mixing. Side injection delivers a gaseous jet that interacts more effectively with the intake induced flow field, and this improves the engine performance in terms of efficiency.

  13. Improve the Efficiency of the Service Process as a Result of the Muda Ideology

    NASA Astrophysics Data System (ADS)

    Lorenc, Augustyn; Przyłuski, Krzysztof

    2018-06-01

    The aim of the paper was to improve the service processes carried out by Knorr-Bremse Systemy Kolejowe Polska sp. z o.o., with particular emphasis on unnecessary movements and physical effort by employees. The indirect goal was to find a solution in the simplest possible way using the Muda ideology. To improve the service process, process mapping was first carried out for the devices to be repaired, i.e. brake callipers, electro-hydraulic units and auxiliary release units. The processes were then assessed with a Pareto-Lorenz analysis in order to determine the most time-consuming process. Based on the obtained results, the use of a column crane with an articulated arm was proposed to facilitate the transfer of heavy components between areas. The final step was to assess the effectiveness of the proposed solution in terms of time saving. From the company's perspective, the results of the analysis are important: the proposed solution not only reduces total service time but also contributes to the crew's work comfort.

  14. Schematization and Sentence Processing by Foreign Language Learners: A Reading-Time Experiment and a Stimulated-Recall Analysis

    ERIC Educational Resources Information Center

    Tode, Tomoko

    2012-01-01

    This article examines how learners of English as a foreign language process reduced relative clauses (RRCs) from the perspective of usage-based language learning, which posits that language knowledge forms a hierarchy from item-based knowledge consisting only of entrenched frequent exemplars to more advanced schematized knowledge. Twenty-eight…

  15. Implementation of a real-time statistical process control system in hardwood sawmills

    Treesearch

    Timothy M. Young; Brian H. Bond; Jan Wiedenbeck

    2007-01-01

    Variation in sawmill processes reduces the financial benefit of converting fiber from a log into lumber. Lumber is intentionally oversized during manufacture to allow for sawing variation, shrinkage from drying, and final surfacing. This oversizing of lumber due to sawing variation requires higher operating targets and leads to suboptimal fiber recovery. For more than...
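
    The cited system's implementation is not shown in this record; the snippet below is only a generic illustration of the real-time SPC calculation such a system rests on: X-bar and R control limits computed from lumber thickness subgroups. The subgroup size and control-chart constants for n = 5 are standard SPC values; the thickness data are simulated, not the study's measurements.

      import numpy as np

      A2, D3, D4 = 0.577, 0.0, 2.114      # standard X-bar/R constants for subgroups of 5

      rng = np.random.default_rng(1)
      # Simulated thickness measurements (inches): 40 subgroups of 5 boards each.
      subgroups = rng.normal(loc=1.060, scale=0.012, size=(40, 5))

      xbar = subgroups.mean(axis=1)
      ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
      xbar_bar, r_bar = xbar.mean(), ranges.mean()

      limits = {
          "Xbar UCL": xbar_bar + A2 * r_bar, "Xbar LCL": xbar_bar - A2 * r_bar,
          "R UCL": D4 * r_bar,               "R LCL": D3 * r_bar,
      }
      print({k: round(v, 4) for k, v in limits.items()})
      out = np.where((xbar > limits["Xbar UCL"]) | (xbar < limits["Xbar LCL"]))[0]
      print("out-of-control subgroups:", out)   # sawing variation drives the target size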

  16. Out of control little-used clinical assets are draining healthcare budgets.

    PubMed

    Horblyuk, Ruslan; Kaneta, Kristopher; McMillen, Gary L; Mullins, Christopher; O'Brien, Thomas M; Roy, Ankita

    2012-07-01

    To improve utilization and reduce the cost of maintaining mobile clinical equipment, healthcare organization leaders should do the following: Select an initial asset group to target. Conduct a physical inventory. Evaluate the organization's asset "ecosystem." Optimize workflow processes. Phase in new processes, and phase out inventory. Devote time to change management. Develop a replacement strategy.

  17. NDCEE Annual Technologies Publication

    DTIC Science & Technology

    2003-04-01

    ...Engineering Center; TBP: Thermophilic (Biological) Process; TCP: Trivalent chromium pretreatment; 3-D: Three-dimensional; TNT: 2,4,6-trinitrotoluene; TTU: Transit-Time... to be able to restore worn, improperly machined, or salvaged service parts. Trivalent Chromium Plating: This process eliminates the use of chromic acid, thereby reducing health risks to operators. Trivalent chromium forms insoluble mineral precipitates in groundwater, which eliminates the chemical...

  18. Micromechanical Machining Processes and their Application to Aerospace Structures, Devices and Systems

    NASA Technical Reports Server (NTRS)

    Friedrich, Craig R.; Warrington, Robert O.

    1995-01-01

    Micromechanical machining processes are those micro fabrication techniques which directly remove work piece material by either a physical cutting tool or an energy process. These processes are direct and therefore they can help reduce the cost and time for prototype development of micro mechanical components and systems. This is especially true for aerospace applications where size and weight are critical, and reliability and the operating environment are an integral part of the design and development process. The micromechanical machining processes are rapidly being recognized as a complementary set of tools to traditional lithographic processes (such as LIGA) for the fabrication of micromechanical components. Worldwide efforts in the U.S., Germany, and Japan are leading to results which sometimes rival lithography at a fraction of the time and cost. Efforts to develop processes and systems specific to aerospace applications are well underway.

  19. Radiation safety in the cardiac catheterization lab: A time series quality improvement initiative.

    PubMed

    Abuzeid, Wael; Abunassar, Joseph; Leis, Jerome A; Tang, Vicky; Wong, Brian; Ko, Dennis T; Wijeysundera, Harindra C

    Interventional cardiologists have one of the highest annual radiation exposures, yet systems of care that promote radiation safety in cardiac catheterization labs are lacking. This study sought to reduce the frequency of radiation exposure, for PCI procedures, above 1.5 Gy in labs utilizing a Philips system at our local institution by 40%, over a 12-month period. We performed a time series study to assess the impact of different interventions on the frequency of radiation exposure above 1.5 Gy. Process measures were percent of procedures where collimation and magnification were used and percent of completion of online educational modules. Balancing measures were the mean number of cases performed and mean fluoroscopy time. Information sessions, online modules, policies and posters were implemented, followed by the introduction of a new lab with a novel software (AlluraClarity©) to reduce radiation dose. There was a significant reduction (91%, p<0.05) in the frequency of radiation exposure above 1.5 Gy after utilizing the novel software (AlluraClarity©) in a new Philips lab. Process measures of use of collimation (95.0% to 98.0%), use of magnification (20.0% to 14.0%) and completion of online modules (62%) helped track implementation. The mean number of cases performed and mean fluoroscopy time did not change significantly. While educational strategies had limited impact on reducing radiation exposure, implementing a novel software system provided the most effective means of reducing radiation exposure. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  20. Advanced oxidation processes on doxycycline degradation: monitoring of antimicrobial activity and toxicity.

    PubMed

    Spina-Cruz, Mylena; Maniero, Milena Guedes; Guimarães, José Roberto

    2018-05-08

    Advanced oxidation processes (AOPs) have been highly efficient in degrading contaminants of emerging concern (CEC). This study investigated the efficiency of photolysis, peroxidation, photoperoxidation, and ozonation at different pH values to degrade doxycycline (DC) in three aqueous matrices: fountain, tap, and ultrapure water. More than 99.6% of DC degradation resulted from the UV/H2O2 and ozonation processes. Also, to evaluate the toxicity of the original solution and throughout the degradation time, antimicrobial activity tests were conducted using Gram-positive (Bacillus subtilis) and Gram-negative (Escherichia coli) bacteria, and an acute toxicity test using the bioluminescent marine bacterium (Vibrio fischeri). Antimicrobial activity decreased as drug degradation increased in the UV/H2O2 and ozonation processes; in the former, only 6 min was required to eliminate 100% of the activity against both bacteria. In ozonation, 27.7 mg L-1 of ozone was responsible for reducing 100% of the antimicrobial activity. In the photoperoxidation process, toxicity increased as high levels of degradation were achieved, which means that toxic intermediates were formed. The ozonated solutions did not present toxicity.

  1. Cure Cycle Optimization of Rapidly Cured Out-Of-Autoclave Composites.

    PubMed

    Dong, Anqi; Zhao, Yan; Zhao, Xinqing; Yu, Qiyong

    2018-03-13

    Out-of-autoclave prepreg typically needs a long cure cycle to guarantee good properties as a result of the low processing pressure applied. It is essential to reduce the manufacturing time, achieve real cost reduction, and take full advantage of the out-of-autoclave process. The focus of this paper is to reduce the cure cycle time and production cost while maintaining high laminate quality. A rapidly cured out-of-autoclave resin and the corresponding prepreg were independently developed. To determine a suitable rapid cure procedure for the developed prepreg, the effects of heating rate, initial cure temperature, dwelling time, and post-cure time on the final laminate quality were evaluated and the factors were then optimized. As a result, a rapid cure procedure was determined. The results showed that the resin infiltration could be completed at the end of the initial cure stage and no obvious voids could be seen in the laminate at this time. The laminate could achieve good internal quality using the optimized cure procedure. The mechanical test results showed that the laminates had a fiber volume fraction of 59-60% with a final glass transition temperature of 205 °C and excellent mechanical strength, especially the flexural properties.

  2. Cure Cycle Optimization of Rapidly Cured Out-Of-Autoclave Composites

    PubMed Central

    Dong, Anqi; Zhao, Yan; Zhao, Xinqing; Yu, Qiyong

    2018-01-01

    Out-of-autoclave prepreg typically needs a long cure cycle to guarantee good properties as a result of the low processing pressure applied. It is essential to reduce the manufacturing time, achieve real cost reduction, and take full advantage of the out-of-autoclave process. The focus of this paper is to reduce the cure cycle time and production cost while maintaining high laminate quality. A rapidly cured out-of-autoclave resin and the corresponding prepreg were independently developed. To determine a suitable rapid cure procedure for the developed prepreg, the effects of heating rate, initial cure temperature, dwelling time, and post-cure time on the final laminate quality were evaluated and the factors were then optimized. As a result, a rapid cure procedure was determined. The results showed that the resin infiltration could be completed at the end of the initial cure stage and no obvious voids could be seen in the laminate at this time. The laminate could achieve good internal quality using the optimized cure procedure. The mechanical test results showed that the laminates had a fiber volume fraction of 59–60% with a final glass transition temperature of 205 °C and excellent mechanical strength, especially the flexural properties. PMID:29534048

  3. Developing a performance data suite to facilitate lean improvement in a chemotherapy day unit.

    PubMed

    Lingaratnam, Senthil; Murray, Danielle; Carle, Amber; Kirsa, Sue W; Paterson, Rebecca; Rischin, Danny

    2013-07-01

    A multidisciplinary team from the Peter MacCallum Cancer Centre in Melbourne, Australia, developed a performance data suite to support a service improvement project based on lean manufacturing principles in its 19-chair chemotherapy day unit (CDU) and cytosuite chemotherapy production facility. The aims of the project were to reduce patient wait time and improve equity of access to the CDU. A project team consisting of a pharmacist and CDU nurse supported the management team for 10 months in engaging staff and customers to identify waste in processes, analyze root causes, eliminate non-value-adding steps, reduce variation, and level workloads to improve quality and flow. Process mapping, staff and patient tracking and opinion surveys, medical record audits, and interrogation of electronic treatment records were undertaken. This project delivered a 38% reduction in median wait time on the day (from 32 to 20 minutes; P < .01), 7-day reduction in time to commencement of treatment for patients receiving combined chemoradiotherapy regimens (from 25 to 18 days; P < .01), and 22% reduction in wastage associated with expired drug and pharmacy rework (from 29% to 7%; P < .01). Improvements in efficiency enabled the cytosuite to increase the percentage of product manufactured within 10 minutes of appointment times by 29% (from 47% to 76%; P < .01). A lean improvement methodology provided a robust framework for improved understanding and management of complex system constraints within a CDU, resulting in improved access to treatment and reduced waiting times on the day.

  4. Tribo-functionalizing Si and SU8 materials by surface modification for application in MEMS/NEMS actuator-based devices

    NASA Astrophysics Data System (ADS)

    Singh, R. A.; Satyanarayana, N.; Kustandi, T. S.; Sinha, S. K.

    2011-01-01

    Micro/nano-electro-mechanical-systems (MEMS/NEMS) are miniaturized devices built at micro/nanoscales. At these scales, the surface/interfacial forces are extremely strong and they adversely affect the smooth operation and the useful operating lifetimes of such devices. When these forces manifest in severe forms, they lead to material removal and thereby reduce the wear durability of the devices. In this paper, we present a simple, yet robust, two-step surface modification method to significantly enhance the tribological performance of MEMS/NEMS materials. The two-step method involves oxygen plasma treatment of polymeric films and the application of a nanolubricant, namely perfluoropolyether. We apply the two-step method to the two most important MEMS/NEMS structural materials, namely silicon and SU8 polymer. On applying surface modification to these materials, their initial coefficient of friction reduces by ~4-7 times and the steady-state coefficient of friction reduces by ~2.5-3.5 times. Simultaneously, the wear durability of both the materials increases by >1000 times. The two-step method is time effective as each of the steps takes the time duration of approximately 1 min. It is also cost effective as the oxygen plasma treatment is a part of the MEMS/NEMS fabrication process. The two-step method can be readily and easily integrated into MEMS/NEMS fabrication processes. It is anticipated that this method will work for any kind of structural material from which MEMS/NEMS are or can be made.

  5. Developing infrared array controller with software real time operating system

    NASA Astrophysics Data System (ADS)

    Sako, Shigeyuki; Miyata, Takashi; Nakamura, Tomohiko; Motohara, Kentaro; Uchimoto, Yuka Katsuno; Onaka, Takashi; Kataza, Hirokazu

    2008-07-01

    Real-time capabilities are required for the controller of a large-format array to reduce the dead time attributable to readout and data transfer. Real-time processing has traditionally been achieved with dedicated processors, including DSP, CPLD, and FPGA devices. However, dedicated processors have problems with memory resources, inflexibility, and high cost. Meanwhile, a recent PC has sufficient CPU and memory resources to control an infrared array and to process a large amount of frame data in real time. In this study, we have developed an infrared array controller with a software real-time operating system (RTOS) instead of dedicated processors. A Linux PC equipped with the RTAI extension and a dual-core CPU is used as the main computer, and one of the CPU cores is allocated to real-time processing. A digital I/O board with DMA functions is used as the I/O interface. The signal-processing cores are integrated in the OS kernel as a real-time driver module composed of two virtual devices, the clock processor and the frame processor tasks. The array controller with the RTOS realizes complicated operations easily, flexibly, and at low cost.

  6. Improving the Aircraft Design Process Using Web-Based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)

    2000-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  7. Improving the Aircraft Design Process Using Web-based Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.

    2003-01-01

    Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.

  8. The effect of some heat treatment parameters on the dimensional stability of AISI D2

    NASA Astrophysics Data System (ADS)

    Surberg, Cord Henrik; Stratton, Paul; Lingenhöle, Klaus

    2008-01-01

    The tool steel AISI D2 is usually processed by vacuum hardening followed by multiple tempering cycles. It has been suggested that a deep cold treatment in between the hardening and tempering processes could reduce processing time and improve the final properties and dimensional stability. Hardened blocks were then subjected to various combinations of single and multiple tempering steps (520 and 540 °C) and deep cold treatments (-90, -120 and -150 °C). The greatest dimensional stability was achieved by deep cold treatments at the lowest temperature used and was independent of the deep cold treatment time.

  9. Coherent diffractive imaging of time-evolving samples with improved temporal resolution

    DOE PAGES

    Ulvestad, A.; Tripathi, A.; Hruszkewycz, S. O.; ...

    2016-05-19

    Bragg coherent x-ray diffractive imaging is a powerful technique for investigating dynamic nanoscale processes in nanoparticles immersed in reactive, realistic environments. Its temporal resolution is limited, however, by the oversampling requirements of three-dimensional phase retrieval. Here, we show that incorporating the entire measurement time series, which is typically a continuous physical process, into phase retrieval allows the oversampling requirement at each time step to be reduced, leading to a subsequent improvement in the temporal resolution by a factor of 2-20. The increased time resolution will allow imaging of faster dynamics and of radiation-dose-sensitive samples. Furthermore, this approach, which we call "chrono CDI," may find use in improving the time resolution in other imaging techniques.

  10. Eye-Tracking and Corpus-Based Analyses of Syntax-Semantics Interactions in Complement Coercion

    PubMed Central

    Lowder, Matthew W.; Gordon, Peter C.

    2016-01-01

    Previous work has shown that the difficulty associated with processing complex semantic expressions is reduced when the critical constituents appear in separate clauses as opposed to when they appear together in the same clause. We investigated this effect further, focusing in particular on complement coercion, in which an event-selecting verb (e.g., began) combines with a complement that represents an entity (e.g., began the memo). Experiment 1 compared reading times for coercion versus control expressions when the critical verb and complement appeared together in a subject-extracted relative clause (SRC) (e.g., The secretary that began/wrote the memo) compared to when they appeared together in a simple sentence. Readers spent more time processing coercion expressions than control expressions, replicating the typical coercion cost. In addition, readers spent less time processing the verb and complement in SRCs than in simple sentences; however, the magnitude of the coercion cost did not depend on sentence structure. In contrast, Experiment 2 showed that the coercion cost was reduced when the complement appeared as the head of an object-extracted relative clause (ORC) (e.g., The memo that the secretary began/wrote) compared to when the constituents appeared together in an SRC. Consistent with the eye-tracking results of Experiment 2, a corpus analysis showed that expressions requiring complement coercion are more frequent when the constituents are separated by the clause boundary of an ORC compared to when they are embedded together within an SRC. The results provide important information about the types of structural configurations that contribute to reduced difficulty with complex semantic expressions, as well as how these processing patterns are reflected in naturally occurring language. PMID:28529960

  11. Real-time windowing in imaging radar using FPGA technique

    NASA Astrophysics Data System (ADS)

    Ponomaryov, Volodymyr I.; Escamilla-Hernandez, Enrique

    2005-02-01

    Imaging radar uses high-frequency electromagnetic waves reflected from different objects to estimate their parameters. Pulse compression is a standard signal processing technique used to minimize the peak transmission power, maximize SNR, and obtain better resolution. Usually, pulse compression is achieved using a matched filter. The level of the side-lobes in imaging radar can be reduced using special weighting-function processing. Many well-known weighting functions are widely used in signal processing applications: Hamming, Hanning, Blackman, Chebyshev, Blackman-Harris, Kaiser-Bessel, etc. Field Programmable Gate Arrays (FPGAs) offer great benefits such as instantaneous implementation, dynamic reconfiguration, and field programmability. This reconfigurability makes FPGAs a better solution than custom-made integrated circuits. This work aims at demonstrating a reasonably flexible implementation of a linear-FM signal and pulse compression using Matlab, Simulink, and System Generator. Employing an FPGA and the mentioned software, we have proposed a pulse compression design on FPGA using classical and novel window techniques to reduce the side-lobe level. This permits increasing the detection ability of small or closely spaced targets in imaging radar. The parallelism that FPGAs provide for real-time processing makes it possible to realize the proposed algorithms. The paper also presents experimental results of the proposed windowing procedure in a marine radar with the following parameters: the signal is linear FM (chirp); the frequency deviation DF is 9.375 MHz; the pulse width T is 3.2 μs; the number of taps in the matched filter is 800; and the sampling frequency is 253.125 MHz. The reduction of side-lobe levels was realized in real time, permitting better resolution of small targets.
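
    As a rough illustration of the side-lobe trade-off discussed above, the following Python sketch compresses a linear-FM pulse with a plain matched filter and with a Hamming-weighted matched filter and compares the peak side-lobe levels. The chirp parameters echo the abstract, but this idealized, noiseless simulation is only a stand-in, not the authors' FPGA design.

```python
import numpy as np

# Toy pulse-compression comparison: plain vs. Hamming-weighted matched filter.
fs = 253.125e6          # sampling frequency, Hz
T = 3.2e-6              # pulse width, s
B = 9.375e6             # frequency deviation (chirp bandwidth), Hz

t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)        # linear-FM (chirp) pulse

mf_plain = np.conj(chirp[::-1])                      # matched filter
mf_hamming = mf_plain * np.hamming(len(mf_plain))    # weighted to lower side-lobes

echo = chirp                                         # noiseless point-target echo
out_plain = np.convolve(echo, mf_plain)
out_hamming = np.convolve(echo, mf_hamming)

def peak_sidelobe_db(y, mainlobe_halfwidth):
    """Peak level (dB) outside the main lobe, relative to the main peak."""
    mag = np.abs(y) / np.abs(y).max()
    peak = int(np.argmax(mag))
    mask = np.ones(len(mag), dtype=bool)
    mask[max(0, peak - mainlobe_halfwidth):peak + mainlobe_halfwidth] = False
    return 20 * np.log10(mag[mask].max())

excl = int(2 * fs / B)   # generous main-lobe exclusion, in samples
print(f"peak side-lobe, unweighted filter: {peak_sidelobe_db(out_plain, excl):6.1f} dB")
print(f"peak side-lobe, Hamming weighting: {peak_sidelobe_db(out_hamming, excl):6.1f} dB")
```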

  12. Advanced flow noise reducing acoustic sensor arrays

    NASA Astrophysics Data System (ADS)

    Fine, Kevin; Drzymkowski, Mark; Cleckler, Jay

    2009-05-01

    SARA, Inc. has developed microphone arrays that are as effective at reducing flow noise as foam windscreens and sufficiently rugged for tough battlefield environments. These flow noise reducing (FNR) sensors have a metal body and are flat and conformally mounted so they can be attached to the roofs of land vehicles and are resistant to scrapes from branches. Flow noise at low Mach numbers is created by turbulent eddies moving with the fluid flow and inducing pressure variations on microphones. Our FNR sensors average the pressure over the diameter (~20 cm) of their apertures, reducing the noise created by all but the very largest eddies. This is in contrast to the acoustic wave which has negligible variation over the aperture at the frequencies of interest (f ≤ 400 Hz). We have also post-processed the signals to further reduce the flow noise. Two microphones separated along the flow direction exhibit highly correlated noise. The time shift of the correlation corresponds to the time for the eddies in the flow to travel between the microphones. We have created linear microphone arrays parallel to the flow and have reduced flow noise as much as 10 to 15 dB by subtracting time-shifted signals.
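
    The correlation-and-subtraction idea can be sketched in a few lines of Python: estimate the eddy transit time between two microphones from the cross-correlation peak, then subtract the time-shifted upstream signal from the downstream one. The sample rate, lag, and noise model below are made-up toy values, not SARA's measured data, and the acoustic signal path is omitted for simplicity.

```python
import numpy as np

# Toy demonstration of flow-noise cancellation between two microphones.
rng = np.random.default_rng(0)
fs = 2000                 # Hz, assumed sample rate
n = 20000
d_true = 25               # samples; hypothetical eddy transit time between mics

# Convected turbulence: the downstream microphone sees the upstream flow noise
# delayed by the eddy travel time; each sensor also has a little local noise.
eddies = np.convolve(rng.normal(size=n + d_true), np.ones(8) / 8, mode="same")
mic_up = eddies[d_true:] + 0.05 * rng.normal(size=n)     # upstream microphone
mic_down = eddies[:n] + 0.05 * rng.normal(size=n)        # downstream microphone

# Estimate the transit time from the cross-correlation peak.
xc = np.correlate(mic_down, mic_up, mode="full")
d_est = int(np.argmax(xc)) - (n - 1)

# Subtract the time-shifted upstream signal from the downstream one.
residual = mic_down[d_est:] - mic_up[:n - d_est]
reduction_db = 10 * np.log10(np.var(mic_down[d_est:]) / np.var(residual))
print(f"estimated lag: {d_est} samples, flow-noise reduction: {reduction_db:.1f} dB")
```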

  13. Maximizing efficiency on trauma surgeon rounds.

    PubMed

    Ramaniuk, Aliaksandr; Dickson, Barbara J; Mahoney, Sean; O'Mara, Michael S

    2017-01-01

    Rounding by trauma surgeons is a complex multidisciplinary team-based process in the inpatient setting. Implementation of lean methodology aims to increase understanding of the value stream and eliminate nonvalue-added (NVA) components. We hypothesized that analysis of trauma rounds with education and intervention would improve surgeon efficacy. Level 1 trauma center with 4300 admissions per year. Average non-intensive care unit census was 55. Five full-time attending trauma surgeons were evaluated. Value-added (VA) and NVA components of rounding were identified. The components of each patient interaction during daily rounds were documented. Summary data were presented to the surgeons. An action plan of improvement was provided at group and individual interventions. Change plans were presented to the multidisciplinary team. Data were recollected 6 mo after intervention. The percent of interactions with NVA components decreased (16.0% to 10.7%, P = 0.0001). There was no change between the two periods in time of evaluation of individual patients (4.0 and 3.5 min, P = 0.43). Overall time to complete rounds did not change. There was a reduction in the number of interactions containing NVA components (odds ratio = 2.5). The trauma surgeons were able to reduce the NVA components of rounds. We did not see a decrease in rounding time or individual patient time. This implies that surgeons were able to reinvest freed time into patient care, or that the NVA components were somehow not increasing process time. Direct intervention for isolated improvements can be effective in the rounding process, and efforts should be focused upon improving the value of time spent rather than reducing time invested. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Just in Time in Space or Space Based JIT

    NASA Technical Reports Server (NTRS)

    VanOrsdel, Kathleen G.

    1995-01-01

    Our satellite systems are mega-buck items. In today's cost conscious world, we need to reduce the overall costs of satellites if our space program is to survive. One way to accomplish this would be through on-orbit maintenance of parts on the orbiting craft. In order to accomplish maintenance at a low cost I advance the hypothesis of having parts and pieces (spares) waiting. Waiting in the sense of having something when you need it, or just-in-time. The JIT concept can actually be applied to space processes. Its definition has to be changed just enough to encompass the needs of space. Our space engineers tell us which parts and pieces the satellite systems might be needing once in orbit. These items are stored in space for the time of need and can be ready when they are needed -- or Space Based JIT. When a system has a problem, the repair facility is near by and through human or robotics intervention, it can be brought back into service. Through a JIT process, overall system costs could be reduced as standardization of parts is built into satellite systems to facilitate reduced numbers of parts being stored. Launch costs will be contained as fewer spare pieces need to be included in the launch vehicle and the space program will continue to thrive even in this era of reduced budgets. The concept of using an orbiting parts servicer and human or robotics maintenance/repair capabilities would extend satellite life-cycle and reduce system replacement launches. Reductions of this nature throughout the satellite program result in cost savings.

  15. Allocating time to future tasks: the effect of task segmentation on planning fallacy bias.

    PubMed

    Forsyth, Darryl K; Burt, Christopher D B

    2008-06-01

    The scheduling component of the time management process was used as a "paradigm" to investigate the allocation of time to future tasks. In three experiments, we compared task time allocation for a single task with the summed time allocations given for each subtask that made up the single task. In all three, we found that allocated time for a single task was significantly smaller than the summed time allocated to the individual subtasks. We refer to this as the segmentation effect. In Experiment 3, we asked participants to give estimates by placing a mark on a time line, and found that giving time allocations in the form of rounded close approximations probably does not account for the segmentation effect. We discuss the results in relation to the basic processes used to allocate time to future tasks and the means by which planning fallacy bias might be reduced.

  16. Development of an automated processing system for potential fishing zone forecast

    NASA Astrophysics Data System (ADS)

    Ardianto, R.; Setiawan, A.; Hidayat, J. J.; Zaky, A. R.

    2017-01-01

    The Institute for Marine Research and Observation (IMRO) - Ministry of Marine Affairs and Fisheries Republic of Indonesia (MMAF) has developed a potential fishing zone (PFZ) forecast using satellite data, called Peta Prakiraan Daerah Penangkapan Ikan (PPDPI). Since 2005, IMRO has disseminated daily PPDPI maps for fisheries marine ports and 3-day averaged maps for national areas. The accuracy of the PFZ determination and the map processing time depend heavily on the experience of the operators creating them. This paper presents our research in developing an automated processing system for PPDPI in order to increase accuracy and shorten processing time. PFZs are identified by combining MODIS sea surface temperature (SST) and chlorophyll-a (CHL) data in order to detect the presence of upwelling, thermal fronts, and biological productivity enhancement; the integration of these phenomena generally represents the PFZ. The whole process involves data download, map geoprocessing, and layout, all carried out automatically by Python and ArcPy. The results showed that the automated processing system could be used to reduce dependence on operator experience in determining the PFZ and to speed up processing time.
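
    A minimal NumPy sketch of the combination logic might look like the following; the thresholds, random placeholder grids, and the simple AND rule are assumptions for illustration and are not IMRO's operational criteria or ArcPy workflow.

```python
import numpy as np

# Placeholder grids standing in for MODIS products on a common lat/lon grid.
rng = np.random.default_rng(1)
sst = rng.normal(28.0, 1.0, size=(200, 200))                 # sea surface temperature, deg C
chl = rng.lognormal(mean=-1.0, sigma=0.5, size=(200, 200))   # chlorophyll-a, mg m-3

# Thermal fronts: cells where the horizontal SST gradient is unusually strong.
d_row, d_col = np.gradient(sst)
sst_gradient = np.hypot(d_row, d_col)
front = sst_gradient > np.percentile(sst_gradient, 95)

# Biological productivity enhancement: elevated chlorophyll-a.
productive = chl > 0.3                                       # mg m-3, illustrative threshold

# Candidate potential fishing zones: cells where both indicators coincide.
pfz = front & productive
print(f"{pfz.sum()} candidate PFZ cells out of {pfz.size}")
```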

  17. Effect of Lean Processes on Surgical Wait Times and Efficiency in a Tertiary Care Veterans Affairs Medical Center.

    PubMed

    Valsangkar, Nakul P; Eppstein, Andrew C; Lawson, Rick A; Taylor, Amber N

    2017-01-01

    There are an increasing number of veterans in the United States, and the current delay and wait times prevent Veterans Affairs institutions from fully meeting the needs of current and former service members. Concrete strategies to improve throughput at these facilities have been sparse. To identify whether lean processes can be used to improve wait times for surgical procedures in Veterans Affairs hospitals. Databases in the Veterans Integrated Service Network 11 Data Warehouse, Veterans Health Administration Support Service Center, and Veterans Information Systems and Technology Architecture/Dynamic Host Configuration Protocol were queried to assess changes in wait times for elective general surgical procedures and clinical volume before, during, and after implementation of lean processes over 3 fiscal years (FYs) at a tertiary care Veterans Affairs medical center. All patients evaluated by the general surgery department through outpatient clinics, clinical video teleconferencing, and e-consultations from October 2011 through September 2014 were included. Patients evaluated through the emergency department or as inpatient consults were excluded. The surgery service and systems redesign service held a value stream analysis in FY 2013, culminating in multiple rapid process improvement workshops. Multidisciplinary teams identified systemic inefficiencies and strategies to improve interdepartmental and patient communication to reduce canceled consultations and cases, diagnostic rework, and no-shows. High-priority triage with enhanced operating room flexibility was instituted to reduce scheduling wait times. General surgery department pilot projects were then implemented mid-FY 2013. Planned outcome measures included wait time, clinic and telehealth volume, number of no-shows, and operative volume. Paired t tests were used to identify differences in outcome measures after the institution of reforms. Following rapid process improvement workshop project rollouts, mean (SD) patient wait times for elective general surgical procedures decreased from 33.4 (8.3) days in FY 2012 to 26.0 (9.5) days in FY 2013 (P = .02). In FY 2014, mean (SD) wait times were half the value of the previous FY at 12.0 (2.1) days (P = .07). This was a 3-fold decrease from wait times in FY 2012 (P = .02). Operative volume increased from 931 patients in FY 2012 to 1090 in FY 2013 and 1072 in FY 2014. Combined clinic, telehealth, and e-consultation encounters increased from 3131 in FY 2012 to 3460 in FY 2013 and 3517 in FY 2014, while the number of no-shows decreased from 366 in FY 2012 to 227 in FY 2014 (P = .02). Improvement in the overall surgical patient experience can stem from multidisciplinary collaboration among systems redesign personnel, clinicians, and surgical staff to reduce systemic inefficiencies. Monitoring and follow-up of system efficiency measures and the employment of lean practices and process improvements can have positive short- and long-term effects on wait times, clinical throughput, and patient care and satisfaction.

  18. Application of bacteriophages to reduce Salmonella contamination on workers' boots in rendering-processing environment.

    PubMed

    Gong, C; Jiang, X; Wang, J

    2017-10-01

    Workers' boots are considered one of the re-contamination routes of Salmonella for rendered meals in the rendering-processing environment. This study was conducted to evaluate the efficacy of a bacteriophage cocktail for reducing Salmonella on workers' boots and ultimately for preventing Salmonella re-contamination of rendered meals. Under laboratory conditions, biofilms of Salmonella Typhimurium avirulent strain 8243 formed on rubber templates or boots were treated with a bacteriophage cocktail of 6 strains (ca. 9 log PFU/mL) for 6 h at room temperature. Bacteriophage treatments combined with sodium hypochlorite (400 ppm) or 30-second brush scrubbing also were investigated for a synergistic effect on reducing Salmonella biofilms. Sodium magnesium (SM) buffer and sodium hypochlorite (400 ppm) were used as controls. To reduce indigenous Salmonella on workers' boots, a field study was conducted to apply a bacteriophage cocktail and other combined treatments 3 times within one wk in a rendering-processing environment. Prior to and after bacteriophage treatments, Salmonella populations on the soles of rubber boots were swabbed and enumerated on XLT-4, Miller-Mallinson or CHROMagar™ plates. Under laboratory conditions, Salmonella biofilms formed on rubber templates and boots were reduced by 95.1 to 99.999% and 91.5 to 99.2%, respectively. In a rendering-processing environment (ave. temperature: 19.3°C; ave. relative humidity: 48%), indigenous Salmonella populations on workers' boots were reduced by 84.2, 92.9, and 93.2% after being treated with bacteriophages alone, bacteriophages + sodium hypochlorite, and bacteriophages + scrubbing for one wk, respectively. Our results demonstrated the effectiveness of bacteriophage treatments in reducing Salmonella contamination on the boots in both laboratory and the rendering-processing environment. © 2017 Poultry Science Association Inc.

  19. Modeling Production Plant Forming Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhee, M; Becker, R; Couch, R

    2004-09-22

    Engineering has simulation tools and experience in modeling forming processes. Y-12 personnel have expressed interest in validating our tools and experience against their manufacturing process activities such as rolling, casting, and forging etc. We have demonstrated numerical capabilities in a collaborative DOE/OIT project with ALCOA that is nearing successful completion. The goal was to use ALE3D to model Alcoa's slab rolling process in order to demonstrate a computational tool that would allow Alcoa to define a rolling schedule that would minimize the probability of ingot fracture, thus reducing waste and energy consumption. It is intended to lead to long-term collaboration with Y-12 and perhaps involvement with other components of the weapons production complex. Using simulations to aid in design of forming processes can: decrease time to production; reduce forming trials and associated expenses; and guide development of products with greater uniformity and less scrap.

  20. A Non-Intrusive GMA Welding Process Quality Monitoring System Using Acoustic Sensing.

    PubMed

    Cayo, Eber Huanca; Alfaro, Sadek Crisostomo Absi

    2009-01-01

    Most of the inspection methods used for detection and localization of welding disturbances are based on the evaluation of some direct measurements of welding parameters. This direct measurement requires an insertion of sensors during the welding process which could somehow alter the behavior of the metallic transference. An inspection method that evaluates the GMA welding process evolution using a non-intrusive process sensing would allow not only the identification of disturbances during welding runs and thus reduce inspection time, but would also reduce the interference on the process caused by the direct sensing. In this paper a nonintrusive method for weld disturbance detection and localization for weld quality evaluation is demonstrated. The system is based on the acoustic sensing of the welding electrical arc. During repetitive tests in welds without disturbances, the stability acoustic parameters were calculated and used as comparison references for the detection and location of disturbances during the weld runs.

  1. A Non-Intrusive GMA Welding Process Quality Monitoring System Using Acoustic Sensing

    PubMed Central

    Cayo, Eber Huanca; Alfaro, Sadek Crisostomo Absi

    2009-01-01

    Most of the inspection methods used for detection and localization of welding disturbances are based on the evaluation of some direct measurements of welding parameters. This direct measurement requires an insertion of sensors during the welding process which could somehow alter the behavior of the metallic transference. An inspection method that evaluates the GMA welding process evolution using a non-intrusive process sensing would allow not only the identification of disturbances during welding runs and thus reduce inspection time, but would also reduce the interference on the process caused by the direct sensing. In this paper a nonintrusive method for weld disturbance detection and localization for weld quality evaluation is demonstrated. The system is based on the acoustic sensing of the welding electrical arc. During repetitive tests in welds without disturbances, the stability acoustic parameters were calculated and used as comparison references for the detection and location of disturbances during the weld runs. PMID:22399990

  2. Process for the physical segregation of minerals

    DOEpatents

    Yingling, Jon C.; Ganguli, Rajive

    2004-01-06

    With highly heterogeneous groups or streams of minerals, physical segregation using online quality measurements is an economically important first stage of the mineral beneficiation process. Segregation enables high quality fractions of the stream to bypass processing, such as cleaning operations, thereby reducing the associated costs and avoiding the yield losses inherent in any downstream separation process. The present invention includes various methods for reliably segregating a mineral stream into at least one fraction meeting desired quality specifications while at the same time maximizing yield of that fraction.

  3. The effects of sleep deprivation on item and associative recognition memory.

    PubMed

    Ratcliff, Roger; Van Dongen, Hans P A

    2018-02-01

    Sleep deprivation adversely affects the ability to perform cognitive tasks, but theories range from predicting an overall decline in cognitive functioning because of reduced stability in attentional networks to specific deficits in various cognitive domains or processes. We measured the effects of sleep deprivation on two memory tasks, item recognition ("was this word in the list studied") and associative recognition ("were these two words studied in the same pair"). These tasks test memory for information encoded a few minutes earlier and so do not address effects of sleep deprivation on working memory or consolidation after sleep. A diffusion model was used to decompose accuracy and response time distributions to produce parameter estimates of components of cognitive processing. The model assumes that over time, noisy evidence from the task stimulus is accumulated to one of two decision criteria, and parameters governing this process are extracted and interpreted in terms of distinct cognitive processes. Results showed that sleep deprivation reduces drift rate (evidence used in the decision process), with little effect on the other components of the decision process. These results contrast with the effects of aging, which show little decline in item recognition but large declines in associative recognition. The results suggest that sleep deprivation degrades the quality of information stored in memory and that this may occur through degraded attentional processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
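
    To make the decomposition concrete, here is a toy Python simulation of a two-boundary diffusion model in which the only change is a lower drift rate; accuracy falls and response times lengthen, qualitatively mirroring the reported effect. Parameter values are illustrative and are not the paper's fitted estimates.

```python
import numpy as np

def simulate_ddm(drift, boundary=0.1, start=0.05, noise=0.1, dt=0.001,
                 non_decision=0.3, n_trials=2000, seed=0):
    """Simulate a simple two-boundary diffusion model.

    Evidence starts at `start` and accumulates at rate `drift` plus Gaussian
    noise until it crosses 0 (error) or `boundary` (correct response).
    Returns accuracy and mean response time in seconds.
    """
    rng = np.random.default_rng(seed)
    n_correct, rts = 0, []
    for _ in range(n_trials):
        x, t = start, 0.0
        while 0.0 < x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        n_correct += x >= boundary
        rts.append(t + non_decision)
    return n_correct / n_trials, float(np.mean(rts))

# Illustrative values only: sleep deprivation is modeled purely as a reduction
# in drift rate, with all other parameters held constant.
for label, drift in [("rested (higher drift)", 0.25),
                     ("sleep-deprived (lower drift)", 0.10)]:
    acc, mean_rt = simulate_ddm(drift)
    print(f"{label}: accuracy = {acc:.2f}, mean RT = {mean_rt * 1000:.0f} ms")
```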

  4. Parallelization of a spatial random field characterization process using the Method of Anchored Distributions and the HTCondor high throughput computing system

    NASA Astrophysics Data System (ADS)

    Osorio-Murillo, C. A.; Over, M. W.; Frystacky, H.; Ames, D. P.; Rubin, Y.

    2013-12-01

    A new software application called MAD# has been coupled with the HTCondor high-throughput computing system to aid scientists and educators with the characterization of spatial random fields and to enable understanding of the spatial distribution of parameters used in hydrogeologic and related modeling. MAD# is an open-source desktop software application used to characterize spatial random fields using direct and indirect information through a Bayesian inverse modeling technique called the Method of Anchored Distributions (MAD). MAD relates indirect information to a target spatial random field via a forward simulation model. MAD# executes the inverse process by running the forward model multiple times to transfer information from the indirect data to the target variable. MAD# uses two parallelization profiles, depending on the computational resources available: a single computer with multiple cores, or multiple computers with multiple cores through HTCondor. HTCondor is a system that manages a cluster of desktop computers and submits serial or parallel jobs using scheduling policies, resource monitoring, and a job queuing mechanism. This poster will show how MAD# reduces the execution time of random field characterization using these two parallel approaches in different case studies. A test of the approach was conducted using a 1D problem with 400 cells to characterize the saturated conductivity, residual water content, and shape parameters of the Mualem-van Genuchten model in four materials via the HYDRUS model. The number of simulations evaluated in the inversion was 10 million. Using the single-computer approach (eight cores), 100,000 simulations were evaluated in 12 hours (approximately 1200 hours for 10 million). In the evaluation on HTCondor, 32 desktop computers (132 cores) were used, with a non-continuous processing time of 60 hours over five days. HTCondor reduced the processing time for uncertainty characterization by a factor of 20 (from 1200 hours to 60 hours).
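
    The single-computer profile amounts to farming independent forward-model runs out to local cores; a toy Python sketch of that pattern is shown below. The dummy forward_model function and the run counts are placeholders, not MAD#'s actual forward simulation or its HTCondor submission mechanism.

```python
from multiprocessing import Pool
import math

# Illustrative stand-in for a single forward-model run in a MAD-style inversion:
# each "simulation" evaluates a dummy likelihood for one parameter sample.
def forward_model(sample_id):
    x = (sample_id % 1000) / 1000.0
    return sample_id, math.exp(-((x - 0.42) ** 2) / 0.01)

if __name__ == "__main__":
    n_simulations = 100_000              # far fewer than the 10 million in the study
    with Pool(processes=8) as pool:      # the "one computer, multiple cores" profile
        results = pool.map(forward_model, range(n_simulations), chunksize=1000)
    print(len(results), "forward-model evaluations completed")
```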

  5. Effect of high-pressure processing and milk on the anthocyanin composition and antioxidant capacity of strawberry-based beverages.

    PubMed

    Tadapaneni, Ravi Kiran; Banaszewski, Katarzyna; Patazca, Eduardo; Edirisinghe, Indika; Cappozzo, Jack; Jackson, Lauren; Burton-Freeman, Britt

    2012-06-13

    The present study investigated processing strategies and matrix effects on the antioxidant capacity (AC) and polyphenols (PP) content of fruit-based beverages: (1) strawberry powder (Str) + dairy, D-Str; (2) Str + water, ND-Str; (3) dairy + no Str, D-NStr. Beverages were subjected to high-temperature-short-time (HTST) and high-pressure processing (HPP). AC and PP were measured before and after processing and after a 5 week shelf-life study. Unprocessed D-Str had significantly lower AC compared to unprocessed ND-Str. Significant reductions in AC were apparent in HTST- compared to HPP-processed beverages (up to 600 MPa). PP content was significantly reduced in D-Str compared to ND-Str and in response to HPP and HTST in all beverages. After storage (5 weeks), AC and PP were reduced in all beverages compared to unprocessed and week 0 processed beverages. These findings indicate potentially negative effects of milk and processing on AC and PP of fruit-based beverages.

  6. Improving Overall Equipment Effectiveness Using CPM and MOST: A Case Study of an Indonesian Pharmaceutical Company

    NASA Astrophysics Data System (ADS)

    Omega, Dousmaris; Andika, Aditya

    2017-12-01

    This paper discusses the results of research conducted on the production process of an Indonesian pharmaceutical company. The company is experiencing low performance on the Overall Equipment Effectiveness (OEE) metric: the OEE of the company's machines is below the world-class standard, and the machine with the lowest OEE is the filler machine. Through observation and analysis, it was found that the cleaning process of the filler machine consumes a significant amount of time. The long duration of the cleaning process arises because there is no structured division of jobs between cleaning operators, because operators differ in ability, and because operators are unable to make full use of the available cleaning equipment. The company therefore needs to improve the cleaning process. A Critical Path Method (CPM) analysis is conducted to find out which activities are critical in order to shorten and simplify the cleaning process through a better division of tasks. Afterwards, the Maynard Operation Sequence Technique (MOST) is used to reduce ineffective movement and to specify the standard time of the cleaning process. From CPM and MOST, the shortest cleaning time obtained is 1 hour 28 minutes and the standard time is 1 hour 38.826 minutes.
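
    A compact Python sketch of the CPM forward pass and critical-path backtrack is given below. The cleaning sub-tasks, durations, and precedence relations are invented for illustration and do not reproduce the paper's data.

```python
# Hypothetical cleaning sub-tasks: name -> (duration in minutes, predecessors).
tasks = {
    "drain":      (10, []),
    "dismantle":  (25, ["drain"]),
    "wash_parts": (30, ["dismantle"]),
    "wash_bowl":  (20, ["dismantle"]),
    "sanitize":   (15, ["wash_parts", "wash_bowl"]),
    "reassemble": (18, ["sanitize"]),
}

# Forward pass: a task's earliest finish is its duration plus the latest
# earliest finish among its predecessors.
earliest = {}
def earliest_finish(name):
    if name not in earliest:
        duration, preds = tasks[name]
        earliest[name] = duration + max((earliest_finish(p) for p in preds), default=0)
    return earliest[name]

project_duration = max(earliest_finish(t) for t in tasks)

# Backtrack one critical path: from the last-finishing task, repeatedly follow
# the predecessor that determines its earliest start.
path = [max(tasks, key=lambda t: earliest[t])]
while tasks[path[-1]][1]:
    path.append(max(tasks[path[-1]][1], key=lambda p: earliest[p]))
path.reverse()

print(f"project duration: {project_duration} min")
print("critical path:", " -> ".join(path))
```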

  7. Quality of care for elderly patients hospitalized for pneumonia in the United States, 2006 to 2010.

    PubMed

    Lee, Jonathan S; Nsa, Wato; Hausmann, Leslie R M; Trivedi, Amal N; Bratzler, Dale W; Auden, Dana; Mor, Maria K; Baus, Kristie; Larbi, Fiona M; Fine, Michael J

    2014-11-01

    Nearly every US acute care hospital reports publicly on adherence to recommended processes of care for patients hospitalized with pneumonia. However, it remains uncertain how much performance of these process measures has improved over time or whether performance is associated with superior patient outcomes. To describe trends in processes of care, mortality, and readmission for elderly patients hospitalized for pneumonia and to assess the independent associations between processes and outcomes of care. Retrospective cohort study conducted from January 1, 2006, to December 31, 2010, at 4740 US acute care hospitals. The cohort included 1 818 979 cases of pneumonia in elderly (≥65 years), Medicare fee-for-service patients who were eligible for at least 1 of 7 pneumonia inpatient processes of care tracked by the Centers for Medicare & Medicaid Services (CMS). Annual performance rates for 7 pneumonia processes of care and an all-or-none composite of these measures; and 30-day, all-cause mortality and hospital readmission, adjusted for patient and hospital characteristics. Adjusted annual performance rates for all 7 CMS processes of care (expressed in percentage points per year) increased significantly from 2006 to 2010, ranging from 1.02 for antibiotic initiation within 6 hours to 5.30 for influenza vaccination (P < .001). All 7 measures were performed in more than 92% of eligible cases in 2010. The all-or-none composite demonstrated the largest adjusted relative increase over time (6.87 percentage points per year; P < .001) and was achieved in 87.4% of cases in 2010. Adjusted annual mortality decreased by 0.09 percentage points per year (P < .001), driven primarily by decreasing mortality in the subgroup not treated in the intensive care unit (ICU) (-0.18 percentage points per year; P < .001). Adjusted annual readmission rates decreased significantly by 0.25 percentage points per year (P < .001). All 7 processes of care were independently associated with reduced 30-day mortality, and 5 were associated with reduced 30-day readmission. Performance of processes of care for elderly patients hospitalized for pneumonia improved substantially from 2006 to 2010. Adjusted 30-day mortality declined slightly over time primarily owing to improved survival among non-ICU patients, and all individual processes of care were independently associated with reduced mortality.

  8. Concept of Operations Visualization in Support of Ares I Production

    NASA Technical Reports Server (NTRS)

    Chilton, James H.; Smith, David Alan

    2008-01-01

    Boeing was selected in 2007 to manufacture the Ares I Upper Stage and Instrument Unit according to NASA's design, which would require the use of the latest manufacturing and integration processes to meet NASA budget and schedule targets. Past production experience has shown that the majority of the life cycle cost is determined during the initial design process. Concept of Operations (CONOPs) visualizations/simulations help to reduce life cycle cost during the early design stage. Production and operation visualizations can reduce tooling, factory capacity, safety, and build process risks while spreading program support across government, academic, media, and public constituencies. The NASA/Boeing production visualization (DELMIA; Digital Enterprise Lean Manufacturing Interactive Application) promotes timely, concurrent, and collaborative producibility analysis (Boeing) while supporting Upper Stage Design Cycles (NASA). The DELMIA CONOPs visualization reduced the overall Upper Stage production flow time at the manufacturing facility by over 100 man-days, to 312.5 man-days, and helped to identify technical access issues. The NASA/Boeing Interactive Concept of Operations (ICON) provides interactive access to Ares using real mission parameters; allows users to configure the mission, which encourages ownership and identifies areas for improvement; allows mission operations or spacecraft detail to be added as needed; and provides an effective, low-cost advocacy, outreach, and education tool.

  9. Comparing Eye Tracking with Electrooculography for Measuring Individual Sentence Comprehension Duration

    PubMed Central

    Müller, Jana Annina; Wendt, Dorothea; Kollmeier, Birger; Brand, Thomas

    2016-01-01

    The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. In order to reduce practical challenges, we first reduced the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set on a different collective of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found using the eye tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye tracking data. The results of the growth curve analysis were compared with the results of the bootstrap procedure. Both analysis methods show similar processing durations. PMID:27764125

  10. Proposal of Heuristic Algorithm for Scheduling of Print Process in Auto Parts Supplier

    NASA Astrophysics Data System (ADS)

    Matsumoto, Shimpei; Okuhara, Koji; Ueno, Nobuyuki; Ishii, Hiroaki

    We are interested in the print process within the manufacturing operations of an auto parts supplier as a real-world problem. The purpose of this research is to apply our scheduling technique, developed in the university, to the actual print process in a mass-customization environment. Rationalization of the print process depends on the lot sizing. The manufacturing lead time of the print process is long, and in the present method production is planned based on workers' experience and intuition. The construction of an efficient production system is therefore an urgent problem. In this paper, in order to shorten the entire manufacturing lead time and to reduce stock, we reexamine the usual lot-sizing rule based on a heuristic technique, and we propose an improved method which can plan a more efficient schedule.

  11. Batching alternatives for Phase I retrieval wastes to be processed in WRAP Module 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayancsik, B.A.

    1994-10-13

    During the next two decades, the transuranic (TRU) waste now stored in the 200 Area burial trenches and storage buildings is to be retrieved, processed in the Waste Receiving and Processing (WRAP) Module 1 facility, and shipped to a final disposal facility. The purpose of this document is to identify the criteria that can be used to batch suspect TRU waste, currently in retrievable storage, for processing through the WRAP Module 1 facility. These criteria are then used to generate a batch plan for Phase 1 Retrieval operations, which will retrieve the waste located in Trench 4C-04 of the 200 West Area burial ground. The reasons for batching wastes for processing in WRAP Module 1 include reducing the exposure of workers and the environment to hazardous material and ionizing radiation; maximizing the efficiency of the retrieval, processing, and disposal processes by reducing costs, time, and space throughout the process; reducing analytical sampling and analysis; and reducing the amount of cleanup and decontamination between process runs. The criteria selected for batching the drums of retrieved waste entering WRAP Module 1 are based on the available records for the wastes sent to storage as well as knowledge of the processes that generated these wastes. The batching criteria identified in this document include the following: waste generator; type of process used to generate or package the waste; physical waste form; content of hazardous/dangerous chemicals in the waste; radiochemical type and quantity of waste; drum weight; and special waste types. These criteria were applied to the waste drums currently stored in Trench 4C-04. At least one batching scheme is shown for each of the criteria listed above.

  12. Evaluation of pollution prevention options to reduce styrene emissions from fiber-reinforced plastic open molding processes.

    PubMed

    Nunez, C M; Ramsey, G H; Kong, E J; Bahner, M A; Wright, R S; Clayton, C A; Baskir, J N

    1999-03-01

    Pollution prevention (P2) options to reduce styrene emissions, such as new materials and application equipment, are commercially available to the operators of open molding processes. However, information is lacking on the emissions reduction that these options can achieve. To meet this need, the U.S. Environmental Protection Agency's (EPA) Air Pollution Prevention and Control Division, working in collaboration with Research Triangle Institute, measured styrene emissions for several of these P2 options. In addition, the emission factors calculated from these test results were compared with the existing EPA emission factors for gel coat sprayup and resin applications. Results show that styrene emissions can be reduced by up to 52% by using controlled spraying (i.e., reducing overspray), low-styrene and styrene-suppressed materials, and nonatomizing application equipment. Also, calculated emission factors were 1.6-2.5 times greater than the mid-range EPA emission factors for the corresponding gel coat and resin application. These results indicate that facilities using existing EPA emission factors to estimate emissions in open molding processes are likely to underestimate actual emissions. Facilities should investigate the applicability and feasibility of these P2 options to reduce their styrene emissions.

  13. Process optimization for particle removal on blank chrome mask plates in preparation for resist application

    NASA Astrophysics Data System (ADS)

    Osborne, Stephen; Smith, Eryn; Woster, Eric; Pelayo, Anthony

    2002-03-01

    As integrated circuits require smaller lines to provide the memory and processing capability for tomorrow's marketplace, the photomask industry is adopting higher contrast resists to improve photomask lithography. Photomask yield for several high-contrast resist recipes may be improved by coating masks at the mask shop. When coating at a mask shop, an effective method is available that uses coat/bake cluster tools to ensure blanks are clean prior to coating. Many high-contrast resists are available, and some are more susceptible to time-dependent performance factors than conventional resists. One of these factors is the time between coating and writing. Although future methods may reduce the impact of this factor, one current trend is to reduce this time by coating plates at the mask shop just prior to writing. Establishing an effective process to clean blanks prior to coating is necessary for product quality control and is a new task that is critical for maskmakers who previously purchased mask plates but have decided to begin coating them within their facility. This paper provides a strategy and method to be used within coat/bake cluster tools to remove particle contamination from mask blanks. The process uses excimer-UV ionizing radiation and ozone to remove organic contaminants, and then uses a wet process combined with megasonic agitation, surfactant, and spin forces. Megasonic agitation with surfactant lifts up particles, while the convective outflow of water enhances centripetal shear without accumulating harmful charge.

  14. Adaptive real-time methodology for optimizing energy-efficient computing

    DOEpatents

    Hsu, Chung-Hsing [Los Alamos, NM; Feng, Wu-Chun [Blacksburg, VA

    2011-06-28

    Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
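
    As a rough illustration of this kind of run-time adjustment, the sketch below picks the lowest CPU frequency whose predicted slowdown stays within a user-set bound, using a simple linear frequency-sensitivity model. It is a minimal sketch under assumed frequency steps, not the patented algorithm; the available frequencies and the sensitivity estimate are placeholders.

```python
# Minimal DVFS-style sketch (illustrative only, not the patented method):
# choose the lowest frequency whose predicted slowdown stays within a bound,
# using the model  T(f) = T_cpu * (f_max / f) + T_mem.

AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # hypothetical P-states
MAX_FREQ_GHZ = max(AVAILABLE_FREQS_GHZ)

def choose_frequency(t_total, sensitivity, slowdown_limit=1.05):
    """t_total: measured interval time at f_max (seconds).
    sensitivity: fraction of t_total that scales with 1/f (0..1), estimated
    at run time, e.g. from performance counters.
    Returns the lowest frequency keeping predicted slowdown within the limit."""
    t_cpu = sensitivity * t_total          # frequency-dependent part
    t_mem = t_total - t_cpu                # frequency-independent part
    for f in sorted(AVAILABLE_FREQS_GHZ):
        predicted = t_cpu * (MAX_FREQ_GHZ / f) + t_mem
        if predicted / t_total <= slowdown_limit:
            return f                       # lowest acceptable frequency
    return MAX_FREQ_GHZ                    # nothing else meets the bound

if __name__ == "__main__":
    # A memory-bound interval (30% CPU-sensitive) tolerates a lower clock.
    print(choose_frequency(t_total=0.5, sensitivity=0.3))
```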

  15. Laser Balancing

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Mechanical Technology, Incorporated developed a fully automatic laser machining process that allows more precise balancing, removes metal faster, eliminates excess metal removal and other operator-induced inaccuracies, and provides a significant reduction in balancing time. Manufacturing costs are reduced as a result.

  16. An integrated precipitation and ion-exchange chromatography process for antibody manufacturing: Process development strategy and continuous chromatography exploration.

    PubMed

    Großhans, Steffen; Wang, Gang; Fischer, Christian; Hubbuch, Jürgen

    2018-01-19

    In the past decades, research was carried out to find cost-efficient alternatives to Protein A chromatography as a capture step in monoclonal antibody (mAb) purification processes. In this context, polyethylene glycol (PEG) precipitation has shown promising results in terms of mAb yield and purity. Especially with respect to continuous processing, PEG precipitation has many advantages, like low cost of goods, simple setup, easy scalability, and the option to handle perfusion reactors. Nevertheless, replacing Protein A has the disadvantage of renouncing a platform unit operation as well. Furthermore, PEG precipitation is not capable of reducing high molecular weight impurities (HMW) like aggregates or DNA. To overcome these challenges, an integrated process strategy combining PEG precipitation with cation-exchange chromatography (CEX) for purification of a mAb is presented. This work discusses the process strategy as well as the associated fast, easy, and material-saving process development platform. These were implemented through the combination of high-throughput methods with empirical and mechanistic modeling. The strategy allows the development of a common batch process. Additionally, it is feasible to develop a continuous process. In the presented case study, a mAb provided in harvested cell culture fluid (HCCF) was purified. The precipitation and resolubilization conditions as well as the chromatography method were optimized, and the mutual influence of all steps was investigated. A mAb yield of over 95.0% and a host cell protein (HCP) reduction of over 99.0% could be shown. At the same time, the aggregate level was reduced from 3.12% to 1.20% and the DNA level was reduced by five orders of magnitude. Furthermore, the mAb was concentrated three times to a final concentration of 11.9 mg/mL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  17. MapReduce in the Cloud: A Use Case Study for Efficient Co-Occurrence Processing of MEDLINE Annotations with MeSH.

    PubMed

    Kreuzthaler, Markus; Miñarro-Giménez, Jose Antonio; Schulz, Stefan

    2016-01-01

    Big data resources are difficult to process without a scaled hardware environment that is specifically adapted to the problem. The emergence of flexible cloud-based virtualization techniques promises solutions to this problem. This paper demonstrates how a billion lines can be processed in a reasonable amount of time in a cloud-based environment. Our use case addresses the accumulation of concept co-occurrence data in MEDLINE annotations as a series of MapReduce jobs, which can be scaled and executed in the cloud. Besides showing an efficient way of solving this problem, we generated an additional resource for the scientific community to be used for advanced text mining approaches.
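
    The core of such a pipeline is a map step that emits a key for every unordered pair of MeSH descriptors attached to a citation and a reduce step that sums the emitted counts. The sketch below runs that logic locally in plain Python under an assumed (pmid, descriptor-set) record format; the study itself executed the equivalent as distributed MapReduce jobs in the cloud.

```python
# Local sketch of MeSH co-occurrence counting in MapReduce style
# (assumed record format; illustrative, not the paper's actual pipeline).
from collections import Counter
from itertools import combinations

def map_phase(record):
    """record: (pmid, set of MeSH descriptors). Emit one key per unordered pair."""
    pmid, descriptors = record
    for a, b in combinations(sorted(descriptors), 2):
        yield (a, b), 1

def reduce_phase(pairs):
    """Sum the counts emitted by the map phase."""
    counts = Counter()
    for key, value in pairs:
        counts[key] += value
    return counts

annotations = [
    ("PMID1", {"Humans", "Neoplasms", "Risk Factors"}),
    ("PMID2", {"Humans", "Neoplasms"}),
]
cooccurrence = reduce_phase(kv for rec in annotations for kv in map_phase(rec))
print(cooccurrence[("Humans", "Neoplasms")])   # -> 2
```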

  18. Reduction of produced elementary sulfur in denitrifying sulfide removal process.

    PubMed

    Zhou, Xu; Liu, Lihong; Chen, Chuan; Ren, Nanqi; Wang, Aijie; Lee, Duu-Jong

    2011-05-01

    Denitrifying sulfide removal (DSR) processes simultaneously convert sulfide, nitrate, and chemical oxygen demand from industrial wastewater into elemental sulfur, dinitrogen gas, and carbon dioxide, respectively. The failure of a DSR process is signaled by high concentrations of sulfide in reactor effluent. Conventionally, DSR reactor failure is blamed on overcompetition of the heterotrophic over the autotrophic communities. This study indicates that the elemental sulfur produced by oxidizing sulfide, which is a recoverable resource from sulfide-laden wastewaters, can be reduced back to sulfide by sulfur-reducing Methanobacterium sp. The Methanobacterium sp. was stimulated by excess organic carbon (acetate) when nitrite was completely consumed by heterotrophic denitrifiers. Adjusting the hydraulic retention time of a DSR reactor when nitrite is completely consumed provides an additional control variable for maximizing DSR performance.

  19. Photonic sensing in highly concentrated biotechnical processes by photon density wave spectroscopy

    NASA Astrophysics Data System (ADS)

    Hass, Roland; Sandmann, Michael; Reich, Oliver

    2017-04-01

    Photon Density Wave (PDW) spectroscopy is introduced as a new approach for photonic sensing in highly concentrated biotechnical processes. It independently quantifies the absorption and reduced scattering coefficients, calibration-free and as a function of time, thus describing the optical properties of the biomaterial in the vis/NIR range during processing. As examples of industrial relevance, enzymatic milk coagulation, beer mashing, and algae cultivation in photobioreactors are discussed.

  20. OGC Dashboard

    EPA Pesticide Factsheets

    The Office of General Counsel (OGC) has an ongoing business process engineering and business process automation initiative which has helped the office reduce administrative labor costs while increasing employee effectiveness. Supporting this effort is a system of automated routines accessible through a 'portal' interface called OGC Dashboard. The dashboard helps OGC track work progress, legal case load, written work products such as legal briefs and advice, and scheduling processes such as employee leave plans (via calendar) and travel compensatory time off.

  1. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  2. Robotic Processing Of Rocket-Engine Nozzles

    NASA Technical Reports Server (NTRS)

    Gilbert, Jeffrey L.; Maslakowski, John E.; Gutow, David A.; Deily, David C.

    1994-01-01

    Automated manufacturing cell containing computer-controlled robotic processing system developed to implement some important related steps in fabrication of rocket-engine nozzles. Performs several tedious and repetitive fabrication, measurement, adjustment, and inspection processes and subprocesses now performed manually. Offers advantages of reduced processing time, greater consistency, excellent collection of data, objective inspections, greater productivity, and simplified fixturing. Also affords flexibility: by making suitable changes in hardware and software, possible to modify process and subprocesses. Flexibility makes work cell adaptable to fabrication of heat exchangers and other items structured similarly to rocket nozzles.

  3. Improving the delivery of care and reducing healthcare costs with the digitization of information.

    PubMed

    Noffsinger, R; Chin, S

    2000-01-01

    In the coming years, the digitization of information and the Internet will be extremely powerful in reducing healthcare costs while assisting providers in the delivery of care. One example of healthcare inefficiency that can be managed through information digitization is the process of prescription writing. Due to the handwritten and verbal communication surrounding prescription writing, as well as the multiple tiers of authorizations, the prescription drug process causes extensive financial waste as well as medical errors, lost time, and even fatal accidents. Electronic prescription management systems are being designed to address these inefficiencies. By utilizing new electronic prescription systems, physicians not only prescribe more accurately, but also improve formulary compliance thereby reducing pharmacy utilization. These systems expand patient care by presenting proactive alternatives at the point of prescription while reducing costs and providing additional benefits for consumers and healthcare providers.

  4. Multiple seeding for the growth of bulk GdBCO-Ag superconductors with single grain behaviour

    NASA Astrophysics Data System (ADS)

    Shi, Y.; Durrell, J. H.; Dennis, A. R.; Huang, K.; Namburi, D. K.; Zhou, D.; Cardwell, D. A.

    2017-01-01

    Rare earth-barium-copper oxide bulk superconductors fabricated in large or complicated geometries are required for a variety of engineering applications. Initiating crystal growth from multiple seeds reduces the time taken to melt-process individual samples and can reduce the problem of poor crystal texture away from the seed. Grain boundaries between regions of independent crystal growth can significantly reduce the flow of current due to crystallographic misalignment and the agglomeration of impurity phases. Enhanced supercurrent flow at such boundaries has been achieved by minimising the depth of the boundary between a-growth sectors generated during the melt growth process, by reducing second phase agglomerations, and by a new technique for initiating crystal growth that minimises the misalignment between different growth regions. The trapped magnetic fields measured for the resulting samples exhibit a single trapped field peak, indicating that they are equivalent to conventional single grains.

  5. Burnishing of rotatory parts to improve surface quality

    NASA Astrophysics Data System (ADS)

    Celaya, A.; López de Lacalle, L. N.; Albizuri, J.; Alberdi, R.

    2009-11-01

    In this paper, the use of the rolling burnishing process to improve the final quality of railway and automotive workpieces is studied. The results are focused on the improvement of the manufacturing processes of rotary workpieces used in the railway and automotive industries, with the generic target of achieving `maximum surface quality with minimal process time'. Burnishing is a finishing operation in which plastic deformation of surface irregularities occurs by applying pressure through a very hard element, a roller or a ceramic ball. This process gives additional advantages to the workpiece, such as good surface roughness, increased hardness, and high compressive residual stresses. The effect of the initial turning conditions on the final burnishing operation has also been studied. The results show that the feeds used in the initial rough turning have little influence on the surface finish of the burnished workpieces. Therefore, the process times of the combined turning and burnishing processes can be reduced, optimizing the shaft's machining process.

  6. EnergySolution's Clive Disposal Facility Operational Research Model - 13475

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nissley, Paul; Berry, Joanne

    2013-07-01

    EnergySolutions owns and operates a licensed, commercial low-level radioactive waste disposal facility located in Clive, Utah. The Clive site receives low-level radioactive waste from various locations within the United States via bulk truck, containerised truck, enclosed truck, bulk rail-cars, rail boxcars, and rail inter-modals. Waste packages are unloaded, characterized, processed, and disposed of at the Clive site. Examples of low-level radioactive waste arriving at Clive include, but are not limited to, contaminated soil/debris, spent nuclear power plant components, and medical waste. Generators of low-level radioactive waste typically include nuclear power plants, hospitals, national laboratories, and various United States government-operated waste sites. Over the past few years, poor economic conditions have significantly reduced the number of shipments to Clive. With less revenue coming in from processing shipments, Clive needed to keep its expenses down if it was going to maintain past levels of profitability. The Operational Research group of EnergySolutions was asked to develop a simulation model to help identify any improvement opportunities that would increase overall operating efficiency and reduce costs at the Clive Facility. The Clive operations research model simulates the receipt, movement, and processing requirements of shipments arriving at the facility. The model includes shipment schedules, processing times of various waste types, labor requirements, shift schedules, and site equipment availability. The Clive operations research model has been developed using the WITNESS™ process simulation software, which is developed by the Lanner Group. The major goals of this project were to: identify processing bottlenecks that could reduce the turnaround time from shipment arrival to disposal; evaluate the use (or idle time) of labor and equipment; and project future operational requirements under different forecasted scenarios. By identifying processing bottlenecks and unused equipment and/or labor, improvements to operating efficiency could be determined and appropriate cost saving measures implemented. Model runs forecasting various scenarios helped illustrate potential impacts of certain conditions (e.g. 20% decrease in shipments arrived), variables (e.g. 20% decrease in labor), or other possible situations. (authors)

  7. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
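
    To make the wavelet-plus-SVD idea concrete, the generic sketch below builds a time-scale energy matrix with Ricker wavelets and takes its singular values as compact features. This is an assumption-laden illustration, not the flight-data code: the wavelet choice, widths, and test signal are arbitrary.

```python
# Generic wavelet energy-density + SVD sketch (illustrative only).
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet sampled on `points` samples, width `a`."""
    t = np.linspace(-points / 2, points / 2, points)
    factor = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return factor * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def wavelet_energy_matrix(signal, widths, wavelet_len=101):
    rows = []
    for a in widths:
        coeffs = np.convolve(signal, ricker(wavelet_len, a), mode="same")
        rows.append(coeffs ** 2)           # energy density at this scale
    return np.array(rows)                  # shape: (n_scales, n_samples)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)

E = wavelet_energy_matrix(signal, widths=range(1, 31))
singular_values = np.linalg.svd(E, compute_uv=False)
print(singular_values[:5])                 # dominant modes of the distribution
```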

  8. Periodic, On-Demand, and User-Specified Information Reconciliation

    NASA Technical Reports Server (NTRS)

    Kolano, Paul

    2007-01-01

    Automated sequence generation (autogen) signifies both a process and software used to automatically generate sequences of commands to operate various spacecraft. Autogen requires fewer workers than are needed for older manual sequence-generation processes and reduces sequence-generation times from weeks to minutes. The autogen software comprises the autogen script plus the Activity Plan Generator (APGEN) program. APGEN can be used for planning missions and command sequences. APGEN includes a graphical user interface that facilitates scheduling of activities on a time line and affords a capability to automatically expand, decompose, and schedule activities.

  9. GPU real-time processing in NA62 trigger system

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-01-01

    A commercial Graphics Processing Unit (GPU) is used to build a fast Level 0 (L0) trigger system tested parasitically with the TDAQ (Trigger and Data Acquisition systems) of the NA62 experiment at CERN. In particular, the parallel computing power of the GPU is exploited to perform real-time fitting in the Ring Imaging CHerenkov (RICH) detector. Direct GPU communication using an FPGA-based board has been used to reduce the data transmission latency. The performance of the system for multi-ring reconstruction obtained during the NA62 physics run will be presented.

  10. Lost in a Giant Database: The Potentials and Pitfalls of Secondary Analysis for Deaf Education

    ERIC Educational Resources Information Center

    Kluwin, T. N.; Morris, C. S.

    2006-01-01

    Secondary research or archival research is the analysis of data collected by another person or agency. It offers several advantages, including reduced cost, a less time-consuming research process, and access to larger populations and thus greater generalizability. At the same time, it offers several limitations, including the fact that the…

  11. Time Dependent Analytical and Optical Studies of Heat Balanced Internal Combustion Engine Flow Fields.

    DTIC Science & Technology

    1980-11-01

    to auto-ignite in color cinematography of the process. It appears the above interaction reduces classical wall quench (14) as the reaction continues...vivid blue hue while the core reaction is white. Continuation of the reaction is seen in the first four frames of Fig. V-3; this figure covers the time

  12. The effect of processing conditions on the GaAs/plasma-grown insulator interface

    NASA Technical Reports Server (NTRS)

    Hshieh, F. I.; Borrego, J. M.; Ghandhi, S. K.

    1986-01-01

    The effect of processing conditions on the interface state density was evaluated from C-V measurements on metal-oxide-semiconductor capacitors. The optimum processing conditions for the minimum surface state density was found to be related to the postoxidation annealing temperature and time, and was independent of chemical treatments prior to oxidation. Annealing at the optimum condition (i.e., at 350 C for 1 h in either nitrogen or hydrogen gas, with or without an aluminum pattern on the oxide) reduces the fast surface state density by about one order of magnitude. By using a nitrogen/oxygen plasma, the static dielectric constant of the oxide decreased as the N/O ratio was increased, and nitrogen was incorporated into the oxide. In addition, the fast surface state density was reduced as a result of this nitridation process.

  13. A Proposed Time Transfer Experiment Between the USA and the South Pacific

    DTIC Science & Technology

    1991-12-01

    1 nanosecond. The corrected position will be transmitted by both the time transfer modem and the existing TV line sync dissemination process...communications satellite (AUSSAT K1) (Figure 5). With after-the-fact ephemeris correction, this is useful to the 20 nanosecond level. The second...spheric corrections will ultimately reduce ephemeris-related time transfer errors to the 1 nanosecond level. The corrected position will be transmitted

  14. Using Tic-Tac Software to Reduce Anxiety-Related Behaviour in Adults with Autism and Learning Difficulties during Waiting Periods: A Pilot Study

    ERIC Educational Resources Information Center

    Campillo, Cristina; Herrera, Gerardo; Remírez de Ganuza, Conchi; Cuesta, José L.; Abellán, Raquel; Campos, Arturo; Navarro, Ignacio; Sevilla, Javier; Pardo, Carlos; Amati, Fabián

    2014-01-01

    Deficits in the perception of time and processing of changes across time are commonly observed in individuals with autism. This pilot study evaluated the efficacy of the use of the software tool Tic-Tac, designed to make time visual, in three adults with autism and learning difficulties. This research focused on applying the tool in waiting…

  15. [Performance development of a university operating room after implementation of a central operating room management].

    PubMed

    Waeschle, R M; Sliwa, B; Jipp, M; Pütz, H; Hinz, J; Bauer, M

    2016-08-01

    The difficult financial situation in German hospitals requires measures for improvement in process quality. Associated increases in revenues in the high-income field "operating room (OR) area" are increasingly the responsibility of OR management, but it has not been shown that the introduction of an efficiency-oriented management leads to an increase in process quality and revenues in the operating theatre. Therefore the performance in the operating theatre of the University Medical Center Göttingen was analyzed for working days in the core operating time from 7.45 a.m. to 3.30 p.m. from 2009 to 2014. The achievement of process target times for the morning surgery start time and the turnover times of anesthesia and OR-nurses were calculated as indicators of process quality. The number of operations and cumulative incision-suture time were also analyzed as aggregated performance indicators. In order to assess the development of revenues in the operating theatre, the revenues from diagnosis-related groups (DRG) in all inpatient and occupational accident cases, adjusted for the regional basic case value from 2009, were calculated for each year. The development of revenues was also analyzed after deduction of revenues resulting from altered economic case weighting. It could be shown that the achievement of process target values for the morning surgery start time could be improved by 40%, the turnover times for anesthesia reduced by 50% and those for the OR-nurses by 36%. Together with the introduction of central planning for reallocation, an increase in the number of operations of 21% and in cumulative incision-suture times of 12% could be realized. Due to these additional operations the DRG revenues in 2014 could be increased to 132% compared to 2009, or 127% if the revenues caused by economic case weighting were excluded. The personnel complement in anesthesia (-1.7%) and OR-nurses (+2.6%), as well as anesthetists (+6.7%), increased less than revenues or was slightly reduced. This improvement in process quality and cumulative incision-suture times, as well as the increase in revenues, reflects the positive impact of an efficiency-oriented central OR management. Through process optimization measures, OR management frees up the necessary personnel and time resources and thereby establishes the basic prerequisites for increased revenues of the surgical disciplines. The method presented can be used by other hospitals as a guideline to analyze performance development.

  16. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012

  17. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.

  18. Clinical process cost analysis.

    PubMed

    Marrin, C A; Johnson, L C; Beggs, V L; Batalden, P B

    1997-09-01

    New systems of reimbursement are exerting enormous pressure on clinicians and hospitals to reduce costs. Using cheaper supplies or reducing the length of stay may be a satisfactory short-term solution, but the best strategy for long-term success is radical reduction of costs by reengineering the processes of care. However, few clinicians or institutions know the actual costs of medical care; nor do they understand, in detail, the activities involved in the delivery of care. Finally, there is no accepted method for linking the two. Clinical process cost analysis begins with the construction of a detailed flow diagram incorporating each activity in the process of care. The cost of each activity is then calculated, and the two are linked. This technique was applied to Diagnosis Related Group 75 to analyze the real costs of the operative treatment of lung cancer at one institution. Total costs varied between $6,400 and $7,700. The major driver of costs was personnel time, which accounted for 55% of the total. Forty percent of the total cost was incurred in the operating room. The cost of care decreased progressively during hospitalization. Clinical process cost analysis provides detailed information about the costs and processes of care. The insights thus obtained may be used to reduce costs by reengineering the process.

  19. Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.

    PubMed

    Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei

    2014-02-01

    Near-duplicate retrieval (NDR) in merchandize images is of great importance to a lot of online applications on e-Commerce websites. In those applications where the requirement of response time is critical, however, the conventional techniques developed for a general purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compromise the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced because of the quantization process where the visual words are considered individually, which has ignored the contextual relations among words. We propose a "spelling or phrase correction" like process for NDR, which extends the concept of collocations to visual domain for modeling the contextual relations. Binary quadratic programming is used to enforce the contextual consistency of words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing vocabulary size by 1000% times, and under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by color-moment feature, which reduces the time cost by 9202% while maintaining comparable performance to the state-of-the-art methods.
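
    To make the collocative-selection idea concrete, the toy sketch below picks one candidate visual word per image region so that the summed pairwise collocation score is maximal. The paper formulates this as binary quadratic programming; brute force is used here only because the example is tiny, and every word and score is made up.

```python
# Toy collocative word selection (brute force over a tiny instance;
# the paper solves the equivalent binary quadratic program).
from itertools import product

def select_words(candidates, collocation):
    """candidates: list of candidate-word lists, one list per region.
    collocation: dict mapping frozenset({w1, w2}) -> contextual score."""
    best_choice, best_score = None, float("-inf")
    for choice in product(*candidates):
        score = sum(collocation.get(frozenset({a, b}), 0.0)
                    for i, a in enumerate(choice)
                    for b in choice[i + 1:])
        if score > best_score:
            best_choice, best_score = choice, score
    return best_choice

candidates = [["w1", "w2"], ["w3"], ["w4", "w5"]]
collocation = {frozenset({"w1", "w3"}): 0.9, frozenset({"w3", "w5"}): 0.8,
               frozenset({"w2", "w4"}): 0.2}
print(select_words(candidates, collocation))   # -> ('w1', 'w3', 'w5')
```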

  20. Effect of acid hydrolysis on regenerated kenaf core membrane produced using aqueous alkaline-urea systems.

    PubMed

    Padzil, Farah Nadia Mohammad; Zakaria, Sarani; Chia, Chin Hua; Jaafar, Sharifah Nabihah Syed; Kaco, Hatika; Gan, Sinyee; Ng, Peivun

    2015-06-25

    Bleached kenaf core pulps (BKC) were hydrolyzed in H2SO4 (0.5 M) for different times (0 min to 90 min) at room temperature. After the hydrolysis process, the viscosity-average molecular weight (Mη) of the BKC sample was reduced from 14.5×10⁴ to 2.55×10⁴. The hydrolyzed BKC was then dissolved in NaOH:urea:water and in LiOH:urea:water mixed solvents at ratios of 7:12:81 and 4.6:15:80.4, respectively. The increase in hydrolysis time decreased the Mη of the cellulose, leading to an easier dissolution process. Higher porosity and transparency, together with a lower crystallinity index (CrI), of the regenerated membrane can be achieved as Mη is reduced. The properties of the membrane were observed through FESEM, UV-vis spectrophotometry and XRD. This study has proven that acid hydrolysis reduces the Mη of cellulose and thus enhances the properties of the regenerated membrane produced with the aid of the alkaline/urea system. Copyright © 2015 Elsevier Ltd. All rights reserved.
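
    For context, the viscosity-average molecular weight reported above is conventionally obtained from the measured intrinsic viscosity through the Mark-Houwink-Sakurada relation. The record does not give the constants used; K and α below are system-specific values that would be taken from the literature for the cellulose/solvent pair in question.

```latex
% Mark-Houwink-Sakurada relation (K and \alpha are system-specific constants)
[\eta] = K \, M_\eta^{\alpha}
\qquad\Longrightarrow\qquad
M_\eta = \left( \frac{[\eta]}{K} \right)^{1/\alpha}
```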

  1. Education, income and alcohol misuse: a stress process model.

    PubMed

    Elliott, Marta; Lowman, Jennifer

    2015-01-01

    This study applies stress process theory to study and explain the negative association between socioeconomic status (SES) and alcohol misuse. SES is theorized to reduce alcohol misuse by reducing exposure to stressors and increasing access to resources. The National Comorbidity Survey panel sample (N = 4,979), interviewed in 1990-1992 and 2000-2002, is analyzed to estimate direct and indirect pathways between SES and alcohol misuse over time via stressors and resources. Higher education and income predict decreased alcohol misuse via internal and external locus of control. External locus of control is associated with increased alcohol intake over time, whereas internal locus of control is associated with a lower likelihood of developing future alcohol-related disorders. Income is also associated with increased alcohol misuse via religiosity, which is more common among people of low income, and protects against alcohol misuse. SES is negatively associated with alcohol misuse because low SES increases people's perceptions that their lives are determined by luck, and reduces their sense of personal control. However, low income has a countervailing negative influence on alcohol misuse via its association with religiosity.

  2. Increasing reticle inspection efficiency and reducing wafer print-checks using automated defect classification and simulation

    NASA Astrophysics Data System (ADS)

    Ryu, Sung Jae; Lim, Sung Taek; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan

    2013-09-01

    IC fabs inspect critical masks on a regular basis to ensure high wafer yields. These requalification inspections are costly for many reasons, including the capital equipment, system maintenance, and labor costs. In addition, masks typically remain in the "requal" phase for extended, non-productive periods of time. The overall "requal" cycle time in which reticles remain non-productive is challenging to control. Shipping schedules can slip when wafer lots are put on hold until the master critical layer reticle is returned to production. Unfortunately, substituting backup critical layer reticles can significantly reduce an otherwise tightly controlled process window, adversely affecting wafer yields. One major requal cycle time component is the disposition process of mask inspections containing hundreds of defects. Not only is precious non-productive time extended by reviewing hundreds of potentially yield-limiting detections, but each additional classification also increases the risk of manual review techniques accidentally passing real yield-limiting defects. Even assuming all defects of interest are flagged by operators, how can any person's judgment be confident regarding the lithographic impact of such defects? The time reticles spend away from scanners, combined with potential yield loss due to lithographic uncertainty, presents significant cycle time loss and increased production costs. Fortunately, a software program has been developed which automates defect classification with simulated printability measurement, greatly reducing requal cycle time and improving overall disposition accuracy. This product, called ADAS (Auto Defect Analysis System), has been tested in both engineering and high-volume production environments with very successful results. In this paper, data are presented supporting a significant reduction in costly wafer print checks, improved inspection area productivity, and minimized risk of misclassified yield-limiting defects.

  3. The impact of cognitive load on reward evaluation.

    PubMed

    Krigolson, Olave E; Hassall, Cameron D; Satel, Jason; Klein, Raymond M

    2015-11-19

    The neural systems that afford our ability to evaluate rewards and punishments are impacted by a variety of external factors. Here, we demonstrate that increased cognitive load reduces the functional efficacy of a reward processing system within the human medial-frontal cortex. In our paradigm, two groups of participants used performance feedback to estimate the exact duration of one second while electroencephalographic (EEG) data were recorded. Prior to performing the time estimation task, both groups were instructed to keep their eyes still and avoid blinking, in line with well-established EEG protocol. However, during performance of the time-estimation task, one of the two groups was provided with trial-to-trial feedback about their performance on the time-estimation task and their eye movements to induce a higher level of cognitive load, relative to participants in the other group who were solely provided with feedback about the accuracy of their temporal estimates. In line with previous work, we found that the higher level of cognitive load reduced the amplitude of the feedback-related negativity, a component of the human event-related brain potential associated with reward evaluation within the medial-frontal cortex. Importantly, our results provide further support that increased cognitive load reduces the functional efficacy of a neural system associated with reward processing. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Total quality management in orthodontic practice.

    PubMed

    Atta, A E

    1999-12-01

    Quality is the buzz word for the new Millennium. Patients demand it, and we must serve it. Yet one must identify it. Quality is not imaging or public relations; it is a business process. This short article presents quality as a balance of three critical notions: core clinical competence, perceived values that our patients seek and want, and the cost of quality. Customer satisfaction is a variable that must be identified for each practice. In my practice, patients perceive quality as communication and time, be it treatment or waiting time. Time is a value and cost that must be managed effectively. Total quality management is a business function; it involves diagnosis, design, implementation, and measurement of the process, the people, and the service. Kaizen is a function that reduces value services, eliminates waste, and manages time and cost in the process. Total quality management is a total commitment for continuous improvement.

  5. Association Between Fungal Contamination and Eye Bank-Prepared Endothelial Keratoplasty Tissue: Temperature-Dependent Risk Factors and Antifungal Supplementation of Optisol-Gentamicin and Streptomycin.

    PubMed

    Brothers, Kimberly M; Shanks, Robert M Q; Hurlbert, Susan; Kowalski, Regis P; Tu, Elmer Y

    2017-11-01

    Fungal contamination and infection from donor tissues processed for endothelial keratoplasty is a growing concern, prompting analysis of donor tissues after processing. To determine whether eye bank-processed endothelial keratoplasty tissue is at higher risk of contamination than unprocessed tissue, and to model eye bank processing with regard to room temperature exposure on Candida growth in Optisol-gentamicin and streptomycin (GS) with and without antifungal supplementation. An examination of the 2013 Eversight Eyebank Study follow-up database for risk factors associated with post-keratoplasty infection identified an increased risk of positive fungal rim culture results in tissue processed for endothelial keratoplasty vs unprocessed tissue. Processing steps at room temperature were hypothesized as a potential risk factor for promotion of fungal growth between these 2 processes. Candida albicans, Candida glabrata, and Candida parapsilosis endophthalmitis isolates were each inoculated into Optisol-GS and subjected to 2 different room temperature incubation regimens reflective of current corneal tissue handling protocols. Eversight Eyebank Study outcomes and measures were follow-up inquiries from 6592 corneal transplants. Efficacy study outcomes and measures were fungal colony-forming units from inoculated vials of Optisol-GS taken at 2 different processing temperatures. Donor rim culture results were 3 times more likely to be positive for fungi in endothelial keratoplasty-processed eyes (1.14%) than for other uses (0.37%) (difference, 0.77%; 95% CI, 0.17-1.37) (P = .009). In vitro, increased room temperature incubation of Optisol-GS increased growth of Candida species over time. The addition of caspofungin and voriconazole decreased growth of Candida in a species-dependent manner. Detectable Candida growth in donor rim cultures, associated with a higher rate of post-keratoplasty infection, is seen in endothelial keratoplasty tissue vs other uses at the time of transplantation, likely owing in part to eye bank preparation processes extending the time of tissue warming. Reduced room temperature incubation and the addition of antifungal agents decreased growth of Candida species in Optisol-GS and should be further explored to reduce the risk of infection.

  6. Simultaneous analysis of large INTEGRAL/SPI1 datasets: Optimizing the computation of the solution and its variance using sparse matrix algorithms

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-02-01

    Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
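
    As a small-scale stand-in for the computation described above (SciPy is used here in place of MUMPS, on a synthetic sparse system; only the procedure is meant to carry over, not the instrument model), the sketch below factorizes the sparse matrix once, solves for the estimates, and recovers selected diagonal entries of the inverse as variances without forming the full inverse.

```python
# Sparse solve plus selected inverse entries (illustrative stand-in for MUMPS).
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

rng = np.random.default_rng(1)
n = 500
A = sparse.random(n, n, density=0.01, random_state=rng)
N = (A.T @ A + sparse.identity(n) * 1e-2).tocsc()   # synthetic SPD system matrix
b = rng.standard_normal(n)

lu = splu(N)                       # factorize once, reuse for many solves
solution = lu.solve(b)             # parameter estimates

wanted = [0, 10, 42]               # indices whose variance is needed
variances = []
for j in wanted:
    e = np.zeros(n); e[j] = 1.0
    variances.append(lu.solve(e)[j])   # j-th diagonal entry of N^{-1}
print(solution[:3], variances)
```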

  7. How to handle 6GBytes a night and not get swamped

    NASA Technical Reports Server (NTRS)

    Allsman, R.; Alcock, C.; Axelrod, T.; Bennett, D.; Cook, K.; Park, H.-S.; Griest, K.; Marshall, S.; Perlmutter, S.; Stubbs, C.

    1992-01-01

    The MACHO Project has undertaken a 5-year effort to search for dark matter in the halo of the Galaxy by scanning the Magellanic Clouds for micro-lensing events. Each evening's raw image data will be reduced in real time into the observed stars' photometric measurements. The actual search for micro-lensing events will be a post-processing operation. The theoretical prediction of the rate of such events necessitates the collection of a large number of repeated exposures. The project-designed camera subsystem delivers 64 Mbytes per exposure, with exposures typically occurring every 500 seconds. An ideal evening's observing will provide 6 Gbytes of raw image data and 40 Mbytes of reduced photometric measurements. Recognizing the difficulty of digging out from a snowballing cascade of raw data, the project requires the real-time reduction of each evening's data. The software team's implementation strategy centered on this non-negotiable mandate. Accepting the reality that 2 full-time people needed to implement the core real-time control and data management system within 6 months, off-the-shelf vendor components were explored to provide quick solutions to the classic needs for file management, data management, and process control. Where vendor solutions were lacking, state-of-the-art models were used for hand-tailored subsystems. In particular, Petri nets manage process control, memory-mapped bulletin boards provide interprocess communication between the multi-tasked processes, and C++ class libraries provide memory-mapped, disk-resident databases. The differences between the implementation strategy and the final implementation reality are presented. The necessity of validating vendor product claims is explored. Both the successful and hindsight decisions enabling the collection and processing of the nightly data barrage are reviewed.

  8. FLAME DENITRATION AND REDUCTION OF URANIUM NITRATE TO URANIUM DIOXIDE

    DOEpatents

    Hedley, W.H.; Roehrs, R.J.; Henderson, C.M.

    1962-06-26

    A process is given for converting uranyl nitrate solution to uranium dioxide. The process comprises spraying fine droplets of aqueous uranyl nitrate solution into a high-temperature hydrocarbon flame, said flame being approximately 30% deficient in oxygen, retaining the feed in the flame for a sufficient length of time to reduce the nitrate to the dioxide, and recovering uranium dioxide. (AEC)

  9. The Roles of Beaconing and Dead Reckoning in Human Virtual Navigation

    ERIC Educational Resources Information Center

    Bodily, Kent D.; Daniel, Thomas A.; Sturz, Bradley R.

    2012-01-01

    Beaconing is a process in which the distance between a visual landmark and current position is reduced in order to return to a location. In contrast, dead reckoning is a process in which vestibular, kinesthetic and/or optic flow cues are utilized to update speed of movement, elapsed time of movement, and direction of movement to return to a…

  10. Putting ROSE to Work: A Proposed Application of a Request-Oriented Scheduling Engine for Space Station Operations

    NASA Technical Reports Server (NTRS)

    Jaap, John; Muery, Kim

    2000-01-01

    Scheduling engines are found at the core of software systems that plan and schedule activities and resources. A Request-Oriented Scheduling Engine (ROSE) is one that processes a single request (adding a task to a timeline) and then waits for another request. For the International Space Station, a robust ROSE-based system would support multiple, simultaneous users, each formulating requests (defining scheduling requirements), submitting these requests via the internet to a single scheduling engine operating on a single timeline, and immediately viewing the resulting timeline. ROSE is significantly different from the engine currently used to schedule Space Station operations. The current engine supports essentially one person at a time, with a pre-defined set of requirements from many payloads, working in either a "batch" scheduling mode or an interactive/manual scheduling mode. A planning and scheduling process that takes advantage of the features of ROSE could produce greater customer satisfaction at reduced cost and reduced flow time. This paper describes a possible ROSE-based scheduling process and identifies the additional software component required to support it. Resulting changes to the management and control of the process are also discussed.
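
    Purely as an illustration of the request-oriented idea (every name, number, and resource here is hypothetical, and the real ROSE engine handles far richer requirements), serving a single request can be thought of as finding the earliest start time at which the new task fits on the existing timeline without exceeding a resource capacity, and then waiting for the next request.

```python
# Hypothetical single-resource, single-request placement sketch.
def earliest_start(timeline, duration, demand, capacity, horizon):
    """timeline: list of (start, end, demand) activities already scheduled.
    Returns the earliest integer start keeping total demand within capacity."""
    for start in range(horizon - duration + 1):
        end = start + duration
        fits = all(
            sum(d for s, e, d in timeline if s <= t < e) + demand <= capacity
            for t in range(start, end)
        )
        if fits:
            return start
    return None   # request rejected; the user reformulates and resubmits

timeline = [(0, 4, 2), (6, 9, 3)]          # existing activities
print(earliest_start(timeline, duration=3, demand=2, capacity=4, horizon=12))
```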

  11. Influence of operational parameters on nitrogen removal efficiency and microbial communities in a full-scale activated sludge process.

    PubMed

    Kim, Young Mo; Cho, Hyun Uk; Lee, Dae Sung; Park, Donghee; Park, Jong Moon

    2011-11-01

    To improve the efficiency of total nitrogen (TN) removal, solid retention time (SRT) and internal recycling ratio controls were selected as operating parameters in a full-scale activated sludge process treating high-strength industrial wastewater. Increased biomass concentration via SRT control enhanced TN removal. Also, decreasing the internal recycling ratio restored the nitrification process, which had been inhibited by phenol shock loading. Therefore, physiological alteration of the bacterial populations by application of specific operational strategies may stabilize the activated sludge process. Additionally, two dominant ammonia oxidizing bacteria (AOB) populations, Nitrosomonas europaea and Nitrosomonas nitrosa, were observed in all samples with no change in the community composition of AOB. In a nitrification tank, it was observed that the Nitrobacter populations consistently exceeded those of the Nitrospira within the nitrite oxidizing bacteria (NOB) community. Using quantitative real-time PCR (qPCR), the nitrite-reducing functional gene nirS was observed to predominate in the activated sludge of an anoxic tank, whereas the nitrate-reducing functional gene narG was least abundant. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Reduction of pasteurization temperature leads to lower bacterial outgrowth in pasteurized fluid milk during refrigerated storage: a case study.

    PubMed

    Martin, N H; Ranieri, M L; Wiedmann, M; Boor, K J

    2012-01-01

    Bacterial numbers over refrigerated shelf-life were enumerated in high-temperature, short-time (HTST) commercially pasteurized fluid milk for 15 mo before and 15 mo after reducing pasteurization temperature from 79.4°C (175°F) [corrected] to 76.1°C (169°F). Total bacterial counts were measured in whole fat, 2% fat, and fat-free milk products on the day of processing as well as throughout refrigerated storage (6°C) at 7, 14, and 21 d postprocessing. Mean total bacterial counts were significantly lower immediately after processing as well as at 21 d postprocessing in samples pasteurized at 76.1°C versus samples pasteurized at 79.4°C. In addition to mean total bacterial counts, changes in bacterial numbers over time (i.e., bacterial growth) were analyzed and were lower during refrigerated storage of products pasteurized at the lower temperature. Lowering the pasteurization temperature for unflavored fluid milk processed in a commercial processing facility significantly reduced bacterial growth during refrigerated storage. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. Process improvement for regulatory analyses of custom-blend fertilizers.

    PubMed

    Wegner, Keith A

    2014-01-01

    Chemical testing of custom-blend fertilizers is essential to ensure that the products meet the formulation requirements. For purposes of proper crop nutrition and consumer protection, regulatory oversight promotes compliance and particular attention to blending and formulation specifications. Analyses of custom-blend fertilizer products must be performed and reported within a very narrow window in order to be effective. The Colorado Department of Agriculture's Biochemistry Laboratory is an ISO 17025 accredited facility and conducts analyses of custom-blend fertilizer products primarily during the spring planting season. Using the Lean Six Sigma (LSS) process, the Biochemistry Laboratory has reduced turnaround times from as much as 45 days to as little as 3 days. The LSS methodology focuses on waste reduction through identifying: non-value-added steps, unneeded process reviews, optimization of screening and confirmatory analyses, equipment utilization, nonessential reporting requirements, and inefficient personnel deployment. Eliminating these non-value-added activities helped the laboratory significantly shorten turnaround time and reduce costs. Key improvement elements discovered during the LSS process included: focused sample tracking, equipment redundancy, strategic supply stocking, batch size optimization, critical sample paths, elimination of nonessential QC reviews, and more efficient personnel deployment.

  14. Time-driven activity-based costing: A dynamic value assessment model in pediatric appendicitis.

    PubMed

    Yu, Yangyang R; Abbas, Paulette I; Smith, Carolyn M; Carberry, Kathleen E; Ren, Hui; Patel, Binita; Nuchtern, Jed G; Lopez, Monica E

    2017-06-01

    Healthcare reform policies are emphasizing value-based healthcare delivery. We hypothesize that time-driven activity-based costing (TDABC) can be used to appraise healthcare interventions in pediatric appendicitis. Triage-based standing delegation orders, surgical advanced practice providers, and a same-day discharge protocol were implemented to target deficiencies identified in our initial TDABC model. Post-intervention process maps for a hospital episode were created using electronic time stamp data for simple appendicitis cases during February to March 2016. Total personnel and consumable costs were determined using TDABC methodology. The post-intervention TDABC model featured 6 phases of care, 33 processes, and 19 personnel types. Our interventions reduced duration and costs in the emergency department (-41 min, -$23) and the pre-operative floor (-57 min, -$18). While post-anesthesia care unit duration and costs increased (+224 min, +$41), the same-day discharge protocol eliminated post-operative floor costs (-$306). Our model incorporating all three interventions reduced total direct costs by 11% ($2753.39 to $2447.68) and duration of hospitalization by 51% (1984 min to 966 min). Time-driven activity-based costing can dynamically model changes in our healthcare delivery as a result of process improvement interventions. It is an effective tool to continuously assess the impact of these interventions on the value of appendicitis care. Level of evidence: II. Type of study: Economic Analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
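
    As an illustration of the TDABC arithmetic described above, the sketch below computes an episode cost as the sum, over the steps of a process map, of step duration multiplied by the capacity cost rate of the personnel involved, plus consumables. Every role, duration, and rate is an invented placeholder, not a figure from the study.

```python
# Minimal TDABC sketch with made-up numbers (not the study's process map).
CAPACITY_COST_PER_MIN = {"ED nurse": 0.80, "surgeon": 4.10, "PACU nurse": 0.95}  # $/min

process_map = [
    # (step, personnel type, minutes, consumables $)
    ("ED triage",     "ED nurse",   15,   5.0),
    ("appendectomy",  "surgeon",    45, 310.0),
    ("PACU recovery", "PACU nurse", 120,  22.0),
]

def episode_cost(steps):
    """Sum duration x capacity cost rate plus consumables over all steps."""
    total = 0.0
    for step, role, minutes, consumables in steps:
        total += minutes * CAPACITY_COST_PER_MIN[role] + consumables
    return total

print(f"${episode_cost(process_map):,.2f}")   # -> $647.50
```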

  15. Reducing Physical Risk Factors in Construction Work Through a Participatory Intervention: Protocol for a Mixed-Methods Process Evaluation.

    PubMed

    Ajslev, Jeppe; Brandt, Mikkel; Møller, Jeppe Lykke; Skals, Sebastian; Vinstrup, Jonas; Jakobsen, Markus Due; Sundstrup, Emil; Madeleine, Pascal; Andersen, Lars Louis

    2016-05-26

    Previous research has shown that reducing physical workload among workers in the construction industry is complicated. In order to address this issue, we developed a process evaluation in a formative mixed-methods design, drawing on existing knowledge of the potential barriers for implementation. We present the design of a mixed-methods process evaluation of the organizational, social, and subjective practices that play roles in the intervention study, integrating technical measurements to detect excessive physical exertion measured with electromyography and accelerometers, video documentation of working tasks, and a 3-phased workshop program. The evaluation is designed in an adapted process evaluation framework, addressing recruitment, reach, fidelity, satisfaction, intervention delivery, intervention received, and context of the intervention companies. Observational studies, interviews, and questionnaires among 80 construction workers organized in 20 work gangs, as well as health and safety staff, contribute to the creation of knowledge about these phenomena. At the time of publication, the process of participant recruitment is underway. Intervention studies are challenging to conduct and evaluate in the construction industry, often because of narrow time frames and ever-changing contexts. The mixed-methods design presents opportunities for obtaining detailed knowledge of the practices intra-acting with the intervention, while offering the opportunity to customize parts of the intervention.

  16. Face and body perception in schizophrenia: a configural processing deficit?

    PubMed

    Soria Bauser, Denise; Thoma, Patrizia; Aizenberg, Victoria; Brüne, Martin; Juckel, Georg; Daum, Irene

    2012-01-30

    Face and body perception rely on common processing mechanisms and activate similar but not identical brain networks. Patients with schizophrenia show impaired face perception, and the present study addressed for the first time body perception in this group. Seventeen patients diagnosed with schizophrenia or schizoaffective disorder were compared to 17 healthy controls on standardized tests assessing basic face perception skills (identity discrimination, memory for faces, recognition of facial affect). A matching-to-sample task including emotional and neutral faces, bodies and cars either in an upright or in an inverted position was administered to assess potential category-specific performance deficits and impairments of configural processing. Relative to healthy controls, schizophrenia patients showed poorer performance on the tasks assessing face perception skills. In the matching-to-sample task, they also responded more slowly and less accurately than controls, regardless of the stimulus category. Accuracy analysis showed significant inversion effects for faces and bodies across groups, reflecting configural processing mechanisms; however reaction time analysis indicated evidence of reduced inversion effects regardless of category in schizophrenia patients. The magnitude of the inversion effects was not related to clinical symptoms. Overall, the data point towards reduced configural processing, not only for faces but also for bodies and cars in individuals with schizophrenia. © 2011 Elsevier Ltd. All rights reserved.

  17. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    PubMed Central

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has come to play an important role in many application fields, such as geological exploration and natural disaster prevention. Relative radiometric correction and geometric correction are key preprocessing steps, because raw image data used without preprocessing lead to poor performance in these applications. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and then distributed to users. This process introduces long delays and is a major bottleneck for real-time applications of remote-sensing data. On-board, real-time image preprocessing is therefore highly desirable. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which effectively reduces the computational burden of on-board processing. Second, a co-processing system based on this optimization, combining a field-programmable gate array (FPGA) and a digital signal processor (DSP) into an FPGA-DSP architecture, is designed to realize real-time preprocessing. The experimental results demonstrate the potential of the system for an on-board processor, where resources and power consumption are limited. PMID:29693585
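
    As a concrete illustration of one of the preprocessing steps named above, the sketch below applies a simple per-detector relative radiometric correction, equalizing the gain and offset of each detector column in a pushbroom image. The gain/offset model and array shapes are generic assumptions for illustration, not the paper's on-board FPGA-DSP algorithm.

      # Illustrative per-detector relative radiometric correction for a pushbroom image,
      # where each image column corresponds to one detector. The gain/offset model is a
      # generic assumption, not the algorithm described in the paper.
      import numpy as np

      def relative_radiometric_correction(raw):
          """Normalize each detector (column) toward the scene-wide mean and spread."""
          raw = raw.astype(np.float64)
          col_mean = raw.mean(axis=0)          # per-detector mean along track
          col_std = raw.std(axis=0) + 1e-12    # per-detector spread (avoid divide-by-zero)
          gain = col_std.mean() / col_std      # equalize detector responsivity
          offset = col_mean.mean() - gain * col_mean
          return gain * raw + offset

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          scene = rng.uniform(50, 200, size=(512, 256))                  # "true" radiance
          striped = scene * rng.uniform(0.9, 1.1, 256) + rng.uniform(-5, 5, 256)
          corrected = relative_radiometric_correction(striped)
          print("column-mean spread before:", striped.mean(axis=0).std().round(2),
                "after:", corrected.mean(axis=0).std().round(2))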

  18. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    PubMed

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has come to play an important role in many application fields, such as geological exploration and natural disaster prevention. Relative radiometric correction and geometric correction are key preprocessing steps, because raw image data used without preprocessing lead to poor performance in these applications. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and then distributed to users. This process introduces long delays and is a major bottleneck for real-time applications of remote-sensing data. On-board, real-time image preprocessing is therefore highly desirable. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which effectively reduces the computational burden of on-board processing. Second, a co-processing system based on this optimization, combining a field-programmable gate array (FPGA) and a digital signal processor (DSP) into an FPGA-DSP architecture, is designed to realize real-time preprocessing. The experimental results demonstrate the potential of the system for an on-board processor, where resources and power consumption are limited.
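
    Complementing the radiometric sketch under record 17, the sketch below illustrates the other preprocessing step named in the abstract: geometric correction, here modeled as resampling the raw image onto a corrected grid through an affine mapping with bilinear interpolation. The affine model and parameter values are assumptions for illustration, not the paper's correction method.

      # Illustrative geometric correction: resample a raw image onto a corrected grid
      # through an affine mapping with bilinear interpolation. The affine model stands
      # in for a generic geometric-correction step; it is an assumption for illustration.
      import numpy as np

      def affine_resample(img, A, b, out_shape):
          """For each output pixel (r, c), sample img at A @ (r, c) + b bilinearly."""
          rows, cols = np.indices(out_shape, dtype=np.float64)
          src_r = A[0, 0] * rows + A[0, 1] * cols + b[0]
          src_c = A[1, 0] * rows + A[1, 1] * cols + b[1]

          r0 = np.clip(np.floor(src_r).astype(int), 0, img.shape[0] - 2)
          c0 = np.clip(np.floor(src_c).astype(int), 0, img.shape[1] - 2)
          dr = np.clip(src_r - r0, 0.0, 1.0)
          dc = np.clip(src_c - c0, 0.0, 1.0)

          top = img[r0, c0] * (1 - dc) + img[r0, c0 + 1] * dc
          bot = img[r0 + 1, c0] * (1 - dc) + img[r0 + 1, c0 + 1] * dc
          return top * (1 - dr) + bot * dr

      if __name__ == "__main__":
          raw = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
          A = np.array([[0.98, 0.02], [-0.02, 0.98]])   # small rotation/scale, illustrative
          b = np.array([1.5, -0.5])                     # sub-pixel shift, illustrative
          corrected = affine_resample(raw, A, b, raw.shape)
          print(corrected.shape, corrected[:2, :2].round(1))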

  19. Reducing the Station-to-Station Variability of Umkehr Ozone Trends Using SAGE Measurements

    NASA Technical Reports Server (NTRS)

    Newchurch, Mike; Allen, Mark; Cunnold, Derek; Herman, Ben; Mateer, Carl

    2000-01-01

    This proposed research sought to use SAGE I and SAGE II ozone and aerosol measurements to reduce the variability in ozone trends derived from multiple Umkehr stations, principally, but not exclusively, in layer 8 (about 40 km). Building on our experience with both SAGE and Umkehr data, we proposed to start at the very beginning of the Umkehr process (the measured radiance ratios) and proceed through the fitting and inversion steps, in conjunction with radiative transfer calculations, to establish a consistent, reliable time series of Umkehr ozone profiles at a number of stations. We expected to be able to reconcile the present discrepancies between SAGE and Umkehr trends in the upper stratosphere and, in particular, to reduce the variability in trend estimates among mid-latitude Umkehr stations.
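
    As a back-of-the-envelope illustration of the kind of trend estimation referred to above, the sketch below fits a linear trend plus an annual cycle to a monthly layer-8 ozone anomaly series by ordinary least squares. The synthetic series, units, and regression model are illustrative assumptions, not the proposal's retrieval or analysis method.

      # Illustrative trend fit: linear trend + annual cycle via least squares on a
      # monthly "layer-8" ozone anomaly series. The synthetic data, units, and model
      # are assumptions for illustration, not the proposal's analysis.
      import numpy as np

      def fit_trend(months, y):
          """Least-squares fit of y = a + b*t + c*cos(2*pi*t) + d*sin(2*pi*t), t in years."""
          t = months / 12.0
          X = np.column_stack([np.ones_like(t), t,
                               np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coef  # coef[1] is the trend in (units of y) per year

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          months = np.arange(240)                       # 20 years of monthly values
          true_trend = -0.5                             # percent per year, illustrative
          y = (true_trend * months / 12.0
               + 2.0 * np.sin(2 * np.pi * months / 12.0)
               + rng.normal(0, 1.0, months.size))
          coef = fit_trend(months, y)
          print(f"estimated trend: {coef[1]:.2f} per year (true {true_trend})")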

  20. Temporally Specific Divided Attention Tasks in Young Adults Reveal the Temporal Dynamics of Episodic Encoding Failures in Elderly Adults

    PubMed Central

    Johnson, Ray; Nessler, Doreen; Friedman, David

    2013-01-01

    Nessler, Johnson, Bersick, and Friedman (D. Nessler, R. Johnson, Jr., M. Bersick, & D. Friedman, 2006, On why the elderly have normal semantic retrieval but deficient episodic encoding: A study of left inferior frontal ERP activity, NeuroImage, Vol. 30, pp. 299–312) found that, compared with young adults, older adults show decreased event-related brain potential (ERP) activity over posterior left inferior prefrontal cortex (pLIPFC) in a 400- to 1,400-ms interval during episodic encoding. This altered brain activity was associated with significantly decreased recognition performance and reduced recollection-related brain activity at retrieval (D. Nessler, D. Friedman, R. Johnson, Jr., & M. Bersick, 2007, Does repetition engender the same retrieval processes in young and older adults? NeuroReport, Vol. 18, pp. 1837–1840). To test the hypothesis that older adults’ well-documented episodic retrieval deficit is related to reduced pLIPFC activity at encoding, we used a novel divided attention task in healthy young adults that was specifically timed to disrupt encoding in either the 1st or 2nd half of a 300- to 1,400-ms interval. The results showed that diverting resources for 550 ms during either half of this interval reproduced the 4 characteristic aspects of the older participants’ retrieval performance: normal semantic retrieval during encoding, reduced subsequent episodic recognition and recall, reduced recollection-related ERP activity, and the presence of “compensatory” brain activity. We conclude that part of older adults’ episodic memory deficit is attributable to altered pLIPFC activity during encoding due to reduced levels of available processing resources. Moreover, the findings also provide insights into the nature and timing of the putative “compensatory” processes posited to be used by older adults in an attempt to compensate for age-related decline in cognitive function. These results support the scaffolding account of compensation, in which the recruitment of additional cognitive processes is an adaptive response across the life span. PMID:23276214
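
    For readers unfamiliar with the window-based ERP measures discussed above, the minimal sketch below computes the mean baseline-corrected amplitude within a fixed post-stimulus window (here 400-1,400 ms). The sampling rate, epoch layout, and window edges are illustrative assumptions, not the study's recording parameters.

      # Minimal sketch: mean ERP amplitude within a post-stimulus window (400-1400 ms),
      # relative to a pre-stimulus baseline. Sampling rate, epoch layout, and window
      # edges are illustrative assumptions, not the study's recording parameters.
      import numpy as np

      def window_mean_amplitude(epochs, fs=500, t0_s=0.2, win=(0.4, 1.4)):
          """epochs: (n_trials, n_samples) voltage, with stimulus onset at t0_s seconds."""
          onset = int(t0_s * fs)
          baseline = epochs[:, :onset].mean(axis=1, keepdims=True)
          lo, hi = (int((t0_s + w) * fs) for w in win)
          return (epochs[:, lo:hi] - baseline).mean(axis=1)   # one value per trial

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          epochs = rng.normal(0, 5, size=(40, int(1.8 * 500)))  # 40 trials, 1.8-s epochs
          print(window_mean_amplitude(epochs).mean().round(2), "microvolts (grand mean)")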
