Sample records for processing time compared

  1. Enhanced round robin CPU scheduling with burst time based time quantum

    NASA Astrophysics Data System (ADS)

    Indusree, J. R.; Prabadevi, B.

    2017-11-01

    Process scheduling is a very important function of an operating system. The best-known process-scheduling algorithms are the First Come First Serve (FCFS), Round Robin (RR), Priority scheduling, and Shortest Job First (SJF) algorithms. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes already in the ready queue. The effectiveness of the RR algorithm depends greatly on the chosen time quantum value. In this paper, we propose an enhanced process-scheduling algorithm called Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ), which calculates the time quantum from the burst times of the processes already in the ready queue. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance when compared with conventional RR and its variants.
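
    The paper's exact quantum formula is not given in the abstract, so the sketch below is a minimal Python illustration of the general idea, assuming the quantum is recomputed as the mean remaining burst time of the ready queue; the burst times and the fixed quantum of 4 used for comparison are invented.

    ```python
    from collections import deque
    from statistics import mean

    def round_robin(burst_times, quantum_fn):
        """Simulate RR for processes arriving at t=0; quantum_fn maps the
        remaining burst times of the ready queue to a time quantum."""
        remaining = dict(enumerate(burst_times))
        queue = deque(remaining)
        clock, turnaround = 0, {}
        while queue:
            q = quantum_fn(list(remaining.values()))  # recompute each dispatch
            pid = queue.popleft()
            run = min(q, remaining[pid])
            clock += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                del remaining[pid]
                turnaround[pid] = clock
            else:
                queue.append(pid)
        return turnaround

    bursts = [24, 3, 3, 17, 9]
    fixed = round_robin(bursts, lambda rem: 4)                    # conventional RR
    adaptive = round_robin(bursts, lambda rem: round(mean(rem)))  # assumed ERRBTQ-style rule
    for name, ta in [("fixed q=4", fixed), ("adaptive", adaptive)]:
        print(name, "mean waiting time:", mean(ta[p] - bursts[p] for p in ta))
    ```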

  2. Waste water processing technology for Space Station Freedom - Comparative test data analysis

    NASA Technical Reports Server (NTRS)

    Miernik, Janie H.; Shah, Burt H.; Mcgriff, Cindy F.

    1991-01-01

    Comparative tests were conducted to choose the optimum technology for waste water processing on SSF. A thermoelectric integrated membrane evaporation (TIMES) subsystem and a vapor compression distillation (VCD) subsystem were built and tested to compare urine processing capability. Water quality, performance, and specific energy were compared for conceptual designs intended to function as part of the water recovery and management system of SSF. The VCD is considered the most mature and efficient technology and was selected to replace the TIMES as the baseline urine processor for SSF.

  3. NICE guidance: a comparative study of the introduction of the single technology appraisal process and comparison with guidance from Scottish Medicines Consortium

    PubMed Central

    Waugh, Norman; Sharma, Pawana; Sculpher, Mark; Walker, Andrew

    2012-01-01

    Objectives To compare the timelines and recommendations of the Scottish Medicines Consortium (SMC) and the National Institute for Health and Clinical Excellence (NICE), in particular since the single technology appraisal (STA) process was introduced in 2005. Design Comparative study of drug appraisals published by NICE and SMC. Setting NICE and SMC. Participants All drugs appraised by SMC and NICE, from the establishment of each organisation until August 2010, were included. Data were gathered from published reports on the NICE website, SMC annual reports and the European Medicines Agency website. Primary and secondary outcome measures The primary outcome was time from marketing authorisation until publication of first guidance. The final outcome for each drug was documented. Drug appraisals by NICE (before and after the introduction of the STA process) and SMC were compared. Results Both NICE and SMC appraised 140 drugs; 415 were appraised by SMC alone and 102 by NICE alone. NICE recommended, with or without restriction, 90% of drugs and SMC 80%. SMC published guidance more quickly than NICE (median 7.4 compared with 21.4 months). Overall, the STA process reduced the average time to publication compared with multiple technology assessments (median 16.1 compared with 22.8 months). However, for cancer medications, the STA process took longer than multiple technology assessment (25.2 compared with 20.0 months). Conclusions The proportions of drugs recommended for NHS use by SMC and NICE are similar. SMC publishes guidance more quickly than NICE. The STA process has improved the time to publication, but not for cancer drugs. The lengthier time for NICE guidance is partly due to measures to provide transparency and the widespread consultation during the NICE process. PMID:22290398

  4. Processing of visually presented clock times.

    PubMed

    Goolkasian, P; Park, D C

    1980-11-01

    The encoding and representation of visually presented clock times were investigated in three experiments utilizing a comparative judgment task. Experiment 1 explored the effects of comparing times presented in different formats (clock face, digit, or word), and Experiment 2 examined angular distance effects created by varying positions of the hands on clock faces. In Experiment 3, encoding and processing differences between clock faces and digitally presented times were directly measured. Same/different reactions to digitally presented times were faster than to times presented on a clock face, and this format effect was found to be a result of differences in processing that occurred after encoding. Angular separation also had a limited effect on processing. The findings are interpreted within the framework of theories that refer to the importance of representational codes. The applicability of Banks's semantic-coding theory, Paivio's dual-coding theory, and the levels-of-processing view of memory to the data is discussed.

  5. The effects of quantity and depth of processing on children's time perception.

    PubMed

    Arlin, M

    1986-08-01

    Two experiments were conducted to investigate the effects of quantity and depth of processing on children's time perception. These experiments tested the appropriateness of two adult time-perception models (attentional and storage size) for younger ages. Children were given stimulus sets of equal time which varied by level of processing (deep/shallow) and quantity (list length). In the first experiment, 28 children in Grade 6 reproduced presentation times of various quantities of pictures under deep (living/nonliving categorization) or shallow (repeating label) conditions. Students also compared pairs of durations. In the second experiment, 128 children in Grades K, 2, 4, and 6 reproduced presentation times under similar conditions with three or six pictures and with deep or shallow processing requirements. Deep processing led to decreased estimation of time. Higher quantity led to increased estimation of time. Comparative judgments were influenced by quantity. The interaction between age and depth of processing was significant. Older children were more affected by depth differences than were younger children. Results were interpreted as supporting different aspects of each adult model as explanations of children's time perception. The processing effect supported the attentional model and the quantity effect supported the storage size model.

  6. Beyond the sticker price: including and excluding time in comparing food prices.

    PubMed

    Yang, Yanliang; Davis, George C; Muth, Mary K

    2015-07-01

    An ongoing debate in the literature is how to measure the price of food. Most analyses have not considered the value of time in measuring the price of food. Whether or not the value of time is included in measuring the price of a food may have important implications for classifying foods based on their relative cost. The purpose of this article is to compare prices that exclude time (time-exclusive price) with prices that include time (time-inclusive price) for 2 types of home foods: home foods using basic ingredients (home recipes) vs. home foods using more processed ingredients (processed recipes). The time-inclusive and time-exclusive prices are compared to determine whether the time-exclusive prices in isolation may mislead in drawing inferences regarding the relative prices of foods. We calculated the time-exclusive price and time-inclusive price of 100 home recipes and 143 processed recipes and then categorized them into 5 standard food groups: grains, proteins, vegetables, fruit, and dairy. We then examined the relation between the time-exclusive prices and the time-inclusive prices and dietary recommendations. For any food group, the processed food time-inclusive price was always less than the home recipe time-inclusive price, even if the processed food's time-exclusive price was more expensive. Time-inclusive prices for home recipes were especially higher for the more time-intensive food groups, such as grains, vegetables, and fruit, which are generally underconsumed relative to the guidelines. Focusing only on the sticker price of a food and ignoring the time cost may lead to different conclusions about relative prices and policy recommendations than when the time cost is included. © 2015 American Society for Nutrition.
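
    The time-inclusive price in this framing is simple arithmetic: the sticker cost plus preparation time valued at a wage rate. The toy numbers below are invented (they are not the study's data) and only show how the ranking of a home recipe and a processed recipe can flip once time is priced in.

    ```python
    WAGE_PER_MIN = 15.00 / 60   # assumed value of time: $15/hour

    def time_inclusive_price(ingredient_cost, prep_minutes):
        """Sticker price plus the opportunity cost of preparation time."""
        return ingredient_cost + prep_minutes * WAGE_PER_MIN

    home_recipe = time_inclusive_price(3.00, 40)   # cheap ingredients, long prep
    processed = time_inclusive_price(4.50, 10)     # dearer ingredients, quick prep
    print(f"home recipe ${home_recipe:.2f} vs processed ${processed:.2f}")
    # -> home recipe $13.00 vs processed $7.00: the cheaper sticker price loses
    #    once time is included, mirroring the paper's central point.
    ```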

  7. Heat penetration attributes of milkfish (Chanos chanos) thermal processed in flexible pouches: a comparative study between steam application and water immersion.

    PubMed

    Adepoju, Mary A; Omitoyin, Bamidele O; Mohan, Chitradurga O; Zynudheen, Aliyam A

    2017-05-01

    The difference in the heat penetration characteristics of product processed in a retort by steam-air application and by water immersion was studied. Fresh milkfish (Chanos chanos), packed dry and in oil medium, both in flexible pouches, was thermally processed to a minimum F0 value of 7.77 at 121.1°C. Heat penetration values were recorded for each minute of processing with the aid of an Ellab (TM 9608, Denmark) temperature recorder. The retort come-up time to achieve 121.1°C was observed to be shorter with steam-air, which invariably led to a lower Ball's process time (B) and total process time (T) for steam-air as compared to water immersion. The data obtained were plotted on semi-logarithmic paper with the temperature deficit on the x-axis against time on the y-axis.
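
    For orientation, F0 is the accumulated lethality of the process referenced to 121.1°C with z = 10°C, integrated over the cold-spot temperature history. The sketch below applies that standard textbook relation to an invented one-minute-interval profile; it is not the study's data.

    ```python
    import numpy as np

    def f0_value(temps_c, dt_min=1.0, t_ref=121.1, z=10.0):
        """Accumulated lethality F0 (min): sum of 10**((T - Tref)/z) * dt."""
        temps_c = np.asarray(temps_c, dtype=float)
        return float(np.sum(10.0 ** ((temps_c - t_ref) / z) * dt_min))

    # Invented cold-spot profile sampled once per minute (come-up, hold, cool).
    profile = [80, 95, 105, 112, 117, 120, 121.1, 121.1, 121.1, 121.1, 118, 105, 90]
    print(f"F0 = {f0_value(profile):.2f} min")  # compare against the target minimum (7.77 in the study)
    ```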

  8. Different Gestalt Processing for Different Actions? Comparing Object-Directed Reaching and Looking Time Measures

    ERIC Educational Resources Information Center

    Vishton, P.M.; Ware, E.A.; Badger, A.N.

    2005-01-01

    Six experiments compared the Gestalt processing that mediates infant reaching and looking behaviors. Experiment 1 demonstrated that the positioning and timing of 8- and 9-month-olds' reaching was influenced by remembered relative motion. Experiment 2 suggested that a visible gap, without this relative motion, was not sufficient to produce these…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaf, S.; APS Engineering Support Division

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
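
    The G5's vector hardware is long gone, but the underlying idea, replacing per-pixel scalar loops with whole-array operations, carries over directly to NumPy. A rough sketch, not the APS software, timing a beam-centroid computation both ways on an invented frame:

    ```python
    import time
    import numpy as np

    rng = np.random.default_rng(0)
    frame = rng.random((480, 640)).astype(np.float32)   # stand-in for a video frame
    img = np.clip(frame - 0.1, 0.0, None)               # crude background subtraction

    def centroid_loop(a):
        """Scalar, pixel-by-pixel centroid (slow reference implementation)."""
        total = cx = cy = 0.0
        for y in range(a.shape[0]):
            for x in range(a.shape[1]):
                v = a[y, x]
                total += v; cx += v * x; cy += v * y
        return cx / total, cy / total

    def centroid_vec(a):
        """The same computation as whole-array (vector) operations."""
        total = a.sum()
        ys, xs = np.indices(a.shape)
        return float((a * xs).sum() / total), float((a * ys).sum() / total)

    for fn in (centroid_loop, centroid_vec):
        t0 = time.perf_counter()
        cx, cy = fn(img)
        print(f"{fn.__name__}: ({cx:.2f}, {cy:.2f}) in {time.perf_counter() - t0:.4f}s")
    ```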

  10. Real-time fMRI processing with physiological noise correction - Comparison with off-line analysis.

    PubMed

    Misaki, Masaya; Barzigar, Nafise; Zotev, Vadim; Phillips, Raquel; Cheng, Samuel; Bodurka, Jerzy

    2015-12-30

    While applications of real-time functional magnetic resonance imaging (rtfMRI) are growing rapidly, there are still limitations in real-time data processing compared to off-line analysis. We developed a proof-of-concept real-time fMRI processing (rtfMRIp) system utilizing a personal computer (PC) with a dedicated graphics processing unit (GPU) to demonstrate that it is now possible to perform intensive whole-brain fMRI data processing in real time. The rtfMRIp performs slice-timing correction, motion correction, spatial smoothing, signal scaling, and general linear model (GLM) analysis with multiple noise regressors, including physiological noise modeled with cardiac (RETROICOR) and respiration volume per time (RVT) regressors. The whole-brain data analysis with more than 100,000 voxels and more than 250 volumes is completed in less than 300 ms, much faster than the time required to acquire an fMRI volume. Real-time processing cannot be implemented identically to off-line analysis when time-course information is used, such as in slice-timing correction, signal scaling, and GLM analysis. We verified that the reduced slice-timing correction used for real-time analysis produced output comparable to off-line analysis. The real-time GLM analysis, however, showed over-fitting when the number of sampled volumes was small. Our system implemented real-time RETROICOR and RVT physiological noise corrections for the first time and is capable of processing these steps on all available data at a given time, without the need for recursive algorithms. Comprehensive data processing in rtfMRI is possible with a PC, although the number of samples should be considered in real-time GLM analysis. Copyright © 2015 Elsevier B.V. All rights reserved.
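
    The over-fitting caveat is easy to reproduce: refitting a GLM on a short but growing time course with many regressors yields deceptively small residuals early on while the coefficient estimates are still poor. A minimal NumPy sketch with assumed dimensions (250 volumes, 12 regressors, one synthetic voxel); it is not the authors' pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_vols, n_reg = 250, 12                      # assumed: volumes, task+nuisance regressors
    X = rng.standard_normal((n_vols, n_reg))     # design matrix (task, motion, RETROICOR/RVT-like)
    beta_true = rng.standard_normal(n_reg)
    y = X @ beta_true + rng.standard_normal(n_vols)   # one voxel's noisy time course

    # Re-fit the GLM as each new volume arrives, as a real-time pipeline would.
    for t in (15, 30, 60, 120, 250):
        b, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        resid_var = np.var(y[:t] - X[:t] @ b)
        print(f"t={t:3d}  residual var={resid_var:.2f}  beta error={np.linalg.norm(b - beta_true):.2f}")
    # Early on the residual variance is deceptively small (over-fitting) while
    # the beta error is large; both stabilize as the number of volumes grows.
    ```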

  11. Comparative effectiveness of colony-stimulating factors in febrile neutropenia prophylaxis: how results are affected by research design.

    PubMed

    Henk, Henry J; Li, Xiaoyan; Becker, Laura K; Xu, Hairong; Gong, Qi; Deeter, Robert G; Barron, Richard L

    2015-01-01

    To examine the impact of research design on results in two published comparative effectiveness studies. Guidelines for comparative effectiveness research have recommended incorporating the disease process in study design. Based on these recommendations, we developed a checklist of considerations and applied it in a review of two published studies on the comparative effectiveness of colony-stimulating factors. Both studies used similar administrative claims data but different methods, which resulted in directionally different estimates. Major design differences between the two studies include whether the timing of intervention in the disease process was identified and whether the study cohort and outcome assessment period were defined based on this temporal relationship. Disease process and timing of intervention should be incorporated into the design of comparative effectiveness studies.

  12. Effects of extrusion temperature and dwell time on aflatoxin levels in cottonseed.

    PubMed

    Buser, Michael D; Abbas, Hamed K

    2002-04-24

    Cottonseed is an economical source of protein and is commonly used in balancing livestock rations; however, its use is typically limited by protein, fat, gossypol, and aflatoxin contents. Whole cottonseed was extruded to determine if the temperature and dwell time (multiple stages of processing) associated with the process affected aflatoxin levels. The extrusion temperature study showed that aflatoxin levels were reduced by an additional 33% when the cottonseed was extruded at 160 degrees C as compared to 104 degrees C. Furthermore, the multiple-pass extrusion study indicated that aflatoxin levels were reduced by an additional 55% when the cottonseed was extruded four times as compared to one time. To estimate the aflatoxin reductions due to extrusion temperature and dwell time, the least mean fits obtained for the individual studies were combined. Total estimated reductions of 55% (three stages of processing at 104 degrees C), 50% (two stages of processing at 132 degrees C), and 47% (one stage of processing at 160 degrees C) were obtained from the combined equations. If the extreme conditions (four stages of processing at 160 degrees C) of the evaluation studies are applied to the combined temperature and processing equation, the resulting aflatoxin reduction would be 76%.

  13. Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.

    PubMed

    Pendola, Martin; Saha, Subrata

    2015-01-01

    Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.

  14. Operating Room Time Savings with the Use of Splint Packs: A Randomized Controlled Trial

    PubMed Central

    Gonzalez, Tyler A.; Bluman, Eric M.; Palms, David; Smith, Jeremy T.; Chiodo, Christopher P.

    2016-01-01

    Background: The most expensive variable in the operating room (OR) is time. Lean Process Management is being used in the medical field to improve efficiency in the OR. Streamlining individual processes within the OR is crucial to a comprehensive time saving and cost-cutting health care strategy. At our institution, one hour of OR time costs approximately $500, exclusive of supply and personnel costs. Commercially prepared splint packs (SP) contain all components necessary for plaster-of-Paris short-leg splint application and have the potential to decrease splint application time and overall costs by making it a more lean process. We conducted a randomized controlled trial comparing OR time savings between SP use and bulk supply (BS) splint application. Methods: Fifty consecutive adult operative patients on whom post-operative short-leg splint immobilization was indicated were randomized to either a control group using BS or an experimental group using SP. One orthopaedic surgeon (EMB) prepared and applied all of the splints in a standardized fashion. Retrieval time, preparation time, splint application time, and total splinting time for both groups were measured and statistically analyzed. Results: The retrieval time, preparation time and total splinting time were significantly less (p<0.001) in the SP group compared with the BS group. There was no significant difference in application time between the SP group and BS group. Conclusion: The use of SP made the process of splinting more lean. This has resulted in an average of 2 minutes 52 seconds saved in total splinting time compared to BS, making it an effective cost-cutting and time saving technique. For high volume ORs, use of splint packs may contribute to substantial time and cost savings without impacting patient safety. PMID:26894212

  15. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    NASA Astrophysics Data System (ADS)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for a general phased array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from CPUs. Performance, benchmarked by computation time with various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that real-time GPGPU (General Purpose GPU) processing of array radar data is possible with relatively low-cost commercial GPUs.
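
    The chain the authors describe reduces largely to batched FFT and matrix work. As a CPU-side sketch of one stage, the NumPy snippet below performs FFT-based pulse compression on a synthetic data cube; the cube size, chirp, and echo location are all invented, and the cuFFT/cuBLAS kernels named in the abstract are batched GPU versions of these same operations.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n_pulses, n_range = 64, 4096
    chirp = np.exp(1j * np.pi * 5e-4 * np.arange(128) ** 2)   # transmitted LFM pulse

    # Received data cube: a weak echo at range bin 1000, buried in noise.
    cube = 0.1 * (rng.standard_normal((n_pulses, n_range)) +
                  1j * rng.standard_normal((n_pulses, n_range)))
    cube[:, 1000:1000 + chirp.size] += 0.2 * chirp

    # Matched filtering (pulse compression) implemented with FFTs.
    H = np.conj(np.fft.fft(chirp, n_range))
    compressed = np.fft.ifft(np.fft.fft(cube, axis=1) * H, axis=1)
    profile = np.abs(compressed.sum(axis=0))            # coherent integration across pulses
    print("detected range bin:", int(profile.argmax()))  # -> 1000
    ```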

  16. Computerized Orders with Standardized Concentrations Decrease Dispensing Errors of Continuous Infusion Medications for Pediatrics

    PubMed Central

    Sowan, Azizeh K.; Vaidya, Vinay U.; Soeken, Karen L.; Hilmas, Elora

    2010-01-01

    OBJECTIVES The use of continuous infusion medications with individualized concentrations may increase the risk for errors in pediatric patients. The objective of this study was to evaluate the effect of computerized prescriber order entry (CPOE) for continuous infusions with standardized concentrations on the frequency of pharmacy processing errors. In addition, time to process handwritten versus computerized infusion orders was evaluated and user satisfaction with CPOE as compared to handwritten orders was measured. METHODS Using a crossover design, 10 pharmacists in the pediatric satellite within a university teaching hospital were given test scenarios of handwritten and CPOE order sheets and asked to process infusion orders using the pharmacy system in order to generate infusion labels. Participants were given three groups of orders: five correct handwritten orders, four handwritten orders written with deliberate errors, and five correct CPOE orders. Label errors were analyzed and time to complete the task was recorded. RESULTS Using CPOE orders, participants required less processing time per infusion order (2 min, 5 sec ± 58 sec) compared with time per infusion order in the first handwritten order sheet group (3 min, 7 sec ± 1 min, 20 sec) and the second handwritten order sheet group (3 min, 26 sec ± 1 min, 8 sec) (p<0.01). CPOE eliminated all error types except wrong concentration. With CPOE, 4% of infusions processed contained errors, compared with 26% of the first group of handwritten orders and 45% of the second group of handwritten orders (p<0.03). Pharmacists were more satisfied with CPOE orders when compared with the handwritten method (p=0.0001). CONCLUSIONS CPOE orders saved pharmacists' time and greatly improved the safety of processing continuous infusions, although not all errors were eliminated. Pharmacists were overwhelmingly satisfied with the CPOE orders. PMID:22477811

  17. Process fault detection and nonlinear time series analysis for anomaly detection in safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, T.L.; Mullen, M.F.; Wangen, L.E.

    In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material in two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear values.
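
    Both residual tests are compact enough to sketch. The NumPy/SciPy illustration below (not the authors' code) calibrates on a fault-free window, then flags an injected slow leak with per-variable z-scores and with a joint Mahalanobis-distance test at a roughly matched false-alarm rate; the residual noise level and bias size are invented.

    ```python
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(2)
    n, p = 500, 3                                 # time steps, residual variables (tank levels)
    resid = 0.05 * rng.standard_normal((n, p))    # model-minus-measurement residuals
    resid[300:, 0] += 0.2                         # injected fault: slow leak in tank 1 from t=300

    mu = resid[:200].mean(axis=0)                 # statistics from a fault-free window
    sd = resid[:200].std(axis=0)
    cov_inv = np.linalg.inv(np.cov(resid[:200].T))

    z = (resid - mu) / sd                                           # univariate z-scores
    d2 = np.einsum('ij,jk,ik->i', resid - mu, cov_inv, resid - mu)  # squared Mahalanobis distance

    alarm_z = np.abs(z).max(axis=1) > 4.0         # any single residual out of bounds
    alarm_m = d2 > chi2.ppf(1 - 1e-4, df=p)       # joint test, similar nominal rate
    print("first z alarm at t =", int(np.argmax(alarm_z)))
    print("first Mahalanobis alarm at t =", int(np.argmax(alarm_m)))
    ```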

  18. Note: Quasi-real-time analysis of dynamic near field scattering data using a graphics processing unit

    NASA Astrophysics Data System (ADS)

    Cerchiari, G.; Croccolo, F.; Cardinaux, F.; Scheffold, F.

    2012-10-01

    We present an implementation of the analysis of dynamic near field scattering (NFS) data using a graphics processing unit. We introduce an optimized data management scheme thereby limiting the number of operations required. Overall, we reduce the processing time from hours to minutes, for typical experimental conditions. Previously the limiting step in such experiments, the processing time is now comparable to the data acquisition time. Our approach is applicable to various dynamic NFS methods, including shadowgraph, Schlieren and differential dynamic microscopy.

  19. Effect of enzyme concentration, addition of water and incubation time on increase in yield of starch from potato.

    PubMed

    Sit, Nandan; Agrawal, U S; Deka, Sankar C

    2014-05-01

    An enzymatic treatment process for starch extraction from potato was investigated using cellulase enzyme and compared with the conventional process. The effects of three parameters, cellulase enzyme concentration, incubation time, and addition of water, were evaluated for the increase in starch yield compared to the conventional process, i.e., without using enzyme. A two-level full factorial design was used to study the process. The results indicated that all the main parameters and their interactions were statistically significant. Enzyme concentration and incubation time had a positive effect on the increase in starch yield, while addition of water had a negative effect. The increase in starch yield ranged from 1.9% at low enzyme concentration and incubation time and high addition of water, to a maximum of 70% over the conventional process when enzyme concentration and incubation time were high and addition of water was low, suggesting that the water present in the ground potato meal is sufficient for enzyme access within the slurry, ensuring adequate contact with the substrate.

  20. Impact of point-of-care implementation of Xpert® MTB/RIF: product vs. process innovation.

    PubMed

    Schumacher, S G; Thangakunam, B; Denkinger, C M; Oliver, A A; Shakti, K B; Qin, Z Z; Michael, J S; Luo, R; Pai, M; Christopher, D J

    2015-09-01

    Both product innovation (e.g., more sensitive tests) and process innovation (e.g., a point-of-care [POC] testing programme) could improve patient outcomes. To study the respective contributions of product and process innovation in improving patient outcomes. We implemented a POC programme using Xpert® MTB/RIF in an out-patient clinic of a tertiary care hospital in India. We measured the impact of process innovation by comparing time to diagnosis with routine testing vs. POC testing. We measured the impact of product innovation by comparing accuracy and time to diagnosis using smear microscopy vs. POC Xpert. We enrolled 1012 patients over a 15-month period. Xpert had high accuracy, but the incremental value of one Xpert over two smears was only 6% (95% CI 3-12). Implementing Xpert as a routine laboratory test did not reduce the time to diagnosis compared to smear-based diagnosis. In contrast, the POC programme reduced the time to diagnosis by 5.5 days (95% CI 4.3-6.7), but required dedicated staff and substantial adaptation of the clinic workflow. Process innovation by way of a POC Xpert programme had a greater impact on time to diagnosis than the product per se, and can yield important improvements in patient care that are complementary to those achieved by introducing innovative technologies.

  1. Solar physics applications of computer graphics and image processing

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

  2. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
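
    The excess-time statistics are straightforward to check numerically. The NumPy/SciPy sketch below (not the authors' code) builds the shot-noise process from Poisson arrivals with exponentially distributed amplitudes convolved with a one-sided exponential pulse, then estimates the up-crossing rate and mean time above a threshold; the parameter values are arbitrary.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(3)
    dt, tau_d, gamma = 0.01, 1.0, 5.0   # step, pulse duration, intermittency parameter
    n = 1_000_000                       # samples, i.e. 10**4 pulse durations

    # Poisson arrivals on the grid (multiple arrivals in one step are rare and
    # approximated by a single scaled amplitude), exponential amplitudes.
    impulses = rng.poisson(gamma * dt / tau_d, n) * rng.exponential(1.0, n)
    kernel = np.exp(-np.arange(0.0, 10.0 * tau_d, dt) / tau_d)  # one-sided exponential pulse
    signal = fftconvolve(impulses, kernel)[:n]

    threshold = signal.mean() + signal.std()         # one rms unit above the mean
    above = signal > threshold
    ups = np.count_nonzero(~above[:-1] & above[1:])  # number of up-crossings
    rate = ups / (n * dt)
    print(f"up-crossing rate {rate:.4f} per tau_d, "
          f"mean time above threshold {above.mean() / rate:.3f} tau_d")
    ```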

  3. Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.

    PubMed

    Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias

    2016-01-01

    To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
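
    For reference, the segmented-regression form of an interrupted time series allows the level and the slope to change at the intervention. A minimal NumPy sketch on simulated monthly data (all numbers invented, not the decision-support study):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, t0 = 48, 24                                 # 48 monthly points, intervention at month 24
    t = np.arange(n)
    post = (t >= t0).astype(float)
    # Simulated outcome: baseline level 50, drift, then a drop and a slope change.
    y = 50 + 0.2 * t - 6 * post - 0.3 * (t - t0) * post + rng.normal(0, 2, n)

    X = np.column_stack([np.ones(n), t, post, (t - t0) * post])
    (b0, b1, b2, b3), *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"baseline level {b0:.1f}, baseline trend {b1:.2f}")
    print(f"level change {b2:.1f}, trend change {b3:.2f}")  # the two intervention effects
    ```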

  4. Autocorrelated process control: Geometric Brownian Motion approach versus Box-Jenkins approach

    NASA Astrophysics Data System (ADS)

    Salleh, R. M.; Zawawi, N. I.; Gan, Z. F.; Nor, M. E.

    2018-04-01

    The existence of autocorrelation can have a significant effect on the performance and accuracy of process control if the problem is not handled carefully. When dealing with an autocorrelated process, the Box-Jenkins method is usually preferred because of its popularity. However, the computation required by the Box-Jenkins method is complicated and challenging, which makes it time-consuming. Therefore, an alternative method known as Geometric Brownian Motion (GBM) is introduced to monitor the autocorrelated process. A real case study on furnace temperature data was conducted to compare the performance of the Box-Jenkins and GBM methods in monitoring an autocorrelated process. Both methods give the same results in terms of model accuracy and monitoring of process control. Yet GBM is superior to the Box-Jenkins method due to its simplicity and practicality, with a shorter computational time.
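
    The simplicity argument is visible in code: fitting a GBM takes only the two sample moments of the log-returns, with no iterative model identification as in Box-Jenkins. A minimal NumPy sketch on a simulated path; the drift, volatility, and furnace-like scale are invented, not the case-study data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    dt, n, s0 = 1.0, 200, 800.0
    mu_true, sigma_true = 0.002, 0.01            # invented GBM parameters
    increments = rng.normal((mu_true - 0.5 * sigma_true**2) * dt,
                            sigma_true * np.sqrt(dt), n)
    s = s0 * np.exp(np.cumsum(increments))       # simulated autocorrelated series

    # GBM "identification" is just two moments of the log-returns.
    r = np.diff(np.log(s))
    sigma_hat = r.std(ddof=1) / np.sqrt(dt)
    mu_hat = r.mean() / dt + 0.5 * sigma_hat**2
    forecast = s[-1] * np.exp(mu_hat * dt)       # one-step-ahead point forecast
    print(f"mu = {mu_hat:.4f}, sigma = {sigma_hat:.4f}, next value = {forecast:.1f}")
    ```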

  5. A comparative study on visual choice reaction time for different colors in females.

    PubMed

    Balakrishnan, Grrishma; Uppinakudru, Gurunandan; Girwar Singh, Gaur; Bangera, Shobith; Dutt Raghavendra, Aswini; Thangavel, Dinesh

    2014-01-01

    Reaction time is one of the important methods to study a person's central information processing speed and coordinated peripheral movement response. Visual choice reaction time is a type of reaction time and is very important for drivers, pilots, security guards, and so forth. Previous studies were mainly on simple reaction time and there are very few studies on visual choice reaction time. The aim of our study was to compare the visual choice reaction times for red, green, and yellow colors in 60 healthy undergraduate female volunteers. After adequate practice, visual choice reaction time was recorded for red, green, and yellow colors using a reaction time machine (RTM 608, Medicaid, Chandigarh). Repeated-measures ANOVA and Bonferroni multiple comparisons were used for analysis, and P < 0.05 was considered statistically significant. The results showed that both red and green elicited significantly shorter visual choice reaction times (P values <0.0001 and 0.0002) when compared with yellow. This could be because the mental processing time for yellow is longer than for red and green.

  6. Is Time-Based Prospective Remembering Mediated by Self-Initiated Rehearsals? Role of Incidental Cues, Ongoing Activity, Age, and Motivation

    ERIC Educational Resources Information Center

    Kvavilashvili, Lia; Fisher, Laura

    2007-01-01

    The present research examined self-reported rehearsal processes in naturalistic time-based prospective memory tasks (Study 1 and 2) and compared them with the processes in event-based tasks (Study 3). Participants had to remember to phone the experimenter either at a prearranged time (a time-based task) or after receiving a certain text message…

  7. Process connectivity in a naturally prograding river delta

    NASA Astrophysics Data System (ADS)

    Sendrowski, Alicia; Passalacqua, Paola

    2017-03-01

    River deltas are lowland systems that can display high hydrological connectivity. This connectivity can be structural (morphological connections), functional (control of fluxes), and process connectivity (information flow from system drivers to sinks). In this work, we quantify hydrological process connectivity in Wax Lake Delta, coastal Louisiana, by analyzing couplings among external drivers (discharge, tides, and wind) and water levels recorded at five islands and one channel over summer 2014. We quantify process connections with information theory, a branch of mathematics concerned with the communication of information. We represent process connections as a network; variables serve as network nodes and couplings as network links describing the strength, direction, and time scale of information flow. Comparing process connections at long (105 days) and short (10 days) time scales, we show that tides exhibit daily synchronization with water level, with decreasing strength from downstream to upstream, and that tides transfer information as tides transition from spring to neap. Discharge synchronizes with water level and the time scale of its information transfer compares well to physical travel times through the system, computed with a hydrodynamic model. Information transfer and physical transport show similar spatial patterns, although information transfer time scales are larger than physical travel times. Wind events associated with water level setup lead to increased process connectivity with highly variable information transfer time scales. We discuss the information theory results in the context of the hydrologic behavior of the delta, the role of vegetation as a connector/disconnector on islands, and the applicability of process networks as tools for delta modeling results.

  8. Cycle time and cost reduction in large-size optics production

    NASA Astrophysics Data System (ADS)

    Hallock, Bob; Shorey, Aric; Courtney, Tom

    2005-09-01

    Optical fabrication process steps have remained largely unchanged for decades. Raw glass blanks have been rough-machined, generated to near net shape, loose-abrasive or fine bound-diamond ground, and then polished. This set of processes is sequential, and each subsequent operation removes the damage and micro-cracking induced by the prior operational step. One of the long-lead aspects of this process has been the glass polishing. Primarily, this has been driven by the need to remove relatively large volumes of glass material, compared to the polishing removal rate, to ensure complete damage removal. The secondary time driver has been poor convergence to final figure and the corresponding polish-metrology cycles. The overall cycle time and resultant cost due to labor, equipment utilization and shop efficiency are increased, often significantly, when the optical prescription is aspheric. In addition to the long polishing cycle times, the duration of the polishing time is often very difficult to predict, given that current polishing processes are not deterministic. This paper will describe a novel approach to large optics finishing, relying on several innovative technologies to be presented and illustrated through a variety of examples. The cycle time reductions enabled by this approach promise to result in significant cost and lead-time reductions for large-size optics. In addition, corresponding increases in throughput will provide for less capital expenditure per square meter of optic produced. This process, comparative cycle-time estimates, and preliminary results will be discussed.

  9. Sample size calculations for comparative clinical trials with over-dispersed Poisson process data.

    PubMed

    Matsui, Shigeyuki

    2005-05-15

    This paper develops a new formula for sample size calculations for comparative clinical trials with Poisson or over-dispersed Poisson process data. The criterion for sample size calculation is developed on the basis of asymptotic approximations for a two-sample non-parametric test comparing the empirical event rate function between treatment groups. This formula can accommodate time heterogeneity, inter-patient heterogeneity in event rate, and time-varying treatment effects. An application of the formula to a trial for chronic granulomatous disease is provided. Copyright 2004 John Wiley & Sons, Ltd.
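
    The abstract does not reproduce the new formula; for orientation, the sketch below implements the standard approximation for comparing two event rates, with a variance inflation factor phi for over-dispersion. This is a textbook stand-in, not Matsui's formula, and the example inputs are invented.

    ```python
    from scipy.stats import norm

    def n_per_arm(lam1, lam2, follow_up, phi=1.0, alpha=0.05, power=0.80):
        """Patients per arm to compare two event rates (events per patient-time),
        inflating the Poisson variance by phi to allow for over-dispersion."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z**2 * phi * (lam1 + lam2) / (follow_up * (lam1 - lam2)**2)

    # e.g. 2.0 vs 1.4 events per patient-year, 1 year follow-up, over-dispersion 1.5
    print(round(n_per_arm(2.0, 1.4, 1.0, phi=1.5)))   # -> about 111 per arm
    ```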

  10. Effect of closed-loop order processing on the time to initial antimicrobial therapy.

    PubMed

    Panosh, Nicole; Rew, Richard; Sharpe, Michelle

    2012-08-15

    The results of a study comparing the average time to initiation of i.v. antimicrobial therapy with closed- versus open-loop order entry and processing are reported. A retrospective cohort study was performed to compare order-to-administration times for initial doses of i.v. antimicrobials before and after a closed-loop order-processing system including computerized prescriber order entry (CPOE) was implemented at a large medical center. A total of 741 i.v. antimicrobial administrations to adult patients during designated five-month preimplementation and postimplementation study periods were assessed. Drug-use reports generated by the pharmacy database were used to identify order-entry times, and medication administration records were reviewed to determine times of i.v. antimicrobial administration. The mean ± S.D. order-to-administration times before and after the implementation of the CPOE system and closed-loop order processing were 3.18 ± 2.60 and 2.00 ± 1.89 hours, respectively, a reduction of 1.18 hours (p < 0.0001). Closed-loop order processing was associated with significant reductions in the average time to initiation of i.v. therapy in all patient care areas evaluated (cardiology, general medicine, and oncology). The study results suggest that CPOE-based closed-loop order processing can play an important role in achieving compliance with current practice guidelines calling for increased efforts to ensure the prompt initiation of i.v. antimicrobials for severe infections (e.g., sepsis, meningitis). Implementation of a closed-loop order-processing system resulted in a significant decrease in order-to-administration times for i.v. antimicrobial therapy.

  11. Temporal Proof Methodologies for Real-Time Systems,

    DTIC Science & Technology

    1990-09-01

    real-time systems that communicate either through shared variables or by message passing, and real-time issues such as time-outs, process priorities (interrupts), and process scheduling. The authors exhibit two styles for the specification of real-time systems. While the first approach uses bounded versions of the temporal operators, the second approach allows explicit references to time through a special clock variable. Corresponding to the two styles of specification, the authors present and compare two fundamentally different proof methodologies.

  12. Executive functioning and processing speed in age-related differences in time estimation: a comparison of young, old, and very old adults.

    PubMed

    Baudouin, Alexia; Isingrini, Michel; Vanneste, Sandrine

    2018-01-25

    Age-related differences in time estimation were examined by comparing the temporal performance of young, young-old, and old-old adults, in relation to two major theories of cognitive aging: executive decline and cognitive slowing. We tested the hypothesis that processing speed and executive function are differentially involved in timing depending on the temporal task used. We also tested the assumption of greater age-related effects in time estimation in old-old participants. Participants performed two standard temporal tasks: duration production and duration reproduction. They also completed tests measuring executive function and processing speed. Findings supported the view that executive function is the best mediator of reproduction performance and inversely that processing speed is the best mediator of production performance. They also showed that young-old participants provide relatively accurate temporal judgments compared to old-old participants. These findings are discussed in terms of compensation mechanisms in aging.

  13. System for monitoring an industrial or biological process

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Vilim, Rick B.; White, Andrew M.

    1998-01-01

    A method and apparatus for monitoring and responding to conditions of an industrial process. Industrial process signals, such as repetitive manufacturing, testing and operational machine signals, are generated by a system. Sensor signals characteristic of the process are generated over a time length and compared to reference signals over the time length. The industrial signals are adjusted over the time length relative to the reference signals, the phase shift of the industrial signals is optimized to the reference signals and the resulting signals output for analysis by systems such as SPRT.
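
    The SPRT mentioned as the downstream analyzer is Wald's sequential probability ratio test, which is compact enough to sketch. The Python below tests a residual stream for a Gaussian mean shift; the shift size, noise level, and error rates are invented, and this illustrates the general technique rather than the patented system.

    ```python
    import math
    import random

    def sprt(stream, mu0, mu1, sigma, alpha=0.001, beta=0.001):
        """Wald's SPRT for H0: mean=mu0 vs H1: mean=mu1 in Gaussian noise."""
        lower = math.log(beta / (1 - alpha))
        upper = math.log((1 - beta) / alpha)
        llr = 0.0
        for i, x in enumerate(stream):
            llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
            if llr >= upper:
                return i, "degraded"   # accept H1: the signal has shifted
            if llr <= lower:
                return i, "normal"     # accept H0
        return len(stream) - 1, "undecided"

    random.seed(6)
    healthy = [random.gauss(0.0, 1.0) for _ in range(200)]
    faulty = [random.gauss(0.5, 1.0) for _ in range(200)]
    print(sprt(healthy, 0.0, 0.5, 1.0))   # decides "normal" after a few dozen samples
    print(sprt(faulty, 0.0, 0.5, 1.0))    # decides "degraded"
    ```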

  14. System for monitoring an industrial or biological process

    DOEpatents

    Gross, K.C.; Wegerich, S.W.; Vilim, R.B.; White, A.M.

    1998-06-30

    A method and apparatus are disclosed for monitoring and responding to conditions of an industrial process. Industrial process signals, such as repetitive manufacturing, testing and operational machine signals, are generated by a system. Sensor signals characteristic of the process are generated over a time length and compared to reference signals over the time length. The industrial signals are adjusted over the time length relative to the reference signals, the phase shift of the industrial signals is optimized to the reference signals and the resulting signals output for analysis by systems such as SPRT. 49 figs.

  15. Reaction times of normal listeners to laryngeal, alaryngeal, and synthetic speech.

    PubMed

    Evitts, Paul M; Searl, Jeff

    2006-12-01

    The purpose of this study was to compare listener processing demands when decoding alaryngeal compared to laryngeal speech. Fifty-six listeners were presented with single words produced by 1 proficient speaker from each of 5 different modes of speech: normal, tracheoesophageal (TE), esophageal (ES), electrolaryngeal (EL), and synthetic speech (SS). Cognitive processing load was indexed by listener reaction time (RT). To account for significant durational differences among the modes of speech, an RT ratio was calculated (stimulus duration divided by RT). Results indicated that the cognitive processing load was greater for ES and EL relative to normal speech. TE and normal speech did not differ in terms of RT ratio, suggesting fairly comparable cognitive demands placed on the listener. SS required a greater cognitive processing load than normal and alaryngeal speech. The results are discussed relative to alaryngeal speech intelligibility and the role of the listener. Potential clinical applications and directions for future research are also presented.

  16. Neural Network of Predictive Motor Timing in the Context of Gender Differences

    PubMed Central

    Lošák, Jan; Kašpárek, Tomáš; Vaníček, Jiří; Bareš, Martin

    2016-01-01

    Time perception is an essential part of our everyday lives, in both the prospective and the retrospective domains. However, our knowledge of temporal processing is mainly limited to the networks responsible for comparing or maintaining specific intervals or frequencies. In the presented fMRI study, we sought to characterize the neural nodes engaged specifically in predictive temporal analysis, the estimation of the future position of an object with varying movement parameters, and the contingent neuroanatomical signature of differences in behavioral performance between genders. The established dominant cerebellar engagement offers novel evidence in favor of a pivotal role of this structure in predictive short-term timing, overshadowing the basal ganglia reported, together with the frontal cortex, as dominant in retrospective temporal processing in the subsecond spectrum. Furthermore, we discovered lower performance on this task and massively increased cerebellar activity in women compared to men, indicative of strategy differences between the genders. This promotes the view that predictive temporal computing utilizes structures comparable to those of retrospective timing processes, but with a definite dominance of the cerebellum. PMID:27019753

  17. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    PubMed

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
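
    The reduced representation at the core of this programme is easy to sketch: summarize each series by the outputs of a few analysis operations and compare series in that feature space. A toy NumPy version with three invented signals and three features, orders of magnitude smaller than the paper's library of over 9000 methods:

    ```python
    import numpy as np

    def features(x):
        """Tiny feature vector: each entry is the output of one analysis method."""
        x = (x - x.mean()) / x.std()
        acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 autocorrelation
        above = (x > 0).mean()                    # fraction of time above the mean
        jump = np.abs(np.diff(x)).mean()          # mean successive change
        return np.array([acf1, above, jump])

    rng = np.random.default_rng(7)
    series = {
        "white noise": rng.standard_normal(1000),
        "random walk": np.cumsum(rng.standard_normal(1000)),
        "noisy sine": np.sin(np.linspace(0, 60, 1000)) + 0.1 * rng.standard_normal(1000),
    }
    for name, x in series.items():
        print(f"{name:12s}", np.round(features(x), 2))
    # Distances between these vectors group like with like -- the same idea,
    # scaled up, organizes the paper's 35,000 series and 9,000 methods.
    ```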

  18. Highly comparative time-series analysis: the empirical structure of time series and their methods

    PubMed Central

    Fulcher, Ben D.; Little, Max A.; Jones, Nick S.

    2013-01-01

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines. PMID:23554344

  19. Enhanced visuomotor processing of phobic images in blood-injury-injection fear.

    PubMed

    Haberkamp, Anke; Schmidt, Thomas

    2014-04-01

    Numerous studies have identified attentional biases and processing enhancements for fear-relevant stimuli in individuals with specific phobias. However, this has not been conclusively shown in blood-injury-injection (BII) phobia, which has rarely been investigated even though it has features distinct from all other specific phobias. The present study aims to fill that gap and compares the time-course of visuomotor processing of phobic stimuli (i.e., pictures of small injuries) in BII-fearful (n=19) and non-anxious control participants (n=23) by using a response priming paradigm. In BII-fearful participants, phobic stimuli produced larger priming effects and lower response times compared to neutral stimuli, whereas non-anxious control participants showed no such differences. Because these effects are fully present in the fastest responses, they indicate an enhancement in early visuomotor processing of injury pictures in BII-fearful participants. These results are comparable to the enhanced processing of phobic stimuli in other specific phobias (i.e., spider phobia). Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Modelling of Sub-daily Hydrological Processes Using Daily Time-Step Models: A Distribution Function Approach to Temporal Scaling

    NASA Astrophysics Data System (ADS)

    Kandel, D. D.; Western, A. W.; Grayson, R. B.

    2004-12-01

    Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
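
    The essence of the approach, pushing a within-day intensity distribution through a threshold process instead of the daily mean, fits in a few lines. The sketch below assumes an exponential intensity distribution and a fixed infiltration capacity (both invented; the paper fits cdfs to data and treats more processes) and contrasts the two runoff estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    mean_intensity = 2.0    # mm/h, observed daily-average rainfall intensity
    infil_cap = 5.0         # mm/h, infiltration capacity (threshold nonlinearity)

    # Within-day intensities represented by a distribution with the observed mean.
    intensities = rng.exponential(mean_intensity, 100_000)

    runoff_lumped = max(mean_intensity - infil_cap, 0.0) * 24          # mm/day
    runoff_cdf = np.maximum(intensities - infil_cap, 0.0).mean() * 24  # mm/day
    print(f"daily-average model: {runoff_lumped:.2f} mm/day")
    print(f"cdf approach:        {runoff_cdf:.2f} mm/day")
    # The lumped model predicts zero runoff because the mean intensity never
    # exceeds the infiltration capacity; integrating over the distribution
    # recovers the runoff generated by short high-intensity bursts in the day.
    ```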

  1. Near-real-time simulations of bioelectric activity in small mammalian hearts using graphical processing units

    PubMed Central

    Vigmond, Edward J.; Boyle, Patrick M.; Leon, L. Joshua; Plank, Gernot

    2014-01-01

    Simulations of cardiac bioelectric phenomena remain a significant challenge despite continual advancements in computational machinery. Spanning large temporal and spatial ranges demands millions of nodes to accurately depict geometry, and a comparable number of timesteps to capture dynamics. This study explores a new hardware computing paradigm, the graphics processing unit (GPU), to accelerate cardiac models, and analyzes results in the context of simulating a small mammalian heart in real time. The ODEs associated with membrane ionic flow were computed on traditional CPU and compared to GPU performance, for one to four parallel processing units. The scalability of solving the PDE responsible for tissue coupling was examined on a cluster using up to 128 cores. Results indicate that the GPU implementation was between 9 and 17 times faster than the CPU implementation and scaled similarly. Solving the PDE was still 160 times slower than real time. PMID:19964295

  2. The association between time scarcity, sociodemographic correlates and consumption of ultra-processed foods among parents in Norway: a cross-sectional study.

    PubMed

    Djupegot, Ingrid Laukeland; Nenseth, Camilla Bengtson; Bere, Elling; Bjørnarå, Helga Birgit Torgeirsdotter; Helland, Sissel Heidi; Øverby, Nina Cecilie; Torstveit, Monica Klungland; Stea, Tonje Holte

    2017-05-15

    Use of ultra-processed foods has expanded rapidly over the last decades and high consumption has been positively associated with risk of, e.g., overweight, obesity and type 2 diabetes. Ultra-processed foods offer convenience as they require minimal time for preparation. It is therefore reasonable to assume that such foods are consumed more often among people who experience time scarcity. The main aim of this study was to investigate the association between time scarcity and consumption of ultra-processed foods among parents of 2-year-olds in Norway. A secondary aim was to investigate the association between sociodemographic correlates, weight status and consumption of ultra-processed foods. This cross-sectional study included 497 participants. Chi-square tests and cross tabulations were used to calculate proportions of high vs. low consumption of ultra-processed foods in relation to time scarcity, sociodemographic correlates and weight status. Binary logistic regression analyses were performed to test the relationship between independent variables and consumption of ultra-processed foods. Participants reporting medium and high time scarcity were more likely to have a high consumption of ultra-processed dinner products (OR = 3.68, 95% CI = 2.32-5.84 and OR = 3.10, 1.80-5.35, respectively) and fast foods (OR = 2.60, 1.62-4.18 and OR = 1.90, 1.08-3.32, respectively) compared to those with low time scarcity. Further, participants with medium time scarcity were more likely to have a high consumption of snacks and soft drinks compared to participants with low time scarcity (OR = 1.63, 1.06-2.49). Finally, gender, ethnicity, educational level, number of children in the household and weight status were identified as important factors associated with the consumption of certain types of ultra-processed foods. Results from the present study showed that time scarcity, various sociodemographic factors and weight status were associated with consumption of processed foods. Future studies with a longitudinal design are needed to further explore these patterns over a longer period of time.

  3. On time-dependent diffusion coefficients arising from stochastic processes with memory

    NASA Astrophysics Data System (ADS)

    Carpio-Bernido, M. Victoria; Barredo, Wilson I.; Bernido, Christopher C.

    2017-08-01

    Time-dependent diffusion coefficients arise from anomalous diffusion encountered in many physical systems such as protein transport in cells. We compare these coefficients with those arising from analysis of stochastic processes with memory that go beyond fractional Brownian motion. Facilitated by the Hida white noise functional integral approach, diffusion propagators or probability density functions (pdf) are obtained and shown to be solutions of modified diffusion equations with time-dependent diffusion coefficients. This should be useful in the study of complex transport processes.
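
    A hedged LaTeX sketch of the generic form such modified diffusion equations take, with the fractional-Brownian-motion case as a concrete example; the notation is assumed, not quoted from the paper.

    ```latex
    % Diffusion equation with a time-dependent coefficient and its Gaussian
    % propagator (generic form; notation assumed, not the paper's):
    \frac{\partial p(x,t)}{\partial t} \;=\; D(t)\,\frac{\partial^{2} p(x,t)}{\partial x^{2}},
    \qquad
    p(x,t) \;=\; \frac{1}{\sqrt{4\pi\int_{0}^{t} D(s)\,ds}}\,
    \exp\!\left(-\frac{x^{2}}{4\int_{0}^{t} D(s)\,ds}\right).
    % For fractional Brownian motion with Hurst exponent H, D(t) = H t^{2H-1},
    % which reproduces the anomalous mean-squared displacement ~ t^{2H}.
    ```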

  4. International collaborative project to compare and monitor the nutritional composition of processed foods.

    PubMed

    Dunford, Elizabeth; Webster, Jacqui; Metzler, Adriana Blanco; Czernichow, Sebastien; Ni Mhurchu, Cliona; Wolmarans, Petro; Snowdon, Wendy; L'Abbe, Mary; Li, Nicole; Maulik, Pallab K; Barquera, Simon; Schoj, Verónica; Allemandi, Lorena; Samman, Norma; de Menezes, Elizabete Wenzel; Hassell, Trevor; Ortiz, Johana; Salazar de Ariza, Julieta; Rahman, A Rashid; de Núñez, Leticia; Garcia, Maria Reyes; van Rossum, Caroline; Westenbrink, Susanne; Thiam, Lim Meng; MacGregor, Graham; Neal, Bruce

    2012-12-01

    Chronic diseases are the leading cause of premature death and disability in the world with overnutrition a primary cause of diet-related ill health. Excess energy intake, saturated fat, sugar, and salt derived from processed foods are a major cause of disease burden. Our objective is to compare the nutritional composition of processed foods between countries, between food companies, and over time. Surveys of processed foods will be done in each participating country using a standardized methodology. Information on the nutrient composition for each product will be sought either through direct chemical analysis, from the product label, or from the manufacturer. Foods will be categorized into 14 groups and 45 categories for the primary analyses which will compare mean levels of nutrients at baseline and over time. Initial commitments to collaboration have been obtained from 21 countries. This collaborative approach to the collation and sharing of data will enable objective and transparent tracking of processed food composition around the world. The information collected will support government and food industry efforts to improve the nutrient composition of processed foods around the world.

  5. A real-world, multi-site, observational study of infusion time and treatment satisfaction with rheumatoid arthritis patients treated with intravenous golimumab or infliximab.

    PubMed

    Daniel, Shoshana R; McDermott, John D; Le, Cathy; Pierce, Christine A; Ziskind, Michael A; Ellis, Lorie A

    2018-05-25

    To assess real-world infusion times for golimumab (GLM-IV) and infliximab (IFX) for rheumatoid arthritis (RA) patients and factors associated with treatment satisfaction. An observational study assessed infusion time including: clinic visit duration, RA medication preparation and infusion time, and infusion process time. Satisfaction was assessed by a modified Treatment Satisfaction Questionnaire for Medication (patient) and study-specific questionnaires (patient and clinic personnel). Comparative statistical testing for patient data utilized analysis of variance for continuous measures, and Fisher's exact or Chi-square test for categorical measures. Multivariate analysis was performed for the primary time endpoints and patient satisfaction. One hundred and fifty patients were enrolled from six US sites (72 GLM-IV, 78 IFX). The majority of patients were female (80.0%) and Caucasian (88.7%). GLM-IV required fewer vials per infusion (3.7) compared to IFX (4.9; p = .0001). Clinic visit duration (minutes) was shorter for GLM-IV (65.1) compared to IFX (153.1; p < .0001), as was total infusion time for RA medication (32.8 GLM-IV, 119.5 IFX; p < .0001) and infusion process times (45.8 GLM-IV, 134.1 IFX; p < .0001). Patients treated with GLM-IV reported higher satisfaction ratings with infusion time (p < .0001) and total visit time (p = .0003). Clinic personnel reported higher satisfaction with GLM-IV than IFX specific to medication preparation time, ease of mixing RA medication, frequency of patients requiring pre-medication, and infusion time. Findings may not be representative of care delivery for all RA infusion practices or RA patients. Shorter overall clinic visit duration, infusion process, and RA medication infusion times were observed for GLM-IV compared to IFX. A shorter duration in infusion time was associated with higher patient and clinic personnel satisfaction ratings.

  6. Stable and verifiable state estimation methods and systems with spacecraft applications

    NASA Technical Reports Server (NTRS)

    Li, Rongsheng (Inventor); Wu, Yeong-Wei Andy (Inventor)

    2001-01-01

    The stability of a recursive estimator process (e.g., a Kalman filter) is assured for long time periods by periodically resetting an error covariance P(t_n) of the system to a predetermined reset value P_r. The recursive process is thus repetitively forced to start from a selected covariance and continue for a time period that is short compared to the system's total operational time period. The time period in which the process must maintain its numerical stability is significantly reduced, as is the demand on the system's numerical stability. The process stability for an extended operational time period T_o is verified by performing the resetting step at the end of at least one reset time period T_r whose duration is less than the operational time period T_o and then confirming stability of the process over the reset time period T_r. Because the recursive process starts from a selected covariance at the beginning of each reset time period T_r, confirming stability of the process over at least one reset time period substantially confirms stability over the longer operational time period T_o.
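
    A minimal numerical sketch of the resetting scheme, grafted here onto a textbook Kalman filter; the model matrices, the reset value P_r, and the reset period are illustrative, not the patent's.

    ```python
    import numpy as np

    # Periodically reset the error covariance P of a recursive estimator
    # to a predetermined value P_r, so numerical stability only has to
    # hold over the short reset period T_r, not the whole mission T_o.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (illustrative)
    H = np.array([[1.0, 0.0]])               # measurement model
    Q = 1e-4 * np.eye(2)                     # process noise covariance
    R = np.array([[0.01]])                   # measurement noise covariance
    P_r = np.eye(2)                          # predetermined reset covariance
    RESET_PERIOD = 1000                      # steps per reset period T_r

    x = np.zeros(2)
    P = P_r.copy()
    rng = np.random.default_rng(1)
    for k in range(10_000):
        # Standard predict/update cycle.
        x = F @ x
        P = F @ P @ F.T + Q
        z = rng.normal()                     # stand-in measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        # Periodic reset: restart from P_r, bounding the interval over
        # which round-off error can accumulate.
        if (k + 1) % RESET_PERIOD == 0:
            P = P_r.copy()
    ```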

  7. Root cause analysis of laboratory turnaround times for patients in the emergency department.

    PubMed

    Fernandes, Christopher M B; Worster, Andrew; Hill, Stephen; McCallum, Catherine; Eva, Kevin

    2004-03-01

    Laboratory investigations are essential to patient care and are conducted routinely in emergency departments (EDs). This study reports the turnaround times at an academic, tertiary care ED, using root cause analysis to identify potential areas of improvement. Our objectives were to compare the laboratory turnaround times with established benchmarks and identify root causes for delays. Turnaround and process event times for a consecutive sample of hemoglobin and potassium measurements were recorded during an 8-day study period using synchronized time stamps. A log transformation (ln [minutes + 1]) was performed to normalize the time data, which were then compared with established benchmarks using one-sample t tests. The turnaround time for hemoglobin was significantly less than the established benchmark (n = 140, t = -5.69, p < 0.001) and that of potassium was significantly greater (n = 121, t = 12.65, p < 0.001). The hemolysis rate was 5.8%, with 0.017% of samples needing recollection. Causes of delays included order-processing time, a high proportion (43%) of tests performed on patients who had been admitted but were still in the ED waiting for a bed, and excessive laboratory process times for potassium. The turnaround time for hemoglobin (18 min) met the established benchmark, but that for potassium (49 min) did not. Root causes for delay were order-processing time, excessive queue and instrument times for potassium and volume of tests for admitted patients. Further study of these identified causes of delays is required to see whether laboratory TATs can be reduced.
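
    The analysis pipeline the abstract names, ln(minutes + 1) normalization followed by a one-sample t-test against a benchmark, is easy to reproduce; below is a sketch with invented turnaround times and an assumed 40-minute benchmark.

    ```python
    import numpy as np
    from scipy import stats

    # Invented turnaround times (minutes); the benchmark value is assumed.
    turnaround_min = np.array([35.0, 52.0, 44.0, 61.0, 38.0, 47.0, 55.0, 49.0])
    benchmark_min = 40.0

    # Log-transform as ln(minutes + 1) to normalize, then test against the
    # log-transformed benchmark with a one-sample t-test.
    log_times = np.log(turnaround_min + 1.0)
    t, p = stats.ttest_1samp(log_times, np.log(benchmark_min + 1.0))
    print(f"t = {t:.2f}, p = {p:.4f}")  # positive t => slower than benchmark
    ```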

  8. Brainstem timing: implications for cortical processing and literacy.

    PubMed

    Banai, Karen; Nicol, Trent; Zecker, Steven G; Kraus, Nina

    2005-10-26

    The search for a unique biological marker of language-based learning disabilities has so far yielded inconclusive findings. Previous studies have shown a plethora of auditory processing deficits in learning disabilities at both the perceptual and physiological levels. In this study, we investigated the association among brainstem timing, cortical processing of stimulus differences, and literacy skills. To that end, brainstem timing and cortical sensitivity to acoustic change [mismatch negativity (MMN)] were measured in a group of children with learning disabilities and normal-learning children. The learning-disabled (LD) group was further divided into two subgroups with normal and abnormal brainstem timing. MMNs, literacy, and cognitive abilities were compared among the three groups. LD individuals with abnormal brainstem timing were more likely to show reduced processing of acoustic change at the cortical level compared with both normal-learning individuals and LD individuals with normal brainstem timing. This group was also characterized by a more severe form of learning disability manifested by poorer reading, listening comprehension, and general cognitive ability. We conclude that abnormal brainstem timing in learning disabilities is related to higher incidence of reduced cortical sensitivity to acoustic change and to deficient literacy skills. These findings suggest that abnormal brainstem timing may serve as a reliable marker of a subgroup of individuals with learning disabilities. They also suggest that faulty mechanisms of neural timing at the brainstem may be the biological basis of malfunction in this group.

  9. Amplified Self-replication of DNA Origami Nanostructures through Multi-cycle Fast-annealing Process

    NASA Astrophysics Data System (ADS)

    Zhou, Feng; Zhuo, Rebecca; He, Xiaojin; Sha, Ruojie; Seeman, Nadrian; Chaikin, Paul

    We have developed a non-biological self-replication process using templated reversible association of components and irreversible linking with annealing and UV cycles. The current method requires a long annealing time, up to several days, to achieve the specific self-assembly of DNA nanostructures. In this work, we accomplished self-replication in a shorter time and with a smaller replication rate per cycle. By decreasing the ramping time, we obtained a comparable replication yield within 90 min. Systematic studies show that temperature and annealing time play essential roles in the self-replication process. In this manner, we can amplify the self-replication process by a factor of 20 by increasing the number of cycles within the same amount of time.

  10. Using a Radiofrequency Identification System for Improving the Patient Discharge Process: A Simulation Study.

    PubMed

    Shim, Sung J; Kumar, Arun; Jiao, Roger

    2016-01-01

    A hospital is considering deploying a radiofrequency identification (RFID) system and setting up a new "discharge lounge" to improve the patient discharge process. This study uses computer simulation to model and compare the current process and the new process, and it assesses the impact of the RFID system and the discharge lounge on the process in terms of resource utilization and time taken in the process. The simulation results regarding resource utilization suggest that the RFID system can slightly relieve the burden on all resources, whereas the RFID system and the discharge lounge together can significantly mitigate the nurses' tasks. The simulation results in terms of the time taken demonstrate that the RFID system can shorten patient wait times, staff busy times, and bed occupation times. The results of the study could prove helpful to others who are considering the use of an RFID system in the patient discharge process in hospitals or similar processes.
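
    The study's method is discrete-event simulation of the discharge process. A hedged sketch of that style of model using the simpy library follows; the stages, staffing, durations, and the effect attributed to the RFID system are invented for illustration.

    ```python
    import random
    import simpy

    # Invented model: patients arrive, wait for a nurse, then go through
    # paperwork and transport steps. RFID is assumed (for illustration) to
    # shorten the paperwork/locating step; toggle it to compare scenarios.
    RFID = True
    NURSES = 3

    def discharge(env, nurses, times):
        arrive = env.now
        with nurses.request() as req:
            yield req                                      # wait for a nurse
            yield env.timeout(random.expovariate(1 / (20 if RFID else 35)))
            yield env.timeout(random.expovariate(1 / 10))  # transport / bed release
        times.append(env.now - arrive)

    def arrivals(env, nurses, times):
        while True:
            yield env.timeout(random.expovariate(1 / 15))  # a patient every ~15 min
            env.process(discharge(env, nurses, times))

    random.seed(42)
    env = simpy.Environment()
    nurses = simpy.Resource(env, capacity=NURSES)
    times = []
    env.process(arrivals(env, nurses, times))
    env.run(until=8 * 60)                                  # one 8-hour shift (minutes)
    print(f"mean discharge time: {sum(times) / len(times):.1f} min "
          f"({'with' if RFID else 'without'} RFID)")
    ```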

  11. Comparative Evaluations of Four Specification Methods for Real-Time Systems

    DTIC Science & Technology

    1989-12-01

    December 1989. Comparative Evaluations of Four Specification Methods for Real-Time Systems. David P. Wood; William G. Wood. Specification and Design Methods... Methods for Real-Time Systems. Abstract: A number of methods have been proposed in the last decade for the specification of system and software requirements... and software specification for real-time systems. Our process for the identification of methods that meet the above criteria is described in greater...

  12. Software engineering aspects of real-time programming concepts

    NASA Astrophysics Data System (ADS)

    Schoitsch, Erwin

    1986-08-01

    Real-time programming is a discipline of great importance not only in process control, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, and interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other. The second part deals with structuring and modularization of technical processes to build reliable and maintainable real-time systems. Software quality and software engineering aspects are considered throughout the paper.

  13. The supplementary motor area in motor and perceptual time processing: fMRI studies.

    PubMed

    Macar, Françoise; Coull, Jennifer; Vidal, Franck

    2006-06-01

    The neural bases of timing mechanisms in the second-to-minute range are currently investigated using multidisciplinary approaches. This paper documents the involvement of the supplementary motor area (SMA) in the encoding of target durations by reporting convergent fMRI data from motor and perceptual timing tasks. Event-related fMRI was used in two temporal procedures, involving (1) the production of an accurate interval as compared to an accurate force, and (2) a dual-task of time and colour discrimination with parametric manipulation of the level of attention attributed to each parameter. The first study revealed greater activation of the SMA proper in skilful control of time compared to force. The second showed that increasing attentional allocation to time increased activity in a cortico-striatal network including the pre-SMA (in contrast with the occipital cortex for increasing attention to colour). Further, the SMA proper was sensitive to the attentional modulation cued prior to the time processing period. Taken together, these data and related literature suggest that the SMA plays a key role in time processing as part of the striato-cortical pathway previously identified by animal studies, human neuropsychology and neuroimaging.

  14. Children's Well-Being during Parents' Marital Disruption Process: A Pooled Time-Series Analysis.

    ERIC Educational Resources Information Center

    Sun, Yongmin; Li, Yuanzhang

    2002-01-01

    Examines the extent to which parents' marital disruption process affects children's academic performance and well-being both before and after parental divorce. Compared with peers in intact families, children of divorce fared less well. Discusses how family resources mediate detrimental effects over time. Similar results are noted for girls and…

  15. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
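
    The FD-OCT processing the paper offloads to the GPU is, at its core, a per-A-line FFT pipeline. Below is a hedged numpy sketch of that chain (sizes and steps are typical, not the paper's exact implementation); it also makes the paper's point concrete, since copying `spectra` to the device can cost more than this arithmetic.

    ```python
    import numpy as np

    # Stand-in spectrometer frames: one frame of A-lines (rows) by pixels.
    # In a GPU implementation the same chain runs per-thread across A-lines.
    n_lines, n_pix = 1024, 2048
    spectra = np.random.rand(n_lines, n_pix)

    background = spectra.mean(axis=0)            # fixed-pattern background
    fringes = spectra - background               # background subtraction
    window = np.hanning(n_pix)                   # apodization
    # FFT along each (here assumed pre-resampled) spectrum; keep one half.
    ascans = np.abs(np.fft.fft(fringes * window, axis=1))[:, : n_pix // 2]
    ascans_db = 20.0 * np.log10(ascans + 1e-12)  # log-scaled depth profiles
    ```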

  16. The beneficial effect of testing: an event-related potential study

    PubMed Central

    Bai, Cheng-Hua; Bridger, Emma K.; Zimmer, Hubert D.; Mecklinger, Axel

    2015-01-01

    The enhanced memory performance for items that are tested as compared to being restudied (the testing effect) is a frequently reported memory phenomenon. According to the episodic context account of the testing effect, this beneficial effect of testing is related to a process which reinstates the previously learnt episodic information. Few studies have explored the neural correlates of this effect at the time point when testing takes place, however. In this study, we utilized the ERP correlates of successful memory encoding to address this issue, hypothesizing that if the benefit of testing is due to retrieval-related processes at test then subsequent memory effects (SMEs) should resemble the ERP correlates of retrieval-based processing in their temporal and spatial characteristics. Participants were asked to learn Swahili-German word pairs before items were presented in either a testing or a restudy condition. Memory performance was assessed immediately and 1 day later with a cued recall task. Successfully recalling items at test increased the likelihood that items were remembered over time compared to items which were only restudied. An ERP subsequent memory contrast (later remembered vs. later forgotten tested items), which reflects the engagement of processes that ensure items are recallable the next day, was topographically comparable with the ERP correlate of immediate recollection (immediately remembered vs. immediately forgotten tested items). This result shows that the processes which allow items to be more memorable over time share qualitatively similar neural correlates with the processes that relate to successful retrieval at test. This finding supports the notion that testing is more beneficial than restudying on memory performance over time because of its engagement of retrieval processes, such as the re-encoding of actively retrieved memory representations. PMID:26441577

  17. Increased Risk of Revision After Anterior Cruciate Ligament Reconstruction With Soft Tissue Allografts Compared With Autografts: Graft Processing and Time Make a Difference.

    PubMed

    Maletis, Gregory B; Chen, Jason; Inacio, Maria C S; Love, Rebecca M; Funahashi, Tadashi T

    2017-07-01

    The optimal graft for anterior cruciate ligament reconstruction (ACLR) remains controversial. To compare the risk of aseptic revision between bone-patellar tendon-bone (BPTB) autografts, hamstring autografts, and soft tissue allografts. Cohort study; Level of evidence, 2. Prospectively collected ACLR cases reconstructed with BPTB autografts, hamstring autografts, and soft tissue allografts were identified using the Kaiser Permanente ACLR Registry. Aseptic revision was the endpoint. The type of graft and allograft processing method (nonprocessed, <1.8-Mrad irradiation with and without chemical processing [Allowash or AlloTrue], ≥1.8-Mrad irradiation with and without chemical processing, and chemical processing alone [BioCleanse]) were the exposures evaluated. Analyses were adjusted for age, sex, and race. Kaplan-Meier curves and Cox proportional hazards models were employed. The cohort included 14,015 cases: 8924 (63.7%) patients were male, 6397 (45.6%) were white, and 4557 (32.5%) ACLRs used BPTB autografts, 3751 (26.8%) soft tissue allografts, and 5707 (40.7%) hamstring autografts. The median age was 34.6 years for soft tissue allografts, 24.3 years for hamstring autografts, and 22.0 years for BPTB autografts. The crude nonadjusted revision rates were 85 (1.9%) in BPTB autograft cases, 132 (2.3%) in hamstring autograft cases, and 83 (2.2%) in soft tissue allograft cases. After adjusting for age, sex, and race, compared with hamstring autografts, a higher risk of revision was found with allografts with ≥1.8 Mrad without chemical processing after 2.5 years (hazard ratio [HR], 3.88; 95% CI, 1.48-10.12) and ≥1.8 Mrad with chemical processing after 1 year (HR, 3.43; 95% CI, 1.58-7.47) and with BioCleanse processed grafts at any time point (HR, 3.02; 95% CI, 1.40-6.50). Nonprocessed allografts and those irradiated with <1.8 Mrad with or without chemical processing were not found to have a different risk of revision compared with hamstring autografts. Compared with BPTB autografts, a higher risk of revision was seen with hamstring autografts (HR, 1.51; 95% CI, 1.15-1.99) and BioCleanse processed allografts (HR, 4.67; 95% CI, 2.15-10.16). Allografts irradiated with <1.8 Mrad with chemical processing (Allowash or AlloTrue) (HR, 2.19; 95% CI, 1.42-3.38) and without chemical processing (HR, 2.31; 95% CI, 1.40-3.82) had a higher risk of revision, as did allografts with ≥1.8 Mrad without chemical processing after 2 years (HR, 6.30; 95% CI, 3.18-12.48) and ≥1.8 Mrad with chemical processing (Allowash or AlloTrue) after 1 year (HR, 5.03; 95% CI, 2.30-11.00) compared with BPTB autografts. Nonprocessed allografts did not have a higher risk of revision compared with autografts. With the numbers available, direct comparisons between the specific allograft processing methods were not possible. When soft tissue allografts are used for ACLR, processing and time from surgery affect the risk of revision. Tissue processing has a significant effect on the risk of revision surgery, which is most profound with more highly processed grafts and increases with increasing follow-up time. Surgeons and patients need to be aware of the increased risks of revision with the various soft tissue allografts used for ACLR.

  18. Time dependent analysis of assay comparability: a novel approach to understand intra- and inter-site variability over time

    NASA Astrophysics Data System (ADS)

    Winiwarter, Susanne; Middleton, Brian; Jones, Barry; Courtney, Paul; Lindmark, Bo; Page, Ken M.; Clark, Alan; Landqvist, Claire

    2015-09-01

    We demonstrate here a novel use of statistical tools to study intra- and inter-site assay variability of five early drug metabolism and pharmacokinetics in vitro assays over time. Firstly, a tool for process control is presented. It shows the overall assay variability but allows also the following of changes due to assay adjustments and can additionally highlight other, potentially unexpected variations. Secondly, we define the minimum discriminatory difference/ratio to support projects to understand how experimental values measured at different sites at a given time can be compared. Such discriminatory values are calculated for 3 month periods and followed over time for each assay. Again assay modifications, especially assay harmonization efforts, can be noted. Both the process control tool and the variability estimates are based on the results of control compounds tested every time an assay is run. Variability estimates for a limited set of project compounds were computed as well and found to be comparable. This analysis reinforces the need to consider assay variability in decision making, compound ranking and in silico modeling.
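
    A hedged sketch of the two kinds of tool described, using one standard construction for each (control limits from control-compound results, and a discriminatory ratio for assays compared on a log scale); the paper's exact definitions may differ.

    ```python
    import numpy as np

    # Invented control-compound results from repeated runs of one assay.
    control_results = np.array([2.1, 2.4, 1.9, 2.2, 2.6, 2.0, 2.3, 2.5])

    log_x = np.log(control_results)
    mu, sd = log_x.mean(), log_x.std(ddof=1)

    # (i) Control-chart limits: back-transformed 3-sigma limits on the log
    # scale, one common way to track assay variability over time.
    lcl, ucl = np.exp(mu - 3 * sd), np.exp(mu + 3 * sd)

    # (ii) Minimum discriminatory ratio: two independent measurements are
    # distinguishable at ~95% confidence only if their ratio exceeds
    # exp(1.96 * sqrt(2) * sd) (difference of two normals on the log scale).
    mdr = np.exp(1.96 * np.sqrt(2.0) * sd)
    print(f"limits: [{lcl:.2f}, {ucl:.2f}], min discriminatory ratio: {mdr:.2f}")
    ```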

  19. Dispersibility of lactose fines as compared to API in dry powders for inhalation.

    PubMed

    Thalberg, Kyrre; Åslund, Simon; Skogevall, Marcus; Andersson, Patrik

    2016-05-17

    This work investigates the dispersion performance of fine lactose particles as a function of processing time, and compares it to the API, using Beclomethasone Dipropionate (BDP) as model API. The total load of fine particles is kept constant in the formulations while the proportions of API and lactose fines are varied. Fine particle assessment demonstrates that the lactose fines have higher dispersibility than the API. For standard formulations, processing time has a limited effect on the Fine Particle Fraction (FPF). For formulations containing magnesium stearate (MgSt), FPF of BDP is heavily influenced by processing time, with an initial increase, followed by a decrease at longer mixing times. An equation modeling the observed behavior is presented. Surprisingly, the dispersibility of the lactose fines present in the same formulation remains unaffected by mixing time. Magnesium analysis demonstrates that MgSt is transferred to the fine particles during the mixing process, thus lubricating both BDP and lactose fines, which leads to an increased FPF. Dry particle sizing of the formulations reveals a loss of fine particles at longer mixing times. Incorporation of fine particles into the carrier surfaces is believed to be behind this, and is hence a mechanism of importance as regards the dispersion performance of dry powders for inhalation. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Effect of Temperature, Time, and Material Thickness on the Dehydration Process of Tomato

    PubMed Central

    Correia, A. F. K.; Loro, A. C.; Zanatta, S.; Spoto, M. H. F.; Vieira, T. M. F. S.

    2015-01-01

    This study aimed to evaluate the effects of temperature, time, and material thickness on tomato fruits during the adiabatic drying process. Dehydration, a simple and inexpensive process compared to other conservation methods, is widely used in the food industry in order to ensure a long shelf life for the product due to the low water activity. The goal was to obtain the best processing conditions to avoid losses and maintain product quality. Factorial design and surface response methodology were applied to fit predictive mathematical models. In the dehydration of tomatoes through the adiabatic process, temperature, time, and sample thickness, which greatly contribute to the physicochemical and sensory characteristics of the final product, were evaluated. The optimum drying conditions were 60°C with the lowest thickness level and the shortest time. PMID:26904666

  1. Feasibility of using continuous chromatography in downstream processing: Comparison of costs and product quality for a hybrid process vs. a conventional batch process.

    PubMed

    Ötes, Ozan; Flato, Hendrik; Winderl, Johannes; Hubbuch, Jürgen; Capito, Florian

    2017-10-10

    The protein A capture step is the main cost-driver in downstream processing, with high attrition costs, especially when protein A resin is not used to the end of its lifetime. Here we describe a feasibility study, transferring a batch downstream process to a hybrid process, aimed at replacing batch protein A capture chromatography with a continuous capture step, while leaving the polishing steps unchanged to minimize the required process adaptations compared to a batch process. 35 g of antibody were purified using the hybrid approach, resulting in comparable product quality and step yield compared to the batch process. Productivity for the protein A step could be increased by up to 420%, reducing buffer amounts by 30-40% and showing robustness for at least 48 h of continuous run time. Additionally, to enable its potential application in a clinical trial manufacturing environment, the cost of goods was compared for the protein A step between the hybrid process and the batch process, showing a 300% cost reduction, depending on processed volumes and batch cycles. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Time-of-Day and Appendicitis: Impact on Management and Outcomes

    PubMed Central

    Drake, Frederick Thurston; Mottey, Neli E.; Castelli, Anthony A.; Florence, Michael G.; Johnson, Morris G.; Steele, Scott R.; Thirlby, Richard C.; Flum, David R.

    2017-01-01

    Background Observational research has shown that delayed presentation is associated with perforation in appendicitis. Many factors that impact the ability to present for evaluation are influenced by time-of-day; for example, child care, work, transportation, and primary care office hours. Our objective was to evaluate for an association between care processes or clinical outcomes and presentation time. Methods Prospective cohort of 7,548 adults undergoing appendectomy at 56 hospitals across Washington State. Relative to presentation time, patient characteristics, time to surgery, imaging use, negative appendectomy (NA), and perforation were compared using univariate and multivariate methodologies. Results Overall, 63% of patients presented between noon and midnight. More men presented in the morning; however, race, insurance status, co-morbid conditions, and WBC count did not differ by presentation time. Daytime presenters (6AM-6PM) were less likely to undergo imaging (94% vs. 98% p<0.05) and had a nearly 50% decrease in median pre-operative time (6.0h vs. 8.7h p<0.001). Perforation significantly differed by time-of-day. Patients who presented during the workday (9AM-3PM) had a 30% increase in odds of perforation compared to early morning/late night presenters (adjusted OR 1.29, 95%CI 1.05–1.59). NA did not vary by time-of-day. Conclusions Most patients with appendicitis presented in afternoon/evening. Socioeconomic characteristics did not vary with time-of-presentation. Patients who presented during the workday more often had perforated appendicitis compared to those who presented early morning or late night. Processes of care differed (both time-to-surgery and imaging use). Time-of-day is associated with patient outcomes, process of care, and decisions to present for evaluation; this has implications for surgical workforce planning and quality improvement efforts. PMID:27592212

  3. Comparative study on the removal of COD from POME by electrocoagulation and electro-Fenton methods: Process optimization

    NASA Astrophysics Data System (ADS)

    Chairunnisak, A.; Arifin, B.; Sofyan, H.; Lubis, M. R.; Darmadi

    2018-03-01

    This research focuses on reducing the Chemical Oxygen Demand (COD) of palm oil mill effluent by electrocoagulation and electro-Fenton methods. Initially, the aqueous solution precipitates under acid conditions at a pH of about two. This study focuses on the degradation of palm oil mill effluent by Fe electrodes in a simple batch reactor. The work is conducted using different parameters such as voltage, NaCl electrolyte concentration, volume of H2O2 and operation time. The resulting data were processed using response surface methodology coupled with a Box-Behnken design. The electrocoagulation method results in an optimum COD reduction of 94.53% at an operating time of 39.28 minutes and 20 volts, without electrolyte. For the electro-Fenton process, experiments show that a voltage of 15.78 volts, an electrolyte concentration of 0.06 M and an H2O2 volume of 14.79 ml with a time of 35.92 minutes yield 99.56% degradation. It was concluded that the electro-Fenton process was more effective at degrading the COD of palm oil mill effluent than the electrocoagulation process.
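
    A hedged sketch of the response-surface step common to both optimizations: fit a second-order model to coded factors and solve for the stationary point. The design points and responses below are invented, not the study's data.

    ```python
    import numpy as np

    # Invented coded design points (x1 = voltage, x2 = time, both in [-1, 1])
    # and COD-removal responses; a real Box-Behnken design has three factors.
    X_raw = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                      [0, 0], [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], float)
    y = np.array([70.1, 85.3, 78.4, 90.2, 94.0, 93.5, 82.0, 92.1, 80.5, 88.7])

    # Second-order model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    x1, x2 = X_raw[:, 0], X_raw[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Stationary point of the fitted quadratic (candidate optimum):
    # solve grad = 0, i.e. [[2*b11, b12], [b12, 2*b22]] x = -[b1, b2].
    b = beta[1:3]
    B = np.array([[2 * beta[4], beta[3]], [beta[3], 2 * beta[5]]])
    x_opt = np.linalg.solve(B, -b)
    print("coded optimum:", x_opt)
    ```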

  4. Time-Efficiency Analysis Comparing Digital and Conventional Workflows for Implant Crowns: A Prospective Clinical Crossover Trial.

    PubMed

    Joda, Tim; Brägger, Urs

    2015-01-01

    To compare time-efficiency in the production of implant crowns using a digital workflow versus the conventional pathway. This prospective clinical study used a crossover design that included 20 study participants receiving single-tooth replacements in posterior sites. Each patient received a customized titanium abutment plus a computer-aided design/computer-assisted manufacture (CAD/CAM) zirconia suprastructure (for those in the test group, using digital workflow) and a standardized titanium abutment plus a porcelain-fused-to-metal crown (for those in the control group, using a conventional pathway). The start of the implant prosthetic treatment was established as the baseline. Time-efficiency analysis was defined as the primary outcome, and was measured for every single clinical and laboratory work step in minutes. Statistical analysis was performed with the Wilcoxon rank sum test. All crowns could be provided within two clinical appointments, independent of the manufacturing process. The mean total production time, as the sum of clinical plus laboratory work steps, was significantly different. The mean ± standard deviation (SD) time was 185.4 ± 17.9 minutes for the digital workflow process and 223.0 ± 26.2 minutes for the conventional pathway (P = .0001). Therefore, digital processing for overall treatment was 16% faster. Detailed analysis for the clinical treatment revealed a significantly reduced mean ± SD chair time of 27.3 ± 3.4 minutes for the test group compared with 33.2 ± 4.9 minutes for the control group (P = .0001). Similar results were found for the mean laboratory work time, with a significant decrease of 158.1 ± 17.2 minutes for the test group vs 189.8 ± 25.3 minutes for the control group (P = .0001). Only a few studies have investigated efficiency parameters of digital workflows compared with conventional pathways in implant dental medicine. This investigation shows that the digital workflow seems to be more time-efficient than the established conventional production pathway for fixed implant-supported crowns. Both clinical chair time and laboratory manufacturing steps could be effectively shortened with the digital process of intraoral scanning plus CAD/CAM technology.

  5. A Self-Aligned a-IGZO Thin-Film Transistor Using a New Two-Photo-Mask Process with a Continuous Etching Scheme.

    PubMed

    Fan, Ching-Lin; Shang, Ming-Chi; Li, Bo-Jyun; Lin, Yu-Zuo; Wang, Shea-Jue; Lee, Win-Der

    2014-08-11

    Minimizing the parasitic capacitance and the number of photo-masks can improve operational speed and reduce fabrication costs. Therefore, in this study, a new two-photo-mask process is proposed that exhibits a self-aligned structure without an etching-stop layer. Combining the backside-ultraviolet (BUV) exposure and backside-lift-off (BLO) schemes can not only prevent damage when etching the source/drain (S/D) electrodes but also reduce the number of photo-masks required during fabrication and minimize the parasitic capacitance by decreasing the gate overlap length at the same time. Compared with traditional fabrication processes, the proposed process yields thin-film transistors (TFTs) that exhibit comparable field-effect mobility (9.5 cm²/V·s), threshold voltage (3.39 V), and subthreshold swing (0.3 V/decade). The delay time of an inverter fabricated using the proposed process was considerably decreased.

  6. A Self-Aligned a-IGZO Thin-Film Transistor Using a New Two-Photo-Mask Process with a Continuous Etching Scheme

    PubMed Central

    Fan, Ching-Lin; Shang, Ming-Chi; Li, Bo-Jyun; Lin, Yu-Zuo; Wang, Shea-Jue; Lee, Win-Der

    2014-01-01

    Minimizing the parasitic capacitance and the number of photo-masks can improve operational speed and reduce fabrication costs. Therefore, in this study, a new two-photo-mask process is proposed that exhibits a self-aligned structure without an etching-stop layer. Combining the backside-ultraviolet (BUV) exposure and backside-lift-off (BLO) schemes can not only prevent damage when etching the source/drain (S/D) electrodes but also reduce the number of photo-masks required during fabrication and minimize the parasitic capacitance by decreasing the gate overlap length at the same time. Compared with traditional fabrication processes, the proposed process yields thin-film transistors (TFTs) that exhibit comparable field-effect mobility (9.5 cm²/V·s), threshold voltage (3.39 V), and subthreshold swing (0.3 V/decade). The delay time of an inverter fabricated using the proposed process was considerably decreased. PMID:28788159

  7. Direct analysis in real time mass spectrometry, a process analytical technology tool for real-time process monitoring in botanical drug manufacturing.

    PubMed

    Wang, Lu; Zeng, Shanshan; Chen, Teng; Qu, Haibin

    2014-03-01

    A promising process analytical technology (PAT) tool has been introduced for batch process monitoring. Direct analysis in real time mass spectrometry (DART-MS), a means of rapid fingerprint analysis, was applied to a percolation process with multi-constituent substances for an anti-cancer botanical preparation. Fifteen batches were carried out, including ten normal operations and five abnormal batches with artificial variations. The obtained multivariate data were analyzed by a multi-way partial least squares (MPLS) model. Control trajectories were derived from eight normal batches, and the qualification was tested by R² and Q². Accuracy and diagnosis capability of the batch model were then validated by the remaining batches. Assisted by high performance liquid chromatography (HPLC) determination, process faults were explained by the corresponding variable contributions. Furthermore, a batch level model was developed to compare and assess the model performance. The present study has demonstrated that DART-MS is very promising for process monitoring in botanical manufacturing. Compared with general PAT tools, DART-MS offers a particular account of effective compositions and can potentially be used to improve batch quality and process consistency of samples in complex matrices. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. A dynamic scheduling algorithm for single-arm two-cluster tools with flexible processing times

    NASA Astrophysics Data System (ADS)

    Li, Xin; Fung, Richard Y. K.

    2018-02-01

    This article presents a dynamic algorithm for job scheduling in two-cluster tools producing multi-type wafers with flexible processing times. Flexible processing times mean that the actual times for processing wafers should be within given time intervals. The objective of the work is to minimize the completion time of the newly inserted wafer. To deal with this issue, a two-cluster tool is decomposed into three reduced single-cluster tools (RCTs) in series, based on a decomposition approach proposed in this article. For each single-cluster tool, a dynamic scheduling algorithm based on temporal constraints is developed to schedule the newly inserted wafer. Three experiments have been carried out to test the proposed dynamic scheduling algorithm, comparing its results with those of the 'earliest starting time' (EST) heuristic adopted in the previous literature. The results show that the dynamic algorithm proposed in this article is effective and practical.

  9. Time-dependent probability density functions and information geometry in stochastic logistic and Gompertz models

    NASA Astrophysics Data System (ADS)

    Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin

    2017-12-01

    A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of the population of a very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of the growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time-evolution of the two models become more similar when measured in units of the information length and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
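
    The information length described here is defined in the authors' related work roughly as follows; a LaTeX sketch with notation assumed rather than quoted from this article.

    ```latex
    % Information length: sum of infinitesimal statistical distances between
    % PDFs at neighbouring times (notation assumed, following Kim et al.):
    \mathcal{E}(t) \;=\; \int dx\,
       \frac{1}{p(x,t)}\left(\frac{\partial p(x,t)}{\partial t}\right)^{\!2},
    \qquad
    \mathcal{L}(t) \;=\; \int_{0}^{t} \sqrt{\mathcal{E}(t')}\;dt' .
    % L(t) quantifies the total number of statistically distinguishable
    % states the system passes through up to time t.
    ```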

  10. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The smartphone application developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU), in addition to the already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable with that of a conventional laser displacement sensor.
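
    A hedged sketch of the measurement idea such an app implements: threshold a color-patterned target, track its centroid, and scale pixel motion to displacement, here with OpenCV. The HSV bounds and the mm-per-pixel calibration are hypothetical, not RINO's values.

    ```python
    import cv2
    import numpy as np

    MM_PER_PIXEL = 0.25                # hypothetical calibration factor
    LOWER = np.array([40, 80, 80])     # green-ish target, HSV lower bound
    UPPER = np.array([80, 255, 255])   # HSV upper bound

    cap = cv2.VideoCapture(0)          # camera stream
    ref_x = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)   # cheap colour/crop filter
        m = cv2.moments(mask)
        if m["m00"] > 0:
            cx = m["m10"] / m["m00"]            # target centroid (pixels)
            if ref_x is None:
                ref_x = cx                      # first frame = reference
            disp_mm = (cx - ref_x) * MM_PER_PIXEL
            print(f"displacement: {disp_mm:+.2f} mm")
    cap.release()
    ```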

  11. Time course of cognitive recovery after propofol anaesthesia: a level of processing approach.

    PubMed

    N'Kaoua, Bernard; Véron, Anne-Lise H; Lespinet, Véronique C; Claverie, Bernard; Sztark, François

    2002-09-01

    The aim of this study was to investigate the time course of recovery of verbal memory after general anaesthesia, as a function of the level (shallow or deep) of processing induced at the time of encoding. Thirty-one patients anaesthetized with propofol and alfentanil were compared with 28 control patients receiving only alfentanil. Memory functions were assessed the day before and 1, 6 and 24 hr after operation. Results show that for the anaesthetized group, shallow processing was impaired for 6 hr after surgery whereas the deeper processing was not recovered even at 24 hr. In addition, no specific effect of age was found.

  12. How visual timing and form information affect speech and non-speech processing.

    PubMed

    Kim, Jeesun; Davis, Chris

    2014-10-01

    Auditory speech processing is facilitated when the talker's face/head movements are seen. This effect is typically explained in terms of visual speech providing form and/or timing information. We determined the effect of both types of information on a speech/non-speech task (non-speech stimuli were spectrally rotated speech). All stimuli were presented paired with the talker's static or moving face. Two types of moving face stimuli were used: full-face versions (both spoken form and timing information available) and modified face versions (only timing information provided by peri-oral motion available). The results showed that the peri-oral timing information facilitated response time for speech and non-speech stimuli compared to a static face. An additional facilitatory effect was found for full-face versions compared to the timing condition; this effect only occurred for speech stimuli. We propose the timing effect was due to cross-modal phase resetting; the form effect to cross-modal priming. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.

    NASA Technical Reports Server (NTRS)

    Larsen, Curtis E.

    1988-01-01

    A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E(R^m) is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and superposing these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E(R^m) for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
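
    A hedged sketch of the idea: simulate an autoregressive process, keep only its extrema, and estimate the stress-range moment E(R^m) from their successive differences. The AR(2) coefficients are illustrative (the thesis fits the model to the target psd), and simple min-to-max ranges stand in for rainflow counting.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    phi1, phi2 = 1.2, -0.5            # illustrative stable AR(2) coefficients
    n = 50_000
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(2, n):
        x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + eps[t]

    # Keep only the local extrema of the series.
    interior = x[1:-1]
    is_max = (interior > x[:-2]) & (interior > x[2:])
    is_min = (interior < x[:-2]) & (interior < x[2:])
    extrema = interior[is_max | is_min]

    # Successive extrema differences give simple stress ranges R; the
    # moment E[R^m] is what the fatigue analysis actually needs.
    ranges = np.abs(np.diff(extrema))
    m = 3                              # typical S-N curve exponent
    print(f"E[R^{m}] ~ {np.mean(ranges ** m):.2f}")
    ```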

  14. Autologous Fat Grafting to the Breast Using REVOLVE System to Reduce Clinical Costs.

    PubMed

    Brzezienski, Mark A; Jarrell, John A

    2016-09-01

    With the increasing popularity of fat grafting over the past decade, the techniques for harvest, processing and preparation, and transfer of the fat cells have evolved to improve efficiency and consistency. The REVOLVE System is a fat processing device used in autologous fat grafting which eliminates much of the specialized equipment as well as the labor intensive and time consuming efforts of the original Coleman technique of fat processing. This retrospective study evaluates the economics of fat grafting, comparing traditional Coleman processing to the REVOLVE System. From June 2013 through December 2013, 88 fat grafting cases by a single surgeon were reviewed. Timed procedures using either the REVOLVE System or Coleman technique were extracted from the group. Data including fat grafting procedure time, harvested volume, harvest and recipient sites, and concurrent procedures were gathered. Cost and utilization assessments were performed comparing the economics between the groups using standard values of operating room costs provided by the study hospital. Thirty-seven patients with timed procedures were identified, 13 of whom were Coleman technique patients and 24 of whom were REVOLVE System patients. The average rate of fat transfer was 1.77 mL/minute for the Coleman technique and 4.69 mL/minute for the REVOLVE System, a statistically significant difference (P < 0.0001) between the 2 groups. Cost analysis comparing the REVOLVE System and Coleman techniques demonstrates a dramatic divergence in the price per mL of transferred fat at 75 mL when using the previously calculated rates for each group. This single surgeon's experience with the REVOLVE System for fat processing establishes economic support for its use in specific high-volume fat grafting cases. Cost analysis comparing the REVOLVE System and Coleman techniques suggests that in cases of planned fat transfer of 75 mL or more, using the REVOLVE System for fat processing is more economically beneficial. This study may serve as a guide to plastic surgeons in deciding which cases might be appropriate for the use of the REVOLVE System and is the first report comparing the economics of fat grafting with the traditional Coleman technique and the REVOLVE System.

  15. The effects of time pressure on chess skill: an investigation into fast and slow processes underlying expert performance.

    PubMed

    van Harreveld, Frenk; Wagenmakers, Eric-Jan; van der Maas, Han L J

    2007-09-01

    The ability to play chess is generally assumed to depend on two types of processes: slow processes such as search, and fast processes such as pattern recognition. It has been argued that an increase in time pressure during a game selectively hinders the ability to engage in slow processes. Here we study the effect of time pressure on expert chess performance in order to test the hypothesis that compared to weak players, strong players depend relatively heavily on fast processes. In the first study we examine the performance of players of various strengths at an online chess server, for games played under different time controls. In a second study we examine the effect of time controls on performance in world championship matches. Both studies consistently show that skill differences between players become less predictive of the game outcome as the time controls are tightened. This result indicates that slow processes are at least as important for strong players as they are for weak players. Our findings pose a challenge for current theorizing in the field of expertise and chess.

  16. Standardization of pitch-range settings in voice acoustic analysis.

    PubMed

    Vogel, Adam P; Maruff, Paul; Snyder, Peter J; Mundt, James C

    2009-05-01

    Voice acoustic analysis is typically a labor-intensive, time-consuming process that requires the application of idiosyncratic parameters tailored to individual aspects of the speech signal. Such processes limit the efficiency and utility of voice analysis in clinical practice as well as in applied research and development. In the present study, we analyzed 1,120 voice files, using standard techniques (case-by-case hand analysis), taking roughly 10 work weeks of personnel time to complete. The results were compared with the analytic output of several automated analysis scripts that made use of preset pitch-range parameters. After pitch windows were selected to appropriately account for sex differences, the automated analysis scripts reduced processing time of the 1,120 speech samples to less than 2.5 h and produced results comparable to those obtained with hand analysis. However, caution should be exercised when applying the suggested preset values to pathological voice populations.
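
    A hedged sketch of the batch approach with sex-specific preset pitch windows, here using the Praat-based parselmouth library (a tooling assumption; the study's own scripts are not named), with window values that are common defaults rather than the paper's.

    ```python
    import numpy as np
    import parselmouth  # Python interface to Praat

    # Preset analysis windows replace case-by-case hand tuning; these Hz
    # bounds are common defaults, not the study's values.
    PITCH_WINDOWS = {"male": (75.0, 300.0), "female": (100.0, 500.0)}

    def mean_f0(wav_path: str, sex: str) -> float:
        floor, ceiling = PITCH_WINDOWS[sex]
        snd = parselmouth.Sound(wav_path)
        pitch = snd.to_pitch(pitch_floor=floor, pitch_ceiling=ceiling)
        f0 = pitch.selected_array["frequency"]
        return float(np.mean(f0[f0 > 0]))  # ignore unvoiced frames (F0 == 0)

    # e.g. mean_f0("speaker_014.wav", "female"); looping this over 1,120
    # files takes hours rather than weeks of hand analysis.
    ```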

  17. Considering Time in Orthophotography Production: from a General Workflow to a Shortened Workflow for a Faster Disaster Response

    NASA Astrophysics Data System (ADS)

    Lucas, G.

    2015-08-01

    This article deals with production time for orthophoto imagery acquired with a medium-size digital frame camera. The workflow examination follows two main parts: data acquisition and post-processing. The objectives of the research are fourfold: 1/ gathering time references for the most important steps of orthophoto production (it turned out that the literature is missing on this topic); these figures are used later for total production time estimation; 2/ identifying levers for reducing orthophoto production time; 3/ building a simplified production workflow for emergency response, less exigent with accuracy and faster, and comparing it to a classical workflow; 4/ providing methodical elements for the estimation of production time for a custom project. In the data acquisition part, a comprehensive review lists and describes all the factors that may affect acquisition efficiency. Using a simulation with different variables (average line length, time of the turns, flight speed), their effect on acquisition efficiency is quantitatively examined. Regarding post-processing, the time reference figures were collected from the processing of a 1000-frame case study with 15 cm GSD covering a rectangular area of 447 km2; the time required to achieve each step of the production is written down. When several technical options are possible, each one is tested and its time documented, so that all alternatives are available. Based on a technical choice within the workflow and using the compiled time references of the elementary steps, a total time is calculated for the post-processing of the 1000 frames. Two scenarios are compared as regards time and accuracy. The first follows the "normal" practices, comprising triangulation, orthorectification and advanced mosaicking methods (feature detection, seam line editing and seam applicator); the second is simplified and compromises on positional accuracy (using direct geo-referencing) and on seamline preparation in order to achieve orthophoto production faster. The shortened workflow reduces the production time by more than a factor of three, whereas the positional error increases from 1 GSD to 1.5 GSD. The examination of time allocation through the production process shows that the post-processing phase offers the most room for saving time.
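
    The acquisition-efficiency simulation mentioned above reduces to simple arithmetic on line time versus turn time; a sketch with invented values follows.

    ```python
    # Share of flight time spent actually imaging for a block of parallel
    # flight lines. All values are illustrative, not the article's.
    def acquisition_efficiency(line_length_km: float,
                               turn_time_s: float,
                               speed_kmh: float,
                               n_lines: int) -> float:
        line_time_s = line_length_km / speed_kmh * 3600.0
        imaging = n_lines * line_time_s
        turning = (n_lines - 1) * turn_time_s
        return imaging / (imaging + turning)

    # Longer lines amortize the turns: ~0.51 vs ~0.81 here.
    print(acquisition_efficiency(5.0, 120.0, 150.0, 20))
    print(acquisition_efficiency(20.0, 120.0, 150.0, 20))
    ```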

  18. Dual vs. single computer monitor in a Canadian hospital Archiving Department: a study of efficiency and satisfaction.

    PubMed

    Poder, Thomas G; Godbout, Sylvie T; Bellemare, Christian

    This paper describes a comparative study of clinical coding by Archivists (also known as Clinical Coders in some other countries) using single and dual computer monitors. In the present context, processing a record corresponds to checking the available information; searching for the missing physician information; and finally, performing clinical coding. We collected data for each Archivist during her use of the single monitor for 40 hours and during her use of the dual monitor for 20 hours. During the experimental periods, Archivists did not perform other related duties, so we were able to measure the real-time processing of records. To control for the type of records and their impact on the process time required, we categorised the cases as major or minor, based on whether acute care or day surgery was involved. Overall results show that 1,234 records were processed using a single monitor and 647 records using a dual monitor. The time required to process a record was significantly higher (p = .071) with a single monitor compared to a dual monitor (19.83 vs. 18.73 minutes). However, the percentage of major cases was significantly higher (p = .000) in the single monitor group compared to the dual monitor group (78% vs. 69%). As a consequence, we adjusted our results, which reduced the difference in time required to process a record between the two systems from 1.1 to 0.61 minutes. Thus, the net real-time difference was only 37 seconds in favour of the dual monitor system. Extrapolated over a 5-year period, this would represent a time savings of 3.1% and generate a net cost savings of $7,729 CAD (Canadian dollars) for each workstation that devoted 35 hours per week to the processing of records. Finally, satisfaction questionnaire responses indicated a high level of satisfaction and support for the dual-monitor system. The implementation of a dual-monitor system in a hospital archiving department is an efficient option in the context of scarce human resources and has the strong support of Archivists.

  19. Performance of high intensity fed-batch mammalian cell cultures in disposable bioreactor systems.

    PubMed

    Smelko, John Paul; Wiltberger, Kelly Rae; Hickman, Eric Francis; Morris, Beverly Janey; Blackburn, Tobias James; Ryll, Thomas

    2011-01-01

    The adoption of disposable bioreactor technology as an alternative to traditional nondisposable technology is gaining momentum in the biotechnology industry. The ability of current disposable bioreactor systems to sustain high intensity fed-batch mammalian cell culture processes needs to be explored. In this study, an assessment was performed comparing single-use bioreactor (SUB) systems of 50-, 250-, and 1,000-L operating scales with traditional stainless steel (SS) and glass vessels using four distinct mammalian cell culture processes. This comparison focuses on expansion and production stage performance. The SUB performance was evaluated based on three main areas: operability, process scalability, and process performance. The process performance and operability aspects were assessed over time and product quality performance was compared at the day of harvest. Expansion stage results showed that disposable bioreactors mirror traditional bioreactors in terms of cellular growth and metabolism. Set-up and disposal times were dramatically reduced using the SUB systems when compared with traditional systems. Production stage runs for both Chinese hamster ovary and NS0 cell lines in the SUB system were able to model SS bioreactor runs at 100-, 200-, 2,000-, and 15,000-L scales. A single 1,000-L SUB run applying a high intensity fed-batch process was able to generate 7.5 kg of antibody with comparable product quality. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  20. Comparison of Conventional and Microwave Treatment on Soymilk for Inactivation of Trypsin Inhibitors and In Vitro Protein Digestibility

    PubMed Central

    Vagadia, Brinda Harish; Raghavan, Vijaya

    2018-01-01

    Soymilk is lower in calories than cow's milk, since it is derived from a plant source (no cholesterol), and it is an excellent source of protein. Despite these beneficial factors, soymilk is considered one of the most controversial foods in the world. It contains serine protease inhibitors, which lower its nutritional value and digestibility. Processing techniques that eliminate trypsin inhibitors and lipoxygenase while requiring shorter processing times and lower production costs are needed for the large-scale manufacturing of soymilk. In this study, suitable time and temperature conditions for microwave processing were optimized to obtain soymilk with maximum digestibility and inactivation of trypsin inhibitors, in comparison to conventional thermal treatment. Microwave processing conditions at a frequency of 2.45 GHz and temperatures of 70 °C, 85 °C and 100 °C for 2, 5 and 8 min were investigated and compared to conventional thermal treatments at the same temperatures for 10, 20 and 30 min. Response surface methodology was used to design and optimize the experimental conditions. Thermal processing increased digestibility by 7% (microwave) and 11% (conventional) compared to the control, while trypsin inhibitor activity was reduced to 1% by microwave processing and 3% by conventional thermal treatment, compared to 10% in raw soybean. PMID:29316679

  1. Low-SWaP coincidence processing for Geiger-mode LIDAR video

    NASA Astrophysics Data System (ADS)

    Schultz, Steven E.; Cervino, Noel P.; Kurtz, Zachary D.; Brown, Myron Z.

    2015-05-01

    Photon-counting Geiger-mode lidar detector arrays provide a promising approach for producing three-dimensional (3D) video at full motion video (FMV) data rates, resolution, and image size from long ranges. However, coincidence processing required to filter raw photon counts is computationally expensive, generally requiring significant size, weight, and power (SWaP) and also time. In this paper, we describe a laboratory test-bed developed to assess the feasibility of low-SWaP, real-time processing for 3D FMV based on Geiger-mode lidar. First, we examine a design based on field programmable gate arrays (FPGA) and demonstrate proof-of-concept results. Then we examine a design based on a first-of-its-kind embedded graphical processing unit (GPU) and compare performance with the FPGA. Results indicate feasibility of real-time Geiger-mode lidar processing for 3D FMV and also suggest utility for real-time onboard processing for mapping lidar systems.
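
    Coincidence processing of this kind is, at its core, a spatiotemporal voting scheme: raw photon detections are binned into voxels, and only well-populated voxels are kept as true surface returns. A minimal sketch under simplified assumptions (synthetic flat target plus uniform dark counts; bin sizes and the threshold are illustrative, not the paper's parameters):

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulated photon events (x, y, range), metres: a flat surface at
        # 50 m plus uniform dark counts across a 100 m range gate.
        signal = np.column_stack([
            rng.uniform(0, 10, 2000),
            rng.uniform(0, 10, 2000),
            rng.normal(50.0, 0.05, 2000),
        ])
        noise = np.column_stack([
            rng.uniform(0, 10, 8000),
            rng.uniform(0, 10, 8000),
            rng.uniform(0, 100, 8000),
        ])
        events = np.vstack([signal, noise])

        # Coincidence filter: keep only voxels where several pulses agree.
        counts, _ = np.histogramdd(events, bins=(20, 20, 200),
                                   range=((0, 10), (0, 10), (0, 100)))
        THRESHOLD = 3                       # minimum photons per voxel
        kept = counts >= THRESHOLD
        print(f"voxels kept : {int(kept.sum())} of {counts.size}")
        print(f"photons kept: {int(counts[kept].sum())} of {len(events)}")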

  2. Traditional Chinese medicine on the effects of low-intensity laser irradiation on cells

    NASA Astrophysics Data System (ADS)

    Liu, Timon C.; Duan, Rui; Li, Yan; Cai, Xiongwei

    2002-04-01

    In a previous paper, process-specific times (PSTs) were defined using molecular reaction dynamics and the time quantum theory established by TCY Liu et al., and the changes of the PSTs of two weakly nonlinearly coupled bio-processes were shown to be parallel, which is called the time parallel principle (TPP). The PST of a physiological process (PP) is called physiological time (PT). After comparing the PTs of two PPs with their Yin-Yang properties in traditional Chinese medicine (TCM), the PST model of Yin and Yang (YPTM) was put forward: of two related processes, the process with the smaller PST is Yin, and the other is Yang. The Yin-Yang parallel principle (YPP), the fundamental principle of TCM, was put forward in terms of the YPTM and the TPP. In this paper, we apply it to study the TCM interpretation of the effects of low-intensity laser irradiation on cells, and successfully explain the observed phenomena.

  3. Consolidation of lunar regolith: Microwave versus direct solar heating

    NASA Technical Reports Server (NTRS)

    Kunitzer, J.; Strenski, D. G.; Yankee, S. J.; Pletka, B. J.

    1991-01-01

    The production of construction materials on the lunar surface will require an appropriate fabrication technique. Two processing methods considered suitable for producing dense, consolidated products such as bricks are direct solar heating and microwave heating. An analysis was performed to compare the two processes in terms of the amount of power and time required to fabricate bricks of various sizes. The regolith was considered to be a mare basalt with an overall density of 60 pct. of theoretical. Densification was assumed to take place by vitrification, since this process requires moderate amounts of energy and time while still producing dense products. Microwave heating was shown to be significantly faster than solar furnace heating for rapid production of realistic-size bricks.

  4. Innovative methods for calculation of freeway travel time using limited data : final report.

    DOT National Transportation Integrated Search

    2008-01-01

    Description: Travel time estimates created by processing simulated freeway loop detector data using the proposed method were compared with travel times reported by the VISSIM model. An improved methodology was proposed to estimate freeway corrido...

  5. Applying the relaxation model of interfacial heat transfer to calculate the liquid outflow with supercritical initial parameters

    NASA Astrophysics Data System (ADS)

    Alekseev, M. V.; Vozhakov, I. S.; Lezhnin, S. I.; Pribaturin, N. A.

    2017-09-01

    A comparative numerical simulation of supercritical fluid outflow has been performed using thermodynamic-equilibrium and non-equilibrium relaxation models of the phase transition for different relaxation times. The model with a fixed relaxation time, based on the experimentally determined radius of liquid droplets, was compared with a model of dynamically changing relaxation time, calculated by formula (7) and depending on local parameters. It is shown that the relaxation time varies significantly depending on the thermodynamic conditions of the two-phase medium during outflow. The application of the proposed model with dynamic relaxation time leads to qualitatively correct results. The model can be used for both vaporization and condensation processes. It is shown that the model can be improved on the basis of processing experimental data on the distribution of the droplet sizes formed during the break-up of the liquid jet.

  6. Industrial process surveillance system

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Singer, Ralph M.; Mott, Jack E.

    1998-01-01

    A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.
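
    The claimed monitoring loop follows a recognisable pattern: learn states of normal operation, generate an expected value for each new observation from the closest learned state, and alarm when the residual departs from normalcy. A minimal illustrative sketch of that pattern (a nearest-neighbour stand-in, not the patented estimation technique; the signals and threshold rule are invented for the example):

        import numpy as np

        rng = np.random.default_rng(1)

        # Training data: snapshots of 4 correlated process signals under
        # normal operation (synthetic).
        t = rng.uniform(0, 10, 500)
        states = np.column_stack([np.sin(t), np.cos(t), 0.5 * t, t % 3])
        states += rng.normal(0, 0.02, states.shape)
        memory, valid = states[:400], states[400:]   # learned states / validation

        def expected_value(obs, memory):
            """Expected sensor vector = the learned state closest to the observation."""
            return memory[np.argmin(np.linalg.norm(memory - obs, axis=1))]

        # Alarm threshold derived from validation residuals under normal operation.
        resid = np.array([np.linalg.norm(x - expected_value(x, memory)) for x in valid])
        threshold = resid.mean() + 6 * resid.std()

        # Monitoring: a normal observation and a faulted one (stuck sensor 4).
        normal_obs = np.array([np.sin(2.0), np.cos(2.0), 1.0, 2.0])
        fault_obs = np.array([np.sin(2.0), np.cos(2.0), 1.0, 9.9])
        for name, obs in (("normal", normal_obs), ("fault", fault_obs)):
            r = np.linalg.norm(obs - expected_value(obs, memory))
            print(f"{name}: residual = {r:.3f}, alarm = {r > threshold}")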

  7. Industrial process surveillance system

    DOEpatents

    Gross, K.C.; Wegerich, S.W.; Singer, R.M.; Mott, J.E.

    1998-06-09

    A system and method are disclosed for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy. 96 figs.

  8. Industrial Process Surveillance System

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W; Singer, Ralph M.; Mott, Jack E.

    2001-01-30

    A system and method for monitoring an industrial process and/or industrial data source. The system includes generating time varying data from industrial data sources, processing the data to obtain time correlation of the data, determining the range of data, determining learned states of normal operation and using these states to generate expected values, comparing the expected values to current actual values to identify a current state of the process closest to a learned, normal state; generating a set of modeled data, and processing the modeled data to identify a data pattern and generating an alarm upon detecting a deviation from normalcy.

  9. Effect of high-pressure processing and milk on the anthocyanin composition and antioxidant capacity of strawberry-based beverages.

    PubMed

    Tadapaneni, Ravi Kiran; Banaszewski, Katarzyna; Patazca, Eduardo; Edirisinghe, Indika; Cappozzo, Jack; Jackson, Lauren; Burton-Freeman, Britt

    2012-06-13

    The present study investigated processing strategies and matrix effects on the antioxidant capacity (AC) and polyphenols (PP) content of fruit-based beverages: (1) strawberry powder (Str) + dairy, D-Str; (2) Str + water, ND-Str; (3) dairy + no Str, D-NStr. Beverages were subjected to high-temperature-short-time (HTST) and high-pressure processing (HPP). AC and PP were measured before and after processing and after a 5 week shelf-life study. Unprocessed D-Str had significantly lower AC compared to unprocessed ND-Str. Significant reductions in AC were apparent in HTST- compared to HPP-processed beverages (up to 600 MPa). PP content was significantly reduced in D-Str compared to ND-Str and in response to HPP and HTST in all beverages. After storage (5 weeks), AC and PP were reduced in all beverages compared to unprocessed and week 0 processed beverages. These findings indicate potentially negative effects of milk and processing on AC and PP of fruit-based beverages.

  10. Effect of soaking, boiling, and steaming on total phenolic content and antioxidant activities of cool season food legumes.

    PubMed

    Xu, Baojun; Chang, Sam K C

    2008-09-01

    The effects of soaking, boiling and steaming processes on the total phenolic components and antioxidant activity in commonly consumed cool season food legumes (CSFL's), including green pea, yellow pea, chickpea and lentil, were investigated. As compared to the original unprocessed legumes, all processing steps caused significant (p<0.05) decreases in total phenolic content (TPC) and DPPH free radical scavenging activity (DPPH) in all tested CSFL's. All soaking and atmospheric boiling treatments caused significant (p<0.05) decreases in oxygen radical absorbing capacity (ORAC). However, pressure boiling and pressure steaming caused significant (p<0.05) increases in ORAC values. Steaming treatments resulted in greater retention of TPC, DPPH, and ORAC values in all tested CSFL's as compared to boiling treatments. To obtain cooked legumes with similar palatability and firmness, pressure boiling shortened processing time as compared to atmospheric boiling and resulted in insignificant differences in TPC and DPPH for green and yellow pea. However, TPC and DPPH in cooked lentils differed significantly between atmospheric and pressure boiling. As compared to atmospheric processes, pressure processes significantly increased ORAC values in both boiled and steamed CSFL's. Greater TPC, DPPH and ORAC values were detected in boiling water than in soaking and steaming water. Boiling also caused more solid loss than steaming. Steam processing exhibited several advantages: retaining the integrity of the legume appearance and texture of the cooked product, shortening process time, and greater retention of antioxidant components and activities. Copyright © 2008 Elsevier Ltd. All rights reserved.

  11. Modeling spiking behavior of neurons with time-dependent Poisson processes.

    PubMed

    Shinomoto, S; Tsubo, Y

    2001-10-01

    Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.
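
    The flavour of these interval statistics is easy to reproduce: the sketch below simulates a sinusoidally regulated Poisson process by Lewis-Shedler thinning and computes the coefficient of variation, skewness, and lag-1 serial correlation of the inter-spike intervals (rate parameters are illustrative, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(2)

        # Sinusoidally regulated rate: slow modulation relative to the mean ISI.
        RATE_MEAN, RATE_AMP, RATE_FREQ = 20.0, 10.0, 0.5      # Hz, Hz, Hz
        T_MAX = 2000.0                                        # seconds
        LAM_MAX = RATE_MEAN + RATE_AMP

        def rate(t):
            return RATE_MEAN + RATE_AMP * np.sin(2 * np.pi * RATE_FREQ * t)

        # Thinning: candidate events at the maximum rate, each kept with
        # probability rate(t) / LAM_MAX.
        cand = np.sort(rng.uniform(0, T_MAX, rng.poisson(LAM_MAX * T_MAX)))
        spikes = cand[rng.uniform(0, LAM_MAX, cand.size) < rate(cand)]

        isi = np.diff(spikes)
        z = (isi - isi.mean()) / isi.std()
        print(f"CV         = {isi.std() / isi.mean():.3f}")   # > 1 here
        print(f"skewness   = {np.mean(z ** 3):.3f}")          # 2 for pure Poisson
        print(f"lag-1 corr = {np.corrcoef(isi[:-1], isi[1:])[0, 1]:.3f}")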

  12. Administrative Preparedness Strategies: Expediting Procurement and Contracting Cycle Times During an Emergency.

    PubMed

    Hurst, David; Sharpe, Sharon; Yeager, Valerie A

    We assessed whether administrative preparedness processes that were intended to expedite the acquisition of goods and services during a public health emergency affect estimated procurement and contracting cycle times. We obtained data from 2014-2015 applications to the Hospital Preparedness Program and Public Health Emergency Preparedness (HPP-PHEP) cooperative agreements. We compared the estimated procurement and contracting cycle times of 61 HPP-PHEP awardees that did and did not have certain administrative processes in place. Certain processes, such as statutes allowing for procuring and contracting on the open market, had an effect on reducing the estimated cycle times for obtaining goods and services. Other processes, such as cooperative purchasing agreements, also had an effect on estimated procurement time. For example, awardees with statutes that permitted them to obtain goods and services in the open market had an average procurement cycle time of 6 days; those without such statutes had a cycle time of 17 days (P = .04). PHEP awardees should consider adopting these or similar processes in an effort to reduce cycle times.

  13. Do Visual Processing Deficits Cause Problem on Response Time Task for Dyslexics?

    ERIC Educational Resources Information Center

    Sigmundsson, H.

    2005-01-01

    This study set out to explore the prediction that dyslexics would be likely to have particular problems, compared to a control group, on a response time task when 'driving' a car simulator. The reason for doing so stems from the considerable body of research on visual processing difficulties manifested by dyslexics. The task was…

  14. Comparing Changes in Late-Life Depressive Symptoms across Aging, Disablement, and Mortality Processes

    ERIC Educational Resources Information Center

    Fauth, Elizabeth B.; Gerstorf, Denis; Ram, Nilam; Malmberg, Bo

    2014-01-01

    Developmental processes are inherently time-related, with various time metrics and transition points being used to proxy how change is organized with respect to the theoretically underlying mechanisms. Using data from 4 Swedish studies of individuals aged 70-100+ (N = 453) who were measured every 2 years for up to 5 waves, we tested whether…

  15. Tide Gauge Records Reveal Improved Processing of Gravity Recovery and Climate Experiment Time-Variable Mass Solutions over the Coastal Ocean

    NASA Astrophysics Data System (ADS)

    Piecuch, Christopher G.; Landerer, Felix W.; Ponte, Rui M.

    2018-05-01

    Monthly ocean bottom pressure solutions from the Gravity Recovery and Climate Experiment (GRACE), derived using surface spherical cap mass concentration (MC) blocks and spherical harmonics (SH) basis functions, are compared to tide gauge (TG) monthly averaged sea level data over 2003-2015 to evaluate improved gravimetric data processing methods near the coast. MC solutions can explain ≳42% of the monthly variance in TG time series over broad shelf regions and in semi-enclosed marginal seas. MC solutions also generally explain ~5-32% more TG data variance than SH estimates. Applying a coastline resolution improvement algorithm in the GRACE data processing leads to ~31% more variance in TG records explained by the MC solution on average compared to not using this algorithm. Synthetic observations sampled from an ocean general circulation model exhibit similar patterns of correspondence between modeled TG and MC time series and differences between MC and SH time series in terms of their relationship with TG time series, suggesting that observational results here are generally consistent with expectations from ocean dynamics. This work demonstrates the improved quality of recent MC solutions compared to earlier SH estimates over the coastal ocean, and suggests that the MC solutions could be a useful tool for understanding contemporary coastal sea level variability and change.
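
    The headline metric, percent of tide gauge variance explained, reduces to one minus the residual-to-data variance ratio; a toy illustration with synthetic monthly series standing in for the TG and MC records:

        import numpy as np

        rng = np.random.default_rng(5)

        months = np.arange(156)                           # 2003-2015, monthly
        seasonal = np.sin(2 * np.pi * months / 12)
        tg = seasonal + rng.normal(0, 0.4, months.size)   # tide gauge (toy)
        mc = seasonal + rng.normal(0, 0.2, months.size)   # mascon solution (toy)

        # Percent of TG variance explained by the gravimetric series.
        var_explained = 1.0 - np.var(tg - mc) / np.var(tg)
        print(f"variance explained: {var_explained:.0%}")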

  16. Modulation of human time processing by subthalamic deep brain stimulation.

    PubMed

    Wojtecki, Lars; Elben, Saskia; Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

    Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5- and 15-second intervals and millisecond timing in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz, and ≥130-Hz STN-DBS, compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz stimulation compared to no stimulation, ≥130-Hz STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of the BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds.

  17. Modulation of Human Time Processing by Subthalamic Deep Brain Stimulation

    PubMed Central

    Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

    Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5- and 15-second intervals and millisecond timing in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz, and ≥130-Hz STN-DBS, compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz stimulation compared to no stimulation, ≥130-Hz STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of the BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds. PMID:21931767

  18. Constant versus variable response signal delays in speed-accuracy trade-offs: effects of advance preparation for processing time.

    PubMed

    Miller, Jeff; Sproesser, Gudrun; Ulrich, Rolf

    2008-07-01

    In two experiments, we used response signals (RSs) to control processing time and trace out speed-accuracy trade-off (SAT) functions in a difficult perceptual discrimination task. Each experiment compared performance in blocks of trials with constant and, hence, temporally predictable RS lags against performance in blocks with variable, unpredictable RS lags. In both experiments, essentially equivalent SAT functions were observed with constant and variable RS lags. We conclude that there is little effect of advance preparation for a given processing time, suggesting that the discrimination mechanisms underlying SAT functions are driven solely by bottom-up information processing in perceptual discrimination tasks.
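
    SAT functions of the kind traced out here are conventionally summarised by a shifted-exponential rise to an asymptote; the sketch below fits that three-parameter form to simulated lag-accuracy data (the parameterisation is the standard one from the SAT literature, and all values are illustrative rather than the authors'):

        import numpy as np
        from scipy.optimize import curve_fit

        def sat(t, lam, beta, delta):
            """Shifted exponential: chance until delta, then rise to asymptote lam."""
            return lam * (1.0 - np.exp(-beta * (t - delta))) * (t > delta)

        # Synthetic lag/accuracy data (d'-like scale), illustrative values only.
        lags = np.array([0.10, 0.20, 0.35, 0.50, 0.80, 1.20, 2.00])   # seconds
        rng = np.random.default_rng(3)
        acc = sat(lags, 2.5, 4.0, 0.15) + rng.normal(0, 0.05, lags.size)

        (lam, beta, delta), _ = curve_fit(sat, lags, acc, p0=(2.0, 3.0, 0.1))
        print(f"asymptote={lam:.2f}  rate={beta:.2f}/s  intercept={delta * 1000:.0f} ms")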

  19. Simulation and Validation of Injection-Compression Filling Stage of Liquid Moulding with Fast Curing Resins

    NASA Astrophysics Data System (ADS)

    Martin, Ffion A.; Warrior, Nicholas A.; Simacek, Pavel; Advani, Suresh; Hughes, Adrian; Darlington, Roger; Senan, Eissa

    2018-03-01

    Very short manufacturing cycle times are required if continuous carbon fibre and epoxy composite components are to be economically viable solutions for high volume composite production in the automotive industry. Here, a manufacturing process variant of resin transfer moulding (RTM) targets a reduction of in-mould manufacture time by reducing the time to inject and cure components. The process involves two stages: resin injection followed by compression. A flow simulation methodology using an RTM solver for the process has been developed. This paper compares the simulation predictions to experiments performed using industrial equipment. The issues encountered during manufacturing are included in the simulation, and their sensitivity to the process is explored.

  20. Fixation of strategies with the Moran and Fermi processes in evolutionary games

    NASA Astrophysics Data System (ADS)

    Liu, Xuesong; He, Mingfeng; Kang, Yibin; Pan, Qiuhui

    2017-10-01

    A model of stochastic evolutionary game dynamics with a finite population was built. It combines the standard Moran and Fermi rules with two strategies, cooperation and defection. We obtain expressions for the fixation probabilities and fixation times. The one-third rule, which has been found in the frequency-dependent Moran process, also holds for our model. We obtain the conditions for a strategy to be an evolutionarily stable strategy in our model, and then make a comparison with the standard Moran process. Besides, the analytical results show that, compared with the standard Moran process, fixation occurs with higher probabilities under a prisoner's dilemma game and a coordination game, but with lower probabilities under a coexistence game. The simulation results show that the fixation time in our mixed process is lower than that in the standard Fermi process. In comparison with the standard Moran process, fixation always takes more time on average in spatial populations, regardless of the game. In addition, the fixation time decreases with the growth of the number of neighbors.
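
    For intuition about fixation under the standard frequency-dependent Moran rule (birth proportional to fitness, death uniform), the Monte Carlo sketch below estimates the fixation probability of a single cooperator in a prisoner's dilemma; the payoff matrix and selection intensity are illustrative, and the paper's mixed Moran-Fermi rule is not implemented here:

        import numpy as np

        rng = np.random.default_rng(4)

        N = 20                             # population size
        W = 0.5                            # selection intensity
        R, S, T, P = 3.0, 0.0, 5.0, 1.0    # C-C, C-D, D-C, D-D payoffs (PD)

        def fixation_probability(runs=10000):
            fixed = 0
            for _ in range(runs):
                i = 1                                  # one initial cooperator
                while 0 < i < N:
                    # average payoffs, excluding self-interaction
                    pi_c = (R * (i - 1) + S * (N - i)) / (N - 1)
                    pi_d = (T * i + P * (N - i - 1)) / (N - 1)
                    f_c = 1 - W + W * pi_c             # fitness of a cooperator
                    f_d = 1 - W + W * pi_d
                    # birth chosen proportional to fitness, death uniformly
                    born_c = rng.random() < i * f_c / (i * f_c + (N - i) * f_d)
                    died_c = rng.random() < i / N
                    i += int(born_c) - int(died_c)
                fixed += (i == N)
            return fixed / runs

        # Defection dominates in a PD, so this lands well below the neutral 1/N.
        print(f"cooperator fixation probability: {fixation_probability():.4f}")
        print(f"neutral benchmark 1/N          : {1 / N:.4f}")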

  1. On the upscaling of process-based models in deltaic applications

    NASA Astrophysics Data System (ADS)

    Li, L.; Storms, J. E. A.; Walstra, D. J. R.

    2018-03-01

    Process-based numerical models are increasingly used to study the evolution of marine and terrestrial depositional environments. Whilst a detailed description of small-scale processes provides an accurate representation of reality, application on geological timescales is constrained by the associated increase in computational time. In order to reduce the computational time, a number of acceleration methods are combined and evaluated for a schematic supply-driven delta (static base level) and an accommodation-driven delta (variable base level). The performance of the combined acceleration methods is evaluated by comparing morphological indicators, such as distributary channel networking and delta volumes, derived from the model predictions for various levels of acceleration. The results of the accelerated models are compared to the outcomes from a series of simulations that capture autogenic variability. Autogenic variability is quantified by re-running identical models on an initial bathymetry with 1 cm of added noise. The overall results show that the variability of the accelerated models falls within the autogenic variability range, suggesting that the application of acceleration methods does not significantly affect the simulated delta evolution. The time-scale compression method (the acceleration method introduced in this paper) results in an increased computational efficiency of 75% without adversely affecting the simulated delta evolution compared to a base case. The combination of the time-scale compression method with the existing acceleration methods has the potential to extend the application range of process-based models towards geologic timescales.

  2. Stern Frame and Hawsepipe Construction Technology

    DTIC Science & Technology

    1978-01-01

    to the classification societies regarding possible changes in the rules governing stern frame and hawsepipe designs were also considered. In the first... which were most representative of the ships being constructed or contemplated for construction in U.S. shipyards, and comparing them from the standpoint of... equipment needed in the manufacturing process. Time: length of time needed to complete units on a comparative... 1.3 Summary of Results. The data obtained

  3. Digital signal processing methods for biosequence comparison.

    PubMed Central

    Benson, D C

    1990-01-01

    A method is discussed for DNA or protein sequence comparison using a finite field fast Fourier transform, a digital signal processing technique, and statistical methods are discussed for analyzing the output of this algorithm. This method compares two sequences of length N in computing time proportional to N log N, compared to N² for methods currently used. This method makes it feasible to compare very long sequences. An example is given to show that the method correctly identifies sites of known homology. PMID:2349096
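
    The core trick, counting base matches at every alignment offset in O(N log N), can be illustrated with an ordinary complex FFT over one-hot indicator sequences; the paper's finite-field FFT avoids floating-point rounding, which the rounding step below papers over:

        import numpy as np

        def match_counts(a: str, b: str, alphabet="ACGT") -> np.ndarray:
            """counts[k] = number of positions i with a[i] == b[i - k], k >= 0."""
            nfft = 1 << (len(a) + len(b) - 1).bit_length()   # power of two pad
            total = np.zeros(nfft)
            for base in alphabet:
                x = np.array([c == base for c in a], dtype=float)
                y = np.array([c == base for c in b], dtype=float)
                # circular cross-correlation via the FFT, O(N log N)
                total += np.fft.irfft(np.fft.rfft(x, nfft) *
                                      np.conj(np.fft.rfft(y, nfft)), nfft)
            return np.rint(total[:len(a)]).astype(int)

        seq = "ACGTACGTAT"
        a = "GG" + seq            # a is seq shifted right by two bases
        b = seq + "CC"
        counts = match_counts(a, b)
        best = int(np.argmax(counts))
        print(f"best offset: {best} with {counts[best]} matching bases")  # offset 2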

  4. Using Dispersed Modes During Model Correlation

    NASA Technical Reports Server (NTRS)

    Stewart, Eric C.; Hathcock, Megan L.

    2017-01-01

    The model correlation process for the modal characteristics of a launch vehicle is well established. After a test, parameters within the nominal model are adjusted to reflect structural dynamics revealed during testing. However, a full model correlation process for a complex structure can take months of man-hours and substantial computational resources. If the analyst only has weeks, or even days, in which to correlate the nominal model to the experimental results, then the traditional correlation process is not suitable. This paper describes using model dispersions to assist the model correlation process and decrease its overall cost. The process creates thousands of model dispersions from the nominal model prior to the test and then compares each of them to the test data. Using mode shape and frequency error metrics, one dispersion is selected as the best match to the test data. This dispersion is further improved by using a commercial model correlation software package. In the three examples shown in this paper, this dispersion-based model correlation process performs well when compared to models correlated using traditional techniques and saves time in the post-test analysis.

  5. Experimental photonic generation of chirped pulses using nonlinear dispersion-based incoherent processing.

    PubMed

    Rius, Manuel; Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José

    2015-05-18

    We experimentally demonstrate, for the first time, a chirped microwave pulse generator based on the processing of an incoherent optical signal by means of a nonlinear dispersive element. Different capabilities have been demonstrated, such as control of the time-bandwidth product and frequency tuning, increasing the flexibility of the generated waveform compared to coherent techniques. Moreover, the use of differential detection considerably mitigates the signal-to-noise ratio limitation associated with incoherent processing.

  6. Materials and Process Design for High-Temperature Carburizing: Integrating Processing and Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. Apelian

    2007-07-23

    The objective of the project is to develop an integrated process for fast, high-temperature carburizing. The new process results in an order-of-magnitude reduction in cycle time compared to conventional carburizing and represents significant energy savings, in addition to a corresponding reduction of the scrap associated with distortion in carburized steels.

  7. Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.

    PubMed

    Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence

    2012-12-01

    A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.

  8. Optimization of a micro-scale, high throughput process development tool and the demonstration of comparable process performance and product quality with biopharmaceutical manufacturing processes.

    PubMed

    Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J

    2017-07-14

    In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench scale and the clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), the comparable product quality results at all scales make this tool an appropriate scale model to enable purification and product quality comparisons of HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Real-Time Plasma Process Condition Sensing and Abnormal Process Detection

    PubMed Central

    Yang, Ryan; Chen, Rongshun

    2010-01-01

    The plasma process is often used in the fabrication of semiconductor wafers. However, the lack of real-time etching control may result in unacceptable process performance, leading to significant waste and lower wafer yield. In order to maximize the product wafer yield, timely and accurate detection of process faults or abnormalities in a plasma reactor is needed. Optical emission spectroscopy (OES) is one of the most frequently used metrologies for in-situ process monitoring. Even though OES has the advantage of non-invasiveness, it produces a huge amount of information. As a result, the data analysis of OES becomes a big challenge. To accomplish real-time detection, this work employed the sigma matching technique, applied to the time series of the OES full-spectrum intensity. First, a response model of a healthy plasma spectrum was developed. Then, we defined a matching rate as an indicator for comparing the difference between a tested wafer's response and the healthy sigma model. The experimental results showed that this proposed method can detect process faults in real time, even in plasma etching tools. PMID:22219683
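
    One plausible reading of the matching-rate indicator is the fraction of wavelength channels whose intensity stays inside the healthy mean +/- 3 sigma band; the exact definition in the paper may differ. A sketch on synthetic spectra:

        import numpy as np

        rng = np.random.default_rng(6)

        # Healthy reference: full-spectrum OES intensities from 50 good runs.
        N_RUNS, N_CHANNELS = 50, 1024
        healthy = rng.normal(100.0, 5.0, (N_RUNS, N_CHANNELS))
        mu, sigma = healthy.mean(axis=0), healthy.std(axis=0)

        def matching_rate(spectrum):
            """Fraction of wavelength channels inside the healthy 3-sigma band."""
            return np.mean(np.abs(spectrum - mu) <= 3.0 * sigma)

        good = rng.normal(100.0, 5.0, N_CHANNELS)
        bad = good.copy()
        bad[200:260] += 40.0            # a shifted emission line: process fault
        for name, s in (("good", good), ("bad", bad)):
            print(f"{name} wafer matching rate: {matching_rate(s):.2%}")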

  10. Analytically Solvable Model of Spreading Dynamics with Non-Poissonian Processes

    NASA Astrophysics Data System (ADS)

    Jo, Hang-Hyun; Perotti, Juan I.; Kaski, Kimmo; Kertész, János

    2014-01-01

    Non-Poissonian bursty processes are ubiquitous in natural and social phenomena, yet little is known about their effects on the large-scale spreading dynamics. In order to characterize these effects, we devise an analytically solvable model of susceptible-infected spreading dynamics in infinite systems for arbitrary inter-event time distributions and for the whole time range. Our model is stationary from the beginning, and the role of the lower bound of inter-event times is explicitly considered. The exact solution shows that for early and intermediate times, the burstiness accelerates the spreading as compared to a Poisson-like process with the same mean and same lower bound of inter-event times. Such behavior is opposite for late-time dynamics in finite systems, where the power-law distribution of inter-event times results in a slower and algebraic convergence to a fully infected state in contrast to the exponential decay of the Poisson-like process. We also provide an intuitive argument for the exponent characterizing algebraic convergence.

  11. A Comparison of Some Processing Time Measures Based on Eye Movements. Technical Report No. 285.

    ERIC Educational Resources Information Center

    Blanchard, Harry E.

    A study was conducted to provide a replication of the gaze duration algorithm proposed by M. A. Just and P. A. Carpenter using a different kind of passage, to compare the three gaze duration algorithms that have been proposed by other researchers, and to measure processing time in reading. Fifty-one college students read a passage while their eye…

  12. Temporal abnormalities in children with developmental dyscalculia.

    PubMed

    Vicario, Carmelo Mario; Rappo, Gaetano; Pepi, Annamaria; Pavan, Andrea; Martino, Davide

    2012-01-01

    Recent imaging studies have associated developmental dyscalculia (DD) with structural and functional alterations of the parietal and prefrontal cortex (PFC). Since these areas have also been shown to be involved in timing abilities, we hypothesized that time processing is abnormal in DD. We compared time processing abilities between 10 children with pure DD (8 years old) and 11 age-matched healthy children. Results show that the DD group underestimated durations on a sub-second scale when asked to perform a time comparison task. The timing abnormality observed in our DD participants is consistent with evidence of a shared fronto-parietal neural network for representing time and quantity.

  13. Using Empirical Mode Decomposition to process Marine Magnetotelluric Data

    NASA Astrophysics Data System (ADS)

    Chen, J.; Jegen, M. D.; Heincke, B. H.; Moorkamp, M.

    2014-12-01

    Magnetotelluric (MT) data always exhibit nonstationarities due to variations in source mechanisms, which cause MT variations on different temporal and spatial scales. An additional non-stationary component is introduced through noise, which is particularly pronounced in marine MT data in the form of motion-induced noise caused by time-varying wave motion and currents. We present a new heuristic method for dealing with the non-stationarity of MT time series based on Empirical Mode Decomposition (EMD). The EMD method is used in combination with the derived instantaneous spectra to determine impedance estimates. The procedure is tested on synthetic and field MT data. In synthetic tests, the reliability of impedance estimates from the EMD-based method is compared to the synthetic responses of a 1D layered model. To examine how estimates are affected by noise, stochastic stationary and non-stationary noise are added to the time series. Comparisons reveal that estimates from the EMD-based method are generally more stable than those from simple Fourier analysis. Furthermore, the results are compared to those derived by a commonly used Fourier-based MT data processing software package (BIRRP), which incorporates additional sophisticated robust estimation to deal with noise issues. The results from both methods are already comparable, even though no robust estimation procedures are implemented in the EMD approach at the present stage. The processing scheme is then applied to marine MT field data. Testing is performed on short, relatively quiet segments of several data sets, as well as on long segments of data with many non-stationary noise packages. Compared to BIRRP, the new method gives comparable or better impedance estimates; furthermore, the estimates extend to lower frequencies, and less noise-biased estimates with smaller error bars are obtained at high frequencies. The new processing methodology represents an important step towards deriving a better resolved Earth model to greater depth underneath the seafloor.
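
    A minimal sketch of the machinery involved: one EMD sifting loop (fixed iteration count, with no stopping criterion or end-effect handling) followed by a Hilbert instantaneous frequency, on a toy two-component signal. This illustrates the decomposition only, not the impedance estimation:

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import argrelextrema, hilbert

        def sift(x, n_iter=8):
            """One EMD sifting loop with a fixed iteration count (no stop test)."""
            h = x.copy()
            idx = np.arange(len(x))
            for _ in range(n_iter):
                maxima = argrelextrema(h, np.greater)[0]
                minima = argrelextrema(h, np.less)[0]
                if len(maxima) < 4 or len(minima) < 4:
                    break
                upper = CubicSpline(maxima, h[maxima])(idx)
                lower = CubicSpline(minima, h[minima])(idx)
                h = h - (upper + lower) / 2.0     # subtract the mean envelope
            return h                               # first intrinsic mode function

        fs = 100.0                                 # Hz
        t = np.arange(0, 20, 1 / fs)
        # Fast 3 Hz component riding on a slow 0.3 Hz trend.
        x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 0.3 * t)

        imf1 = sift(x)
        phase = np.unwrap(np.angle(hilbert(imf1)))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)
        print(f"median instantaneous frequency of IMF1: {np.median(inst_freq):.2f} Hz")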

  14. Effect of Aging Process and Time on Physicochemical and Sensory Evaluation of Raw Beef Top Round and Shank Muscles Using an Electronic Tongue.

    PubMed

    Kim, Ji-Han; Kim, Dong-Han; Ji, Da-Som; Lee, Hyun-Jin; Yoon, Dong-Kyu; Lee, Chi-Ho

    2017-01-01

    The objective of this study was to determine the effect of aging method (dry or wet) and time (20 d or 40 d) on physical, chemical, and sensory properties of two different muscles (top round and shank) from steers (n=12) using an electronic tongue (ET). Moisture content was not affected by muscle types and aging method (p > 0.05). Shear force of dry aged beef was significantly decreased compared to that of wet aged beef. Most fatty acids of dry aged beef were significantly lower than those of wet aged beef. Dry aged shank muscles had more abundant free amino acids than top round muscles. Dry-aging process enhanced tastes such as umami and saltiness compared to wet-aging process according to ET results. Dry-aging process could enhance the instrumental tenderness and umami taste of beef. In addition, the taste of shank muscle was more affected by dry-aging process than that of round muscle.

  15. Effect of Aging Process and Time on Physicochemical and Sensory Evaluation of Raw Beef Top Round and Shank Muscles Using an Electronic Tongue

    PubMed Central

    2017-01-01

    The objective of this study was to determine the effect of aging method (dry or wet) and time (20 d or 40 d) on physical, chemical, and sensory properties of two different muscles (top round and shank) from steers (n=12) using an electronic tongue (ET). Moisture content was not affected by muscle types and aging method (p>0.05). Shear force of dry aged beef was significantly decreased compared to that of wet aged beef. Most fatty acids of dry aged beef were significantly lower than those of wet aged beef. Dry aged shank muscles had more abundant free amino acids than top round muscles. Dry-aging process enhanced tastes such as umami and saltiness compared to wet-aging process according to ET results. Dry-aging process could enhance the instrumental tenderness and umami taste of beef. In addition, the taste of shank muscle was more affected by dry-aging process than that of round muscle. PMID:29725203

  16. Event-driven processing for hardware-efficient neural spike sorting

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Pereira, João L.; Constandinou, Timothy G.

    2018-02-01

    Objective. The prospect of real-time and on-node spike sorting provides a genuine opportunity to push the envelope of large-scale integrated neural recording systems. In such systems the hardware resources, power requirements and data bandwidth increase linearly with channel count. Event-based (or data-driven) processing can provide a new, efficient means for hardware implementation that is completely activity dependent. In this work, we investigate using continuous-time level-crossing sampling for efficient data representation and subsequent spike processing. Approach. (1) We first compare signals (synthetic neural datasets) encoded with this technique against conventional sampling. (2) We then show how such a representation can be directly exploited by extracting simple time domain features from the bitstream to perform neural spike sorting. (3) The proposed method is implemented on a low power FPGA platform to demonstrate its hardware viability. Main results. It is observed that considerably lower data rates are achievable when using 7 bits or less to represent the signals, whilst maintaining the signal fidelity. Results obtained using both MATLAB and reconfigurable logic hardware (FPGA) indicate that feature extraction and spike sorting can be achieved with comparable or better accuracy than reference methods whilst also requiring relatively low hardware resources. Significance. By effectively exploiting continuous-time data representation, neural signal processing can be achieved in a completely event-driven manner, reducing both the required resources (memory, complexity) and computations (operations). This will see future large-scale neural systems integrating on-node processing in real-time hardware.
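
    Level-crossing sampling itself is compact enough to sketch: instead of sampling on a fixed clock, an event is emitted whenever the signal crosses the next quantisation level, so the data rate tracks activity. Resolution and rates below are illustrative:

        import numpy as np

        def level_crossing_encode(x, t, delta):
            """Emit (time, direction) events when x crosses the next level."""
            events_t, events_d = [], []
            ref = x[0]
            for ti, xi in zip(t[1:], x[1:]):
                while xi >= ref + delta:           # one event per level crossed
                    ref += delta
                    events_t.append(ti)
                    events_d.append(+1)
                while xi <= ref - delta:
                    ref -= delta
                    events_t.append(ti)
                    events_d.append(-1)
            return np.array(events_t), np.array(events_d)

        fs = 24000.0                               # clocked reference rate, Hz
        t = np.arange(0.0, 0.1, 1 / fs)
        # Toy trace: quiet baseline plus one spike-like transient at 50 ms.
        x = 0.05 * np.sin(2 * np.pi * 10 * t)
        x = x + np.exp(-((t - 0.05) ** 2) / (2 * 0.0005 ** 2))

        times, dirs = level_crossing_encode(x, t, delta=0.1)
        print(f"{times.size} events versus {t.size} clocked samples")
        print(f"events within 2 ms of the spike: {np.sum(np.abs(times - 0.05) < 0.002)}")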

  17. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types, but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which automatically establish the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats, and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the positions of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
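
    The conversion being automated here can be illustrated at its smallest scale: quantise a floating-point signal to a signed Qm.n fixed-point format and measure the accuracy cost that the methodology trades against execution time (the format choices below are illustrative):

        import numpy as np

        def to_fixed(x, int_bits, frac_bits):
            """Quantize to a signed Q(int_bits).(frac_bits) format with saturation."""
            scale = 1 << frac_bits
            lo = -(1 << (int_bits + frac_bits - 1))     # two's-complement range
            hi = (1 << (int_bits + frac_bits - 1)) - 1
            return np.clip(np.round(x * scale), lo, hi) / scale

        rng = np.random.default_rng(7)
        signal = rng.normal(0.0, 0.3, 10000)            # floating-point reference

        for frac_bits in (7, 11, 15):                   # Q1.7, Q1.11, Q1.15
            y = to_fixed(signal, 1, frac_bits)
            err = signal - y
            sqnr = 10 * np.log10(np.mean(signal ** 2) / np.mean(err ** 2))
            print(f"Q1.{frac_bits}: SQNR = {sqnr:5.1f} dB")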

  18. ERP differences between processing of physical characteristics and personality attributes

    PubMed Central

    2012-01-01

    Background. Limited data from behavioral and brain-imaging studies indicate that personality traits and physical characteristics are processed differently by the brain. Additionally, electrophysiological studies comparing the processing of positive and negative words have produced mixed results. It is therefore not clear how physical and personality attributes with emotional valence (i.e., positive and negative valence) are processed. Thus, this study aimed to examine the neural activity associated with words describing personality traits and physical characteristics with positive or negative emotional valence using event-related potentials (ERPs). Methods. A sample of 15 healthy adults (7 men, 8 women) participated in a computerized word categorization task. Participants were asked to categorize visual word stimuli as physical characteristics or personality traits while ERPs were recorded synchronously. Results. Behavioral reaction times to negative physical stimuli were shorter compared to negative personality words; however, reaction times did not significantly differ for positive stimuli. Electrophysiological results showed that personality stimuli elicited larger P2 and LPC (Late Positive Component) amplitudes compared to physical stimuli, regardless of negative or positive valence. Moreover, negative as compared with positive stimuli elicited larger P2 and LPC amplitudes. Conclusion. Personality and physical stimuli were processed differently regardless of positive or negative valence. These findings suggest that personality traits and physical characteristics are differentially classified and are associated with different motivational significance. PMID:22967478

  19. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.

  20. Evaluation of Two Commercial Systems for Automated Processing, Reading, and Interpretation of Lyme Borreliosis Western Blots

    PubMed Central

    Binnicker, M. J.; Jespersen, D. J.; Harring, J. A.; Rollins, L. O.; Bryant, S. C.; Beito, E. M.

    2008-01-01

    The diagnosis of Lyme borreliosis (LB) is commonly made by serologic testing with Western blot (WB) analysis serving as an important supplemental assay. Although specific, the interpretation of WBs for diagnosis of LB (i.e., Lyme WBs) is subjective, with considerable variability in results. In addition, the processing, reading, and interpretation of Lyme WBs are laborious and time-consuming procedures. With the need for rapid processing and more objective interpretation of Lyme WBs, we evaluated the performances of two automated interpretive systems, TrinBlot/BLOTrix (Trinity Biotech, Carlsbad, CA) and BeeBlot/ViraScan (Viramed Biotech AG, Munich, Germany), using 518 serum specimens submitted to our laboratory for Lyme WB analysis. The results of routine testing with visual interpretation were compared to those obtained by BLOTrix analysis of MarBlot immunoglobulin M (IgM) and IgG and by ViraScan analysis of ViraBlot and ViraStripe IgM and IgG assays. BLOTrix analysis demonstrated an agreement of 84.7% for IgM and 87.3% for IgG compared to visual reading and interpretation. ViraScan analysis of the ViraBlot assays demonstrated agreements of 85.7% for IgM and 94.2% for IgG, while ViraScan analysis of the ViraStripe IgM and IgG assays showed agreements of 87.1 and 93.1%, respectively. Testing by the automated systems yielded an average time savings of 64 min/run compared to processing, reading, and interpretation by our current procedure. Our findings demonstrated that automated processing and interpretive systems yield results comparable to those of visual interpretation, while reducing the subjectivity and time required for Lyme WB analysis. PMID:18463211

  1. Evaluation of two commercial systems for automated processing, reading, and interpretation of Lyme borreliosis Western blots.

    PubMed

    Binnicker, M J; Jespersen, D J; Harring, J A; Rollins, L O; Bryant, S C; Beito, E M

    2008-07-01

    The diagnosis of Lyme borreliosis (LB) is commonly made by serologic testing with Western blot (WB) analysis serving as an important supplemental assay. Although specific, the interpretation of WBs for diagnosis of LB (i.e., Lyme WBs) is subjective, with considerable variability in results. In addition, the processing, reading, and interpretation of Lyme WBs are laborious and time-consuming procedures. With the need for rapid processing and more objective interpretation of Lyme WBs, we evaluated the performances of two automated interpretive systems, TrinBlot/BLOTrix (Trinity Biotech, Carlsbad, CA) and BeeBlot/ViraScan (Viramed Biotech AG, Munich, Germany), using 518 serum specimens submitted to our laboratory for Lyme WB analysis. The results of routine testing with visual interpretation were compared to those obtained by BLOTrix analysis of MarBlot immunoglobulin M (IgM) and IgG and by ViraScan analysis of ViraBlot and ViraStripe IgM and IgG assays. BLOTrix analysis demonstrated an agreement of 84.7% for IgM and 87.3% for IgG compared to visual reading and interpretation. ViraScan analysis of the ViraBlot assays demonstrated agreements of 85.7% for IgM and 94.2% for IgG, while ViraScan analysis of the ViraStripe IgM and IgG assays showed agreements of 87.1 and 93.1%, respectively. Testing by the automated systems yielded an average time savings of 64 min/run compared to processing, reading, and interpretation by our current procedure. Our findings demonstrated that automated processing and interpretive systems yield results comparable to those of visual interpretation, while reducing the subjectivity and time required for Lyme WB analysis.

  2. Music and Sound in Time Processing of Children with ADHD

    PubMed Central

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects with impairments in many environments of children's and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying the aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from children with normal development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6–14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups with 12 children in each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participants' performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. Results: (1) the performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the perceived duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates the possibility that music can, in some way, positively modulate the symptoms of inattention in ADHD. PMID:26441688

  3. Music and Sound in Time Processing of Children with ADHD.

    PubMed

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects with impairments in many environments of children's and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying the aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from children with normal development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6-14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups with 12 children in each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participants' performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. Results: (1) the performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the perceived duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates the possibility that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  4. Recollection and familiarity for words and faces: a study comparing Remember-Know judgements and the Process Dissociation Procedure.

    PubMed

    Espinosa-García, María; Vaquero, Joaquín M M; Milliken, Bruce; Tudela, Pío

    2017-01-01

    Measures of recollection and familiarity often differ depending on the paradigm utilised. Remember-Know (R-K) and Process Dissociation Procedure (PDP) methods have been commonly used but rarely compared within a single study. In the current experiments, R-K and PDP were compared by examining the effect of attention at study and time to respond at test on recollection and familiarity using the same experimental procedures for each paradigm. We also included faces in addition to words to test the generality of the findings often obtained using words. The results from the R-K paradigm revealed that recollection and familiarity were similarly affected by attention at study and time to respond at test. However, in the case of PDP, the measures of recollection and familiarity showed a different pattern of results. The effects observed for recollection were similar to those obtained with the R-K method, whereas familiarity was affected by time to respond but not by attention at study. These results are discussed in relation to the controlled-automatic processing distinction and the contribution of each paradigm to research on recognition memory.

  5. Development of magnitude processing in children with developmental dyscalculia: space, time, and number

    PubMed Central

    Skagerlund, Kenny; Träff, Ulf

    2014-01-01

    Developmental dyscalculia (DD) is a learning disorder associated with impairments in a preverbal non-symbolic approximate number system (ANS) pertaining to areas in and around the intraparietal sulcus (IPS). The current study sought to enhance our understanding of the developmental trajectory of the ANS and symbolic number processing skills, thereby getting insight into whether a deficit in the ANS precedes or is preceded by impaired symbolic and exact number processing. Recent work has also suggested that humans are endowed with a shared magnitude system (beyond the number domain) in the brain. We therefore investigated whether children with DD demonstrated a general magnitude deficit, stemming from the proposed magnitude system, rather than a specific one limited to numerical quantity. Fourth graders with DD were compared to age-matched controls and a group of ability-matched second graders, on a range of magnitude processing tasks pertaining to space, time, and number. Children with DD displayed difficulties across all magnitude dimensions compared to age-matched peers and showed impaired ANS acuity compared to the younger, ability-matched control group, while exhibiting intact symbolic number processing. We conclude that (1) children with DD suffer from a general magnitude-processing deficit, (2) a shared magnitude system likely exists, and (3) a symbolic number-processing deficit in DD tends to be preceded by an ANS deficit. PMID:25018746

  7. A New Perspective on Visual Word Processing Efficiency

    PubMed Central

    Houpt, Joseph W.; Townsend, James T.; Donkin, Christopher

    2013-01-01

    As a fundamental part of our daily lives, visual word processing has received much attention in the psychological literature. Despite the well-established advantage, in accuracy, of perceiving letters in a word or in a pseudoword over letters alone or in random sequences, a comparable effect in response times has been elusive. Some researchers continue to question whether the advantage due to word context is perceptual. We use the capacity coefficient, a well-established, response-time-based measure of efficiency, to provide evidence of word processing as a particularly efficient perceptual process, complementing the results from the accuracy domain. PMID:24334151
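
    The capacity coefficient compares the integrated hazard of responding to the whole against the sum of integrated hazards for its parts. A minimal sketch of how it can be estimated from response-time samples is below; the two-channel OR form and all distributions are illustrative assumptions, not the paper's data or code.

```python
import numpy as np

def integrated_hazard(rts, t):
    """Estimate H(t) = -log S(t) from response-time samples,
    using the empirical survivor function S(t)."""
    rts = np.asarray(rts, dtype=float)
    surv = (rts > t).mean()
    # guard against log(0) when t exceeds the slowest observed response
    return -np.log(max(surv, 1.0 / (len(rts) + 1)))

def capacity_or(rt_whole, rt_part_a, rt_part_b, t):
    """OR-task capacity coefficient C(t) = H_whole(t) / (H_a(t) + H_b(t)).
    C(t) > 1 suggests super-capacity (more efficient than independent
    parallel processing of the parts); C(t) < 1 suggests limited capacity."""
    return integrated_hazard(rt_whole, t) / (
        integrated_hazard(rt_part_a, t) + integrated_hazard(rt_part_b, t)
    )

# Toy example with simulated response times (seconds)
rng = np.random.default_rng(0)
rt_word = rng.gamma(4.0, 0.10, size=500)   # hypothetical word-condition RTs
rt_a = rng.gamma(5.0, 0.10, size=500)      # single-letter condition A
rt_b = rng.gamma(5.0, 0.11, size=500)      # single-letter condition B
for t in (0.3, 0.5, 0.7):
    print(f"C({t}) = {capacity_or(rt_word, rt_a, rt_b, t):.2f}")
```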

  8. Letter-sound processing deficits in children with developmental dyslexia: An ERP study.

    PubMed

    Moll, Kristina; Hasko, Sandra; Groth, Katharina; Bartling, Jürgen; Schulte-Körne, Gerd

    2016-04-01

    The time course during letter-sound processing was investigated in children with developmental dyslexia (DD) and typically developing (TD) children using electroencephalography. Thirty-eight children with DD and 25 TD children participated in a visual-auditory oddball paradigm. Event-related potentials (ERPs) elicited by standard and deviant stimuli in an early (100-190 ms) and late (560-750 ms) time window were analysed. In the early time window, ERPs elicited by the deviant stimulus were delayed and less left lateralized over fronto-temporal electrodes for children with DD compared to TD children. In the late time window, children with DD showed higher amplitudes extending more over right frontal electrodes. Longer latencies in the early time window and stronger right hemispheric activation in the late time window were associated with slower reading and naming speed. Additionally, stronger right hemispheric activation in the late time window correlated with poorer phonological awareness skills. Deficits in early stages of letter-sound processing influence later more explicit cognitive processes during letter-sound processing. Identifying the neurophysiological correlates of letter-sound processing and their relation to reading related skills provides insight into the degree of automaticity during letter-sound processing beyond behavioural measures of letter-sound-knowledge. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Decolorization of distillery spent wash effluent by electro oxidation (EC and EF) and Fenton processes: A comparative study.

    PubMed

    David, Charles; Arivazhagan, M; Tuvakara, Fazaludeen

    2015-11-01

    In this study, laboratory-scale experiments were performed to degrade the highly concentrated organic matter responsible for the color of distillery spent wash through batch oxidative methods: electrocoagulation (EC), electro-Fenton (EF), and the Fenton process. The corresponding operating parameters, namely initial pH (2-10), current intensity (1-5 A), electrolysis time (0.5-4 h), agitation speed (100-500 rpm), inter-electrode distance (0.5-4 cm), and Fenton's reagent dosage (5-40 mg/L), were varied to optimize spent wash color removal. The performance of the three processes was compared and assessed in terms of percentage color removal. For EC, 79% color removal was achieved using iron electrodes at optimum conditions of 0.5 cm inter-electrode spacing, pH 7, 5 A current intensity, 300 rpm agitation speed, and 2 h of electrolysis time. In EF, 44% spent wash decolorization was observed using carbon (graphite) electrodes at optimum conditions of 0.5 cm inter-electrode distance, pH 3, 4 A current intensity, 20 mg/L FeSO4, and 400 rpm agitation speed for 3 h of electrolysis time. The Fenton process attained 66% decolorization at optimized conditions of pH 3, 40 mg/L of Fenton's reagent, and 500 rpm agitation speed for 4 h of treatment time. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Discrete time modelization of human pilot behavior

    NASA Technical Reports Server (NTRS)

    Cavalli, D.; Soulatges, D.

    1975-01-01

    This modelization starts from the following hypotheses: the pilot's behavior is a time-discrete process, he can perform only one task at a time, and his operating mode depends on the flight subphase under consideration. Pilot behavior was observed using an electro-oculometer and a simulator cockpit. A FORTRAN program was developed using two strategies. The first is a Markovian process in which successive instrument readings are governed by a matrix of conditional probabilities. In the second, the strategy is a heuristic process, and the concepts of mental load and performance are described. The results of the two approaches have been compared with simulation data.
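
    The first strategy can be illustrated in a few lines of code. In the sketch below, the instruments and the matrix of conditional probabilities are invented for the example (the paper's FORTRAN implementation and actual probabilities are not reproduced); successive instrument readings are simply sampled from the chain.

```python
import numpy as np

# Hypothetical instruments and transition matrix: P[i, j] is the
# probability that the pilot reads instrument j next, given i now.
instruments = ["airspeed", "attitude", "altimeter", "heading"]
P = np.array([
    [0.1, 0.5, 0.2, 0.2],
    [0.3, 0.2, 0.3, 0.2],
    [0.2, 0.4, 0.1, 0.3],
    [0.3, 0.3, 0.2, 0.2],
])

rng = np.random.default_rng(1)

def simulate_readings(start, n_steps):
    """Sample a sequence of instrument readings from the Markov chain."""
    seq, state = [start], start
    for _ in range(n_steps):
        state = rng.choice(len(instruments), p=P[state])
        seq.append(state)
    return [instruments[s] for s in seq]

print(simulate_readings(start=1, n_steps=10))
```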

  11. Resin Flow Behavior Simulation of Grooved Foam Sandwich Composites with the Vacuum Assisted Resin Infusion (VARI) Molding Process

    PubMed Central

    Zhao, Chenhui; Zhang, Guangcheng; Wu, Yibo

    2012-01-01

    The resin flow behavior in the vacuum assisted resin infusion (VARI) molding process of foam sandwich composites was studied by both visualization flow experiments and computer simulation. Both experimental and simulation results show that the distribution medium (DM) leads to a shorter mold filling time in grooved foam sandwich composites via the VARI process, and that the filling time falls linearly as the DM/preform ratio increases. The pattern of the resin sources has a significant influence on filling time: a center source fills faster than an edge source, a point source takes longer than a linear source, and short edge/center patterns need longer to fill the mould than long edge/center sources.

  12. Bioreactors for high cell density and continuous multi-stage cultivations: options for process intensification in cell culture-based viral vaccine production.

    PubMed

    Tapia, Felipe; Vázquez-Ramírez, Daniel; Genzel, Yvonne; Reichl, Udo

    2016-03-01

    With an increasing demand for efficacious, safe, and affordable vaccines for human and animal use, process intensification in cell culture-based viral vaccine production demands advanced process strategies to overcome the limitations of conventional batch cultivations. However, the use of fed-batch, perfusion, or continuous modes to drive processes at high cell density (HCD) and over extended operating times has so far been little explored in large-scale viral vaccine manufacturing. Also, reductions in cell-specific virus yields for HCD cultivations have frequently been reported. Taking into account that vaccine production is one of the most heavily regulated industries in the pharmaceutical sector, with tough margins to meet, it is understandable that process intensification has only recently been considered by both academia and industry as a next step toward more efficient viral vaccine production processes. Compared to conventional batch processes, fed-batch and perfusion strategies could result in ten to a hundred times higher product yields. Both cultivation strategies can be implemented to achieve cell concentrations exceeding 10^7 cells/mL or even 10^8 cells/mL, while keeping low levels of metabolites that potentially inhibit cell growth and virus replication. The trend towards HCD processes is supported by the development of GMP-compliant cultivation platforms, i.e., acoustic settlers, hollow fiber bioreactors, and hollow fiber-based perfusion systems including tangential flow filtration (TFF) or alternating tangential flow (ATF) technologies. In this review, these process modes are discussed in detail and compared with conventional batch processes based on productivity indicators such as space-time yield, cell concentration, and product titers. In addition, options for the production of viral vaccines in continuous multi-stage bioreactors such as two- and three-stage systems are addressed. While such systems have shown virus titers similar to batch cultivations, keeping high yields for extended production times is still a challenge. Overall, we demonstrate that process intensification of cell culture-based viral vaccine production can be realized by the consequent application of fed-batch, perfusion, and continuous systems, with a significant increase in productivity. The potential for even further improvement is high, considering recent developments in the establishment of new (designer) cell lines, better characterization of host cell metabolism, advances in media design, and the use of mathematical models as tools for process optimization and control.
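
    For reference, the space-time yield used as a productivity indicator here is conventionally defined as product formed per working volume and process time (a standard definition, not a formula quoted from the review):

```latex
% Space-time yield: product mass per reactor working volume and process time
\mathrm{STY} = \frac{m_{\mathrm{product}}}{V_{\mathrm{reactor}} \cdot t_{\mathrm{process}}}
```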

  13. AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images

    USGS Publications Warehouse

    Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.

    2017-01-01

    Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower to develop, despite their potential to drastically reduce the personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images within the same subset (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review for white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied with a threshold value that controls the program's sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time; even at the highest threshold value, however, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Abundance estimates for white-tailed deer were similar between AnimalFinder and the manual-only method (~1-2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase the efficiency of camera trapping surveys.
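
    AnimalFinder itself is a MATLAB program and the record does not give its exact algorithm. Purely as an illustration of the underlying idea (flagging frames that deviate from the recurring background of a site's image series), a thresholded comparison against the pixel-wise median could look like the sketch below, in which the threshold parameters are invented.

```python
import numpy as np

def flag_candidate_images(stack, threshold=12.0, min_fraction=0.002):
    """Flag images that differ from the per-site background.

    stack: array of shape (n_images, height, width) holding grayscale
    time-lapse photos from one survey site. The pixel-wise median
    approximates the static background; images where enough pixels
    deviate strongly from it are flagged for manual review.
    threshold and min_fraction are illustrative tuning parameters,
    analogous in spirit to the program's sensitivity threshold.
    """
    background = np.median(stack, axis=0)
    flags = []
    for img in stack:
        deviating = np.abs(img - background) > threshold
        flags.append(deviating.mean() > min_fraction)
    return np.array(flags)

# Toy usage: 10 synthetic 64x64 frames, one with a bright "animal" blob
rng = np.random.default_rng(2)
stack = rng.normal(100, 2, size=(10, 64, 64))
stack[7, 20:30, 20:30] += 60                          # simulated animal in frame 7
print(np.nonzero(flag_candidate_images(stack))[0])    # -> [7]
```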

  14. Pichia pastoris secretes recombinant proteins less efficiently than Chinese hamster ovary cells but allows higher space-time yields for less complex proteins

    PubMed Central

    Maccani, Andreas; Landes, Nils; Stadlmayr, Gerhard; Maresch, Daniel; Leitner, Christian; Maurer, Michael; Gasser, Brigitte; Ernst, Wolfgang; Kunert, Renate; Mattanovich, Diethard

    2014-01-01

    Chinese hamster ovary (CHO) cells are currently the workhorse of the biopharmaceutical industry. However, yeasts such as Pichia pastoris are about to enter this field. To compare their capability for recombinant protein secretion, P. pastoris strains and CHO cell lines producing human serum albumin (HSA) and the 3D6 single chain Fv-Fc anti-HIV-1 antibody (3D6scFv-Fc) were cultivated in comparable fed-batch processes. In P. pastoris, the mean biomass-specific secretion rate (qp) was 40-fold lower for 3D6scFv-Fc compared to HSA. In contrast, qp was similar for both proteins in CHO cells. When comparing both organisms, the mean qp of the CHO cell lines was 1011-fold higher for 3D6scFv-Fc and 26-fold higher for HSA. Due to the low qp of the 3D6scFv-Fc-producing strain, the space-time yield (STY) was 9.6-fold lower for P. pastoris. Conversely, the STY of the HSA producer was 9.2-fold higher compared to CHO cells because of the shorter process time and higher biomass density. The results indicate that the protein secretion machinery of P. pastoris is much less efficient and that the secretion rate strongly depends on the complexity of the recombinant protein. However, the process efficiency of the yeast system allows higher STYs for less complex proteins. PMID:24390926

  15. An open Markov chain scheme model for a credit consumption portfolio fed by ARIMA and SARMA processes

    NASA Astrophysics Data System (ADS)

    Esquível, Manuel L.; Fernandes, José Moniz; Guerreiro, Gracinda R.

    2016-06-01

    We introduce a schematic formalism for the time evolution of a random population entering some set of classes, such that each member of the population evolves among these classes according to a scheme based on a Markov chain model. The flow of incoming members is modeled by a time series, and we detail the time series structure of the elements in each class. We present a practical application to data from a credit portfolio of a Cape Verdean bank; after modeling the entering population in two different ways - namely as an ARIMA process and as a deterministic sigmoid-type trend plus a SARMA process for the residuals - we simulate the behavior of the population and compare the results. We find that, when the Markov chain is simulated directly and compared with the observed values, the second method describes the behavior of the population more accurately.
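
    As a hedged sketch of the scheme's mechanics: new members enter a first class each period, and the whole population is pushed through the Markov transitions. The class labels, transition matrix, and AR(1) inflow below are invented for illustration; the paper fits ARIMA/SARMA models to real inflow data.

```python
import numpy as np

# Hypothetical credit classes; the last two are absorbing.
classes = ["current", "30d late", "90d late", "paid off", "default"]
P = np.array([
    [0.85, 0.08, 0.00, 0.06, 0.01],
    [0.40, 0.35, 0.15, 0.05, 0.05],
    [0.05, 0.15, 0.50, 0.05, 0.25],
    [0.00, 0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

rng = np.random.default_rng(3)

def simulate(n_periods=24, mean_inflow=100.0, phi=0.6, sigma=10.0):
    """Open Markov chain fed by an AR(1) inflow (stand-in for ARIMA/SARMA)."""
    pop = np.zeros(len(classes))
    inflow_dev = 0.0
    history = []
    for _ in range(n_periods):
        inflow_dev = phi * inflow_dev + rng.normal(0.0, sigma)
        pop = pop @ P                                  # members move between classes
        pop[0] += max(mean_inflow + inflow_dev, 0.0)   # new contracts enter
        history.append(pop.copy())
    return np.array(history)

hist = simulate()
print(dict(zip(classes, hist[-1].round(1))))
```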

  16. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm reduces the processing time of disparity estimation by adaptively selecting the disparity search range, and it also increases the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed from a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of the reconstructed image by about 4.8 dB and reduces its synthesis time by about 7.02 s compared with conventional algorithms.

  17. Systems view of adipogenesis via novel omics-driven and tissue-specific activity scoring of network functional modules

    NASA Astrophysics Data System (ADS)

    Nassiri, Isar; Lombardo, Rosario; Lauria, Mario; Morine, Melissa J.; Moyseos, Petros; Varma, Vijayalakshmi; Nolen, Greg T.; Knox, Bridgett; Sloper, Daniel; Kaput, Jim; Priami, Corrado

    2016-07-01

    The investigation of the complex processes involved in cellular differentiation must be based on unbiased, high-throughput data processing methods to identify relevant biological pathways. A number of bioinformatics tools can generate lists of pathways ranked by statistical significance (i.e. by p-value), but ideally one would also score the pathways functionally, relative to each other or to other interacting parts of the system or process. We describe a new computational method (Network Activity Score Finder - NASFinder) to identify tissue-specific, omics-determined sub-networks and the connections with their upstream regulator receptors, to obtain a systems view of the differentiation of human adipocytes. Adipogenesis of human SGBS pre-adipocyte cells in vitro was monitored with a transcriptomic data set comprising six time points (0, 6, 48, 96, 192, 384 hours). To elucidate the mechanisms of adipogenesis, NASFinder was used to perform time-point analysis, comparing each time point against the control (0 h), and time-lapse analysis, comparing each time point with the previous one. NASFinder identified the coordinated activity of seemingly unrelated processes in each comparison, providing the first systems view of adipogenesis in culture. NASFinder has been implemented as a web-based, freely available resource with novel, easy-to-read visualization of omics data sets and network modules.

  18. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    PubMed

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult; second, most pipelines are implemented in special hardware, limiting the flexibility of the processing steps implemented on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even executed without dedicated ultrasound hardware.

  19. The study of features of the structural organization of the automated information processing system of the collective type

    NASA Astrophysics Data System (ADS)

    Nikolaev, V. N.; Titov, D. V.; Syryamkin, V. I.

    2018-05-01

    A comparative assessment is made of the channel capacity of different variants of the structural organization of automated information processing systems. A model for assessing information processing time, as a function of the type of standard elements and their structural organization, is developed.

  20. Subprimal purchasing and merchandising decisions for pork: relationship to retail yield and fabrication time.

    PubMed

    Lorenzen, C L; Griffin, D B; Dockerty, T R; Walter, J P; Johnson, H K; Savell, J W

    1996-01-01

    Boxed pork was obtained to represent four different purchase specifications (different anatomical separation locations and/or external fat trim levels) common in the pork industry, in order to study retail yields and labor requirements. Bone-in loins (n = 180), boneless loins (n = 94), and Boston butts (n = 148) were assigned randomly to fabrication styles within subprimals. When comparing cutting styles within subprimals, it was evident that cutting style affected percentage retail yield and cutting time. When more bone-in cuts were prepared from bone-in loin subprimals, retail yields ranged from 92.80 +/- .61 to 95.28 +/- .45%, and processing times ranged from 222.57 +/- 10.13 to 318.99 +/- 7.85 s, across the four suppliers. When more boneless cuts were prepared from bone-in loin subprimals, retail yields ranged from 71.12 +/- 1.10 to 77.92 +/- .77%, and processing times ranged from 453.49 +/- 8.95 to 631.09 +/- 15.04 s across the different loins. Preparing boneless rather than bone-in cuts from bone-in loins resulted in lower yields and required greater processing times. Significant variations in yields and times were found within cutting styles. These differences seemed to have been the result of variation in supplier fat trim level and anatomical separation (primarily scribe length).

  1. Glass transition dynamics of stacked thin polymer films

    NASA Astrophysics Data System (ADS)

    Fukao, Koji; Terasawa, Takehide; Oda, Yuto; Nakamura, Kenji; Tahara, Daisuke

    2011-10-01

    The glass transition dynamics of stacked thin films of polystyrene and poly(2-chlorostyrene) were investigated using differential scanning calorimetry and dielectric relaxation spectroscopy. The glass transition temperature Tg of as-stacked thin polystyrene films is strongly depressed relative to that of bulk samples. However, after annealing at temperatures above Tg, the stacked thin films exhibit a glass transition at a temperature almost equal to the bulk Tg. The α-process dynamics of stacked thin films of poly(2-chlorostyrene) evolve from single-thin-film-like to bulk-like during isothermal annealing. The relaxation rate of the α process decreases with increasing annealing time, and the time scale for this evolution is very long compared with that of the reptation dynamics. At the same time, the temperature dependence of the α-relaxation time changes from Arrhenius-like to Vogel-Fulcher-Tammann behavior with increasing annealing time. The fragility index increases and the distribution of α-relaxation times narrows with increasing annealing time during isothermal annealing. The observed change in the α process is discussed with respect to the interfacial interaction between the thin layers of stacked thin polymer films.
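
    The reported crossover refers to the standard functional forms for the α-relaxation time, and the fragility index is the usual steepness measure (textbook definitions, not equations taken from the paper):

```latex
% Arrhenius-like dependence of the alpha-relaxation time
\tau_\alpha(T) = \tau_0 \exp\!\left(\frac{E_a}{k_B T}\right)
% Vogel-Fulcher-Tammann (VFT) dependence, with Vogel temperature T_0 < T_g
\tau_\alpha(T) = \tau_0 \exp\!\left(\frac{B}{T - T_0}\right)
% Fragility index, evaluated at the glass transition temperature
m = \left.\frac{d \log_{10}\tau_\alpha}{d (T_g/T)}\right|_{T = T_g}
```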

  2. Remote vs. head-mounted eye-tracking: a comparison using radiologists reading mammograms

    NASA Astrophysics Data System (ADS)

    Mello-Thoms, Claudia; Gur, David

    2007-03-01

    Eye position monitoring has been used for decades in Radiology in order to determine how radiologists interpret medical images. Using these devices several discoveries about the perception/decision making process have been made, such as the importance of comparisons of perceived abnormalities with selected areas of the background, the likelihood that a true lesion will attract visual attention early in the reading process, and the finding that most misses attract prolonged visual dwell, often comparable to dwell in the location of reported lesions. However, eye position tracking is a cumbersome process, which often requires the observer to wear a helmet gear which contains the eye tracker per se and a magnetic head tracker, which allows for the computation of head position. Observers tend to complain of fatigue after wearing the gear for a prolonged time. Recently, with the advances made to remote eye-tracking, the use of head-mounted systems seemed destined to become a thing of the past. In this study we evaluated a remote eye tracking system, and compared it to a head-mounted system, as radiologists read a case set of one-view mammograms on a high-resolution display. We compared visual search parameters between the two systems, such as time to hit the location of the lesion for the first time, amount of dwell time in the location of the lesion, total time analyzing the image, etc. We also evaluated the observers' impressions of both systems, and what their perceptions were of the restrictions of each system.

  3. Comparing an annual and daily time-step model for predicting field-scale phosphorus loss

    USDA-ARS?s Scientific Manuscript database

    Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...

  4. Information Fusion for Feature Extraction and the Development of Geospatial Information

    DTIC Science & Technology

    2004-07-01

    of automated processing. 2. Requirements for Geospatial Information. Accurate, timely geospatial information is critical for many military...this evaluation illustrates some of the difficulties in comparing manual and automated processing results (figure 5). The automated delineation of

  5. a Comparative Case Study of Reflection Seismic Imaging Method

    NASA Astrophysics Data System (ADS)

    Alamooti, M.; Aydin, A.

    2017-12-01

    Seismic imaging is the most common means of gathering information about subsurface structural features. The accuracy of seismic images may be highly variable depending on the complexity of the subsurface and on how the seismic data are processed. One of the crucial steps in this process, especially in layered sequences with complicated structure, is the time and/or depth migration of seismic data. The primary purpose of migration is to increase the spatial resolution of seismic images by repositioning the recorded seismic signal back to its original point of reflection in time/space, thereby enhancing information about complex structure. In this study, our objective is to process a seismic data set (courtesy of the University of South Carolina) to generate an image on which the Magruder fault near Allendale, SC can be clearly distinguished and its attitude accurately depicted. The data were gathered by the common mid-point method with 60 geophones equally spaced along an approximately 550 m long traverse over nearly flat ground. The results obtained from the application of different migration algorithms (including finite-difference and Kirchhoff) are compared in the time and depth domains to investigate the efficiency of each algorithm in reducing processing time and improving the accuracy of seismic images in reflecting the correct position of the Magruder fault.

  6. Resource constrained design of artificial neural networks using comparator neural network

    NASA Technical Reports Server (NTRS)

    Wah, Benjamin W.; Karnik, Tanay S.

    1992-01-01

    We present a systematic design method, executed under resource constraints, for automating the design of artificial neural networks using the back error propagation algorithm. Our system aims at finding the best possible configuration for solving the given application with a proper tradeoff between training time and network complexity. The design of such a system is hampered by three related problems. First, there are infinitely many possible network configurations, each of which may take an exceedingly long time to train; hence, it is impossible to enumerate and train all of them to completion within fixed time, space, and resource constraints. Second, expert knowledge on predicting good network configurations is heuristic in nature and is application dependent, rendering it difficult to characterize fully in the design process. A learning procedure that refines this knowledge based on examples of training neural networks for various applications is, therefore, essential. Third, the objective of the network to be designed is ill-defined, as it is based on a subjective tradeoff between training time and network cost. A design process that proposes alternate configurations under different cost-performance tradeoffs is important. We have developed a Design System which schedules the available time, divided into quanta, for testing alternative network configurations. Its goal is to select/generate and test alternative network configurations in each quantum, and to find the best network when the time is expended. Since time is limited, a dynamic schedule that determines the network configuration to be tested in each quantum is developed. The schedule is based on relative comparison of predicted training times of alternative network configurations using the comparator network paradigm. The comparator network has been trained to compare training times for a large variety of TSSE-versus-time traces collected during back-propagation learning of various applications.

  7. Transmodal comparison of auditory, motor, and visual post-processing with and without intentional short-term memory maintenance.

    PubMed

    Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias

    2010-12-01

    To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory, we compared late negative waves (N700) during the post-processing of single lateralized stimuli, separated by long intertrial intervals, across the auditory, motor and visual modalities. Tasks either required attention to the post-processing of preceding events (i.e. active short-term memory maintenance) or competed with it. The N700 indicated that cortical post-processing outlasted short movements, as well as short auditory or visual stimuli, by over half a second without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time courses across the different modalities. The lateralization and amplitude of the auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time course and modality-specific topography of the N700 without intentional memory maintenance. Memory-maintenance-related lateralized negative potentials may thus be related to a less lateralized, modality-dependent post-processing N700 component that occurs even without intentional memory maintenance (an automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Effect of nucleation time on bending response of ionic polymer–metal composite actuators

    DOE PAGES

    Kim, Suran; Hong, Seungbum; Choi, Yoon-Young; ...

    2013-07-02

    We attempted autocatalytic electroless plating of nickel to replace the electroless impregnation-reduction (IR) method in ionic polymer–metal composite (IPMC) actuators, in order to reduce cost and processing time. Because the nucleation time of Pd–Sn colloids is the determining factor of overall processing time, we used the nucleation time as our control parameter. To optimize the nucleation time and investigate its effect on the performance of IPMC actuators, we analyzed the relationship between nucleation time, interface morphology, and electrical properties. The optimized nucleation time was 10 h. Furthermore, the trends of the performance and electrical properties as a function of nucleation time were attributed to the fact that the Ni penetration depth was determined by the minimum diffusion length of either the Pd–Sn colloids or the reducing agent ions. The Ni-IPMC actuators can be fabricated in less than 14 h of processing time without deteriorating actuator performance, which is comparable to Pt-IPMC prepared by the IR method.

  9. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration plays a crucial role in several important application domains. As algorithms become more complex, computation requirements increase, and there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm and present an image restoration processing system based on a coordinated framework of FPGA and DSP (CoFD). Single-precision floating-point processing units with small-scale cascades and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments were performed: the system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
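
    For context, one Richardson-Lucy iteration multiplies the current estimate by the correlation of (data / re-blurred estimate) with the point spread function. The sketch below shows the non-blind update in NumPy/SciPy; the blind variant alternates an analogous update for the PSF, and the paper's FPGA/DSP partitioning is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy deconvolution: u <- u * ((d / (u conv p)) corr p)."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_mirror = psf[::-1, ::-1]          # correlation = convolution with flipped PSF
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy usage: blur a synthetic image with a Gaussian PSF, then restore it
x, y = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / 2.0)
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[30:34, 30:34] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(f"peak before {blurred.max():.2f}, after {restored.max():.2f}")
```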

  10. Assessing the quality of radiographic processing in general dental practice.

    PubMed

    Thornley, P H; Stewardson, D A; Rout, P G J; Burke, F J T

    2006-05-13

    To determine if a commercial device (Vischeck) for monitoring film processing quality was a practical option in general dental practice, and to assess processing quality among a group of GDPs in the West Midlands with this device. Clinical evaluation. General dental practice, UK, 2004. Ten GDP volunteers from a practice based research group processed Vischeck strips (a) when chemicals were changed, (b) one week later, and (c) immediately before the next change of chemicals. These were compared with strips processed under ideal conditions. Additionally, a series of duplicate radiographs were produced and processed together with Vischeck strips in progressively more dilute developer solutions to compare the change in radiograph quality assessed clinically with that derived from the Vischeck. The Vischeck strips suggested that at the time chosen for change of processing chemicals, eight dentists had been processing films well beyond the point indicated for replacement. Solutions were changed after a wide range of time periods and number of films processed. The calibration of the Vischeck strip correlated closely to a clinical assessment of acceptable film quality. Vischeck strips are a useful aid to monitoring processing quality in automatic developers in general dental practice. Most of this group of GDPs were using chemicals beyond the point at which diagnostic yield would be affected.

  11. Interoceptive Processes in Anorexia Nervosa in the Time Course of Cognitive-Behavioral Therapy: A Pilot Study.

    PubMed

    Fischer, Dana; Berberich, Götz; Zaudig, Michael; Krauseneck, Till; Weiss, Sarah; Pollatos, Olga

    2016-01-01

    Previous studies report reduced interoceptive abilities in anorexia nervosa (AN) using various methods. Recent research suggests that different levels of interoceptive processes, aimed at different subdomains of interoceptive abilities, must be distinguished, as these levels can be differentially affected. Two important levels are interoceptive accuracy (IA), derived from objective performance tasks such as the heartbeat detection task, and interoceptive sensibility (IS), as assessed by self-report. There is a lack of studies investigating both IA and IS in AN and examining them over the time course of therapy. The aim of this pilot study was to evaluate the different interoceptive processes - especially IA and IS - over the time course of therapy. Fifteen patients with AN (restricting type) from the Psychosomatic Clinic in Windach were investigated three times (T1, T2, T3) during standardized cognitive-behavioral therapy and compared with 15 matched healthy controls assessed at Ulm University in a comparable design. All participants performed the heartbeat detection task examining IA and completed standard psychological assessments including an assessment of IS. Patients with AN showed significantly decreased weight, higher levels of depression, and both reduced IA and IS compared to healthy controls at T1. Following therapy, patients recovered in terms of weight and depression symptomatology, and a descriptive trend toward recovery of IA and IS was observed. Our findings suggest that interoceptive deficits are still present in recovered patients. Further investigations are therefore needed with more patients, differentiating between relapsed and recovered patients, and with more specific training methods to improve interoceptive processes.

  12. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture and compare the resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation on 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks, each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  13. Motor demands impact speed of information processing in Autism Spectrum Disorders

    PubMed Central

    Kenworthy, Lauren; Yerys, Benjamin E.; Weinblatt, Rachel; Abrams, Danielle N.; Wallace, Gregory L.

    2015-01-01

    Objective: The apparent contradiction between preserved or even enhanced perceptual processing speed on inspection time tasks in autism spectrum disorders (ASD) and impaired performance on complex processing speed tasks that require motor output (e.g. the Wechsler Processing Speed Index) has not yet been systematically investigated. This study investigates whether adding motor output demands to an inspection time task impairs ASD performance compared to that of typically developing control (TDC) children. Method: The performance of children with ASD (n=28; mean FSIQ=115) and TDC children (n=25; mean FSIQ=122) was compared on processing speed tasks with increasing motor demand. Correlations were run between ASD task performance and Autism Diagnostic Observation Schedule (ADOS) Communication scores. Results: Performance of the ASD and TDC groups on a simple perceptual processing speed task with minimal motor demand was equivalent, but diverged (ASD worse than TDC) on two tasks with the same stimuli but increased motor output demands. ASD performance on the moderate, but not the high, speeded motor output demand task was negatively correlated with ADOS communication symptoms. Conclusions: These data address the apparent contradiction between preserved inspection time and slowed "processing speed" in ASD. They show that processing speed is preserved when motor demands are minimized, but that increased motor output demands interfere with the ability to act on perceptual processing of simple stimuli. Reducing motor demands (e.g. through the use of computers) may increase the capacity of people with ASD to demonstrate good perceptual processing in a variety of educational, vocational and social settings. PMID:23937483

  14. Empirical comparison of heuristic load distribution in point-to-point multicomputer networks

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.

    1990-01-01

    The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.

  15. Statistical process control as a tool for controlling operating room performance: retrospective analysis and benchmarking.

    PubMed

    Chen, Tsung-Tai; Chang, Yun-Jau; Ku, Shei-Ling; Chung, Kuo-Piao

    2010-10-01

    There is much research using statistical process control (SPC) to monitor surgical performance, including comparisons among groups to detect small process shifts, but few of these studies have included a stabilization process. This study aimed to analyse the performance of surgeons in the operating room (OR) and to set a benchmark by SPC after the process had stabilized. The OR profiles of 499 patients who underwent laparoscopic cholecystectomy performed by 16 surgeons at a tertiary hospital in Taiwan during 2005 and 2006 were recorded. SPC was applied to analyse operative and non-operative times using the following five steps: first, the times were divided into two segments; second, they were normalized; third, they were evaluated as individual processes; fourth, the ARL(0) was calculated; and fifth, the different groups (surgeons) were compared. Outliers were excluded to ensure stability for each group and to facilitate inter-group comparison. The results showed that in the stabilized process, only one surgeon exhibited a significantly shorter total process time (including operative time and non-operative time). In this study, we use five steps to demonstrate how to control surgical and non-surgical time in phase I, and we identify measures that can be taken to prevent skew and instability in the process. Using SPC, one surgeon could be shown to be a real benchmark. © 2010 Blackwell Publishing Ltd.
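
    As an illustration of the third step (evaluating the normalized times as individual processes), an individuals/moving-range (X-mR) chart places control limits at the mean plus or minus 2.66 times the mean moving range (2.66 = 3/d2 with d2 = 1.128). The sketch below uses simulated, log-transformed operative times; the paper's exact chart choice and ARL(0) computation are not reproduced.

```python
import numpy as np

def xmr_limits(times):
    """Individuals-chart control limits from an X-mR chart:
    center +/- 2.66 * mean moving range (2.66 = 3/d2, d2 = 1.128)."""
    x = np.asarray(times, dtype=float)
    mr = np.abs(np.diff(x)).mean()       # mean moving range of consecutive points
    center = x.mean()
    return center - 2.66 * mr, center, center + 2.66 * mr

# Toy usage: log-transformed operative times (minutes) for one surgeon;
# log-normalizing surgical times before charting is a common choice.
rng = np.random.default_rng(4)
op_times = np.log(rng.lognormal(mean=4.2, sigma=0.2, size=30))
lcl, center, ucl = xmr_limits(op_times)
out_of_control = np.nonzero((op_times < lcl) | (op_times > ucl))[0]
print(f"LCL={lcl:.2f}, center={center:.2f}, UCL={ucl:.2f}, outliers={out_of_control}")
```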

  16. Mathematical modeling and hydrodynamics of Electrochemical deburring process

    NASA Astrophysics Data System (ADS)

    Prabhu, Satisha; Abhishek Kumar, K., Dr

    2018-04-01

    Electrochemical deburring (ECD), a variant of electrochemical machining, is considered one of the most efficient methods for deburring intersecting features and internal parts. Since manual deburring costs are comparatively high, this method can potentially be used in both batch and flow production. Another advantage of the process is that deburring takes on the order of seconds, compared with other methods. In this paper, the mathematical modeling of electrochemical deburring is analysed from the point of view of deburring time and base metal removal. The material removal rate is simultaneously affected by electrolyte temperature and bubble formation. The mathematical model and the hydrodynamics of the process shed light on optimum velocity calculations, which can be determined theoretically. The analysis can be a powerful tool for predicting the above-mentioned parameters before experimentation.

  17. Identifying causes of laboratory turnaround time delay in the emergency department.

    PubMed

    Jalili, Mohammad; Shalileh, Keivan; Mojtahed, Ali; Mojtahed, Mohammad; Moradi-Lakeh, Maziar

    2012-12-01

    Laboratory turnaround time (TAT) is an important determinant of patient stay and quality of care. Our objective was to evaluate laboratory TAT in our emergency department (ED) and to generate a simple model for identifying the primary causes of delay. We measured the TATs of hemoglobin, potassium, and prothrombin time tests requested in the ED of a tertiary-care, metropolitan hospital during a consecutive one-week period. The times of the different steps in the test turnaround process (physician order, nurse registration, blood draw, specimen dispatch from the ED, specimen arrival at the laboratory, and result availability) were recorded, and the intervals between these steps (order processing, specimen collection, ED waiting, transit, and within-laboratory time) and the total TAT were calculated. Median TATs for hemoglobin and potassium were compared with those of the 1990 Q-Probes study (25 min for hemoglobin and 36 min for potassium) and its recommended goals (45 min for 90% of tests). Intervals were compared according to the proportion of the TAT they comprised. Median TATs (170 min for 132 hemoglobin tests, 225 min for 172 potassium tests, and 195.5 min for 128 prothrombin tests) were drastically longer than the Q-Probes reported and recommended TATs. The longest intervals were ED waiting time and order processing. Laboratory TAT varies among institutions, and data are sparse in developing countries. In our ED, actions to reduce ED waiting time and order processing are top priorities. We recommend that other institutions in settings with limited resources use this model to identify their own priorities for reducing laboratory TAT.
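
    The interval decomposition amounts to successive differences between the recorded step timestamps, summarized by medians. A minimal sketch (with invented timestamps and field names) is below.

```python
import pandas as pd

# Hypothetical per-test timestamps for the six recorded steps
steps = ["order", "registration", "blood_draw", "dispatch", "lab_arrival", "result"]
df = pd.DataFrame([
    ["08:00", "08:10", "08:35", "09:20", "09:30", "10:50"],
    ["09:05", "09:25", "09:50", "10:40", "10:52", "12:15"],
], columns=steps).apply(pd.to_datetime)

# Intervals between consecutive steps, plus total turnaround time
intervals = {
    "order_processing": df["registration"] - df["order"],
    "specimen_collection": df["blood_draw"] - df["registration"],
    "ED_waiting": df["dispatch"] - df["blood_draw"],
    "transit": df["lab_arrival"] - df["dispatch"],
    "within_lab": df["result"] - df["lab_arrival"],
    "total_TAT": df["result"] - df["order"],
}
for name, delta in intervals.items():
    print(f"{name}: median {delta.median()}")
```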

  18. The impact of a preloaded intraocular lens delivery system on operating room efficiency in routine cataract surgery.

    PubMed

    Jones, Jason J; Chu, Jeffrey; Graham, Jacob; Zaluski, Serge; Rocha, Guillermo

    2016-01-01

    The aim of this study was to evaluate the operational impact of using preloaded intraocular lens (IOL) delivery systems compared with manually loaded IOL delivery processes during routine cataract surgeries. Time and motion data, staff and surgery schedules, and cost accounting reports were collected across three sites located in the US, France, and Canada. Time and motion data were collected for manually loaded IOL processes and preloaded IOL delivery systems over four surgery days. Staff and surgery schedules and cost accounting reports were collected during the 2 months prior and after introduction of the preloaded IOL delivery system. The study included a total of 154 routine cataract surgeries across all three sites. Of these, 77 surgeries were performed using a preloaded IOL delivery system, and the remaining 77 surgeries were performed using a manual IOL delivery process. Across all three sites, use of the preloaded IOL delivery system significantly decreased mean total case time by 6.2%-12.0% (P<0.001 for data from Canada and the US and P<0.05 for data from France). Use of the preloaded delivery system also decreased surgeon lens time, surgeon delays, and eliminated lens touches during IOL preparation. Compared to a manual IOL delivery process, use of a preloaded IOL delivery system for cataract surgery reduced total case time, total surgeon lens time, surgeon delays, and eliminated IOL touches. The time savings provided by the preloaded IOL delivery system provide an opportunity for sites to improve routine cataract surgery throughput without impacting surgeon or staff capacity.

  19. Artifact Noise Removal Techniques on Seismocardiogram Using Two Tri-Axial Accelerometers

    PubMed Central

    Luu, Loc; Dinh, Anh

    2018-01-01

    The aim of this study is to investigate motion noise removal techniques using a two-accelerometer sensor system with various placements of the sensors on the body, during gentle movement and walking. A Wi-Fi based data acquisition system and a Matlab processing framework were developed to collect and process data while the subjects are in motion. The tests included eight volunteers with no record of heart disease. The walking and running data were analyzed to find the minimal-noise bandwidth of the SCG signal; this bandwidth is used to design the filters in the motion noise removal techniques and in peak detection. There are two main techniques for combining signals from the two sensors to mitigate motion artifact: analog processing and digital processing. The analog processing comprises analog circuits performing adding or subtracting functions and a bandpass filter to remove artifact noise before the data acquisition system. The digital processing processes all the data using combinations of total acceleration and z-axis-only acceleration. The two techniques were tested on three placements of the accelerometer sensors (horizontal, vertical, and diagonal) during gentle motion and walking. In general, total acceleration and z-axis acceleration are the best techniques for gentle motion on all sensor placements, improving average systolic signal-to-noise ratio (SNR) around 2 times and average diastolic SNR around 3 times compared to traditional methods using only one accelerometer. With walking motion, the ADDER and z-axis acceleration are the best techniques for all placements of the sensors on the body, enhancing average systolic SNR about 7 times and average diastolic SNR about 11 times compared to the one-accelerometer method. Among the sensor placements, the horizontal placement performed outstandingly compared with the other positions for all motions. PMID:29614821

  20. Automated Data Abstraction of Cardiopulmonary Resuscitation Process Measures for Complete Episodes of Cardiac Arrest Resuscitation.

    PubMed

    Lin, Steve; Turgulov, Anuar; Taher, Ahmed; Buick, Jason E; Byers, Adam; Drennan, Ian R; Hu, Samantha; J Morrison, Laurie

    2016-10-01

    Cardiopulmonary resuscitation (CPR) process measures research and quality assurance have traditionally been limited to the first 5 minutes of resuscitation due to the significant costs in time, resources, and personnel of manual data abstraction. CPR performance may change over time during prolonged resuscitations, which represents a significant knowledge gap. Moreover, the currently available commercial software output of CPR process measures is difficult to analyze. The objective was to develop and validate a software program to help automate the abstraction and transfer of CPR process measures data from electronic defibrillators for complete episodes of cardiac arrest resuscitation. We developed a software program to facilitate and help automate CPR data abstraction and transfer from electronic defibrillators for entire resuscitation episodes. Using an intermediary Extensible Markup Language export file, the automated software transfers CPR process measures data (electrocardiogram [ECG] number, CPR start time, number of ventilations, number of chest compressions, compression rate per minute, compression depth per minute, compression fraction, and end-tidal CO2 per minute). We performed an internal validation of the software program on 50 randomly selected cardiac arrest cases with resuscitation durations between 15 and 60 minutes. CPR process measures were manually abstracted and transferred independently by two trained data abstractors and by the automated software program, followed by manual interpretation of raw ECG tracings, treatment interventions, and patient events. Error rates and the time needed for data abstraction, transfer, and interpretation were measured for both manual and automated methods, compared to an additional independent reviewer. A total of 9,826 data points were each abstracted by the two abstractors and by the software program. Manual data abstraction resulted in a total of six errors (0.06%) compared to zero errors by the software program. The mean ± SD time measured per case for manual data abstraction was 20.3 ± 2.7 minutes compared to 5.3 ± 1.4 minutes using the software program (p = 0.003). We developed and validated an automated software program that efficiently abstracts and transfers CPR process measures data from electronic defibrillators for complete cardiac arrest episodes. This software will enable future cardiac arrest studies and quality assurance programs to evaluate the impact of CPR process measures during prolonged resuscitations. © 2016 by the Society for Academic Emergency Medicine.
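
    The record does not give the schema of the intermediary XML export. Purely as an illustration of the approach, a parser for a hypothetical export file, with invented tag and attribute names, might look like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical structure of the intermediary XML export (invented tags)
SAMPLE = """
<episode ecg="1234" cpr_start="00:02:10">
  <minute index="1" compressions="104" ventilations="6" depth_mm="52" fraction="0.81"/>
  <minute index="2" compressions="98"  ventilations="7" depth_mm="49" fraction="0.77"/>
</episode>
"""

def summarize(xml_text):
    """Collect per-minute CPR process measures from the export."""
    root = ET.fromstring(xml_text)
    rows = []
    for m in root.iter("minute"):
        rows.append({
            "minute": int(m.get("index")),
            "rate_per_min": int(m.get("compressions")),
            "depth_mm": float(m.get("depth_mm")),
            "fraction": float(m.get("fraction")),
        })
    return root.get("ecg"), rows

ecg, rows = summarize(SAMPLE)
print(ecg, rows)
```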

  1. Turbulent mixing and removal of ozone within an Amazon rainforest canopy

    NASA Astrophysics Data System (ADS)

    Freire, L. S.; Gerken, T.; Ruiz-Plancarte, J.; Wei, D.; Fuentes, J. D.; Katul, G. G.; Dias, N. L.; Acevedo, O. C.; Chamecki, M.

    2017-03-01

    Simultaneous profiles of turbulence statistics and mean ozone mixing ratio are used to establish a relation between eddy diffusivity and ozone mixing within the Amazon forest. A one-dimensional diffusion model is proposed and used to infer mixing time scales from the eddy diffusivity profiles. Data and model results indicate that during daytime conditions, the upper (lower) half of the canopy is well (partially) mixed most of the time, and that most of the vertical extent of the forest can be mixed in less than an hour. During nighttime, most of the canopy is predominantly poorly mixed, except for periods with bursts of intermittent turbulence. Even though turbulence is faster than chemistry during daytime, both processes have comparable time scales in the lower canopy layers during nighttime conditions. Nonchemical loss time scales (associated with stomatal uptake and dry deposition) for the entire forest are comparable to the turbulent mixing time scale in the lower canopy during the day and in the entire canopy during the night, indicating a tight coupling between turbulent transport and the dry deposition and stomatal uptake processes. Because of the significant time-of-day and height variability of the turbulent mixing time scale inside the canopy, it is important to take this variability into account when studying chemical and biophysical processes in the forest environment. The method proposed here for estimating turbulent mixing time scales is a reliable alternative to currently used models, especially when the vertical distribution of the time scale is relevant.
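
    The inferred time scales follow from the usual scaling of the one-dimensional diffusion model: a layer of depth Δz with eddy diffusivity K mixes on a time scale of order Δz²/K (a standard scaling argument, not the paper's exact formulation):

```latex
% One-dimensional turbulent diffusion of a scalar c with eddy diffusivity K(z)
\frac{\partial c}{\partial t} = \frac{\partial}{\partial z}\!\left(K(z)\,\frac{\partial c}{\partial z}\right),
\qquad
\tau_{\mathrm{mix}} \sim \frac{(\Delta z)^2}{K}
```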

  2. [Development of an automated processing method to detect coronary motion for coronary magnetic resonance angiography].

    PubMed

    Asou, Hiroya; Imada, N; Sato, T

    2010-06-20

    On coronary MR angiography (CMRA), cardiac motion worsens the image quality. To improve image quality, detection of cardiac motion, and especially of individual coronary motion, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of individual coronary arteries. First, coronary cine MRI was acquired at a level about 3 cm below the aortic valve (80 images/R-R). The chronological change of the signal in each pixel of the images was evaluated with Fourier transformation. Noise reduction with subtraction and extraction processing was performed. To extract regions of greater motion, such as the coronary arteries, morphological filtering and labeling were added. Using these image processing steps, individual coronary motion was extracted and the individual coronary static time was calculated automatically. We compared the ordinary manual method and the new automated method in 10 healthy volunteers. The coronary static time calculated with our method was shorter than that of the ordinary manual method, and the scan time became about 10% longer. Image quality improved with our method. Our automated detection method for coronary static time, based on chronological Fourier transformation, has the potential to improve the image quality of CMRA with easy processing.
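
    A hedged sketch of the pixel-wise frequency analysis described above: for each pixel, the energy in the non-DC temporal harmonics serves as a motion score, which is thresholded, cleaned morphologically, and labeled. The threshold and structuring element are invented, and the clinical pipeline's subtraction-based noise reduction and static-time calculation are not reproduced.

```python
import numpy as np
from scipy import ndimage

def motion_map(cine, motion_threshold=0.15):
    """Locate strongly moving structures in a cardiac cine series.

    cine: array (n_frames, h, w), e.g. 80 frames per R-R interval.
    For each pixel, the temporal FFT is taken; energy in the non-DC
    harmonics is used as a motion score, and thresholding plus
    morphological opening and labeling isolates moving structures.
    """
    spectrum = np.fft.rfft(cine, axis=0)
    dc = np.abs(spectrum[0])
    ac_energy = np.abs(spectrum[1:]).sum(axis=0)
    score = ac_energy / np.maximum(dc, 1e-9)      # motion relative to mean signal
    mask = score > motion_threshold
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    return labels, n

# Toy usage: 80 frames with one small oscillating bright spot
t = np.arange(80)
cine = np.ones((80, 32, 32))
cine[:, 10:13, 10:13] += 0.5 * np.sin(2 * np.pi * t / 80)[:, None, None]
labels, n = motion_map(cine)
print(f"{n} moving region(s) found")
```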

  3. FINDING A METHOD FOR THE MADNESS: A COMPARATIVE ANALYSIS OF STRATEGIC DESIGN METHODOLOGIES

    DTIC Science & Technology

    2017-06-01

    FINDING A METHOD FOR THE MADNESS: A COMPARATIVE ANALYSIS OF STRATEGIC DESIGN METHODOLOGIES BY AMANDA DONNELLY A THESIS...work develops a comparative model for strategic design methodologies, focusing on the primary elements of vision, time, process, communication and...collaboration, and risk assessment. My analysis dissects and compares three potential design methodologies including, net assessment, scenarios and

  4. Real-time parameter optimization based on neural network for smart injection molding

    NASA Astrophysics Data System (ADS)

    Lee, H.; Liau, Y.; Ryu, K.

    2018-03-01

    The manufacturing industry has been facing several challenges, including sustainability, performance, and quality of production. Manufacturers attempt to enhance their competitiveness by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing-process level. The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, this process is used to produce precise parts in various industries such as automobiles, optics, and medical devices. The injection molding process involves a mixture of discrete and continuous variables, and the variables generated during the process must be considered in order to optimize quality. Furthermore, optimal parameter setting is time-consuming work, since process parameters cannot easily be corrected during process execution. In this research, we propose a neural-network-based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, in contrast to the pre-production parameter optimization of typical studies.
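
    The following sketch illustrates the general approach with a small feed-forward network (scikit-learn's MLPRegressor) trained on synthetic data; the parameter names and quality response are assumptions, not the authors' setup:

```python
# Hedged sketch of the idea (not the paper's implementation): train a
# small neural network to map process parameters to a quality response,
# then pick the candidate setting with the best predicted quality.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# columns: melt temperature, injection pressure, cooling time (assumed parameters)
X = rng.uniform([200, 50, 10], [280, 120, 40], size=(500, 3))
y = -((X[:, 0] - 240) ** 2) / 100 - ((X[:, 1] - 90) ** 2) / 50 + rng.normal(0, 1, 500)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)

candidates = rng.uniform([200, 50, 10], [280, 120, 40], size=(1000, 3))
best = candidates[np.argmax(model.predict(candidates))]
print("suggested setting (temp, pressure, cooling):", np.round(best, 1))
```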

  5. Dual-stream accounts bridge the gap between monkey audition and human language processing. Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by Michael Arbib

    NASA Astrophysics Data System (ADS)

    Garrod, Simon; Pickering, Martin J.

    2016-03-01

    Over the last few years there has been a resurgence of interest in dual-stream dorsal-ventral accounts of language processing [4]. This has led to recent attempts to bridge the gap between the neurobiology of primate audition and human language processing with the dorsal auditory stream assumed to underlie time-dependent (and syntactic) processing and the ventral to underlie some form of time-independent (and semantic) analysis of the auditory input [3,10]. Michael Arbib [1] considers these developments in relation to his earlier Mirror System Hypothesis about the origins of human language processing [11].

  6. Comparative Effects of Antihistamines on Aircrew Mission Effectiveness under Sustained Operations

    DTIC Science & Technology

    1992-06-01

    measures consist mainly of process measures. Process measures are measures of activities used to accomplish the mission and produce the final results...They include task completion times and response variability, and information processing rates as they relate to unique task assignment. Performance...contains process measures that assess the Individual contributions of hardware/software and human components to overall system performance. Measures

  7. Pad ultrasonic batch dyeing of causticized lyocell fabric with reactive dyes.

    PubMed

    Babar, Aijaz Ahmed; Peerzada, Mazhar Hussain; Jhatial, Abdul Khalique; Bughio, Noor-Ul-Ain

    2017-01-01

    Conventionally, cellulosic fabric dyed with reactive dyes requires a significant amount of salt, and the dyeing of solvent-spun regenerated cellulosic fibers is a critical process. This paper presents the dyeing results of lyocell fabrics dyed with conventional pad batch (CPB) and pad ultrasonic batch (PUB) processes. The dyeing of lyocell fabrics was carried out with two commercial dyes, Drimarine Blue CL-BR and Ramazol Blue RGB. Dyeing parameters including the concentrations of sodium hydroxide and sodium carbonate and the dwell time were compared for the two processes. The outcomes show that PUB-dyed samples offered reasonably higher color yield and dye fixation than CPB-dyed samples. A remarkable reduction of 12 h in batching time, 18 ml/l in NaOH, and 5 g/l in Na2CO3 was observed for PUB-processed samples producing results similar to the CPB process, making PUB a more economical, productive, and environmentally friendly process. Color fastness examination showed identical results for both the PUB and CPB methods. No significant change in the surface morphology of PUB-processed samples was observed through scanning electron microscope (SEM) analysis. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Using convolutional neural networks to estimate time-of-flight from PET detector waveforms

    NASA Astrophysics Data System (ADS)

    Berg, Eric; Cherry, Simon R.

    2018-01-01

    Although there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still make use of simple signal processing methods to extract the time-of-flight information from the detector signals. In most cases, the timing pick-off for each waveform is computed using leading edge discrimination or constant fraction discrimination, as these were historically easily implemented with analog pulse processing electronics. However, now with the availability of fast waveform digitizers, there is opportunity to make use of more of the timing information contained in the coincident detector waveforms with advanced signal processing techniques. Here we describe the application of deep convolutional neural networks (CNNs), a type of machine learning, to estimate time-of-flight directly from the pair of digitized detector waveforms for a coincident event. One of the key features of this approach is the simplicity in obtaining ground-truth-labeled data needed to train the CNN: the true time-of-flight is determined from the difference in path length between the positron emission and each of the coincident detectors, which can be easily controlled experimentally. The experimental setup used here made use of two photomultiplier tube-based scintillation detectors, and a point source, stepped in 5 mm increments over a 15 cm range between the two detectors. The detector waveforms were digitized at 10 GS/s using a bench-top oscilloscope. The results shown here demonstrate that CNN-based time-of-flight estimation improves timing resolution by 20% compared to leading edge discrimination (231 ps versus 185 ps), and 23% compared to constant fraction discrimination (242 ps versus 185 ps). By comparing several different CNN architectures, we also showed that CNN depth (number of convolutional and fully connected layers) had the largest impact on timing resolution, while the exact network parameters, such as convolutional filter size and number of feature maps, had only a minor influence.
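
    A minimal PyTorch sketch of this kind of estimator is shown below: the two coincident waveforms enter as two input channels of a 1D CNN that regresses a single time offset. The layer sizes are illustrative, not the architecture evaluated in the paper:

```python
# Hedged sketch of a CNN time-of-flight estimator in the spirit of the
# paper: two digitized coincident waveforms as two input channels, one
# regressed time offset out. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TofCNN(nn.Module):
    def __init__(self, n_samples=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_samples // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),              # predicted time-of-flight (ps)
        )

    def forward(self, x):                  # x: (batch, 2, n_samples)
        return self.head(self.features(x))

model = TofCNN()
waveforms = torch.randn(8, 2, 256)         # synthetic coincident waveform pairs
print(model(waveforms).shape)              # torch.Size([8, 1])
```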

  9. Effect of processing time delay on the dose response of Kodak EDR2 film.

    PubMed

    Childress, Nathan L; Rosen, Isaac I

    2004-08-01

    Kodak EDR2 film is a widely used two-dimensional dosimeter for intensity modulated radiotherapy (IMRT) measurements. Our clinical use of EDR2 film for IMRT verifications revealed variations and uncertainties in dose response that were larger than expected, given that we perform film calibrations for every experimental measurement. We found that the length of time between film exposure and processing can affect the absolute dose response of EDR2 film by as much as 4%-6%. EDR2 films were exposed to 300 cGy using 6 and 18 MV 10 x 10 cm2 fields and then processed after time delays ranging from 2 min to 24 h. An ion chamber measured the relative dose for these film exposures. The ratio of optical density (OD) to dose stabilized after 3 h. Compared to its stable value, the film response was 4%-6% lower at 2 min and 1% lower at 1 h. The results of the 4 min and 1 h processing time delays were verified with a total of four different EDR2 film batches. The OD/dose response for XV2 films was consistent for time periods of 4 min and 1 h between exposure and processing. To investigate possible interactions of the processing time delay effect with dose, single EDR2 films were irradiated to eight different dose levels between 45 and 330 cGy using smaller 3 x 3 cm2 areas. These films were processed after time delays of 1, 3, and 6 h, using 6 and 18 MV photon qualities. The results at all dose levels were consistent, indicating that there is no change in the processing time delay effect for different doses. The difference in the time delay effect between the 6 and 18 MV measurements was negligible for all experiments. To rule out bias in selecting film regions for OD measurement, we compared the use of a specialized algorithm that systematically determines regions of interest inside the 10 x 10 cm2 exposure areas to manually selected regions of interest. There was a maximum difference of only 0.07% between the manually and automatically selected regions, indicating that the use of a systematic algorithm to determine regions of interest in large and fairly uniform areas is not necessary. Based on these results, we recommend a minimum time of 1 h between exposure and processing for all EDR2 film measurements.

  10. Dual process theory and intermediate effect: are faculty and residents' performance on multiple-choice, licensing exam questions different?

    PubMed

    Dong, Ting; Durning, Steven J; Artino, Anthony R; van der Vleuten, Cees; Holmboe, Eric; Lipner, Rebecca; Schuwirth, Lambert

    2015-04-01

    Clinical reasoning is essential for the practice of medicine. Dual process theory conceptualizes reasoning as falling into two general categories: nonanalytic reasoning (pattern recognition) and analytic reasoning (active comparing and contrasting of alternatives). The debate continues regarding how expert performance develops and how individuals make the best use of analytic and nonanalytic processes. Several investigators have identified the unexpected finding that intermediates tend to perform better on licensing examination items than experts, which has been termed the "intermediate effect." We explored differences between faculty and residents on multiple-choice questions (MCQs) using dual process measures (both reading and answering times) to inform this ongoing debate. Faculty (board-certified internists; experts) and residents (internal medicine interns; intermediates) answered live licensing examination MCQs (U.S. Medical Licensing Examination Step 2 Clinical Knowledge and American Board of Internal Medicine Certifying Examination) while being timed. We conducted repeated analyses of variance to compare the two groups on average reading time, answering time, and accuracy on various types of items. Faculty and residents did not differ significantly in reading time [F(1,35) = 0.01, p = 0.93], answering time [F(1,35) = 0.60, p = 0.44], or accuracy [F(1,35) = 0.24, p = 0.63], regardless of whether items were easy or hard. Dual process theory was not evidenced in this study. However, this lack of difference between faculty and residents may have been affected by the small sample size, and MCQs may not reflect how physicians make decisions in actual practice settings. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.

  11. Effect of production management on semen quality during long-term storage in different European boar studs.

    PubMed

    Schulze, M; Kuster, C; Schäfer, J; Jung, M; Grossfeld, R

    2018-03-01

    The processing of ejaculates is a fundamental step for the fertilizing capacity of boar spermatozoa. The aim of the present study was to identify factors that affect the quality of boar semen doses. The production process during 1 day of semen processing in 26 European boar studs was monitored. In each boar stud, nine to 19 randomly selected ejaculates from 372 Pietrain boars were analyzed for sperm motility, acrosome and plasma membrane integrity, mitochondrial activity, and thermo-resistance (TRT). Each ejaculate was monitored for production time and temperature at each step in semen processing using the specially programmed software SEQU (version 1.7, Minitüb, Tiefenbach, Germany). The dilution of ejaculates with a short-term extender was completed in one step in 10 AI centers (n = 135 ejaculates), in two steps in 11 AI centers (n = 158 ejaculates), and in three steps in five AI centers (n = 79 ejaculates). Results indicated greater semen quality with one-step isothermal dilution compared with multi-step dilution of AI semen doses (total motility TRT d7: 71.1 ± 19.2%, 64.6 ± 20.0%, and 47.1 ± 27.1% for the one-step, two-step, and three-step dilutions, respectively; P < .05). There was a marked advantage of the one-step isothermal dilution regarding time management, preservation suitability, stability, and stress resistance. One-step dilution resulted in significantly lower holding times of raw ejaculates and reduced the possible risk of mistakes due to a lower number of processing steps. These results lead to refined recommendations for boar semen processing. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Real-Time and Post-Processed Georeferencing for Hyperspectral Drone Remote Sensing

    NASA Astrophysics Data System (ADS)

    Oliveira, R. A.; Khoramshahi, E.; Suomalainen, J.; Hakala, T.; Viljanen, N.; Honkavaara, E.

    2018-05-01

    The use of drones and photogrammetric technologies is increasing rapidly in different applications. Currently, the drone processing workflow is in most cases based on sequential image acquisition and post-processing, but there is great interest in real-time solutions. Fast and reliable real-time drone data processing can benefit, for instance, environmental monitoring tasks in precision agriculture and in forests. Recent developments in miniaturized and low-cost inertial measurement systems and GNSS sensors, and real-time kinematic (RTK) position data, are offering new perspectives for comprehensive remote sensing applications. The combination of these sensors with light-weight, low-cost multi- or hyperspectral frame sensors on drones provides the opportunity to create near real-time or real-time remote sensing data of target objects. We have developed a system with direct georeferencing onboard a drone, to be used with hyperspectral frame cameras in real-time remote sensing applications. The objective of this study is to evaluate real-time georeferencing in comparison with post-processed solutions. Experimental data sets were captured in agricultural and forested test sites using the system. The accuracy of the onboard georeferencing data was better than 0.5 m. The results showed that real-time remote sensing is promising and feasible in both test sites.

  13. Software-safety and software quality assurance in real-time applications Part 2: Real-time structures and languages

    NASA Astrophysics Data System (ADS)

    Schoitsch, Erwin

    1988-07-01

    Our society is depending more and more on the reliability of embedded (real-time) computer systems even in every-day life. Considering the complexity of the real world, this might become a severe threat. Real-time programming is a discipline important not only in process control and data acquisition systems, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process-synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, interrupt- and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other and with respect to their potential to quality and safety.
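
    As a concrete illustration of the semaphore-based synchronization constructs the survey compares, here is the classic bounded-buffer producer/consumer pattern, sketched with Python threads standing in for real-time tasks:

```python
# Hedged illustration (not from the paper): semaphore-based
# producer/consumer synchronization over a bounded buffer.
import threading
from collections import deque

buf = deque()
empty = threading.Semaphore(4)    # free slots in the bounded buffer
full = threading.Semaphore(0)     # filled slots
mutex = threading.Lock()          # mutual exclusion on the buffer itself

def producer():
    for i in range(10):
        empty.acquire()           # wait for a free slot
        with mutex:
            buf.append(i)
        full.release()            # signal one filled slot

def consumer():
    for _ in range(10):
        full.acquire()            # wait for a filled slot
        with mutex:
            item = buf.popleft()
        empty.release()           # signal one freed slot
        print("consumed", item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
```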

  14. The Effect of Highlighting on Processing and Memory of Central and Peripheral Text Information: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Yeari, Menahem; Oudega, Marja; van den Broek, Paul

    2017-01-01

    The present study investigated the effect of text highlighting on online processing and memory of central and peripheral information. We compared processing time (using eye-tracking methodology) and recall of central and peripheral information for three types of highlighting: (a) highlighting of central information, (b) highlighting of peripheral…

  15. Impact assessment of GPS radio occultation data on Antarctic analysis and forecast using WRF 3DVAR

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Wee, T. K.; Liu, Z.; Lin, H. C.; Kuo, Y. H.

    2016-12-01

    This study assesses the impact of Global Positioning System (GPS) Radio Occultation (RO) refractivity data on the analysis and forecast in the Antarctic region. The RO data are continuously assimilated into the Weather Research and Forecasting (WRF) Model using the WRF 3DVAR along with other observations that were operationally available to the National Centers for Environmental Prediction (NCEP) during a month-long period, October 2010, including the Advanced Microwave Sounding Unit (AMSU) radiance data. For the month-long data assimilation experiments, three RO datasets are used: 1) the actual operational dataset, which was produced by the near real-time RO processing at that time and provided to weather forecasting centers; 2) a post-processed dataset with posterior clock and orbit estimates, and with improved RO processing algorithms; and 3) another post-processed dataset, produced with a variational RO processing. The data impact is evaluated by comparing the forecasts and analyses to independent driftsonde observations made available through the Concordiasi field campaign, in addition to utilizing other traditional means of verification. A denial of RO data (while keeping all other observations) resulted in a remarkable quality degradation of analysis and forecast, indicating the high value of RO data over the Antarctic area. The post-processed RO data showed a significantly larger positive impact compared to the near real-time data, due to extra RO data from the TerraSAR-X satellite (unavailable at the time of the near real-time processing) as well as the supposedly improved data quality as a result of the post-processing. This strongly suggests that the future polar constellation of COSMIC-2 is vital. The variational RO processing further reduced the systematic and random errors in both analysis and forecasts, for instance, leading to a smaller background departure of AMSU radiance. This indicates that the variational RO processing provides an improved reference for the bias correction of satellite radiance, making the bias correction more effective. This study finds that advanced RO data processing algorithms may further enhance the high quality of RO data in high Southern latitudes.

  16. On two diffusion neuronal models with multiplicative noise: The mean first-passage time properties

    NASA Astrophysics Data System (ADS)

    D'Onofrio, G.; Lansky, P.; Pirozzi, E.

    2018-04-01

    Two diffusion processes with multiplicative noise, able to model the changes in the neuronal membrane depolarization between two consecutive spikes of a single neuron, are considered and compared. The processes have the same deterministic part but different stochastic components. The differences in the state-dependent variabilities, their asymptotic distributions, and the properties of the first-passage time across a constant threshold are investigated. Closed form expressions for the mean of the first-passage time of both processes are derived and applied to determine the role played by the parameters involved in the model. It is shown that for some values of the input parameters, the higher variability, given by the second moment, does not imply shorter mean first-passage time. The reason for that can be found in the complete shape of the stationary distribution of the two processes. Applications outside neuroscience are also mentioned.
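
    Although the paper derives closed-form expressions, the mean first-passage time of such models is also easy to estimate numerically. A hedged Monte Carlo sketch for a generic multiplicative-noise diffusion (not the paper's exact models) using the Euler-Maruyama scheme:

```python
# Hedged sketch (generic, not the paper's exact models): Monte Carlo
# estimate of the mean first-passage time of a multiplicative-noise
# diffusion dX = (mu - X/tau) dt + sigma*sqrt(X) dW across threshold S,
# using the Euler-Maruyama scheme.
import numpy as np

def mfpt(mu=1.5, tau=1.0, sigma=0.4, S=1.2, x0=0.0, dt=1e-3,
         n_paths=500, t_max=50.0, seed=0):
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_paths):
        x, t = x0, 0.0
        while x < S and t < t_max:
            dw = rng.normal(0.0, np.sqrt(dt))
            x += (mu - x / tau) * dt + sigma * np.sqrt(max(x, 0.0)) * dw
            t += dt
        if x >= S:                      # count only paths that crossed
            times.append(t)
    return np.mean(times)

print(f"estimated MFPT ~ {mfpt():.2f} (model time units)")
```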

  17. Joint Services Electronics Program Annual Progress Report.

    DTIC Science & Technology

    1985-11-01

    one symbol memory) adaptive Huffman codes were performed, and the compression achieved was compared with that of Ziv-Lempel coding. As was expected...MATERIALS 4. Information Systems 4.1 Real Time Statistical Data Processing 4.2 Data Compression for Computer Data Structures 5. PhD...a. Real Time Statistical Data Processing (T. Kailath) b. Data Compression for Computer Data Structures (J. Gill)

  18. Efficient reactive Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Donev, Aleksandar; Yang, Chiao-Yu; Kim, Changho

    2018-01-01

    We develop a Split Reactive Brownian Dynamics (SRBD) algorithm for particle simulations of reaction-diffusion systems based on the Doi or volume reactivity model, in which pairs of particles react with a specified Poisson rate if they are closer than a chosen reactive distance. In our Doi model, we ensure that the microscopic reaction rules for various association and dissociation reactions are consistent with detailed balance (time reversibility) at thermodynamic equilibrium. The SRBD algorithm uses Strang splitting in time to separate reaction and diffusion and solves both the diffusion-only and reaction-only subproblems exactly, even at high packing densities. To efficiently process reactions without uncontrolled approximations, SRBD employs an event-driven algorithm that processes reactions in a time-ordered sequence over the duration of the time step. A grid of cells with size larger than all of the reactive distances is used to schedule and process the reactions, but unlike traditional grid-based methods such as reaction-diffusion master equation algorithms, the results of SRBD are statistically independent of the size of the grid used to accelerate the processing of reactions. We use the SRBD algorithm to compute the effective macroscopic reaction rate for both reaction-limited and diffusion-limited irreversible association in three dimensions and compare to existing theoretical predictions at low and moderate densities. We also study long-time tails in the time correlation functions for reversible association at thermodynamic equilibrium and compare to recent theoretical predictions. Finally, we compare different particle and continuum methods on a model exhibiting a Turing-like instability and pattern formation. Our studies reinforce the common finding that microscopic mechanisms and correlations matter for diffusion-limited systems, making continuum and even mesoscopic modeling of such systems difficult or impossible. We also find that for models in which particles diffuse off lattice, such as the Doi model, reactions lead to a spurious enhancement of the effective diffusion coefficients.
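
    A hedged sketch of the Doi reaction rule underlying SRBD (not the SRBD event-driven algorithm itself): within one time step, each pair closer than the reactive distance reacts with the corresponding Poisson probability:

```python
# Hedged sketch of one Doi-model reaction sweep. During a time step dt,
# each A+B pair closer than the reactive distance r_react reacts with
# Poisson probability 1 - exp(-lam * dt). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, size=(50, 3))      # positions of A particles
B = rng.uniform(0, 1, size=(50, 3))      # positions of B particles
lam, r_react, dt = 10.0, 0.05, 0.01

reacted = []
for i, a in enumerate(A):
    d = np.linalg.norm(B - a, axis=1)    # distances from A[i] to all B
    for j in np.flatnonzero(d < r_react):
        if rng.random() < 1 - np.exp(-lam * dt):
            reacted.append((i, j))
print("reacting pairs this step:", reacted)
```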

  19. Development and evaluation of low cost honey heating-cum-filtration system.

    PubMed

    Alam, Md Shafiq; Sharma, D K; Sehgal, V K; Arora, M; Bhatia, S

    2014-11-01

    A fully mechanized honey heating-cum-filtration system was designed, developed, fabricated and evaluated for its performance. The system comprised of two sections; the top heating section and the lower filtering section. The developed system was evaluated for its performance at different process conditions (25 kg and 50 kg capacity using processing condition: 50 °C heating temperature and 60 °C heating temperature with 20 and 40 min holding time, respectively) and it was found that the total time required for heating, holding and filtration of honey was 108 and 142 min for 25 kg and 50 kg capacity of machine, respectively, irrespective of the processing conditions. The optimum capacity of the system was found to be 50 kg and it involved an investment of Rs 40,000 for its fabrication. The honey filtered through the developed filtration system was compared with the honey filtered in a high cost honey processing plant and raw honey for its microbial and biochemical (reducing sugars (%), moisture, acidity and pH) quality attributes. It was observed that the process of filtering through the developed unit resulted in reduction of microbes. The microbiological quality of honey filtered through the developed filtration system was better than that of raw honey and commercially processed honey. The treatment conditions found best in context of microbiological counts were 60 °C temperature for 20 min. There was 1.97 fold reductions in the plate count and 2.14 reductions in the fungal count of honey processed through the developed filtration system as compared to the raw honey. No coliforms were found in the processed honey. Honey processed through developed unit witnessed less moisture content, acidity and more reducing sugars as compared to raw honey, whereas its quality was comparable to the commercially processed honey.

  20. A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms

    PubMed Central

    Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein

    2017-01-01

    Real-time image processing is used in a wide variety of applications, such as medical care and industrial processes. In medical care, this technique makes it possible to display important patient information graphically, which can supplement and support the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one of the ways to achieve real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified to run in a fully parallel manner, achieved by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831
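
    For reference, the CPU baselines compared in the paper are available as standard library calls; the snippet below sketches a timing comparison of Canny and Sobel on a synthetic image with OpenCV (the GPU/CUDA implementation is not reproduced here):

```python
# Hedged sketch comparing CPU edge detectors on a synthetic image; the
# paper's GPU/CUDA implementation is not reproduced here.
import time
import numpy as np
import cv2

img = np.zeros((512, 512), dtype=np.uint8)
cv2.circle(img, (256, 256), 120, 255, -1)          # synthetic test object
img = cv2.GaussianBlur(img, (7, 7), 2)

t0 = time.perf_counter()
edges_canny = cv2.Canny(img, 50, 150)              # Canny edge map
t1 = time.perf_counter()
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)     # horizontal gradient
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)     # vertical gradient
edges_sobel = np.hypot(gx, gy)                     # Sobel gradient magnitude
t2 = time.perf_counter()

print(f"Canny: {1e3*(t1-t0):.2f} ms, Sobel magnitude: {1e3*(t2-t1):.2f} ms")
```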

  2. A fast positioning algorithm for the asymmetric dual Mach-Zehnder interferometric infrared fiber vibration sensor

    NASA Astrophysics Data System (ADS)

    Jiang, Junfeng; An, Jianchang; Liu, Kun; Ma, Chunyu; Li, Zhichen; Liu, Tiegen

    2017-09-01

    We propose a fast positioning algorithm for the asymmetric dual Mach-Zehnder interferometric infrared fiber vibration sensor. Using an approximate derivation method and an envelope detection method, we eliminate the asymmetry of the interference outputs and improve the processing speed. A positioning measurement experiment was carried out to verify the effectiveness of the proposed algorithm. At a sensing length of 85 km, the experimental results show that the mean positioning error is 18.9 m and the mean processing time is 116 ms. The processing speed is improved by a factor of 5 compared with the traditional time-frequency analysis-based positioning method.
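
    A hedged sketch of the general dual Mach-Zehnder positioning idea (not the paper's exact algorithm): envelope detection via the Hilbert transform, delay estimation by cross-correlation, and a common form of the position relation. The sample rate, fiber parameters, and signals below are synthetic assumptions:

```python
# Hedged sketch: envelope detection plus cross-correlation to locate a
# vibration along a dual Mach-Zehnder fiber sensor. All signals and
# parameters are synthetic placeholders.
import numpy as np
from scipy.signal import hilbert

fs = 1e6                      # sample rate (Hz), assumed
c_fiber = 2e8                 # light speed in fiber (m/s), approximate
L = 85e3                      # sensing length (m), as in the paper

t = np.arange(int(2e5)) / fs
pulse = np.exp(-((t - 0.08) / 2e-3) ** 2) * np.sin(2 * np.pi * 5e3 * t)
delay_samples = 120           # true delay between the two outputs, synthetic
s1 = pulse + 0.05 * np.random.default_rng(0).normal(size=t.size)
s2 = np.roll(pulse, delay_samples) + 0.05 * np.random.default_rng(1).normal(size=t.size)

env1 = np.abs(hilbert(s1))    # envelope detection
env2 = np.abs(hilbert(s2))
xc = np.correlate(env2 - env2.mean(), env1 - env1.mean(), mode="full")
dt = (np.argmax(xc) - (t.size - 1)) / fs           # estimated delay (s)

position = (L - c_fiber * dt) / 2                  # common dual-MZI relation
print(f"estimated delay {dt*1e6:.1f} us -> position ~ {position/1e3:.2f} km")
```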

  3. Optimization and Improvement of Test Processes on a Production Line

    NASA Astrophysics Data System (ADS)

    Sujová, Erika; Čierna, Helena

    2018-06-01

    The paper deals with increasing process efficiency on a production line for engine cylinder heads in a company operating in the automotive industry. The goal is to improve and optimize the test processes on the production line. It analyzes options for improving the capacity, availability, and productivity of the output-test processes by using modern technology available on the market. We have focused on the analysis of operation times before and after optimization of the test processes at specific production sections. By analyzing the measured results, we have determined the differences in time before and after process improvement. We have determined the overall equipment effectiveness (OEE) coefficient and, by comparing outputs, confirmed a real improvement in the cylinder-head output-test process.

  4. The time course of implicit processing of erotic pictures: an event-related potential study.

    PubMed

    Feng, Chunliang; Wang, Lili; Wang, Naiyi; Gu, Ruolei; Luo, Yue-Jia

    2012-12-13

    The current study investigated the time course of the implicit processing of erotic stimuli using event-related potentials (ERPs). ERPs elicited by erotic pictures were compared with those by three other types of pictures: non-erotic positive, negative, and neutral pictures. We observed that erotic pictures evoked enhanced neural responses compared with other pictures at both early (P2/N2) and late (P3/positive slow wave) temporal stages. These results suggested that erotic pictures selectively captured individuals' attention at early stages and evoked deeper processing at late stages. More importantly, the amplitudes of P2, N2, and P3 only discriminated between erotic and non-erotic (i.e., positive, neutral, and negative) pictures. That is, no difference was revealed among non-erotic pictures, although these pictures differed in both valence and arousal. Thus, our results suggest that erotic picture processing goes beyond valence and arousal. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Mathematical Analysis and Optimization of Infiltration Processes

    NASA Technical Reports Server (NTRS)

    Chang, H.-C.; Gottlieb, D.; Marion, M.; Sheldon, B. W.

    1997-01-01

    A variety of infiltration techniques can be used to fabricate solid materials, particularly composites. In general these processes can be described with at least one time dependent partial differential equation describing the evolution of the solid phase, coupled to one or more partial differential equations describing mass transport through a porous structure. This paper presents a detailed mathematical analysis of a relatively simple set of equations which is used to describe chemical vapor infiltration. The results demonstrate that the process is controlled by only two parameters, alpha and beta. The optimization problem associated with minimizing the infiltration time is also considered. Allowing alpha and beta to vary with time leads to significant reductions in the infiltration time, compared with the conventional case where alpha and beta are treated as constants.

  6. Sorting processes with energy-constrained comparisons*

    NASA Astrophysics Data System (ADS)

    Geissmann, Barbara; Penna, Paolo

    2018-05-01

    We study very simple sorting algorithms based on a probabilistic comparator model. In this model, errors in comparing two elements are due to (1) the energy or effort put in the comparison and (2) the difference between the compared elements. Such algorithms repeatedly compare and swap pairs of randomly chosen elements, and they correspond to natural Markovian processes. The study of these Markov chains reveals an interesting phenomenon. Namely, in several cases, the algorithm that repeatedly compares only adjacent elements is better than the one making arbitrary comparisons: in the long-run, the former algorithm produces sequences that are "better sorted". The analysis of the underlying Markov chain poses interesting questions as the latter algorithm yields a nonreversible chain, and therefore its stationary distribution seems difficult to calculate explicitly. We nevertheless provide bounds on the stationary distributions and on the mixing time of these processes in several restrictions.
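
    A small simulation makes the comparison concrete. The sketch below, with an assumed error model in which the error probability decays with the distance between compared elements, contrasts adjacent-only and arbitrary random comparisons by counting leftover inversions:

```python
# Hedged simulation of the two processes described: repeatedly pick a
# pair (adjacent vs. arbitrary) and swap if a noisy comparison says the
# pair is out of order. The error model here is an assumption chosen so
# that errors shrink with both energy and the elements' difference.
import numpy as np

def noisy_less(a, b, energy=1.0, rng=None):
    p_err = 0.5 * np.exp(-energy * abs(a - b))   # comparison error probability
    flip = rng.random() < p_err
    return (a < b) != flip

def run(adjacent, n=50, steps=200000, seed=0):
    rng = np.random.default_rng(seed)
    x = list(rng.permutation(n))
    for _ in range(steps):
        i = rng.integers(n - 1) if adjacent else rng.integers(n)
        j = i + 1 if adjacent else rng.integers(n)
        if i == j:
            continue
        lo, hi = min(i, j), max(i, j)
        if not noisy_less(x[lo], x[hi], rng=rng):   # looks out of order? swap
            x[lo], x[hi] = x[hi], x[lo]
    # count pairwise inversions as a "sortedness" measure (lower is better)
    return sum(x[a] > x[b] for a in range(n) for b in range(a + 1, n))

print("inversions, adjacent-only:", run(adjacent=True))
print("inversions, arbitrary:   ", run(adjacent=False))
```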

  7. Prediction of Time Response of Electrowetting

    NASA Astrophysics Data System (ADS)

    Lee, Seung Jun; Hong, Jiwoo; Kang, Kwan Hyoung

    2009-11-01

    It is very important to predict the time response of electrowetting-based devices, such as liquid lenses, reflective displays, and optical switches. We investigated the time response of electrowetting, based on an analytical and a numerical method, to find out characteristic scales and a scaling law for the switching time. For this, spreading process of a sessile droplet was analyzed based on the domain perturbation method. First, we considered the case of weakly viscous fluids. The analytical result for the spreading process was compared with experimental results, which showed very good agreement in overall time response. It was shown that the overall dynamics is governed by P2 shape mode. We derived characteristic scales combining the droplet volume, density, and surface tension. The overall dynamic process was scaled quite well by the scales. A scaling law was derived from the analytical solution and was verified experimentally. We also suggest a scaling law for highly viscous liquids, based on results of numerical analysis for the electrowetting-actuated spreading process.

  8. The effect of processing temperature and time on the structure and fracture characteristics of self-reinforced composite poly(methyl methacrylate).

    PubMed

    Wright, D D; Gilbert, J L; Lautenschlager, E P

    1999-08-01

    A novel material, self-reinforced composite poly(methyl methacrylate) (SRC-PMMA) has been previously developed in this laboratory. It consists of high-strength PMMA fibers embedded in a matrix of PMMA derived from the fibers. As a composite material, uniaxial SRC-PMMA has been shown to have greatly improved flexural, tensile, fracture toughness and fatigue properties when compared to unreinforced PMMA. Previous work examined one empirically defined processing condition. This work systematically examines the effect of processing time and temperature on the thermal properties, fracture toughness and fracture morphology of SRC-PMMA produced by a hot compaction method. Differential scanning calorimetry (DSC) shows that composites containing high amounts of retained molecular orientation exhibit both endothermic and exothermic peaks which depend on processing times and temperatures. An exothermic release of energy just above Tg is related to the release of retained molecular orientation in the composites. This release of energy decreases linearly with increasing processing temperature or time for the range investigated. Fracture toughness results show a maximum fracture toughness of 3.18 MPa m1/2 for samples processed for 65 min at 128 degrees C. Optimal structure and fracture toughness are obtained in composites which have maximum interfiber bonding and minimal loss of molecular orientation. Composite fracture mechanisms are highly dependent on processing. Low processing times and temperatures result in more interfiber/matrix fracture, while higher processing times and temperatures result in higher ductility and more transfiber fracture. Excessive processing times result in brittle failure. Copyright 1999 Kluwer Academic Publishers

  9. Character Decomposition and Transposition Processes of Chinese Compound Words in Rapid Serial Visual Presentation.

    PubMed

    Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei

    2017-01-01

    Character order information is encoded at the initial stage of Chinese word processing, however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but the period from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated, however, the order of the two constituent characters is not strictly processed during the very early stage of visual word processing.

  10. Effect of hot and cold severe deformation by extrusion on the properties of lead and aluminum alloys

    NASA Astrophysics Data System (ADS)

    Ganiev, M. M.; Shibakov, V. G.; Pankratov, D. L.; Shibakov, R. V.

    2015-07-01

    The study of the effect of severe plastic deformation (SPD) by extrusion shows that the ductility of lead after several cycles of SPD increases significantly (3-4 times) as compared to as-cast samples. An aluminum alloy after this processing is hardened by a factor of 2.3-2.5, with ductility decreasing by 2.5-2.7 times, as compared to the as-delivered state.

  11. Comparison of the MPP with other supercomputers for LANDSAT data processing

    NASA Technical Reports Server (NTRS)

    Ozga, Martin

    1987-01-01

    The massively parallel processor is compared to the CRAY X-MP and the CYBER-205 for LANDSAT data processing. The maximum likelihood classification algorithm is the basis for comparison since this algorithm is simple to implement and vectorizes very well. The algorithm was implemented on all three machines and tested by classifying the same full scene of LANDSAT multispectral scan data. Timings are compared as well as features of the machines and available software.

  12. Improving Emergency Department Door to Doctor Time and Process Reliability

    PubMed Central

    El Sayed, Mazen J.; El-Eid, Ghada R.; Saliba, Miriam; Jabbour, Rima; Hitti, Eveline A.

    2015-01-01

    Abstract The aim of this study is to determine the effectiveness of using lean management methods on improving emergency department door to doctor times at a tertiary care hospital. We performed a before and after study at an academic urban emergency department with 49,000 annual visits after implementing a series of lean driven interventions over a 20 month period. The primary outcome was mean door to doctor time and the secondary outcome was length of stay of both admitted and discharged patients. A convenience sample from the preintervention phase (February 2012) was compared to another from the postintervention phase (mid-October to mid-November 2013). Individual control charts were used to assess process stability. Postintervention there was a statistically significant decrease in the mean door to doctor time measure (40.0 minutes ± 53.44 vs 25.3 minutes ± 15.93 P < 0.001). The postintervention process was more statistically in control with a drop in the upper control limits from 148.8 to 72.9 minutes. Length of stay of both admitted and discharged patients dropped from 2.6 to 2.0 hours and 9.0 to 5.5 hours, respectively. All other variables including emergency department visit daily volumes, hospital occupancy, and left without being seen rates were comparable. Using lean change management techniques can be effective in reducing door to doctor time in the Emergency Department and improving process reliability. PMID:26496278

  13. Criteria for Comparing Domain Analysis Approaches Version 01.00.00

    DTIC Science & Technology

    1991-12-01

    Down-Bottom-Up Domain Analysis Process (1990 Version); Figure 8. FODA's Domain Analysis Process... FODA, which uses the Design Approach for Real-Time Systems (DARTS) design method (Gomaa 1984)? 1. Introduction: Domain analysis is still immature... 2. An Overview of Some Domain Analysis Approaches. 2.4.3 Examples: The FODA report illustrates the process by using the window management

  14. Process Waste Assessment for the Diana Laser Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, N.M.

    1993-12-01

    This Process Waste Assessment was conducted to evaluate the Diana Laser Laboratory, located in the Combustion Research Facility. It documents the hazardous chemical waste streams generated by the laser process and establishes a baseline for future waste minimization efforts. This Process Waste Assessment will be reevaluated in approximately 18 to 24 months, after enough time has passed to implement recommendations and to compare results with the baseline established in this assessment.

  15. Single crystals and nonlinear process for outstanding vibration-powered electrical generators.

    PubMed

    Badel, Adrien; Benayad, Abdelmjid; Lefeuvre, Elie; Lebrun, Laurent; Richard, Claude; Guyomar, Daniel

    2006-04-01

    This paper compares the performances of vibration-powered electrical generators using a piezoelectric ceramic and a piezoelectric single crystal associated to several power conditioning circuits. A new approach of the piezoelectric power conversion based on a nonlinear voltage processing is presented, leading to three novel high performance power conditioning interfaces. Theoretical predictions and experimental results show that the nonlinear processing technique may increase the power harvested by a factor of 8 compared to standard techniques. Moreover, it is shown that, for a given energy harvesting technique, generators using single crystals deliver 20 times more power than generators using piezoelectric ceramics.

  16. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast fourier transform for actin fiber alignment quantification.

    PubMed

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis perhaps due to the complexity of current fiber alignment methods. Speed and sensitivity were compared in edge detection and fast Fourier transform (FFT) for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100 times compared to the matrix multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-distributed fiber angle distributions that statistically distinguished aligned and unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, processing time grew larger than the time required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.

  17. Real-time slicing algorithm for Stereolithography (STL) CAD model applied in additive manufacturing industry

    NASA Astrophysics Data System (ADS)

    Adnan, F. A.; Romlay, F. R. M.; Shafiq, M.

    2018-04-01

    Owing to the advent of Industry 4.0, the need to further evaluate the computational processes applied in additive manufacturing, particularly slicing, is non-trivial. This paper evaluates a real-time slicing algorithm for an STL-formatted computer-aided design (CAD) model. A line-plane intersection equation is applied to perform the slicing procedure at any given height. This algorithm has been found to provide better computational time regardless of the number of facets in the STL model. The performance of the algorithm is evaluated by comparing the computational times for different geometries.
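
    The core slicing step can be sketched directly from the parametric line-plane intersection; the triangle below is an arbitrary example:

```python
# Hedged sketch of the core slicing step: intersect each triangle edge
# with the horizontal plane z = h using the parametric line-plane
# equation, yielding one contour segment per cut triangle.
def slice_triangle(tri, h):
    """tri: three (x, y, z) vertices; returns the segment cut at z = h, or None."""
    pts = []
    for (x1, y1, z1), (x2, y2, z2) in ((tri[0], tri[1]),
                                       (tri[1], tri[2]),
                                       (tri[2], tri[0])):
        if (z1 - h) * (z2 - h) < 0:                # edge crosses the plane
            t = (h - z1) / (z2 - z1)               # parametric intersection
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

tri = ((0, 0, 0), (10, 0, 8), (0, 10, 4))
print(slice_triangle(tri, h=2.0))                  # one segment of the slice contour
```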

  18. Refractory pulse counting processes in stochastic neural computers.

    PubMed

    McNeill, Dean K; Card, Howard C

    2005-03-01

    This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time-free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
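
    A hedged simulation of the effect described: count how many pulses of a Bernoulli(p) train a detector with a fixed dead time actually records (parameter values are illustrative):

```python
# Hedged sketch: simulate a Bernoulli pulse stream and count the pulses
# a detector with a fixed dead time actually records, illustrating the
# reduction in mean count that the letter analyzes.
import numpy as np

def recorded_counts(p=0.3, dead=3, n=10000, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    means = []
    for _ in range(trials):
        pulses = rng.random(n) < p          # Bernoulli(p) pulse train
        count, blocked_until = 0, -1
        for i in np.flatnonzero(pulses):
            if i > blocked_until:           # detector is live
                count += 1
                blocked_until = i + dead    # refractory for `dead` slots
        means.append(count)
    return np.mean(means)

ideal = 0.3 * 10000
print(f"ideal mean {ideal:.0f} vs with dead time {recorded_counts():.0f}")
```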

  19. Review of critical flow rate, propagation of pressure pulse, and sonic velocity in two-phase media

    NASA Technical Reports Server (NTRS)

    Hsu, Y.

    1972-01-01

    For single-phase media, the critical discharge velocity, the sonic velocity, and the pressure pulse propagation velocity can be expressed in the same form by assuming isentropic, equilibrium processes. In two-phase mixtures, the same concept is not valid because of the interfacial transport of momentum, heat, and mass. Thus, the three velocities should be treated differently and separately for each particular condition, taking into account the various transport processes involved under that condition. Various attempts to predict the critical discharge rate or the propagation velocities by considering slip ratio (momentum change), evaporation (mass and heat transport), flow pattern, etc., are reviewed. Experimental data were compared with predictions based on various theorems. The importance of the time required to achieve equilibrium, compared with the time available during the process (for example, the passage of a pressure pulse), is stressed.

  20. Dynamic laser piercing of thick section metals

    NASA Astrophysics Data System (ADS)

    Pocorni, Jetro; Powell, John; Frostevarg, Jan; Kaplan, Alexander F. H.

    2018-01-01

    Before a contour can be laser cut the laser first needs to pierce the material. The time taken to achieve piercing should be minimised to optimise productivity. One important aspect of laser piercing is the reliability of the process because industrial laser cutting machines are programmed for the minimum reliable pierce time. In this work piercing experiments were carried out in 15 mm thick stainless steel sheets, comparing a stationary laser and a laser which moves along a circular trajectory with varying processing speeds. Results show that circular piercing can decrease the pierce duration by almost half compared to stationary piercing. High speed imaging (HSI) was employed during the piercing process to understand melt behaviour inside the pierce hole. HSI videos show that circular rotation of the laser beam forces melt to eject in opposite direction of the beam movement, while in stationary piercing the melt ejects less efficiently in random directions out of the hole.

  1. Technique for analyzing human respiratory process

    NASA Technical Reports Server (NTRS)

    Liu, F. F.

    1970-01-01

    Electronic system /MIRACLE 2/ places the frequency and gas flow rate of the respiratory process within a common frame of reference to render them comparable and compatible with "real clock time." Numerous measurements are accomplished accurately on a strict one-minute, half-minute, breath-by-breath, or other period basis.

  2. A microfluidic device integrating dual CMOS polysilicon nanowire sensors for on-chip whole blood processing and simultaneous detection of multiple analytes.

    PubMed

    Kuan, Da-Han; Wang, I-Shun; Lin, Jiun-Rue; Yang, Chao-Han; Huang, Chi-Hsien; Lin, Yen-Hung; Lin, Chih-Ting; Huang, Nien-Tsu

    2016-08-02

    The hemoglobin-A1c test, measuring the ratio of glycated hemoglobin (HbA1c) to hemoglobin (Hb) levels, has been a standard assay in diabetes diagnosis that removes the day-to-day glucose level variation. Currently, the HbA1c test is restricted to hospitals and central laboratories due to the laborious, time-consuming whole blood processing and bulky instruments. In this paper, we have developed a microfluidic device integrating dual CMOS polysilicon nanowire sensors (MINS) for on-chip whole blood processing and simultaneous detection of multiple analytes. The micromachined polymethylmethacrylate (PMMA) microfluidic device consisted of a serpentine microchannel with multiple dam structures designed for non-lysed cells or debris trapping, uniform plasma/buffer mixing and dilution. The CMOS-fabricated polysilicon nanowire sensors integrated with the microfluidic device were designed for the simultaneous, label-free electrical detection of multiple analytes. Our study first measured the Hb and HbA1c levels in 11 clinical samples via these nanowire sensors. The results were compared with those of standard Hb and HbA1c measurement methods (Hb: the sodium lauryl sulfate hemoglobin detection method; HbA1c: cation-exchange high-performance liquid chromatography) and showed comparable outcomes. Finally, we successfully demonstrated the efficacy of the MINS device's on-chip whole blood processing followed by simultaneous Hb and HbA1c measurement in a clinical sample. Compared to current Hb and HbA1c sensing instruments, the MINS platform is compact and can simultaneously detect two analytes with only 5 μL of whole blood, which corresponds to a 300-fold blood volume reduction. The total assay time, including the in situ sample processing and analyte detection, was just 30 minutes. Based on its on-chip whole blood processing and simultaneous multiple analyte detection functionalities with a lower sample volume requirement and shorter process time, the MINS device can be effectively applied to real-time diabetes diagnostics and monitoring in point-of-care settings.

  3. Process techniques of charge transfer time reduction for high speed CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Zhongxiang, Cao; Quanliang, Li; Ye, Han; Qi, Qin; Peng, Feng; Liyuan, Liu; Nanjian, Wu

    2014-11-01

    This paper proposes pixel process techniques to reduce the charge transfer time in high speed CMOS image sensors. These techniques increase the lateral conductivity of the photo-generated carriers in the pinned photodiode (PPD) and the voltage difference between the PPD and the floating diffusion (FD) node by controlling and optimizing the N doping concentration in the PPD and the threshold voltage of the reset transistor, respectively. The techniques effectively shorten the charge transfer time from the PPD to the FD node. The proposed process techniques do not need extra masks and do not harm the fill factor. A sub-array of 32 × 64 pixels was designed and implemented in a 0.18 μm CIS process with five split implantation conditions for the N region in the PPD. The simulation and measurement results demonstrate that the charge transfer time can be decreased by using the proposed techniques. Comparing pixels with the different N-region implantation conditions, a charge transfer time of 0.32 μs was achieved and image lag was reduced by 31% using the proposed process techniques.

  4. Eye-Tracking and Corpus-Based Analyses of Syntax-Semantics Interactions in Complement Coercion

    PubMed Central

    Lowder, Matthew W.; Gordon, Peter C.

    2016-01-01

    Previous work has shown that the difficulty associated with processing complex semantic expressions is reduced when the critical constituents appear in separate clauses as opposed to when they appear together in the same clause. We investigated this effect further, focusing in particular on complement coercion, in which an event-selecting verb (e.g., began) combines with a complement that represents an entity (e.g., began the memo). Experiment 1 compared reading times for coercion versus control expressions when the critical verb and complement appeared together in a subject-extracted relative clause (SRC) (e.g., The secretary that began/wrote the memo) compared to when they appeared together in a simple sentence. Readers spent more time processing coercion expressions than control expressions, replicating the typical coercion cost. In addition, readers spent less time processing the verb and complement in SRCs than in simple sentences; however, the magnitude of the coercion cost did not depend on sentence structure. In contrast, Experiment 2 showed that the coercion cost was reduced when the complement appeared as the head of an object-extracted relative clause (ORC) (e.g., The memo that the secretary began/wrote) compared to when the constituents appeared together in an SRC. Consistent with the eye-tracking results of Experiment 2, a corpus analysis showed that expressions requiring complement coercion are more frequent when the constituents are separated by the clause boundary of an ORC compared to when they are embedded together within an SRC. The results provide important information about the types of structural configurations that contribute to reduced difficulty with complex semantic expressions, as well as how these processing patterns are reflected in naturally occurring language. PMID:28529960

  5. [Alcohol-purification technology and its particle sedimentation process in manufactory of Fufang Kushen injection].

    PubMed

    Liu, Xiaoqian; Tong, Yan; Wang, Jinyu; Wang, Ruizhen; Zhang, Yanxia; Wang, Zhimin

    2011-11-01

    Fufang Kushen injection was selected as the model drug to optimize its alcohol-purification process, to understand the characteristics of the particle sedimentation process, and to investigate the feasibility of using process analytical technology (PAT) in traditional Chinese medicine (TCM) manufacturing. Total alkaloids (calculated as matrine, oxymatrine, sophoridine, and oxysophoridine) and macrozamin were selected as quality evaluation markers to optimize the alcohol purification of Fufang Kushen injection. Process parameters of the particulates formed during alcohol purification, such as their number, density, and sedimentation velocity, were also determined to define the sedimentation time and to better understand the process. The optimized purification process was as follows: alcohol is added to the concentrated extract solution (drug material) to a given concentration in two stages, and the alcohol solution containing the drug material is allowed to settle, i.e., 60% alcohol deposited for 36 hours, followed by filtration, and then 80%-90% alcohol deposited for 6 hours. The content of total alkaloids decreased slightly during the depositing process. The average settling times of particles with diameters of 10 and 25 μm were 157.7 and 25.2 h in the first alcohol-purification step, and 84.2 and 13.5 h in the second, respectively. The optimized alcohol-purification process retains the marker components better and, compared with the initial process, saves time and cost. The manufacturing quality of TCM injections can be controlled through the process, and a PAT scheme must be designed on the basis of a thorough understanding of the TCM production process.

  6. Efficacy of Cognitive Processes in Young People with High-Functioning Autism Spectrum Disorder Using a Novel Visual Information-Processing Task

    ERIC Educational Resources Information Center

    Speirs, Samantha J.; Rinehart, Nicole J.; Robinson, Stephen R.; Tonge, Bruce J.; Yelland, Gregory W.

    2014-01-01

    Autism spectrum disorders (ASD) are characterised by a unique pattern of preserved abilities and deficits within and across cognitive domains. The Complex Information Processing Theory proposes this pattern reflects an altered capacity to respond to cognitive demands. This study compared how complexity induced by time constraints on processing…

  7. A cost analysis comparing xeroradiography to film technics for intraoral radiography.

    PubMed

    Gratt, B M; Sickles, E A

    1986-01-01

    In the United States during 1978 $730 million was spent on dental radiographic services. Currently there are three alternatives for the processing of intraoral radiographs: manual wet-tanks, automatic film units, or xeroradiography. It was the intent of this study to determine which processing system is the most economical. Cost estimates were based on a usage rate of 750 patient images per month and included a calculation of the average cost per radiograph over a five-year period. Capital costs included initial processing equipment and site preparation. Operational costs included labor, supplies, utilities, darkroom rental, and breakdown costs. Clinical time trials were employed to measure examination times. Maintenance logs were employed to assess labor costs. Indirect costs of training were estimated. Results indicated that xeroradiography was the most cost effective ($0.81 per image) compared to either automatic film processing ($1.14 per image) or manual processing ($1.35 per image). Variations in projected costs indicated that if a dental practice performs primarily complete-mouth surveys, exposes less than 120 radiographs per month, and pays less than $6.50 per hour in wages, then manual (wet-tank) processing is the most economical method for producing intraoral radiographs.
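
    The cost comparison rests on simple amortization arithmetic: spread the capital cost over every image produced during the five-year horizon and add the ongoing operating cost per image. A minimal sketch of that calculation follows; the dollar figures are hypothetical placeholders, not the study's actual inputs.

    ```python
    # Sketch of the cost-per-image amortization described above. The dollar
    # figures are hypothetical placeholders, not the study's inputs.

    def cost_per_image(capital, monthly_operating, images_per_month=750,
                       horizon_months=60):
        """Average cost per radiograph over the horizon: capital is spread
        across every image produced; operating costs accrue monthly."""
        total = capital + monthly_operating * horizon_months
        return total / (images_per_month * horizon_months)

    print(cost_per_image(capital=15000.0, monthly_operating=450.0))  # e.g. xeroradiography
    print(cost_per_image(capital=4000.0,  monthly_operating=900.0))  # e.g. automatic film
    ```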

  8. Markovian prediction of future values for food grains in the economic survey

    NASA Astrophysics Data System (ADS)

    Sathish, S.; Khadar Babu, S. K.

    2017-11-01

    Nowadays, prediction and forecasting play a vital role in research. For prediction, regression is useful for estimating future and current values of a production process. In this paper, we assume that food grain production exhibits Markov chain dependency and time homogeneity. The predictive performance for the economic survey data is evaluated using a daily Markov chain model. Finally, the Markov process prediction gives better performance compared with the regression model.
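
    For readers unfamiliar with the approach, a first-order, time-homogeneous Markov chain predicts the next state of a discretized series purely from a transition matrix estimated from observed state-to-state counts. The sketch below illustrates the idea on a made-up production series; it is not the authors' code, and it assumes every state occurs at least once as a "from" state.

    ```python
    import numpy as np

    # Minimal sketch of first-order Markov prediction for a discretized series
    # (e.g., food-grain production binned into "low"/"medium"/"high" states).
    # The series below is made up for illustration; it must contain every
    # state at least once as a "from" state to avoid empty rows.

    states = ["low", "medium", "high"]
    series = ["low", "medium", "medium", "high", "medium", "high", "high", "medium"]

    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((3, 3))
    for a, b in zip(series, series[1:]):
        counts[idx[a], idx[b]] += 1

    # Row-normalize counts to get the time-homogeneous transition matrix P.
    P = counts / counts.sum(axis=1, keepdims=True)

    current = np.zeros(3)
    current[idx[series[-1]]] = 1.0
    next_dist = current @ P          # one-step-ahead state distribution
    print(states[int(next_dist.argmax())])
    ```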

  9. “Superluminal” FITS File Processing on Multiprocessors: Zero Time Endian Conversion Technique

    NASA Astrophysics Data System (ADS)

    Eguchi, Satoshi

    2013-05-01

    FITS is the standard file format in astronomy, and it has been extended to meet the astronomical needs of the day. However, astronomical datasets have been inflating year by year. In the case of the ALMA telescope, a ~TB-scale four-dimensional data cube may be produced for one target. Considering that typical Internet bandwidth is tens of MB/s at most, the original data cubes in FITS format are hosted on a VO server, and the region in which a user is interested should be cut out and transferred to the user (Eguchi et al. 2012). The system will be equipped with a very high-speed disk array to process a TB-scale data cube in 10 s, so disk I/O speed, endian conversion, and data processing speeds will be comparable. Hence, reducing the endian conversion time is one of the issues to solve in our system. In this article, I introduce a technique named "just-in-time endian conversion", which delays the endian conversion of each pixel until just before it is really needed, to hide the endian conversion time; by applying this method, the FITS processing speed increases by 20% for single threading and by 40% for multi-threading compared to CFITSIO. The speedup is tightly related to modern CPU architecture: relaxing the strict ordering ("causality") of the programmed instruction sequence improves the efficiency of the instruction pipelines.
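
    The core idea of just-in-time endian conversion can be illustrated outside of C and CFITSIO. In the Python/numpy sketch below (my illustration, not the author's implementation), a big-endian buffer is either converted eagerly in one pass, or viewed with a big-endian dtype so each value is byte-swapped only when touched; pixels that are never read are never converted.

    ```python
    import numpy as np

    # Sketch of "convert endianness only when a pixel is touched".
    # FITS stores pixels big-endian; on a little-endian host an eager pipeline
    # byte-swaps the whole cube up front, while a lazy one keeps the raw bytes
    # and swaps each value at the moment it is read.

    raw = np.arange(1_000_000, dtype=np.float32).astype(">f4").tobytes()

    # Eager: one pass over everything, paid before any processing starts.
    eager = np.frombuffer(raw, dtype=">f4").astype("<f4")

    # Lazy: view the raw buffer big-endian; numpy swaps per element on access,
    # so pixels that are never touched are never converted.
    lazy = np.frombuffer(raw, dtype=">f4")
    subset = lazy[::1000].astype("<f4")   # only ~0.1% of pixels converted

    assert np.allclose(eager[::1000], subset)
    ```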

  10. On the Application of Different Event-Based Sampling Strategies to the Control of a Simple Industrial Process

    PubMed Central

    Sánchez, José; Guarnes, Miguel Ángel; Dormido, Sebastián

    2009-01-01

    This paper is an experimental study of the utilization of different event-based strategies for the automatic control of a simple but very representative industrial process: the level control of a tank. In an event-based control approach it is the triggering of a specific event, and not the time, that instructs the sensor to send the current state of the process to the controller, and the controller to compute a new control action and send it to the actuator. In the document, five control strategies based on different event-based sampling techniques are described, compared, and contrasted with a classical time-based control approach and a hybrid one. The common denominator in the time, the hybrid, and the event-based control approaches is the controller: a proportional-integral algorithm with adaptations depending on the selected control approach. To compare and contrast each one of the hybrid and the pure event-based control algorithms with the time-based counterpart, the two tasks that a control strategy must achieve (set-point following and disturbance rejection) are independently analyzed. The experimental study provides new proof concerning the ability of event-based control strategies to minimize the data exchange among the control agents (sensors, controllers, actuators) when an error-free control of the process is not a hard requirement. PMID:22399975
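
    As a concrete illustration of one family of event-based strategies discussed here (send-on-delta sampling), the sketch below simulates a toy tank whose sensor transmits only when the level has moved by more than a threshold since the last transmission, at which point a proportional-integral controller computes a new action. The tank model, gains, and threshold are invented for illustration, not taken from the paper.

    ```python
    # Send-on-delta event-based sampling around a PI level controller.
    # The sensor transmits only when the level has moved by more than DELTA
    # since the last transmission; the controller then computes a new action.
    # Tank dynamics and tuning values are illustrative assumptions.

    KP, KI = 2.0, 0.5          # PI gains
    DELTA = 0.05               # send-on-delta threshold (m)
    DT = 0.1                   # simulation step (s)

    level, integral, u = 0.0, 0.0, 0.0
    setpoint, last_sent = 1.0, None
    transmissions = 0

    for step in range(2000):
        # Event condition: first sample, or level moved enough since last send.
        if last_sent is None or abs(level - last_sent) > DELTA:
            last_sent = level
            transmissions += 1
            error = setpoint - level
            integral += error * DT
            u = KP * error + KI * integral        # new control action
        # Toy first-order tank: inflow u, outflow proportional to level.
        level += DT * (0.2 * u - 0.1 * level)

    print(f"level={level:.3f}, transmissions={transmissions} of 2000 samples")
    ```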

  11. Adaptive Comparative Judgment: A Tool to Support Students' Assessment Literacy.

    PubMed

    Rhind, Susan M; Hughes, Kirsty J; Yool, Donald; Shaw, Darren; Kerr, Wesley; Reed, Nicki

    Comparative judgment in assessment is a process whereby repeated comparison of two items (e.g., assessment answers) can allow an accurate ranking of all the submissions to be achieved. In adaptive comparative judgment (ACJ), technology is used to automate the process and present pairs of pieces of work over iterative cycles. An online ACJ system was used to present students with work prepared by a previous cohort at the same stage of their studies. Objective marks given to the work by experienced faculty were compared to the rankings given to the work by a cohort of veterinary students (n=154). Each student was required to review and judge 20 answers provided by the previous cohort to a free-text short answer question. The time that students spent on the judgment tasks was recorded, and students were asked to reflect on their experiences after engaging with the task. There was a strong positive correlation between student ranking and faculty marking. A weak positive correlation was found between the time students spent on the judgments and their performance on the part of their own examination that contained questions in the same format. Slightly less than half of the students agreed that the exercise was a good use of their time, but 78% agreed that they had learned from the process. Qualitative data highlighted different levels of benefit from the simplest aspect of learning more about the topic to an appreciation of the more generic lessons to be learned.
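
    ACJ systems typically fit a Rasch or Bradley-Terry-type model to the accumulated pairwise judgments to recover a ranking. The sketch below shows a minimal Bradley-Terry fit via Hunter's MM iteration on an invented win matrix; it omits the adaptive pairing step that gives ACJ its name, and it assumes every item wins at least one comparison.

    ```python
    import numpy as np

    # Minimal Bradley-Terry sketch for turning pairwise judgments into a
    # ranking, the kind of model ACJ tools fit. wins[i, j] = number of times
    # item i beat item j; the judgments below are made up for illustration.

    wins = np.array([[0, 3, 4],
                     [1, 0, 3],
                     [0, 1, 0]], dtype=float)   # 3 scripts, toy data

    n = wins + wins.T                            # comparisons per pair
    p = np.ones(len(wins))                       # item "quality" parameters

    for _ in range(100):                         # MM iterations (Hunter 2004)
        for i in range(len(p)):
            denom = sum(n[i, j] / (p[i] + p[j])
                        for j in range(len(p)) if j != i)
            p[i] = wins[i].sum() / denom         # assumes each item has a win
        p /= p.sum()                             # fix the scale

    print(np.argsort(-p))                        # best-to-worst ranking
    ```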

  12. Design and implementation of laser target simulator in hardware-in-the-loop simulation system based on LabWindows/CVI and RTX

    NASA Astrophysics Data System (ADS)

    Tong, Qiujie; Wang, Qianqian; Li, Xiaoyang; Shan, Bin; Cui, Xuntai; Li, Chenyu; Peng, Zhong

    2016-11-01

    To satisfy real-time and generality requirements, a laser target simulator for a hardware-in-the-loop (semi-physical) simulation system based on an RTX + LabWindows/CVI platform is proposed in this paper. Compared with the upper/lower-computer simulation platform architecture used in most current real-time systems, this system offers better maintainability and portability. The system runs on the Windows platform, using the Windows RTX real-time extension subsystem to guarantee real-time performance, combined with a reflective-memory network to carry out real-time tasks such as computing the simulation model, transmitting the simulation data, and maintaining real-time communication. The real-time tasks of the simulation system run under the RTSS process. A graphical interface compiled with LabWindows/CVI handles the non-real-time tasks of the simulation, such as man-machine interaction and the display and storage of simulation data, which run under a Win32 process. Through the design of RTX shared memory and a task-scheduling algorithm, data interaction between the real-time RTSS process and the non-real-time Win32 process is accomplished. The experimental results show that the system has strong real-time performance, high stability, and high simulation accuracy, together with good human-computer interaction.

  13. Initial Results from Fitting Resolved Modes using HMI Intensity Observations

    NASA Astrophysics Data System (ADS)

    Korzennik, Sylvain G.

    2017-08-01

    The HMI project recently started processing the continuum intensity images following global helioseismology procedures similar to those used to process the velocity images. The spatial decomposition of these images has produced time series of spherical harmonic coefficients for degrees up to l=300, using a different apodization than the one used for velocity observations. The first 360 days of observations were processed and made available. I present initial results from fitting these time series using my state-of-the-art fitting methodology and compare the derived mode characteristics to those estimated using co-eval velocity observations.

  14. Disciplined rubidium oscillator with GPS selective availability

    NASA Technical Reports Server (NTRS)

    Dewey, Wayne P.

    1993-01-01

    A U.S. Department of Defense decision to implement GPS Selective Availability (S/A) continuously has made it necessary to modify Rubidium oscillator disciplining methods. One such method for reducing the effects of S/A on the oscillator disciplining process was developed which achieves results approaching pre-S/A GPS. The Satellite Hopping algorithm used to minimize the effects of S/A on the oscillator disciplining process is described, and the results of using this process are compared to those obtained prior to the implementation of S/A. Test results are from a TrueTime Rubidium-based Model GPS-DC timing receiver.

  15. Method and apparatus for signal processing in a sensor system for use in spectroscopy

    DOEpatents

    O'Connor, Paul [Bellport, NY]; DeGeronimo, Gianluigi [Nesconset, NY]; Grosholz, Joseph [Natrona Heights, PA]

    2008-05-27

    A method for processing pulses arriving randomly in time on at least one channel using multiple peak detectors includes asynchronously selecting a non-busy peak detector (PD) in response to a pulse-generated trigger signal, connecting the channel to the selected PD in response to the trigger signal, and detecting a pulse peak amplitude. Amplitude and time of arrival data are output in first-in first-out (FIFO) sequence. An apparatus includes trigger comparators to generate the trigger signal for the pulse-receiving channel, PDs, a switch for connecting the channel to the selected PD, and logic circuitry which maintains the write pointer. Also included, time-to-amplitude converters (TACs) convert time of arrival to analog voltage and an analog multiplexer provides FIFO output. A multi-element sensor system for spectroscopy includes detector elements, channels, trigger comparators, PDs, a switch, and a logic circuit with asynchronous write pointer. The system includes TACs, a multiplexer and analog-to-digital converter.
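
    The assignment logic of the apparatus can be pictured with a small event-driven simulation: each arriving pulse is routed to a currently non-busy peak detector, and amplitude/arrival-time pairs are read out first-in first-out. The sketch below is a toy model with invented pulse data and busy times, not a description of the patented circuitry.

    ```python
    from collections import deque

    # Toy event-driven sketch of the multi-peak-detector scheme: each incoming
    # pulse triggers assignment to a non-busy peak detector (PD), and
    # (arrival time, amplitude) pairs are read out in FIFO order.
    # Pulse data and the busy period are invented for illustration.

    N_PD = 4
    BUSY_TIME = 5.0                       # time a PD needs to process a pulse
    pd_free_at = [0.0] * N_PD             # when each PD becomes non-busy
    fifo = deque()

    pulses = [(0.0, 1.2), (1.0, 0.7), (2.0, 2.1), (3.0, 0.4), (9.0, 1.8)]

    for t, amplitude in pulses:           # (arrival time, peak amplitude)
        free = [i for i in range(N_PD) if pd_free_at[i] <= t]
        if not free:
            continue                      # all PDs busy: pulse is lost
        pd = free[0]                      # asynchronously select a non-busy PD
        pd_free_at[pd] = t + BUSY_TIME
        fifo.append((t, amplitude))       # FIFO preserves arrival order

    while fifo:
        print(fifo.popleft())
    ```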

  16. Hyperactivity in boys with attention-deficit/hyperactivity disorder (ADHD): The role of executive and non-executive functions.

    PubMed

    Hudec, Kristen L; Alderson, R Matt; Patros, Connor H G; Lea, Sarah E; Tarle, Stephanie J; Kasper, Lisa J

    2015-01-01

    Motor activity of boys (age 8-12 years) with (n=19) and without (n=18) ADHD was objectively measured with actigraphy across experimental conditions that varied with regard to demands on executive functions. Activity exhibited during two n-back (1-back, 2-back) working memory tasks was compared to activity during a choice-reaction time (CRT) task that placed relatively fewer demands on executive processes and during a simple reaction time (SRT) task that required mostly automatic processing with minimal executive demands. Results indicated that children in the ADHD group exhibited greater activity compared to children in the non-ADHD group. Further, both groups exhibited the greatest activity during conditions with high working memory demands, followed by the reaction time and control task conditions, respectively. The findings indicate that large-magnitude increases in motor activity are predominantly associated with increased demands on working memory, though demands on non-executive processes are sufficient to elicit small to moderate increases in motor activity as well. Published by Elsevier Ltd.

  17. Redefining the Data Pipeline Using GPUs

    NASA Astrophysics Data System (ADS)

    Warner, C.; Eikenberry, S. S.; Gonzalez, A. H.; Packham, C.

    2013-10-01

    There are two major challenges facing the next generation of data processing pipelines: 1) handling an ever increasing volume of data as array sizes continue to increase and 2) the desire to process data in near real-time to maximize observing efficiency by providing rapid feedback on data quality. Combining the power of modern graphics processing units (GPUs), relational database management systems (RDBMSs), and extensible markup language (XML) to re-imagine traditional data pipelines will allow us to meet these challenges. Modern GPUs contain hundreds of processing cores, each of which can process hundreds of threads concurrently. Technologies such as Nvidia's Compute Unified Device Architecture (CUDA) platform and the PyCUDA (http://mathema.tician.de/software/pycuda) module for Python allow us to write parallel algorithms and easily link GPU-optimized code into existing data pipeline frameworks. This approach has produced speed gains of over a factor of 100 compared to CPU implementations for individual algorithms and overall pipeline speed gains of a factor of 10-25 compared to traditionally built data pipelines for both imaging and spectroscopy (Warner et al., 2011). However, there are still many bottlenecks inherent in the design of traditional data pipelines. For instance, file input/output of intermediate steps is now a significant portion of the overall processing time. In addition, most traditional pipelines are not designed to be able to process data on-the-fly in real time. We present a model for a next-generation data pipeline that has the flexibility to process data in near real-time at the observatory as well as to automatically process huge archives of past data by using a simple XML configuration file. XML is ideal for describing both the dataset and the processes that will be applied to the data. Meta-data for the datasets would be stored using an RDBMS (such as mysql or PostgreSQL) which could be easily and rapidly queried and file I/O would be kept at a minimum. We believe this redefined data pipeline will be able to process data at the telescope, concurrent with continuing observations, thus maximizing precious observing time and optimizing the observational process in general. We also believe that using this design, it is possible to obtain a speed gain of a factor of 30-40 over traditional data pipelines when processing large archives of data.
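
    As a flavor of how GPU-optimized code links into a Python pipeline via PyCUDA, the sketch below compiles a trivial kernel at runtime and applies it to an image buffer. It is a minimal illustration (requiring an NVIDIA GPU and the pycuda package), not code from the cited pipeline; the kernel name and bias value are arbitrary.

    ```python
    import numpy as np
    import pycuda.autoinit                      # creates a CUDA context
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # A trivial "bias subtraction" kernel compiled at runtime and applied to
    # an image, with data moved to and from the GPU explicitly.
    mod = SourceModule("""
    __global__ void subtract_bias(float *img, float bias, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) img[i] -= bias;
    }
    """)
    subtract_bias = mod.get_function("subtract_bias")

    img = np.random.rand(1024 * 1024).astype(np.float32)
    n = np.int32(img.size)

    gpu_img = drv.mem_alloc(img.nbytes)
    drv.memcpy_htod(gpu_img, img)
    subtract_bias(gpu_img, np.float32(0.5), n,
                  block=(256, 1, 1), grid=((img.size + 255) // 256, 1))
    drv.memcpy_dtoh(img, gpu_img)
    ```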

  18. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    NASA Astrophysics Data System (ADS)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.

  19. Verbal Processing Reaction Times in "Normal" and "Poor" Readers.

    ERIC Educational Resources Information Center

    Culbertson, Jack; And Others

    After it had been determined that reaction time (RT) was a sensitive measure of hemispheric dominance in a verbal task performed by normal adult readers, the reaction times of three groups of subjects (20 normal reading college students, 12 normal reading third graders and 11 poor reading grade school students) were compared. Ss were exposed to…

  20. Patterns of Response Times and Response Choices to Science Questions: The Influence of Relative Processing Time

    ERIC Educational Resources Information Center

    Heckler, Andrew F.; Scaife, Thomas M.

    2015-01-01

    We report on five experiments investigating response choices and response times to simple science questions that evoke student "misconceptions," and we construct a simple model to explain the patterns of response choices. Physics students were asked to compare a physical quantity represented by the slope, such as speed, on simple physics…

  1. Early metacognitive abilities: the interplay of monitoring and control processes in 5- to 7-year-old children.

    PubMed

    Destan, Nesrin; Hembacher, Emily; Ghetti, Simona; Roebers, Claudia M

    2014-10-01

    The goal of the current investigation was to compare two monitoring processes (judgments of learning [JOLs] and confidence judgments [CJs]) and their corresponding control processes (allocation of study time and selection of answers to maximize accuracy, respectively) in 5-, 6-, and 7-year-old children (N=101). Children learned the meanings of Japanese characters and provided JOLs after a study phase and CJs after a memory test. They were given the opportunity to control their learning in self-paced study phases and to control their accuracy by placing correct answers in a treasure chest and placing incorrect answers in a trash can. All three age groups gave significantly higher CJs for correct answers compared with incorrect answers, with no age-related differences in the magnitude of this difference, suggesting robust metacognitive monitoring skills in children as young as 5 years. Furthermore, a link between JOLs and study time was found in 6- and 7-year-olds, such that children spent more time studying items with low JOLs compared with items with high JOLs. In addition, 6- and 7-year-olds, but not 5-year-olds, spent more time studying difficult items compared with easier items. Moreover, age-related improvements were found in children's use of CJs to guide their selection of answers; although children as young as 5 years placed their most confident answers in the treasure chest and placed their least confident answers in the trash can, this pattern was more robust in older children. Overall, results support the view that some metacognitive judgments may be acted on with greater ease than others among young children. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Time Constraints Experienced by Female Teacher Researchers in Canada and Turkey: Challenges to Developing an Autonomous Professional Learning Mindset

    ERIC Educational Resources Information Center

    Mitton-Kükner, Jennifer

    2016-01-01

    The focus of this comparative qualitative study is on female teachers' experiences as teacher researchers in Canada and Turkey as they worked towards the completion of their postgraduate degrees in the midst of teaching full-time. Attending carefully to participants' accounts of time use during the research process revealed heavy time pressure as…

  3. A reach-to-touch investigation on the nature of reading in the Stroop task.

    PubMed

    Tillman, Gabriel; Eidels, Ami; Finkbeiner, Matthew

    2016-11-01

    In a Stroop task, participants can be presented with a color name printed in color and need to classify the print color while ignoring the word. The Stroop effect is typically calculated as the difference in mean response time (RT) between congruent (e.g., the word RED printed in red) and incongruent (GREEN in red) trials. Delta plots compare not just mean performance, but the entire RT distributions of congruent and incongruent conditions. However, both mean RT and delta plots have some limitations. Arm-reaching trajectories allow a more continuous measure for assessing the time course of the Stroop effect. We compared arm movements to congruent and incongruent stimuli in a standard Stroop task and a control task that encourages processing of each and every word. The Stroop effect emerged over time in the control task, but not in the standard Stroop, suggesting words may be processed differently in the two tasks.
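
    A delta plot, as used above, compares the two RT distributions quantile by quantile rather than only at the mean: the congruency effect at each quantile is plotted against the mean RT of that quantile. A minimal sketch with simulated RTs follows; the distribution parameters are invented.

    ```python
    import numpy as np

    # Sketch of a delta plot: compare congruent and incongruent RT
    # distributions quantile by quantile rather than only at the mean.
    # RTs below are simulated stand-ins, not experimental data.

    rng = np.random.default_rng(0)
    congruent = rng.normal(550, 80, 400)     # simulated RTs (ms)
    incongruent = rng.normal(610, 100, 400)

    qs = np.arange(0.1, 1.0, 0.2)            # quantiles 0.1 ... 0.9
    qc = np.quantile(congruent, qs)
    qi = np.quantile(incongruent, qs)

    delta = qi - qc                          # Stroop effect per quantile
    mean_rt = (qi + qc) / 2                  # x-axis of the delta plot
    for x, d in zip(mean_rt, delta):
        print(f"mean RT {x:6.1f} ms -> effect {d:5.1f} ms")
    ```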

  4. A Controlled Agitation Process for Improving Quality of Canned Green Beans during Agitation Thermal Processing.

    PubMed

    Singh, Anika; Pratap Singh, Anubhav; Ramaswamy, Hosahalli S

    2016-06-01

    This work introduces the concept of a controlled agitation thermal process to reduce quality damage in liquid-particulate products during agitation thermal processing. Reciprocating agitation thermal processing (RA-TP) was used as the agitation thermal process. In order to reduce the impact of agitation, a new concept of "stopping agitation after sufficient development of cold-spot temperature" was proposed. Green beans were processed in No. 2 (307×409) cans filled with liquids of various consistencies (0% to 2% CMC) at various frequencies (1 to 3 Hz) of RA-TP using a full-factorial design, and heat penetration results were collected. The corresponding operator's process time to impart a 10-min process lethality (Fo) and the agitation time (AT) were calculated from the heat penetration results. Accordingly, products were processed again by stopping agitation according to 3 agitation regimes, namely: full-time agitation, equilibration-time agitation, and partial-time agitation. Processed products were photographed and tested for visual quality, color, texture, breakage of green beans, turbidity, and percentage of insoluble solids in the can liquid. Results showed that stopping agitation after sufficient development of the cold-spot temperature is an effective way of reducing product damage caused by agitation (for example, breakage of beans and leaching into the liquid). Agitation until a one-log temperature difference gave the best color, texture, and visual product quality for the low-viscosity liquid-particulate mixture, and extended agitation until equilibration time was best for high-viscosity products. Thus, it was shown that a controlled agitation thermal process is more effective in obtaining high product quality compared to a regular agitation thermal process. © 2016 Institute of Food Technologists®
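
    The operator's process time in such studies is fixed by the accumulated process lethality, commonly computed as Fo = ∫ 10^((T − 121.1 °C)/z) dt with z = 10 °C applied to the cold-spot temperature history. The sketch below integrates a synthetic heating curve; the temperature trace and target are illustrative, not the study's data.

    ```python
    import numpy as np

    # Sketch of the process-lethality calculation used to fix the operator's
    # process time: Fo accumulates 10**((T - 121.1)/z) over the cold-spot
    # temperature history (Tref = 121.1 C, z = 10 C). The temperature trace
    # below is synthetic, not measured data.

    T_REF, Z = 121.1, 10.0
    dt = 0.5                                        # minutes between readings
    t = np.arange(0, 60, dt)
    T = 121.0 - 95.0 * np.exp(-t / 12.0)            # toy cold-spot heating curve

    lethal_rate = 10.0 ** ((T - T_REF) / Z)
    Fo = np.cumsum(lethal_rate) * dt                # accumulated lethality (min)

    # Operator stops heating once the target (here Fo = 10 min) is reached.
    target_idx = np.argmax(Fo >= 10.0)
    print(f"Fo = 10 min reached at t = {t[target_idx]:.1f} min"
          if Fo[-1] >= 10.0 else "target not reached in 60 min")
    ```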

  5. Using rapid infrared forming to control interfaces in titanium-matrix composites

    NASA Technical Reports Server (NTRS)

    Warrier, Sunil G.; Lin, Ray Y.

    1993-01-01

    Control of the fiber-matrix reaction during composite fabrication is commonly achieved by shortening the processing time, coating the reinforcement with relatively inert materials, or adding alloying elements to retard the reaction. To minimize the processing time, a rapid IR forming (RIF) technique for metal-matrix composite fabrication has been developed. Experiments have shown that the RIF technique is a quick, simple, and low-cost process to fabricate titanium-alloy matrix composites reinforced with either silicon carbide or carbon fibers. Due to short processing times (typically on the order of 1-2 minutes in an inert atmosphere for composites with up to eight-ply reinforcements), the interfacial reaction is limited and well controlled. Composites fabricated by this technique have mechanical properties that are comparable to (in several cases, superior to) those made with conventional diffusion-bonding techniques.

  6. How First-year Students Expressed Their Transition to College Experiences Differently Depending on the Affordances of Two Writing Contexts.

    PubMed

    Kreniske, Philip

    2017-09-01

    Drawing on theory that positions writing as a social process, this study compares how two distinct contexts influenced the linguistic features of college students' writing over time. In one context, students blogged and received comments, while in the other context students word-processed and received no comments. Systematic qualitative and quantitative analyses of these natural language posts and comments indicated the bloggers used greater rates of cognitive and intensifying expressions in their writing over time than students who word-processed. These results suggest that the affordances of the context influenced narrators' expressive writing over time. The current findings have significance for scholars seeking to understand connections between interactive media, writing processes, and audience, and for college programs across the U.S. that provide support for first-year students.

  7. Studies on the use of power ultrasound in leather dyeing.

    PubMed

    Sivakumar, Venkatasubramanian; Rao, Paruchuri Gangadhar

    2003-03-01

    The use of power ultrasound to accelerate or carry out chemical as well as physical processes is gaining importance. In conventional leather processing, the diffusion of chemicals through the pores of the skin/hide is achieved by the mechanical agitation caused by the paddle or drumming action. In this work, the use of power ultrasound in the dyeing of leather has been studied with the aim of improving the exhaustion of dye for a given processing time, reducing the dyeing time, and improving the quality of dyed leather. The effect of power ultrasound in the dyeing of full chrome cow crust leather under stationary conditions is compared with dyeing in the absence of ultrasound as a control experiment, under both stationary and conventional drumming conditions. An ultrasonic cleaner (150 W and 33 kHz) was used for the experiments. The actual power dissipated into the system was calculated from calorimetric measurement. Experiments were carried out with variation in the type of dye, the amount of dye offered, temperature, and time. The results show a significant improvement in the percentage exhaustion of dye in the presence of ultrasound, compared to dyeing in its absence. Experiments on equilibrium dye uptake carried out with and without ultrasound suggest that ultrasound helps to improve the kinetics of leather dyeing. The results indicate that leathers dyed in the presence of ultrasound have higher colour values, better dye penetration, and better fastness properties than control leathers. The physical testing results show that the strength properties of the dyed leathers are not affected by the application of ultrasound under the given process conditions. The apparent diffusion coefficient during the initial stage of the dyeing process was calculated both in the presence and in the absence of ultrasound. The values show that ultrasound improves the apparent diffusion coefficient more under difficult dyeing conditions, such as with metal-complex dyes having bigger aggregate sizes, than under less difficult dyeing conditions.

  8. Lexical Processes in the Recognition of Japanese Horizontal and Vertical Compounds

    ERIC Educational Resources Information Center

    Miwa, Koji; Dijkstra, Ton

    2017-01-01

    This lexical decision eye-tracking study investigated whether horizontal and vertical readings elicit comparable behavioral patterns and whether reading directions modulate lexical processes. Response times and eye movements were recorded during a lexical decision task with Japanese bimorphemic compound words presented vertically. The data were…

  9. Status of CSR RL06 GRACE reprocessing and preliminary results

    NASA Astrophysics Data System (ADS)

    Save, H.

    2017-12-01

    The GRACE project plans to re-process the GRACE mission data in order to be consistent with the first gravity products released by the GRACE-FO project. The RL06 reprocessing will harmonize the GRACE time series with the first release of GRACE-FO. This paper catalogues the changes in the upcoming RL06 release and discusses its quality improvements relative to the current RL05 release. The processing and parameterization changes relative to the current release are also discussed. The paper further discusses the evolution of the quality of the GRACE solutions, characterizes the errors over the past few years, and addresses the possible challenges associated with connecting the GRACE time series with that from GRACE-FO.

  10. Social and monetary reward processing in autism spectrum disorders

    PubMed Central

    2012-01-01

    Background Social motivation theory suggests that deficits in social reward processing underlie social impairments in autism spectrum disorders (ASD). However, the extent to which abnormalities in reward processing generalize to other classes of stimuli remains unresolved. The aim of the current study was to examine if reward processing abnormalities in ASD are specific to social stimuli or can be generalized to other classes of reward. Additionally, we sought to examine the results in the light of behavioral impairments in ASD. Methods Participants performed adapted versions of the social and monetary incentive delay tasks. Data from 21 unmedicated right-handed male participants with ASD and 21 age- and IQ-matched controls were analyzed using a factorial design to examine the blood-oxygen-level-dependent (BOLD) response during the anticipation and receipt of both reward types. Results Behaviorally, the ASD group showed less of a reduction in reaction time (RT) for rewarded compared to unrewarded trials than the control group. In terms of the fMRI results, there were no significant group differences in reward circuitry during reward anticipation. During the receipt of rewards, there was a significant interaction between group and reward type in the left dorsal striatum (DS). The ASD group showed reduced activity in the DS compared to controls for social rewards but not monetary rewards and decreased activation for social rewards compared to monetary rewards. Controls showed no significant difference between the two reward types. Increased activation in the DS during social reward processing was associated with faster response times for rewarded trials, compared to unrewarded trials, in both groups. This is in line with behavioral results indicating that the ASD group showed less of a reduction in RT for rewarded compared to unrewarded trials. Additionally, de-activation to social rewards was associated with increased repetitive behavior in ASD. Conclusions In line with social motivation theory, the ASD group showed reduced activation, compared to controls, during the receipt of social rewards in the DS. Groups did not differ significantly during the processing of monetary rewards. BOLD activation in the DS, during social reward processing, was associated with behavioral impairments in ASD. PMID:23014171

  11. Social and monetary reward processing in autism spectrum disorders.

    PubMed

    Delmonte, Sonja; Balsters, Joshua H; McGrath, Jane; Fitzgerald, Jacqueline; Brennan, Sean; Fagan, Andrew J; Gallagher, Louise

    2012-09-26

    Social motivation theory suggests that deficits in social reward processing underlie social impairments in autism spectrum disorders (ASD). However, the extent to which abnormalities in reward processing generalize to other classes of stimuli remains unresolved. The aim of the current study was to examine if reward processing abnormalities in ASD are specific to social stimuli or can be generalized to other classes of reward. Additionally, we sought to examine the results in the light of behavioral impairments in ASD. Participants performed adapted versions of the social and monetary incentive delay tasks. Data from 21 unmedicated right-handed male participants with ASD and 21 age- and IQ-matched controls were analyzed using a factorial design to examine the blood-oxygen-level-dependent (BOLD) response during the anticipation and receipt of both reward types. Behaviorally, the ASD group showed less of a reduction in reaction time (RT) for rewarded compared to unrewarded trials than the control group. In terms of the fMRI results, there were no significant group differences in reward circuitry during reward anticipation. During the receipt of rewards, there was a significant interaction between group and reward type in the left dorsal striatum (DS). The ASD group showed reduced activity in the DS compared to controls for social rewards but not monetary rewards and decreased activation for social rewards compared to monetary rewards. Controls showed no significant difference between the two reward types. Increased activation in the DS during social reward processing was associated with faster response times for rewarded trials, compared to unrewarded trials, in both groups. This is in line with behavioral results indicating that the ASD group showed less of a reduction in RT for rewarded compared to unrewarded trials. Additionally, de-activation to social rewards was associated with increased repetitive behavior in ASD. In line with social motivation theory, the ASD group showed reduced activation, compared to controls, during the receipt of social rewards in the DS. Groups did not differ significantly during the processing of monetary rewards. BOLD activation in the DS, during social reward processing, was associated with behavioral impairments in ASD.

  12. Effect of multiple forming tools on geometrical and mechanical properties in incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Wernicke, S.; Dang, T.; Gies, S.; Tekkaya, A. E.

    2018-05-01

    The tendency toward a higher variety of products requires economical manufacturing processes suitable for the production of prototypes and small batches. In the case of complex hollow-shaped parts, single point incremental forming (SPIF) represents a highly flexible process. The flexibility of this process comes along with a very long process time. To decrease the process time, a new incremental forming approach with multiple forming tools is investigated. The influence of two incremental forming tools on the resulting mechanical and geometrical component properties, compared to SPIF, is presented. Sheets made of EN AW-1050A were formed into frustums of a pyramid using different tool-path strategies. Furthermore, several variations of the tool-path strategy are analyzed. A time saving between 40% and 60% was observed, depending on the tool-path and the radii of the forming tools, while the mechanical properties remained unchanged. This knowledge can increase the cost efficiency of incremental forming processes.

  13. Automated processing of whole blood units: operational value and in vitro quality of final blood components

    PubMed Central

    Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz

    2012-01-01

    Background The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Materials and methods Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. Results The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. Discussion These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement. PMID:22044958

  14. Automated processing of whole blood units: operational value and in vitro quality of final blood components.

    PubMed

    Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz

    2012-01-01

    The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement.

  15. Applying Toyota Production System principles to a psychiatric hospital: making transfers safer and more timely.

    PubMed

    Young, John Q; Wachter, Robert M

    2009-09-01

    Health care organizations have increasingly embraced industrial methods, such as the Toyota Production System (TPS), to improve quality, safety, timeliness, and efficiency. However, the use of such methods in psychiatric hospitals has been limited. A psychiatric hospital applied TPS principles to patient transfers to the outpatient medication management clinics (MMCs) from all other inpatient and outpatient services within the hospital's system. Sources of error and delay were identified, and a new process was designed to improve timely access (measured by elapsed time from request for transfer to scheduling of an appointment and to the actual visit) and patient safety by decreasing communication errors (measured by number of failed transfers). Complexity was substantially reduced, with one streamlined pathway replacing five distinct and more complicated pathways. To assess sustainability, the postintervention period was divided into Period 1 (first 12 months) and Period 2 (next 24 months). Time required to process the transfer and schedule the first appointment was reduced by 74.1% in Period 1 (p < .001) and by an additional 52.7% in Period 2 (p < .0001) for an overall reduction of 87% (p < .0001). Similarly, time to the actual appointment was reduced 31.2% in Period 1 (p < .0001), but was stable in Period 2 (p = .48). The number of transfers per month successfully processed and scheduled increased 95% in the postintervention period compared with the pre-implementation period (p = .015). Finally, data for failed transfers were only available for the postintervention period, and the rate decreased 89% in Period 2 compared with Period 1 (p = .017). The application of TPS principles enhanced access and safety through marked and sustained improvements in the transfer process's timeliness and reliability. Almost all transfer processes have now been standardized.

  16. State of the evidence on simulation-based training for laparoscopic surgery: a systematic review.

    PubMed

    Zendejas, Benjamin; Brydges, Ryan; Hamstra, Stanley J; Cook, David A

    2013-04-01

    Summarize the outcomes and best practices of simulation training for laparoscopic surgery. Simulation-based training for laparoscopic surgery has become a mainstay of surgical training. Much new evidence has accrued since previous reviews were published. We systematically searched the literature through May 2011 for studies evaluating simulation, in comparison with no intervention or an alternate training activity, for training health professionals in laparoscopic surgery. Outcomes were classified as satisfaction, knowledge, skills (in a test setting) of time (to perform the task), process (e.g., performance rating), product (e.g., knot strength), and behaviors when caring for patients. We used random effects to pool effect sizes. From 10,903 articles screened, we identified 219 eligible studies enrolling 7138 trainees, including 91 (42%) randomized trials. For comparisons with no intervention (n = 151 studies), pooled effect size (ES) favored simulation for outcomes of knowledge (1.18; N = 9 studies), skills time (1.13; N = 89), skills process (1.23; N = 114), skills product (1.09; N = 7), behavior time (1.15; N = 7), behavior process (1.22; N = 15), and patient effects (1.28; N = 1), all P < 0.05. When compared with nonsimulation instruction (n = 3 studies), results significantly favored simulation for outcomes of skills time (ES, 0.75) and skills process (ES, 0.54). Comparisons between different simulation interventions (n = 79 studies) clarified best practices. For example, in comparison with virtual reality, box trainers have similar effects for process skills outcomes and seem to be superior for outcomes of satisfaction and skills time. Simulation-based laparoscopic surgery training of health professionals has large benefits when compared with no intervention and is moderately more effective than nonsimulation instruction.
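
    Random-effects pooling of effect sizes, as used in this review, is commonly done with the DerSimonian-Laird estimator: compute the heterogeneity statistic Q under fixed-effect weights, derive the between-study variance tau², and re-weight. The sketch below uses invented study data, not the review's.

    ```python
    import numpy as np

    # Sketch of random-effects pooling of effect sizes (DerSimonian-Laird).
    # yi are per-study standardized mean differences, vi their variances;
    # the numbers are invented for illustration.

    yi = np.array([1.30, 0.90, 1.15, 1.45, 0.70])   # study effect sizes
    vi = np.array([0.10, 0.05, 0.08, 0.20, 0.06])   # within-study variances

    w = 1.0 / vi                                    # fixed-effect weights
    y_fe = (w * yi).sum() / w.sum()
    Q = (w * (yi - y_fe) ** 2).sum()                # heterogeneity statistic
    df = len(yi) - 1
    tau2 = max(0.0, (Q - df) / (w.sum() - (w**2).sum() / w.sum()))

    w_re = 1.0 / (vi + tau2)                        # random-effects weights
    pooled = (w_re * yi).sum() / w_re.sum()
    se = np.sqrt(1.0 / w_re.sum())
    print(f"pooled ES = {pooled:.2f} +/- {1.96*se:.2f} (tau^2 = {tau2:.3f})")
    ```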

  17. Applications of satellite image processing to the analysis of Amazonian cultural ecology

    NASA Technical Reports Server (NTRS)

    Behrens, Clifford A.

    1991-01-01

    This paper examines the application of satellite image processing towards identifying and comparing resource exploitation among indigenous Amazonian peoples. The use of statistical and heuristic procedures for developing land cover/land use classifications from Thematic Mapper satellite imagery will be discussed along with actual results from studies of relatively small (100 - 200 people) settlements. Preliminary research indicates that analysis of satellite imagery holds great potential for measuring agricultural intensification, comparing rates of tropical deforestation, and detecting changes in resource utilization patterns over time.

  18. Emotional words facilitate lexical but not early visual processing.

    PubMed

    Trauer, Sophie M; Kotz, Sonja A; Müller, Matthias M

    2015-12-12

    Emotional scenes and faces have been shown to capture and bind visual resources at early sensory processing stages, i.e., in early visual cortex. However, studies of emotional words have produced mixed results. In the current study, ERPs were assessed simultaneously with steady-state visual evoked potentials (SSVEPs) to measure attention effects on early visual activity in emotional word processing. Neutral and negative words were flickered at 12.14 Hz whilst participants performed a lexical decision task. Emotional word content did not modulate the 12.14 Hz SSVEP amplitude, and neither did word lexicality. However, emotional words affected the ERP. Negative compared to neutral words, as well as words compared to pseudowords, led to enhanced deflections in the P2 time range, indicative of lexico-semantic access. The N400 was reduced for negative compared to neutral words and enhanced for pseudowords compared to words, indicating facilitated semantic processing of emotional words. LPC amplitudes reflected word lexicality and thus the task-relevant response. In line with previous ERP and imaging evidence, the present results indicate that the processing of written emotional words is facilitated only subsequent to visual analysis.

  19. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    PubMed

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies showed that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight University students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300ms (short) and 900ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration but we observed higher temporal variability under anodic stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability was observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality independent role of A1 in temporal processing and a modality specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times.

    PubMed

    Yang, Xin; Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP with consideration of the completion time and the total energy consumption under stochastic processing times. A case study on GM Corporation verifies that the NSGA-II used in this paper is effective and has advantages in solving the proposed model compared with HPSO and PSO+SA. The idea and method of the paper can be generalized widely in the manufacturing industry, because they can reduce the energy consumption of energy-intensive manufacturing enterprises with little investment when the new approach is applied to existing systems.
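
    At the heart of NSGA-II is Pareto dominance over the objective vectors, here (completion time, energy consumption), and the sorting of candidate schedules into non-dominated fronts. The sketch below extracts the first front from a handful of invented objective pairs; it is a minimal illustration of that core comparison, not the authors' implementation.

    ```python
    # Sketch of the bi-objective core of NSGA-II: Pareto dominance over
    # (completion time, energy) pairs and extraction of the first
    # non-dominated front. Objective values are made up for illustration.

    def dominates(a, b):
        """a dominates b if it is no worse in both objectives and better in one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def first_front(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    schedules = [(42.0, 910.0), (45.0, 840.0), (50.0, 820.0),
                 (44.0, 930.0), (41.0, 990.0)]   # (makespan h, energy kWh)
    print(first_front(schedules))                # (44.0, 930.0) is dominated
    ```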

  1. Bi-Objective Flexible Job-Shop Scheduling Problem Considering Energy Consumption under Stochastic Processing Times

    PubMed Central

    Zeng, Zhenxiang; Wang, Ruidong; Sun, Xueshan

    2016-01-01

    This paper presents a novel method for the optimization of the bi-objective Flexible Job-shop Scheduling Problem (FJSP) under stochastic processing times. The robust counterpart model and the Non-dominated Sorting Genetic Algorithm II (NSGA-II) are used to solve the bi-objective FJSP with consideration of the completion time and the total energy consumption under stochastic processing times. A case study on GM Corporation verifies that the NSGA-II used in this paper is effective and has advantages in solving the proposed model compared with HPSO and PSO+SA. The idea and method of the paper can be generalized widely in the manufacturing industry, because they can reduce the energy consumption of energy-intensive manufacturing enterprises with little investment when the new approach is applied to existing systems. PMID:27907163

  2. The combined positive impact of Lean methodology and Ventana Symphony autostainer on histology lab workflow

    PubMed Central

    2010-01-01

    Background Histologic samples all funnel through the H&E microtomy staining area. Here manual processes intersect with semi-automated processes creating a bottleneck. We compare alternate work processes in anatomic pathology primarily in the H&E staining work cell. Methods We established a baseline measure of H&E process impact on personnel, information management and sample flow from historical workload and production data and direct observation. We compared this to performance after implementing initial Lean process modifications, including workstation reorganization, equipment relocation and workflow levelling, and the Ventana Symphony stainer to assess the impact on productivity in the H&E staining work cell. Results Average time from gross station to assembled case decreased by 2.9 hours (12%). Total process turnaround time (TAT) exclusive of processor schedule changes decreased 48 minutes/case (4%). Mean quarterly productivity increased 8.5% with the new methods. Process redesign reduced the number of manual steps from 219 to 182, a 17% reduction. Specimen travel distance was reduced from 773 ft/case to 395 ft/case (49%) overall, and from 92 to 53 ft/case in the H&E cell (42% improvement). Conclusions Implementation of Lean methods in the H&E work cell of histology can result in improved productivity, improved through-put and case availability parameters including TAT. PMID:20181123

  3. Masticatory motion after surgical or nonsurgical treatment for unilateral fractures of the mandibular condylar process.

    PubMed

    Throckmorton, Gaylord S; Ellis, Edward; Hayasaki, Haruaki

    2004-02-01

    We sought to compare mandibular motion during mastication in patients treated in either an open or a closed fashion for unilateral fractures of the mandibular condylar process. Eighty-one male patients with unilateral condylar process fractures were treated either with (n = 37) or without (n = 44) surgical reduction and rigid fixation of their condylar process fractures. At 6 weeks, 6 months, 1 year, and 2 years after treatment, the subjects' chewing cycles were recorded using a magnetic sensor array (Sirognathograph; Siemens Corp, Bensheim, Germany) while chewing Gummi-Bears (HARIBO, Bonn, Germany) unilaterally on the same side as the fracture and on the opposite side. The chewing cycles were analyzed using a custom computer program, and the duration, excursive ranges, and 3-dimensional cycle shape were compared between the 2 treatment groups at each time interval using multilevel linear modeling statistics. The 2 treatment groups did not differ significantly for any measure of cycle duration or any excursive range (except lateral excursions at 1 year post-treatment) at any of the time intervals. However, the 3-dimensional cycle shapes of the 2 groups did differ significantly at all time intervals. Surgical correction of unilateral condylar process fractures has relatively little effect on the more standard measures (duration and excursive ranges) of masticatory function. However, surgical correction better normalizes opening incisor pathways during mastication on the side opposite the fracture.

  4. Time processing impairments in preschoolers at risk of developing difficulties in mathematics.

    PubMed

    Tobia, Valentina; Rinaldi, Luca; Marzocchi, Gian Marco

    2018-03-01

    The occurrence of time processing problems in individuals with Development Dyscalculia (DD) has favored the view of a general magnitude system devoted to both numerical and temporal information. Yet, this scenario has been partially challenged by studies indicating that time difficulties can be attributed to poor calculation or counting skills, which can support reasoning on time in school-aged children and adults. Here, we tackle this debate by exploring the performance of young children before they fully develop the symbolic number system. Preschoolers at risk of developing DD were compared with typically developing children in a series of tasks investigating time processing and in their 'sense of time', evaluated by parents and teachers. Results yielded a poorer performance in time reproduction of 5-second intervals and in time discrimination, as well as a weaker 'sense of time', in children at risk of DD. These findings provide evidence of a common magnitude system that would be responsible for deficits in both numerical and temporal domains, already at early stages of life. © 2016 John Wiley & Sons Ltd.

  5. Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification

    NASA Astrophysics Data System (ADS)

    Sharif, I.; Khare, S.

    2014-11-01

    With the number of channels in the hundreds rather than in the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data challenges current techniques for analyzing it. Conventional classification methods may not be useful without dimension-reduction pre-processing, so dimension reduction has become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in image classification. Spectral data reduction using wavelet decomposition can be useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction yields better-separated classes and better or comparable classification accuracy. In the context of the dimensionality-reduction algorithm, the classification performance of Daubechies wavelets is better than that of the Haar wavelet, although Daubechies takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
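
    In this style of dimensionality reduction, each pixel's spectral vector is passed through a discrete wavelet decomposition and only the coarse approximation coefficients are kept as features. A minimal sketch using the PyWavelets package follows; the cube is random stand-in data and the decomposition level is an arbitrary choice.

    ```python
    import numpy as np
    import pywt

    # Sketch of wavelet-based spectral dimensionality reduction: each pixel's
    # band vector is decomposed and only the coarse approximation is kept as
    # the reduced feature vector. The "cube" here is random stand-in data.

    cube = np.random.rand(64, 64, 224)               # rows x cols x bands
    pixels = cube.reshape(-1, 224)

    def reduce_spectra(pixels, wavelet, level=3):
        # wavedec returns [cA_level, cD_level, ..., cD_1]; keep cA only.
        return np.array([pywt.wavedec(p, wavelet, level=level)[0]
                         for p in pixels])

    haar_feats = reduce_spectra(pixels, "haar")      # step-like basis
    db4_feats = reduce_spectra(pixels, "db4")        # smoother, longer filters
    print(haar_feats.shape, db4_feats.shape)         # 224 bands -> ~30 coeffs
    ```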

  6. The effects of the framing of time on delay discounting.

    PubMed

    DeHart, William Brady; Odum, Amy L

    2015-01-01

    We examined the effects of the framing of time on delay discounting. Delay discounting is the process by which delayed outcomes are devalued as a function of time. Time in a titrating delay discounting task is often framed in calendar units (e.g., as 1 week, 1 month, etc.). When time is framed as a specific date, delayed outcomes are discounted less compared to the calendar format. Other forms of framing time; however, have not been explored. All participants completed a titrating calendar unit delay-discounting task for money. Participants were also assigned to one of two delay discounting tasks: time as dates (e.g., June 1st, 2015) or time in units of days (e.g., 5000 days), using the same delay distribution as the calendar delay-discounting task. Time framed as dates resulted in less discounting compared to the calendar method, whereas time framed as days resulted in greater discounting compared to the calendar method. The hyperboloid model fit best compared to the hyperbola and exponential models. How time is framed may alter how participants attend to the delays as well as how the delayed outcome is valued. Altering how time is framed may serve to improve adherence to goals with delayed outcomes. © Society for the Experimental Analysis of Behavior.
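
    The hyperboloid model referred to above is usually written V = A/(1 + kD)^s, where V is the present value of amount A delayed by D, k indexes the discounting rate, and s scales delay sensitivity. The sketch below fits it to invented indifference points with scipy; it is illustrative, not the study's analysis.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Sketch of fitting the hyperboloid discounting model V = A / (1 + kD)**s
    # (with amount A fixed at 1, so V is the fraction of the delayed amount).
    # The indifference points below are invented for illustration.

    def hyperboloid(D, k, s):
        return 1.0 / (1.0 + k * D) ** s

    delays = np.array([7, 30, 90, 180, 365, 1825])          # days
    indiff = np.array([0.95, 0.85, 0.70, 0.60, 0.45, 0.20]) # fraction of amount

    (k, s), _ = curve_fit(hyperboloid, delays, indiff, p0=[0.01, 1.0],
                          bounds=([1e-6, 0.01], [10.0, 10.0]))
    print(f"k={k:.4f}, s={s:.2f}")   # larger k => steeper discounting
    ```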

  7. A New Performance Improvement Model: Adding Benchmarking to the Analysis of Performance Indicator Data.

    PubMed

    Al-Kuwaiti, Ahmed; Homa, Karen; Maruthamuthu, Thennarasu

    2016-01-01

    A performance improvement model was developed that focuses on the analysis and interpretation of performance indicator (PI) data using statistical process control and benchmarking. PIs are suitable for comparison with benchmarks only if the data fall within the statistically accepted limits, that is, show only random variation. Specifically, if there is no significant special-cause variation over a period of time, the data are ready to be benchmarked. The proposed Define, Measure, Control, Internal Threshold, and Benchmark model is adapted from the Define, Measure, Analyze, Improve, Control (DMAIC) model and consists of five steps: Step 1. Define the process; Step 2. Monitor and measure the variation over time; Step 3. Check the variation of the process; if stable (no significant variation), go to Step 4, otherwise control variation with the help of an action plan; Step 4. Develop an internal threshold and compare the process with it; Step 5.1. Compare the process with an internal benchmark; and Step 5.2. Compare the process with an external benchmark. The steps are illustrated with health care-associated infection (HAI) data collected for 2013 and 2014 from the Infection Control Unit, King Fahd Hospital, University of Dammam, Saudi Arabia. Monitoring variation is an important strategy in understanding and learning about a process. In the example, HAI was monitored for variation in 2013, and the need for a more predictable process prompted controlling variation through an action plan. The action plan was successful, as noted by the shift in the 2014 data compared to the historical average, and the variation was reduced. The model is subject to limitations: for example, it cannot be used without benchmarks, which need to be calculated the same way for similar patient populations, and it focuses only on the "Analyze" part of the DMAIC model.
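
    A minimal sketch of the stability check that gates benchmarking in the model above: an individuals-type (XmR) control chart flags special-cause variation as points outside the 3-sigma limits. The monthly infection rates below are invented for illustration.

```python
import numpy as np

# Hypothetical monthly infection rates (per 1000 patient-days).
rates = np.array([3.1, 2.8, 3.4, 2.9, 3.0, 5.9, 3.2, 2.7, 3.3, 3.0, 2.9, 3.1])

center = rates.mean()
# Moving-range estimate of sigma (d2 = 1.128 for subgroups of size 2).
sigma = np.abs(np.diff(rates)).mean() / 1.128
ucl, lcl = center + 3 * sigma, center - 3 * sigma

special_cause = np.flatnonzero((rates > ucl) | (rates < lcl))
if special_cause.size:
    print("Special-cause variation at months:", special_cause + 1)
else:
    print("Process stable; ready to compare against internal/external benchmarks.")
```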

  8. Recharge processes and vertical transfer investigated through long-term monitoring of dissolved gases in shallow groundwater

    NASA Astrophysics Data System (ADS)

    de Montety, V.; Aquilina, L.; Labasque, T.; Chatton, E.; Fovet, O.; Ruiz, L.; Fourré, E.; de Dreuzy, J. R.

    2018-05-01

    We investigated temporal variations and the vertical evolution of dissolved gaseous tracers (CFC-11, CFC-12, SF6, and noble gases), as well as the 3H/3He ratio, to determine the groundwater recharge processes of a shallow unconfined hard-rock aquifer in an agricultural catchment. We sampled dissolved gas concentrations at 4 locations along the hillslope of a small experimental watershed over 6 hydrological years, between 2 and 6 times per year, for a total of 20 field campaigns. We collected groundwater samples in the fluctuation zone and the permanently saturated zone using piezometers 5 to 20 m deep. The purpose of this work is i) to assess the benefits of using gaseous tracers like CFCs and SF6 to study very young groundwater with flows suspected to be heterogeneous and variable in time, ii) to characterize the processes that control dissolved gas concentrations in groundwater during recharge of the aquifer, and iii) to understand the evolution of recharge flow processes through repeated measurement campaigns, taking advantage of long-term monitoring at a site devoted to the investigation of recharge processes. Gas tracer profiles are compared at different locations in the catchment and under different hydrologic conditions. In addition, we compare results from CFC and 3H/3He analyses to define the flow model that best explains tracer concentrations. We then discuss the influence of recharge events on tracer concentrations and residence times, and propose a temporal evolution of residence times for the unsaturated zone and the permanently saturated zone. These results are used to gain a better understanding of the conceptual model of the catchment and of flow processes, especially during recharge events.

  9. A Simple Approach for Monitoring Business Service Time Variation

    PubMed Central

    2014-01-01

    Control charts are effective tools for signal detection in both manufacturing and service processes. Much of the data in service industries comes from processes with nonnormal or unknown distributions, so the commonly used Shewhart variable control charts, which depend heavily on the normality assumption, are not appropriate. In this paper, we propose a new asymmetric EWMA variance chart (EWMA-AV chart) and an asymmetric EWMA mean chart (EWMA-AM chart) based on two simple statistics to monitor process variance and mean shifts simultaneously. Further, we explore the sampling properties of the new monitoring statistics and calculate the average run lengths when using both charts. The performance of the EWMA-AV and EWMA-AM charts is compared with that of existing variance and mean charts. A numerical example involving nonnormal service times from the service system of a bank branch in Taiwan illustrates the application of the charts and the comparison with existing variance (or standard deviation) and mean charts. The proposed EWMA-AV and EWMA-AM charts show superior detection performance and are thus recommended. PMID:24895647
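
    The proposed EWMA-AV/EWMA-AM statistics themselves are not reproduced in this record; the sketch below shows only the generic EWMA recursion and time-varying control limits that charts of this family build on. The smoothing constant, limit width, and service-time data are assumptions.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """Generic EWMA recursion z_i = lam*x_i + (1-lam)*z_{i-1}
    with time-varying L-sigma limits."""
    mu, sigma = x.mean(), x.std(ddof=1)  # in practice: in-control estimates
    z = np.empty(len(x))
    z_prev = mu
    limits = []
    for i, xi in enumerate(x, start=1):
        z_prev = lam * xi + (1 - lam) * z_prev
        z[i - 1] = z_prev
        half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        limits.append((mu - half, mu + half))
    return z, np.array(limits)

rng = np.random.default_rng(1)
service_times = rng.exponential(scale=4.0, size=50)  # hypothetical, nonnormal
z, lim = ewma_chart(service_times)
signals = np.flatnonzero((z < lim[:, 0]) | (z > lim[:, 1]))
print("EWMA signals at samples:", signals + 1 if signals.size else "none")
```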

  10. 75 FR 44181 - Mevinphos; Proposed Data Call-in Order for Pesticide Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ... are most often collected in a study called the comparative cholinesterase assay (CCA). Since that time....1520 Processing studies Not Required 24 months (tomatoes) 870.6300 Comparative 6 months 12 months... mevinphos including: 1. A developmental neurotoxicity (DNT) study in rats (with expanded protocol to extend...

  11. Synaptic consolidation as a temporally variable process: Uncovering the parameters modulating its time-course.

    PubMed

    Casagrande, Mirelle A; Haubrich, Josué; Pedraza, Lizeth K; Popik, Bruno; Quillfeldt, Jorge A; de Oliveira Alvares, Lucas

    2018-04-01

    Memories are not instantly created in the brain; they require a gradual stabilization process called consolidation to be stored and to persist in a long-lasting manner. However, little is known about whether this time-dependent process is dynamic or static, or about the factors that might modulate it. Here, we hypothesized that the time-course of consolidation could be affected by specific learning parameters, changing the time window in which memory is susceptible to retroactive interference. In the rodent contextual fear conditioning paradigm, we compared weak and strong training protocols and found that with the latter, memory is susceptible to post-training hippocampal inactivation for a shorter period of time. The accelerated consolidation triggered by strong training was mediated by glucocorticoids, since the effect was blocked by pre-training administration of metyrapone. In addition, we found that pre-exposure to the training context also accelerates fear memory consolidation. Hence, our results demonstrate that the time window in which memory is susceptible to post-training interference varies with fear conditioning intensity and contextual familiarity. We propose that the time-course of memory consolidation is dynamic, being directly affected by attributes of the learning experience. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Allocating time to future tasks: the effect of task segmentation on planning fallacy bias.

    PubMed

    Forsyth, Darryl K; Burt, Christopher D B

    2008-06-01

    The scheduling component of the time management process was used as a "paradigm" to investigate the allocation of time to future tasks. In three experiments, we compared task time allocation for a single task with the summed time allocations given for each subtask that made up the single task. In all three, we found that allocated time for a single task was significantly smaller than the summed time allocated to the individual subtasks. We refer to this as the segmentation effect. In Experiment 3, we asked participants to give estimates by placing a mark on a time line, and found that giving time allocations in the form of rounded close approximations probably does not account for the segmentation effect. We discuss the results in relation to the basic processes used to allocate time to future tasks and the means by which planning fallacy bias might be reduced.

  13. Biosensor-based real-time monitoring of paracetamol photocatalytic degradation.

    PubMed

    Calas-Blanchard, Carole; Istamboulié, Georges; Bontoux, Margot; Plantard, Gaël; Goetz, Vincent; Noguer, Thierry

    2015-07-01

    This paper presents for the first time the integration of a biosensor for the on-line, real-time monitoring of a photocatalytic degradation process. Paracetamol was used as a model molecule due to its wide use and occurrence in environmental waters. The biosensor was developed based on tyrosinase immobilization in a photocrosslinkable polyvinyl alcohol polymer. It was inserted in a computer-controlled flow system installed beside a photocatalytic reactor containing titanium dioxide (TiO2) as the photocatalyst. It was shown that the biosensor was able to accurately monitor paracetamol degradation over time. Compared with conventional HPLC analysis, the described device provides real-time information on the progress of the reaction, allowing better control of the photodegradation process. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Lean techniques for the improvement of patients’ flow in emergency department

    PubMed Central

    Chan, HY; Lo, SM; Lee, LLY; Lo, WYL; Yu, WC; Wu, YF; Ho, ST; Yeung, RSD; Chan, JTS

    2014-01-01

    BACKGROUND: Emergency departments (EDs) face problems with overcrowding, access block, cost containment, and increasing demand from patients. In order to resolve these problems, there is rising interest in an approach called "lean" management. This study aims (1) to evaluate the current patient flow in the ED, (2) to identify and eliminate non-value-added processes, and (3) to modify the existing process. METHODS: This was a quantitative, pre- and post-lean design study with a series of lean management measures implemented to improve admission and blood-result waiting times. These included a structured re-design process, a priority admission triage (PAT) program, enhanced communication with the medical department, and use of a new high-sensitivity troponin-T (hsTnT) blood test. Triage waiting time, consultation waiting time, blood result time, admission waiting time, total processing time, and ED length of stay were compared. RESULTS: Among all the processes carried out in the ED, the most time-consuming were waiting for an admission bed (mean 38.24 minutes, SD 66.35) and waiting for blood test results (mean 52.73 minutes, SD 24.03). The triage waiting time and the end waiting time for consultation were significantly decreased. The admission waiting time for the emergency medical ward (EMW) was significantly decreased from 54.76 minutes to 24.45 minutes after implementation of the PAT program (P<0.05). CONCLUSION: The application of lean management can improve patient flow in the ED. Adherence to lean principles is crucial to achieving high-quality emergency care and patient satisfaction. PMID:25215143

  15. IJA: an efficient algorithm for query processing in sensor networks.

    PubMed

    Lee, Hyun Chang; Lee, Young Jae; Lim, Ji Hyang; Kim, Dong Hwa

    2011-01-01

    One of the main features of sensor networks is the ability to process real-time state information after gathering the needed data from many domains. The component technologies of each sensor node, including physical sensors, processors, actuators, and power supplies, have advanced significantly over the last decade. Thanks to these advances, sensor networks have over time been adopted across industry for sensing physical phenomena. However, sensor nodes are considerably constrained: with their limited energy and memory resources, they have very little capacity to process information compared to conventional computer systems, and query processing on the nodes is constrained by these limitations. For this reason, join operations in sensor networks are typically processed in a distributed manner over a set of nodes, and this has been the subject of study. While simple queries, such as select and aggregate queries, have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. Therefore, in this paper, we propose and describe an Incremental Join Algorithm (IJA) for sensor networks that reduces the overhead caused by moving a join pair to the final join node and minimizes the communication cost, the main consumer of the battery, when processing distributed queries in sensor network environments. Simulation results show that the proposed IJA significantly reduces the number of bytes to be moved to join nodes compared to the popular synopsis join algorithm.
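
    The paper's IJA is not specified in this record. Purely as a schematic illustration of why incremental joining cuts traffic, the sketch below joins tuples batch by batch against a running hash index, so only newly arriving tuples travel instead of whole relations; all names and numbers are invented.

```python
from collections import defaultdict

def incremental_join(batches_r, batches_s, key=0):
    """Schematic incremental hash join: each new batch is joined
    against the tuples seen so far, so only new tuples are shipped."""
    seen_r, seen_s = defaultdict(list), defaultdict(list)
    results, bytes_moved = [], 0
    for batch_r, batch_s in zip(batches_r, batches_s):
        bytes_moved += sum(len(str(t)) for t in batch_r + batch_s)
        for t in batch_r:
            for u in seen_s[t[key]]:
                results.append((t, u))
            seen_r[t[key]].append(t)
        for u in batch_s:
            for t in seen_r[u[key]]:
                results.append((t, u))
            seen_s[u[key]].append(u)
    return results, bytes_moved

r = [[(1, "r1")], [(2, "r2")]]
s = [[(2, "s1")], [(1, "s2")]]
out, moved = incremental_join(r, s)
print(out, moved)
```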

  16. Electrochemical pretreatment of waste activated sludge: effect of process conditions on sludge disintegration degree and methane production.

    PubMed

    Ye, Caihong; Yuan, Haiping; Dai, Xiaohu; Lou, Ziyang; Zhu, Nanwen

    2016-11-01

    Waste activated sludge (WAS) requires a long digestion time because of a rate-limiting hydrolysis step, the first phase of anaerobic digestion (AD). Pretreatment can be used prior to AD to facilitate the hydrolysis step and improve the efficiency of WAS digestion. This study evaluated a novel application of electrochemical (EC) technology as the pretreatment method prior to AD of WAS, focusing on the effect of process conditions on sludge disintegration and the subsequent AD process. The best EC pretreatment condition was a reaction time of 30 min, an electrolysis voltage of 20 V, and an electrode distance of 5 cm, under which the disintegration degree of WAS ranged between 9.02% and 9.72%. In the subsequent batch AD tests, methane production of 206 mL/g volatile solids (VS) was obtained from the EC-pretreated sludge, 20.47% higher than from unpretreated sludge. The AD time was 19 days shorter for EC-pretreated sludge than for unpretreated sludge. Additionally, the EC + AD reactor achieved 41.84% VS removal at the end of AD. The analysis of energy consumption showed that EC pretreatment can enhance sludge AD with reduced energy consumption compared to other pretreatment methods.

  17. Measuring the complexity of design in real-time imaging software

    NASA Astrophysics Data System (ADS)

    Sangwan, Raghvinder S.; Vercellone-Smith, Pamela; Laplante, Phillip A.

    2007-02-01

    Due to the intricacies of the algorithms involved, the design of imaging software is considered to be more complex than that of non-image processing software (Sangwan et al., 2005). A recent investigation (Larsson and Laplante, 2006) examined the complexity of several image processing and non-image processing software packages along a wide variety of metrics, including those postulated by McCabe (1976), Chidamber and Kemerer (1994), and Martin (2003). This work found that it was not always possible to quantitatively compare the complexity of imaging applications and non-image processing systems. Newer research and an accompanying tool (Structure 101, 2006), however, provide a greatly simplified approach to measuring software complexity. It may therefore be possible to definitively quantify the complexity differences between imaging and non-imaging software, between imaging and real-time imaging software, and between software programs of the same application type. In this paper, we review prior results and describe the methodology for measuring complexity in imaging systems. We then apply a new complexity measurement methodology to several sets of imaging and non-imaging code in order to compare the complexity differences between the two types of applications. The benefit of such quantification is far reaching, for example, leading to more easily measured performance improvement and quality in real-time imaging code.

  18. Bitstream decoding processor for fast entropy decoding of variable length coding-based multiformat videos

    NASA Astrophysics Data System (ADS)

    Jo, Hyunho; Sim, Donggyu

    2014-06-01

    We present a bitstream decoding processor for entropy decoding of variable length coding-based multiformat videos. Since most of the computational complexity of entropy decoders comes from bitstream accesses and the table look-up process, the developed bitstream processing unit (BsPU) has several designated instructions to access bitstreams and to minimize branch operations in the table look-up process. In addition, the instruction for bitstream access can remove the emulation prevention bytes (EPBs) of H.264/AVC without initial delay, repeated memory accesses, or an additional buffer. Experimental results show that the proposed method for EPB removal achieves a speed-up of 1.23 times over the conventional EPB removal method. In addition, the BsPU achieves speed-ups of 5.6 and 3.5 times in entropy decoding of H.264/AVC and MPEG-4 Visual bitstreams, respectively, compared to an existing processor without designated instructions and a new table mapping algorithm. The BsPU is implemented on a Xilinx Virtex5 LX330 field-programmable gate array. MPEG-4 Visual (ASP, Level 5) and H.264/AVC (Main Profile, Level 4) bitstreams are processed in real time with a core clock speed of under 250 MHz.
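
    For reference, EPB removal in H.264/AVC Annex B streams deletes the 0x03 byte that encoders stuff after two consecutive zero bytes. A straightforward software version of that loop (the kind of inner loop the BsPU's designated instruction replaces) looks like this:

```python
def remove_epb(nal: bytes) -> bytes:
    """Strip H.264/AVC emulation prevention bytes: in the byte stream,
    the sequence 00 00 03 carries a stuffed 0x03 that must be dropped."""
    out = bytearray()
    zeros = 0
    for b in nal:
        if zeros >= 2 and b == 0x03:
            zeros = 0          # skip the emulation prevention byte
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(out)

assert remove_epb(bytes([0x00, 0x00, 0x03, 0x01])) == bytes([0x00, 0x00, 0x01])
```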

  1. Stochastic resetting in backtrack recovery by RNA polymerases

    NASA Astrophysics Data System (ADS)

    Roldán, Édgar; Lisica, Ana; Sánchez-Taltavull, Daniel; Grill, Stephan W.

    2016-06-01

    Transcription is a key process in gene expression, in which RNA polymerases produce a complementary RNA copy from a DNA template. RNA polymerization is frequently interrupted by backtracking, a process in which polymerases perform a random walk along the DNA template. Recovery of polymerases from the transcriptionally inactive backtracked state is determined by a kinetic competition between one-dimensional diffusion and RNA cleavage. Here we describe backtrack recovery as a continuous-time random walk, where the time for a polymerase to recover from a backtrack of a given depth is described as a first-passage time of a random walker to reach an absorbing state. We represent RNA cleavage as a stochastic resetting process and derive exact expressions for the recovery time distributions and mean recovery times from a given initial backtrack depth for both continuous and discrete-lattice descriptions of the random walk. We show that recovery time statistics do not depend on the discreteness of the DNA lattice when the rate of one-dimensional diffusion is large compared to the rate of cleavage.
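
    As a numerical companion to the exact results described above, a Gillespie-style simulation of the discrete-lattice version is short to write: the walker hops one step up or down at rate k from its initial backtrack depth, and recovers either by first passage to the origin or when a cleavage (resetting) event at rate r fires first. The rates and depth below are arbitrary illustration values.

```python
import numpy as np

def recovery_time(depth, k=1.0, r=0.1, rng=np.random.default_rng(0)):
    """One realization: diffusive hopping at rate k in each direction,
    cleavage (stochastic resetting to recovery) at rate r."""
    t, x = 0.0, depth
    while x > 0:
        total = 2 * k + r
        t += rng.exponential(1.0 / total)
        u = rng.uniform(0.0, total)
        if u < r:
            break                     # cleavage: immediate recovery
        x += 1 if u < r + k else -1   # hop deeper or back toward the origin
    return t

times = [recovery_time(5) for _ in range(20000)]
print("mean recovery time from depth 5:", np.mean(times))
```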

  2. Real-Time Noise Removal for Line-Scanning Hyperspectral Devices Using a Minimum Noise Fraction-Based Approach

    PubMed Central

    Bjorgan, Asgeir; Randeberg, Lise Lyngsnes

    2015-01-01

    Processing line by line and in real time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance but requires the full image to be available for the estimation of image and noise statistics. In this work, a modified algorithm is proposed: incrementally updated statistics enable the algorithm to denoise the image line by line. The denoising performance has been compared to conventional MNF and found to be equal. With satisfactory denoising performance and a real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real time. The elimination of waiting time before denoised data are available is an important step toward real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf, including an implementation of conventional MNF denoising. PMID:25654717
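
    The key enabler above is incrementally updated statistics. A minimal sketch of that idea, assuming a plain streaming mean/covariance update over scan lines (the actual MNF transform additionally needs a noise-covariance estimate, e.g. from line-to-line differences):

```python
import numpy as np

class StreamingCovariance:
    """Accumulate mean and covariance of pixel spectra line by line
    (multivariate Welford update)."""
    def __init__(self, n_bands):
        self.n = 0
        self.mean = np.zeros(n_bands)
        self.m2 = np.zeros((n_bands, n_bands))

    def update(self, line):            # line: (n_pixels, n_bands)
        for x in line:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += np.outer(delta, x - self.mean)

    @property
    def cov(self):
        return self.m2 / (self.n - 1)

rng = np.random.default_rng(2)
acc = StreamingCovariance(n_bands=8)
for _ in range(50):                    # 50 scan lines of 100 pixels each
    acc.update(rng.standard_normal((100, 8)))
print(acc.cov.shape)
```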

  3. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

    In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed-sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on the FPGA while maintaining comparable performance. The system is capable of processing 60 live video frames per second.

  4. Use of dual coolant displacing media for in-process optical measurement of form profiles

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Xie, F.

    2018-07-01

    In-process measurement supports feedback control to reduce workpiece surface form error. Without it, the workpiece surface must be measured offline, causing significant errors in workpiece positioning and reduced productivity. To offer better performance, a new in-process optical measurement method based on dual coolant-displacing media is proposed and studied, which uses air and liquid phases together to displace the coolant and achieve in-process measurement. In the proposed design, coolant is used in place of the previously used clean water to avoid coolant dilution. Compared with previous methods, the distance between the applicator and the workpiece surface can be relaxed to 1 mm, 4 times larger than before, thus permitting measurement of curved surfaces. Air consumption is up to 1.5 times lower than with the best previously available method. For a sample workpiece with curved surfaces, the relative error of profile measurement under coolant conditions can be as small as 0.1% compared with measurement under no-coolant conditions. Problems in comparing measured 3D surfaces are discussed. A comparative study between a Bruker Npflex optical profiler and the developed in-process optical profiler was conducted: for a surface area of 5.5 mm × 5.5 mm, the average measurement error under coolant conditions is only 0.693 µm, and the error attributable to the new method is only 0.10 µm when coolant and no-coolant conditions are compared. The effect of a thin liquid film on the workpiece surface is discussed. The experimental results show that the new method successfully solves the coolant dilution problem and can accurately measure the workpiece surface while it is fully submerged in opaque coolant. The proposed method should be very useful for in-process optical form profile measurement in precision machining.

  5. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length with irregularly sampled observations. Our hierarchical dynamical system framework combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series using multiple Gaussian process sequences in the lower level of the hierarchy and capture the transitions between Gaussian processes using the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of mean absolute prediction error and absolute percentage error. We tested the framework by first learning the time series model from the patients in the training set and then using it to predict future time series values for the patients in the test set. Our model outperforms multiple existing models in predictive accuracy: it achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series against the best performing baseline, and a 5.25% average improvement when only short-term predictions were considered. A hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising direction for modeling clinical time series and improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
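
    The lower level of the hierarchy above models irregularly sampled segments with Gaussian processes. As a small illustration of that building block (not the full hierarchical framework), scikit-learn can fit a GP to irregular observation times; the kernel choice and data are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 10, 15))                   # irregular sampling times (days)
y = 12 + np.sin(t) + 0.2 * rng.standard_normal(15)    # e.g., a CBC lab value

gp = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.05),
    normalize_y=True)
gp.fit(t.reshape(-1, 1), y)

t_future = np.array([[11.0], [12.0]])
mean, std = gp.predict(t_future, return_std=True)     # prediction with uncertainty
print(np.round(mean, 2), np.round(std, 2))
```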

  6. Digital signal processing for velocity measurements in dynamical material's behaviour studies.

    PubMed

    Devlaminck, Julien; Luc, Jérôme; Chanal, Pierre-Yves

    2014-03-01

    In this work, we describe different configurations of optical fiber interferometers (Michelson and Mach-Zehnder types) used to measure velocities in studies of the dynamical behaviour of materials. We detail the processing algorithms developed and optimized to improve the performance of these interferometers, especially in terms of time and frequency resolution. Three methods of analysis of interferometric signals were studied. For Michelson interferometers, time-frequency analysis by Short-Time Fourier Transform (STFT) is compared to time-frequency analysis by Continuous Wavelet Transform (CWT). The results show that the CWT is more suitable than the STFT for signals with a low signal-to-noise ratio and for regions of low velocity and high acceleration. For Mach-Zehnder interferometers, the measurement is carried out by analyzing the phase shift between three interferometric signals (triature processing). These three methods of digital signal processing were evaluated, their measurement uncertainties estimated, and their restrictions or operational limitations specified from experimental results obtained on a pulsed-power machine.
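
    To make the two time-frequency routes concrete, the sketch below runs a short-time Fourier transform (scipy) and a continuous wavelet transform (PyWavelets, complex Morlet) on a synthetic chirp; the window length, scale range, and chirp parameters are arbitrary choices, not values from the paper.

```python
import numpy as np
import pywt
from scipy.signal import stft, chirp

fs = 1e6                                     # 1 MHz sampling (arbitrary)
t = np.arange(0, 2e-3, 1 / fs)
x = chirp(t, f0=20e3, f1=200e3, t1=t[-1])    # accelerating "velocity" signal

# STFT: fixed window, hence a fixed time/frequency resolution trade-off.
f_stft, t_stft, Z = stft(x, fs=fs, nperseg=256)

# CWT: multi-resolution, better suited to fast transients at low SNR.
scales = np.arange(2, 128)
coefs, freqs = pywt.cwt(x, scales, "cmor1.5-1.0", sampling_period=1 / fs)

print(Z.shape, coefs.shape)   # (freq bins, frames) and (scales, samples)
```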

  7. Nonlinear stochastic exclusion financial dynamics modeling and time-dependent intrinsic detrended cross-correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Wang, Jun

    2017-09-01

    In an attempt to reproduce the price dynamics of financial markets, a stochastic agent-based financial price model is proposed and investigated via a stochastic exclusion process. The exclusion process, one of the interacting particle systems, is usually thought of as modeling particle motion (with a conserved number of particles) in a continuous-time Markov process. In this work, the process is used to imitate the trading interactions among investing agents, in order to explain some stylized facts found in the dynamics of financial time series. To better understand the correlation behaviors of the proposed model, a new time-dependent intrinsic detrended cross-correlation (TDI-DCC) is introduced and computed, and autocorrelation analyses are applied in the empirical research. Furthermore, to verify the rationality of the financial price model, actual return series are studied comparatively with the simulated ones. The comparison of return behaviors reveals that this financial price model can reproduce some correlation features of actual stock markets.

  8. A Real-Time Data Acquisition and Processing Framework Based on FlexRIO FPGA and ITER Fast Plant System Controller

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zheng, W.; Zhang, M.; Yuan, T.; Zhuang, G.; Pan, Y.

    2016-06-01

    Measurement and control of the plasma in real time are critical for advanced Tokamak operation and require high-speed real-time data acquisition and processing. ITER has designed the Fast Plant System Controllers (FPSC) for these purposes. At the J-TEXT Tokamak, a real-time data acquisition and processing framework has been designed and implemented using standard ITER FPSC technologies. The main hardware components of this framework are an Industrial Personal Computer (IPC) with a real-time operating system and FPGA-based FlexRIO devices. With FlexRIO devices, data can be processed by the FPGA in real time before being passed to the CPU. The software elements are based on a real-time framework which runs under Red Hat Enterprise Linux MRG-R and uses the Experimental Physics and Industrial Control System (EPICS) for monitoring and configuration, making the framework conform to ITER FPSC standard technology. With this framework, any kind of data acquisition and processing FlexRIO FPGA program can be configured with an FPSC. An application using the framework has been implemented for the polarimeter-interferometer diagnostic system on J-TEXT. The application extracts phase-shift information from the intermediate-frequency signal produced by the diagnostic and calculates the plasma density profile in real time. Different algorithm implementations on the FlexRIO FPGA are compared in the paper.

  9. HEVC real-time decoding

    NASA Astrophysics Data System (ADS)

    Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas

    2013-09-01

    The new High Efficiency Video Coding (HEVC) standard was finalized in January 2013. Compared to its predecessor H.264/MPEG-4 AVC, the new international standard reduces the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high-definition video is feasible using the instruction extensions of the processor, while decoding 4K ultra-high-definition video in real time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.

  10. The Research and Test of Fast Radio Burst Real-time Search Algorithm Based on GPU Acceleration

    NASA Astrophysics Data System (ADS)

    Wang, J.; Chen, M. Z.; Pei, X.; Wang, Z. Q.

    2017-03-01

    In order to satisfy the research needs of the Nanshan 25 m radio telescope of Xinjiang Astronomical Observatory (XAO) and to study key technology for the planned QiTai radio Telescope (QTT), the receiver group of XAO developed a GPU (Graphics Processing Unit) based real-time FRB search algorithm from the original CPU (Central Processing Unit) based FRB search algorithm, and built an FRB real-time search system. A comparison of the GPU and CPU systems shows that, while preserving search accuracy, the GPU-accelerated algorithm is 35-45 times faster than the CPU algorithm.
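
    The core of any FRB search is dedispersion across trial dispersion measures, which is the part a GPU parallelizes. A minimal numpy sketch of incoherent dedispersion for one trial DM follows; the dispersion constant (about 4.149e3 s MHz² pc⁻¹ cm³) is standard, while the filterbank dimensions and parameters are invented.

```python
import numpy as np

def dedisperse(data, freqs_mhz, dm, dt):
    """Shift each frequency channel by its cold-plasma dispersion delay
    (relative to the top of the band) and sum.
    data: (n_chan, n_samp) filterbank; dt: sample time in seconds."""
    f_ref = freqs_mhz.max()
    delays = 4.148808e3 * dm * (freqs_mhz ** -2 - f_ref ** -2)  # seconds
    shifts = np.round(delays / dt).astype(int)
    out = np.zeros(data.shape[1])
    for chan, shift in enumerate(shifts):
        out += np.roll(data[chan], -shift)   # advance delayed channels
    return out                               # peak search would follow

rng = np.random.default_rng(4)
freqs = np.linspace(1100, 1500, 256)         # MHz, hypothetical band
fb = rng.standard_normal((256, 4096))        # synthetic filterbank
series = dedisperse(fb, freqs, dm=300.0, dt=1e-3)
print(series.shape)
```

    A real-time search repeats this over hundreds of trial DMs, which is why the per-channel shifts map so naturally onto GPU threads.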

  11. An order insertion scheduling model of logistics service supply chain considering capacity and time factors.

    PubMed

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), disturbing normal time scheduling, especially in the environment of mass-customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of the LSSC that considers service capacity and time factors. The model aims to minimize the average unit-volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. To verify the viability and effectiveness of the model, a specific example is analyzed numerically, yielding some interesting conclusions. First, as the completion time delay coefficient permitted by customers increases, the possible inserted order volume first increases and then levels off. Second, supply chain performance is best when the volume of the inserted order equals the surplus volume of the normal operation capacity of the mass service process. Third, the larger the normal operation capacity of the mass service process, the larger the possible inserted order volume. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.

  12. [Performance development of a university operating room after implementation of a central operating room management].

    PubMed

    Waeschle, R M; Sliwa, B; Jipp, M; Pütz, H; Hinz, J; Bauer, M

    2016-08-01

    The difficult financial situation of German hospitals requires measures to improve process quality. Associated increases in revenue in the high-revenue area of the operating room (OR) are increasingly the responsibility of OR management, but it has not been shown that the introduction of efficiency-oriented management leads to increased process quality and revenues in the operating theatre. We therefore analyzed the performance of the operating theatre of the University Medical Center Göttingen on working days during the core operating time from 7.45 a.m. to 3.30 p.m. from 2009 to 2014. The achievement of process target times for the morning surgery start and the turnover times of anesthesia and OR nursing were calculated as indicators of process quality. The number of operations and the cumulative incision-suture time were also analyzed as aggregated performance indicators. To assess the development of revenues in the operating theatre, the revenues from diagnosis-related groups (DRG) for all inpatient and occupational accident cases, adjusted for the regional base case value of 2009, were calculated for each year. The development of revenues was also analyzed after deducting revenues resulting from altered economic case weighting. Achievement of process target values for the morning surgery start improved by 40%, and turnover times were reduced by 50% for anesthesia and by 36% for OR nursing. Together with the introduction of central planning for reallocation, an increase of 21% in the number of operations and of 12% in cumulative incision-suture time was realized. Due to these additional operations, DRG revenues in 2014 increased to 132% of the 2009 level, or 127% if revenues caused by economic case weighting are excluded. Staffing in anesthesia nursing (-1.7%) and OR nursing (+2.6%) as well as anesthetists (+6.7%) grew less than revenues or was slightly reduced. This improvement in process quality and cumulative incision-suture times, together with the increase in revenues, reflects the positive impact of efficiency-oriented central OR management. Through process optimization, OR management frees up the necessary personnel and time resources and thereby establishes the basic prerequisites for increased revenues in the surgical disciplines. The method presented can be used by other hospitals as a guideline for analyzing performance development.

  13. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density-gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods: a manual point-and-click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time needed to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in the literature.

  14. Microwave processing of gustatory tissues for immunohistochemistry

    PubMed Central

    Bond, Amanda; Kinnamon, John C.

    2013-01-01

    We use immunohistochemistry to study taste cell structure and function as a means to elucidate how taste receptor cells communicate with nerve fibers and adjacent taste cells. This conventional method, however, is time consuming. In the present study we used taste buds from rat circumvallate papillae to compare conventional immunohistochemical tissue processing with microwave processing for the colocalization of several biochemical pathway markers (PLCβ2, syntaxin-1, IP3R3, α-gustducin) and the nuclear stain, Sytox. The results of our study indicate that in microwave versus conventional immunocytochemistry: (1) fixation quality is improved; (2) the amount of time necessary for processing tissue is decreased; (3) antigen retrieval is no longer needed; (4) image quality is superior. In sum, microwave tissue processing of gustatory tissues is faster and superior to conventional immunohistochemical tissue processing for many applications. PMID:23473796

  15. Lean principles optimize on-time vascular surgery operating room starts and decrease resident work hours.

    PubMed

    Warner, Courtney J; Walsh, Daniel B; Horvath, Alexander J; Walsh, Teri R; Herrick, Daniel P; Prentiss, Steven J; Powell, Richard J

    2013-11-01

    Lean process improvement techniques are used in industry to improve efficiency and quality while controlling costs. These techniques are less commonly applied in health care. This study assessed the effectiveness of Lean principles on first case on-time operating room starts and quantified effects on resident work hours. Standard process improvement techniques (DMAIC methodology: define, measure, analyze, improve, control) were used to identify causes of delayed vascular surgery first case starts. Value stream maps and process flow diagrams were created. Process data were analyzed with Pareto and control charts. High-yield changes were identified and simulated in computer and live settings prior to implementation. The primary outcome measure was the proportion of on-time first case starts; secondary outcomes included hospital costs, resident rounding time, and work hours. Data were compared with existing benchmarks. Prior to implementation, 39% of first cases started on time. Process mapping identified late resident arrival in preoperative holding as a cause of delayed first case starts. Resident rounding process inefficiencies were identified and changed through the use of checklists, standardization, and elimination of nonvalue-added activity. Following implementation of process improvements, first case on-time starts improved to 71% at 6 weeks (P = .002). Improvement was sustained with an 86% on-time rate at 1 year (P < .001). Resident rounding time was reduced by 33% (from 70 to 47 minutes). At 9 weeks following implementation, these changes generated an opportunity cost potential of $12,582. Use of Lean principles allowed rapid identification and implementation of perioperative process changes that improved efficiency and resulted in significant cost savings. This improvement was sustained at 1 year. Downstream effects included improved resident efficiency with decreased work hours. Copyright © 2013 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.

  16. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
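
    COLA's hierarchical minimum-ratio-cut partitioner is not reproduced in this record. As a rough stand-in for the idea of fusing operators into PEs while keeping inter-PE stream traffic low, the sketch below recursively bisects a weighted operator graph with networkx's Kernighan-Lin heuristic until each part fits a size budget; the toy graph, edge weights, and budget are all invented.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def fuse_into_pes(g, max_ops=3):
    """Recursively bisect the operator graph, cutting low-traffic edges;
    each leaf part becomes one processing element (PE)."""
    if g.number_of_nodes() <= max_ops:
        return [set(g.nodes)]
    a, b = kernighan_lin_bisection(g, weight="rate", seed=7)
    return (fuse_into_pes(g.subgraph(a).copy(), max_ops)
            + fuse_into_pes(g.subgraph(b).copy(), max_ops))

# Toy operator graph; edge "rate" = stream traffic between operators.
g = nx.Graph()
g.add_weighted_edges_from(
    [("src", "parse", 10), ("parse", "filter", 8), ("filter", "join", 5),
     ("join", "agg", 5), ("agg", "sink", 2), ("parse", "audit", 1)],
    weight="rate")
for i, pe in enumerate(fuse_into_pes(g)):
    print(f"PE{i}:", sorted(pe))
```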

  17. Horizon sensor errors calculated by computer models compared with errors measured in orbit

    NASA Technical Reports Server (NTRS)

    Ward, K. A.; Hogan, R.; Andary, J.

    1982-01-01

    Using a computer program to model the earth's horizon and to duplicate the signal processing procedure employed by the ESA (Earth Sensor Assembly), errors due to radiance variation have been computed for a particular time of the year. Errors actually occurring in flight at the same time of year are inferred from integrated rate gyro data for a satellite of the TIROS series of NASA weather satellites (NOAA-A). The predicted performance is compared with actual flight history.

  18. Simplified dichromated gelatin hologram recording process

    NASA Technical Reports Server (NTRS)

    Georgekutty, Tharayil G.; Liu, Hua-Kuang

    1987-01-01

    A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and it requires a period of processing time comparable with that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.

  19. Reduction of aerobic and lactic acid bacteria in dairy desludge using an integrated compressed CO2 and ultrasonic process.

    PubMed

    Overton, Tim W; Lu, Tiejun; Bains, Narinder; Leeke, Gary A

    Current treatment routes are not suitable for reducing and stabilising the bacterial content of some dairy process streams, such as separator and bactofuge desludges, which currently present a major emission problem for dairy producers. In this study, a novel method for the processing of desludge was developed. The new method, elevated pressure sonication (EPS), uses a combination of low-frequency ultrasound (20 kHz) and elevated CO2 pressure (50 to 100 bar). Process conditions (pressure, sonicator power, processing time) were optimised for batch and continuous EPS processes to reduce viable numbers of aerobic and lactic acid bacteria in bactofuge desludge by ≥3 log. Coagulation of the proteins present in the desludge also occurred, causing separation into solid (curd) and liquid (whey) fractions. The proposed process offers a 10-fold reduction in energy compared to high temperature short time (HTST) treatment of milk.

  20. Sentence Planning in Native and Nonnative Language: A Comparative Study of English and Korean

    ERIC Educational Resources Information Center

    Choe, Mun Hong

    2010-01-01

    This study discusses cognitive processes when speakers produce language in real time, with its focus on cross-linguistic differences in the procedural aspect of language use. It demonstrates that the syntactic characteristics of a language shape the speakers' overall process of sentence planning and production: how they construct sentential…

  1. Bilingual Processing of ASL-English Code-Blends: The Consequences of Accessing Two Lexical Representations Simultaneously

    ERIC Educational Resources Information Center

    Emmorey, Karen; Petrich, Jennifer A. F.; Gollan, Tamar H.

    2012-01-01

    Bilinguals who are fluent in American Sign Language (ASL) and English often produce "code-blends"--simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization…

  2. Searching CA Condensates, On-Line and Batch.

    ERIC Educational Resources Information Center

    Kaminecki, Ronald M.; And Others

    Batch mode processing is compared, on cost-effectiveness grounds, with on-line processing for computer-aided searching of chemical abstracts. Time, need, coverage, and adaptability are found to be the criteria by which a searcher selects a method, and sometimes both methods are used. There is a tradeoff between batch mode's slower…

  3. Napping Reduces Emotional Attention Bias during Early Childhood

    ERIC Educational Resources Information Center

    Cremone, Amanda; Kurdziel, Laura B. F.; Fraticelli-Torres, Ada; McDermott, Jennifer M.; Spencer, Rebecca M. C.

    2017-01-01

    Sleep loss alters processing of emotional stimuli in preschool-aged children. However, the mechanism by which sleep modifies emotional processing in early childhood is unknown. We tested the hypothesis that a nap, compared to an equivalent time spent awake, reduces biases in attention allocation to affective information. Children (n = 43;…

  4. 76 FR 4405 - Self-Regulatory Organizations; National Securities Clearing Corporation; Order Granting Approval...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... the cut-off time for intraday comparison if the respective trade parties have submitted contract... intraday comparison process if the contract amounts were within (i) a net $10.00 difference for trades of... that transactions that remain uncompared after the intraday comparison process shall be deemed compared...

  5. Reliability of fully automated versus visually controlled pre- and post-processing of resting-state EEG.

    PubMed

    Hatz, F; Hardmeier, M; Bousleiman, H; Rüegg, S; Schindler, C; Fuhr, P

    2015-02-01

    To compare the reliability of a newly developed Matlab® toolbox for the fully automated pre- and post-processing of resting-state EEG (automated analysis, AA) with the reliability of analysis involving visually controlled pre- and post-processing (VA). 34 healthy volunteers (age: median 38.2 years, range 20-49; 82% female) underwent three consecutive 256-channel resting-state EEGs at one-year intervals. Results of the frequency analysis of AA and VA were compared with Pearson correlation coefficients, and reliability over time was assessed with intraclass correlation coefficients (ICC). The mean correlation coefficient between AA and VA was 0.94±0.07; the mean ICC was 0.83±0.05 for AA and 0.84±0.07 for VA. AA and VA yield very similar results for spectral EEG analysis and are equally reliable. AA is less time-consuming, completely standardized, and independent of raters and their training. Automated processing of EEG facilitates workflow in quantitative EEG analysis. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Approximation of optimal filter for Ornstein-Uhlenbeck process with quantised discrete-time observation

    NASA Astrophysics Data System (ADS)

    Bania, Piotr; Baranowski, Jerzy

    2018-02-01

    Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and, in consequence, control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method for state estimation in stochastic systems with quantised discrete-time observations that is free of these drawbacks. We formulate a general form of the optimal filter derived from a solution of the Fokker-Planck equation, and then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process and derive analytic formulae for the approximated optimal filter, also extending the results to the variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. The results of the comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high orders of approximation the state estimate is very close to the true process value. These results open the possibility of further analysis, especially for more complex processes.
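
    The Galerkin filter itself is beyond this record, but the classical baseline it is compared against is easy to state: simulate an Ornstein-Uhlenbeck process, quantise the samples, and run a discrete Kalman filter that treats quantisation as additive noise of variance Δ²/12. All parameter values below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
theta, sigma, dt, delta, n = 1.0, 0.5, 0.01, 0.4, 2000

# Exact discretization of the OU process dx = -theta*x dt + sigma dW.
a = np.exp(-theta * dt)
q = sigma**2 / (2 * theta) * (1 - a**2)

x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + np.sqrt(q) * rng.standard_normal()
y = delta * np.round(x / delta)        # quantised observations

# Kalman filter: quantisation modeled as uniform noise, var = delta^2/12.
r = delta**2 / 12
xh, p, est = 0.0, 1.0, np.empty(n)
for k in range(n):
    xh, p = a * xh, a * a * p + q      # predict
    kgain = p / (p + r)                # update
    xh += kgain * (y[k] - xh)
    p *= (1 - kgain)
    est[k] = xh
print("baseline Kalman MSE:", np.mean((est - x) ** 2))
```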

  7. Introduction of an all-electronic administrative process for a major international pediatric surgical meeting.

    PubMed

    Applebaum, Harry; Boles, Kay; Atkinson, James B

    2003-12-01

    The administrative process for annual meetings is time-consuming and increasingly costly when accomplished by traditional postal, fax, and telephone methods. The Pacific Association of Pediatric Surgeons introduced an all-electronic communication format for its 2002 annual meeting. Attendee acceptance and administrative and financial impact were evaluated. Interested physicians were directed to a Website containing detailed information and electronic forms. E-mail was used for the abstract selection and manuscript submission processes. Attendees were surveyed to evaluate the new format. Administrative costs for the new format were compared with estimated costs for a comparable traditionally managed meeting. Attendance was similar to that at previous US meetings. Eighty-two percent of respondents approved of the all-electronic format, although 48% believed a choice should remain. None suggested a complete return to the traditional format. Abstract and manuscript processing time was reduced substantially, as were administrative costs ($79.43 savings per physician registrant). Adoption of an all-electronic annual meeting administrative process was associated with substantial cost reduction, increased efficiency, and excellent attendee satisfaction. This technology can help avoid increased registration fees while easing the burden on physician volunteers.

  8. A low-frequency near-field interferometric-TOA 3-D Lightning Mapping Array

    NASA Astrophysics Data System (ADS)

    Lyu, Fanchao; Cummer, Steven A.; Solanki, Rahulkumar; Weinert, Joel; McTague, Lindsay; Katko, Alex; Barrett, John; Zigoneanu, Lucian; Xie, Yangbo; Wang, Wenqi

    2014-11-01

    We report on the development of an easily deployable LF near-field interferometric-time-of-arrival (TOA) 3-D Lightning Mapping Array applied to imaging entire lightning flashes. An interferometric cross-correlation technique is applied in our system to compute windowed two-sensor time differences with submicrosecond resolution before TOA is used for source location. Compared to previously reported LF lightning location systems, our system captures many more LF sources, due mainly to the improved mapping of continuous lightning processes by this type of hybrid interferometry/TOA processing method. We show with five-station measurements that the array detects and maps different lightning processes, such as stepped and dart leaders, during both in-cloud and cloud-to-ground flashes. Lightning images mapped by our LF system are remarkably similar to those created by VHF mapping systems, which may suggest some special links between LF and VHF emission during lightning processes.
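
    A minimal sketch of the windowed cross-correlation step described above: estimate the time difference of arrival between two sensors from the peak of their cross-correlation, with parabolic interpolation around the peak for sub-sample (here sub-microsecond) resolution. The waveform, sample rate, and shift are synthetic stand-ins.

```python
import numpy as np

fs = 10e6                       # 10 MS/s, i.e. 0.1 us per sample (hypothetical)
rng = np.random.default_rng(6)
pulse = rng.standard_normal(512)
true_shift = 37                 # samples
s1 = np.concatenate([pulse, np.zeros(100)])
s2 = np.concatenate([np.zeros(true_shift), pulse, np.zeros(100 - true_shift)])

xc = np.correlate(s2, s1, mode="full")
k = int(np.argmax(xc))
# Parabolic interpolation around the peak for sub-sample precision.
y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
lag = (k - (len(s1) - 1)) + frac
print(f"estimated TDOA: {lag / fs * 1e6:.3f} us")
```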

  9. Using rapid infrared forming to control interfaces in titanium-matrix composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warrier, S.G.; Lin, R.Y.

    1993-03-01

    Control of the fiber-matrix reaction during composite fabrication is commonly achieved by shortening the processing time, coating the reinforcement with relatively inert materials, or adding alloying elements to retard the reaction. To minimize the processing time, a rapid IR forming (RIF) technique for metal-matrix composite fabrication has been developed. Experiments have shown that the RIF technique is a quick, simple, and low-cost process for fabricating titanium-alloy matrix composites reinforced with either silicon carbide or carbon fibers. Due to the short processing times (typically on the order of 1-2 minutes in an inert atmosphere for composites with up to eight-ply reinforcements), the interfacial reaction is limited and well controlled. Composites fabricated by this technique have mechanical properties that are comparable to (and in several cases superior to) those made with conventional diffusion-bonding techniques. 21 refs.

  10. Effect of cognitive load on working memory forgetting in aging.

    PubMed

    Baumans, Christine; Adam, Stephane; Seron, Xavier

    2012-01-01

    Functional approaches to working memory (WM) have been proposed recently to better investigate "maintenance" and "processing" mechanisms. The cognitive load (CL) hypothesis presented in the "Time-Based Resource-Sharing" model (Barrouillet & Camos, 2007) suggests that forgetting from WM (maintenance) can be investigated by varying the presentation rate and processing speed (processing). In this study, young and elderly participants were compared on WM tasks in which the difference in processing speed was controlled by CL manipulations. Two main results were found. First, when time constraints (CL) were matched for the two groups, no aging effect was observed. Second, whereas a large variation in CL affected WM performance, a small CL manipulation had no effect on the elderly. This suggests that WM forgetting cannot be completely accounted for by the CL hypothesis. Rather, it highlights the need to explore restoration times in particular, and the nature of the refreshment mechanisms within maintenance.

  11. Listeners modulate temporally selective attention during natural speech processing

    PubMed Central

    Astheimer, Lori B.; Sanders, Lisa D.

    2009-01-01

    Spatially selective attention allows for the preferential processing of relevant stimuli when more information than can be processed in detail is presented simultaneously at distinct locations. Temporally selective attention may serve a similar function during speech perception by allowing listeners to allocate attentional resources to time windows that contain highly relevant acoustic information. To test this hypothesis, event-related potentials were compared in response to attention probes presented in six conditions during a narrative: concurrently with word onsets, beginning 50 and 100 ms before and after word onsets, and at random control intervals. Times for probe presentation were selected such that the acoustic environments of the narrative were matched for all conditions. Linguistic attention probes presented at and immediately following word onsets elicited larger amplitude N1s than control probes over medial and anterior regions. These results indicate that native speakers selectively process sounds presented at specific times during normal speech perception. PMID:18395316

  12. Random walks on activity-driven networks with attractiveness

    NASA Astrophysics Data System (ADS)

    Alessandretti, Laura; Sun, Kaiyuan; Baronchelli, Andrea; Perra, Nicola

    2017-05-01

    Virtually all real-world networks are dynamical entities. In social networks, the propensity of nodes to engage in social interactions (activity) and their chances to be selected by active nodes (attractiveness) are heterogeneously distributed. Here, we present a time-varying network model where each node and the dynamical formation of ties are characterized by these two features. We study how these properties affect random-walk processes unfolding on the network when the time scales describing the process and the network evolution are comparable. We derive analytical solutions for the stationary state and the mean first-passage time of the process, and we study cases informed by empirical observations of social networks. Our work shows that previously disregarded properties of real social systems, such as heterogeneous distributions of activity and attractiveness as well as the correlations between them, substantially affect the dynamical process unfolding on the network.

  13. Software-Based Real-Time Acquisition and Processing of PET Detector Raw Data.

    PubMed

    Goldschmidt, Benjamin; Schug, David; Lerche, Christoph W; Salomon, André; Gebhardt, Pierre; Weissler, Bjoern; Wehner, Jakob; Dueppenbecker, Peter M; Kiessling, Fabian; Schulz, Volkmar

    2016-02-01

    In modern positron emission tomography (PET) readout architectures, the position and energy estimation of scintillation events (singles) and the detection of coincident events (coincidences) are typically carried out on highly integrated, programmable printed circuit boards. The implementation of advanced singles and coincidence processing (SCP) algorithms for these architectures is often limited by the strict constraints of hardware-based data processing. In this paper, we present a software-based data acquisition and processing architecture (DAPA) that offers a high degree of flexibility for advanced SCP algorithms through relaxed real-time constraints and an easily extendible data processing framework. The DAPA is designed to acquire detector raw data from independent (but synchronized) detector modules and process the data for singles and coincidences in real time using a center-of-gravity (COG)-based, a least-squares (LS)-based, or a maximum-likelihood (ML)-based crystal position and energy estimation approach (CPEEA). To test the DAPA, we adapted it to a preclinical PET detector that outputs detector raw data from 60 independent digital silicon photomultiplier (dSiPM)-based detector stacks and evaluated it with a [(18)F]-fluorodeoxyglucose-filled hot-rod phantom. The DAPA is highly reliable with less than 0.1% of all detector raw data lost or corrupted. For high validation thresholds (37.1 ± 12.8 photons per pixel) of the dSiPM detector tiles, the DAPA is real-time capable up to 55 MBq for the COG-based CPEEA, up to 31 MBq for the LS-based CPEEA, and up to 28 MBq for the ML-based CPEEA. Compared to the COG-based CPEEA, the rods in the image reconstruction of the hot-rod phantom are only slightly better separable and less blurred for the LS- and ML-based CPEEA. While the coincidence time resolution (∼500 ps) and energy resolution (∼12.3%) are comparable for all three CPEEA, the system sensitivity is up to 2.5× higher for the LS- and ML-based CPEEA.
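
    For illustration, here is a minimal sketch of the center-of-gravity (COG) style of crystal position and energy estimation described above, written in Python with NumPy; the 8×8 sensor geometry, the Gaussian light spot, and all names are hypothetical and are not taken from the DAPA itself.

    ```python
    import numpy as np

    def cog_position(pixel_counts, pixel_x, pixel_y):
        """Estimate the scintillation position as the photon-count-weighted
        centroid of the pixel array (center-of-gravity method)."""
        total = pixel_counts.sum()
        x = (pixel_counts * pixel_x).sum() / total
        y = (pixel_counts * pixel_y).sum() / total
        return x, y, total  # the summed count doubles as an energy estimate

    # Example: an 8x8 photosensor array with a light spot near (2.0, -1.0) mm
    rng = np.random.default_rng(0)
    xs, ys = np.meshgrid(np.linspace(-3.5, 3.5, 8), np.linspace(-3.5, 3.5, 8))
    light = 400 * np.exp(-((xs - 2.0) ** 2 + (ys + 1.0) ** 2) / 2.0)
    counts = rng.poisson(light)  # photon-counting statistics
    x_hat, y_hat, energy = cog_position(counts, xs, ys)
    print(f"COG estimate: x={x_hat:.2f} mm, y={y_hat:.2f} mm, sum={energy}")
    ```

    The COG estimator is the cheapest of the three approaches, which is consistent with its higher real-time activity limit in the abstract; LS and ML fits trade throughput for better positioning near crystal edges.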

  14. Primary percutaneous coronary intervention for patients presenting with ST-segment elevation myocardial infarction: process improvement in a rural ST-segment elevation myocardial infarction receiving center.

    PubMed

    Niles, Nathaniel W; Conley, Sheila M; Yang, Rayson C; Vanichakarn, Pantila; Anderson, Tamara A; Butterly, John R; Robb, John F; Jayne, John E; Yanofsky, Norman N; Proehl, Jean A; Guadagni, Donald F; Brown, Jeremiah R

    2010-01-01

    Rural ST-segment elevation myocardial infarction (STEMI) care networks may be particularly disadvantaged in achieving the door-to-balloon time (D2B) of less than or equal to 90 minutes recommended in current guidelines. A multidisciplinary STEMI process upgrade group at a rural percutaneous coronary intervention center implemented evidence-based strategies to reduce time to electrocardiogram (ECG) and D2B, including catheterization laboratory activation triggered by either a prehospital ECG demonstrating STEMI or an emergency department physician diagnosing STEMI, single-call catheterization laboratory activation, catheterization laboratory response time of less than or equal to 30 minutes, and prompt data feedback. An ongoing regional STEMI registry was used to collect process time intervals, including time to ECG and D2B, in a consecutive series of STEMI patients presenting before (group 1) and after (group 2) strategy implementation. Significant reductions in time to first ECG in the emergency department and in D2B were seen in group 2 compared with group 1. Important improvement in the process of acute STEMI patient care was accomplished in the rural percutaneous coronary intervention center setting by implementing evidence-based strategies. Copyright © 2010 Elsevier Inc. All rights reserved.

  15. Predictive modeling of solidification during laser additive manufacturing of nickel superalloys: recent developments, future directions

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo

    2018-01-01

    Additive manufacturing (AM) processes produce parts with improved physical, chemical, and mechanical properties compared to conventional manufacturing processes. In AM processes, intricate part geometries are produced from multicomponent alloy powder, in a layer-by-layer fashion with multipass laser melting, solidification, and solid-state phase transformations, in a shorter manufacturing time, with minimal surface finishing, and at a reasonable cost. However, there is an increasing need for post-processing of the manufactured parts via, for example, stress relieving heat treatment and hot isostatic pressing to achieve homogeneous microstructure and properties at all times. Solidification in an AM process controls the size, shape, and distribution of the grains, the growth morphology, the elemental segregation and precipitation, the subsequent solid-state phase changes, and ultimately the material properties. The critical issues in this process are linked with multiphysics (such as fluid flow and diffusion of heat and mass) and multiscale (lengths, times and temperature ranges) challenges that arise due to localized rapid heating and cooling during AM processing. The alloy chemistry-process-microstructure-property-performance correlation in this process will be increasingly better understood through multiscale modeling and simulation.

  16. Are Experienced Hearing Aid Users Faster at Grasping the Meaning of a Sentence Than Inexperienced Users? An Eye-Tracking Study

    PubMed Central

    Kollmeier, Birger; Neher, Tobias

    2016-01-01

    This study assessed the effects of hearing aid (HA) experience on how quickly a participant can grasp the meaning of an acoustic sentence-in-noise stimulus presented together with two similar pictures that either correctly (target) or incorrectly (competitor) depict the meaning conveyed by the sentence. Using an eye tracker, the time taken by the participant to start fixating the target (the processing time) was measured for two levels of linguistic complexity (low vs. high) and three HA conditions: clinical linear amplification (National Acoustic Laboratories-Revised), single-microphone noise reduction with National Acoustic Laboratories-Revised, and linear amplification ensuring a sensation level of ≥ 15 dB up to at least 4 kHz for the speech material used here. Timed button presses to the target stimuli after the end of the sentences (offline reaction times) were also collected. Groups of experienced (eHA) and inexperienced (iHA) HA users matched in terms of age, hearing loss, and working memory capacity took part (N = 15 each). For the offline reaction times, no effects were found. In contrast, processing times increased with linguistic complexity. Furthermore, for all HA conditions, processing times were longer (poorer) for the iHA group than for the eHA group, despite comparable speech recognition performance. Taken together, these results indicate that processing times are more sensitive to speech processing-related factors than offline reaction times. Furthermore, they support the idea that HA experience positively impacts the ability to process noisy speech quickly, irrespective of the precise gain characteristics. PMID:27595793

  17. Temporal predictive mechanisms modulate motor reaction time during initiation and inhibition of speech and hand movement.

    PubMed

    Johari, Karim; Behroozmand, Roozbeh

    2017-08-01

    Skilled movement is mediated by motor commands executed with extremely fine temporal precision. The question of how the brain incorporates temporal information to perform motor actions has remained unanswered. This study investigated the effect of stimulus temporal predictability on response timing of speech and hand movement. Subjects performed a randomized vowel vocalization or button press task in two counterbalanced blocks in response to temporally-predictable and unpredictable visual cues. Results indicated that speech and hand reaction time was decreased for predictable compared with unpredictable stimuli. This finding suggests that a temporal predictive code is established to capture temporal dynamics of sensory cues in order to produce faster movements in responses to predictable stimuli. In addition, results revealed a main effect of modality, indicating faster hand movement compared with speech. We suggest that this effect is accounted for by the inherent complexity of speech production compared with hand movement. Lastly, we found that movement inhibition was faster than initiation for both hand and speech, suggesting that movement initiation requires a longer processing time to coordinate activities across multiple regions in the brain. These findings provide new insights into the mechanisms of temporal information processing during initiation and inhibition of speech and hand movement. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. A novel approach to optimize workflow in grid-based teleradiology applications.

    PubMed

    Yılmaz, Ayhan Ozan; Baykal, Nazife

    2016-01-01

    This study proposes an infrastructure with a reporting workflow optimization algorithm (RWOA) in order to interconnect facilities, reporting units and radiologists on a single access interface, to increase the efficiency of the reporting process by decreasing the medical report turnaround time, and to increase the quality of medical reports by determining the optimum match between the inspection and the radiologist in terms of subspecialty, workload and response time. A workflow-centric network architecture with an enhanced caching, querying and retrieving mechanism is implemented by seamlessly integrating a Grid Agent and a Grid Manager into conventional digital radiology systems. The inspection and radiologist attributes are modelled using a hierarchical ontology structure. Attribute preferences rated by radiologists and technical experts are formed into reciprocal matrices, and weights for entities are calculated utilizing the Analytic Hierarchy Process (AHP). The assignment alternatives are processed by relation-based semantic matching (RBSM) and Integer Linear Programming (ILP). The results are evaluated based on both real-case applications and simulated process data in terms of subspecialty, response time and workload success rates. Results obtained using simulated data are compared with the outcomes obtained by applying Round Robin, Shortest Queue and Random distribution policies. The proposed algorithm is also applied to process data from a real-case teleradiology application in which the medical reporting workflow was based on manual assignments by the chief radiologist for 6225 inspections. RBSM gives the highest subspecialty success rate, and integrating ILP with RBSM ratings as RWOA provides a better response time and workload distribution success rate. RWOA-based image delivery also prevents bandwidth-, storage- and hardware-related bottlenecks and latencies. When compared with a real-case teleradiology application where inspection assignments were performed manually, the proposed solution was found to increase the experience success rate by 13.25%, the workload success rate by 63.76% and the response time success rate by 120%. The total response time in the real-case application data was improved by 22.39%. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
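
    As a sketch of the AHP step described above, the following Python snippet derives priority weights from a reciprocal pairwise-comparison matrix via its principal eigenvector, the standard AHP procedure; the 3×3 matrix comparing subspecialty, workload and response time is an invented example, not data from the study.

    ```python
    import numpy as np

    def ahp_weights(M):
        """Derive priority weights from a reciprocal pairwise-comparison
        matrix as its (normalized) principal eigenvector."""
        vals, vecs = np.linalg.eig(M)
        k = np.argmax(vals.real)           # principal eigenvalue
        w = np.abs(vecs[:, k].real)
        return w / w.sum()

    # Hypothetical comparison of matching criteria on Saaty's 1-9 scale:
    # subspecialty vs. workload vs. response time.
    M = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(ahp_weights(M))   # roughly [0.65, 0.23, 0.12]
    ```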

  19. Reducing door-to-needle times using Toyota's lean manufacturing principles and value stream analysis.

    PubMed

    Ford, Andria L; Williams, Jennifer A; Spencer, Mary; McCammon, Craig; Khoury, Naim; Sampson, Tomoko R; Panagos, Peter; Lee, Jin-Moo

    2012-12-01

    Earlier tissue-type plasminogen activator (tPA) treatment for acute ischemic stroke increases efficacy, prompting national efforts to reduce door-to-needle times. We used lean process improvement methodology to develop a streamlined intravenous tPA protocol. In early 2011, a multidisciplinary team analyzed the steps required to treat patients with acute ischemic stroke with intravenous tPA using value stream analysis (VSA). We directly compared the tPA-treated patients in the "pre-VSA" epoch with the "post-VSA" epoch with regard to baseline characteristics, protocol metrics, and clinical outcomes. The VSA revealed several tPA protocol inefficiencies: routing of patients to room, then to CT, then back to the room; serial processing of workflow; and delays in waiting for laboratory results. On March 1, 2011, a new protocol incorporated changes to minimize delays: routing patients directly to head CT before the patient room, using parallel process workflow, and implementing point-of-care laboratories. In the pre and post-VSA epochs, 132 and 87 patients were treated with intravenous tPA, respectively. Compared with pre-VSA, door-to-needle times and percent of patients treated ≤60 minutes from hospital arrival were improved in the post-VSA epoch: 60 minutes versus 39 minutes (P<0.0001) and 52% versus 78% (P<0.0001), respectively, with no change in symptomatic hemorrhage rate. Lean process improvement methodology can expedite time-dependent stroke care without compromising safety.

  20. An Investigation of Sintering Parameters on Titanium Powder for Electron Beam Melting Processing Optimization.

    PubMed

    Drescher, Philipp; Sarhan, Mohamed; Seitz, Hermann

    2016-12-01

    Selective electron beam melting (SEBM) is a relatively new additive manufacturing technology for metallic materials. Specific to this technology is the sintering of the metal powder prior to the melting process. The sintering process has disadvantages for post-processing: the post-processing of parts produced by SEBM typically involves the removal of semi-sintered powder through the use of a powder blasting system. Furthermore, the sintering of large areas before melting decreases productivity. Current investigations are aimed at improving the sintering process in order to achieve better productivity, geometric accuracy, and resolution. In this study, the focus lies on the modification of the sintering process. In order to investigate and improve the sintering process, highly porous titanium test specimens were built with various scan speeds. The aim of this study was to decrease build time while retaining comparable mechanical properties of the components and to remove the residual powder more easily after a build. By only sintering the area in which the melt pool for the components is created, an average productivity improvement of approx. 20% was achieved. Tensile tests were carried out, and the measured mechanical properties show comparable or slightly improved values compared with the reference.

  1. [Digital signal processing of a novel neuron discharge model stimulation strategy for cochlear implants].

    PubMed

    Yang, Yiwei; Xu, Yuejin; Miu, Jichang; Zhou, Linghong; Xiao, Zhongju

    2012-10-01

    To apply classic leaky integrate-and-fire models, based on the mechanism by which physiological auditory stimulation is generated, to the information-processing coding of cochlear implants in order to improve auditory outcomes. The results of the algorithm simulated on a digital signal processor (DSP) were imported into Matlab for comparative analysis. Compared with CIS coding, the membrane potential integrate-and-fire (MPIF) algorithm allowed more natural pulse discharge in a pseudo-random manner that better fits the physiological structures. The MPIF algorithm can effectively solve the problem of the dynamic structure of the delivered auditory information sequence issued in the auditory center, and allows integration of the stimulating pulses and time coding to ensure the coherence and relevance of the stimulating pulse timing.
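
    A minimal sketch of the leaky integrate-and-fire unit that such coding strategies build on, assuming an arbitrary stimulus envelope and invented parameter values; this is not the authors' MPIF implementation.

    ```python
    import numpy as np

    def lif_spike_train(stim, dt=1e-4, tau=5e-3, v_th=1.0, v_reset=0.0):
        """Leaky integrate-and-fire: the membrane potential integrates the
        stimulus envelope and leaks with time constant tau; a pulse is
        emitted whenever the threshold is crossed."""
        v, spikes = v_reset, []
        for i, s in enumerate(stim):
            v += dt * (-v / tau + s)      # leaky integration
            if v >= v_th:
                spikes.append(i * dt)     # record pulse time
                v = v_reset               # reset after firing
        return spikes

    # Example: a 200 Hz amplitude-modulated envelope as channel input
    t = np.arange(0, 0.05, 1e-4)
    envelope = 300 * (1 + np.sin(2 * np.pi * 200 * t))
    print(lif_spike_train(envelope)[:5])
    ```

    Because the firing times depend on the integrated history of the envelope, the resulting pulse train is irregular, which is the "pseudo-random" discharge pattern the abstract contrasts with fixed-rate CIS stimulation.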

  2. Process improvement to enhance existing stroke team activity toward more timely thrombolytic treatment.

    PubMed

    Cho, Han-Jin; Lee, Kyung Yul; Nam, Hyo Suk; Kim, Young Dae; Song, Tae-Jin; Jung, Yo Han; Choi, Hye-Yeon; Heo, Ji Hoe

    2014-10-01

    Process improvement (PI) is an approach for enhancing the existing quality improvement process by making changes while keeping the existing process. We have shown that implementation of a stroke code program using a computerized physician order entry system is effective in reducing the in-hospital time delay to thrombolysis in acute stroke patients. We investigated whether implementation of this PI could further reduce the time delays by continuous improvement of the existing process. After determining a key indicator [time interval from emergency department (ED) arrival to intravenous (IV) thrombolysis] and conducting data analysis, the target time from ED arrival to IV thrombolysis in acute stroke patients was set at 40 min. The key indicator was monitored continuously at a weekly stroke conference. The possible reasons for the delay were determined in cases for which IV thrombolysis was not administered within the target time and, where possible, the problems were corrected. The time intervals from ED arrival to the various evaluation steps and treatment before and after implementation of the PI were compared. The median time interval from ED arrival to IV thrombolysis in acute stroke patients was significantly reduced after implementation of the PI (from 63.5 to 45 min, p=0.001). The variation in the time interval was also reduced. A reduction in the evaluation time intervals was achieved after the PI [from 23 to 17 min for computed tomography scanning (p=0.003) and from 35 to 29 min for complete blood counts (p=0.006)]. PI is effective for continuous improvement of the existing process by reducing the time delays between ED arrival and IV thrombolysis in acute stroke patients.

  3. Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images

    NASA Astrophysics Data System (ADS)

    Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.

    2011-12-01

    We present a support system for detecting changes in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies that last for long periods of time, as ophthalmologic pathologies usually do. We propose a flow of preprocessing, processing and postprocessing applied to a set of images selected from a public database, presenting pathological progression. A test interface was developed to select the images to be compared, apply the different methods developed, and display the results. We measured the system performance in terms of sensitivity, specificity and computation time. We obtained good results: higher than 84% for the first two parameters, and processing times lower than 3 seconds for 512x512-pixel images. For the specific case of detecting changes associated with bleeding, the system responds with sensitivity and specificity over 98%.

  4. Lévy walks with variable waiting time: A ballistic case

    NASA Astrophysics Data System (ADS)

    Kamińska, A.; Srokowski, T.

    2018-06-01

    The Lévy walk process for a lower interval of an excursion-time distribution (α < 1) is discussed. The particle rests between the jumps, and the waiting time is position-dependent. Two cases are considered, a rising and a diminishing waiting-time rate ν(x), which require different approximations of the master equation. The process comprises two phases of the motion: particles at rest and in flight. The density distributions for them are derived as solutions of the corresponding fractional equations. For strongly falling ν(x), the resting-particle density assumes the α-stable form (truncated at the fronts), and the process resolves itself to Lévy flights. The diffusion is enhanced in this case but is no longer ballistic, in contrast to the case of rising ν(x). The analytical results are compared with Monte Carlo trajectory simulations. The results qualitatively agree with observed properties of human and animal movements.
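
    A Monte Carlo trajectory sketch of such a process, assuming Pareto-distributed flight times and one illustrative diminishing rate ν(x); the functional form and all parameters are placeholders, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def levy_walk(n_steps, alpha=0.7, v=1.0, nu=lambda x: 1.0 / (1.0 + abs(x))):
        """Lévy walk with rests: flight durations are Pareto-distributed
        (heavy tail, exponent alpha < 1), the direction is random, and the
        particle then rests for a time drawn with position-dependent rate
        nu(x) -- here a diminishing rate, one illustrative choice."""
        x, t = 0.0, 0.0
        for _ in range(n_steps):
            tau_f = rng.pareto(alpha) + 1.0           # flight (excursion) time
            x += v * tau_f * rng.choice([-1.0, 1.0])  # ballistic displacement
            t += tau_f
            t += rng.exponential(1.0 / nu(x))         # position-dependent rest
        return x, t

    # Three independent trajectories: final position and elapsed time
    print([levy_walk(1000) for _ in range(3)])
    ```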

  5. Signal processing methodologies for an acoustic fetal heart rate monitor

    NASA Technical Reports Server (NTRS)

    Pretlow, Robert A., III; Stoughton, John W.

    1992-01-01

    The research and development of real-time signal processing methodologies for the detection of fetal heart tones within a noise-contaminated signal from a passive acoustic sensor is presented. A linear predictor algorithm is utilized for detection of the heart tone event, and additional processing derives the heart rate. The linear predictor is adaptively 'trained' in a least-mean-square-error sense on generic fetal heart tones recorded from patients. A real-time monitor system is described which outputs to a strip chart recorder for plotting the time history of the fetal heart rate. The system is validated in the context of the fetal nonstress test. Comparisons are made with ultrasonic nonstress tests on a series of patients. The comparative data provide favorable indications of the feasibility of the acoustic monitor for clinical use.
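
    A minimal sketch of an LMS-trained linear predictor of the kind described, applied to a synthetic tone-burst signal; the filter order, step size, and test signal are invented for illustration and do not reproduce the authors' detector.

    ```python
    import numpy as np

    def lms_predictor(signal, order=8, mu=0.01):
        """Train a linear predictor with the LMS rule: predict the next
        sample from the previous `order` samples and adapt the weights in
        the direction that reduces the squared prediction error."""
        w = np.zeros(order)
        errors = np.empty(len(signal) - order)
        for n in range(order, len(signal)):
            x = signal[n - order:n][::-1]   # most recent sample first
            e = signal[n] - w @ x           # prediction error
            w += 2 * mu * e * x             # LMS weight update
            errors[n - order] = e
        return w, errors

    # Noisy periodic "heart tone" bursts: the residual error shrinks as the
    # filter learns the signal structure, so large residuals flag events.
    rng = np.random.default_rng(0)
    t = np.arange(0, 2, 1 / 1000)
    sig = (np.sin(2 * np.pi * 140 * t) * (np.sin(2 * np.pi * 2.3 * t) > 0.9)
           + 0.05 * rng.standard_normal(len(t)))
    w, e = lms_predictor(sig)
    print(np.abs(e[:500]).mean(), np.abs(e[-500:]).mean())
    ```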

  6. Monitoring endemic livestock diseases using laboratory diagnostic data: A simulation study to evaluate the performance of univariate process monitoring control algorithms.

    PubMed

    Lopes Antunes, Ana Carolina; Dórea, Fernanda; Halasa, Tariq; Toft, Nils

    2016-05-01

    Surveillance systems are critical for accurate, timely monitoring and effective disease control. In this study, we investigated the performance of univariate process monitoring control algorithms in detecting changes in seroprevalence for endemic diseases. We also assessed the effect of sample size (number of sentinel herds tested in the surveillance system) on the performance of the algorithms. Three univariate process monitoring control algorithms were compared: the Shewhart p chart (PSHEW), the cumulative sum (CUSUM) and the exponentially weighted moving average (EWMA). Increases in seroprevalence were simulated from 0.10 to 0.15 and 0.20 over 4, 8, 24, 52 and 104 weeks. Each epidemic scenario was run with 2000 iterations. The cumulative sensitivity (CumSe) and timeliness were used to evaluate the algorithms' performance with a 1% false alarm rate. Using these performance evaluation criteria, it was possible to assess the accuracy and timeliness of the surveillance system working in real time. The results showed that EWMA and PSHEW had higher CumSe (compared with CUSUM) from week 1 until the end of the period for all simulated scenarios. Changes in seroprevalence from 0.10 to 0.20 were more easily detected (higher CumSe) than changes from 0.10 to 0.15 for all three algorithms. Similar results were found with EWMA and PSHEW, based on the median time to detection. Changes in seroprevalence were detected later with CUSUM, compared to EWMA and PSHEW, for the different scenarios. Increasing the sample size 10-fold halved the time to detection (CumSe=1), whereas increasing the sample size 100-fold reduced the time to detection by a factor of 6. This study investigated the performance of three univariate process monitoring control algorithms in monitoring endemic diseases. It was shown that automated systems based on these detection methods identified changes in seroprevalence at different times. Increasing the number of tested herds would lead to faster detection. However, the practical implications of increasing the sample size (such as the costs associated with the disease) should also be taken into account. Copyright © 2016 Elsevier B.V. All rights reserved.
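
    As one concrete example of these chart-based detectors, here is a hedged sketch of an EWMA chart applied to simulated weekly seroprevalence estimates; the smoothing constant, control-limit multiplier and scenario values are illustrative choices, not the study's settings.

    ```python
    import numpy as np

    def ewma_alarm_times(p_hat, p0, n, lam=0.2, L=2.7):
        """EWMA chart on weekly seroprevalence estimates: smooth the series
        and flag weeks where the statistic exceeds the upper control limit
        derived from the binomial standard error of the in-control level p0."""
        sigma = np.sqrt(p0 * (1 - p0) / n)
        # asymptotic upper control limit of the EWMA statistic
        ucl = p0 + L * sigma * np.sqrt(lam / (2 - lam))
        z, alarms = p0, []
        for week, p in enumerate(p_hat):
            z = lam * p + (1 - lam) * z
            if z > ucl:
                alarms.append(week)
        return alarms

    # Example: seroprevalence rises from 0.10 to 0.15 halfway through a year
    rng = np.random.default_rng(2)
    n = 100  # sentinel herds tested per week
    true_p = np.r_[np.full(26, 0.10), np.full(26, 0.15)]
    p_hat = rng.binomial(n, true_p) / n
    print(ewma_alarm_times(p_hat, p0=0.10, n=n))
    ```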

  7. Habitual short sleep impacts frontal switch mechanism in attention to novelty.

    PubMed

    Gumenyuk, Valentina; Roth, Thomas; Korzyukov, Oleg; Jefferson, Catherine; Bowyer, Susan; Drake, Christopher L

    2011-12-01

    Reduced time in bed relative to biological sleep need is common. The impact of habitual short sleep on auditory attention has not been studied to date. In the current study, we utilized novelty oddball tasks to evaluate the effect of habitual short sleep on the brain function underlying attention control processes, measured by the mismatch negativity (MMN, an index of the pre-attentive stage), P3a (attention-dependent), and P3b (memory-dependent) event-related brain potentials (ERPs). An extended time in bed in a separate study was used to evaluate the possible reversal of the impairments of these processes in habitual short sleepers. Ten self-defined short sleepers (total sleep time [TST] ≤ 6 h) and 9 normal-sleeping subjects (TST 7-8 h) participated. ERPs were recorded via a 64-channel EEG system. Two test conditions, "ignore" and "attend," were implemented. The ERPs were analyzed and compared between groups across the two task conditions and frontal/central/parietal electrodes by 3-factor ANOVA. Sleep diary data were compared between groups by t-test. Sleep was recorded by the Zeo sleep monitoring system for a week in both habitual and extended sleep conditions at home. The main findings of the present study show that short-sleeping individuals had deficient MMN and P3a brain responses over frontal areas compared to normal-sleeping subjects. The P3b amplitude was increased over frontal areas and decreased over parietal areas with respect to the control group. Extension of time in bed for one week increased TST (from 5.7 h to 7.4 h), and concomitantly the MMN amplitude increased from -0.1 μV up to -1.25 μV over frontal areas. Reduced time in bed is associated with a deficiency of the neuronal process associated with change detection, which may recover after one week of sleep extension, whereas attention-dependent neural processes do not normalize after this period of time in habitually short-sleeping individuals and may require longer recovery periods.

  8. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes

    PubMed Central

    Chen, Rui; Hyrien, Ollivier

    2011-01-01

    This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. "Conventional" and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356

  9. Scheduling in Sensor Grid Middleware for Telemedicine Using ABC Algorithm

    PubMed Central

    Vigneswari, T.; Mohamed, M. A. Maluk

    2014-01-01

    Advances in microelectromechanical systems (MEMS) and nanotechnology have enabled the design of low-power wireless sensor nodes capable of sensing different vital signs in our body. These nodes can communicate with each other to aggregate data and transmit vital parameters to a base station (BS). The data collected at the base station can be used to monitor health in real time. The patient wearing the sensors may be mobile, leading to aggregation of data from different BSs for processing. Processing real-time data is compute-intensive, and telemedicine facilities may not have the appropriate hardware to process the real-time data effectively. To overcome this, the sensor grid has been proposed in the literature, wherein sensor data are integrated into the grid for processing. This work proposes a scheduling algorithm to efficiently process telemedicine data in the grid. The proposed algorithm uses the popular artificial bee colony (ABC) swarm intelligence algorithm for scheduling to overcome the NP-complete problem of grid scheduling. Results compared with other heuristic scheduling algorithms show the effectiveness of the proposed algorithm. PMID:25548557

  10. Generalized Langevin dynamics of a nanoparticle using a finite element approach: Thermostating with correlated noise

    NASA Astrophysics Data System (ADS)

    Uma, B.; Swaminathan, T. N.; Ayyaswamy, P. S.; Eckmann, D. M.; Radhakrishnan, R.

    2011-09-01

    A direct numerical simulation (DNS) procedure is employed to study the thermal motion of a nanoparticle in an incompressible Newtonian stationary fluid medium with the generalized Langevin approach. We consider both Markovian (white noise) and non-Markovian (Ornstein-Uhlenbeck noise and Mittag-Leffler noise) processes. Initial locations of the particle are at various distances from the bounding wall to delineate wall effects. At thermal equilibrium, the numerical results are validated by comparing the calculated translational and rotational temperatures of the particle with those obtained from the equipartition theorem. The nature of the hydrodynamic interactions is verified by comparing the velocity autocorrelation functions and mean square displacements with analytical results. Numerical predictions of wall interactions with the particle in terms of mean square displacements are compared with analytical results. In the non-Markovian Langevin approach, an appropriate choice of colored noise is required to satisfy the power-law decay in the velocity autocorrelation function at long times. The results obtained by using non-Markovian Mittag-Leffler noise simultaneously satisfy the equipartition theorem and the long-time behavior of the hydrodynamic correlations for a range of memory correlation times. The Ornstein-Uhlenbeck process does not provide the appropriate hydrodynamic correlations. Comparing our DNS results to the solution of a one-dimensional generalized Langevin equation, it is observed that where the thermostat adheres to the equipartition theorem, the characteristic memory time in the noise is consistent with the inherent time scale of the memory kernel. The performance of the thermostat with respect to equilibrium and dynamic properties for various noise schemes is discussed.
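
    For reference, a short sketch of generating Ornstein-Uhlenbeck colored noise with the exact discrete update (exponential autocorrelation), the simplest of the noise schemes compared above; all parameter values are arbitrary.

    ```python
    import numpy as np

    def ou_noise(n, dt, tau, sigma, rng):
        """Exact discrete update of an Ornstein-Uhlenbeck process: colored
        noise with autocorrelation sigma**2 * exp(-t / tau)."""
        a = np.exp(-dt / tau)
        s = sigma * np.sqrt(1 - a * a)     # keeps the variance stationary
        eta = np.empty(n)
        eta[0] = sigma * rng.standard_normal()
        for i in range(1, n):
            eta[i] = a * eta[i - 1] + s * rng.standard_normal()
        return eta

    rng = np.random.default_rng(5)
    eta = ou_noise(100_000, dt=1e-3, tau=0.05, sigma=1.0, rng=rng)
    # sample autocorrelation at lag tau should be close to exp(-1) ~ 0.37
    lag = 50
    print(np.corrcoef(eta[:-lag], eta[lag:])[0, 1])
    ```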

  11. The Use of a Microcomputer Based Array Processor for Real Time Laser Velocimeter Data Processing

    NASA Technical Reports Server (NTRS)

    Meyers, James F.

    1990-01-01

    The application of an array processor to laser velocimeter data processing is presented. The hardware is described along with the method of parallel programming required by the array processor. A portion of the data processing program is described in detail. The increase in computational speed of a microcomputer equipped with an array processor is illustrated by comparative testing with a minicomputer.

  12. Real-Time Monitoring of Psychotherapeutic Processes: Concept and Compliance

    PubMed Central

    Schiepek, Günter; Aichhorn, Wolfgang; Gruber, Martin; Strunk, Guido; Bachler, Egon; Aas, Benjamin

    2016-01-01

    Objective: The feasibility of a high-frequency real-time monitoring approach to psychotherapy is outlined and tested for patients' compliance to evaluate its integration to everyday practice. Criteria concern the ecological momentary assessment, the assessment of therapy-related cognitions and emotions, equidistant time sampling, real-time nonlinear time series analysis, continuous participative process control by client and therapist, and the application of idiographic (person-specific) surveys. Methods: The process-outcome monitoring is technically realized by an internet-based device for data collection and data analysis, the Synergetic Navigation System. Its feasibility is documented by a compliance study on 151 clients treated in an inpatient and a day-treatment clinic. Results: We found high compliance rates (mean: 78.3%, median: 89.4%) amongst the respondents, independent of the severity of symptoms or the degree of impairment. Compared to other diagnoses, the compliance rate was lower in the group diagnosed with personality disorders. Conclusion: The results support the feasibility of high-frequency monitoring in routine psychotherapy settings. Daily collection of psychological surveys allows for the assessment of highly resolved, equidistant time series data which gives insight into the nonlinear qualities of therapeutic change processes (e.g., pattern transitions, critical instabilities). PMID:27199837

  13. Empirical method to measure stochasticity and multifractality in nonlinear time series

    NASA Astrophysics Data System (ADS)

    Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping

    2013-12-01

    An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of the time series from a Wiener process so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Ito process, the emergent markets are far from efficient. Differences about the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
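
    A simple proxy for this idea, assuming the deviation from a Wiener process is summarized by the scaling exponent of the increment spread; this stand-in is illustrative and is not the paper's exact deviation parameter.

    ```python
    import numpy as np

    def scaling_exponent(x, lags=range(2, 100)):
        """For a Wiener process the standard deviation of x(t+L) - x(t)
        grows as L**0.5, so the fitted exponent measures the deviation
        from ideal Brownian behaviour."""
        lags = np.asarray(list(lags))
        sd = np.array([np.std(x[L:] - x[:-L]) for L in lags])
        slope, _ = np.polyfit(np.log(lags), np.log(sd), 1)
        return slope

    rng = np.random.default_rng(3)
    wiener = np.cumsum(rng.standard_normal(10_000))
    print(scaling_exponent(wiener))   # ~0.5 for a Wiener process
    # a deterministic signal scales very differently (exponent near 1)
    print(scaling_exponent(np.sin(0.01 * np.arange(10_000))))
    ```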

  14. Effects of Repeated Testing on Short- and Long-Term Memory Performance across Different Test Formats

    ERIC Educational Resources Information Center

    Stenlund, Tova; Sundström, Anna; Jonsson, Bert

    2016-01-01

    This study examined whether practice testing with short-answer (SA) items benefits learning over time compared to practice testing with multiple-choice (MC) items, and rereading the material. More specifically, the aim was to test the hypotheses of "retrieval effort" and "transfer appropriate processing" by comparing retention…

  15. Reaction Times of Normal Listeners to Laryngeal, Alaryngeal, and Synthetic Speech

    ERIC Educational Resources Information Center

    Evitts, Paul M.; Searl, Jeff

    2006-01-01

    The purpose of this study was to compare listener processing demands when decoding alaryngeal compared to laryngeal speech. Fifty-six listeners were presented with single words produced by 1 proficient speaker from 5 different modes of speech: normal, tracheoesophageal (TE), esophageal (ES), electrolaryngeal (EL), and synthetic speech (SS).…

  16. Efficient self-consistent viscous-inviscid solutions for unsteady transonic flow

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.

    1985-01-01

    An improved method is presented for coupling a boundary layer code with an unsteady inviscid transonic computer code in a quasi-steady fashion. At each fixed time step, the boundary layer and inviscid equations are successively solved until the process converges. An explicit coupling of the equations is described which greatly accelerates the convergence process. Computer times for converged viscous-inviscid solutions are about 1.8 times the comparable inviscid values. Comparison of the results obtained with experimental data on three airfoils are presented. These comparisons demonstrate that the explicitly coupled viscous-inviscid solutions can provide efficient predictions of pressure distributions and lift for unsteady two-dimensional transonic flows.

  18. High speed real-time wavefront processing system for a solid-state laser system

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Yang, Ping; Chen, Shanqiu; Ma, Lifang; Xu, Bing

    2008-03-01

    A high-speed real-time wavefront processing system for a solid-state laser beam cleanup system has been built. This system consists of a Core 2 industrial PC (IPC) running Linux and the real-time Linux (RT-Linux) operating system (OS), a PCI image grabber, and a D/A card. More often than not, the phase aberrations of the output beam from solid-state lasers vary rapidly with intracavity thermal effects and environmental influences. To compensate for the phase aberrations of solid-state lasers successfully, a high-speed real-time wavefront processing system is presented. Compared to former systems, this system improves the speed considerably. In the new system, the acquisition of image data, the output of control voltage data and the implementation of the reconstructor control algorithm are treated as real-time tasks in kernel space, while the display of wavefront information and man-machine interaction are treated as non-real-time tasks in user space. The parallel processing of real-time tasks in Symmetric Multi-Processor (SMP) mode is the main strategy for improving the speed. In this paper, the performance and efficiency of this wavefront processing system are analyzed. The open-loop experimental results show that the sampling frequency of this system is up to 3300 Hz, and the system can deal well with phase aberrations from solid-state lasers.

  19. Impact of the New Abbott mPLUS Feature on Clinical Laboratory Efficiencies of Abbott RealTime Assays for Detection of HIV-1, Hepatitis C Virus, Hepatitis B Virus, Chlamydia trachomatis, and Neisseria gonorrhoeae

    PubMed Central

    Jones, Sara; Wiesneth, Russ; Barry, Cathy; Webb, Erika; Belova, Larissa; Dolan, Peggy; Ho, Shiaolan; Abravaya, Klara; Cloherty, Gavin

    2013-01-01

    Diagnostic laboratories are under increasing pressure to improve and expand their services. Greater flexibility in sample processing is a critical factor that can improve the time to results while reducing reagent waste, making laboratories more efficient and cost-effective. The introduction of the Abbott mPLUS feature, with the capacity for extended use of amplification reagents, significantly increases the flexibility of the m2000 platform and enables laboratories to customize their workflows based on sample arrival patterns. The flexibility in sample batch size offered by mPLUS enables significant reductions in processing times. For hepatitis B virus tests, a reduction in sample turnaround times of up to 30% (105 min) was observed for batches of 12 samples compared with those for batches of 24 samples; for Chlamydia trachomatis/Neisseria gonorrhoeae tests, the ability to run batches of 24 samples reduced the turnaround time by 83% (54 min) compared with that for batches of 48 samples. Excellent correlations between mPLUS and m2000 standard condition results were observed for all RealTime viral load assays evaluated in this study, with correlation r values of 0.998 for all assays tested. For the qualitative RealTime C. trachomatis/N. gonorrhoeae assay, the overall agreements between the two conditions tested were >98% for C. trachomatis and 100% for N. gonorrhoeae. Comparable precision results were observed for the two conditions tested for all RealTime assays. The enhanced mPLUS capability provides clinical laboratories with increased efficiencies to meet increasingly stringent turnaround time requirements without increased costs associated with discarding partially used amplification reagents. PMID:24088850

  20. Impact of the New Abbott mPLUS feature on clinical laboratory efficiencies of abbott RealTime assays for detection of HIV-1, Hepatitis C Virus, Hepatitis B Virus, Chlamydia trachomatis, and Neisseria gonorrhoeae.

    PubMed

    Lucic, Danijela; Jones, Sara; Wiesneth, Russ; Barry, Cathy; Webb, Erika; Belova, Larissa; Dolan, Peggy; Ho, Shiaolan; Abravaya, Klara; Cloherty, Gavin

    2013-12-01

    Diagnostic laboratories are under increasing pressure to improve and expand their services. Greater flexibility in sample processing is a critical factor that can improve the time to results while reducing reagent waste, making laboratories more efficient and cost-effective. The introduction of the Abbott mPLUS feature, with the capacity for extended use of amplification reagents, significantly increases the flexibility of the m2000 platform and enables laboratories to customize their workflows based on sample arrival patterns. The flexibility in sample batch size offered by mPLUS enables significant reductions in processing times. For hepatitis B virus tests, a reduction in sample turnaround times of up to 30% (105 min) was observed for batches of 12 samples compared with those for batches of 24 samples; for Chlamydia trachomatis/Neisseria gonorrhoeae tests, the ability to run batches of 24 samples reduced the turnaround time by 83% (54 min) compared with that for batches of 48 samples. Excellent correlations between mPLUS and m2000 standard condition results were observed for all RealTime viral load assays evaluated in this study, with correlation r values of 0.998 for all assays tested. For the qualitative RealTime C. trachomatis/N. gonorrhoeae assay, the overall agreements between the two conditions tested were >98% for C. trachomatis and 100% for N. gonorrhoeae. Comparable precision results were observed for the two conditions tested for all RealTime assays. The enhanced mPLUS capability provides clinical laboratories with increased efficiencies to meet increasingly stringent turnaround time requirements without increased costs associated with discarding partially used amplification reagents.

  1. Comparison of the efficacy of steam sterilization indicators.

    PubMed Central

    Lee, C H; Montville, T J; Sinskey, A J

    1979-01-01

    Twenty-one commercially available chemical steam sterilization indicators were processed in an empty autoclave for various times at temperatures between 240 and 270 degrees F (ca. 116 and 132 degrees C). The time required to reach a sterilized reading at each temperature was plotted on a semilogarithmic time-temperature plot and compared with the time-temperature sterilization curve for Bacillus stearothermophilus. Five of the indicators had time-temperature kinetics similar to those of B. stearothermophilus, but three of these overestimated the effect of processing. Two of the indicators overestimated the effect of processing and were less sensitive to temperature changes than was B. stearothermophilus. Thirteen of the indicators had time-temperature curves that crossed the B. stearothermophilus plot. One indicator produced such ambiguous results that no determinations could be made with it. Of the 21 indicators tested, only 2 appear to be capable of accurately integrating the time-temperature effect at temperatures between 240 and 270 degrees F. The other indicators should be used only after careful analysis of their suitability for use at a given temperature. PMID:485144

  2. A Comparative Analysis of Extract, Transformation and Loading (ETL) Process

    NASA Astrophysics Data System (ADS)

    Runtuwene, J. P. A.; Tangkawarow, I. R. H. T.; Manoppo, C. T. M.; Salaki, R. J.

    2018-02-01

    The current growth of data and information occurs rapidly, in varying amounts and media. This kind of growth will eventually produce very large data sets, better known as Big Data. Business Intelligence (BI) utilizes large amounts of data and information for analysis so that important information can be obtained and used to support decision-making processes. In practice, a process that integrates existing data and information into a data warehouse is needed. This data integration process is known as Extract, Transformation and Loading (ETL). Many applications have been developed to carry out the ETL process, but selecting which application is more effective and efficient in terms of time, cost and power can be a challenge. Therefore, the objective of this study was to provide a comparative analysis of the ETL process implemented using Microsoft SQL Server Integration Services (SSIS) and the same process implemented using Pentaho Data Integration (PDI).
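
    To make the three ETL stages concrete, here is a minimal Python sketch of an extract-transform-load flow into SQLite; SSIS and PDI are graphical tools, so this is only an illustration of the pattern, and the file name, column names and table schema are hypothetical.

    ```python
    import csv
    import sqlite3

    def extract(path):
        """Extract: read raw rows from a source CSV file."""
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        """Transform: normalize fields and drop incomplete records."""
        for r in rows:
            if r["amount"]:
                yield (r["customer_id"], r["region"].strip().upper(),
                       float(r["amount"]))

    def load(rows, db_path="warehouse.db"):
        """Load: bulk-insert the cleaned rows into the warehouse table."""
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS sales "
                    "(customer_id TEXT, region TEXT, amount REAL)")
        con.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
        con.commit()
        con.close()

    # The three stages compose as a streaming pipeline:
    load(transform(extract("sales.csv")))
    ```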

  3. Breaking off Engagement: Readers' Disengagement as a Function of Reader and Text Characteristics

    ERIC Educational Resources Information Center

    Goedecke, Patricia J.; Dong, Daqi; Shi, Genghu; Feng, Shi; Risko, Evan; Olney, Andrew M.; D'Mello, Sidney K.; Graesser, Arthur C.

    2015-01-01

    Engagement during reading can be measured by the amount of time readers invest in the reading process. It is hypothesized that disengagement is marked by a decrease in time investment as compared with the demands made on the reader by the text. In this study, self-paced reading times for screens of text were predicted by a text complexity score…

  4. Meal time behavior difficulties but not nutritional deficiencies correlate with sensory processing in children with autism spectrum disorder.

    PubMed

    Shmaya, Yael; Eilat-Adar, Sigal; Leitner, Yael; Reif, Shimon; Gabis, Lidia V

    2017-07-01

    Food aversion and nutritional difficulties are common in children with autism spectrum disorder. The aims were to compare the meal time behavior of children with autism to that of their typically developing siblings and to typical controls, and to examine whether sensory profiles can predict meal time behavior or nutritional deficiencies in the autism group. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Multiscale Granger causality

    NASA Astrophysics Data System (ADS)

    Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele

    2017-10-01

    In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
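
    As a single-scale baseline for the measure being extended here, a short sketch of time-domain Granger causality estimated from restricted and unrestricted AR regressions; the paper's multiscale state-space machinery is not reproduced, and the coupled test series are synthetic.

    ```python
    import numpy as np

    def _lags(z, p):
        """Design matrix of lags 1..p of series z, aligned with z[p:]."""
        return np.column_stack([z[p - k:len(z) - k] for k in range(1, p + 1)])

    def granger_causality(x, y, p=2):
        """GC from y to x: log ratio of the residual variance of an AR(p)
        model of x on its own past (restricted) to that of a model that
        also uses the past of y (unrestricted)."""
        target = x[p:]
        designs = (_lags(x, p), np.hstack([_lags(x, p), _lags(y, p)]))
        res = []
        for X in designs:
            beta, *_ = np.linalg.lstsq(X, target, rcond=None)
            res.append(np.var(target - X @ beta))
        return np.log(res[0] / res[1])

    # Example: y drives x with a one-step delay, so GC(y->x) >> GC(x->y)
    rng = np.random.default_rng(4)
    y = rng.standard_normal(5000)
    x = np.r_[0.0, 0.8 * y[:-1]] + 0.3 * rng.standard_normal(5000)
    print(granger_causality(x, y), granger_causality(y, x))
    ```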

  6. Deficits of unconscious emotional processing in patients with major depression: An ERP study.

    PubMed

    Zhang, Dandan; He, Zhenhong; Chen, Yuming; Wei, Zhaoguo

    2016-07-15

    Major depressive disorder (MDD) is associated with behavioral and neurobiological evidences of negative bias in unconscious emotional processing. However, little is known about the time course of this deficit. The current study aimed to explore the unconscious processing of emotional facial expressions in MDD patients by means of event-related potentials (ERPs). The ERP responses to subliminally presented happy/neutral/sad faces were recorded in 26 medication-free patients and 26 healthy controls in a backward masking task. Three ERP components were compared between patients and controls. Detection accuracy was at chance level for both groups, suggesting that the process was performed in the absence of conscious awareness of the emotional stimuli. Robust emotion×group interactions were observed in P1, N170 and P3. Compared with the neutral faces, 1) the patients showed larger P1 for sad and smaller P1 for happy faces; however, the controls showed a completely inverse P1 pattern; 2) the controls exhibited larger N170 in the happy but not in the sad trials, whereas patients had comparable larger N170 amplitudes in sad and happy trials; 3) although both groups exhibited larger P3 for emotional faces, the patients showed a priority for sad trials while the controls showed a priority for happy trials. Our data suggested that negative processing bias exists on the unconscious level in individuals with MDD. The ERP measures indicated that the unconscious emotional processing in MDD patients has a time course of three-stage deflection. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. A dynamic dual process model of risky decision making.

    PubMed

    Diederich, Adele; Trueblood, Jennifer S

    2018-03-01

    Many phenomena in judgment and decision making are often attributed to the interaction of 2 systems of reasoning. Although these so-called dual process theories can explain many types of behavior, they are rarely formalized as mathematical or computational models. Rather, dual process models are typically verbal theories, which are difficult to conclusively evaluate or test. In the cases in which formal (i.e., mathematical) dual process models have been proposed, they have not been quantitatively fit to experimental data and are often silent when it comes to the timing of the 2 systems. In the current article, we present a dynamic dual process model framework of risky decision making that provides an account of the timing and interaction of the 2 systems and can explain both choice and response-time data. We outline several predictions of the model, including how changes in the timing of the 2 systems as well as time pressure can influence behavior. The framework also allows us to explore different assumptions about how preferences are constructed by the 2 systems as well as the dynamic interaction of the 2 systems. In particular, we examine 3 different possible functional forms of the 2 systems and 2 possible ways the systems can interact (simultaneously or serially). We compare these dual process models with 2 single process models using risky decision making data from Guo, Trueblood, and Diederich (2017). Using this data, we find that 1 of the dual process models significantly outperforms the other models in accounting for both choices and response times. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.

  9. A comparison of microwave versus direct solar heating for lunar brick production

    NASA Technical Reports Server (NTRS)

    Yankee, S. J.; Strenski, D. G.; Pletka, B. J.; Patil, D. S.; Mutsuddy, B. C.

    1990-01-01

    Two processing techniques considered suitable for producing bricks from lunar regolith are examined: direct solar heating and microwave heating. An analysis was performed to compare the two processes in terms of the amount of power and time required to fabricate bricks of various sizes. Microwave heating was shown to be significantly faster than solar heating for rapid production of realistic-size bricks. However, the relative simplicity of the solar collector(s) used for the solar furnace compared to the equipment necessary for microwave generation may present an economic tradeoff.

  10. Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture

    NASA Technical Reports Server (NTRS)

    Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek

    2015-01-01

    This paper evaluates the potential of the embedded Graphics Processing Unit in Nvidia's Tegra K1 for onboard processing. The performance is compared to a general-purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) Algorithm. The Tegra K1 achieved 51 for the ACCA algorithm and 20 for the dimension reduction algorithm, compared to the performance of a high-end 8-core Intel Xeon server CPU with 13.5 times higher power consumption.

  11. Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques.

    PubMed

    Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; De Gaetani, Carlo Iapige; Pinto, Livio; Tornatore, Vincenza

    2018-03-02

    Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability, and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum and GNSS data, with standard deviations of residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description, both in space and time, of dam displacements than one based on pendulum observations.

  12. Prospective randomized double-blind study of atlas-based organ-at-risk autosegmentation-assisted radiation planning in head and neck cancer.

    PubMed

    Walker, Gary V; Awan, Musaddiq; Tao, Randa; Koay, Eugene J; Boehling, Nicholas S; Grant, Jonathan D; Sittig, Dean F; Gunn, Gary Brandon; Garden, Adam S; Phan, Jack; Morrison, William H; Rosenthal, David I; Mohamed, Abdallah Sherif Radwan; Fuller, Clifton David

    2014-09-01

    Target volumes and organs-at-risk (OARs) for radiotherapy (RT) planning are manually defined, which is a tedious and inaccurate process. We sought to assess the feasibility, time reduction, and acceptability of atlas-based autosegmentation (AS) compared to manual segmentation (MS) of OARs. A commercial platform generated 16 OARs. Resident physicians were randomly assigned to modify AS OARs (AS+R) or to draw MS OARs, followed by attending physician correction. The Dice similarity coefficient (DSC) was used to measure overlap between groups compared with attending-approved OARs (DSC = 1 means perfect overlap). Forty cases were segmented. Mean ± SD segmentation time in the AS+R group was 19.7 ± 8.0 min, compared to 28.5 ± 8.0 min in the MS cohort, amounting to a 30.9% time reduction (Wilcoxon p<0.01). For each OAR, the AS DSC was statistically different from both the AS+R and MS ROIs (all Steel-Dwass p<0.01) except for the spinal cord and the mandible, suggesting oversight of the AS/MS processes is required; the AS+R and MS DSCs were not different. AS compared to attending-approved OAR DSCs varied considerably, with a chiasm mean ± SD DSC of 0.37 ± 0.32 and brainstem of 0.97 ± 0.03. Autosegmentation provides a time savings in head and neck region-of-interest generation. However, attending physician approval remains vital. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
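
    The overlap metric used here is simple to state: DSC = 2|A ∩ B| / (|A| + |B|). Below is a minimal sketch for two binary masks; the toy 2-D arrays stand in for the real 3-D organ contour masks.

```python
# Dice similarity coefficient between two binary segmentation masks, as used in
# the study to score overlap (DSC = 1 means perfect overlap). The arrays below
# are illustrative; real OARs would be 3-D contour masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64), dtype=bool); auto[20:40, 20:40] = True
manual = np.zeros((64, 64), dtype=bool); manual[22:42, 22:42] = True
print(f"DSC = {dice(auto, manual):.3f}")
```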

  13. Sex differences in event-related potentials and attentional biases to emotional facial stimuli.

    PubMed

    Pfabigan, Daniela M; Lamplmayr-Kragl, Elisabeth; Pintzinger, Nina M; Sailer, Uta; Tran, Ulrich S

    2014-01-01

    Attentional processes play an important role in the processing of emotional information. Previous research reported attentional biases during stimulus processing in anxiety and depression. However, sex differences in the processing of emotional stimuli and higher prevalence rates of anxiety disorders among women, compared to men, suggest that attentional biases may also differ between the two sexes. The present study used a modified version of the dot probe task with happy, angry, and neutral facial stimuli to investigate the time course of attentional biases in healthy volunteers. Moreover, associations of attentional biases with alexithymia were examined on the behavioral and physiological level. Event-related potentials were measured while 21 participants (11 women) performed the task, utilizing, for the first time, a difference-wave approach in the analysis to highlight emotion-specific aspects. Women showed overall enhanced probe P1 amplitudes compared to men, in particular after rewarding facial stimuli. Using the difference-wave approach, probe P1 amplitudes appeared specifically enhanced with regard to congruently presented happy facial stimuli among women, compared to men. Both methods yielded enhanced probe P1 amplitudes after presentation of the emotional stimulus in the left compared to the right visual hemifield. Probe P1 amplitudes correlated negatively with self-reported alexithymia; most of these correlations were observable only in women. Our results suggest that women orient their attention to a greater extent to facial stimuli than men and corroborate that alexithymia is a correlate of reduced emotional reactivity on a neuronal level. We recommend using a difference-wave approach when addressing attentional processes of orientation and disengagement in future studies as well.

  14. Strong and Consistently Synergistic Inactivation of Spores of Spoilage-Associated Bacillus and Geobacillus spp. by High Pressure and Heat Compared with Inactivation by Heat Alone

    PubMed Central

    Olivier, S. A.; Bull, M. K.; Stone, G.; van Diepenbeek, R. J.; Kormelink, F.; Jacops, L.; Chapman, B.

    2011-01-01

    The inactivation of spores of four low-acid food spoilage organisms by high pressure thermal (HPT) and thermal-only processing was compared on the basis of equivalent thermal lethality calculated at a reference temperature of 121.1°C (F^z_121.1°C, at 0.1 MPa or 600 MPa) and characterized as synergistic, not different, or protective. In addition, the relative resistances of spores of the different spoilage microorganisms to HPT processing were compared. Processing was performed and inactivation was compared in both laboratory and pilot scale systems and in model (diluted) and actual food products. Where statistical comparisons could be made, at least 4 times and up to around 190 times more inactivation (log10 reduction/minute at F^z_T, T = 121.1°C) of spores of Bacillus amyloliquefaciens, Bacillus sporothermodurans, and Geobacillus stearothermophilus was achieved using HPT, indicating a strong synergistic effect of high pressure and heat. Bacillus coagulans spores were also synergistically inactivated in diluted and undiluted Bolognese sauce but were protected by pressure against thermal inactivation in undiluted cream sauce. Irrespective of the response characterization, B. coagulans and B. sporothermodurans were identified as the most HPT-resistant isolates in the pilot scale and laboratory scale studies, respectively, and G. stearothermophilus as the least in both studies and all products. This is the first study to comprehensively quantitatively characterize the responses of a range of spores of spoilage microorganisms as synergistic (or otherwise) using an integrated thermal-lethality approach (F^z_T). The use of the F^z_T approach is ultimately important for the translation of commercial minimum microbiologically safe and stable thermal processes to HPT processes. PMID:21278265
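
    The equivalent-lethality comparison rests on the standard integrated F-value, F^z_T = ∫ 10^((T(t) − T_ref)/z) dt with T_ref = 121.1 °C. A minimal sketch follows, using an illustrative temperature profile and z-value rather than the study's process data.

```python
# Standard integrated thermal-lethality (F-value) calculation underlying the
# equivalence used in the study: F = integral of 10**((T(t) - T_ref)/z) dt,
# with T_ref = 121.1 C. Profile and z-value below are illustrative only.
import numpy as np

def f_value(t_min: np.ndarray, temp_c: np.ndarray, z: float = 10.0,
            t_ref: float = 121.1) -> float:
    """Trapezoidal integration of the lethal-rate curve (minutes at T_ref)."""
    lethal_rate = 10.0 ** ((temp_c - t_ref) / z)
    # manual trapezoid rule, portable across NumPy versions
    return float(np.sum(0.5 * (lethal_rate[1:] + lethal_rate[:-1]) * np.diff(t_min)))

t = np.linspace(0, 12, 241)                  # minutes
temp = 90 + 31 * np.sin(np.pi * t / 12)      # a come-up/come-down profile (C)
print(f"F(121.1 C, z=10) = {f_value(t, temp):.2f} min")
```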

  15. Strong and consistently synergistic inactivation of spores of spoilage-associated Bacillus and Geobacillus spp. by high pressure and heat compared with inactivation by heat alone.

    PubMed

    Olivier, S A; Bull, M K; Stone, G; van Diepenbeek, R J; Kormelink, F; Jacops, L; Chapman, B

    2011-04-01

    The inactivation of spores of four low-acid food spoilage organisms by high pressure thermal (HPT) and thermal-only processing was compared on the basis of equivalent thermal lethality calculated at a reference temperature of 121.1°C (F^z_121.1°C, at 0.1 MPa or 600 MPa) and characterized as synergistic, not different, or protective. In addition, the relative resistances of spores of the different spoilage microorganisms to HPT processing were compared. Processing was performed and inactivation was compared in both laboratory and pilot scale systems and in model (diluted) and actual food products. Where statistical comparisons could be made, at least 4 times and up to around 190 times more inactivation (log10 reduction/minute at F^z_T, T = 121.1°C) of spores of Bacillus amyloliquefaciens, Bacillus sporothermodurans, and Geobacillus stearothermophilus was achieved using HPT, indicating a strong synergistic effect of high pressure and heat. Bacillus coagulans spores were also synergistically inactivated in diluted and undiluted Bolognese sauce but were protected by pressure against thermal inactivation in undiluted cream sauce. Irrespective of the response characterization, B. coagulans and B. sporothermodurans were identified as the most HPT-resistant isolates in the pilot scale and laboratory scale studies, respectively, and G. stearothermophilus as the least in both studies and all products. This is the first study to comprehensively quantitatively characterize the responses of a range of spores of spoilage microorganisms as synergistic (or otherwise) using an integrated thermal-lethality approach (F^z_T). The use of the F^z_T approach is ultimately important for the translation of commercial minimum microbiologically safe and stable thermal processes to HPT processes.

  16. The Delphi Process: Some Assumptions and Some Realities.

    ERIC Educational Resources Information Center

    Waldron, James S.

    The effectiveness of the Delphi Technique is evaluated in terms of immediate and delayed controlled information feedback (feedback within 5 seconds as compared with a 24-hour delay); and the relationships that exist among measures of integrative complexity, estimations about the time of occurrence of future events, and time delay between task…

  17. Fiber-fed time-resolved photoluminescence for reduced process feedback time on thin-film photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Repins, I. L.; Egaas, B.; Mansfield, L. M.

    2015-01-15

    Fiber-fed time-resolved photoluminescence is demonstrated as a tool for immediate process feedback after deposition of the absorber layer for CuIn_xGa_(1-x)Se_2 and Cu_2ZnSnSe_4 photovoltaic devices. The technique uses a simplified configuration compared to typical laboratory time-resolved photoluminescence in the delivery of the exciting beam, signal collection, and electronic components. Correlation of instrument output with completed device efficiency is demonstrated over a large sample set. The extraction of the instrument figure of merit, which depends on both the initial luminescence intensity and its time decay, is explained and justified. Limitations in the prediction of device efficiency by this method, including surface effects, are demonstrated and discussed.
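
    The abstract names the two ingredients of the figure of merit (initial luminescence intensity and its time decay) without giving the combination; the sketch below extracts both from a single-exponential fit to a synthetic decay trace and combines them as a simple product, which is purely a placeholder assumption.

```python
# Hedged sketch: extract initial intensity and decay time from a TRPL trace via
# a single-exponential fit. The product used as "fom" is a placeholder; the
# paper's actual figure-of-merit combination is not reproduced here.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amp, tau):
    return amp * np.exp(-t / tau)

t_ns = np.linspace(0, 200, 400)
signal = decay(t_ns, amp=1000.0, tau=35.0) \
         + np.random.default_rng(1).normal(0, 5, t_ns.size)  # synthetic trace

(amp, tau), _ = curve_fit(decay, t_ns, signal, p0=(500.0, 20.0))
fom = amp * tau  # placeholder combination of intensity and decay time
print(f"I0 ~ {amp:.0f}, tau ~ {tau:.1f} ns, FOM ~ {fom:.0f}")
```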

  18. Reduction process of nitroxyl spin probes used in Overhauser-enhanced magnetic resonance imaging: An ESR study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meenakumari, V.; Premkumar, S.; Benial, A. Milton Franklin, E-mail: miltonfranklin@yahoo.com

    Electron spin resonance studies on the reduction process of nitroxyl spin probes were carried out for 1 mM 14N-labeled nitroxyl radicals in pure water and in a 1 mM concentration of ascorbic acid as a function of time. The electron spin resonance parameters, such as line width, hyperfine coupling constant, g-factor, signal intensity ratio, and rotational correlation time, were estimated. The 3-carbamoyl-PROXYL radical has the narrowest line width and the fastest tumbling motion compared with the 3-carboxy-PROXYL, 4-methoxy-TEMPO, and 4-acetamido-TEMPO radicals. The half-life and decay rate were estimated for a 1 mM concentration of 14N-labeled nitroxyl radicals in a 1 mM concentration of ascorbic acid. From the results, 3-carbamoyl-PROXYL has a long half-life and high stability compared with the 3-carboxy-PROXYL, 4-methoxy-TEMPO, and 4-acetamido-TEMPO radicals. Therefore, this study reveals that the 3-carbamoyl-PROXYL radical can act as a good redox-sensitive spin probe for Overhauser-enhanced Magnetic Resonance Imaging.

  19. Reduction process of nitroxyl spin probes used in Overhauser-enhanced magnetic resonance imaging: An ESR study

    NASA Astrophysics Data System (ADS)

    Meenakumari, V.; Jawahar, A.; Premkumar, S.; Benial, A. Milton Franklin

    2016-05-01

    Electron spin resonance studies on the reduction process of nitroxyl spin probes were carried out for 1 mM 14N-labeled nitroxyl radicals in pure water and in a 1 mM concentration of ascorbic acid as a function of time. The electron spin resonance parameters, such as line width, hyperfine coupling constant, g-factor, signal intensity ratio, and rotational correlation time, were estimated. The 3-carbamoyl-PROXYL radical has the narrowest line width and the fastest tumbling motion compared with the 3-carboxy-PROXYL, 4-methoxy-TEMPO, and 4-acetamido-TEMPO radicals. The half-life and decay rate were estimated for a 1 mM concentration of 14N-labeled nitroxyl radicals in a 1 mM concentration of ascorbic acid. From the results, 3-carbamoyl-PROXYL has a long half-life and high stability compared with the 3-carboxy-PROXYL, 4-methoxy-TEMPO, and 4-acetamido-TEMPO radicals. Therefore, this study reveals that the 3-carbamoyl-PROXYL radical can act as a good redox-sensitive spin probe for Overhauser-enhanced Magnetic Resonance Imaging.

  20. Shimmed electron beam welding process

    DOEpatents

    Feng, Ganjiang; Nowak, Daniel Anthony; Murphy, John Thomas

    2002-01-01

    A modified electron beam welding process effects welding of joints between superalloy materials by inserting a weldable shim in the joint and heating the superalloy materials with an electron beam. The process ensures full penetration of joints with a consistent percentage of filler material and thereby improves the fatigue life of the joint by three to four times as compared with the prior art. The process also allows variable shim thickness and joint fit-up gaps to provide increased flexibility for manufacturing when joining complex airfoil structures and the like.

  1. Time-dependent crack growth behavior of alloy 617 and alloy 230 at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Roy, Shawoon Kumar

    2011-12-01

    Two Ni-base solid-solution-strengthened superalloys, INCONEL 617 and HAYNES 230, were studied to characterize their sustained loading crack growth (SLCG) behavior at elevated temperatures appropriate for Next Generation Nuclear Plant (NGNP) applications, under a constant stress intensity factor (Kmax = 27.75 MPa√m) in air. The results indicate a time-dependent rate-controlling process that can be characterized by a linear elastic fracture mechanics (LEFM) parameter, the stress intensity factor (K). At elevated temperatures, the crack growth mechanism was best described using a damage zone concept. Based on the results, SAGBOE (stress accelerated grain boundary oxidation embrittlement) is considered the primary reason for time-dependent SLCG. A thermodynamic equation was used to correlate all the SLCG results and determine the thermal activation energy of the process. A phenomenological model based on a time-dependent factor was developed, combining previous researchers' time-dependent fatigue crack propagation (FCP) results with the current SLCG results, to relate cycle-dependent and time-dependent FCP for both alloys. Further study included hold-time (3+300 s) fatigue testing and no-hold (1 s) fatigue testing with various load ratios (R) at 700°C with a Kmax of 27.75 MPa√m. The results suggest an interesting point: crack growth behavior is significantly affected by a change in R value in the cycle-dependent process, whereas in the time-dependent process a change in R has no significant effect. Fractography showed an intergranular cracking mode for all time-dependent processes and a transgranular cracking mode for cycle-dependent processes. In Alloy 230, SEM images display intergranular cracking with carbide particles, dense oxides, and dimple-mixed secondary cracks for the time-dependent 3+300 s FCP and SLCG tests. In all cases, Alloy 230 shows better crack growth resistance than Alloy 617.
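
    The abstract does not state the form of the thermodynamic correlation; a conventional Arrhenius-type expression for time-dependent crack growth at fixed Kmax, from which an activation energy Q is extracted, would read:

```latex
% Assumed Arrhenius-type form (the abstract does not give the equation):
% da/dt at constant K_max across temperatures T yields the activation energy Q.
\[
  \left.\frac{da}{dt}\right|_{K_{\max}} = A\,\exp\!\left(-\frac{Q}{RT}\right)
  \quad\Longrightarrow\quad
  \ln\frac{da}{dt} = \ln A - \frac{Q}{R}\cdot\frac{1}{T}
\]
```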

  2. Artificial Intelligence vs. Statistical Modeling and Optimization of Continuous Bead Milling Process for Bacterial Cell Lysis.

    PubMed

    Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A; Soni, Nipunjot; Mandal, Raju K; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y; Govender, Thavendran; Kruger, Hendrik G; Jawed, Arshad

    2016-01-01

    For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading (OD600 nm) of 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate of 258.08 mL/h, bead loading of 80% (v/v), cell loading (OD600 nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in a continuous bead milling process has been attempted for the first time in this study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. An ANN, being a summation of functions across multiple layers, is capable of representing the complex non-linear dependence of the variables, in this case enzyme recovery as a function of the bead milling parameters. Since GA can optimize even discontinuous functions, the present study is an example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent undefined biological functions, as is the case for common industrial processes involving biological moieties.
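
    A minimal sketch of the RSM side of such a comparison: fitting a second-order (quadratic) response surface for recovery over coded process variables by least squares. The data, coefficients, and response function below are synthetic; the study's actual factors were feed rate, bead load, cell load, and run time.

```python
# Sketch of fitting a full second-order response surface (the RSM model class
# the abstract calls too inflexible) to synthetic recovery data over 4 coded
# factors. Not the study's data or fitted coefficients.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(40, 4))                               # 4 coded factors
y = 3.0 - ((X - 0.3) ** 2).sum(axis=1) + rng.normal(0, 0.05, 40)   # toy recovery, g/L

# Build the quadratic design matrix: intercept, x_i, and x_i * x_j (i <= j).
cols = [np.ones(len(X))] + [X[:, i] for i in range(4)]
cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(4), 2)]
A = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares surface fit
print("fitted quadratic coefficients:", np.round(beta, 3))
```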

  3. Artificial Intelligence vs. Statistical Modeling and Optimization of Continuous Bead Milling Process for Bacterial Cell Lysis

    PubMed Central

    Haque, Shafiul; Khan, Saif; Wahid, Mohd; Dar, Sajad A.; Soni, Nipunjot; Mandal, Raju K.; Singh, Vineeta; Tiwari, Dileep; Lohani, Mohtashim; Areeshi, Mohammed Y.; Govender, Thavendran; Kruger, Hendrik G.; Jawed, Arshad

    2016-01-01

    For a commercially viable recombinant intracellular protein production process, efficient cell lysis and protein release is a major bottleneck. The recovery of a recombinant protein, cholesterol oxidase (COD), was studied in a continuous bead milling process. A full factorial response surface methodology (RSM) design was employed and compared to artificial neural networks coupled with a genetic algorithm (ANN-GA). Significant process variables, cell slurry feed rate (A), bead load (B), cell load (C), and run time (D), were investigated and optimized for maximizing COD recovery. RSM predicted an optimum feed rate of 310.73 mL/h, bead loading of 79.9% (v/v), cell loading (OD600 nm) of 74, and run time of 29.9 min, with a recovery of ~3.2 g/L. ANN-GA predicted a maximum COD recovery of ~3.5 g/L at an optimum feed rate of 258.08 mL/h, bead loading of 80% (v/v), cell loading (OD600 nm) of 73.99, and run time of 32 min. An overall 3.7-fold increase in productivity is obtained when compared to a batch process. Optimization and comparison of statistical vs. artificial intelligence techniques in a continuous bead milling process has been attempted for the first time in this study. We were able to successfully represent the complex non-linear multivariable dependence of enzyme recovery on bead milling parameters. Quadratic second-order response functions are not flexible enough to represent such complex non-linear dependence. An ANN, being a summation of functions across multiple layers, is capable of representing the complex non-linear dependence of the variables, in this case enzyme recovery as a function of the bead milling parameters. Since GA can optimize even discontinuous functions, the present study is an example of using machine learning (ANN) in combination with evolutionary optimization (GA) to represent undefined biological functions, as is the case for common industrial processes involving biological moieties. PMID:27920762

  4. A self-regulating biomolecular comparator for processing oscillatory signals

    PubMed Central

    Agrawal, Deepak K.; Franco, Elisa; Schulman, Rebecca

    2015-01-01

    While many cellular processes are driven by biomolecular oscillators, precise control of a downstream on/off process by a biochemical oscillator signal can be difficult: over an oscillator's period, its output signal varies continuously between its amplitude limits and spends a significant fraction of the time at intermediate values between these limits. Further, the oscillator's output is often noisy, with particularly large variations in the amplitude. In electronic systems, an oscillating signal is generally processed by a downstream device such as a comparator that converts a potentially noisy oscillatory input into a square wave output that is predominantly in one of two well-defined on and off states. The comparator's output then controls downstream processes. We describe a method for constructing a synthetic biochemical device that likewise produces a square-wave-type biomolecular output for a variety of oscillatory inputs. The method relies on a separation of time scales between the slow rate of production of an oscillatory signal molecule and the fast rates of intermolecular binding and conformational changes. We show how to control the characteristics of the output by varying the concentrations of the species and the reaction rates. We then use this control to show how our approach could be applied to process different in vitro and in vivo biomolecular oscillators, including the p53-Mdm2 transcriptional oscillator and two types of in vitro transcriptional oscillators. These results demonstrate how modular biomolecular circuits could, in principle, be combined to build complex dynamical systems. The simplicity of our approach also suggests that natural molecular circuits may process some biomolecular oscillator outputs before they are applied downstream. PMID:26378119

  5. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method based on Bayesian analysis to classify time series data from the international emissions trading market generated by agent-based simulation, and compares it with a discrete Fourier transform (DFT) analytical method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express time series data as distances in a mapping, which is easier to understand and draw inferences from than the raw time series; (2) these methods can analyze uncertain time series data, including both stationary and non-stationary processes, using distances obtained via agent-based simulation; and (3) the Bayesian analytical method can resolve a 1% difference in the emission reduction targets of agents.
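
    A rough sketch of the DFT side of such a comparison: each series is mapped to low-frequency spectral magnitudes and classified by distance to class centroids. The series, class labels, and feature count below are synthetic placeholders, not the paper's simulated market data.

```python
# Sketch of DFT-based time-series classification: map each series to the
# magnitudes of its first few Fourier coefficients and classify by distance
# to class centroids. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)

def dft_features(series: np.ndarray, k: int = 8) -> np.ndarray:
    """Magnitudes of the first k non-DC Fourier coefficients."""
    return np.abs(np.fft.rfft(series))[1:k + 1]

trend = np.cumsum(rng.normal(0.1, 1.0, (20, 256)), axis=1)   # non-stationary-like
noise = rng.normal(0.0, 1.0, (20, 256))                      # stationary-like
feats = np.array([dft_features(s) for s in np.vstack([trend, noise])])
centroids = feats[:20].mean(0), feats[20:].mean(0)

test = dft_features(np.cumsum(rng.normal(0.1, 1.0, 256)))
label = int(np.linalg.norm(test - centroids[1]) < np.linalg.norm(test - centroids[0]))
print("classified as", ["trend-like", "noise-like"][label])
```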

  6. Comparing the transient response of a resistive-type sensor with a thin film thermocouple during the post-exposure bake process

    NASA Astrophysics Data System (ADS)

    Kreider, Kenneth G.; DeWitt, David P.; Fowler, Joel B.; Proctor, James E.; Kimes, William A.; Ripple, Dean C.; Tsai, Benjamin K.

    2004-04-01

    Recent studies on dynamic temperature profiling and lithographic performance modeling of the post-exposure bake (PEB) process have demonstrated that the rate of heating and cooling may have an important influence on resist lithographic response. Accurate measurement of the transient surface temperature during the heating or cooling process can only be assured if the sensors embedded in or attached to the test wafer do not affect the temperature distribution in the bare wafer. In this paper we report on an experimental and analytical study to compare the transient response of embedded platinum resistance thermometer (PRT) sensors with surface-deposited thin-film thermocouples (TFTCs). The TFTCs on silicon wafers have been developed at NIST to measure wafer temperatures in other semiconductor thermal processes. Experiments were performed on a test bed built from a commercial, fab-qualified module with hot and chill plates, using wafers that had been instrumented with calibrated type-E (NiCr/CuNi) TFTCs and commercial PRTs. Time constants were determined from an energy-balance analysis fitting the temperature-time derivative to the wafer temperature during the heating and cooling processes. The time constants for instrumented wafers ranged from 4.6 s to 5.1 s on heating for both the TFTC and PRT sensors, with an average difference of less than 0.1 s between the TFTCs and PRTs and slightly greater differences on cooling.

  7. Microwave alkaline roasting-water dissolving process for germanium extraction from zinc oxide dust and its analysis by response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Wang, Wankun; Wang, Fuchun; Lu, Fanghai

    2017-12-01

    A microwave alkaline roasting-water dissolving process was proposed to improve germanium (Ge) extraction from zinc oxide (ZnO) dust. The effects of the important parameters were investigated and the process conditions were optimized using response surface methodology (RSM). The Ge extraction is consistent with the linear polynomial model type. Alkali-material ratio, microwave heating temperature, and leaching temperature are the significant factors for this process. The optimized conditions are as follows: alkali-material ratio of 0.9 kg/kg, aging time of 1.12 days, microwave heating at 658 K for 10 min, liquid-solid ratio of 4.31 L/kg, leaching temperature of 330 K, and leaching time of 47 min, giving a Ge extraction of about 99.38%. This is consistent with the predicted value of 99.31%. Compared with the existing electric-furnace alkaline roasting process reported in the literature, the alkaline roasting temperature and holding time are both reduced. The process shows good prospects for leaching Ge from ZnO dust by microwave alkaline roasting-water dissolving.

  8. Reduced order model based on principal component analysis for process simulation and optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lang, Y.; Malacina, A.; Biegler, L.

    2009-01-01

    It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than the conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM, compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.
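
    The general shape of a PCA-based ROM can be sketched in a few lines: snapshot fields are compressed to a handful of principal components, and a cheap regression maps process inputs to those components. The snapshots, input dimensions, and linear input-to-coefficient map below are synthetic stand-ins, not the paper's CFD workflow.

```python
# Sketch of a PCA-based reduced order model: compress snapshot "CFD" fields via
# SVD, then regress process inputs onto the PCA coefficients so new fields can
# be predicted in milliseconds. All data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(4)
inputs = rng.uniform(0, 1, (60, 3))                     # e.g., flow, T, composition
modes = rng.normal(size=(5, 2000))                      # latent spatial modes
snapshots = (inputs @ rng.normal(size=(3, 5))) @ modes  # toy fields (60 x 2000)

# PCA via SVD of the mean-centered snapshot matrix.
mean = snapshots.mean(0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 5
coeffs = U[:, :r] * S[:r]                               # PCA coefficients per run

# Linear map from inputs (plus intercept) to PCA coefficients: the fast ROM.
W, *_ = np.linalg.lstsq(np.column_stack([inputs, np.ones(60)]), coeffs, rcond=None)

def rom_predict(x):
    """Reconstruct a full field from process inputs via the reduced model."""
    return mean + (np.append(x, 1.0) @ W) @ Vt[:r]

print("ROM field shape:", rom_predict([0.5, 0.2, 0.9]).shape)
```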

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.

  10. Intelligent sensor-model automated control of PMR-15 autoclave processing

    NASA Technical Reports Server (NTRS)

    Hart, S.; Kranbuehl, D.; Loos, A.; Hinds, B.; Koury, J.

    1992-01-01

    An intelligent sensor-model system has been built and used for automated control of the PMR-15 cure process in the autoclave. The system uses frequency-dependent electromagnetic sensing (FDEMS), the Loos processing model, and the Air Force QPAL intelligent software shell. The Loos model is used to predict and optimize the cure process, including the time-temperature dependence of the extent of reaction, flow, and part consolidation. The FDEMS sensing system in turn monitors, in situ, the removal of solvent, changes in the viscosity, reaction advancement, and cure completion in the mold continuously throughout the processing cycle. The sensor information is compared with the optimum processing conditions from the model. The QPAL composite cure control system allows the comparison of the sensor monitoring with the model predictions to be broken down into a series of discrete steps and provides a language for making decisions on what to do next regarding time-temperature and pressure.

  11. Evaluating scale-up rules of a high-shear wet granulation process.

    PubMed

    Tao, Jing; Pandey, Preetanshu; Bindra, Dilbir S; Gao, Julia Z; Narang, Ajit S

    2015-07-01

    This work aimed to evaluate the commonly used scale-up rules for the high-shear wet granulation process using a microcrystalline cellulose-lactose-based low-drug-loading formulation. Granule properties such as particle size, porosity, flow, and tabletability, as well as tablet dissolution, were compared across scales using scale-up rules based on different impeller speed calculations or extended wet massing time. The constant tip speed rule was observed to produce slightly less granulated material at the larger scales. Longer wet massing time can be used to compensate for the lower shear experienced by the granules at the larger scales. The constant Froude number and constant empirical stress rules yielded granules that were more comparable across scales in terms of compaction performance and tablet dissolution. Granule porosity was shown to correlate well with blend tabletability and tablet dissolution, indicating the importance of monitoring granule densification (porosity) during scale-up. It was shown that different routes can be chosen during scale-up to achieve comparable granule growth and densification by altering one of three parameters: water amount, impeller speed, and wet massing time. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
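
    The two impeller-speed rules named here have common textbook forms: constant tip speed keeps N·D fixed, while constant Froude number keeps N²·D fixed. A minimal sketch, with hypothetical bowl diameters and small-scale speed:

```python
# The two named impeller-speed scale-up rules in their standard textbook forms.
# Bowl diameters and the small-scale speed below are hypothetical examples.
def constant_tip_speed(n1_rpm: float, d1: float, d2: float) -> float:
    """N2 = N1 * (D1 / D2): same impeller tip velocity at both scales."""
    return n1_rpm * d1 / d2

def constant_froude(n1_rpm: float, d1: float, d2: float) -> float:
    """N2 = N1 * sqrt(D1 / D2): same inertial-to-gravitational force ratio."""
    return n1_rpm * (d1 / d2) ** 0.5

n1, d_small, d_large = 500.0, 0.25, 0.60   # rpm, m, m (hypothetical)
print(f"tip-speed rule: {constant_tip_speed(n1, d_small, d_large):.0f} rpm")
print(f"Froude rule:    {constant_froude(n1, d_small, d_large):.0f} rpm")
```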

  12. Assessment of hospital processes using a process mining technique: Outpatient process analysis at a tertiary hospital.

    PubMed

    Yoo, Sooyoung; Cho, Minsu; Kim, Eunhye; Kim, Seok; Sim, Yerim; Yoo, Donghyun; Hwang, Hee; Song, Minseok

    2016-04-01

    Many hospitals are increasing their efforts to improve processes because processes play an important role in enhancing work efficiency and reducing costs. However, to date, a quantitative tool has not been available to examine the before-and-after effects of process and environmental changes, other than indirect indicators such as mortality rate and readmission rate. This study used process mining technology to analyze process changes arising from changes in the hospital environment, such as the construction of a new building, and to measure the effects of those changes in terms of consultation wait time, time spent per task, and outpatient care processes. Using process mining technology, electronic health record (EHR) log data of outpatient care before and after construction of a new building were analyzed, and the effectiveness of the technology in terms of the process was evaluated. Using the process mining technique, we found that the total time spent in outpatient care did not increase significantly compared to that before the construction, even though the number of outpatients increased, and that the consultation wait time decreased. These results suggest that the operation of the outpatient clinic was effective after the changes to the hospital environment were implemented. We further identified improvements in processes using the process mining technique, demonstrating the usefulness of this technique for analyzing complex hospital processes at low cost. This study confirmed the effectiveness of process mining technology at an actual hospital site. In future studies, the use of process mining technology will be expanded by applying this approach to a larger variety of process change situations. Copyright © 2016. Published by Elsevier Ireland Ltd.
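
    One of the measurements described, deriving consultation wait time from EHR event logs, can be sketched in a few lines of pandas. The column names, event labels, and timestamps below are hypothetical; real process mining would also discover the full control-flow model from the log.

```python
# Minimal sketch of deriving consultation wait times from an EHR-style event
# log. Column names, events, and timestamps are hypothetical placeholders.
import pandas as pd

log = pd.DataFrame({
    "patient": ["p1", "p1", "p2", "p2"],
    "event":   ["arrival", "consult_start", "arrival", "consult_start"],
    "time":    pd.to_datetime(["2016-01-04 08:55", "2016-01-04 09:20",
                               "2016-01-04 09:01", "2016-01-04 09:12"]),
})

# One row per patient, one column per event timestamp.
wide = log.pivot(index="patient", columns="event", values="time")
wait = (wide["consult_start"] - wide["arrival"]).dt.total_seconds() / 60
print(wait.describe())  # wait-time distribution in minutes
```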

  13. Critical Thinking Outcomes of Computer-Assisted Instruction versus Written Nursing Process.

    ERIC Educational Resources Information Center

    Saucier, Bonnie L.; Stevens, Kathleen R.; Williams, Gail B.

    2000-01-01

    Nursing students (n=43) who used clinical case studies via computer-assisted instruction (CAI) were compared with 37 who used the written nursing process (WNP). California Critical Thinking Skills Test results did not show significant increases in critical thinking. The WNP method was more time consuming; the CAI group was more satisfied. Use of…

  14. My Face or Yours? Event-Related Potential Correlates of Self-Face Processing

    ERIC Educational Resources Information Center

    Keyes, Helen; Brady, Nuala; Reilly, Richard B.; Foxe, John J.

    2010-01-01

    The neural basis of self-recognition is mainly studied using brain-imaging techniques which reveal much about the localization of self-processing in the brain. There are comparatively few studies using EEG which allow us to study the time course of self-recognition. In this study, participants monitored a sequence of images, including 20 distinct…

  15. Native and L2 Processing of Homonyms in Sentential Context

    ERIC Educational Resources Information Center

    Elston-Guttler, K.E.; Friederici, A.D.

    2005-01-01

    We compare native and non-native processing of homonyms in sentence context whose two most frequent meanings are nouns (e.g., sentence) or a noun and a verb (e.g., trip). With both participant groups, we conducted a combined reaction time (RT)/event-related brain potential (ERP) lexical decision experiment with two stimulus-onset asynchronies…

  16. Exploiting Degrees of Inflectional Ambiguity: Stem Form and the Time Course of Morphological Processing

    ERIC Educational Resources Information Center

    Jarvikivi, Juhani; Pyykkonen, Pirita; Niemi, Jussi

    2009-01-01

    The authors compared sublexical and supralexical approaches to morphological processing with unambiguous and ambiguous inflected words and words with ambiguous stems in 3 masked and unmasked priming experiments in Finnish. Experiment 1 showed equal facilitation for all prime types with a short 60-ms stimulus onset asynchrony (SOA) but significant…

  17. Relationships among Linguistic Processing Speed, Phonological Working Memory, and Attention in Children Who Stutter

    ERIC Educational Resources Information Center

    Anderson, Julie D.; Wagovich, Stacy A.

    2010-01-01

    Relatively recently, experimental studies of linguistic processing speed in children who stutter (CWS) have emerged, some of which suggest differences in performance among CWS compared to children who do not stutter (CWNS). What is not yet well understood is the extent to which underlying cognitive skills may impact performance on timed tasks of…

  18. Application of Cross-Correlation Green's Function Along With FDTD for Fast Computation of Envelope Correlation Coefficient Over Wideband for MIMO Antennas

    NASA Astrophysics Data System (ADS)

    Sarkar, Debdeep; Srivastava, Kumar Vaibhav

    2017-02-01

    In this paper, the concept of cross-correlation Green's functions (CGF) is used in conjunction with the finite difference time domain (FDTD) technique for calculation of the envelope correlation coefficient (ECC) of an arbitrary MIMO antenna system over a wide frequency band. Both frequency-domain (FD) and time-domain (TD) post-processing techniques are proposed for possible application with this FDTD-CGF scheme. The FDTD-CGF time-domain (FDTD-CGF-TD) scheme utilizes time-domain signal processing methods and exhibits a significant reduction in ECC computation time compared to the FDTD-CGF frequency-domain (FDTD-CGF-FD) scheme for high frequency-resolution requirements. The proposed FDTD-CGF-based schemes can be applied for accurate and fast prediction of the wideband ECC response, instead of the conventional scattering-parameter-based techniques, which have several limitations. Numerical examples of the proposed FDTD-CGF techniques are provided for two-element MIMO systems involving thin-wire half-wavelength dipoles in parallel side-by-side as well as orthogonal arrangements. The results obtained from the FDTD-CGF techniques are compared with results from the commercial electromagnetic solver Ansys HFSS to verify the validity of the proposed approach.
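
    For reference, the conventional scattering-parameter baseline the abstract alludes to is the well-known two-port formula ECC = |S11*·S12 + S21*·S22|² / [(1 − |S11|² − |S21|²)(1 − |S12|² − |S22|²)], valid for lossless antennas. A minimal sketch with illustrative single-frequency S-parameters:

```python
# Conventional S-parameter ECC formula for a lossless two-port MIMO antenna
# (the baseline technique the abstract says has limitations). The complex
# S-parameter values below are illustrative, for a single frequency point.
import numpy as np

def ecc_from_s(s11, s12, s21, s22):
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11)**2 - abs(s21)**2) * (1 - abs(s12)**2 - abs(s22)**2)
    return num / den

print(f"ECC = {ecc_from_s(0.3+0.1j, 0.12-0.05j, 0.12-0.05j, 0.28+0.08j):.4f}")
```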

  19. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.

  20. Stroop-interference effect in post-traumatic stress disorder.

    PubMed

    Cui, Hong; Chen, Guoliang; Liu, Xiaohui; Shan, Moshui; Jia, Yanyan

    2014-12-01

    To investigate conflict processing in posttraumatic stress disorder (PTSD) patients, we conducted the classical Stroop task while recording event-related potentials. Although reaction times were overall slower for PTSD patients than for the healthy age-matched control group, the Stroop-interference effect on reaction time did not differ between the two groups. Compared with normal controls, the interference effects on the N2 and N450 components were larger, and the interference effect on the slow potential component disappeared, in PTSD. These data indicate dysfunction of conflict processing in individuals with PTSD.

  1. Research of the chemical activity of microgrinding coals of various metamorphism degree

    NASA Astrophysics Data System (ADS)

    Burdukov, A. P.; Butakov, E. B.; Kuznetsov, A. V.

    2017-09-01

    In this paper, we investigate the effect of mechanical-activation grinding of coals of various degrees of metamorphism using two different methods: determination of the flash time in a vertical tubular furnace and thermogravimetric analysis. In the experiments, coals that had been processed in a vibrating centrifugal mill and a disintegrator and then aged for some time were compared. The experiments showed a decrease in the ignition temperature of mechanically activated coals, deactivation of the fuel with aging, and the effect of mechanical activation on the subsequent process of thermal-oxidative degradation.

  2. Comparative analysis of the modified enclosed energy metric for self-focusing holograms from digital lensless holographic microscopy.

    PubMed

    Trujillo, Carlos; Garcia-Sucerquia, Jorge

    2015-06-01

    A comparative analysis of the performance of the modified enclosed energy (MEE) method for self-focusing holograms recorded with digital lensless holographic microscopy is presented. Although the MEE method has been published previously, no extended analysis of its performance has been reported. We have tested the MEE in terms of the minimum axial distance allowed between the reconstructed holograms when searching for the focal plane, and the elapsed time to obtain the focused image. These parameters have been compared with those of some of the methods already reported in the literature. The MEE achieves better results in terms of self-focusing quality but at a higher computational cost. Despite its longer processing time, the method remains fast enough to be technologically attractive. Modeled and experimental holograms have been utilized in this work to perform the comparative study.

  3. Engine Icing Data - An Analytics Approach

    NASA Technical Reports Server (NTRS)

    Fitzgerald, Brooke A.; Flegel, Ashlie B.

    2017-01-01

    Engine icing researchers at the NASA Glenn Research Center use the Escort data acquisition system in the Propulsion Systems Laboratory (PSL) to generate and collect a tremendous amount of data every day. Currently these researchers spend countless hours processing and formatting their data, selecting important variables, and plotting relationships between variables, all by hand, generally analyzing data in a spreadsheet-style program (such as Microsoft Excel). Though spreadsheet-style analysis is familiar and intuitive to many, processing data in spreadsheets is often unreproducible and small mistakes are easily overlooked. Spreadsheet-style analysis is also time inefficient. The same formatting, processing, and plotting procedure has to be repeated for every dataset, which leads to researchers performing the same tedious data munging process over and over instead of making discoveries within their data. This paper documents a data analysis tool written in Python hosted in a Jupyter notebook that vastly simplifies the analysis process. From the file path of any folder containing time series datasets, this tool batch loads every dataset in the folder, processes the datasets in parallel, and ingests them into a widget where users can search for and interactively plot subsets of columns in a number of ways with a click of a button, easily and intuitively comparing their data and discovering interesting dynamics. Furthermore, comparing variables across data sets and integrating video data (while extremely difficult with spreadsheet-style programs) is quite simplified in this tool. This tool has also gathered interest outside the engine icing branch, and will be used by researchers across NASA Glenn Research Center. This project exemplifies the enormous benefit of automating data processing, analysis, and visualization, and will help researchers move from raw data to insight in a much smaller time frame.

  4. Eye movement related brain responses to emotional scenes during free viewing

    PubMed Central

    Simola, Jaana; Torniainen, Jari; Moisala, Mona; Kivikangas, Markus; Krause, Christina M.

    2013-01-01

    Emotional stimuli are preferentially processed over neutral stimuli. Previous studies, however, disagree on whether emotional stimuli capture attention preattentively or whether the processing advantage is dependent on allocation of attention. The present study investigated attention and emotion processes by measuring brain responses related to eye movement events while 11 participants viewed images selected from the International Affective Picture System (IAPS). Brain responses to emotional stimuli were compared between serial and parallel presentation. An “emotional” set included one image with high positive or negative valence among neutral images. A “neutral” set comprised four neutral images. The participants were asked to indicate which picture—if any—was emotional and to rate that picture on valence and arousal. In the serial condition, the event-related potentials (ERPs) were time-locked to the stimulus onset. In the parallel condition, the ERPs were time-locked to the first eye entry on an image. The eye movement results showed facilitated processing of emotional, especially unpleasant information. The EEG results in both presentation conditions showed that the LPP (“late positive potential”) amplitudes at 400–500 ms were enlarged for the unpleasant and pleasant pictures as compared to neutral pictures. Moreover, the unpleasant scenes elicited stronger responses than pleasant scenes. The ERP results did not support parafoveal emotional processing, although the eye movement results suggested faster attention capture by emotional stimuli. Our findings, thus, suggested that emotional processing depends on overt attentional resources engaged in the processing of emotional content. The results also indicate that brain responses to emotional images can be analyzed time-locked to eye movement events, although the response amplitudes were larger during serial presentation. PMID:23970856

  5. Education in Rural Areas of China and South Africa: Comparative Perspectives on Policy and Educational Management. Comparative Perspectives: Education in China & South Africa.

    ERIC Educational Resources Information Center

    Gordon, Adele; Wang, Qiang

    This report on the educational systems in China and South Africa compares the policies and processes of the two countries as they attempt to expand and improve rural education. Both countries experienced a major political upheaval, and even though there is a 50-year time lag between these events, political changes ushered in radical educational…

  6. Cognitive inconsistency in bipolar patients is determined by increased intra-individual variability in initial phase of task performance.

    PubMed

    Krukow, Paweł; Szaniawska, Ola; Harciarek, Michał; Plechawska-Wójcik, Małgorzata; Jonak, Kamil

    2017-03-01

    Bipolar patients show high intra-individual variability during cognitive processing. However, it is not known whether specific fluctuations of variability contribute to the overall high cognitive inconsistency. The objective was to compare the dynamic performance profiles of patients and healthy controls to identify hypothetical differences and their associations with overall variability and processing speed. Changes in the intra-individual standard deviation (iSD) of reaction times over the course of a processing speed test were measured by dividing the iSD for the whole task into four consecutive parts. Motor speed and cognitive effort were controlled. Patients with BD exhibited significantly lower processing speed and higher intra-individual variability compared with HC. The profile of intra-individual variability changes over the time of performance was significantly different between the BD and HC groups: F(3, 207) = 8.60, p < 0.0001, η_p² = 0.11. The iSD of BD patients in the initial phase of performance was three times higher than in the last. There were no significant differences between the four intervals in the HC group. The inter-group difference in the initial part of the profiles remained significant after controlling for several cognitive and clinical variables. The applied computer version of the Cognitive Speed Test was relatively new and, thus, replication studies are needed. The effect seen in the present study is driven mainly by BD type I. Patients with BD exhibit problems with establishing a stimulus-response association in the starting phase of cognitive processing. This deficit may interfere negatively with other cognitive functions, decreasing the level of psychosocial functioning, and therefore should be explored in future studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. A comparison between atmospheric/humidity and vacuum cyanoacrylate fuming of latent fingermarks.

    PubMed

    Farrugia, Kevin J; Fraser, Joanna; Friel, Lauren; Adams, Duncan; Attard-Montalto, Nicola; Deacon, Paul

    2015-12-01

    A number of pseudo-operational trials were set up to compare the atmospheric/humidity and vacuum cyanoacrylate fuming processes on plastic carrier bags. The fuming processes were compared using two-step cyanoacrylate fuming with basic yellow 40 (BY40) staining and a one-step fluorescent cyanoacrylate fuming, Lumicyano 4%. Preliminary work using planted fingermarks and split depletions was performed to identify the optimum vacuum fuming conditions. The first pseudo-operational trial compared the different fuming conditions (atmospheric/humidity vs. vacuum) for the two-step process, where 50% more marks were detected with the atmospheric/humidity process. None of the marks developed by the vacuum process could be observed visually; however, a significant number of marks were detected by fluorescence after BY40 staining. The second trial repeated the work of trial 1 using the one-step cyanoacrylate process, Lumicyano at a concentration of 4%. Trial 2 provided results comparable to trial 1, and all the items were then re-treated with Lumicyano 4% under atmospheric/humidity conditions before dyeing with BY40, giving the sequences process A (Lumicyano 4% atmospheric-Lumicyano 4% atmospheric-BY40) and process B (Lumicyano 4% vacuum-Lumicyano 4% atmospheric-BY40). The number of marks (visual and fluorescent) was counted after each treatment, with a substantial increase in the number of detected marks in the second and third treatments of the process. The increased detection rate after the double Lumicyano process was unexpected and may have important implications. Trial 3 was performed to investigate whether the amount of cyanoacrylate and/or the fuming time had an impact on the results observed in trial 2, whereas trial 4 assessed whether the double process using conventional cyanoacrylate, rather than Lumicyano 4%, provided an increased detection rate. Trials 3 and 4 confirmed that doubling the amount of Lumicyano 4% cyanoacrylate and the fuming time produced a lower detection rate than the double process with Lumicyano 4%. Furthermore, the double process with conventional cyanoacrylate did not provide any benefit. Scanning electron microscopy was also performed to investigate the morphology of the cyanoacrylate polymer under different conditions. The atmospheric/humidity process appears to be superior to the vacuum process for both the two-step and one-step cyanoacrylate fuming, although the two-step process performed better than the one-step process under vacuum conditions. Nonetheless, vacuum cyanoacrylate fuming may have certain operational advantages, and its use does not adversely affect subsequent cyanoacrylate fuming under atmospheric/humidity conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Reducing process delays for real-time earthquake parameter estimation - An application of KD tree to large databases for Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Yin, Lucy; Andrews, Jennifer; Heaton, Thomas

    2018-05-01

    Earthquake parameter estimation using nearest neighbor searches over a large database of observations can lead to reliable prediction results. However, in the real-time application of Earthquake Early Warning (EEW) systems, accurate prediction using a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and reduce the processing time of the nearest neighbor searches used for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocentral distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the method feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
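
    The speed-up mechanism can be sketched in a few lines: replace an exhaustive nearest-neighbor scan over the feature database with a KD tree query. The feature vectors and ground-motion values below are random placeholders for the waveform filter-bank characteristics the abstract mentions.

```python
# Sketch of KD-tree nearest-neighbor search over a seismic feature database.
# Feature vectors and PGV values are synthetic placeholders.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
db_features = rng.normal(size=(100_000, 9))   # database of past observations
db_pgv = rng.lognormal(size=100_000)          # associated ground-motion values

tree = cKDTree(db_features)                   # built once, offline
query = rng.normal(size=(1, 9))               # features from an incoming waveform
dist, idx = tree.query(query, k=30)           # 30 nearest neighbors, in O(log n)
print("predicted PGV ~", db_pgv[idx].mean())  # average over the neighbors
```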

  9. MNE Scan: Software for real-time processing of electrophysiological data.

    PubMed

    Esch, Lorenz; Sun, Limin; Klüber, Viktor; Lew, Seok; Baumgarten, Daniel; Grant, P Ellen; Okada, Yoshio; Haueisen, Jens; Hämäläinen, Matti S; Dinh, Christoph

    2018-06-01

    Magnetoencephalography (MEG) and electroencephalography (EEG) are noninvasive techniques to study the electrophysiological activity of the human brain. Thus, they are well suited for real-time monitoring and analysis of neuronal activity. Real-time MEG/EEG data processing allows adjustment of the stimuli to the subject's responses, optimizing the acquired information, especially by providing dynamically changing displays to enable neurofeedback. We introduce MNE Scan, an acquisition and real-time analysis software based on the multipurpose software library MNE-CPP. MNE Scan allows the development and application of acquisition and novel real-time processing methods in both research and clinical studies. The MNE Scan development follows a strict software engineering process to enable the approvals required for clinical software. We tested the performance of MNE Scan in several device-independent use cases, including a clinical epilepsy study, real-time source estimation, and a Brain Computer Interface (BCI) application. Compared to existing tools, we propose modular software that addresses the clinical software requirements expected by certification authorities while remaining extendable and freely accessible. We conclude that MNE Scan is the first step in creating device-independent open-source software to facilitate the transition from basic neuroscience research to both applied sciences and clinical applications. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Ultra-low-power and robust digital-signal-processing hardware for implantable neural interface microsystems.

    PubMed

    Narasimhan, S; Chiel, H J; Bhunia, S

    2011-04-01

    Implantable microsystems for monitoring or manipulating brain activity typically require on-chip real-time processing of multichannel neural data using ultra low-power, miniaturized electronics. In this paper, we propose an integrated-circuit/architecture-level hardware design framework for neural signal processing that exploits the nature of the signal-processing algorithm. First, we consider different power reduction techniques and compare the energy efficiency between the ultra-low frequency subthreshold and conventional superthreshold design. We show that the superthreshold design operating at a much higher frequency can achieve comparable energy dissipation by taking advantage of extensive power gating. It also provides significantly higher robustness of operation and yield under large process variations. Next, we propose an architecture level preferential design approach for further energy reduction by isolating the critical computation blocks (with respect to the quality of the output signal) and assigning them higher delay margins compared to the noncritical ones. Possible delay failures under parameter variations are confined to the noncritical components, allowing graceful degradation in quality under voltage scaling. Simulation results using prerecorded neural data from the sea-slug (Aplysia californica) show that the application of the proposed design approach can lead to significant improvement in total energy, without compromising the output signal quality under process variations, compared to conventional design approaches.

  11. Short Term Rain Prediction For Sustainability of Tanks in the Tropic Influenced by Shadow Rains

    NASA Astrophysics Data System (ADS)

    Suresh, S.

    2007-07-01

    Rainfall and flow prediction, adapting the Venkataraman single time series approach and the Wiener multiple time series approach, were conducted for the Aralikottai and Kothamangalam tank systems, Tamilnadu, India. The results indicated that raw prediction of daily values is closer to actual values than trend-identified predictions. The sister seasonal time series were more amenable to prediction than the whole parent time series. The Venkataraman single time series approach was better suited to rainfall prediction, while the Wiener approach proved better for daily prediction of flow based on rainfall. The major conclusion is that the sister seasonal time series of rain and flow have their own identities even though they form part of the whole parent time series. Further studies with other tropical small watersheds are necessary to establish this unique characteristic of independent but not exclusive behavior of seasonal stationary stochastic processes as compared to parent non-stationary stochastic processes.

  12. Efficient reactive Brownian dynamics

    DOE PAGES

    Donev, Aleksandar; Yang, Chiao-Yu; Kim, Changho

    2018-01-21

    We develop a Split Reactive Brownian Dynamics (SRBD) algorithm for particle simulations of reaction-diffusion systems based on the Doi or volume reactivity model, in which pairs of particles react with a specified Poisson rate if they are closer than a chosen reactive distance. In our Doi model, we ensure that the microscopic reaction rules for various association and dissociation reactions are consistent with detailed balance (time reversibility) at thermodynamic equilibrium. The SRBD algorithm uses Strang splitting in time to separate reaction and diffusion and solves both the diffusion-only and reaction-only subproblems exactly, even at high packing densities. To efficiently process reactions without uncontrolled approximations, SRBD employs an event-driven algorithm that processes reactions in a time-ordered sequence over the duration of the time step. A grid of cells with size larger than all of the reactive distances is used to schedule and process the reactions, but unlike traditional grid-based methods such as reaction-diffusion master equation algorithms, the results of SRBD are statistically independent of the size of the grid used to accelerate the processing of reactions. We use the SRBD algorithm to compute the effective macroscopic reaction rate for both reaction-limited and diffusion-limited irreversible association in three dimensions and compare to existing theoretical predictions at low and moderate densities. We also study long-time tails in the time correlation functions for reversible association at thermodynamic equilibrium and compare to recent theoretical predictions. Finally, we compare different particle and continuum methods on a model exhibiting a Turing-like instability and pattern formation. Our studies reinforce the common finding that microscopic mechanisms and correlations matter for diffusion-limited systems, making continuum and even mesoscopic modeling of such systems difficult or impossible. We also find that for models in which particles diffuse off lattice, such as the Doi model, reactions lead to a spurious enhancement of the effective diffusion coefficients.
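
    The Doi-model reaction rule at the heart of this record can be sketched in a few lines. The fixed-step scheme below is a simplified illustration under assumed parameters, not the event-driven SRBD algorithm itself, which resolves reactions in time order within each splitting step.

    # Illustrative (not SRBD) fixed-step realization of the Doi model for the
    # irreversible association A + A -> products: within one splitting step of
    # length dt, a pair closer than the reactive distance r reacts with
    # probability 1 - exp(-lam*dt). All parameter values are arbitrary.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(1)
    L, n, r, lam, D, dt = 1.0, 500, 0.02, 50.0, 1e-3, 1e-3
    pos = rng.uniform(0, L, size=(n, 3))
    alive = np.ones(n, dtype=bool)

    def step(pos, alive):
        # Reaction half-step: find candidate pairs within the reactive distance.
        idx = np.flatnonzero(alive)
        pairs = cKDTree(pos[idx]).query_pairs(r, output_type='ndarray')
        for i, j in pairs:
            a, b = idx[i], idx[j]
            if alive[a] and alive[b] and rng.random() < 1 - np.exp(-lam * dt):
                alive[a] = alive[b] = False
        # Diffusion half-step: Brownian displacement with periodic wrapping.
        pos[alive] = (pos[alive] + rng.normal(scale=np.sqrt(2 * D * dt),
                                              size=(alive.sum(), 3))) % L
        return pos, alive

    for _ in range(100):
        pos, alive = step(pos, alive)
    print("particles remaining:", int(alive.sum()))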

  13. Efficient reactive Brownian dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donev, Aleksandar; Yang, Chiao-Yu; Kim, Changho

    We develop a Split Reactive Brownian Dynamics (SRBD) algorithm for particle simulations of reaction-diffusion systems based on the Doi or volume reactivity model, in which pairs of particles react with a specified Poisson rate if they are closer than a chosen reactive distance. In our Doi model, we ensure that the microscopic reaction rules for various association and dissociation reactions are consistent with detailed balance (time reversibility) at thermodynamic equilibrium. The SRBD algorithm uses Strang splitting in time to separate reaction and diffusion and solves both the diffusion-only and reaction-only subproblems exactly, even at high packing densities. To efficiently process reactions without uncontrolled approximations, SRBD employs an event-driven algorithm that processes reactions in a time-ordered sequence over the duration of the time step. A grid of cells with size larger than all of the reactive distances is used to schedule and process the reactions, but unlike traditional grid-based methods such as reaction-diffusion master equation algorithms, the results of SRBD are statistically independent of the size of the grid used to accelerate the processing of reactions. We use the SRBD algorithm to compute the effective macroscopic reaction rate for both reaction-limited and diffusion-limited irreversible association in three dimensions and compare to existing theoretical predictions at low and moderate densities. We also study long-time tails in the time correlation functions for reversible association at thermodynamic equilibrium and compare to recent theoretical predictions. Finally, we compare different particle and continuum methods on a model exhibiting a Turing-like instability and pattern formation. Our studies reinforce the common finding that microscopic mechanisms and correlations matter for diffusion-limited systems, making continuum and even mesoscopic modeling of such systems difficult or impossible. We also find that for models in which particles diffuse off lattice, such as the Doi model, reactions lead to a spurious enhancement of the effective diffusion coefficients.

  14. Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations.

    PubMed

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko

    2015-06-01

    Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  15. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time consuming to solve fractional differential equations. The computational complexity of the two-dimensional time-fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and data layout with a virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We believe that parallel computing will become a basic method for computationally intensive fractional applications in the near future.
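
    The O(M_x M_y N^2) cost quoted above stems from the memory of the fractional derivative: every new time level must sum over all earlier levels, so the time dimension alone contributes N^2 work. A minimal sketch of that history sum, using the standard Grunwald-Letnikov weight recurrence and omitting all spatial terms for brevity (the paper's implicit scheme adds the M_x M_y spatial solve on top):

    # Why the time dimension contributes N^2: a discrete fractional derivative
    # carries the full history, so time level n costs O(n) work. The weights
    # are the Grunwald-Letnikov coefficients of (1 - z)^alpha, generated by
    # the standard recurrence g_k = g_{k-1} * (1 - (alpha + 1)/k).
    import numpy as np

    def gl_weights(alpha, N):
        g = np.empty(N)
        g[0] = 1.0
        for k in range(1, N):
            g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
        return g

    alpha, N = 0.8, 1000
    g = gl_weights(alpha, N)
    u = np.zeros(N)                         # one spatial point's time history
    u[0] = 1.0                              # illustrative initial value
    for n in range(1, N):                   # total work: 1 + 2 + ... + N ~ N^2 / 2
        history = g[1:n + 1] @ u[n - 1::-1]     # convolution over all earlier levels
        u[n] = -history                     # placeholder update; PDE terms omitted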

  16. An analysis of metropolitan land-use by machine processing of earth resources technology satellite data

    NASA Technical Reports Server (NTRS)

    Mausel, P. W.; Todd, W. J.; Baumgardner, M. F.

    1976-01-01

    A successful application of state-of-the-art remote sensing technology in classifying an urban area into its broad land use classes is reported. This research proves that numerous urban features are amenable to classification using ERTS multispectral data automatically processed by computer. Furthermore, such automatic data processing (ADP) techniques permit areal analysis on an unprecedented scale with a minimum expenditure of time. Also, classification results obtained using ADP procedures are consistent, comparable, and replicable. The results of classification are compared with the proposed U. S. G. S. land use classification system in order to determine the level of classification that is feasible to obtain through ERTS analysis of metropolitan areas.

  17. Valuation of exotic options in the framework of Levy processes

    NASA Astrophysics Data System (ADS)

    Milev, Mariyan; Georgieva, Svetla; Markovska, Veneta

    2013-12-01

    In this paper we explore a straightforward procedure to price derivatives using the Monte Carlo approach when the underlying process is a jump-diffusion. We compare the Black-Scholes model with one of its extensions, the Merton model. The latter is better at capturing market phenomena and is comparable to stochastic volatility models in terms of pricing accuracy. We present simulations of asset paths and pricing of barrier options for both geometric Brownian motion and exponential Levy processes, the latter being the concrete case of the Merton model. A desired level of accuracy is obtained with simple computer operations in MATLAB at efficient computational time.
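
    A minimal sketch of the kind of Monte Carlo pricing described here, written in Python rather than the paper's MATLAB: simulate a Merton jump-diffusion and value a down-and-out barrier call. All parameter values are illustrative, and the discrete (step-end) barrier monitoring is a simplification.

    # Monte Carlo pricing of a down-and-out barrier call under a Merton
    # jump-diffusion. The jump compensator lam*kappa keeps the discounted
    # price a martingale under the pricing measure.
    import numpy as np

    rng = np.random.default_rng(2)
    S0, K, B, r, sigma, T = 100.0, 100.0, 80.0, 0.05, 0.2, 1.0
    lam, mu_j, sig_j = 0.5, -0.1, 0.15          # jump intensity and log-jump moments
    n_paths, n_steps = 100_000, 252
    dt = T / n_steps
    kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1   # E[e^J] - 1

    S = np.full(n_paths, S0)
    knocked = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        Z = rng.standard_normal(n_paths)
        Nj = rng.poisson(lam * dt, n_paths)     # number of jumps in this step
        J = rng.normal(mu_j * Nj, sig_j * np.sqrt(np.maximum(Nj, 1)) * (Nj > 0))
        S *= np.exp((r - lam * kappa - 0.5 * sigma**2) * dt
                    + sigma * np.sqrt(dt) * Z + J)
        knocked |= S <= B                       # barrier monitored at step ends

    payoff = np.where(knocked, 0.0, np.maximum(S - K, 0.0))
    print("down-and-out call ~", np.exp(-r * T) * payoff.mean())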

  18. Workflow and maintenance characteristics of five automated laboratory instruments for the diagnosis of sexually transmitted infections.

    PubMed

    Ratnam, Sam; Jang, Dan; Gilchrist, Jodi; Smieja, Marek; Poirier, Andre; Hatchette, Todd; Flandin, Jean-Frederic; Chernesky, Max

    2014-07-01

    The choice of a suitable automated system for a diagnostic laboratory depends on various factors. Comparative workflow studies provide quantifiable and objective metrics to determine hands-on time during specimen handling and processing, reagent preparation, return visits and maintenance, and test turnaround time and throughput. Using objective time study techniques, workflow characteristics for processing 96 and 192 tests were determined on m2000 RealTime (Abbott Molecular), Viper XTR (Becton Dickinson), cobas 4800 (Roche Molecular Diagnostics), Tigris (Hologic Gen-Probe), and Panther (Hologic Gen-Probe) platforms using second-generation assays for Chlamydia trachomatis and Neisseria gonorrhoeae. A combination of operational and maintenance steps requiring manual labor showed that Panther had the shortest overall hands-on times and Viper XTR the longest. Both Panther and Tigris showed greater efficiency whether 96 or 192 tests were processed. Viper XTR and Panther had the shortest times to results and m2000 RealTime the longest. Sample preparation and loading time was the shortest for Panther and longest for cobas 4800. Mandatory return visits were required only for m2000 RealTime and cobas 4800 when 96 tests were processed, and both required substantially more hands-on time than the other systems due to increased numbers of return visits when 192 tests were processed. These results show that there are substantial differences in the amount of labor required to operate each system. Assay performance, instrumentation, testing capacity, workflow, maintenance, and reagent costs should be considered in choosing a system. Copyright © 2014, American Society for Microbiology. All Rights Reserved.

  19. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    PubMed

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms of data in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
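
    The two stages the study moved to the GPU can be sketched on the CPU as a reference point. Note the substitution: the sketch below uses a plain FFT periodogram where the paper uses an autoregressive spectral estimator, and all shapes and the random filter matrix are illustrative stand-ins.

    # CPU reference for the two offloaded stages: a spatial filter expressed
    # as a matrix-matrix multiply, then per-channel spectral power.
    import numpy as np

    rng = np.random.default_rng(3)
    n_ch, n_samp = 1000, 250                 # 1000 channels, 250 samples (~250 ms at 1 kHz)
    X = rng.standard_normal((n_ch, n_samp))  # raw window: channels x samples
    W = rng.standard_normal((n_ch, n_ch)) / n_ch   # spatial filter (e.g., CAR or Laplacian)

    Y = W @ X                                # stage 1: spatial filtering
    psd = np.abs(np.fft.rfft(Y, axis=1))**2 / n_samp   # stage 2: per-channel power
    band = psd[:, 5:13].mean(axis=1)         # e.g., mean power in a control band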

  20. Continuous cider fermentation with co-immobilized yeast and Leuconostoc oenos cells.

    PubMed

    Nedovic; Durieuxb; Van Nedervelde L; Rosseels; Vandegans; Plaisant; Simon

    2000-06-01

    A Ca-alginate matrix was used to co-immobilize Saccharomyces bayanus and Leuconostoc oenos in one integrated biocatalytic system in order to perform simultaneous alcoholic and malo-lactic fermentation of apple juice to produce cider in a continuous packed-bed bioreactor. The continuous process permitted much faster fermentation compared with the traditional batch process, and flavor formation was also better controlled. By adjusting the flow rate of the feeding substrate through the bioreactor, i.e., its residence time, it was possible to obtain either "soft" or "dry" cider. However, the profile of volatile compounds in the final product was modified compared with the batch process, especially for higher alcohols, isoamylacetate, and diacetyl. This modification is due to the different physiological states of the yeast in the two processes. Nevertheless, the taste of the cider was quite acceptable.

  1. Accuracy of time-domain and frequency-domain methods used to characterize catchment transit time distributions

    NASA Astrophysics Data System (ADS)

    Godsey, S. E.; Kirchner, J. W.

    2008-12-01

    The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
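
    The synthetic-data experiment described above is straightforward to reproduce. The sketch below assumes a gamma-shaped travel-time distribution, daily data, and an illustrative noise level; it convolves a random tracer input with the known distribution, corrupts the output, and forms both the time-domain output and the frequency-domain power ratio from which the distribution's parameters would be re-estimated.

    # Convolve a synthetic tracer input with a known travel-time distribution,
    # then expose the two estimation routes: time-domain fitting of c_out and
    # frequency-domain fitting of the output/input power ratio.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 2000                                   # days of synthetic record
    c_in = rng.normal(0.0, 1.0, n)             # tracer anomaly in precipitation

    tau = np.arange(n)
    h = tau**1.5 * np.exp(-tau / 20.0)         # gamma-like travel-time distribution
    h /= h.sum()

    c_out = np.convolve(c_in, h)[:n]           # time domain: output = input * h
    c_out += rng.normal(0.0, 0.05, n)          # measurement error

    # Frequency domain: |H(f)|^2 = output power / input power, so the damping
    # of high frequencies encodes the distribution's mean and shape.
    P_in = np.abs(np.fft.rfft(c_in))**2
    P_out = np.abs(np.fft.rfft(c_out))**2
    ratio = P_out / P_in                       # fit a spectral model to this ratio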

  2. The Timing and Effort of Lexical Access in Natural and Degraded Speech

    PubMed Central

    Wagner, Anita E.; Toffanin, Paolo; Başkent, Deniz

    2016-01-01

    Understanding speech is effortless in ideal situations, and although adverse conditions, such as caused by hearing impairment, often render it an effortful task, they do not necessarily suspend speech comprehension. A prime example of this is speech perception by cochlear implant users, whose hearing prostheses transmit speech as a significantly degraded signal. It is yet unknown how mechanisms of speech processing deal with such degraded signals, and whether they are affected by effortful processing of speech. This paper compares the automatic process of lexical competition between natural and degraded speech, and combines gaze fixations, which capture the course of lexical disambiguation, with pupillometry, which quantifies the mental effort involved in processing speech. Listeners’ ocular responses were recorded during disambiguation of lexical embeddings with matching and mismatching durational cues. Durational cues were selected due to their substantial role in listeners’ quick limitation of the number of lexical candidates for lexical access in natural speech. Results showed that lexical competition increased mental effort in processing natural stimuli, in particular in the presence of mismatching cues. Signal degradation reduced listeners’ ability to quickly integrate durational cues in lexical selection, and delayed and prolonged lexical competition. The effort of processing degraded speech was increased overall, and because it had its sources at the pre-lexical level, this effect can be attributed to listening to degraded speech rather than to lexical disambiguation. In sum, the course of lexical competition was largely comparable for natural and degraded speech, but showed crucial shifts in timing, and different sources of increased mental effort. We argue that well-timed progress of information from sensory to pre-lexical and lexical stages of processing, which is the result of perceptual adaptation during speech development, is the reason why in ideal situations speech is perceived as an undemanding task. Degradation of the signal or the receiver channel can quickly bring this well-adjusted timing out of balance and lead to an increase in mental effort. Incomplete and effortful processing at the early pre-lexical stages has consequences for lexical processing, as it adds uncertainty to the forming and revising of lexical hypotheses. PMID:27065901

  3. The processing of blend words in naming and sentence reading.

    PubMed

    Johnson, Rebecca L; Slate, Sarah Rose; Teevan, Allison R; Juhasz, Barbara J

    2018-04-01

    Research exploring the processing of morphologically complex words, such as compound words, has found that they are decomposed into their constituent parts during processing. Although much is known about the processing of compound words, very little is known about the processing of lexicalised blend words, which are created from parts of two words, often with phoneme overlap (e.g., brunch). In the current study, blends were matched with non-blend words on a variety of lexical characteristics, and blend processing was examined using two tasks: a naming task and an eye-tracking task that recorded eye movements during reading. Results showed that blend words were processed more slowly than non-blend control words in both tasks. Blend words led to longer reaction times in naming and longer processing times on several eye movement measures compared to non-blend words. This was especially true for blends that were long, rated low in word familiarity, but were easily recognisable as blends.

  4. [Study on baking processing technology of hui medicine Aconitum flavum].

    PubMed

    Fu, Xue-yan; Zhang, Bai-tong; Li, Ting-ting; Dong, Lin; Hao, Wen-jing; Yu, Liang

    2013-12-01

    To screen and optimize the processing technology of Aconitum flavum, acute-toxicity, anti-inflammatory, and analgesic experiments were used as indexes. Four processing methods, including decoction, steaming, baking, and processing with Chebulae Fructus decoction, were compared to identify the optimum processing method for Aconitum flavum, and the baking time was also optimized. The optimal baking technology was to bake 1-2 mm decoction pieces at 105 degrees C for 3 hours. Baking proved to be the optimal processing method of Aconitum flavum; the method is simple and stable.

  5. Performance of post-processing algorithms for rainfall intensity using measurements from tipping-bucket rain gauges

    NASA Astrophysics Data System (ADS)

    Stagnaro, Mattia; Colli, Matteo; Lanza, Luca Giovanni; Chan, Pak Wai

    2016-11-01

    Eight rainfall events recorded from May to September 2013 at Hong Kong International Airport (HKIA) have been selected to investigate the performance of post-processing algorithms used to calculate the rainfall intensity (RI) from tipping-bucket rain gauges (TBRGs). We assumed a drop-counter catching-type gauge as a working reference and compared rainfall intensity measurements with two calibrated TBRGs operated at a time resolution of 1 min. The two TBRGs differ in their internal mechanics, one being a traditional single-layer dual-bucket assembly, while the other has two layers of buckets. The drop-counter gauge operates at a time resolution of 10 s, while the time of tipping is recorded for the two TBRGs. The post-processing algorithms employed for the two TBRGs are based on the assumption that the tip volume is uniformly distributed over the inter-tip period. A series of data of an ideal TBRG is reconstructed using the virtual time of tipping derived from the drop-counter data. From the comparison between the ideal gauge and the measurements from the two real TBRGs, the performances of different post-processing and correction algorithms are statistically evaluated over the set of recorded rain events. The improvement obtained by adopting the inter-tip time algorithm in the calculation of the RI is confirmed. However, by comparing the performance of the real and ideal TBRGs, the beneficial effect of the inter-tip algorithm is shown to be relevant for the mid-low range (6-50 mm h-1) of rainfall intensity values (where the sampling errors prevail), while its role vanishes with increasing RI in the range where the mechanical errors prevail.
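
    The inter-tip-time assumption evaluated here can be stated in a few lines: rather than counting tips in fixed windows, each tip volume is spread uniformly over the interval since the previous tip. A minimal sketch with an illustrative 0.2 mm bucket and invented tip times:

    # Two post-processing choices for a tipping-bucket gauge: fixed 1-min
    # counting versus the inter-tip-time algorithm.
    import numpy as np

    tip_mm = 0.2                                    # rainfall depth per tip (mm)
    tips = np.array([12.0, 31.0, 44.0, 52.0, 58.5]) # tip times in seconds

    # Fixed-window estimate: tips counted over one minute, scaled to mm/h.
    ri_window = tip_mm * np.histogram(tips, bins=[0, 60])[0][0] * 60.0

    # Inter-tip estimate: instantaneous intensity between consecutive tips.
    dt = np.diff(tips)                              # seconds between tips
    ri_intertip = tip_mm / dt * 3600.0              # mm/h over each inter-tip span
    print(ri_window, ri_intertip.round(1))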

  6. The Impact of City-level Permitting Processes on Residential Photovoltaic Installation Prices and Development Times: An Empirical Analysis of Solar Systems in California Cities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiser, Ryan; Dong, Changgui

    Business process or “soft” costs account for well over 50% of the installed price of residential photovoltaic (PV) systems in the United States, so understanding these costs is crucial for identifying PV cost-reduction opportunities. Among these costs are those imposed by city-level permitting processes, which may add both expense and time to the PV development process. Building on previous research, this study evaluates the effect of city-level permitting processes on the installed price of residential PV systems and on the time required to develop and install those systems. The study uses a unique dataset from the U.S. Department of Energy’s Rooftop Solar Challenge Program, which includes city-level permitting process “scores,” plus data from the California Solar Initiative and the U.S. Census. Econometric methods are used to quantify the price and development-time effects of city-level permitting processes on more than 3,000 PV installations across 44 California cities in 2011. Results indicate that city-level permitting processes have a substantial and statistically significant effect on average installation prices and project development times. The results suggest that cities with the most favorable (i.e., highest-scoring) permitting practices can reduce average residential PV prices by $0.27–$0.77/W (4%–12% of median PV prices in California) compared with cities with the most onerous (i.e., lowest-scoring) permitting practices, depending on the regression model used. Though the empirical models for development times are less robust, results suggest that the most streamlined permitting practices may shorten development times by around 24 days on average (25% of the median development time). These findings illustrate the potential price and development-time benefits of streamlining local permitting procedures for PV systems.

  7. Automatic detection of key innovations, rate shifts, and diversity-dependence on phylogenetic trees.

    PubMed

    Rabosky, Daniel L

    2014-01-01

    A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes.

  8. Automatic Detection of Key Innovations, Rate Shifts, and Diversity-Dependence on Phylogenetic Trees

    PubMed Central

    Rabosky, Daniel L.

    2014-01-01

    A number of methods have been developed to infer differential rates of species diversification through time and among clades using time-calibrated phylogenetic trees. However, we lack a general framework that can delineate and quantify heterogeneous mixtures of dynamic processes within single phylogenies. I developed a method that can identify arbitrary numbers of time-varying diversification processes on phylogenies without specifying their locations in advance. The method uses reversible-jump Markov Chain Monte Carlo to move between model subspaces that vary in the number of distinct diversification regimes. The model assumes that changes in evolutionary regimes occur across the branches of phylogenetic trees under a compound Poisson process and explicitly accounts for rate variation through time and among lineages. Using simulated datasets, I demonstrate that the method can be used to quantify complex mixtures of time-dependent, diversity-dependent, and constant-rate diversification processes. I compared the performance of the method to the MEDUSA model of rate variation among lineages. As an empirical example, I analyzed the history of speciation and extinction during the radiation of modern whales. The method described here will greatly facilitate the exploration of macroevolutionary dynamics across large phylogenetic trees, which may have been shaped by heterogeneous mixtures of distinct evolutionary processes. PMID:24586858

  9. An Order Insertion Scheduling Model of Logistics Service Supply Chain Considering Capacity and Time Factors

    PubMed Central

    Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), disturbing normal time scheduling, especially in the environment of mass-customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of an LSSC that considers service capacity and time factors. The model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserting order volume first increases and then levels off. Second, supply chain performance is best when the volume of the inserting order equals the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process, the bigger the possible inserting order's volume will be. Moreover, compared with increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful. PMID:25276851

  10. Real-time acquisition and display of flow contrast using speckle variance optical coherence tomography in a graphics processing unit.

    PubMed

    Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V

    2014-02-01

    In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
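
    The speckle-variance computation at the core of this record is simply a per-pixel variance across repeated B-scan frames: static tissue decorrelates little between frames, flowing blood a lot. The sketch below shows that step on a synthetic stack; frame count and shapes are illustrative, and the paper performs this on the GPU.

    # Core of speckle-variance OCT contrast: variance across N repeated
    # B-scans acquired at the same position.
    import numpy as np

    def speckle_variance(frames):
        """frames: (N, depth, width) array of repeated B-scan intensities."""
        return frames.var(axis=0)          # high variance marks moving scatterers

    stack = np.random.default_rng(5).gamma(2.0, 1.0, size=(4, 512, 300))
    sv = speckle_variance(stack)           # en face projections would follow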

  11. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.

    PubMed

    Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter

    2016-01-01

    We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses.
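
    The binary coding the study uses can be illustrated with simulated data: each response is reduced to an accuracy bit and a speed bit (split at the median response time), and a contingency table reveals whether accuracy depends on speed, the dependence a two-process account predicts. The generating probabilities below are invented for illustration only.

    # Simulate a two-process world (fast guessing vs. slow solving) and show
    # the accuracy-by-speed contingency table that violates the one-process
    # conditional-independence model.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 10_000
    fast = rng.random(n) < 0.5                  # speed bit: 1 = faster than median
    acc = np.where(fast, rng.random(n) < 0.55,  # fast responses: near-guessing
                         rng.random(n) < 0.85)  # slow responses: deliberate solving

    table = np.array([[np.sum(fast & acc), np.sum(fast & ~acc)],
                      [np.sum(~fast & acc), np.sum(~fast & ~acc)]])
    print(table)
    print("P(correct | fast) =", table[0, 0] / table[0].sum())
    print("P(correct | slow) =", table[1, 0] / table[1].sum())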

  12. Comparative Mechanical Improvement of Stainless Steel 304 Through Three Methods

    NASA Astrophysics Data System (ADS)

    Mubarok, N.; Notonegoro, H. A.; Thosin, K. A. Z.

    2018-05-01

    Stainless Steel 304 (SS304) is a widely used stainless steel grade serving various purposes in industry. In this paper, we compare experimental processes to enhance the surface mechanical properties of SS304 through three different methods: cold rolling, annealed salt bath boronizing (ASB), and annealed salt bath boronizing-quench (ASB-Q). The phase change induced in SS304 by cold rolling led to this method being abandoned. Increasing the annealing time in the ASB method has a nonlinear relationship with the increase in hardness value. The hardness value achieved by the ASB-Q method is still lower than that achieved by the ASB method.

  13. A data colocation grid framework for big data medical image processing: backend design

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-03-01

    When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses found in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) a load-balancer wall-time improvement of 1.5-fold compared with a framework with the built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster deployed with a standard Sun Grid Engine (SGE), reducing wall-clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improving MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.

  14. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.

    PubMed

    Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A

    2018-03-01

    When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses found in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) a load-balancer wall-time improvement of 1.5-fold compared with a framework with the built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster deployed with a standard Sun Grid Engine (SGE), reducing wall-clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improving MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.

  15. A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design

    PubMed Central

    Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.

    2018-01-01

    When processing large medical imaging studies, adopting high-performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses found in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend interface application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented in the MapReduce paradigm. We introduce an HBase table scheme for fast data queries to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically assess an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) a load-balancer wall-time improvement of 1.5-fold compared with a framework with the built-in data allocation strategy, (2) a summary statistic model empirically verified on the grid framework and compared with the cluster deployed with a standard Sun Grid Engine (SGE), reducing wall-clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improving MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668

  16. Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.

    PubMed

    Hougaard, P; Lee, M L; Whitmore, G A

    1997-12-01

    Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
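
    The mixture construction described above can be checked numerically: drawing each count's Poisson rate from a gamma distribution yields negative binomial counts whose variance exceeds the mean by mean^2/shape. Parameter values below are arbitrary.

    # Gamma-mixed Poisson = negative binomial: overdispersion by simulation.
    import numpy as np

    rng = np.random.default_rng(7)
    shape, scale = 2.0, 3.0                     # gamma mixing distribution
    lam = rng.gamma(shape, scale, size=100_000) # subject-specific random rates
    counts = rng.poisson(lam)                   # gamma-mixed Poisson counts

    m, v = counts.mean(), counts.var()
    print(m, v)            # mean ~ 6, variance ~ 6 + 6**2/2 = 24 > mean
    # For a pure Poisson the two would match; the excess v - m ~ m**2/shape
    # is the overdispersion the negative binomial (or an inverse Gaussian
    # mixture, as the paper proposes) is meant to capture.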

  17. Transfer Credit Evaluations at New York State Colleges: Comparative Case Studies of Process Effectiveness

    ERIC Educational Resources Information Center

    Ott, Alexander Paul

    2012-01-01

    For the nearly one third of students who transfer from 1 college to another, transfer credit represents time and money. Unfortunately, many colleges provide degree-specific transfer credit evaluations only after students financially commit to attend the institution. The purpose of this study was to investigate, analyze, and compare early and late…

  18. Effects of selective attention on perceptual filling-in.

    PubMed

    De Weerd, P; Smith, E; Greenberg, P

    2006-03-01

    After a few seconds, a figure steadily presented in peripheral vision becomes perceptually filled in by its background, as if it had "disappeared". We report that directing attention to the color, shape, or location of a figure increased the probability of perceiving filling-in compared to unattended figures, without modifying the time required for filling-in. This effect could be augmented by boosting attention. Furthermore, the frequency distribution of filling-in response times for attended figures could be predicted by multiplying the frequencies of response times for unattended figures by a constant. We propose that, after failure of figure-ground segregation, the neural interpolation processes that produce perceptual filling-in are enhanced in attended figure regions. As filling-in processes are involved in surface perception, the present study demonstrates that even very early visual processes are subject to modulation by cognitive factors.

  19. Two-degree-of-freedom fractional order-PID controllers design for fractional order processes with dead-time.

    PubMed

    Li, Mingjie; Zhou, Ping; Zhao, Zhicheng; Zhang, Jinggang

    2016-03-01

    Recently, fractional order (FO) processes with dead-time have attracted more and more attention from researchers in the control field, but the FO-PID controller design techniques available for FO processes with dead-time lack direct systematic approaches. In this paper, a simple design and parameter-tuning approach for a two-degree-of-freedom (2-DOF) FO-PID controller based on internal model control (IMC) is proposed for FO processes with dead-time. Conventional one-degree-of-freedom control exhibits the shortcoming of coupling robustness and dynamic response performance; 2-DOF control overcomes this weakness by decoupling robustness and dynamic performance from each other. The adjustable parameter η2 of the FO-PID controller is directly related to the robustness of the closed-loop system, and an analytical expression is given relating the maximum sensitivity specification Ms and the parameter η2. In addition, according to the dynamic performance requirements of the practical system, the parameter η1 can also be selected easily. By approximating the dead-time term of the process model with a first-order Padé or Taylor series, expressions for the 2-DOF FO-PID controller parameters are derived for three classes of FO processes with dead-time. Moreover, compared with other methods, the proposed method is simple and easy to implement. Finally, simulation results are given to illustrate the effectiveness of this method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
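
    The first-order Padé approximation mentioned above replaces the dead-time term e^(-Ls) with the rational function (1 - Ls/2)/(1 + Ls/2), which makes the IMC derivation tractable. A quick numerical check of its frequency-domain accuracy, with an illustrative dead time:

    # First-order Pade approximation of a dead time: exact vs. approximate
    # frequency response. The approximation is good well below w ~ 1/L.
    import numpy as np

    L = 2.0                                   # dead time (illustrative)
    w = np.logspace(-2, 1, 200)               # frequency grid, rad/s
    exact = np.exp(-1j * w * L)
    pade = (1 - 1j * w * L / 2) / (1 + 1j * w * L / 2)
    err = np.abs(exact - pade)
    print(err[w < 0.5 / L].max())             # small error at low frequency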

  20. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

    In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model the empirical evidence. The empirical study is performed on the best-bid and best-ask data of 7 U.S. financial markets, for 30-stock time series. In particular, the statistical properties of trading waiting times have been analyzed, and the quality of fits is evaluated by suitable statistical tests, i.e., comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process and, based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered a generalization of a Poisson process, that the renewal process can approximate real data, and that implementation in the artificial stock market can reproduce the trading activity in a realistic way.
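
    The generalization the paper argues for, waiting times drawn from a mixture of exponentials rather than a single exponential, is easy to visualize: the mixture's survival function decays more slowly than the single-exponential (Poisson) fit with the same mean. The weights and rates below are illustrative, not the paper's estimates.

    # Two-component exponential mixture for inter-order waiting times,
    # compared against the single-exponential fit with the same mean.
    import numpy as np

    rng = np.random.default_rng(8)
    w1, lam1, lam2 = 0.7, 1.0, 0.05            # mixture weight and rates (1/s)
    comp = rng.random(200_000) < w1
    wt = np.where(comp, rng.exponential(1 / lam1, comp.size),
                        rng.exponential(1 / lam2, comp.size))

    t = np.linspace(0, 40, 200)
    surv_mix = (wt[:, None] > t).mean(axis=0)  # empirical survival function
    surv_exp = np.exp(-t / wt.mean())          # Poisson (single-exponential) fit
    # surv_mix has a heavy tail relative to surv_exp: the departure from a
    # plain Poisson process that the mixture model captures.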

  1. Fusion processing of itraconazole solid dispersions by kinetisol dispersing: a comparative study to hot melt extrusion.

    PubMed

    DiNunzio, James C; Brough, Chris; Miller, Dave A; Williams, Robert O; McGinity, James W

    2010-03-01

    KinetiSol Dispersing (KSD) is a novel high energy manufacturing process investigated here for the production of pharmaceutical solid dispersions. Solid dispersions of itraconazole (ITZ) and hypromellose were produced by KSD and compared to identical formulations produced by hot melt extrusion (HME). Materials were characterized for solid state properties by modulated differential scanning calorimetry and X-ray diffraction. Dissolution behavior was studied under supersaturated conditions. Oral bioavailability was determined using a Sprague-Dawley rat model. Results showed that KSD was able to produce amorphous solid dispersions in under 15 s while production by HME required over 300 s. Dispersions produced by KSD exhibited single phase solid state behavior indicated by a single glass transition temperature (T(g)) whereas compositions produced by HME exhibited two T(g)s. Increased dissolution rates for compositions manufactured by KSD were also observed compared to HME processed material. Near complete supersaturation was observed for solid dispersions produced by either manufacturing processes. Oral bioavailability from both processes showed enhanced AUC compared to crystalline ITZ. Based on the results presented from this study, KSD was shown to be a viable manufacturing process for the production of pharmaceutical solid dispersions, providing benefits over conventional techniques including: enhanced mixing for improved homogeneity and reduced processing times. 2009 Wiley-Liss, Inc. and the American Pharmacists Association

  2. Real-time solar magnetograph operation system software design and user's guide

    NASA Technical Reports Server (NTRS)

    Wang, C.

    1984-01-01

    The Real Time Solar Magnetograph (RTSM) Operation system software design on PDP11/23+ is presented along with the User's Guide. The RTSM operation software is for real time instrumentation control, data collection and data management. The data is used for vector analysis, plotting or graphics display. The processed data is then easily compared with solar data from other sources, such as the Solar Maximum Mission (SMM).

  3. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into considerations simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Then model parameters can be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability distribution function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
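
    A simulation sketch of the generalized model this record describes, assuming (for illustration only) a power-law transformed time scale Lambda(t) = t^q and i.i.d. Gaussian measurement error; the drift, diffusion, and error parameters are arbitrary.

    # Generalized Wiener degradation path with measurement error:
    #   Y(t) = mu * Lambda(t) + sigma_B * B(Lambda(t)) + eps.
    import numpy as np

    rng = np.random.default_rng(9)
    mu, sigma_B, sigma_eps, q = 0.5, 0.2, 0.05, 0.8
    t = np.linspace(0, 10, 101)
    lam = t**q                                        # transformed time scale
    inc = rng.normal(mu * np.diff(lam),               # drift over each increment
                     sigma_B * np.sqrt(np.diff(lam))) # Brownian part, var ~ dLambda
    path = np.concatenate([[0.0], np.cumsum(inc)])    # latent degradation path
    obs = path + rng.normal(0.0, sigma_eps, t.size)   # measured with error
    # The first hitting time of a failure threshold D by `path` gives the
    # failure time whose distribution the paper derives.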

  4. Flexible Manufacturing Systems: What's in It for the Manufacturer.

    ERIC Educational Resources Information Center

    Chowdhury, A. R.; Peckman, Donald C.

    1987-01-01

    The authors define the Flexible Manufacturing System and outline its history. They describe what the processing time includes and provide advantages and disadvantages of Flexible Manufacturing Systems compared to conventional manufacturing. (CH)

  5. Application of Ozone MBBR Process in Refinery Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Lin, Wang

    2018-01-01

    The Moving Bed Biofilm Reactor (MBBR) is a sewage treatment technology based on the fluidized bed; it can be regarded as an efficient new reactor type intermediate between the activated sludge method and the biofilm method. This paper studies the application of the ozone+MBBR process in refinery wastewater treatment, focusing on the design of a combined ozone+MBBR process and its use in treating the COD of the concentrated water discharged from a refinery wastewater treatment plant. The experimental results show that the average COD removal rate is 46.0%-67.3% in the treatment of reverse-osmosis concentrated water by the ozone+MBBR process, and the effluent can meet the relevant standard requirements. Compared with the traditional process, the ozone+MBBR process is more flexible. The investment for this process consists mainly of the ozone generator, blower, and similar equipment; these items are relatively inexpensive, and their cost can be offset against the larger investment required by traditional activated sludge processes. At the same time, the ozone+MBBR process has obvious advantages in effluent quality, stability, and other respects.

  6. Rapid evolution in insect pests: the importance of space and time in population genomics studies.

    PubMed

    Pélissié, Benjamin; Crossley, Michael S; Cohen, Zachary Paul; Schoville, Sean D

    2018-04-01

    Pest species in agroecosystems often exhibit patterns of rapid evolution to environmental and human-imposed selection pressures. Although the role of adaptive processes is well accepted, few insect pests have been studied in detail and most research has focused on selection at insecticide resistance candidate genes. Emerging genomic datasets provide opportunities to detect and quantify selection in insect pest populations, and address long-standing questions about mechanisms underlying rapid evolutionary change. We examine the strengths of recent studies that stratify population samples both in space (along environmental gradients and comparing ancestral vs. derived populations) and in time (using chronological sampling, museum specimens and comparative phylogenomics), resulting in critical insights on evolutionary processes, and providing new directions for studying pests in agroecosystems. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Comparison of ultrasonic energy expenditures and corneal endothelial cell density reductions during modulated and non-modulated phacoemulsification.

    PubMed

    Davison, James A

    2007-01-01

    To compare the Legacy 20000 Advantec continuous and Infiniti hyperpulse modes (Alcon Laboratories, Fort Worth, TX) with respect to average power, machine-measured phacoemulsification time, total stopwatch real time spent within the phacoemulsification process, balanced salt solution (BSS) volume, and corneal endothelial cell density (ECD) losses. A background study was done of consecutive patients operated on with the Legacy (n = 60) and Infiniti (n = 40) machines programmed with identical parameters and using the continuous mode only. A primary study of another set of consecutive cases was operated on using the Legacy (n = 87) and Infiniti (n = 94) with the same parameters, but using the hyperpulse mode during quadrant removal with the Infiniti. Measurements for each set included average power and phacoemulsification time, together with ECD, BSS volume, and time spent in the phacoemulsification process. In the background study, average power percent and average minutes of phacoemulsification time were similar. In the primary study, total minutes in the phacoemulsification process, BSS usage, and ECD losses were similar, while differences were found for average power percent (P < .001) and machine-measured phacoemulsification minutes (P < .001). The Legacy and Infiniti performed similarly in continuous mode. With the Infiniti hyperpulse mode, a total ultrasonic energy reduction of 66% was noted. The machines required the same amount of total stopwatch-measured time to accomplish phacoemulsification and produced the same 5% corneal endothelial cell loss. Therefore, clinically, these two machines behave in a comparable manner with respect to safety and effectiveness.

  8. Precise discussion of time-reversal asymmetries in B-meson decays

    DOE PAGES

    Morozumi, Takuya; Okane, Hideaki; Umeeda, Hiroyuki

    2015-02-26

    The BaBar collaboration announced that they observed time-reversal (T) asymmetry in the B meson system. In the experiment, the time dependencies of two distinct processes, B₋ → B̄⁰ and B̄⁰ → B₋ (where the subscript − denotes the CP eigenvalue), are compared with each other. In our study, we examine the event-number difference between these two processes. In contrast to the BaBar asymmetry, the event-number asymmetry includes the overall normalization difference of the rates, and its time dependence is more general, including terms absent from the one used by the BaBar collaboration. Both the BaBar asymmetry and ours are naively thought to be T-odd, since the two processes compared are related by flipping the time direction. We investigate the time-reversal transformation property of our asymmetry. In our notation, one can see that the asymmetry is not precisely a T-odd quantity once indirect CP and CPT violation in the K meson system are taken into account. The effect of ε_K is extracted and gives rise to an O(10^-3) contribution. The introduced parameters are invariant under rephasing of the quark fields, so the coefficients of our asymmetry are expressed as phase-convention-independent quantities. Some combinations of the asymmetry enable us to extract parameters for wrong-sign decays of the B_d meson, CPT violation, etc. Finally, we study why T-even terms are allowed to contribute to the asymmetry and find that several conditions are needed for the asymmetry to be a T-odd quantity.

  9. Influence of Time-Series Normalization, Number of Nodes, Connectivity and Graph Measure Selection on Seizure-Onset Zone Localization from Intracranial EEG.

    PubMed

    van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge

    2018-04-26

    We investigated the influence of processing steps in the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) on seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure: the Adaptive Directed Transfer Function (ADTF) or Adaptive Partial Directed Coherence (APDC), and (iv) the graph theory measure: outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy of SOZ localization. Afterwards, the SOZ was estimated from a 113-electrode iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that the ADTF is preferred over the APDC for localizing the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% in correctly localized SOZs, while adding more nodes to the connectivity analysis led to a moderate decrease of 10% when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled to outdegree or shortest path length. Our study showed that normalizing the time series is an important pre-processing step, while adding nodes to the analysis only marginally affected SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (>100) and that normalization of the time series before connectivity analysis is preferred.
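
    As a rough illustration of the final graph-measure step only (not the authors' pipeline), the sketch below derives outdegree and average shortest-path length from a thresholded directed connectivity matrix using networkx; the random matrix, the 0.8 threshold, and the channel count are all made up.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    n = 16                                 # iEEG channels (illustrative)
    conn = rng.random((n, n))              # stand-in for a time-averaged ADTF matrix
    np.fill_diagonal(conn, 0.0)

    adj = (conn > 0.8).astype(int)         # keep only strong directed links (arbitrary cut)
    G = nx.from_numpy_array(adj, create_using=nx.DiGraph)

    # Outdegree: a seizure-onset candidate drives many other channels.
    outdegree = dict(G.out_degree())

    # Average shortest path length from each node to the nodes it can reach.
    avg_path = {}
    for v in G:
        lengths = [l for u, l in nx.single_source_shortest_path_length(G, v).items()
                   if u != v]
        avg_path[v] = np.mean(lengths) if lengths else np.inf

    candidate = max(outdegree, key=outdegree.get)
    print(f"channel {candidate}: outdegree={outdegree[candidate]}, "
          f"avg path length={avg_path[candidate]:.2f}")
    ```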

  10. Neural coding of time-varying interaural time differences and time-varying amplitude in the inferior colliculus

    PubMed Central

    2017-01-01

    Binaural cues occurring in natural environments are frequently time varying, either from the motion of a sound source or through interactions between the cues produced by multiple sources. Yet, a broad understanding of how the auditory system processes dynamic binaural cues is still lacking. In the current study, we directly compared neural responses in the inferior colliculus (IC) of unanesthetized rabbits to broadband noise with time-varying interaural time differences (ITD) with responses to noise with sinusoidal amplitude modulation (SAM) over a wide range of modulation frequencies. On the basis of prior research, we hypothesized that the IC, one of the first stages to exhibit tuning of firing rate to modulation frequency, might use a common mechanism to encode time-varying information in general. Instead, we found weaker temporal coding for dynamic ITD compared with amplitude modulation and stronger effects of adaptation for amplitude modulation. The differences in temporal coding of dynamic ITD compared with SAM at the single-neuron level could be a neural correlate of “binaural sluggishness,” the inability to perceive fluctuations in time-varying binaural cues at high modulation frequencies, for which a physiological explanation has so far remained elusive. At ITD-variation frequencies of 64 Hz and above, where a temporal code was less effective, noise with a dynamic ITD could still be distinguished from noise with a constant ITD through differences in average firing rate in many neurons, suggesting a frequency-dependent tradeoff between rate and temporal coding of time-varying binaural information. NEW & NOTEWORTHY Humans use time-varying binaural cues to parse auditory scenes comprising multiple sound sources and reverberation. However, the neural mechanisms for doing so are poorly understood. Our results demonstrate a potential neural correlate for the reduced detectability of fluctuations in time-varying binaural information at high speeds, as occurs in reverberation. The results also suggest that the neural mechanisms for processing time-varying binaural and monaural cues are largely distinct. PMID:28381487

  11. Numerical Simulation of Evacuation Process in Malaysia By Using Distinct-Element-Method Based Multi-Agent Model

    NASA Astrophysics Data System (ADS)

    Abustan, M. S.; Rahman, N. A.; Gotoh, H.; Harada, E.; Talib, S. H. A.

    2016-07-01

    In Malaysia, little research on crowd evacuation simulation has been reported. Hence, the development of a numerical crowd evacuation model that takes into account people's behavioral patterns and psychological characteristics is crucial for Malaysia. Moreover, tsunami disasters, which demand a quick evacuation process, began to gain the attention of Malaysian citizens after the 2004 Indian Ocean Tsunami. In relation to the above circumstances, we have conducted simulations of the tsunami evacuation process at the Miami Beach of Penang Island using a Distinct Element Method (DEM)-based crowd behavior simulator. The main objectives are to investigate and reproduce the current conditions of the evacuation process at the said location under different hypothetical scenarios to study evacuation efficiency. Sim-1 represents the initial evacuation plan, while sim-2 improves on it by adding a new evacuation area. The simulation results show that sim-2 yields a shorter evacuation time than sim-1, reducing it by 53 seconds; the effect of the additional evacuation place is confirmed by the decrease in evacuation completion time. The numerical simulation may thus be promoted as an effective tool for studying crowd evacuation processes.

  12. Pre- and post-head processing for single- and double-scrambled sentences of a head-final language as measured by the eye tracking method.

    PubMed

    Tamaoka, Katsuo; Asano, Michiko; Miyaoka, Yayoi; Yokosawa, Kazuhiko

    2014-04-01

    Using the eye-tracking method, the present study characterized pre- and post-head processing for simple scrambled sentences of head-final languages. Three versions of simple Japanese active sentences with ditransitive verbs were used: (1) SO₁O₂V canonical, (2) SO₂O₁V single-scrambled, and (3) O₁O₂SV double-scrambled order. First-pass reading times indicated that the third noun phrase, just before the verb, required longer reading times in both single- and double-scrambled sentences than in canonical sentences. Re-reading times (the sum of all fixations minus the first-pass reading) showed that all noun phrases, including the crucial phrase before the verb, required longer re-reading in double-scrambled than in single-scrambled sentences; single-scrambled sentences did not differ from canonical ones. Therefore, a single filler-gap dependency can be resolved in pre-head anticipatory processing, whereas two filler-gap dependencies impose a much greater cognitive load than a single one; these two dependencies can be resolved in post-head processing using verb agreement information.

  13. An efficient ASIC implementation of 16-channel on-line recursive ICA processor for real-time EEG system.

    PubMed

    Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

    This paper proposes an efficient very-large-scale integration (VLSI) design: a 16-channel online recursive independent component analysis (ORICA) processor ASIC for real-time EEG systems, implemented in TSMC 40 nm CMOS technology. ORICA is well suited to separating artifacts in real-time EEG systems because of its efficiency and real-time processing capability. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], the proposed processor achieves greater effectiveness and reduced hardware complexity by utilizing a deeper pipeline architecture, a shared arithmetic processing unit, and shared registers. Sixteen channels of random signals containing 8 super-Gaussian and 8 sub-Gaussian components were used to analyze the dependence of the source components; the average correlation coefficient between the original source signals and the extracted ORICA signals is 0.95452. Finally, the proposed ORICA processor ASIC, implemented in TSMC 40 nm CMOS technology, consumes 15.72 mW at a 100 MHz operating frequency.
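
    For intuition about what "online recursive" means here, the sketch below runs a simplified sample-by-sample ICA update (the classic natural-gradient rule). The actual ORICA recursion is more involved — it also maintains a recursively updated whitening matrix and adaptive forgetting factors, which are omitted — so this is only an illustration of the streaming structure, with all names and parameters invented.

    ```python
    import numpy as np

    def online_ica_step(W, x, lr=0.005):
        """One simplified online ICA update (natural-gradient rule).

        Illustrates the per-sample structure of online recursive ICA;
        it is NOT the exact ORICA recursion.
        """
        y = W @ x                            # current source estimate
        g = np.tanh(y)                       # nonlinearity for super-Gaussian sources
        W += lr * (np.eye(W.shape[0]) - np.outer(g, y)) @ W
        return W

    n_ch, n_samples = 16, 10000
    rng = np.random.default_rng(2)
    S = rng.laplace(size=(n_ch, n_samples))  # super-Gaussian sources
    A = rng.normal(size=(n_ch, n_ch))        # unknown mixing matrix
    X = A @ S

    W = np.eye(n_ch)
    for t in range(n_samples):               # stream samples one at a time
        W = online_ica_step(W, X[:, t])
    ```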

  14. Observations & modeling of solar-wind/magnetospheric interactions

    NASA Astrophysics Data System (ADS)

    Hoilijoki, Sanni; Von Alfthan, Sebastian; Pfau-Kempf, Yann; Palmroth, Minna; Ganse, Urs

    2016-07-01

    The majority of the global magnetospheric dynamics is driven by magnetic reconnection, indicating the need to understand and predict reconnection processes and their global consequences. So far, global magnetospheric dynamics has been simulated using mainly magnetohydrodynamic (MHD) models, which are approximate but fast enough to be executed in real time or near-real time. Due to their fast computation times, MHD models are currently the only possible frameworks for space weather predictions. However, in MHD models reconnection is not treated kinetically. In this presentation we will compare the results from global kinetic (hybrid-Vlasov) and global MHD simulations. Both simulations are compared with in-situ measurements. We will show that the kinetic processes at the bow shock, in the magnetosheath and at the magnetopause affect global dynamics even during steady solar wind conditions. Foreshock processes cause an asymmetry in the magnetosheath plasma, indicating that the plasma entering the magnetosphere is not symmetrical on different sides of the magnetosphere. Behind the bow shock in the magnetosheath kinetic wave modes appear. Some of these waves propagate to the magnetopause and have an effect on the magnetopause reconnection. Therefore we find that kinetic phenomena have a significant role in the interaction between the solar wind and the magnetosphere. While kinetic models cannot be executed in real time currently, they could be used to extract heuristics to be added in the faster MHD models.

  15. Decreasing laboratory turnaround time and patient wait time by implementing process improvement methodologies in an outpatient oncology infusion unit.

    PubMed

    Gjolaj, Lauren N; Gari, Gloria A; Olier-Pino, Angela I; Garcia, Juan D; Fernandez, Gustavo L

    2014-11-01

    Prolonged patient wait times in the outpatient oncology infusion unit indicated a need to streamline phlebotomy processes by using existing resources to decrease laboratory turnaround time and improve patient wait time. Using the DMAIC (define, measure, analyze, improve, control) method, a project to streamline phlebotomy processes was completed within the outpatient oncology infusion unit of an academic Comprehensive Cancer Center, known as the Comprehensive Treatment Unit (CTU). Laboratory turnaround time for patients who needed same-day lab and CTU services, and wait time for all CTU patients, were tracked for 9 weeks. During the pilot, the wait time from arrival at the CTU to sitting in the treatment area decreased by 17% for all patients treated in the CTU. A total of 528 patients were seen at the CTU phlebotomy location, representing 16% of the total patients who received treatment in the CTU, with a mean turnaround time of 24 minutes compared with a baseline turnaround time of 51 minutes. Streamlining workflows and placing a phlebotomy station inside the CTU decreased laboratory turnaround times by 53% for patients requiring same-day lab and CTU services. The success of the pilot project prompted the team to make the station a permanent fixture. Copyright © 2014 by American Society of Clinical Oncology.

  16. High Resolution Near Real Time Image Processing and Support for MSSS Modernization

    NASA Astrophysics Data System (ADS)

    Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.

    2012-09-01

    This paper describes image-enhancement software applications engineering performed in support of Maui Space Surveillance System (MSSS) Modernization, together with R&D and transition activity performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities. This includes Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award, and our selection and planned use of an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data and an overview of high-resolution image enhancement and near-real-time processing status. We then describe recent image-enhancement applications development and support for MSSS Modernization and results to date, and end with a discussion of desired future development work and conclusions. Significant improvements to image-processing enhancement have been realized over the past several years, including a key application that has achieved more than a 10,000-times speedup compared to the original R&D code, and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while providing optimization for image processing of data from a new EO sensor at MSSS. Additional work has also been performed to develop low-latency, near-real-time processing of data collected by the ground-based sensor during overhead passes of space objects.

  17. Influence of wholesale lamb marketing options and merchandising styles on retail yield and fabrication time.

    PubMed

    Lorenzen, C L; Martin, A M; Griffin, D B; Dockerty, T R; Walter, J P; Johnson, H K; Savell, J W

    1997-01-01

    Lamb carcasses (n = 94) from five packing plants, selected to vary in weight class and fat thickness, were used to determine retail yield and labor requirements of wholesale lamb fabrication. Carcasses were allotted randomly according to weight class to be fabricated as whole carcasses (n = 20), three-piece boxes (n = 22), or subprimals (n = 52). Processing times (seconds) were recorded and wholesale and retail weights (kilograms) were obtained to calculate retail yield. Subprimals were fabricated into bone-in retail cuts or boneless or semi-boneless retail cuts. Retail yield for subprimal lamb legs decreased from 85.3 ± 0.6% for bone-in to 68.0 ± 0.7% for a completely boneless retail product. Correspondingly, processing times increased from 126.1 ± 5.4 s to 542.0 ± 19.2 s for bone-in and boneless legs, respectively. For all subprimals, retail yield percentage tended to decrease and total processing time to increase as cuts were fabricated to boneless or semi-boneless end points compared with a bone-in end point. Percentage retail yield did not differ (P > .05) among whole carcass, three-piece box, and subprimal marketing methods. Total processing time was shorter for subprimals (P < .05) than for the other two marketing methods.

  18. Genetic programming and serial processing for time series classification.

    PubMed

    Alfaro-Cid, Eva; Sharman, Ken; Esparcia-Alcázar, Anna I

    2014-01-01

    This work describes an approach devised by the authors for time series classification. In our approach, genetic programming is used in combination with serial processing of the data, where the last output is the result of the classification. The use of genetic programming for classification, although still a field where more research is needed, is not new. However, the application of genetic programming to classification tasks is normally done by treating the input data as a feature vector; to the best of our knowledge, there are no examples in the genetic programming literature of approaches where the time series data are processed serially and the last output is taken as the classification result. The serial processing approach presented here fills that gap in the literature. The approach was tested on three different problems, two of them real-world problems whose data were gathered for online or conference competitions. As there are published results for these two problems, this gives us the chance to compare the performance of our approach against top-performing methods. The serial processing of data in combination with genetic programming obtained competitive results in both competitions, showing its potential for solving time series classification problems. The main advantage of our serial processing approach is that it can easily handle very large datasets.
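
    A minimal sketch of the serial-processing idea, with the evolved individual stubbed out as a hand-written function (evolving it with a GP system is beyond a short example): the series is fed through the program one sample at a time, state is carried forward, and only the output produced on the last sample is thresholded into a class. All names here are illustrative.

    ```python
    import numpy as np

    def classify_serially(program, series, state_size=3):
        """Feed a time series through an evolved program one sample at a time.

        `program` maps (current sample, state vector) -> (output, new state);
        only the output produced on the LAST sample is used as the class score.
        """
        state = np.zeros(state_size)
        out = 0.0
        for x_t in series:
            out, state = program(x_t, state)
        return 1 if out > 0.0 else 0          # threshold the final output

    # Hand-written stand-in for an evolved individual: a leaky accumulator.
    def toy_program(x_t, state):
        state = 0.9 * state                   # decay the carried state
        state[0] += x_t                       # accumulate the new sample
        return state[0] - 0.5, state

    series = np.sin(np.linspace(0, 6 * np.pi, 100)) + 1.0
    print(classify_serially(toy_program, series))
    ```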

  19. Driver compliance to take-over requests with different auditory outputs in conditional automation.

    PubMed

    Forster, Yannick; Naujoks, Frederik; Neukum, Alexandra; Huestegge, Lynn

    2017-12-01

    Conditionally automated driving (CAD) systems are expected to improve traffic safety. Whenever the CAD system exceeds its limit of operation, designers of the system need to ensure a safe and sufficiently timely transition from automated to manual mode. An existing visual Human-Machine Interface (HMI) was supplemented by different auditory outputs. The present work compares the effects of different auditory outputs, in the form of (1) a generic warning tone and (2) additional semantic speech output, on driver behavior for the announcement of an upcoming take-over request (TOR). We expected the information carried by speech output to lead to faster reactions and better subjective evaluations by drivers compared with generic auditory output. To test this assumption, N = 17 drivers completed two simulator drives, once with a generic warning tone ('Generic') and once with additional speech output ('Speech+generic'), while working on a non-driving related task (NDRT; i.e., reading a magazine). Each drive incorporated one transition from automated to manual mode when yellow secondary lanes emerged. Different reaction time measures relevant for the take-over process were assessed. Furthermore, drivers evaluated the complete HMI regarding usefulness, ease of use, and perceived visual workload just after experiencing the take-over, and gave comparative ratings of usability and acceptance at the end of the experiment. Results revealed that reaction times reflecting information processing time (i.e., hands on the steering wheel, termination of the NDRT) were shorter for 'Speech+generic' than for 'Generic', while the reaction time reflecting allocation of attention (i.e., first glance ahead) did not show this difference. Subjective ratings were in favor of the system with additional speech output. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Online low-field NMR spectroscopy for process control of an industrial lithiation reaction-automated data analysis.

    PubMed

    Kern, Simon; Meyer, Klas; Guhl, Svetlana; Gräßer, Patrick; Paul, Andrea; King, Rudibert; Maiwald, Michael

    2018-05-01

    Monitoring specific chemical properties is the key to chemical process control. Today, mainly optical online methods are applied, which require time- and cost-intensive calibration. NMR spectroscopy, whose advantage is being a direct comparison method that requires no calibration, has high potential for enabling closed-loop process control while exhibiting short set-up times. Compact NMR instruments make NMR spectroscopy accessible in industrial and rough environments for process monitoring and advanced process control strategies. We present a fully automated data analysis approach that is based entirely on physically motivated spectral models as first-principles information (indirect hard modeling, IHM) and apply it to a pharmaceutical lithiation reaction in the framework of the European Union's Horizon 2020 project CONSENS. Online low-field NMR (LF NMR) data were analyzed by IHM with low calibration effort, compared with a multivariate PLS-R (partial least squares regression) approach, and both were validated using online high-field NMR (HF NMR) spectroscopy. Graphical abstract: NMR sensor module for monitoring the aromatic coupling of 1-fluoro-2-nitrobenzene (FNB) with aniline to 2-nitrodiphenylamine (NDPA) using lithium bis(trimethylsilyl)amide (Li-HMDS) in continuous operation. Online 43.5 MHz low-field NMR (LF) was compared to 500 MHz high-field NMR spectroscopy (HF) as the reference method.
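
    For reference, the calibration-hungry alternative mentioned in the abstract, PLS-R, takes only a few lines with scikit-learn. The synthetic single-peak "spectra" below merely illustrate the workflow and have nothing to do with the CONSENS data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n_spectra, n_points = 200, 512
    C = rng.uniform(0.0, 1.0, size=(n_spectra, 1))   # "concentrations" to predict
    # Each spectrum: one Gaussian peak scaled by concentration, plus noise.
    peak = np.exp(-0.5 * ((np.arange(n_points) - 256) / 8.0) ** 2)
    X = C @ peak[None, :] + rng.normal(0.0, 0.01, size=(n_spectra, n_points))

    X_tr, X_te, c_tr, c_te = train_test_split(X, C, random_state=0)
    pls = PLSRegression(n_components=3).fit(X_tr, c_tr)   # the calibration step
    print(f"R^2 on held-out spectra: {pls.score(X_te, c_te):.3f}")
    ```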

  1. Retrieval and Encoding Interference: Cross-Linguistic Evidence from Anaphor Processing

    PubMed Central

    Laurinavichyute, Anna; Jäger, Lena A.; Akinina, Yulia; Roß, Jennifer; Dragoy, Olga

    2017-01-01

    The main goal of this paper was to disentangle encoding and retrieval interference effects in anaphor processing and thus to evaluate the hypothesis that structurally inaccessible nouns (distractors) are not considered as potential anaphor antecedents during language processing (Nicol and Swinney, 1989). Three self-paced reading experiments were conducted: one in German, comparing gender-unmarked reflexives and gender-marked pronouns, and two in Russian, comparing gender-marked and -unmarked reflexives. In the German experiment, no interference effects were found. In the first experiment in Russian, an unexpected reading-time pattern emerged: in the condition where the distractor matched the gender of the reflexive's antecedent, reading of the gender-unmarked, but not the gender-marked, reflexives was slowed down. The same reading-time pattern was replicated in a second experiment in Russian in which the order of the reflexive and the main verb was inverted. We conclude that the results of the two experiments in Russian are inconsistent with the retrieval interference account but can be explained by encoding interference and by the additional semantic processing effort associated with gender-marked reflexives. In sum, we found no evidence that would allow us to reject the syntax-as-an-early-filter account (Nicol and Swinney, 1989). PMID:28649216

  2. Influence of glycemic control on some real-time biomarkers of free radical formation in type 2 diabetic patients: An EPR study.

    PubMed

    Gadjeva, Veselina Georgieva; Goycheva, Petia; Nikolova, Galina; Zheleva, Antoaneta

    2017-11-01

    The pathology of diabetes is associated with several mechanisms, one of which is oxidative stress (OS). The relationship between OS and diabetic complications has been extensively investigated, and OS has been suggested to be involved in the genesis of both macro- and microangiopathy. In contrast, the relationship between OS and insulin action is a neglected research area. The aim of this study was to elucidate the effect of glycemic control in type 2 diabetic patients by following the serum levels of some real-time oxidative stress biomarkers. The study group consisted of 53 type 2 diabetic patients (31 with poor glycemic control and 22 with good glycemic control) and 24 healthy control subjects. The oxidative stress biomarkers (ROS, Asc• and •NO) were measured using electron paramagnetic resonance (EPR) spectroscopy methods and compared with clinical parameters. The significantly higher levels of ROS products and •NO in type 2 diabetic patients of both groups compared with controls indicate that oxidation processes were under way at the time of measurement. Free radical overproduction persists after the normalization of glucose levels, and oxidative stress may be involved in the "metabolic memory" effect. This is confirmed by the positive correlation between ROS/•NO levels and average blood glucose, triglycerides, and total cholesterol. Furthermore, the low level of the ascorbate radical in both diabetes groups compared with controls confirmed an increase in oxidation processes. The higher levels of real-time biomarkers show that intensive insulin treatment does not lead to the expected decrease in oxidative processes involving ROS and •NO, probably due to "metabolic memory".

  3. Ethanol production from sweet sorghum bagasse through process optimization using response surface methodology.

    PubMed

    Lavudi, Saida; Oberoi, Harinder Singh; Mangamoori, Lakshmi Narasu

    2017-08-01

    In this study, a comparative evaluation of acid and alkali pretreatment of sweet sorghum bagasse (SSB) was carried out for sugar production after enzymatic hydrolysis. Results indicated that enzymatic hydrolysis of alkali-pretreated SSB produced more glucose, xylose, and arabinose than the other alkali concentrations and the acid-pretreated biomass. Response Surface Methodology (RSM) was therefore used to optimize the pretreatment parameters prior to enzymatic hydrolysis to maximize sugar production; the independent variables were alkali concentration (1.5-4%), pretreatment temperature (125-140 °C), and pretreatment time (10-30 min). Process optimization resulted in glucose and xylose concentrations of 57.24 and 10.14 g/L, respectively. Subsequently, second-stage optimization was conducted using RSM for the enzymatic hydrolysis parameters: substrate concentration (10-15%), incubation time (24-60 h), incubation temperature (40-60 °C), and Celluclast concentration (10-20 IU/g-dwt). A substrate concentration of 15% (w/v), a temperature of 60 °C, a Celluclast concentration of 20 IU/g-dwt, and an incubation time of 58 h led to a glucose concentration of 68.58 g/L. Finally, simultaneous saccharification and fermentation (SSF) as well as separate hydrolysis and fermentation (SHF) were evaluated using Pichia kudriavzevii HOP-1 for ethanol production. No significant difference in ethanol concentration was found between SSF and SHF; however, ethanol productivity was higher with SSF. This study has established a platform for conducting scale-up studies using the optimized process parameters.
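
    The RSM step itself is generic enough to sketch: fit a second-order polynomial response surface to designed-experiment data, then search the fitted surface for the predicted optimum. The factor ranges below follow the abstract, but the response values are synthetic stand-ins, not the paper's measurements.

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(4)
    # Factors: alkali %, pretreatment temperature (degC), pretreatment time (min).
    X = rng.uniform([1.5, 125, 10], [4.0, 140, 30], size=(30, 3))
    # Synthetic response standing in for measured glucose yield (g/L).
    y = (50 - 2.0 * (X[:, 0] - 3) ** 2 - 0.05 * (X[:, 1] - 135) ** 2
         - 0.02 * (X[:, 2] - 25) ** 2 + rng.normal(0, 0.5, 30))

    quad = PolynomialFeatures(degree=2, include_bias=False)
    model = LinearRegression().fit(quad.fit_transform(X), y)

    # Grid-search the fitted quadratic surface for the predicted optimum.
    grid = np.stack(np.meshgrid(np.linspace(1.5, 4, 20),
                                np.linspace(125, 140, 20),
                                np.linspace(10, 30, 20)), -1).reshape(-1, 3)
    best = grid[np.argmax(model.predict(quad.transform(grid)))]
    print("predicted optimum (alkali %, temp, time):", best)
    ```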

  4. Epidemic spreading in time-varying community networks.

    PubMed

    Ren, Guangming; Wang, Xingyuan

    2014-06-01

    The spreading processes of many infectious diseases have time scales comparable to that of the network's evolution. Here, we present a simple network model with time-varying community structure and investigate susceptible-infected-susceptible (SIS) epidemic spreading in this model. By both theoretical analysis and numerical simulations, we show that the efficiency of epidemic spreading in this model depends strongly on the mobility rate q of individuals among communities. We also find that there exists a mobility rate threshold qc: the epidemic survives when q > qc and dies out when q < qc. These results can help in understanding the impact of human travel on epidemic spreading in complex networks with community structure.
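
    A toy version of such a model is easy to simulate: individuals sit in communities, hop to a random community at mobility rate q, and undergo SIS dynamics with within-community contacts only. All rates below are illustrative, and sweeping q around the threshold reproduces the survive/die-out behavior only qualitatively.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_comm, size = 10, 50                  # 10 communities of 50 individuals
    N = n_comm * size
    community = np.repeat(np.arange(n_comm), size)
    infected = rng.random(N) < 0.05        # initial seeds

    beta, mu, q = 0.06, 0.1, 0.02          # infection, recovery, mobility rates
    for step in range(500):
        # Mobility: each individual moves to a random community with rate q.
        movers = rng.random(N) < q
        community[movers] = rng.integers(0, n_comm, movers.sum())
        # Infection: contacts happen only within the current community.
        for c in range(n_comm):
            members = np.where(community == c)[0]
            n_inf = infected[members].sum()
            p_inf = 1 - (1 - beta) ** n_inf        # chance of catching it this step
            sus = members[~infected[members]]
            infected[sus] = rng.random(sus.size) < p_inf
        # Recovery back to susceptible (SIS).
        infected[infected & (rng.random(N) < mu)] = False

    print("endemic prevalence:", infected.mean())
    ```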

  5. Assessment of laboratory test utilization for HIV/AIDS care in urban ART clinics of Lilongwe, Malawi.

    PubMed

    Palchaudhuri, Sonali; Tweya, Hannock; Hosseinipour, Mina

    2014-06-01

    The 2011 Malawi HIV guidelines promote CD4 monitoring for pre-ART assessment and suggest considering HIVRNA monitoring for ART response assessment, while some clinics used CD4 for both. We assessed clinical ordering practices against the guidelines and determined whether the samples were successfully and promptly processed. We conducted a retrospective review of all patients seen from August 2010 through July 2011 in two urban HIV-care clinics that utilized 6-monthly CD4 monitoring regardless of ART status. We calculated the percentage of patients for whom clinicians ordered CD4 or HIVRNA analysis, and for all samples sent, we determined rates of successful lab processing and mean time to returned results. Of 20,581 patients seen, 8,029 (39%) had at least one blood draw for CD4 count. Among pre-ART patients, 2,668/2,844 (93.8%) had CD4 counts performed for eligibility. Of all CD4 samples sent, 8,082/9,207 (89%) were successfully processed; mean time to processing was 1.6 days (s.d. 1.5), but mean time until results were available to the clinician was 9.3 days (s.d. 3.7). Regarding HIVRNA, 172 of 17,737 patients on ART had a blood draw, and only 118/213 (55%) samples were successfully processed. Mean processing time was 39.5 days (s.d. 21.7); mean time until results were available to the clinician was 43.1 days (s.d. 25.1). During the one-year period evaluated, there were multiple lapses of up to 2 months in processing HIVRNA samples. Clinicians underutilize CD4 and HIVRNA as monitoring tools in HIV care, and laboratory processing failures and turnaround times are unacceptably high for viral load analysis. Alternative strategies need to be considered in order to meet laboratory monitoring needs.

  6. Photophysical Behaviors of Single Fluorophores Localized on Zinc Oxide Nanostructures

    PubMed Central

    Fu, Yi; Zhang, Jian; Lakowicz, Joseph R.

    2012-01-01

    Single-molecule fluorescence spectroscopy has now been widely used to investigate complex dynamic processes which would normally be obscured in an ensemble-averaged measurement. In this report we studied photophysical behaviors of single fluorophores in proximity to zinc oxide nanostructures by single-molecule fluorescence spectroscopy and time-correlated single-photon counting (TCSPC). Single fluorophores on ZnO surfaces showed enhanced fluorescence brightness to various extents compared with those on glass; the single-molecule time trajectories also illustrated pronounced fluctuations of emission intensities, with time periods distributed from milliseconds to seconds. We attribute fluorescence fluctuations to the interfacial electron transfer (ET) events. The fluorescence fluctuation dynamics were found to be inhomogeneous from molecule to molecule and from time to time, showing significant static and dynamic disorders in the interfacial electron transfer reaction processes. PMID:23109903

  7. The Development of Global and Local Processing: A Comparison of Children to Adults

    ERIC Educational Resources Information Center

    Peterson, Eric; Peterson, Robin L.

    2014-01-01

    In light of the adult model of a hemispheric asymmetry of global and local processing, we compared children (M [subscript age] = 8.4 years) to adults in a global-local reaction time (RT) paradigm. Hierarchical designs (large shapes made of small shapes) were presented randomly to each visual field, and participants were instructed to identify…

  8. The Evaluation of Synchronous Distance Ear Training Compared to the Traditional Ear Training

    ERIC Educational Resources Information Center

    Karahan, Ahmet Suat

    2014-01-01

    It is clear that distance education, which has recently spread all over the world, is increasingly used in the music education process. Its outstanding features, such as bringing great flexibility to the teaching-learning process, removing the limits imposed by time and space, and easily reaching wide audiences, are the main factors…

  9. An Evaluation of Cognitive Processing Therapy for the Treatment of Posttraumatic Stress Disorder Related to Childhood Sexual Abuse

    ERIC Educational Resources Information Center

    Chard, Kathleen M.

    2005-01-01

    This study compared the effectiveness of cognitive processing therapy for sexual abuse survivors (CPT-SA) with that of the minimal attention (MA) given to a wait-listed control group. Seventy-one women were randomly assigned to 1 of the 2 groups. Participants were assessed at pretreatment and 3 times during posttreatment: immediately after…

  10. Possibilities of Implementation of Small Business Check-Up Methodology in Comparative Analysis of Secondary Schools and Universities in Slovakia

    ERIC Educational Resources Information Center

    Štofková, Katarína; Strícek, Ivan; Štofková, Jana

    2014-01-01

    The paper aims to evaluate the possibility of applying new methods and tools for more effective educational processes, with an emphasis on increasing their quality, particularly in educational processes at secondary schools and universities. There are some contributions from practice for the effective implementation of time management, such…

  11. Processing Speed in Children: Examination of the Structure in Middle Childhood and Its Impact on Reading

    ERIC Educational Resources Information Center

    Gerst, Elyssa H.

    2017-01-01

    The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…

  12. Real-Time Sentence Processing in Children with Specific Language Impairment: The Contribution of Lexicosemantic, Syntactic, and World-Knowledge Information

    ERIC Educational Resources Information Center

    Pizzioli, Fabrizio; Schelstraete, Marie-Anne

    2013-01-01

    The present study investigated how lexicosemantic information, syntactic information, and world knowledge are integrated in the course of oral sentence processing in children with specific language impairment (SLI) as compared to children with typical language development. A primed lexical-decision task was used where participants had to make a…

  13. [Contents of total flavonoids in Rhizoma Arisaematis].

    PubMed

    Du, S S; Lin, H Y; Zhou, Y X; Wei, L X

    2001-06-01

    To compare the contents of total flavonoids in Rhizoma Arisaematis collected at different times, from different regions, of different varieties, and with and without processing. The contents were determined by ultraviolet spectrophotometry and were found in the following sequences: 1. the end of July, the beginning of July, August, September; 2. Beijing, Shanxi, Sichuan, Anhui; 3. Arisaema erubenscens, A. heterophyllum, A. amurense; 4. unprocessed product, processed product.

  14. Comparing Problem-Based Learning Students to Students in a Lecture-Based Curriculum: Learning Strategies and the Relation with Self-Study Time

    ERIC Educational Resources Information Center

    Wijnen, Marit; Loyens, Sofie M. M.; Smeets, Guus; Kroeze, Maarten; van der Molen, Henk

    2017-01-01

    In educational theory, deep processing (i.e., connecting different study topics together) and self-regulation (i.e., taking control over one's own learning process) are considered effective learning strategies. These learning strategies can be influenced by the learning environment. Problem-based learning (PBL), a student-centered educational…

  15. Aqua-Aura QuickDAM (QDAM) 2.0 Ops Concept

    NASA Technical Reports Server (NTRS)

    Nidhiry, John

    2015-01-01

    The presentation describes the Quick Debris Avoidance Maneuver (QDAM) 2.0 process used by the Aqua and Aura flight teams to (a) reduce the workload and dependency on staff and systems, (b) reduce turnaround time and provide emergency last-minute capabilities, and (c) increase burn parameter flexibility. The presentation also compares the QDAM 2.0 process to previous approaches.

  16. The neural correlates of implicit self-relevant processing in low self-esteem: an ERP study.

    PubMed

    Yang, Juan; Guan, Lili; Dedovic, Katarina; Qi, Mingming; Zhang, Qinglin

    2012-08-30

    Previous neuroimaging studies have shown that implicit and explicit processing of self-relevant (schematic) material elicit activity in many of the same brain regions. Electrophysiological studies on the neural processing of explicit self-relevant cues have generally supported the view that the P300 is an index of attention to self-relevant stimuli; however, no study to date has investigated the temporal course of implicit self-relevant processing. The current study investigates the time course of implicit self-processing by comparing the processing of self-relevant with non-self-relevant words while subjects judge the color of the words in an implicit attention task. Sixteen participants with low self-esteem were examined using event-related potentials (ERP). We hypothesized that this implicit attention task would involve the P2 component rather than the P300: the P2 has been associated with perceptual analysis and attentional allocation and may be more likely to occur under unconscious conditions such as this task. Indeed, the latency of the P2 component, which indexes the time required for perceptual analysis, was more prolonged for self-relevant than for non-self-relevant words. Our results suggest that judging the color of the word interfered with automatic processing of self-relevant information and resulted in less efficient processing of self-relevant words. Together with previous ERP studies examining the processing of explicit self-relevant cues, these findings suggest that explicit and implicit processing of self-relevant information do not elicit the same ERP components. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Grouping principles in direct competition.

    PubMed

    Schmidt, Filipp; Schmidt, Thomas

    2013-08-09

    We (1) introduce a primed flanker task as an objective method to measure perceptual grouping, and (2) use it to directly compare the efficiency of different grouping cues in rapid visuomotor processing. In two experiments, centrally presented primes were succeeded by flanking targets with varying stimulus-onset asynchronies (SOAs). Primes and targets were grouped by the same or by different grouping cues (Exp. 1: brightness/shape; Exp. 2: brightness/size) and were consistent or inconsistent with respect to the required response. Subjective grouping strength was varied to identify its influence on overall response times, error rates, and priming effects, which served as a measure of visual feedforward processing. Our results show that stronger grouping in the targets increased overall response times, while stronger grouping in the primes enhanced priming effects in motor responses. We also obtained differences between rapid visuomotor processing and the subjective impression with cues of brightness and shape, but not with cues of brightness and size. Our findings establish the primed flanker task as an objective method to study the speeded visuomotor processing of grouping cues, making it useful for the comparative study of feedforward-transmitted base groupings (Roelfsema & Houtkamp, 2011). Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. High-Speed Particle-in-Cell Simulation Parallelized with Graphic Processing Units for Low Temperature Plasmas for Material Processing

    NASA Astrophysics Data System (ADS)

    Hur, Min Young; Verboncoeur, John; Lee, Hae June

    2014-10-01

    Compared with fluid simulations, particle-in-cell (PIC) simulations provide high fidelity for plasma devices that require transient kinetic modeling. PIC makes fewer approximations to the plasma kinetics but requires many particles and grid points to obtain meaningful results, so the simulation time grows in proportion to the number of particles, and PIC simulation therefore needs high-performance computing. In this research, a graphic processing unit (GPU) is adopted for high-performance PIC simulation of low-temperature discharge plasmas. GPUs have many-core processors and high memory bandwidth compared with a central processing unit (CPU); NVIDIA GeForce GPUs, whose hundreds of cores offer cost-effective performance, were used for the test. The PIC code is divided into two modules, a field solver and a particle mover, and the particle mover is further divided into four routines, named move, boundary, Monte Carlo collision (MCC), and deposit. Overall, the GPU code solves particle motions as well as the electrostatic potential in two-dimensional geometry almost 30 times faster than a single-CPU code. This work was supported by the Korea Institute of Science Technology Information.
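
    To make the module split concrete, here is a 1D electrostatic sketch of the "move" (field gather plus leapfrog push) and "deposit" (charge scatter) routines in numpy; the field solver and MCC routines are omitted, and all numbers are illustrative. Each particle is handled independently, which is exactly why these loops map well to one-thread-per-particle GPU kernels.

    ```python
    import numpy as np

    def pic_step(x, v, E_grid, dx, dt, qm=-1.0):
        """One leapfrog step of a 1D periodic electrostatic PIC mover (sketch)."""
        ng = E_grid.size
        # Gather: cloud-in-cell (linear) interpolation of E to particle positions.
        cell = np.floor(x / dx).astype(int) % ng
        w = x / dx - np.floor(x / dx)
        E_p = (1 - w) * E_grid[cell] + w * E_grid[(cell + 1) % ng]
        # Move: leapfrog push with periodic wrap-around.
        v += qm * E_p * dt
        x = (x + v * dt) % (ng * dx)
        # Deposit: scatter particle charge back to the grid with the same weights.
        rho = np.zeros(ng)
        np.add.at(rho, cell, 1 - w)
        np.add.at(rho, (cell + 1) % ng, w)
        return x, v, rho

    rng = np.random.default_rng(6)
    x = rng.uniform(0, 64.0, 10000)          # particle positions
    v = rng.normal(0, 1.0, 10000)            # particle velocities
    x, v, rho = pic_step(x, v, np.zeros(64), dx=1.0, dt=0.1)
    ```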

  19. Ultrasound aided smooth dispensing for high viscoelastic epoxy in microelectronic packaging.

    PubMed

    Chen, Yun; Li, Han-Xiong; Shan, Xiuyang; Gao, Jian; Chen, Xin; Wang, Fuliang

    2016-01-01

    Epoxy dispensing is one of the most critical processes in microelectronic packaging. However, due to its high viscoelasticity, epoxy is extremely difficult to dispense, and an epoxy of lower viscoelasticity is desired to improve the process. In this paper, a novel method is proposed to lower the viscoelasticity of epoxy by using ultrasound. The viscoelasticity and molecular structures of the epoxies were compared and analyzed before and after treatment. Different factors of the ultrasonic process, including power, processing time, and ultrasonic energy, were studied. It was found that elasticity is more sensitive to ultrasonic processing, while viscosity is little affected; furthermore, large power and long processing time can reduce the viscoelasticity to ideal values. Owing to the reduced loss modulus and storage modulus after ultrasonic processing, smooth dispensing is demonstrated for the processed epoxy. Subsequent color temperature experiments show that ultrasonic processing does not affect the LED's lighting. It is clear that ultrasonic processing has good potential to aid smooth dispensing of high-viscoelasticity epoxy in the electronics industry. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Effects of pelleting conditioner retention time on nursery pig growth performance.

    PubMed

    Lewis, L L; Stark, C R; Fahrenholz, A C; Goncalves, M A D; DeRouchey, J M; Jones, C K

    2015-03-01

    A total of 180 nursery pigs (PIC 327 × 1050; initially 12.6 kg) were used in an 18-d study to determine the effects of pellet mill conditioning parameters and feed form on pig performance. All diets were similar, and different feed processing parameters were used to create the experimental treatments. Factors considered were conditioning time (15, 30, or 60 s) and feed form (mash or pelleted). To remove the confounding factor of feed form, pelleted samples were reground to a particle size similar to the mash diet. Treatments included: 1) mash diet without thermal processing (negative control), 2) pelleted diet conditioned for 30 s (positive control), 3) pelleted diet conditioned for 15 s and reground, 4) pelleted diet conditioned for 30 s and reground, and 5) pelleted diet conditioned for 60 s and reground. Pigs were weaned and fed a common acclimation diet for 21 d before the start of the experiment. Growth and feed disappearance were then measured for 18 d. All diets had similar levels of percentage total starch, but thermally processed diets had a 1.67- to 1.87-fold increase in percentage gelatinized starch compared to the mash diet. Average daily gain and G:F did not differ between treatments overall, but pigs fed the positive control pelleted diet had decreased ADFI (P < 0.05) compared to pigs fed all other diets. Preplanned contrasts revealed that pigs fed mash diets tended to have greater ADG (P < 0.10) compared to those fed pelleted and reground diets. This suggests that processing may have had a negative influence on feed utilization, which is further supported by the finding that pigs fed mash diets tended to have greater ADG (P < 0.10) compared to those fed diets that were thermally processed, regardless of regrinding. Considering these results, it was not surprising that pigs fed mash diets had greater ADG and ADFI (P < 0.05) than those fed pelleted diets. When directly comparing diets conditioned at 60 rpm, fed either as whole pellets or reground to mash consistency, pigs fed pelleted diets had improved G:F (P < 0.05) due to lower ADFI (P < 0.05) but similar ADG. The expected improvement in G:F from pelleting (6.8%) was observed but was lost when diets were reground to near the original mash particle size. This may indicate that the diet form resulting from the actual pelleting process affects G:F more than conditioner retention time.

  1. Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method

    NASA Astrophysics Data System (ADS)

    Kuai, Ken Z.; Tsai, Christina W.

    2012-02-01

    Sediment transport processes vary over a variety of time scales, from seconds, hours, and days to months and years. Multiple time scales exist in the coupled system of flow, sediment transport, and bed elevation change. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired level of detail. Recognizing the importance of these varying time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform (HHT) method to the field of sediment transport for time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of Intrinsic Mode Functions (IMFs), and uses Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of data at different time scales and improves the analysis of data series. The HSA can display the succession of time-varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earliest attempts to introduce this state-of-the-art technique for multiple time scale analysis of sediment transport processes. Three practical applications of the HHT method to data analysis of both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature of sediment transport. The HHT analysis also demonstrates that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/w. In the final part, HHT results are compared with a time scale formula available in the literature.
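
    In practice the EMD + Hilbert pipeline is only a few lines. The sketch below uses the PyEMD package (distributed on PyPI as EMD-signal; the package and call names are assumptions on my part) and scipy's analytic signal to get per-IMF instantaneous frequencies for a synthetic two-scale series standing in for a sediment-flux record.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EMD   # pip install EMD-signal (package name assumed)

    # Synthetic "sediment flux": a fast and a slow oscillation plus noise.
    t = np.linspace(0, 100, 2000)
    signal = (np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t)
              + 0.1 * np.random.default_rng(7).normal(size=t.size))

    imfs = EMD().emd(signal, t)              # intrinsic mode functions

    # Hilbert spectral analysis: instantaneous frequency of each IMF.
    for k, imf in enumerate(imfs):
        phase = np.unwrap(np.angle(hilbert(imf)))
        inst_freq = np.gradient(phase, t) / (2 * np.pi)
        print(f"IMF {k}: mean instantaneous frequency {inst_freq.mean():.3f} Hz")
    ```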

  2. Clinical evaluation of reducing acquisition time on single-photon emission computed tomography image quality using proprietary resolution recovery software.

    PubMed

    Aldridge, Matthew D; Waddington, Wendy W; Dickson, John C; Prakash, Vineet; Ell, Peter J; Bomanji, Jamshed B

    2013-11-01

    A three-dimensional model-based resolution recovery (RR) reconstruction algorithm that compensates for the collimator-detector response, resulting in improved reconstructed spatial resolution and signal-to-noise ratio of single-photon emission computed tomography (SPECT) images, was tested. The software is said to retain image quality even with reduced acquisition time. Clinically, any improvement in patient throughput without loss of quality is to be welcomed; furthermore, future restrictions in radiotracer supplies may add value to this type of data analysis. The aims of this study were to assess the improvement in image quality obtained with the software and to evaluate the potential of performing reduced-time acquisitions for bone and parathyroid SPECT applications. Data acquisition was performed using the local standard SPECT/CT protocols for 99mTc-hydroxymethylene diphosphonate bone and 99mTc-methoxyisobutylisonitrile parathyroid SPECT imaging. The principal modification was the acquisition of an eight-frame gated data set using an ECG simulator with a fixed signal as the trigger. This partitioned the data so that the effect of reduced acquisition times could be assessed without imposing additional scanning time on the patient. The set of summed data sets was then independently reconstructed using the RR software to permit a blinded assessment, by three experienced observers, of the effect of acquired counts on reconstructed image quality. Data sets reconstructed with the RR software were compared with the local standard processing protocols, filtered back-projection and ordered-subset expectation-maximization. Thirty SPECT studies were assessed (20 bone and 10 parathyroid). The images reconstructed with the RR algorithm showed improved image quality for both full-time and half-time acquisitions over the local processing protocols (P < 0.05). The RR algorithm improved image quality compared with the local processing protocols and has been introduced into routine clinical use; SPECT acquisitions are now acquired in half the time previously required. The method of binning the data can be applied to any other camera system to evaluate the reduction in acquisition time for similar processes. The potential for dose reduction is also inherent in this approach.
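
    The partitioning trick is worth spelling out: because the eight "gated" frames are triggered by a fixed simulator signal, each holds roughly 1/8 of the total counts, so summing the first k frames emulates a k/8-time acquisition with no extra patient scanning. A numpy sketch with synthetic Poisson projections (all shapes and count levels invented):

    ```python
    import numpy as np

    def emulate_reduced_acquisitions(gated_frames):
        """Sum leading subsets of equal-duration gated frames.

        gated_frames: array of shape (8, ...) acquired with a fixed-signal
        "ECG" trigger. Summing the first k of 8 frames emulates a k/8-time
        acquisition, since each frame holds ~1/8 of the full-scan counts.
        """
        frames = np.asarray(gated_frames)
        return {f"{k}/8 time": frames[:k].sum(axis=0)
                for k in (2, 4, 6, 8)}       # quarter-, half-, 3/4- and full-time

    # Synthetic Poisson count data: 8 gates, 60 projections of 128x128 pixels.
    rng = np.random.default_rng(8)
    full = rng.poisson(4.0, size=(8, 60, 128, 128))
    subsets = emulate_reduced_acquisitions(full)
    print({k: int(v.sum()) for k, v in subsets.items()})
    ```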

  3. When teams shift among processes: insights from simulation and optimization.

    PubMed

    Kennedy, Deanna M; McComb, Sara A

    2014-09-01

    This article introduces process shifts to study the temporal interplay among transition and action processes espoused in the recurring phase model proposed by Marks, Mathieu, and Zacarro (2001). Process shifts are those points in time when teams complete a focal process and change to another process. By using team communication patterns to measure process shifts, this research explores (a) when teams shift among different transition processes and initiate action processes and (b) the potential of different interventions, such as communication directives, to manipulate process shift timing and order and, ultimately, team performance. Virtual experiments are employed to compare data from observed laboratory teams not receiving interventions, simulated teams receiving interventions, and optimal simulated teams generated using genetic algorithm procedures. Our results offer insights about the potential for different interventions to affect team performance. Moreover, certain interventions may promote discussions about key issues (e.g., tactical strategies) and facilitate shifting among transition processes in a manner that emulates optimal simulated teams' communication patterns. Thus, we contribute to theory regarding team processes in 2 important ways. First, we present process shifts as a way to explore the timing of when teams shift from transition to action processes. Second, we use virtual experimentation to identify those interventions with the greatest potential to affect performance by changing when teams shift among processes. Additionally, we employ computational methods including neural networks, simulation, and optimization, thereby demonstrating their applicability in conducting team research. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  4. Optimizing Urine Processing Protocols for Protein and Metabolite Detection.

    PubMed

    Siddiqui, Nazema Y; DuBois, Laura G; St John-Williams, Lisa; Will, Thompson J; Grenier, Carole; Burke, Emily; Fraser, Matthew O; Amundsen, Cindy L; Murphy, Susan K

    In urine, factors such as the timing of voids and the duration at room temperature (RT) may affect the quality of recovered protein and metabolite data. Additives may aid detection but can add complexity to sample collection or analysis. We aimed to identify the optimal processing protocol for clinically obtained urine samples that allows the highest protein and metabolite yields with minimal degradation. Healthy women provided multiple urine samples during the same day: a first morning (1st AM) void and another "random void". Random voids were aliquoted with (1) no additive, (2) boric acid (BA), (3) protease inhibitor (PI), or (4) both BA + PI. Some of these aliquots were immediately stored at 4°C and some were left at RT for 4 hours. Proteins and individual metabolites were quantified, normalized to creatinine concentrations, and compared across processing conditions. Sample pools corresponding to each processing condition were analyzed using mass spectrometry to assess protein degradation. Ten Caucasian women between 35 and 65 years of age provided paired 1st morning and random voided urine samples. Normalized protein concentrations were slightly higher in 1st AM voids than in random "spot" voids. The addition of BA did not significantly change protein yields, while PI significantly improved normalized protein concentrations, regardless of whether samples were immediately cooled or left at RT for 4 hours. In pooled samples, there were minimal differences in protein degradation under the various conditions tested. In metabolite analyses, there were significant differences in individual amino acids based on the timing of the void. For comparative translational research using urine, information about void timing should be collected and standardized. For urine samples processed on the same day, BA does not appear to be necessary, while the addition of PI enhances protein yields, regardless of 4°C or RT storage.

  5. A Time-Variant Reliability Model for Copper Bending Pipe under Seawater-Active Corrosion Based on the Stochastic Degradation Process

    PubMed Central

    Li, Mengmeng; Feng, Qiang; Yang, Dezhen

    2018-01-01

    In the degradation process, the randomness and multiplicity of variables are difficult to describe with mathematical models. However, they are common in engineering and cannot be neglected, so it is necessary to study this issue in depth. In this paper, the copper bending pipe in seawater piping systems is taken as the analysis object, and the time-variant reliability is calculated by solving the interference between limit strength and maximum stress. We performed degradation and tensile experiments on the copper material and obtained the limit strength at each time point. In addition, degradation experiments on the copper bending pipe yielded the wall thickness at each time point, from which the maximum stress response was calculated by simulation. Further, with the help of a Monte Carlo method we propose, the time-variant reliability of the copper bending pipe was calculated based on the stochastic degradation process and interference theory. Compared with traditional methods and verified against maintenance records, the results show that the time-variant reliability model based on the stochastic degradation process proposed in this paper has better applicability in reliability analysis, and it can predict the replacement cycle of copper bending pipe under seawater-active corrosion more conveniently and accurately. PMID:29584695
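
    The reliability computation described here is a stress-strength interference problem: R(t) is the probability that the degraded limit strength still exceeds the maximum stress at time t. A minimal Monte Carlo sketch of that interference calculation follows; the degradation laws and distribution parameters are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def reliability(t_years, n_samples=100_000):
    """Estimate R(t) = P(limit strength > maximum stress) by Monte Carlo."""
    # Assumed strength degradation: mean strength decays linearly with time.
    strength = rng.normal(300.0 - 4.0 * t_years, 15.0, n_samples)   # MPa
    # Assumed stress growth: corrosion-driven wall thinning raises stress.
    stress = rng.normal(180.0 + 6.0 * t_years, 20.0, n_samples)     # MPa
    return np.mean(strength > stress)

for t in range(0, 11, 2):
    print(f"t = {t:2d} yr   R(t) ≈ {reliability(t):.4f}")
```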

  6. Nd:YVO4 laser polishing on WC-Co HVOF coating

    NASA Astrophysics Data System (ADS)

    Giorleo, L.; Ceretti, E.; Montesano, L.; La Vecchia, G. M.

    2017-10-01

    WC/Co coatings are widely applied to different types of components due to their outstanding performance properties, including high hardness and wear resistance. In industrial applications, the High Velocity Oxy-Fuel (HVOF) technique is extensively used to deposit hard metal coatings. The main advantage of HVOF compared to other thermal spray techniques is its ability to accelerate the melted powder particles of the feedstock material to a relatively high velocity, leading to good adhesion and a low porosity level. However, despite these benefits, the surface finish of WC-Co HVOF coatings is poor (Ra higher than 5 µm), so a mechanical polishing process is often needed. The main problem is that the high hardness of the coating makes the polishing process expensive in terms of time and tool wear; moreover, polishing becomes difficult and not always possible in cases of limited part accessibility, micro dimensions, or undercuts. An alternative technique for improving surface roughness is laser polishing. The polishing principle is based on focused laser radiation that melts a microscopic layer of surface material. Compared to conventional polishing processes (such as grinding), it avoids tool wear, causes less pollution (no abrasives or liquids), produces no debris, requires less machining time and, coupled with a galvo system, is more suitable for complex 3D workpieces. In this paper, the laser polishing process performed with an Nd:YVO4 laser was investigated: the effects of different process parameters, such as initial coating morphology, laser scan speed, and number of loop cycles, were tested. Results were compared by a statistical approach in terms of average roughness, along with a morphological analysis carried out by Scanning Electron Microscope (SEM) investigation coupled with EDS spectra.
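
    The average roughness Ra used to compare treatments is the arithmetic mean of the absolute deviation of the surface profile from its mean line. A small sketch of that computation on synthetic profiles (the profile values are illustrative only):

```python
import numpy as np

def average_roughness(profile_um):
    """Ra: mean absolute deviation of the height profile from its mean line (µm)."""
    z = np.asarray(profile_um, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Synthetic as-sprayed vs. laser-polished profiles (illustrative values only).
rng = np.random.default_rng(42)
as_sprayed = 7.0 * rng.standard_normal(1000)
polished = 0.8 * rng.standard_normal(1000)
print(f"Ra as-sprayed: {average_roughness(as_sprayed):.2f} µm")
print(f"Ra polished:   {average_roughness(polished):.2f} µm")
```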

  7. An integrated process analytical technology (PAT) approach to monitoring the effect of supercooling on lyophilization product and process parameters of model monoclonal antibody formulations.

    PubMed

    Awotwe Otoo, David; Agarabi, Cyrus; Khan, Mansoor A

    2014-07-01

    The aim of the present study was to apply an integrated process analytical technology (PAT) approach to control and monitor the effect of the degree of supercooling on critical process and product parameters of a lyophilization cycle. Two concentrations of a mAb formulation were used as models for lyophilization. ControLyo™ technology was applied to control the onset of ice nucleation, whereas tunable diode laser absorption spectroscopy (TDLAS) was utilized as a noninvasive tool for the inline monitoring of the water vapor concentration and vapor flow velocity in the spool during primary drying. The instantaneous measurements were then used to determine the effect of the degree of supercooling on critical process and product parameters. Controlled nucleation resulted in uniform nucleation at lower degrees of supercooling for both formulations, higher sublimation rates, lower mass transfer resistance, lower product temperatures at the sublimation interface, and shorter primary drying times compared with the conventional shelf-ramped freezing. Controlled nucleation also resulted in lyophilized cakes with more elegant and porous structure with no visible collapse or shrinkage, lower specific surface area, and shorter reconstitution times compared with the uncontrolled nucleation. Uncontrolled nucleation however resulted in lyophilized cakes with relatively lower residual moisture contents compared with controlled nucleation. TDLAS proved to be an efficient tool to determine the endpoint of primary drying. There was good agreement between data obtained from TDLAS-based measurements and SMART™ technology. ControLyo™ technology and TDLAS showed great potential as PAT tools to achieve enhanced process monitoring and control during lyophilization cycles. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
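
    The inline TDLAS measurement yields a water vapor concentration and a flow velocity, from which the sublimation mass flow through the spool follows as concentration × velocity × cross-sectional area × molar mass. A back-of-the-envelope sketch with assumed, illustrative numbers (not the study's data):

```python
import math

# Water vapor mass flow through the spool from TDLAS line-of-sight data.
# All numbers below are assumed, illustrative values.
M_WATER = 18.015e-3                        # kg/mol, molar mass of water
concentration = 1.2e-2                     # mol/m^3, vapor concentration (TDLAS)
velocity = 55.0                            # m/s, vapor flow velocity (TDLAS)
duct_area = math.pi * (0.35 / 2) ** 2      # m^2, spool cross-section, 0.35 m bore

sublimation_rate = concentration * velocity * duct_area * M_WATER   # kg/s
print(f"sublimation rate ≈ {sublimation_rate * 3.6e6:.0f} g/h")
```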

  8. Toward zero waste to landfill: an effective method for recycling zeolite waste from refinery industry

    NASA Astrophysics Data System (ADS)

    Homchuen, K.; Anuwattana, R.; Limphitakphong, N.; Chavalparit, O.

    2017-07-01

    One-third of the landfill waste from a refinery plant in Thailand is spent chloride zeolite, which consumes a great deal of land, cost, and time to handle. Toward zero waste to landfill, this study was aimed at determining an effective method for recycling zeolite waste by comparing the chemical process with the electrochemical process. To identify the optimum conditions of both processes, the concentration of the chemical solution and the reaction time were varied for the former, while the latter was varied in terms of current density, initial pH of water, and reaction time. The results indicated that zeolite waste from the refinery industry in Thailand should be regenerated through the chemical process with an alkaline solution, because it provided the best chloride adsorption efficiency at the lowest cost. Successful recycling will be beneficial not only in reducing the amount of landfill waste but also in reducing material and disposal costs and the consumption of natural resources.

  9. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    NASA Astrophysics Data System (ADS)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two, three and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
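
    The contour reconstruction step rests on a circular Hough transform. The paper uses an improved variant; the sketch below uses the stock OpenCV implementation on a synthetic image purely to illustrate the principle, and its parameter values are arbitrary and may need tuning.

```python
import cv2
import numpy as np

# Synthetic "dispersion" image: bright rings (bubbles/drops) on a dark field.
img = np.zeros((240, 320), dtype=np.uint8)
for (x, y, r) in [(80, 60, 20), (200, 120, 35), (150, 190, 15)]:
    cv2.circle(img, (x, y), r, 255, 2)
img = cv2.GaussianBlur(img, (5, 5), 1.5)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100, param2=20, minRadius=10, maxRadius=50)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"object at ({x}, {y}), diameter ≈ {2 * r} px")
```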

  10. Real-time spectral characterization of a photon pair source using a chirped supercontinuum seed.

    PubMed

    Erskine, Jennifer; England, Duncan; Kupchak, Connor; Sussman, Benjamin

    2018-02-15

    Photon pair sources have wide ranging applications in a variety of quantum photonic experiments and protocols. Many of these protocols require well controlled spectral correlations between the two output photons. However, due to low cross-sections, measuring the joint spectral properties of photon pair sources has historically been a challenging and time-consuming task. Here, we present an approach for the real-time measurement of the joint spectral properties of a fiber-based four wave mixing source. We seed the four wave mixing process using a broadband chirped pulse, studying the stimulated process to extract information regarding the spontaneous process. In addition, we compare stimulated emission measurements with the spontaneous process to confirm the technique's validity. Joint spectral measurements have taken many hours historically and several minutes with recent techniques. Here, measurements have been demonstrated in 5-30 s depending on resolution, offering substantial improvement. Additional benefits of this approach include flexible resolution, large measurement bandwidth, and reduced experimental overhead.

  11. Analyzing Discrepancies in a Software Development Project Change Request (CR) Assessment Process and Recommendations for Process Improvements

    NASA Technical Reports Server (NTRS)

    Cunningham, Kenneth James

    2003-01-01

    The Change Request (CR) assessment process is essential in the display development cycle. The assessment process is performed to ensure that the changes stated in the description of the CR match the changes in the actual display requirements. If a discrepancy is found between the CR and the requirements, the CR must be returned to the originator for corrections. Data were gathered from each of the developers to determine the types of discrepancies and the amount of time spent assessing each CR. This study sought to determine the most common types of discrepancies and the amount of time required to assess those issues. The study found that even though removing a discrepancy before assessment would save half the time needed to assess a CR with a discrepancy, the number of CRs found to have a discrepancy was very small compared to the total number of CRs assessed during the data gathering period.

  12. The interaction between short-term heat-treatment and the formability of an Al-Mg-Si alloy regarding deep drawing processes

    NASA Astrophysics Data System (ADS)

    Machhammer, M.; Sommitsch, C.

    2016-11-01

    Research conducted in recent years has shown that heat-treatable Al-Mg-Si alloys (6xxx) have great potential concerning the design of lightweight car bodies. Compared to conventional deep drawing steels, their field of application is limited by a lower formability. In order to minimize the disadvantage of a lower drawability, a short-term heat-treatment (SHT) can be applied before the forming process. The SHT, conducted in selected areas of the initial blank, leads to a local reduction of strength aimed at decreasing critical stresses during the deep drawing process. For a successful SHT procedure, solid knowledge of the crucial process parameters, such as the design of the SHT layout, the SHT process time, and the maximum SHT temperature, is required. It should also be noted that the storage time between the SHT and the forming process affects the mechanical properties of the SHT area. In this paper, the effect of diverse SHT process parameters and various storage time-frames on the major and minor strain situation of a deep drawn part is discussed by evaluation of the forming limit diagram. To achieve short heating times and a homogeneous temperature distribution, a one-sided contact heating tool was used for the heat treatment in this study.

  13. Recurrence plots of discrete-time Gaussian stochastic processes

    NASA Astrophysics Data System (ADS)

    Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick

    2016-09-01

    We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) the RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and threshold radius used to construct the RP.
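
    The two RP measures with analytical values in the paper, recurrence rate and percent determinism, can be computed directly from a scalar series with embedding dimension 1. A plain NumPy sketch follows (for simplicity it keeps the main diagonal, which standard RQA software usually excludes):

```python
import numpy as np

def recurrence_measures(x, eps, lmin=2):
    """Recurrence rate (REC) and percent determinism (DET) for a scalar
    series with embedding dimension 1 and threshold radius eps."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    rp = (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)
    rec = rp.sum() / n ** 2
    det_points = 0
    for k in range(-(n - 1), n):           # run-length encode every diagonal
        run = 0
        for v in list(np.diagonal(rp, offset=k)) + [0]:
            if v:
                run += 1
            else:
                if run >= lmin:
                    det_points += run      # points on lines of length >= lmin
                run = 0
    det = det_points / rp.sum() if rp.sum() else 0.0
    return rec, det

# First-order autoregressive process, one of the paper's two example processes.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, len(x)):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
print(recurrence_measures(x, eps=0.5))
```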

  14. Real-time spatio-temporal coherence estimation for autonomous mode identification and invariance tracking

    NASA Technical Reports Server (NTRS)

    Park, Han G. (Inventor); Zak, Michail (Inventor); James, Mark L. (Inventor); Mackey, Ryan M. E. (Inventor)

    2003-01-01

    A general method of anomaly detection from time-correlated sensor data is disclosed. Multiple time-correlated signals are received. Their cross-signal behavior is compared against a fixed library of invariants. The library is constructed during a training process, which is itself data-driven using the same time-correlated signals. The method is applicable to a broad class of problems and is designed to respond to any departure from normal operation, including faults or events that lie outside the training envelope.
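
    A drastically simplified sketch of the idea: learn cross-signal "invariants" from nominal training data (here just a correlation matrix, a stand-in for the patented invariant library) and score new data windows by their departure from the library.

```python
import numpy as np

def train_invariants(signals):
    """Learn a fixed library of cross-signal invariants from nominal data;
    here simply the pairwise correlation matrix (a simplified stand-in)."""
    return np.corrcoef(signals)

def anomaly_score(window, library):
    """Departure of a window's cross-signal behavior from the library."""
    return np.max(np.abs(np.corrcoef(window) - library))

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
nominal = np.vstack([np.sin(t), np.sin(t) + 0.1 * rng.standard_normal(t.size)])
library = train_invariants(nominal)

healthy = np.vstack([np.sin(t[:200]),
                     np.sin(t[:200]) + 0.1 * rng.standard_normal(200)])
faulty = np.vstack([np.sin(t[:200]), rng.standard_normal(200)])  # coupling broken
print("healthy:", anomaly_score(healthy, library))
print("faulty: ", anomaly_score(faulty, library))
```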

  15. The Effects of Dynamical Rates on Species Coexistence in a Variable Environment: The Paradox of the Plankton Revisited.

    PubMed

    Li, Lina; Chesson, Peter

    2016-08-01

    Hutchinson's famous hypothesis for the "paradox of the plankton" has been widely accepted, but critical aspects have remained unchallenged. Hutchinson argued that environmental fluctuations would promote coexistence when the timescale for environmental change is comparable to the timescale for competitive exclusion. Using a consumer-resource model, we do find that timescales of processes are important. However, it is not the time to exclusion that must be compared with the time for environmental change but the time for resource depletion. Fast resource depletion, when resource consumption is favored for different species at different times, strongly promotes coexistence. The time for exclusion is independent of the rate of resource depletion. Therefore, the widely believed predictions of Hutchinson are misleading. Fast resource depletion, as determined by environmental conditions, ensures strong coupling of environmental processes and competition, which leads to enhancement over time of intraspecific competition relative to interspecific competition as environmental shifts favor different species at different times. This critical coupling is measured by the covariance between environment and competition. Changes in this quantity as densities change determine the stability of coexistence and provide the key to rigorous analysis, both theoretically and empirically, of coexistence in a variable environment. These ideas apply broadly to diversity maintenance in variable environments whether the issue is species diversity or genetic diversity and competition or apparent competition.
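
    A toy consumer-resource simulation can make the role of the resource depletion timescale concrete: two consumers share one supplied resource while the environment alternately favors each species' uptake. The model below is an illustrative caricature with arbitrary parameter values, not the authors' model.

```python
import numpy as np

def simulate(depletion_rate, n_steps=4000, dt=0.05, period=25.0):
    """Two consumers on one supplied resource; the environment alternately
    favors each species' uptake. All parameters are illustrative only."""
    n = np.array([1.0, 1.0])              # consumer densities
    R = 1.0                               # resource level
    for step in range(n_steps):
        t = step * dt
        # Environmental state switches which species is favored.
        env = (np.array([1.0, 0.2]) if int(t // period) % 2 == 0
               else np.array([0.2, 1.0]))
        uptake = depletion_rate * env * n * R
        n = np.maximum(n + (0.5 * uptake - 0.1 * n) * dt, 1e-9)  # growth - mortality
        R = max(R + (1.0 - uptake.sum()) * dt, 0.0)              # supply - depletion
    return n

# Faster resource depletion couples the environment to competition more tightly.
for rate in (0.5, 5.0):
    print(f"depletion rate {rate}: final densities {simulate(rate).round(3)}")
```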

  16. Improving Emergency Department radiology transportation time: a successful implementation of lean methodology.

    PubMed

    Hitti, Eveline A; El-Eid, Ghada R; Tamim, Hani; Saleh, Rana; Saliba, Miriam; Naffaa, Lena

    2017-09-05

    Emergency Department overcrowding has become a global problem and a growing safety and quality concern. Radiology and laboratory turnaround time, ED boarding and increased ED visits are some of the factors that contribute to ED overcrowding. Lean methods have been used in the ED to address multiple flow challenges from improving door-to-doctor time to reducing length of stay. The objective of this study is to determine the effectiveness of using Lean management methods on improving Emergency Department transportation times for plain radiography. We performed a before and after study at an academic urban Emergency Department with 49,000 annual visits after implementing a Lean driven intervention. The primary outcome was mean radiology transportation turnaround time (TAT). Secondary outcomes included overall study turnaround time from order processing to preliminary report time as well as ED length of stay. All ED patients undergoing plain radiography 6 months pre-intervention were compared to all ED patients undergoing plain radiography 6 months post-intervention after a 1 month washout period. Post intervention there was a statistically significant decrease in the mean transportation TAT (mean ± SD: 9.87 min ± 15.05 versus 22.89 min ± 22.05, respectively, p-value <0.0001). In addition, it was found that 71.6% of patients in the post-intervention had transportation TAT ≤ 10 min, as compared to 32.3% in the pre-intervention period, p-value <0.0001, with narrower interquartile ranges in the post-intervention period. Similarly, the "study processing to preliminary report time" and the length of stay were lower in the post-intervention as compared to the pre-intervention, (52.50 min ± 35.43 versus 54.04 min ± 34.72, p-value = 0.02 and 3.65 h ± 5.17 versus 4.57 h ± 10.43, p < 0.0001, respectively), in spite of an increase in the time it took to elease a preliminary report in the post-intervention period. Using Lean change management techniques can be effective in reducing transportation time to plain radiography in the Emergency Department as well as improving process reliability.

  17. Near real-time shadow detection and removal in aerial motion imagery application

    NASA Astrophysics Data System (ADS)

    Silva, Guilherme F.; Carneiro, Grace B.; Doth, Ricardo; Amaral, Leonardo A.; Azevedo, Dario F. G. de

    2018-06-01

    This work presents a method to automatically detect and remove shadows in urban aerial images and its application in an aerospace remote monitoring system requiring near real-time processing. Our detection method generates shadow masks and is accelerated by GPU programming. To obtain the shadow masks, we converted images from RGB to CIELCh model, calculated a modified Specthem ratio, and applied multilevel thresholding. Morphological operations were used to reduce shadow mask noise. The shadow masks are used in the process of removing shadows from the original images using the illumination ratio of the shadow/non-shadow regions. We obtained shadow detection accuracy of around 93% and shadow removal results comparable to the state-of-the-art while maintaining execution time under real-time constraints.
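
    A heavily simplified sketch of the detect-then-relight pipeline: the published method thresholds a modified Specthem ratio in the CIELCh model, whereas the stand-in below merely Otsu-thresholds lightness, cleans the mask morphologically, and rescales shadow pixels by the lit/shadow illumination ratio.

```python
import cv2
import numpy as np

def shadow_mask(bgr):
    """Simplified shadow mask: Otsu threshold on lightness plus morphology.
    (Stand-in for the CIELCh / modified Specthem ratio of the paper.)"""
    L = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)[:, :, 0]
    _, mask = cv2.threshold(L, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle

def remove_shadows(bgr, mask):
    """Relight shadow pixels by the lit/shadow illumination ratio per channel."""
    out = bgr.astype(np.float32)
    lit, shaded = mask == 0, mask > 0
    if shaded.any() and lit.any():
        ratio = out[lit].mean(axis=0) / (out[shaded].mean(axis=0) + 1e-6)
        out[shaded] *= ratio
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = np.full((120, 160, 3), 200, np.uint8)
    img[40:80, 50:110] = 60                      # synthetic dark "shadow" patch
    mask = shadow_mask(img)
    restored = remove_shadows(img, mask)
    print("shadow pixels:", int((mask > 0).sum()),
          "mean after relight:", restored[40:80, 50:110].mean().round(1))
```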

  18. Fabrication and hydrophobic characteristics of micro / nanostructures on polydimethylsiloxane surface prepared by picosecond laser

    NASA Astrophysics Data System (ADS)

    Bin, Wang; Dong, Shiyun; Yan, Shixing; Gang, Xiao; Xie, Zhiwei

    2018-03-01

    Picosecond lasers have ultrashort pulse widths and ultrahigh peak powers, which makes them widely used in the field of micro/nanoscale fabrication. Polydimethylsiloxane (PDMS) is a typical silicone elastomer with good hydrophobicity. To further improve the hydrophobicity of PDMS, a picosecond laser was used to fabricate a grid-like microstructure on the surface of PDMS, and the relationships between the hydrophobicity of PDMS and the surface microstructure and laser processing parameters, such as the number of processing passes and the cell spacing, were studied. The results show that, compared with unprocessed PDMS, the presence of the surface microstructure significantly improved the hydrophobicity of PDMS. When the number of processing passes is constant, the hydrophobicity of PDMS decreases with increasing cell spacing. However, when the cell spacing is fixed, the hydrophobicity of PDMS first increases and then decreases with an increasing number of processing passes. In particular, with 6 laser processing passes and a cell spacing of 50 µm, the contact angle of PDMS increased from 113° to 154°, reaching the superhydrophobic level.

  19. Solvent-Free Manufacturing of Electrodes for Lithium-ion Batteries

    NASA Astrophysics Data System (ADS)

    Ludwig, Brandon; Zheng, Zhangfeng; Shou, Wan; Wang, Yan; Pan, Heng

    2016-03-01

    Lithium-ion battery electrodes were manufactured using a new, completely dry powder painting process. The solvents used for conventional slurry-cast electrodes are completely eliminated. Thermal activation time is greatly reduced because the time- and resource-demanding solvent evaporation step required in slurry-cast electrode manufacturing is replaced by a hot rolling process. It was found that the thermal activation time needed to induce mechanical bonding of the thermoplastic polymer to the remaining active electrode particles is only a few seconds. Removing the solvent and drying steps allows large-scale Li-ion battery production to be more economically viable in markets such as automotive energy storage systems. Guided by the surface energies of the various powders, which govern powder mixing and binder distribution, bonding tests of the dry-deposited particles onto the current collector show that the bonding strength is greater than that of slurry-cast electrodes: 148.8 kPa compared to 84.3 kPa. Electrochemical tests show that the new electrodes outperform conventional slurry-processed electrodes, which is due to the different binder distribution.

  20. Planning and production of grammatical and lexical verbs in multi-word messages.

    PubMed

    Michel Lange, Violaine; Messerschmidt, Maria; Harder, Peter; Siebner, Hartwig Roman; Boye, Kasper

    2017-01-01

    Grammatical words represent the part of grammar that can be most directly contrasted with the lexicon. Aphasiological studies, linguistic theories and psycholinguistic studies suggest that their processing is operated at different stages in speech production. Models of sentence production propose that at the formulation stage, lexical words are processed at the functional level while grammatical words are processed at a later positional level. In this study we consider proposals made by linguistic theories and psycholinguistic models to derive two predictions for the processing of grammatical words compared to lexical words. First, based on the assumption that grammatical words are less crucial for communication and therefore paid less attention to, it is predicted that they show shorter articulation times and/or higher error rates than lexical words. Second, based on the assumption that grammatical words differ from lexical words in being dependent on a lexical host, it is hypothesized that the retrieval of a grammatical word has to be put on hold until its lexical host is available, and it is predicted that this is reflected in longer reaction times (RTs) for grammatical compared to lexical words. We investigated these predictions by comparing fully homonymous sentences with only a difference in verb status (grammatical vs. lexical) elicited by a specific context. We measured RTs, duration and accuracy rate. No difference in duration was observed. Longer RTs and a lower accuracy rate for grammatical words were reported, successfully reflecting grammatical word properties as defined by linguistic theories and psycholinguistic models. Importantly, this study provides insight into the span of encoding and grammatical encoding processes in speech production.

  1. Time course of implicit processing and explicit processing of emotional faces and emotional words.

    PubMed

    Frühholz, Sascha; Jellinghaus, Anne; Herrmann, Manfred

    2011-05-01

    Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information regarding emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, as was shown for faces, whereas others report a rather delayed decoding of information about emotions from words. Here, we introduced an implicit (color naming) and explicit task (emotion judgment) with facial expressions and words, both containing information about emotions, to directly compare the time course of emotion processing using event-related potentials (ERP). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, the EPN and the LPP components and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source on the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing compared to emotional words, but the emotional value of words may have been detected at early stages of emotional processing in the visual cortex, as was indicated by the extrastriate source activity. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Planning and production of grammatical and lexical verbs in multi-word messages

    PubMed Central

    Messerschmidt, Maria; Harder, Peter; Siebner, Hartwig Roman; Boye, Kasper

    2017-01-01

    Grammatical words represent the part of grammar that can be most directly contrasted with the lexicon. Aphasiological studies, linguistic theories and psycholinguistic studies suggest that their processing is operated at different stages in speech production. Models of sentence production propose that at the formulation stage, lexical words are processed at the functional level while grammatical words are processed at a later positional level. In this study we consider proposals made by linguistic theories and psycholinguistic models to derive two predictions for the processing of grammatical words compared to lexical words. First, based on the assumption that grammatical words are less crucial for communication and therefore paid less attention to, it is predicted that they show shorter articulation times and/or higher error rates than lexical words. Second, based on the assumption that grammatical words differ from lexical words in being dependent on a lexical host, it is hypothesized that the retrieval of a grammatical word has to be put on hold until its lexical host is available, and it is predicted that this is reflected in longer reaction times (RTs) for grammatical compared to lexical words. We investigated these predictions by comparing fully homonymous sentences with only a difference in verb status (grammatical vs. lexical) elicited by a specific context. We measured RTs, duration and accuracy rate. No difference in duration was observed. Longer RTs and a lower accuracy rate for grammatical words were reported, successfully reflecting grammatical word properties as defined by linguistic theories and psycholinguistic models. Importantly, this study provides insight into the span of encoding and grammatical encoding processes in speech production. PMID:29091940

  3. Clinical time series prediction: towards a hierarchical dynamical system framework

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
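
    At the lower level of such a framework sits Gaussian process regression, which handles irregular sampling naturally. A minimal sketch using scikit-learn on a synthetic, irregularly sampled lab-value series; the kernel choice and data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Irregularly sampled "lab value" time series (times in days, synthetic data).
t_obs = np.sort(np.random.default_rng(3).uniform(0, 30, 15))
y_obs = 10 + 2 * np.sin(t_obs / 4) \
        + 0.3 * np.random.default_rng(4).standard_normal(15)

gp = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.1),
    normalize_y=True)
gp.fit(t_obs.reshape(-1, 1), y_obs)

t_new = np.linspace(0, 35, 8).reshape(-1, 1)
mean, std = gp.predict(t_new, return_std=True)
for t, m, s in zip(t_new.ravel(), mean, std):
    print(f"day {t:5.1f}: predicted {m:5.2f} ± {s:4.2f}")
```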

  4. Evaluation of ultrasound based sterilization approaches in terms of shelf life and quality parameters of fruit and vegetable juices.

    PubMed

    Khandpur, Paramjeet; Gogate, Parag R

    2016-03-01

    The present work evaluates the performance of ultrasound based sterilization approaches for processing of different fruit and vegetable juices in terms of microbial growth and changes in the quality parameters during the storage. Comparison with the conventional thermal processing has also been presented. A novel approach based on combination of ultrasound with ultraviolet irradiation and crude extract of essential oil from orange peels has been used for the first time. Identification of the microbial growth (total bacteria and yeast content) in the juices during the subsequent storage and assessing the safety for human consumption along with the changes in the quality parameters (Brix, titratable acidity, pH, ORP, salt, conductivity, TSS and TDS) has been investigated in detail. The optimized ultrasound parameters for juice sterilization were established as ultrasound power of 100 W and treatment time of 15 min for the constant frequency operation (20 kHz). It has been established that more than 5 log reduction was achieved using the novel combined approaches based on ultrasound. The treated juices using different approaches based on ultrasound also showed lower microbial growth and improved quality characteristics as compared to the thermally processed juice. Scale up studies were also performed using spinach juice as the test sample with processing at 5 L volume for the first time. The ultrasound treated juice satisfied the microbiological and physiochemical safety limits in refrigerated storage conditions for 20 days for the large scale processing. Overall the present work conclusively established the usefulness of combined treatment approaches based on ultrasound for maintaining the microbiological safety of beverages with enhanced shelf life and excellent quality parameters as compared to the untreated and thermally processed juices. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Multi-parameter comparison of a standardized mixed meal tolerance test in healthy and type 2 diabetic subjects: the PhenFlex challenge.

    PubMed

    Wopereis, Suzan; Stroeve, Johanna H M; Stafleu, Annette; Bakker, Gertruud C M; Burggraaf, Jacobus; van Erk, Marjan J; Pellis, Linette; Boessen, Ruud; Kardinaal, Alwine A F; van Ommen, Ben

    2017-01-01

    A key feature of metabolic health is the ability to adapt upon dietary perturbations. Recently, it was shown that metabolic challenge tests in combination with the new generation biomarkers allow the simultaneous quantification of major metabolic health processes. Currently, applied challenge tests are largely non-standardized. A systematic review defined an optimal nutritional challenge test, the "PhenFlex test" (PFT). This study aimed to prove that the PFT modulates all relevant processes governing metabolic health, thereby allowing to distinguish subjects with different metabolic health status. Therefore, 20 healthy and 20 type 2 diabetic (T2D) male subjects were challenged with both the PFT and an oral glucose tolerance test (OGTT). During the 8-h response time course, 132 parameters were quantified that report on 26 metabolic processes distributed over 7 organs (gut, liver, adipose, pancreas, vasculature, muscle, kidney) and systemic stress. In healthy subjects, 110 of the 132 parameters showed a time course response. Patients with T2D showed 18 parameters to be significantly different after overnight fasting compared to healthy subjects, while 58 parameters were different in the post-challenge time course after the PFT. This demonstrates the added value of the PFT in distinguishing subjects with different health status. The OGTT and PFT response was highly comparable for glucose metabolism as identical amounts of glucose were present in both challenge tests. Yet the PFT reports on additional processes, including vasculature, systemic stress, and metabolic flexibility. The PFT enables the quantification of all relevant metabolic processes involved in maintaining or regaining homeostasis of metabolic health. Studying both healthy subjects and subjects with impaired metabolic health showed that the PFT revealed new processes lying beneath health. This study provides the first evidence towards adopting the PFT as a gold standard in nutrition research.

  6. Generation of Non-Homogeneous Poisson Processes by Thinning: Programming Considerations and Comparision with Competing Algorithms.

    DTIC Science & Technology

    1978-12-01

    Poisson processes. The method is valid for Poisson processes with any given intensity function. The basic thinning algorithm is modified to exploit several refinements which reduce computer execution time by approximately one-third. The basic and modified thinning programs are compared with the Poisson decomposition and gap-statistics algorithm, which is easily implemented for Poisson processes with intensity functions of the form exp(a₀ + a₁t + a₂t²). The thinning programs are competitive in both execution
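
    The thinning (acceptance-rejection) algorithm itself is short: draw candidate events from a homogeneous process at the bounding rate and accept each with probability λ(t)/λ_max. A sketch for the log-quadratic intensity family mentioned above:

```python
import numpy as np

def thinning(intensity, lam_max, t_end, rng=None):
    """Simulate a non-homogeneous Poisson process on [0, t_end] whose rate
    intensity(t) is bounded above by lam_max (Lewis-Shedler thinning)."""
    rng = rng or np.random.default_rng(0)
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)         # candidate at rate lam_max
        if t > t_end:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:  # keep w.p. lambda(t)/lam_max
            events.append(t)

# Intensity of the log-quadratic form exp(a0 + a1*t + a2*t^2) with a2 < 0.
a0, a1, a2 = 0.5, 0.8, -0.06
lam = lambda t: np.exp(a0 + a1 * t + a2 * t ** 2)
lam_max = np.exp(a0 + a1 ** 2 / (4 * -a2))          # peak at t = -a1 / (2*a2)
print(len(thinning(lam, lam_max, t_end=10.0)), "events")
```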

  7. Catchment virtual observatory for sharing flow and transport models outputs: using residence time distribution to compare contrasting catchments

    NASA Astrophysics Data System (ADS)

    Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald

    2017-04-01

    The distribution of groundwater residence time in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, residence time distribution is a general output that could be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called the Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using its post-processing methods to implement new approaches to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways for moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Models outputs, Catchment hydrology, Inter-catchment comparison

  8. Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes

    PubMed Central

    Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2013-01-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563

  9. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    PubMed

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm(3) voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
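
    For reference, trilinear interpolation weights the 8 voxels surrounding a fractional sample position by their opposing volume fractions. A compact NumPy sketch of the computation these two records evaluate:

```python
import numpy as np

def trilinear(volume, pts):
    """Trilinear interpolation of `volume` at fractional (z, y, x) points.
    `pts` is an (N, 3) array of coordinates strictly inside the volume."""
    pts = np.asarray(pts, dtype=float)
    z0, y0, x0 = np.floor(pts.T).astype(int)
    dz, dy, dx = (pts - np.floor(pts)).T
    z1, y1, x1 = z0 + 1, y0 + 1, x0 + 1

    c000 = volume[z0, y0, x0]; c100 = volume[z1, y0, x0]
    c010 = volume[z0, y1, x0]; c110 = volume[z1, y1, x0]
    c001 = volume[z0, y0, x1]; c101 = volume[z1, y0, x1]
    c011 = volume[z0, y1, x1]; c111 = volume[z1, y1, x1]

    return ((c000 * (1 - dz) + c100 * dz) * (1 - dy)
            + (c010 * (1 - dz) + c110 * dz) * dy) * (1 - dx) \
        + ((c001 * (1 - dz) + c101 * dz) * (1 - dy)
           + (c011 * (1 - dz) + c111 * dz) * dy) * dx

vol = np.random.default_rng(0).random((64, 64, 64))
print(trilinear(vol, [[10.5, 20.25, 30.75]]))
```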

  10. Comparing Eye Tracking with Electrooculography for Measuring Individual Sentence Comprehension Duration

    PubMed Central

    Müller, Jana Annina; Wendt, Dorothea; Kollmeier, Birger; Brand, Thomas

    2016-01-01

    The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. In order to reduce practical challenges, we first reduced the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set on a different collective of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found using the eye tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye tracking data. The results of the growth curve analysis were compared with the results of the bootstrap procedure. Both analysis methods show similar processing durations. PMID:27764125

  11. Real-time separation of multineuron recordings with a DSP32C signal processor.

    PubMed

    Gädicke, R; Albus, K

    1995-04-01

    We have developed a hardware and software package for real-time discrimination of multiple-unit activities recorded simultaneously from multiple microelectrodes using a VME-bus system. Compared with other systems cited in the literature or commercially available, our system has the following advantages: (1) each electrode is served by its own preprocessor (DSP32C); (2) on-line spike discrimination is performed independently for each electrode; and (3) the VME-bus allows processing of data received from 16 electrodes. The digitized (62.5 kHz) spike form is itself used as the model spike; the algorithm allows for comparing and sorting complete wave forms in real time into 8 different models per electrode.
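
    Whole-waveform sorting of the kind described reduces to nearest-template classification of each detected spike. A minimal sketch with synthetic model spikes; the waveform shapes and distance threshold are illustrative, not the system's actual parameters.

```python
import numpy as np

def classify_spike(waveform, templates, max_dist=0.1):
    """Assign a detected spike to the nearest stored model spike by
    whole-waveform mean squared distance, or -1 if none is close enough."""
    dists = [np.mean((waveform - t) ** 2) for t in templates]
    best = int(np.argmin(dists))
    return best if dists[best] <= max_dist else -1

# Two synthetic model spikes sampled at 62.5 kHz (32 samples ~ 0.5 ms).
n = 32
t = np.arange(n)
templates = [np.exp(-(t - 8) ** 2 / 8.0),
             -0.7 * np.exp(-(t - 12) ** 2 / 18.0)]

rng = np.random.default_rng(7)
spike = templates[1] + 0.05 * rng.standard_normal(n)
print("assigned to model", classify_spike(spike, templates))
```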

  12. The exercise and affect relationship: evidence for the dual-mode model and a modified opponent process theory.

    PubMed

    Markowitz, Sarah M; Arent, Shawn M

    2010-10-01

    This study examined the relationship between exertion level and affect using the framework of opponent-process theory and the dual-mode model, with the Activation-Deactivation Adjective Checklist and the State Anxiety Inventory, among 14 active and 14 sedentary participants doing 20 min of treadmill exercise at speeds of 5% below, 5% above, and at lactate threshold (LT). We found significant effects of Time, Condition, Time × Condition, and Time × Group, but no Group, Group × Condition, or Time × Group × Condition effects, such that the 5% above LT condition produced a worsening of affect in-task compared with all other conditions, whereas, across conditions, participants experienced in-task increases in energy and tension and in-task decreases in tiredness and calmness relative to baseline. Posttask, participants experienced mood improvement (decreased tension and anxiety, and increased calmness) across conditions, with a 30-min delay in the above-LT condition. These results partially support the dual-mode model and a modified opponent-process theory.

  13. Colt: an experiment in wormhole run-time reconfiguration

    NASA Astrophysics Data System (ADS)

    Bittner, Ray; Athanas, Peter M.; Musgrove, Mark

    1996-10-01

    Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.

  14. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng

    Dynamic simulation for transient stability assessment is one of the most important, but also most intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
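
    Many dynamic simulation workloads, such as contingency screening, consist of runs that are independent of one another, which is what makes both MPI- and OpenMP-style parallelism attractive. As a language-neutral illustration of that task parallelism (not the paper's implementation), a Python process-pool sketch with a dummy per-case workload:

```python
from concurrent.futures import ProcessPoolExecutor
import math

def simulate_contingency(case_id):
    """Stand-in for one dynamic (transient stability) simulation; each
    case is independent, so cases can run on separate processors."""
    x = 1.0
    for _ in range(200_000):             # dummy numerical integration loop
        x = x + 1e-6 * math.sin(x + case_id)
    return case_id, x

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:  # analogous to MPI ranks / OpenMP threads
        for case_id, result in pool.map(simulate_contingency, range(16)):
            print(f"case {case_id}: final state {result:.6f}")
```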

  15. Pitting temporal against spatial integration in schizophrenic patients.

    PubMed

    Herzog, Michael H; Brand, Andreas

    2009-06-30

    Schizophrenic patients show strong impairments in visual backward masking, possibly caused by deficits in the early stages of visual processing. The underlying aberrant mechanisms are not clearly understood. Spatial as well as temporal processing deficits have been proposed. Here, by combining a spatial with a temporal integration paradigm, we provide further evidence that temporal but not spatial processing is impaired in schizophrenic patients. Eleven schizophrenic patients and ten healthy controls were presented with sequences composed of Vernier stimuli. Patients needed significantly longer presentation times for sequentially presented Vernier stimuli to reach a performance level comparable to that of healthy controls (temporal integration deficit). When we added spatial contextual elements to some of the Vernier stimuli, performance changed in a complex but comparable manner in patients and controls (intact spatial integration). Hence, temporal but not spatial processing seems to be deficient in schizophrenia.

  16. Surface modification of graphene using HBC-6ImBr in solution-processed OLEDs

    NASA Astrophysics Data System (ADS)

    Cheng, Tsung-Chin; Ku, Ting-An; Huang, Kuo-You; Chou, Ang-Sheng; Chang, Po-Han; Chang, Chao-Chen; Yue, Cheng-Feng; Liu, Chia-Wei; Wang, Po-Han; Wong, Ken-Tsung; Wu, Chih-I.

    2018-01-01

    In this work, we report a simple method for solution-processed organic light emitting devices (OLEDs), in which single-layer graphene acts as the anode and the hexa-peri-hexabenzocoronene exfoliating agent (HBC-6ImBr) provides surface modification. In SEM images, the PEDOT:PSS solution fully covered the graphene electrode after coating with HBC-6ImBr. The fabricated solution-processed OLEDs with a single-layer graphene anode showed outstanding brightness of 3182 cd/m2 and current efficiency of up to 6 cd/A, which is comparable to that of indium tin oxide films, and the device brightness is six times that of tri-layer graphene treated with UV-ozone at the same driving voltage. This method can be used in a wide variety of solution-processed organic optoelectronics on surface-modified graphene anodes.

  17. The role of reading time complexity and reading speed in text comprehension.

    PubMed

    Wallot, Sebastian; O'Brien, Beth A; Haussmann, Anna; Kloos, Heidi; Lyby, Marlene S

    2014-11-01

    Reading speed is commonly used as an index of reading fluency. However, reading speed is not a consistent predictor of text comprehension, when speed and comprehension are measured on the same text within the same reader. This might be due to the somewhat ambiguous nature of reading speed, which is sometimes regarded as a feature of the reading process, and sometimes as a product of that process. We argue that both reading speed and comprehension should be seen as the result of the reading process, and that the process of fluent text reading can instead be described by complexity metrics that quantify aspects of the stability of the reading process. In this article, we introduce complexity metrics in the context of reading and apply them to data from a self-paced reading study. In this study, children and adults read a text silently or aloud and answered comprehension questions after reading. Our results show that recurrence metrics that quantify the degree of temporal structure in reading times yield better prediction of text comprehension compared to reading speed. However, the results for fractal metrics are less clear. Furthermore, prediction of text comprehension is generally strongest and most consistent across silent and oral reading when comprehension scores are normalized by reading speed. Analyses of word length and word frequency indicate that the observed complexity in reading times is not a simple function of the lexical properties of the text, suggesting that text reading might work differently compared to reading of isolated word or sentences. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  18. Task Effects Reveal Cognitive Flexibility Responding to Frequency and Predictability: Evidence from Eye Movements in Reading and Proofreading

    PubMed Central

    Schotter, Elizabeth R.; Bicknell, Klinton; Howard, Ian; Levy, Roger; Rayner, Keith

    2014-01-01

    It is well-known that word frequency and predictability affect processing time. These effects change magnitude across tasks, but studies testing this use tasks with different response types (e.g., lexical decision, naming, and fixation time during reading; Schilling, Rayner & Chumbley, 1998), preventing direct comparison. Recently, Kaakinen and Hyönä (2010) overcame this problem, comparing fixation times in reading for comprehension and proofreading, showing that the frequency effect was larger in proofreading than in reading. This result could be explained by readers exhibiting substantial cognitive flexibility, and qualitatively changing how they process words in the proofreading task in a way that magnifies effects of word frequency. Alternatively, readers may not change word processing so dramatically, and instead may perform more careful identification generally, increasing the magnitude of many word processing effects (e.g., both frequency and predictability). We tested these possibilities with two experiments: subjects read for comprehension and then proofread for spelling errors (letter transpositions) that produce nonwords (e.g., trcak for track as in Kaakinen & Hyönä) or that produce real but unintended words (e.g., trial for trail) to compare how the task changes these effects. Replicating Kaakinen and Hyönä, frequency effects increased during proofreading. However, predictability effects only increased when integration with the sentence context was necessary to detect errors (i.e., when spelling errors produced words that were inappropriate in the sentence; trial for trail). The results suggest that readers adopt sophisticated word processing strategies to accommodate task demands. PMID:24434024

  19. Computed Tomography Window Blending: Feasibility in Thoracic Trauma.

    PubMed

    Mandell, Jacob C; Wortman, Jeremy R; Rocha, Tatiana C; Folio, Les R; Andriole, Katherine P; Khurana, Bharti

    2018-02-07

    This study aims to demonstrate the feasibility of processing computed tomography (CT) images with a custom window blending algorithm that combines soft-tissue, bone, and lung window settings into a single image; to compare the time for interpretation of chest CT for thoracic trauma with window blending and conventional window settings; and to assess diagnostic performance of both techniques. Adobe Photoshop was scripted to process axial DICOM images from retrospective contrast-enhanced chest CTs performed for trauma with a window-blending algorithm. Two emergency radiologists independently interpreted the axial images from 103 chest CTs with both blended and conventional windows. Interpretation time and diagnostic performance were compared with Wilcoxon signed-rank test and McNemar test, respectively. Agreement with Nexus CT Chest injury severity was assessed with the weighted kappa statistic. A total of 13,295 images were processed without error. Interpretation was faster with window blending, resulting in a 20.3% time saving (P < .001), with no difference in diagnostic performance, within the power of the study to detect a difference in sensitivity of 5% as determined by post hoc power analysis. The sensitivity of the window-blended cases was 82.7%, compared to 81.6% for conventional windows. The specificity of the window-blended cases was 93.1%, compared to 90.5% for conventional windows. All injuries of major clinical significance (per Nexus CT Chest criteria) were correctly identified in all reading sessions, and all negative cases were correctly classified. All readers demonstrated near-perfect agreement with injury severity classification with both window settings. In this pilot study utilizing retrospective data, window blending allows faster preliminary interpretation of axial chest CT performed for trauma, with no significant difference in diagnostic performance compared to conventional window settings. Future studies would be required to assess the utility of window blending in clinical practice. Copyright © 2018 The Association of University Radiologists. All rights reserved.
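
    Window blending of the general kind described can be sketched as a weighted sum of per-window grayscale mappings. In the sketch below, the window centers and widths are standard CT display values, but the blend weights are an illustrative assumption, not the study's proprietary algorithm.

```python
import numpy as np

def apply_window(hu, center, width):
    """Map CT Hounsfield units to [0, 1] display values for one window."""
    lo, hi = center - width / 2, center + width / 2
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def blend_windows(hu, weights=(0.4, 0.3, 0.3)):
    """Combine soft-tissue, bone, and lung windows into a single image
    (equal-ish blend weights are an assumption for illustration)."""
    soft = apply_window(hu, center=50, width=400)
    bone = apply_window(hu, center=500, width=1800)
    lung = apply_window(hu, center=-600, width=1500)
    w = np.asarray(weights) / np.sum(weights)
    return w[0] * soft + w[1] * bone + w[2] * lung

hu = np.random.default_rng(0).integers(-1000, 1500, size=(4, 4)).astype(float)
print(blend_windows(hu).round(2))
```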

  20. Comparing High-latitude Ionospheric and Thermospheric Lagrangian Coherent Structures

    NASA Astrophysics Data System (ADS)

    Wang, N.; Ramirez, U.; Flores, F.; Okic, D.; Datta-Barua, S.

    2015-12-01

    Lagrangian Coherent Structures (LCSs) are invisible boundaries in time varying flow fields that may be subject to mixing and turbulence. The LCS is defined by the local maxima of the finite time Lyapunov exponent (FTLE), a scalar field quantifying the degree of stretching of fluid elements over the flow domain. Although the thermosphere is dominated by neutral wind processes and the ionosphere is governed by plasma electrodynamics, we can compare the LCS in the two modeled flow fields to yield insight into transport and interaction processes in the high-latitude IT system. To obtain the thermospheric LCS, we use the Horizontal Wind Model 2014 (HWM14) [1] at a single altitude to generate the two-dimensional velocity field. The FTLE computation is applied to study the flow field of the neutral wind, and to visualize the forward-time Lagrangian Coherent Structures in the flow domain. The time-varying structures indicate a possible thermospheric LCS ridge in the auroral oval area. The results of a two-day run during a geomagnetically quiet period show that the structures are diurnally quasi-periodic, suggesting that solar radiation influences the neutral wind flow field. To find the LCS in the high-latitude ionospheric drifts, the Weimer 2001 [2] polar electric potential model and the International Geomagnetic Reference Field 11 [3] are used to compute the ExB drift flow field in the ionosphere. As with the neutral winds, the Lagrangian Coherent Structures are obtained by applying the FTLE computation. The relationship between the thermospheric and ionospheric LCS is analyzed by comparing overlapping FTLE maps. Both a publicly available FTLE solver [4] and a custom-built FTLE computation are used and compared for validation [5]. Comparing the modeled IT LCSs on a quiet day with the modeled IT LCSs on a storm day indicates important factors in the structure and time evolution of the LCS.
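
    The FTLE itself is obtained by advecting a grid of tracers through the flow, differentiating the resulting flow map, and taking the largest eigenvalue of the Cauchy-Green tensor. The sketch below applies this recipe to the standard double-gyre test flow as a stand-in for the HWM14 or ExB fields; the parameters and crude forward-Euler advection are illustrative only.

```python
import numpy as np

def velocity(x, y, t, A=0.1, eps=0.25, om=2 * np.pi / 10):
    """Double-gyre test flow, a common analytic stand-in for geophysical
    velocity fields."""
    a = eps * np.sin(om * t)
    b = 1 - 2 * a
    f = a * x ** 2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def ftle(nx=201, ny=101, t0=0.0, T=15.0, dt=0.1):
    """Forward-time FTLE: advect a tracer grid, then take the largest
    eigenvalue of the Cauchy-Green tensor of the flow map."""
    x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
    px, py = x.copy(), y.copy()
    for t in np.arange(t0, t0 + T, dt):        # forward Euler advection
        u, v = velocity(px, py, t)
        px, py = px + u * dt, py + v * dt
    dxdx = np.gradient(px, axis=1) / np.gradient(x, axis=1)
    dxdy = np.gradient(px, axis=0) / np.gradient(y, axis=0)
    dydx = np.gradient(py, axis=1) / np.gradient(x, axis=1)
    dydy = np.gradient(py, axis=0) / np.gradient(y, axis=0)
    c11 = dxdx ** 2 + dydx ** 2                # Cauchy-Green tensor entries
    c12 = dxdx * dxdy + dydx * dydy
    c22 = dxdy ** 2 + dydy ** 2
    lam = 0.5 * (c11 + c22 + np.sqrt((c11 - c22) ** 2 + 4 * c12 ** 2))
    return np.log(np.sqrt(lam)) / abs(T)

field = ftle()
print("max FTLE:", field.max())   # ridges (local maxima) trace the LCS
```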

  1. Quantifying memory in complex physiological time-series.

    PubMed

    Shirazi, Amir H; Raoufy, Mohammad R; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R; Amodio, Piero; Jafari, G Reza; Montagnese, Sara; Mani, Ali R

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of "memory length" was used to define the time period, or scale, over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empirical evidence that rare fluctuations in cardio-respiratory time-series are 'forgotten' quickly in healthy subjects, while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low-order dynamics and short memory around its average, and high-order dynamics around rare fluctuations.
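
    In inverse statistics, one asks how long it takes, from each starting point, for the series to first show a rare fluctuation of a given size; memory shows up when those waiting times differ from a shuffled, memory-less surrogate. A minimal sketch of this idea (toy random-walk data and an arbitrary 3-sigma rarity level, not the authors' estimator):

      import numpy as np

      def waiting_times(x, rho):
          # First time, from each start index, that the forward return
          # x[t+k] - x[t] reaches the rare level rho (inverse statistics).
          waits = []
          for t in range(len(x) - 1):
              ahead = x[t + 1:] - x[t]
              hit = np.argmax(ahead >= rho)
              if ahead[hit] >= rho:        # argmax returns 0 even with no hit
                  waits.append(hit + 1)
          return np.array(waits)

      rng = np.random.default_rng(0)
      series = np.cumsum(rng.normal(size=2000))   # toy "physiological" signal
      surrogate = np.cumsum(rng.permutation(np.diff(series)))  # memory-less
      rho = 3 * np.std(np.diff(series))           # a "rare" fluctuation level
      print(waiting_times(series, rho).mean(), waiting_times(surrogate, rho).mean())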

  2. Quantifying Memory in Complex Physiological Time-Series

    PubMed Central

    Shirazi, Amir H.; Raoufy, Mohammad R.; Ebadi, Haleh; De Rui, Michele; Schiff, Sami; Mazloom, Roham; Hajizadeh, Sohrab; Gharibzadeh, Shahriar; Dehpour, Ahmad R.; Amodio, Piero; Jafari, G. Reza; Montagnese, Sara; Mani, Ali R.

    2013-01-01

    In a time-series, memory is a statistical feature that lasts for a period of time and distinguishes the time-series from a random, or memory-less, process. In the present study, the concept of “memory length” was used to define the time period, or scale, over which rare events within a physiological time-series do not appear randomly. The method is based on inverse statistical analysis and provides empirical evidence that rare fluctuations in cardio-respiratory time-series are ‘forgotten’ quickly in healthy subjects, while the memory for such events is significantly prolonged in pathological conditions such as asthma (respiratory time-series) and liver cirrhosis (heart-beat time-series). The memory length was significantly higher in patients with uncontrolled asthma compared to healthy volunteers. Likewise, it was significantly higher in patients with decompensated cirrhosis compared to those with compensated cirrhosis and healthy volunteers. We also observed that the cardio-respiratory system has simple low-order dynamics and short memory around its average, and high-order dynamics around rare fluctuations. PMID:24039811

  3. New generation photoelectric converter structure optimization using nano-structured materials

    NASA Astrophysics Data System (ADS)

    Dronov, A.; Gavrilin, I.; Zheleznyakova, A.

    2014-12-01

    In the present work, the influence of anodizing process parameters on PAOT geometric parameters was studied with the aim of optimizing and increasing ETA-cell efficiency. Optimal geometric parameters were obtained from the calculations. Parameters such as anodizing current density, electrolyte composition and temperature, as well as anodic oxidation process time, were selected for this investigation. Using the optimized TiO2 photoelectrode layer, with a 3.6 μm porous layer thickness and a pore diameter of more than 80 nm, the ETA-cell efficiency was increased threefold compared with the non-nanostructured TiO2 photoelectrode.

  4. [Methodology of determination of the time of death and outlooks for the further development].

    PubMed

    Novikov, P I; Vlasov, A Iu; Shved, E F; Natsentov, E O; Korshunov, N V; Belykh, S A

    2004-01-01

    A methodological analysis of diagnosing the time since death (termed by the authors the prescription of death coming, PDC) is described in the paper. Key philosophical foundations for further, more effective methods of PDC determination are elucidated. The main requirements applicable to postmortem diagnosis are defined. Different methods of modeling the postmortem process are demonstrated using the example of cadaver cooling, i.e., in real time, by analogue computer systems, and by mathematical modeling. The traditional empirical approach and the adaptive approach are comparatively analyzed for modeling postmortem processes in PDC diagnosis. A variety of promising directions for further related research is outlined.
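
    For the cadaver-cooling example, the simplest mathematical model is single-exponential (Newtonian) cooling, inverted for the elapsed time; forensic practice uses more elaborate two-exponential models, and the cooling constant below is an invented illustration value, not a validated forensic parameter.

      import math

      def time_since_death(t_measured, t_ambient, t0=37.0, k=0.0573):
          # Invert Newtonian cooling T(t) = Ta + (T0 - Ta) * exp(-k t)
          # for the elapsed time t in hours. k is a hypothetical constant.
          return -math.log((t_measured - t_ambient) / (t0 - t_ambient)) / k

      print(round(time_since_death(t_measured=30.0, t_ambient=20.0), 1), "hours")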

  5. Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations.

    PubMed

    Zala, Sarah M; Reitschmidt, Doris; Noll, Anton; Balazs, Peter; Penn, Dustin J

    2017-01-01

    House mice (Mus musculus) emit complex ultrasonic vocalizations (USVs) during social and sexual interactions, which have features similar to bird song (i.e., they are composed of several different types of syllables, uttered in succession over time to form a pattern of sequences). Manually processing complex vocalization data is time-consuming and potentially subjective, and therefore, we developed an algorithm that automatically detects mouse ultrasonic vocalizations (Automatic Mouse Ultrasound Detector or A-MUD). A-MUD is a script that runs on STx acoustic software (S_TOOLS-STx version 4.2.2), which is free for scientific use. This algorithm improved the efficiency of processing USV files, as it was 4-12 times faster than manual segmentation, depending upon the size of the file. We evaluated A-MUD error rates using manually segmented sound files as a 'gold standard' reference, and compared them to a commercially available program. A-MUD had lower error rates than the commercial software, as it detected significantly more correct positives, and fewer false positives and false negatives. The errors generated by A-MUD were mainly false negatives, rather than false positives. This study is the first to systematically compare error rates for automatic ultrasonic vocalization detection methods, and A-MUD and subsequent versions will be made available for the scientific community.
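
    Scoring a detector against manually segmented 'gold standard' files reduces to matching detected call intervals with reference intervals. A toy sketch of such interval-overlap scoring (the intervals are invented; this is not A-MUD's evaluation code):

      def overlaps(a, b):
          # True if two (start, end) intervals overlap.
          return a[0] < b[1] and b[0] < a[1]

      def detection_errors(detected, gold):
          # Count correct positives, false positives, and false negatives
          # by interval overlap, as in a gold-standard detector comparison.
          tp = sum(any(overlaps(d, g) for g in gold) for d in detected)
          fp = len(detected) - tp
          fn = sum(not any(overlaps(g, d) for d in detected) for g in gold)
          return tp, fp, fn

      gold = [(0.10, 0.15), (0.40, 0.48), (0.90, 1.02)]      # manual (s)
      detected = [(0.11, 0.14), (0.41, 0.47), (1.50, 1.55)]  # automatic
      print(detection_errors(detected, gold))  # -> (2, 1, 1)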

  6. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs.

    PubMed

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions, according to the result of auction processing, and then executes the partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and it shortens execution time. Finally, we compare LMCpri with a cloud-assisted architecture, and the results reveal that LMCpri offers a clear performance advantage over the cloud-assisted architecture.
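
    Queuing jobs in priority positions according to an auction outcome can be pictured as a max-priority queue keyed by bid, with arrival order breaking ties. A minimal sketch (job names and bids invented; not the LMCpri implementation):

      import heapq
      import itertools

      class AuctionQueue:
          # Jobs arriving together are queued by auction bid;
          # higher bids run earlier, ties break first-come-first-served.
          def __init__(self):
              self._heap, self._order = [], itertools.count()

          def submit(self, job, bid):
              # Negate the bid for a max-priority heap.
              heapq.heappush(self._heap, (-bid, next(self._order), job))

          def next_job(self):
              return heapq.heappop(self._heap)[2]

      q = AuctionQueue()
      for job, bid in [("render", 2.0), ("urgent-report", 9.5), ("backup", 1.0)]:
          q.submit(job, bid)
      print([q.next_job() for _ in range(3)])
      # -> ['urgent-report', 'render', 'backup']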

  7. Effects of thermomechanical processing on tensile and long-time creep behavior of Nb-1 percent Zr-0.1 percent C sheet

    NASA Technical Reports Server (NTRS)

    Titran, Robert H.; Uz, Mehmet

    1994-01-01

    Effects of thermomechanical processing on the mechanical properties of Nb-1 wt. percent Zr-0.1 wt. percent C, a candidate alloy for use in advanced space power systems, were investigated. Sheet bars were cold rolled into 1-mm-thick sheets following single, double, or triple extrusion operations at 1900 K. All the creep and tensile specimens were given a two-step heat treatment (1 hr at 1755 K + 2 hr at 1475 K) prior to testing. Tensile properties were determined at 300 K as well as at 1350 K. Microhardness measurements were made on cold-rolled, heat-treated, and crept samples. Creep tests were carried out at 1350 K and 34.5 MPa for times of about 10,000 to 19,000 hr. The results show that the number of extrusions had some effect on both the microhardness and the tensile properties. However, the long-time creep behavior of the samples was comparable, and all were found to have adequate properties to meet the design requirements of advanced power systems regardless of thermomechanical history. The results are discussed in correlation with processing and microstructure, and further compared with the results obtained from testing of Nb-1 wt. percent Zr and Nb-1 wt. percent Zr-0.06 wt. percent C alloys.

  8. A Multi-Objective Compounded Local Mobile Cloud Architecture Using Priority Queues to Process Multiple Jobs

    PubMed Central

    Wei, Xiaohui; Sun, Bingyi; Cui, Jiaxu; Xu, Gaochao

    2016-01-01

    As a result of the greatly increased use of mobile devices, the disadvantages of portable devices have gradually begun to emerge. To solve these problems, the use of mobile cloud computing assisted by cloud data centers has been proposed. However, cloud data centers are always very far from the mobile requesters. In this paper, we propose an improved multi-objective local mobile cloud model: Compounded Local Mobile Cloud Architecture with Dynamic Priority Queues (LMCpri). This new architecture briefly stores jobs that arrive simultaneously at the cloudlet in different priority positions, according to the result of auction processing, and then executes the partitioned tasks on capable helpers. In the Scheduling Module, NSGA-II is employed as the scheduling algorithm to shorten processing time and decrease requester cost relative to PSO and sequential scheduling. The simulation results show that setting the number of iterations to 30 is the best choice for the system. In addition, compared with LMCque, LMCpri is able to effectively accommodate a requester who would like his job to be executed in advance, and it shortens execution time. Finally, we compare LMCpri with a cloud-assisted architecture, and the results reveal that LMCpri offers a clear performance advantage over the cloud-assisted architecture. PMID:27419854

  9. A Comparative Study of Cyclic Oxidation and Sulfates-Induced Hot Corrosion Behavior of Arc-Sprayed Ni-Cr-Ti Coatings at Moderate Temperatures

    NASA Astrophysics Data System (ADS)

    Guo, Wenmin; Wu, Yuping; Zhang, Jianfeng; Hong, Sheng; Chen, Liyan; Qin, Yujiao

    2015-06-01

    The cyclic oxidation and sulfates-induced hot corrosion behaviors of a Ni-43Cr-0.3Ti arc-sprayed coating at 550-750 °C were characterized and compared in this study. In general, all the oxidation and hot corrosion kinetic curves of the coating followed a parabolic law, i.e., the weight of the specimens grew rapidly at first and then approached a gradual steady state. However, the initial stage of the hot corrosion process was approximately two times longer than that of the oxidation process, indicating that a longer preparation time was required for the formation of a protective scale in the former process. At 650 °C, the parabolic rate constant for hot corrosion was 7.2 × 10⁻¹² g²/(cm⁴·s), approximately 1.7 times higher than that for oxidation at the same temperature. The lower parabolic rate constant for oxidation was mainly attributed to the formation of a protective oxide scale, composed of a mixture of NiO, Cr2O3, and NiCr2O4, on the surface of the corroded specimens. However, as liquid molten salts emerged during hot corrosion, these protective oxides were dissolved and the coating degraded at an accelerated rate.
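
    The parabolic law means the squared weight gain per unit area grows linearly in time, (Δw/A)² = k_p·t. Using the hot-corrosion rate constant reported at 650 °C, the sketch below evaluates the expected weight gain and shows how k_p would be recovered from weight-gain data (the exposure times are invented for illustration):

      import numpy as np

      # Parabolic oxidation law: (weight gain per area)^2 = kp * t
      kp = 7.2e-12                       # g^2/(cm^4*s), hot-corrosion value
      t = np.array([1, 10, 50, 100]) * 3600.0   # exposure times in seconds
      dw = np.sqrt(kp * t)                      # weight gain, g/cm^2
      for hours, gain in zip(t / 3600, dw * 1000):
          print(f"{hours:6.0f} h -> {gain:.3f} mg/cm^2")

      # Fitting kp from measured (t, dw) pairs: slope of dw^2 versus t
      kp_fit = np.polyfit(t, dw**2, 1)[0]
      print(f"recovered kp = {kp_fit:.2e} g^2/(cm^4*s)")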

  10. Evaluation of the traffic parameters in a metropolitan area by fusing visual perceptions and CNN processing of webcam images.

    PubMed

    Faro, Alberto; Giordano, Daniela; Spampinato, Concetto

    2008-06-01

    This paper proposes a traffic monitoring architecture based on a high-speed communication network whose nodes are equipped with fuzzy processors and cellular neural network (CNN) embedded systems. It implements a real-time mobility information system in which visual perceptions of the traffic reported by people working in the area and video sequences of traffic taken from webcams are jointly processed to evaluate the fundamental traffic parameters for every street of a metropolitan area. This paper presents the whole methodology for data collection and analysis and compares the accuracy and processing time of the proposed soft computing techniques with those of other existing algorithms. Moreover, this paper discusses when and why it is recommended to fuse the visual perceptions of the traffic with the automated measurements taken from the webcams to compute the maximum traveling time likely needed to reach any destination in the traffic network.

  11. Properties of the internal clock.

    PubMed

    Church, R M

    1984-01-01

    Evidence has been cited for the following properties of the parts of the psychological process used for timing intervals: The pacemaker has a mean rate that can be varied by drugs, diet, and stress. The switch has a latency to operate, and it can be operated in various modes, such as run, stop, and reset. The accumulator times up in absolute, arithmetic units. Working memory can be reset on command or, after lesions have been created in the fimbria fornix, when there is a gap in a signal. The transformation from the accumulator to reference memory is done with a multiplicative constant that is affected by drugs, lesions, and individual differences. The comparator uses a ratio between the value in the accumulator (or working memory) and the value in reference memory. Finally, there must be multiple switch-accumulator modules to handle simultaneous temporal processing; and the psychological timing process may be used on some occasions and not on others.
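
    The pacemaker-accumulator-comparator chain described above lends itself to a toy simulation: Poisson pulses accumulate over the interval, reference memory stores a scaled count, and the comparator applies a ratio rule. All parameter values below are invented for illustration, not estimates from the timing literature.

      import numpy as np

      rng = np.random.default_rng(3)

      def timed_response(duration, rate=10.0, memory_k=1.0, threshold=0.2):
          # Pacemaker-accumulator sketch: Poisson pulses accumulate during
          # the interval; the comparator responds when the ratio of the
          # accumulator to reference memory is within the threshold of 1.
          pulses = rng.poisson(rate * duration)      # accumulator counts up
          reference = memory_k * rate * duration     # stored via constant k
          return abs(pulses / reference - 1) < threshold

      hits = sum(timed_response(2.0) for _ in range(1000))
      print(f"{hits / 10:.1f}% of trials judged 'on time'")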

  12. American Society of Composites, 32nd Technical Conference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aitharaju, Venkat; Yu, Hang; Zhao, Selina

    Resin transfer molding (RTM) has become increasingly popular for the manufacturing of composite parts. To enable high-volume manufacturing and obtain good quality parts at an acceptable cost to the automotive industry, accurate process simulation tools are necessary to optimize the process conditions. Toward that goal, General Motors and the ESI-group are developing a state-of-the-art process simulation tool for composite manufacturing in a project supported by the Department of Energy. This paper describes the modeling of various stages in resin transfer molding, such as resin injection, resin curing, and part distortion. An instrumented RTM system located at the General Motors Research and Development center was used to perform flat plaque molding experiments. The experimental measurements of fill time, in-mold pressure versus time, cure variation with time, and part deformation were compared with the model predictions, and very good correlations were observed.

  13. Optimization of the Switch Mechanism in a Circuit Breaker Using MBD Based Simulation

    PubMed Central

    Jang, Jin-Seok; Yoon, Chang-Gyu; Ryu, Chi-Young; Kim, Hyun-Woo; Bae, Byung-Tae; Yoo, Wan-Suk

    2015-01-01

    A circuit breaker is widely used to protect an electric power system from fault currents or system errors; in particular, the opening mechanism in a circuit breaker is important for protecting against current overflow in the electric system. In this paper, a multibody dynamic model of a circuit breaker, including the switch mechanism and the electromagnetic actuator system, was developed. Since the opening mechanism operates sequentially, optimization of the switch mechanism was carried out to improve the current breaking time. In the optimization process, design parameters were selected from the length and shape of each latch, which change the pivot points of the bearings to shorten the breaking time. To validate the optimization results, computational results were compared with physical tests recorded by a high-speed camera. The opening time of the optimized mechanism was decreased by 2.3 ms, which was confirmed by experiments. The switch mechanism design process, including the contact-latch system, can be improved by using this approach. PMID:25918740

  14. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVidia CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
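
    Winner-take-all fusion of trinocular disparities can be sketched as summing the matching-cost volumes from the horizontal and vertical camera pairs and taking the per-pixel minimum-cost disparity. A toy NumPy version (random cost volumes stand in for real matching costs; the paper's GPU implementation differs):

      import numpy as np

      def wta_fuse(cost_h, cost_v):
          # Winner-take-all fusion of two matching-cost volumes (H x W x D)
          # from the horizontal and vertical camera pairs: sum the costs,
          # then take the per-pixel disparity of minimum combined cost.
          return np.argmin(cost_h + cost_v, axis=2)

      # Toy volumes: a 4x4 image with 8 disparity hypotheses
      rng = np.random.default_rng(1)
      cost_h = rng.random((4, 4, 8))
      cost_v = rng.random((4, 4, 8))
      print(wta_fuse(cost_h, cost_v))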

  15. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTFC (modulation transfer function compensation) algorithm for a space remote-sensing camera, based on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module computes the edge spread function, the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC filtering and noise-suppression module applies the restoration filter while effectively suppressing noise. System Generator was used to design the image processing algorithms, simplifying the system design structure and the redesign process. Image gray gradient, point sharpness, edge contrast, and mid-to-high frequencies were enhanced. The SNR of the image after restoration was reduced by less than 1 dB compared with the original image. The image restoration system can be widely used in various fields.
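
    The MTF measurement chain (edge spread function to line spread function to MTF) can be sketched in a few lines: differentiate the ESF, Fourier-transform the result, and normalize at zero frequency. The synthetic blurred edge below is an assumption for illustration, not flight data.

      import numpy as np

      def mtf_from_esf(esf):
          # MTF from an edge-spread function: differentiate to get the
          # line-spread function, Fourier-transform, and normalize at DC.
          lsf = np.gradient(esf)
          mtf = np.abs(np.fft.rfft(lsf))
          return mtf / mtf[0]

      # Toy ESF: a smoothly blurred step edge
      x = np.linspace(-5, 5, 256)
      esf = 0.5 * (1 + np.tanh(x / 0.8))
      print(mtf_from_esf(esf)[:5])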

  16. The effect of manipulating context-specific information on perceptual-cognitive processes during a simulated anticipation task.

    PubMed

    McRobert, Allistair P; Ward, Paul; Eccles, David W; Williams, A Mark

    2011-08-01

    We manipulated contextual information in order to examine the perceptual-cognitive processes that support anticipation, using a simulated cricket-batting task. Skilled (N = 10) and less skilled (N = 10) cricket batters responded to video simulations of opponents bowling a cricket ball under high and low contextual-information conditions. Skilled batters were more accurate, demonstrated more effective search behaviours, and provided more detailed verbal reports of thinking. Moreover, when they viewed their opponent multiple times (high context), they reduced their mean fixation time. All batters improved performance and altered thought processes in the high-context condition, compared with responding to an opponent without previously seeing them bowl (low context). Findings illustrate how context influences performance and the search for relevant information when engaging in a dynamic, time-constrained task.

  17. Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques

    PubMed Central

    Barzaghi, Riccardo; De Gaetani, Carlo Iapige

    2018-01-01

    Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability, and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D’Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum data and GNSS data, with standard deviations of residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description, both in space and time, of the dam displacements than one based on pendulum observations. PMID:29498650
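
    Fitting analytical models to the aligned time series and comparing residual scatter can be sketched with ordinary least squares; here a linear trend plus an annual sinusoid is assumed as the model form, and the two synthetic series are invented stand-ins for the pendulum and GNSS data.

      import numpy as np

      def fit_seasonal(t, y):
          # Least-squares fit of y ~ a + b*t + c*sin(2 pi t) + d*cos(2 pi t),
          # with t in years; returns coefficients and residuals.
          A = np.column_stack([np.ones_like(t), t,
                               np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
          coef, *_ = np.linalg.lstsq(A, y, rcond=None)
          return coef, y - A @ coef

      t = np.linspace(0, 2.5, 300)            # 2.5 years of samples
      rng = np.random.default_rng(2)
      pendulum = 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)  # mm
      gnss = 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.8, t.size)      # mm
      for name, series in [("pendulum", pendulum), ("GNSS", gnss)]:
          _, resid = fit_seasonal(t, series)
          print(f"{name}: residual std = {resid.std():.2f} mm")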

  18. Event-Related Potentials of Bottom-Up and Top-Down Processing of Emotional Faces

    PubMed Central

    Moradi, Afsane; Mehrinejad, Seyed Abolghasem; Ghadiri, Mohammad; Rezaei, Farzin

    2017-01-01

    Introduction: An emotional stimulus is processed automatically in a bottom-up way, or it can be processed voluntarily in a top-down way. Imaging studies have indicated that bottom-up and top-down processing are mediated by different neural systems. However, the temporal differentiation of top-down versus bottom-up processing of facial emotional expressions has remained to be clarified. The present study aimed to explore the time course of these processes, as indexed by the emotion-specific P100 and late positive potential (LPP) event-related potential (ERP) components, in a group of healthy women. Methods: Fourteen female students of Alzahra University, Tehran, Iran, aged 18–30 years, voluntarily participated in the study. The subjects completed 2 overt and covert emotional tasks during ERP acquisition. Results: The results indicated that fearful expressions produced significantly greater P100 amplitude compared to other expressions. Moreover, the P100 findings showed an interaction between emotion and processing conditions. Further analysis indicated that within the overt condition, fearful expressions elicited greater P100 amplitude compared to other emotional expressions. Also, overt conditions produced significantly longer LPP latencies and greater amplitudes compared to covert conditions. Conclusion: Based on the results, early perceptual processing of fearful face expressions is enhanced in the top-down compared to the bottom-up way. This also suggests that the P100 may reflect an attentional bias toward fearful emotions. However, no such differentiation was observed within later processing stages of face expressions, as indexed by the ERP LPP component, in a top-down versus bottom-up way. Overall, this study provides a basis for further exploration of the bottom-up and top-down processes underlying emotion and may be particularly helpful for investigating the temporal characteristics associated with impaired emotional processing in psychiatric disorders. PMID:28446947

  19. Semiautomated Sample Preparation for Protein Stability and Formulation Screening via Buffer Exchange.

    PubMed

    Ying, William; Levons, Jaquan K; Carney, Andrea; Gandhi, Rajesh; Vydra, Vicky; Rubin, A Erik

    2016-06-01

    A novel semiautomated buffer-exchange process workflow was developed to enable efficient early protein formulation screening. An antibody fragment protein, BMSdab, was used to demonstrate the workflow. The process afforded 60% to 80% savings in cycle time and scientist time, as well as significant material efficiencies. These efficiencies ultimately facilitated execution of this stability work earlier in the drug development process, allowing this tool to inform the developability of potential candidates from a formulation perspective. To overcome the key technical challenges, the protein solution was buffer-exchanged by centrifuge filtration into formulations for stability screening in a 96-well plate with an ultrafiltration membrane, leveraging automated liquid handling and acoustic volume measurements to allow several cycles of exchanges. The formulations were transferred into a vacuum manifold and sterile-filtered into a rack holding 96 glass vials. The vials were sealed with a capmat of individual caps and placed in stability stations. The stability of the samples prepared by this process was demonstrated to be comparable to that of samples prepared by the standard process. This process enabled screening a number of formulations of a protein at an early pharmaceutical development stage with a short sample preparation time.

  20. Effects of valence and arousal on emotional word processing are modulated by concreteness: Behavioral and ERP evidence from a lexical decision task.

    PubMed

    Yao, Zhao; Yu, Deshui; Wang, Lili; Zhu, Xiangru; Guo, Jingjing; Wang, Zhenhong

    2016-12-01

    We investigated whether the effects of valence and arousal on emotional word processing are modulated by concreteness using event-related potentials (ERPs). The stimuli included concrete words (Experiment 1) and abstract words (Experiment 2) that were organized in an orthogonal design, with valence (positive and negative) and arousal (low and high) as factors in a lexical decision task. In Experiment 1, the impact of emotion on the effects of concrete words mainly resulted from the contribution of valence. Positive concrete words were processed more quickly than negative words and elicited a reduction of the N400 (300-410 ms) and an enhancement of the late positive complex (LPC; 450-750 ms), whereas no differences in response times or ERPs were found between high and low levels of arousal. In Experiment 2, the interaction between valence and arousal influenced the impact of emotion on the effects of abstract words. Low-arousal positive words were associated with shorter response times and a reduction of LPC amplitudes compared with high-arousal positive words. Low-arousal negative words were processed more slowly and elicited a reduction of the N170 (140-200 ms) compared with high-arousal negative words. The present study indicates that word concreteness modulates the contributions of valence and arousal to the effects of emotion, and that this modulation occurs during the early perceptual processing stage (N170) and the late elaborate processing stage (LPC) for emotional words, and at the end of all cognitive processes (i.e., as reflected by response times). These findings support an embodied theory of semantic representation and help clarify prior inconsistent findings regarding the ways in which valence and arousal influence different stages of word processing, at least in a lexical decision task.
