Sample records for average computation time

  1. 5 CFR 831.703 - Computation of annuities for part-time service.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... during those periods of creditable service. Pre-April 7, 1986, average pay means the largest annual rate..., 1986, service is computed in accordance with 5 U.S.C. 8339 using the pre-April 7, 1986, average pay and... computed in accordance with 5 U.S.C. 8339 using the post-April 6, 1986, average pay and length of service...

  2. Computations of unsteady multistage compressor flows in a workstation environment

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen L.

    1992-01-01

    High-end graphics workstations are becoming a necessary tool in the computational fluid dynamics environment. In addition to their graphic capabilities, workstations of the latest generation have powerful floating-point-operation capabilities. As workstations become common, they could provide valuable computing time for such applications as turbomachinery flow calculations. This report discusses the issues involved in implementing an unsteady, viscous multistage-turbomachinery code (STAGE-2) on workstations. It then describes work in which the workstation version of STAGE-2 was used to study the effects of axial-gap spacing on the time-averaged and unsteady flow within a 2 1/2-stage compressor. The results included time-averaged surface pressures, time-averaged pressure contours, standard deviation of pressure contours, pressure amplitudes, and force polar plots.

  3. Performance Comparison of Big Data Analytics With NEXUS and Giovanni

    NASA Astrophysics Data System (ADS)

    Jacob, J. C.; Huang, T.; Lynnes, C.

    2016-12-01

    NEXUS is an emerging data-intensive analysis framework developed with a new approach for handling science data that enables large-scale data analysis. It is available as open source. We compare the performance of NEXUS and Giovanni for three statistics algorithms applied to NASA datasets. Giovanni is a statistics web service at the NASA Distributed Active Archive Centers (DAACs). NEXUS is a cloud-computing environment developed at JPL and built on Apache Solr, Cassandra, and Spark. We compute a global time-averaged map, a correlation map, and an area-averaged time series. The first two algorithms average over time to produce a value for each pixel in a 2-D map. The third algorithm averages spatially to produce a single value for each time step. This talk reports benchmark comparison findings that indicate a 15x speedup with NEXUS over Giovanni when computing an area-averaged time series of daily precipitation rate for the Tropical Rainfall Measuring Mission (TRMM, 0.25 degree spatial resolution) over the Continental United States across 14 years (2000-2014), using 64-way parallelism and 545 tiles per granule. For computing an 18-year (1998-2015) TRMM daily precipitation global time-averaged map (2.5x speedup) and an 18-year global map of correlation between TRMM daily precipitation and TRMM real-time daily precipitation (7x speedup), 16-way parallelism with 16 tiles per granule worked best with NEXUS. These and other benchmark results will be presented along with key lessons learned in applying the NEXUS tiling approach to big data analytics in the cloud.
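
    The two averaging patterns described above are easy to state independently of NEXUS or Giovanni: a time-averaged map collapses the time axis to one value per pixel, and an area-averaged time series collapses the spatial axes to one value per time step. A minimal NumPy sketch with a hypothetical gridded precipitation array (array shapes and values are illustrative only, not TRMM data, and no latitude weighting is applied):

    ```python
    import numpy as np

    # Hypothetical daily precipitation grid, shape (time, lat, lon), in mm/day.
    rng = np.random.default_rng(0)
    precip = rng.gamma(shape=2.0, scale=1.5, size=(365, 90, 180))

    # Time-averaged map: average over the time axis -> one value per pixel.
    time_averaged_map = precip.mean(axis=0)           # shape (90, 180)

    # Area-averaged time series: average over the spatial axes -> one value per time step.
    area_averaged_series = precip.mean(axis=(1, 2))   # shape (365,)

    print(time_averaged_map.shape, area_averaged_series.shape)
    ```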

  4. A Real-Time Phase Vector Display for EEG Monitoring

    NASA Technical Reports Server (NTRS)

    Finger, Herbert J.; Anliker, James E.; Rimmer, Tamara

    1973-01-01

    A real-time, computer-based, phase vector display system has been developed which will output a vector whose phase is equal to the delay between a trigger and the peak of a function which is quasi-coherent with respect to the trigger. The system also contains a sliding averager which enables the operator to average successive trials before calculating the phase vector. Data collection, averaging and display generation are performed on a LINC-8 computer. Output displays appear on several X-Y CRT display units and on a kymograph camera/oscilloscope unit which is used to generate photographs of time-varying phase vectors or contourograms of time-varying averages of input functions.

  5. A STUDY OF SOME SOFTWARE PARAMETERS IN TIME-SHARING SYSTEMS.

    DTIC Science & Technology

    A review is made of some existing time-sharing computer systems and an exploration of various software characteristics is conducted. This...of the various parameters upon the average response cycle time, the average number in the queue awaiting service, the average length of time a user is

  6. Average waiting time in FDDI networks with local priorities

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    A method is introduced to compute the average queuing delay experienced by different priority group messages in an FDDI node. It is assumed that no FDDI MAC layer priorities are used. Instead, a priority structure is introduced to the messages locally at a higher protocol layer (e.g., the network layer). Such a method was planned for use in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution in the FDDI network is asymmetric.

  7. Effects of Turbulence Model on Prediction of Hot-Gas Lateral Jet Interaction in a Supersonic Crossflow

    DTIC Science & Technology

    2015-07-01

    performance computing time from the US Department of Defense (DOD) High Performance Computing Modernization program at the US Army Research Laboratory...Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time ...dimensional, compressible, Reynolds-averaged Navier-Stokes (RANS) equations are solved using a finite volume method. A point-implicit time-integration

  8. Applicability of Time-Averaged Holography for Micro-Electro-Mechanical System Performing Non-Linear Oscillations

    PubMed Central

    Palevicius, Paulius; Ragulskis, Minvydas; Palevicius, Arvydas; Ostasevicius, Vytautas

    2014-01-01

    Optical investigation of movable microsystem components using time-averaged holography is presented in this paper. It is shown that even harmonic excitation of a non-linear microsystem may result in unpredictable chaotic motion. Analytical relationships between the parameters of the chaotic oscillations and the formation of time-averaged fringes provide a deeper insight into the computational and experimental interpretation of time-averaged MEMS holograms. PMID:24451467

  9. SIMULATION OF FLOOD HYDROGRAPHS FOR GEORGIA STREAMS.

    USGS Publications Warehouse

    Inman, E.J.; Armbruster, J.T.

    1986-01-01

    Flood hydrographs are needed for the design of many highway drainage structures and embankments. A method for simulating these flood hydrographs at urban and rural ungauged sites in Georgia is presented. The O'Donnell method was used to compute unit hydrographs from 355 flood events from 80 stations. An average unit hydrograph and an average lag time were computed for each station. These average unit hydrographs were transformed to unit hydrographs having durations of one-fourth, one-third, one-half, and three-fourths lag time and then reduced to dimensionless terms by dividing the time by lag time and the discharge by peak discharge. Hydrographs were simulated for these 355 flood events and their widths were compared with the widths of the observed hydrographs at 50 and 75 percent of peak flow. For simulating hydrographs at sites larger than 500 mi², the U.S. Geological Survey computer model CONROUT can be used.

  10. Cycle-averaged dynamics of a periodically driven, closed-loop circulation model

    NASA Technical Reports Server (NTRS)

    Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.

    2005-01-01

    Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms.

  11. ASSURED CLOUD COMPUTING UNIVERSITY CENTER OF EXCELLENCE (ACC UCOE)

    DTIC Science & Technology

    2018-01-18

    average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed...infrastructure security -Design of algorithms and techniques for real-time assuredness in cloud computing -Map-reduce task assignment with data locality...46 DESIGN OF ALGORITHMS AND TECHNIQUES FOR REAL-TIME ASSUREDNESS IN CLOUD COMPUTING

  12. Explicit and implicit calculations of turbulent cavity flows with and without yaw angle

    NASA Astrophysics Data System (ADS)

    Yen, Guan-Wei

    1989-08-01

    Computations were performed to simulate turbulent supersonic flows past three-dimensional deep cavities with and without yaw. Simulations of these self-sustained oscillatory flows were generated through time-accurate solutions of the Reynolds-averaged complete Navier-Stokes equations using two different schemes: (1) the MacCormack finite-difference scheme; and (2) an implicit, upwind, finite-volume scheme. The second scheme, which is approximately 30 percent faster, is found to produce better time-accurate results. The Reynolds stresses were modeled using the Baldwin-Lomax algebraic turbulence model with certain modifications. The computational results include instantaneous and time-averaged flow properties everywhere in the computational domain. Time series analyses were performed for the instantaneous pressure values on the cavity floor. The time-averaged computational results show good agreement with the experimental data along the cavity floor and walls. When the yaw angle is nonzero, there is no longer a single length scale (length-to-depth ratio) for the flow, as is the case for zero yaw angle flow. The dominant directions and inclinations of the vortices are dramatically different for this nonsymmetric flow. The vortex shedding from the cavity into the mainstream flow is captured computationally. This phenomenon, which is due to the oscillation of the shear layer, is confirmed by the solutions of both schemes.

  13. Explicit and implicit calculations of turbulent cavity flows with and without yaw angle. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Yen, Guan-Wei

    1989-01-01

    Computations were performed to simulate turbulent supersonic flows past three-dimensional deep cavities with and without yaw. Simulations of these self-sustained oscillatory flows were generated through time-accurate solutions of the Reynolds-averaged complete Navier-Stokes equations using two different schemes: (1) the MacCormack finite-difference scheme; and (2) an implicit, upwind, finite-volume scheme. The second scheme, which is approximately 30 percent faster, is found to produce better time-accurate results. The Reynolds stresses were modeled using the Baldwin-Lomax algebraic turbulence model with certain modifications. The computational results include instantaneous and time-averaged flow properties everywhere in the computational domain. Time series analyses were performed for the instantaneous pressure values on the cavity floor. The time-averaged computational results show good agreement with the experimental data along the cavity floor and walls. When the yaw angle is nonzero, there is no longer a single length scale (length-to-depth ratio) for the flow, as is the case for zero yaw angle flow. The dominant directions and inclinations of the vortices are dramatically different for this nonsymmetric flow. The vortex shedding from the cavity into the mainstream flow is captured computationally. This phenomenon, which is due to the oscillation of the shear layer, is confirmed by the solutions of both schemes.

  14. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3-octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3-octave band levels inside the Space Shuttle PLB, where f_i is the 1/3-octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
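
    Taking the fitted expressions quoted above at face value, the optimum 1/3-octave-band averaging time follows directly from the band center frequency. A small illustrative computation (only the coefficients 1.14, 1.65, 4.88, and 7.10 and the f^(-0.2) form come from the abstract; the band centers chosen here are arbitrary examples):

    ```python
    # Optimum averaging times quoted in the abstract:
    #   Titan IV PLF: T_o = 1.14 s (overall), T_oi = 4.88 * f_i**(-0.2) s (1/3-octave bands)
    #   Shuttle PLB:  T_o = 1.65 s (overall), T_oi = 7.10 * f_i**(-0.2) s

    def third_octave_averaging_time(f_center_hz, coefficient):
        """Optimum 1/3-octave-band averaging time from the quoted fit T = c * f**(-0.2)."""
        return coefficient * f_center_hz ** -0.2

    for f in (31.5, 125.0, 500.0, 2000.0):   # representative band centers, Hz
        print(f, round(third_octave_averaging_time(f, 4.88), 2),
                 round(third_octave_averaging_time(f, 7.10), 2))
    ```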

  15. 26 CFR 1.411(d)-3 - Section 411(d)(6) protected benefits.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... an annual benefit of 2% of career average pay times years of service commencing at normal retirement... an annual benefit of 1.3% of final pay times years of service, with final pay computed as the average... has 16 years of service, M's career average pay is $37,500, and the average of M's highest 3...

  16. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
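
    The trade-off described here, one long serial trajectory versus many short independent trajectories that can run in parallel, can be illustrated with a toy stationary process. This is only a sketch of the sampling idea, not the equilibrium-molecular-dynamics thermal-conductivity calculation itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def correlated_signal(n_steps, phi=0.95):
        """Toy correlated signal (AR(1)) standing in for a fluctuating MD observable."""
        x = np.empty(n_steps)
        x[0] = rng.normal()
        for i in range(1, n_steps):
            x[i] = phi * x[i - 1] + rng.normal()
        return x

    # Time sampling: one long trajectory, evaluated sequentially.
    time_average = correlated_signal(200_000).mean()

    # Ensemble sampling: many short, independent trajectories (each could run on its
    # own processor), averaged individually and then over the ensemble.
    ensemble_average = np.mean([correlated_signal(2_000).mean() for _ in range(100)])

    print(time_average, ensemble_average)   # both estimate the same stationary mean
    ```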

  17. Computer games and prosocial behaviour.

    PubMed

    Mengel, Friederike

    2014-01-01

    We relate different self-reported measures of computer use to individuals' propensity to cooperate in the Prisoner's dilemma. The average cooperation rate is positively related to the self-reported amount of time participants spend playing computer games. None of the other computer time use variables (including time spent on social media, browsing the internet, working, etc.) are significantly related to cooperation rates.

  18. A modification in the technique of computing average lengths from the scales of fishes

    USGS Publications Warehouse

    Van Oosten, John

    1953-01-01

    In virtually all the studies that employ scales, otoliths, or bony structures to obtain the growth history of fishes, it has been the custom to compute lengths for each individual fish and from these data obtain the average growth rates for any particular group. This method involves a considerable amount of mathematical manipulation, time, and effort. Theoretically it should be possible to obtain the same information simply by averaging the scale measurements for each year of life and the length of the fish employed and computing the average lengths from these data. This method would eliminate all calculations for individual fish. Although Van Oosten (1929: 338) pointed out many years ago the validity of this method of computation, his statements apparently have been overlooked by subsequent investigators.
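
    A small numerical sketch of the two computation routes; the direct-proportion back-calculation formula and the sample values are only illustrative, since Van Oosten's point concerns the order of averaging rather than any particular back-calculation formula:

    ```python
    # Back-calculated length at a given annulus, direct-proportion form:
    #   L_age = (S_age / S_total) * L_capture
    fish = [  # (scale radius at annulus, total scale radius, length at capture in mm)
        (1.8, 4.0, 240.0),
        (2.1, 4.4, 260.0),
        (1.6, 3.8, 230.0),
        (2.0, 4.2, 255.0),
    ]

    # Route 1: compute a length for every individual fish, then average.
    per_fish = [s_age / s_tot * length for s_age, s_tot, length in fish]
    route1 = sum(per_fish) / len(per_fish)

    # Route 2 (the modification discussed): average the scale measurements and the
    # fish lengths first, then apply the formula once to the averages.
    mean_s_age = sum(f[0] for f in fish) / len(fish)
    mean_s_tot = sum(f[1] for f in fish) / len(fish)
    mean_len = sum(f[2] for f in fish) / len(fish)
    route2 = mean_s_age / mean_s_tot * mean_len

    print(round(route1, 1), round(route2, 1))   # closely comparable results
    ```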

  19. Fault Tolerant Real-Time Networks

    DTIC Science & Technology

    2007-05-30

    Alberto Sangiovanni-Vincentelli, editors, Hybrid Systems: Computation and Control. Fourth International Workshop (HSCC'01), Rome, Italy, March 2001...average dwell time by solving optimization problems. In Ashish Tiwari and Joao P. Hespanha, editors, Hybrid Systems: Computation and Control (HSCC 06

  20. Numerical investigation of airflow in an idealised human extra-thoracic airway: a comparison study

    PubMed Central

    Chen, Jie; Gutmark, Ephraim

    2013-01-01

    Large eddy simulation (LES) technique is employed to numerically investigate the airflow through an idealised human extra-thoracic airway under different breathing conditions, 10 l/min, 30 l/min, and 120 l/min. The computational results are compared with single and cross hot-wire measurements, and with time-averaged flow field computed by standard k-ω and k-ω-SST Reynolds averaged Navier-Stokes (RANS) models and the Lattice-Boltzmann method (LBM). The LES results are also compared to root-mean-square (RMS) flow field computed by the Reynolds stress model (RSM) and LBM. LES generally gives better prediction of the time-averaged flow field than RANS models and LBM. LES also provides better estimation of the RMS flow field than both the RSM and the LBM. PMID:23619907

  1. 5 CFR 550.707 - Computation of severance pay fund.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... hours in the employee's basic work schedule (excluding overtime hours) varies during the year because of part-time work requirements, compute the weekly average of those hours and multiply that average by the... differential pay under 5 U.S.C. 5343(f) varies from week to week under a regularly recurring cycle of work...

  2. On the tip of the tongue: learning typing and pointing with an intra-oral computer interface.

    PubMed

    Caltenco, Héctor A; Breidegard, Björn; Struijk, Lotte N S Andreasen

    2014-07-01

    To evaluate typing and pointing performance and improvement over time of four able-bodied participants using an intra-oral tongue-computer interface for computer control. A physically disabled individual may lack the ability to efficiently control standard computer input devices. There have been several efforts to produce and evaluate interfaces that provide individuals with physical disabilities the possibility to control personal computers. Training with the intra-oral tongue-computer interface was performed by playing games over 18 sessions. Skill improvement was measured through typing and pointing exercises at the end of each training session. Typing throughput improved from averages of 2.36 to 5.43 correct words per minute. Pointing throughput improved from averages of 0.47 to 0.85 bits/s. Target tracking performance, measured as relative time on target, improved from averages of 36% to 47%. Path following throughput improved from averages of 0.31 to 0.83 bits/s and decreased to 0.53 bits/s with more difficult tasks. Learning curves support the notion that the tongue can rapidly learn novel motor tasks. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, which makes the tongue a feasible input organ for computer control. Intra-oral computer interfaces could provide individuals with severe upper-limb mobility impairments the opportunity to control computers and automatic equipment. Typing and pointing performance of the tongue-computer interface is comparable to performances of other proficient assistive devices, but does not cause fatigue easily and might be invisible to other people, which is highly prioritized by assistive device users. Combination of visual and auditory feedback is vital for a good performance of an intra-oral computer interface and helps to reduce involuntary or erroneous activations.

  3. A Lagrangian subgrid-scale model with dynamic estimation of Lagrangian time scale for large eddy simulation of complex flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman; Mahesh, Krishnan

    2012-08-01

    The dynamic Lagrangian averaging approach for the dynamic Smagorinsky model for large eddy simulation is extended to an unstructured grid framework and applied to complex flows. The Lagrangian time scale is dynamically computed from the solution and does not need any adjustable parameter. The time scale used in the standard Lagrangian model contains an adjustable parameter θ. The dynamic time scale is computed based on a "surrogate-correlation" of the Germano-identity error (GIE). Also, a simple material derivative relation is used to approximate GIE at different events along a pathline instead of Lagrangian tracking or multi-linear interpolation. Previously, the time scale for homogeneous flows was computed by averaging along directions of homogeneity. The present work proposes modifications for inhomogeneous flows. This development allows the Lagrangian averaged dynamic model to be applied to inhomogeneous flows without any adjustable parameter. The proposed model is applied to LES of turbulent channel flow on unstructured zonal grids at various Reynolds numbers. Improvement is observed when compared to other averaging procedures for the dynamic Smagorinsky model, especially at coarse resolutions. The model is also applied to flow over a cylinder at two Reynolds numbers and good agreement with previous computations and experiments is obtained. Noticeable improvement is obtained using the proposed model over the standard Lagrangian model. The improvement is attributed to a physically consistent Lagrangian time scale. The model also shows good performance when applied to flow past a marine propeller in an off-design condition; it regularizes the eddy viscosity and adjusts locally to the dominant flow features.

  4. A straightforward frequency-estimation technique for GPS carrier-phase time transfer.

    PubMed

    Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen

    2006-09-01

    Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10^-15 at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10^16. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 x 10^-16.
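
    A minimal sketch of the averaging idea, assuming each processing batch yields a time-offset series x(t) for the clock pair: the fractional frequency of a batch is the slope of a least-squares line fit, and the multiday frequency is the mean of the batch slopes, so the batch-boundary discontinuities never enter. All numbers below are invented for illustration:

    ```python
    import numpy as np

    def batch_frequency(t_seconds, time_offset_seconds):
        """Fractional frequency of one batch = slope of a least-squares fit to x(t)."""
        slope, _intercept = np.polyfit(t_seconds, time_offset_seconds, 1)
        return slope   # dimensionless (s/s)

    rng = np.random.default_rng(2)
    batches = []
    for day in range(5):                            # five 1-day batches, 30 s data spacing
        t = np.arange(0.0, 86_400.0, 30.0)
        x = 3e-16 * t                               # assumed fractional frequency offset
        x += rng.normal(scale=5e-12, size=t.size)   # measurement noise
        x += rng.normal(scale=1e-9)                 # arbitrary batch-boundary offset (jump)
        batches.append((t, x))

    mean_frequency = np.mean([batch_frequency(t, x) for t, x in batches])
    print(mean_frequency)                           # ~3e-16, unaffected by the jumps
    ```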

  5. Measuring exertion time, duty cycle and hand activity level for industrial tasks using computer vision.

    PubMed

    Akkas, Oguz; Lee, Cheng Hsien; Hu, Yu Hen; Harris Adamson, Carisa; Rempel, David; Radwin, Robert G

    2017-12-01

    Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm, and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found HAL remained unaffected when the DC error was less than 5%. Thus, a DC error of less than 10% will change HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
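
    Once a per-frame exertion state is available, exertion time and duty cycle reduce to simple frame arithmetic. The sketch below mocks up the exertion classification (which is what the DT and FVT algorithms actually provide) as a given binary vector; the frame rate and values are hypothetical:

    ```python
    def exertion_time_and_duty_cycle(exertion_frames, fps=30.0):
        """Exertion time, total time, and duty cycle from a per-frame binary exertion signal."""
        exertion_time = sum(exertion_frames) / fps    # seconds spent exerting
        total_time = len(exertion_frames) / fps       # seconds in the task cycle
        return exertion_time, total_time, 100.0 * exertion_time / total_time

    # Hypothetical 10-second clip at 30 fps: 1 = hand exerting force, 0 = resting.
    frames = [1] * 90 + [0] * 60 + [1] * 60 + [0] * 90
    ex_t, tot_t, dc = exertion_time_and_duty_cycle(frames)
    print(ex_t, tot_t, dc)    # 5.0 s exertion, 10.0 s total, DC = 50%
    ```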

  6. Use of the computer and Internet among Italian families: first national study.

    PubMed

    Bricolo, Francesco; Gentile, Douglas A; Smelser, Rachel L; Serpelloni, Giovanni

    2007-12-01

    Although home Internet access has continued to increase, little is known about actual usage patterns in homes. This nationally representative study of over 4,700 Italian households with children measured computer and Internet use of each family member across 3 months. Data on actual computer and Internet usage were collected by Nielsen//NetRatings service and provide national baseline information on several variables for several age groups separately, including children, adolescents, and adult men and women. National averages are shown for the average amount of time spent using computers and on the Web, the percentage of each age group online, and the types of Web sites viewed. Overall, about one-third of children ages 2 to 11, three-fourths of adolescents and adult women, and over four-fifths of adult men access the Internet each month. Children spend an average of 22 hours/month on the computer, with a jump to 87 hours/month for adolescents. Adult women spend less time (about 60 hours/month), and adult men spend more (over 100). The types of Web sites visited are reported, including the top five for each age group. In general, search engines and Web portals are the top sites visited, regardless of age group. These data provide a baseline for comparisons across time and cultures.

  7. Numerical Prediction of Pitch Damping Stability Derivatives for Finned Projectiles

    DTIC Science & Technology

    2013-11-01

    in part by a grant of high-performance computing time from the U.S. DOD High Performance Computing Modernization Program (HPCMP) at the Army...to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data...12 3.3.2 Time-Accurate Simulations

  8. Low-flow analysis and selected flow statistics representative of 1930-2002 for streamflow-gaging stations in or near West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2006-01-01

    Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was 582.5 percent greater for the 7-day 10-year low flow computed for 1970-1979 compared to the value computed for 1930-2002. The hydrologically based low flows indicate approximately equal or smaller absolute differences than biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologic-based low flow (7Q10) respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent between magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) for the same time period as the greatest difference determined in the annual analysis. The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for the individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970 when minimum flows were greater than the average between 1930 and 2002 and (2) some short-term station records are mostly during dry periods, whereas others are mostly during wet periods. A criterion-based sampling of the individual station's record periods at stations was taken to reduce the effects of statistics computed for the entire record periods not representing the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 where the average of the regional minimum flows are nearly equal to the average for 1930-2002 are determined as representative of 1930-2002. 
Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow in ungaged stream locations.
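
    For readers unfamiliar with the notation, the 7Q10 is the annual minimum 7-day mean flow with a 10-year recurrence interval. The sketch below shows one common way to compute such a statistic (7-day moving mean, annual minima, then a frequency fit); the log-normal fit and the synthetic data are illustrative assumptions, not the fitting procedure used in the report:

    ```python
    import numpy as np

    def seven_q_ten(daily_flows_by_year):
        """7Q10 estimate: annual minima of the 7-day moving mean, log-normal 10-year quantile."""
        annual_minima = []
        for flows in daily_flows_by_year:               # one array of daily flows per year
            seven_day_mean = np.convolve(flows, np.ones(7) / 7.0, mode="valid")
            annual_minima.append(seven_day_mean.min())
        logs = np.log(annual_minima)
        # 10-year low flow ~ 10% annual non-exceedance probability; z(0.10) ~ -1.2816.
        return float(np.exp(logs.mean() - 1.2816 * logs.std(ddof=1)))

    rng = np.random.default_rng(8)
    synthetic_years = [np.exp(rng.normal(loc=3.0, scale=0.6, size=365)) for _ in range(30)]
    print(seven_q_ten(synthetic_years))                 # synthetic flows, arbitrary units
    ```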

  9. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
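
    For context, speckle contrast is the ratio of the standard deviation to the mean intensity over a small pixel neighbourhood, and the system factor mentioned above enters as a multiplicative correction. A sketch under those assumptions (the window size, the factor value, and the synthetic frame are arbitrary placeholders):

    ```python
    import numpy as np

    def local_speckle_contrast(image, window=7):
        """Speckle contrast K = sigma/mean over a sliding window (plain loops for clarity)."""
        h, w = image.shape
        k = np.zeros((h - window + 1, w - window + 1))
        for i in range(k.shape[0]):
            for j in range(k.shape[1]):
                patch = image[i:i + window, j:j + window]
                k[i, j] = patch.std() / patch.mean()
        return k

    rng = np.random.default_rng(3)
    frame = rng.exponential(scale=100.0, size=(64, 64))  # toy time-integrated speckle frame

    k_measured = local_speckle_contrast(frame)
    system_factor = 1.2     # hypothetical correction for pixel-size spatial averaging
    k_corrected = system_factor * k_measured
    print(k_measured.mean(), k_corrected.mean())
    ```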

  10. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
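
    The decomposition described above (a gradient step on the data-fidelity term followed by a proximal step on the regularizer, with the penalty never differentiated) can be illustrated with a generic proximal-gradient iteration on a small linear inverse problem with an l1 penalty. This is a structural sketch only, not the USCT waveform-inversion code and not the exact dual-averaging update:

    ```python
    import numpy as np

    def soft_threshold(v, tau):
        """Proximal operator of tau*||.||_1; the nonsmooth penalty is never differentiated."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    rng = np.random.default_rng(4)
    A = rng.normal(size=(80, 120))            # stand-in forward operator
    x_true = np.zeros(120)
    x_true[::15] = 1.0                        # sparse "image"
    y = A @ x_true + rng.normal(scale=0.01, size=80)

    lam = 0.1
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    x = np.zeros(120)
    for _ in range(300):
        grad = A.T @ (A @ x - y)                            # gradient step: data fidelity only
        x = soft_threshold(x - step * grad, step * lam)     # proximal step: regularizer only

    print(np.linalg.norm(x - x_true))
    ```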

  11. Network Coding for Function Computation

    ERIC Educational Resources Information Center

    Appuswamy, Rathinakumar

    2011-01-01

    In this dissertation, the following "network computing problem" is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing…

  12. Computed versus measured ion velocity distribution functions in a Hall effect thruster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrigues, L.; CNRS, LAPLACE, F-31062 Toulouse; Mazouffre, S.

    2012-06-01

    We compare time-averaged and time-varying measured and computed ion velocity distribution functions in a Hall effect thruster for typical operating conditions. The ion properties are measured by means of laser induced fluorescence spectroscopy. Simulations of the plasma properties are performed with a two-dimensional hybrid model. In the electron fluid description of the hybrid model, the anomalous transport responsible for the electron diffusion across the magnetic field barrier is deduced from the experimental profile of the time-averaged electric field. The use of a steady state anomalous mobility profile allows the hybrid model to capture some properties like the time-averaged ion mean velocity. Yet, the model fails at reproducing the time evolution of the ion velocity. This fact reveals a complex underlying physics that necessitates accounting for the electron dynamics over a short time-scale. This study also shows the necessity for electron temperature measurements. Moreover, the strength of the self-magnetic field due to the rotating Hall current is found negligible.

  13. Image communication scheme based on dynamic visual cryptography and computer generated holography

    NASA Astrophysics Data System (ADS)

    Palevicius, Paulius; Ragulskis, Minvydas

    2015-01-01

    Computer generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with the Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by a naked eye only if the amplitude of harmonic oscillations corresponds to an accurately preselected value. The proposed visual image encryption scheme is based on computer generated holography, optical time-averaging moiré and principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed by using the Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.

  14. The Red Atrapa Sismos (Quake Catcher Network in Mexico): assessing performance during large and damaging earthquakes.

    USGS Publications Warehouse

    Dominguez, Luis A.; Yildirim, Battalgazi; Husker, Allen L.; Cochran, Elizabeth S.; Christensen, Carl; Cruz-Atienza, Victor M.

    2015-01-01

    Each volunteer computer monitors ground motion and communicates using the Berkeley Open Infrastructure for Network Computing (BOINC; Anderson, 2004). Using a standard short-term average/long-term average (STA/LTA) algorithm (Earle and Shearer, 1994; Cochran, Lawrence, Christensen, Chung, 2009; Cochran, Lawrence, Christensen, and Jakka, 2009), volunteer computer and sensor systems detect abrupt changes in the acceleration recordings. Each time a possible trigger signal is declared, a small package of information containing sensor and ground-motion information is streamed to one of the QCN servers (Chung et al., 2011). Trigger signals, correlated in space and time, are then processed by the QCN server to look for potential earthquakes.
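
    For reference, an STA/LTA detector of the kind cited can be sketched in a few lines; the window lengths, threshold, and synthetic record below are placeholders rather than QCN operational values:

    ```python
    import numpy as np

    def sta_lta_triggers(acceleration, sta_len=50, lta_len=1000, threshold=4.0):
        """Indices where the short-term/long-term average energy ratio exceeds a threshold."""
        energy = acceleration ** 2
        csum = np.concatenate(([0.0], np.cumsum(energy)))
        triggers = []
        for n in range(lta_len, len(energy)):
            sta = (csum[n] - csum[n - sta_len]) / sta_len
            lta = (csum[n] - csum[n - lta_len]) / lta_len
            if lta > 0.0 and sta / lta > threshold:
                triggers.append(n)
        return triggers

    rng = np.random.default_rng(5)
    accel = rng.normal(scale=1e-3, size=20_000)                # background noise
    accel[12_000:12_200] += rng.normal(scale=2e-2, size=200)   # injected transient
    print(sta_lta_triggers(accel)[:5])                         # first few trigger indices
    ```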

  15. Dimension reduction method for SPH equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2011-08-26

    A Smoothed Particle Hydrodynamics model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, a time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g., average velocity, concentration, and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing an accurate approximation of the solution and an accurate prediction of the average behavior of the system.

  16. Convergence to equilibrium under a random Hamiltonian.

    PubMed

    Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.
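
    A minimal numerical illustration of the quoted scaling, using a random Hermitian matrix as the Hamiltonian; since the result is an order-of-magnitude statement, the constant prefactor is not meaningful and hbar is set to 1:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    d = 64
    H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (H + H.conj().T) / 2.0                   # random Hermitian "Hamiltonian"

    energies = np.linalg.eigvalsh(H)
    # Bohr frequencies: all energy differences E_i - E_j for i != j (hbar = 1).
    diffs = energies[:, None] - energies[None, :]
    bohr = np.abs(diffs[~np.eye(d, dtype=bool)])

    equilibration_time_scale = 1.0 / bohr.mean()   # inverse of the mean Bohr frequency
    print(equilibration_time_scale)
    ```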

  17. Convergence to equilibrium under a random Hamiltonian

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K.; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  18. Turbomachinery

    NASA Technical Reports Server (NTRS)

    Simoneau, Robert J.; Strazisar, Anthony J.; Sockol, Peter M.; Reid, Lonnie; Adamczyk, John J.

    1987-01-01

    The discipline research in turbomachinery, which is directed toward building the tools needed to understand such a complex flow phenomenon, is based on the fact that flow in turbomachinery is fundamentally unsteady or time dependent. Success in building a reliable inventory of analytic and experimental tools will depend on how the time and time-averages are treated, as well as on how the space and space-averages are treated. The raw tools at our disposal (both experimental and computational) are truly powerful and their numbers are growing at a staggering pace. As a result of this power, a case can be made that information is outstripping understanding. The challenge is to develop a set of computational and experimental tools which genuinely increase understanding of the fluid flow and heat transfer in a turbomachine. Viewgraphs outline a philosophy based on working on a stairstep hierarchy of mathematical and experimental complexity to build a system of tools that enables one to aggressively design the turbomachinery of the next century. Examples of the types of computational and experimental tools under current development at Lewis, with progress to date, are examined. The examples include work in both the time-resolved and time-averaged domains. Finally, an attempt is made to identify the proper place for Lewis in this continuum of research.

  19. Overview of aerothermodynamic loads definition study

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    1991-01-01

    The objective of the Aerothermodynamic Loads Definition Study is to develop methods of accurately predicting the operating environment in advanced Earth-to-Orbit (ETO) propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. Development of time averaged and time dependent three dimensional viscous computer codes as well as experimental verification and engine diagnostic testing are considered to be essential in achieving that objective. Time-averaged, nonsteady, and transient operating loads must all be well defined in order to accurately predict powerhead life. Described here is work in unsteady heat flow analysis, improved modeling of preburner flow, turbulence modeling for turbomachinery, computation of three dimensional flow with heat transfer, and unsteady viscous multi-blade row turbine analysis.

  20. Using Discrete Event Simulation to predict KPI's at a Projected Emergency Room.

    PubMed

    Concha, Pablo; Neriz, Liliana; Parada, Danilo; Ramis, Francisco

    2015-01-01

    Discrete Event Simulation (DES) is a powerful tool in the design of clinical facilities. DES enables facilities to be built or adapted to achieve the expected Key Performance Indicators (KPI's), such as average waiting times according to acuity, average stay times, and others. Our computational model was built and validated using expert judgment and supporting statistical data. One scenario studied resulted in a 50% decrease in the average cycle time of patients compared to the original model, mainly by modifying the patient's attention model.
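
    As a generic illustration of how a discrete-event model yields an average waiting-time KPI, the single-server sketch below uses the Lindley recursion with exponential arrivals and services; it is not the emergency-room model from the study, and all parameter values are invented:

    ```python
    import random

    def average_waiting_time(n_patients=100_000, arrival_rate=1 / 10.0,
                             mean_service=8.0, seed=0):
        """Average wait in a single-server queue via the Lindley recursion
        W_{n+1} = max(0, W_n + S_n - A_{n+1})."""
        rng = random.Random(seed)
        wait, total = 0.0, 0.0
        for _ in range(n_patients):
            service = rng.expovariate(1.0 / mean_service)   # S_n, minutes
            interarrival = rng.expovariate(arrival_rate)    # A_{n+1}, minutes
            wait = max(0.0, wait + service - interarrival)
            total += wait
        return total / n_patients

    # M/M/1 with utilisation 0.8: the analytic mean wait is 32 minutes.
    print(average_waiting_time())
    ```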

  1. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.

  2. Computing return times or return periods with rare event algorithms

    NASA Astrophysics Data System (ADS)

    Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy

    2018-04-01

    The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms often compute probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude of computational costs. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
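
    A sketch of the block-maximum construction mentioned above, applied directly to a plain time series. A commonly used estimator takes q as the fraction of blocks whose maximum exceeds the level and T_b as the block duration, giving r = -T_b / ln(1 - q); this is not the rare-event-algorithm variant developed in the paper, and the toy process and threshold below are arbitrary:

    ```python
    import numpy as np

    def return_time_from_blocks(x, block_len, threshold):
        """Estimate the return time of exceeding `threshold` from block maxima of x."""
        n_blocks = len(x) // block_len
        maxima = x[: n_blocks * block_len].reshape(n_blocks, block_len).max(axis=1)
        q = float(np.mean(maxima > threshold))   # fraction of blocks with an exceedance
        if q == 0.0:
            return np.inf
        return -block_len / np.log(1.0 - q) if q < 1.0 else float(block_len)

    rng = np.random.default_rng(7)
    x = np.zeros(200_000)
    for i in range(1, len(x)):                   # toy correlated (AR(1)) process
        x[i] = 0.99 * x[i - 1] + 0.1 * rng.normal()

    print(return_time_from_blocks(x, block_len=1_000, threshold=1.5))   # in samples
    ```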

  3. A modeling study of the time-averaged electric currents in the vicinity of isolated thunderstorms

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin T.; Blakeslee, Richard J.; Baginski, Michael E.

    1992-01-01

    A thorough examination of the results of a time-dependent computer model of a dipole thunderstorm revealed that there are numerous similarities between the time-averaged electrical properties and the steady-state properties of an active thunderstorm. Thus, the electrical behavior of the atmosphere in the vicinity of a thunderstorm can be determined with a formulation similar to what was first described by Holzer and Saxon (1952). From the Maxwell continuity equation of electric current, a simple analytical equation was derived that expresses a thunderstorm's average current contribution to the global electric circuit in terms of the generator current within the thundercloud, the intracloud lightning current, the cloud-to-ground lightning current, the altitudes of the charge centers, and the conductivity profile of the atmosphere. This equation was found to be nearly as accurate as the more computationally expensive numerical model, even when it is applied to a thunderstorm with a reduced conductivity thundercloud, a time-varying generator current, a varying flash rate, and a changing lightning mix.

  4. 3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy.

    PubMed

    Li, Ruijiang; Lewis, John H; Jia, Xun; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Song, William Y; Jiang, Steve B

    2011-05-01

    To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy. Recently, we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm which does not seem to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphic processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s, for both regular and irregular breathing, which is about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 s on the same graphic processing unit (GPU) card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 s. Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D tumor localization to be on the order of 1 mm on average and 2 mm at 95 percentile for both digital and physical phantoms, and within 2 mm on average and 4 mm at 95 percentile for lung cancer patients. The results also indicate that the accuracy is not affected by the breathing pattern, be it regular or irregular. High computational efficiency can be achieved on GPU, requiring 0.1-0.3 s for each x-ray projection.

  5. Conversion of cardiac performance data in analog form for digital computer entry

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1972-01-01

    A system is presented which will reduce analog cardiac performance data and convert the results to digital form for direct entry into a commercial time-shared computer. Circuits are discussed which perform the measurement and digital conversion of instantaneous systolic and diastolic parameters from the analog blood pressure waveform. Digital averaging over a selected number of heart cycles is performed on these measurements, as well as those of flow and heart rate. The determination of average cardiac output and peripheral resistance, including trends, is the end result after processing by digital computer.

  6. Children's Sociometric Membership Group and Computer-Supported Interaction in School Settings

    ERIC Educational Resources Information Center

    Koivusaari, Ritva

    2004-01-01

    This study analyzed what kind of role sociometric status has in non-real-time computer conversations. Computer-supported conversations were investigated by using two local area networks. Participants were 52 schoolchildren aged 9 to 10 years, selected from three sociometric strata: rejected, average, and popular. Children's preferred friends, school…

  7. Real-time data acquisition and alerts may reduce reaction time and improve perfusionist performance during cardiopulmonary bypass.

    PubMed

    Beck, J R; Fung, K; Lopez, H; Mongero, L B; Argenziano, M

    2015-01-01

    Delayed perfusionist identification of and reaction to abnormal clinical situations has been reported to contribute to increased mortality and morbidity. The use of automated data acquisition and compliance safety alerts has been widely accepted in many industries and its use may improve operator performance. A study was conducted to evaluate the reaction time of perfusionists with and without the use of compliance alerts. A compliance alert is a computer-generated pop-up banner on a pump-mounted computer screen to notify the user of clinical parameters outside of a predetermined range. A proctor monitored and recorded the time from an alert until the perfusionist recognized the parameter was outside the desired range. Group 1 included 10 cases utilizing compliance alerts. Group 2 included 10 cases with the primary perfusionist blinded to the compliance alerts. In Group 1, 97 compliance alerts were identified and, in Group 2, 86 alerts were identified. The average reaction time in the group using compliance alerts was 3.6 seconds. The average reaction time in the group not using the alerts was nearly ten times longer than in the group using computer-assisted, real-time data feedback. Some believe that real-time computer data acquisition and feedback improves perfusionist performance and may allow clinicians to identify and rectify potentially dangerous situations.

  8. Optimal protocols for slowly driven quantum systems.

    PubMed

    Zulkowski, Patrick R; DeWeese, Michael R

    2015-09-01

    The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.

  9. Evaluation of MOSTAS computer code for predicting dynamic loads in two bladed wind turbines

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.

    1979-01-01

    Calculated dynamic blade loads were compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-O wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multi-blade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The second version accounts for periodic coefficients while solving the equations by a time-history integration. A hypothetical three-degree-of-freedom dynamic model was investigated. The exact equations of motion of this model were solved using the Floquet-Liapunov method. The equations with time-averaged coefficients were solved by standard eigenanalysis.

  10. Unsteady Aerodynamic Force Sensing from Strain Data

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2017-01-01

    A simple approach for computing unsteady aerodynamic forces from simulated measured strain data is proposed in this study. First, the deflection and slope of the structure are computed from the unsteady strain using the two-step approach. Velocities and accelerations of the structure are computed using an autoregressive moving-average model, an on-line parameter estimator, a low-pass filter, and a least-squares curve-fitting method, together with analytical derivatives with respect to time. Finally, aerodynamic forces over the wing are computed using modal aerodynamic influence coefficient matrices, a rational function approximation, and a time-marching algorithm.
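    One ingredient of this pipeline, the least-squares curve fit differentiated analytically in time, can be sketched as follows; the deflection signal and polynomial degree below are assumptions, and the ARMA model, on-line parameter estimator, and low-pass filter of the actual approach are not reproduced.

```python
# Minimal sketch: estimate velocity and acceleration from a noisy deflection
# history via a least-squares polynomial fit differentiated analytically.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 201)                  # assumed sample times [s]
defl = 0.01 * np.sin(2 * np.pi * 1.0 * t)       # hypothetical deflection [m]
meas = defl + rng.normal(0.0, 2e-4, t.size)     # add measurement noise

deg = 8                                          # assumed polynomial degree
p = np.polynomial.Polynomial.fit(t, meas, deg)   # least-squares curve fit
vel = p.deriv(1)                                 # analytical first derivative
acc = p.deriv(2)                                 # analytical second derivative

t0 = 0.5
print("deflection   ~", p(t0), "m")
print("velocity     ~", vel(t0), "m/s")
print("acceleration ~", acc(t0), "m/s^2")
```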

  11. Deterministic Stress Modeling of Hot Gas Segregation in a Turbine

    NASA Technical Reports Server (NTRS)

    Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger

    1998-01-01

    Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
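    The averaging identity behind a lumped deterministic stress can be sketched independently of any particular solver: the deterministic stress is the time average of the product of unsteady fluctuations, i.e. avg(u_i u_j) - avg(u_i) avg(u_j), which is what must be added to the steady equations so that they reproduce the time mean. The code below is a schematic on synthetic snapshots, not the paper's procedure; array names and sizes are assumptions.

```python
# Schematic lumped-deterministic-stress (LDS) computation: time-average the
# products of unsteady velocity components and subtract the product of the
# time averages, leaving the deterministic stress to be fed back as a source
# term in a steady solver.
import numpy as np

rng = np.random.default_rng(1)
nt, ny, nx = 200, 32, 64                 # assumed snapshot count and grid size
u = rng.normal(10.0, 1.0, (nt, ny, nx))  # hypothetical unsteady u snapshots
v = rng.normal(0.0, 0.5, (nt, ny, nx))   # hypothetical unsteady v snapshots

u_bar = u.mean(axis=0)
v_bar = v.mean(axis=0)

# Deterministic stress components (per unit density)
lds_uu = (u * u).mean(axis=0) - u_bar * u_bar
lds_uv = (u * v).mean(axis=0) - u_bar * v_bar

print("mean LDS_uu:", lds_uu.mean(), " mean LDS_uv:", lds_uv.mean())
```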

  12. Blessing and curse of chaos in numerical turbulence simulations

    NASA Astrophysics Data System (ADS)

    Lee, Jon

    1994-03-01

    Because of the trajectory instability, time reversal is not possible beyond a certain evolution time and hence the time irreversibility prevails under the finite-accuracy trajectory computation. This therefore provides a practical reconciliation of the dynamic reversibility and macroscopic irreversibility (blessing of chaos). On the other hand, the trajectory instability is also responsible for a limited evolution time, so that finite-accuracy computation would yield a pseudo-orbit which is totally unrelated to the true trajectory (curse of chaos). For the inviscid 2D flow, however, we can accurately compute the long-time average of flow quantities with a pseudo-orbit by invoking the ergodic theorem.
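    The same point can be illustrated numerically on a standard chaotic system (the Lorenz equations, not the inviscid 2D flow of the paper): two pseudo-orbits started from nearly identical initial conditions separate completely, yet their long-time averages agree closely.

```python
# Illustration of the "blessing of chaos" on the Lorenz system (not the
# inviscid 2D flow of the paper): two nearby initial conditions diverge
# pointwise, yet their long-time averages of z agree closely.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_final = 500.0
t_eval = np.linspace(0.0, t_final, 100001)
x0a = [1.0, 1.0, 1.0]
x0b = [1.0, 1.0, 1.0 + 1e-8]          # tiny perturbation

za = solve_ivp(lorenz, (0.0, t_final), x0a, t_eval=t_eval, rtol=1e-8).y[2]
zb = solve_ivp(lorenz, (0.0, t_final), x0b, t_eval=t_eval, rtol=1e-8).y[2]

print("pointwise difference at t_final:", abs(za[-1] - zb[-1]))  # order 1 or more
print("time-average of z, run A:", za.mean())
print("time-average of z, run B:", zb.mean())                    # nearly equal
```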

  13. Computer-assisted virtual preoperative planning in orthopedic surgery for acetabular fractures based on actual computed tomography data.

    PubMed

    Wang, Guang-Ye; Huang, Wen-Jun; Song, Qi; Qin, Yun-Tian; Liang, Jin-Feng

    2016-12-01

    Acetabular fractures have always been very challenging for orthopedic surgeons; therefore, appropriate preoperative evaluation and planning are particularly important. This study aimed to explore the application methods and clinical value of preoperative computer simulation (PCS) in treating pelvic and acetabular fractures. Spiral computed tomography (CT) was performed on 13 patients with pelvic and acetabular fractures, and Digital Imaging and Communications in Medicine (DICOM) data were then input into Mimics software to reconstruct three-dimensional (3D) models of actual pelvic and acetabular fractures for preoperative simulative reduction and fixation, and to simulate each surgical procedure. The times needed for virtual surgical modeling and reduction and fixation were also recorded. The average fracture-modeling time was 45 min (30-70 min), and the average time for bone reduction and fixation was 28 min (16-45 min). Among the surgical approaches planned for these 13 patients, 12 were finally adopted; 12 cases used the simulated surgical fixation, and only 1 case used a partial planned fixation method. PCS can provide accurate surgical plans and data support for actual surgeries.

  14. Index to Computer Assisted Instruction.

    ERIC Educational Resources Information Center

    Lekan, Helen A., Ed.

    The computer assisted instruction (CAI) programs and projects described in this index are listed by subject matter. The index gives the program name, author, source, description, prerequisites, level of instruction, type of student, average completion time, logic and program, purpose for which program was designed, supplementary…

  15. Temporal correlation functions of concentration fluctuations: an anomalous case.

    PubMed

    Lubelski, Ariel; Klafter, Joseph

    2008-10-09

    We calculate, within the framework of the continuous time random walk (CTRW) model, multiparticle temporal correlation functions of concentration fluctuations (CCF) in systems that display anomalous subdiffusion. The subdiffusion stems from the nonstationary nature of the CTRW waiting times, which also lead to aging and ergodicity breaking. Due to aging, a system of diffusing particles tends to slow down as time progresses, and therefore, the temporal correlation functions strongly depend on the initial time of measurement. As a consequence, time averages of the CCF differ from ensemble averages, displaying therefore ergodicity breaking. We provide a simple example that demonstrates the difference between these two averages, a difference that might be amenable to experimental tests. We focus on the case of ensemble averaging and assume that the preparation time of the system coincides with the starting time of the measurement. Our analytical calculations are supported by computer simulations based on the CTRW model.
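    A minimal simulation conveys the ergodicity breaking described here. The sketch below, with assumed parameters and unit jumps, generates CTRW trajectories with heavy-tailed waiting times and contrasts the ensemble-averaged squared displacement with the scatter of single-trajectory time averages.

```python
# Minimal CTRW sketch: heavy-tailed waiting times (index alpha < 1) produce
# subdiffusion and a mismatch between ensemble- and time-averaged squared
# displacements (ergodicity breaking). Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
alpha, n_traj, t_max = 0.7, 500, 10_000.0

def ctrw_trajectory(t_grid):
    """Position of one CTRW walker sampled on t_grid."""
    t, x, times, pos = 0.0, 0.0, [0.0], [0.0]
    while t < t_max:
        t += rng.pareto(alpha) + 1.0          # heavy-tailed waiting time
        x += rng.choice([-1.0, 1.0])          # unit jump
        times.append(t)
        pos.append(x)
    idx = np.searchsorted(times, t_grid, side="right") - 1
    return np.asarray(pos)[idx]

t_grid = np.linspace(0.0, t_max, 501)
trajs = np.array([ctrw_trajectory(t_grid) for _ in range(n_traj)])

ens_msd = (trajs ** 2).mean(axis=0)           # ensemble average at time t
lag = 100                                     # lag (in grid steps) for TA-MSD
ta_msd = ((trajs[:, lag:] - trajs[:, :-lag]) ** 2).mean(axis=1)

print("ensemble MSD at t_max:", ens_msd[-1])
print("spread of single-trajectory time-averaged MSDs:",
      ta_msd.min(), "to", ta_msd.max())       # broad scatter = nonergodic
```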

  16. Computer Games and Instruction

    ERIC Educational Resources Information Center

    Tobias, Sigmund, Ed.; Fletcher, J. D., Ed.

    2011-01-01

    There is intense interest in computer games. A total of 65 percent of all American households play computer games, and sales of such games increased 22.9 percent last year. The average amount of game playing time was found to be 13.2 hours per week. The popularity and market success of games is evident from both the increased earnings from games,…

  17. Computing Flow through Well Screens Using an Embedded Well Technique

    DTIC Science & Technology

    2015-08-01

    ... necessary to solve the continuity equation and the momentum equation using small time-steps. With the assumption that the well flow reaches ... well system so that much greater time-steps can be used for computation. The 1D steady-state well equation can be written as ...

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinuesa, Ricardo; Fick, Lambert; Negi, Prabal

    In the present document we describe a toolbox for the spectral-element code Nek5000, aimed at computing turbulence statistics. The toolbox is presented for a small test case, namely a square duct with Lx = 2h, Ly = 2h and Lz = 4h, where x, y and z are the horizontal, vertical and streamwise directions, respectively. The number of elements in the xy-plane is 16 × 16 = 256, and the number of elements in z is 4, leading to a total of 1,024 spectral elements. A polynomial order of N = 5 is chosen, and the mesh is generated using the Nek5000 tool genbox. The toolbox presented here allows the computation of mean-velocity components, the Reynolds-stress tensor as well as turbulent kinetic energy (TKE) and Reynolds-stress budgets. Note that the present toolbox makes it possible to compute turbulence statistics in turbulent flows with one homogeneous direction (where the statistics are based on time-averaging as well as averaging in the homogeneous direction), as well as in fully three-dimensional flows (with no periodic directions, where only time-averaging is considered).
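    The kind of statistics the toolbox computes can be sketched in a few lines, assuming velocity snapshots of a flow with one homogeneous (here streamwise) direction; this is a conceptual illustration in Python, not the toolbox's Nek5000 implementation, and the array shapes are assumptions.

```python
# Sketch of the statistics the toolbox targets (not its Nek5000 code): with z
# homogeneous, average velocity snapshots over time and z to get the mean
# flow, then form the Reynolds stresses and turbulent kinetic energy.
import numpy as np

rng = np.random.default_rng(3)
nt, nx, ny, nz = 100, 16, 16, 32                  # assumed snapshot/grid sizes
u = rng.normal(size=(nt, nx, ny, nz, 3))          # hypothetical velocity data
u[..., 2] += 1.0                                  # add a mean streamwise flow

U = u.mean(axis=(0, 3))                           # time + homogeneous-z average
fluct = u - U[None, :, :, None, :]                # fluctuating field

# Reynolds-stress tensor <u_i' u_j'> and TKE on the xy-plane
R = np.einsum('txyzi,txyzj->xyij', fluct, fluct) / (nt * nz)
tke = 0.5 * np.trace(R, axis1=2, axis2=3)

print("mean streamwise velocity (plane average):", U[..., 2].mean())
print("TKE (plane average):", tke.mean())
```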

  19. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    DOE PAGES

    Bardin, Ann; Primeau, Francois; Lindsay, Keith; ...

    2016-07-21

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton's method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the Northern Indian Ocean. As a result, for many applications the relatively small bias obtained using the offline model makes the offline approach attractive because it uses significantly less computer resources and is simpler to set up and run.
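    The periodic-solution idea can be illustrated with a toy fixed-point problem: given a set of monthly transport operators, seek a tracer state that is unchanged by one annual cycle. The sketch below uses SciPy's newton_krylov on random contractive stand-in matrices, not real ocean transport matrices, and a crude ideal-age source.

```python
# Toy version of the offline Newton-Krylov spin-up idea: find a tracer state c
# that is unchanged by one annual cycle of (stand-in) monthly transport
# operators plus a source. The matrices below are random contractions, not
# real ocean transport matrices.
import numpy as np
from scipy.optimize import newton_krylov

rng = np.random.default_rng(4)
n = 200                                           # assumed number of grid boxes
months = []
for _ in range(12):
    M = rng.random((n, n))
    M /= M.sum(axis=1, keepdims=True)             # row-stochastic mixing step
    months.append(0.98 * M)                       # slight decay keeps it contractive
source = 1.0 / 12.0                               # crude ideal-age source per month

def one_year(c):
    for M in months:
        c = M @ c + source
    return c

residual = lambda c: one_year(c) - c              # periodic (fixed-point) condition
c_eq = newton_krylov(residual, np.zeros(n), f_tol=1e-10)
print("equilibrium tracer: mean", c_eq.mean(), "max", c_eq.max())
```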

  20. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis.

    PubMed

    Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F

    2007-01-01

    This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction using Independent Component Analysis to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA, and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration with an average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and manual gold standard. We conclude that this fully automatic ICA-based method shows an excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.

  1. Annealed importance sampling with constant cooling rate

    NASA Astrophysics Data System (ADS)

    Giovannelli, Edoardo; Cardini, Gianni; Gellini, Cristina; Pietraperzia, Giangaetano; Chelli, Riccardo

    2015-02-01

    Annealed importance sampling is a simulation method devised by Neal [Stat. Comput. 11, 125 (2001)] to assign weights to configurations generated by simulated annealing trajectories. In particular, the equilibrium average of a generic physical quantity can be computed by a weighted average exploiting weights and estimates of this quantity associated to the final configurations of the annealed trajectories. Here, we review annealed importance sampling from the perspective of nonequilibrium path-ensemble averages [G. E. Crooks, Phys. Rev. E 61, 2361 (2000)]. The equivalence of Neal's and Crooks' treatments highlights the generality of the method, which goes beyond the mere thermal-based protocols. Furthermore, we show that a temperature schedule based on a constant cooling rate outperforms stepwise cooling schedules and that, for a given elapsed computer time, performances of annealed importance sampling are, in general, improved by increasing the number of intermediate temperatures.
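    A minimal version of the weighting scheme is sketched below for a one-dimensional problem: a linear (constant-cooling-rate) schedule of inverse temperatures bridges a broad Gaussian and a narrower target, a single Metropolis move is applied at each temperature, and the accumulated log weights estimate the ratio of normalizing constants. The distributions, schedule length, and step size are assumptions.

```python
# Minimal annealed importance sampling (AIS) sketch with a constant cooling
# rate: a linear schedule of inverse temperatures beta_k interpolates from a
# broad Gaussian prior to a narrow Gaussian target, and the accumulated log
# weights estimate the ratio of normalizing constants. Illustrative only.
import numpy as np

rng = np.random.default_rng(5)

def log_f0(x):                      # prior: N(0, 5^2), unnormalized
    return -0.5 * x**2 / 25.0

def log_f1(x):                      # target: N(2, 0.5^2), unnormalized
    return -0.5 * (x - 2.0)**2 / 0.25

n_temps, n_walkers, step = 200, 2000, 0.5
betas = np.linspace(0.0, 1.0, n_temps)            # constant cooling rate

def log_f(x, beta):                               # geometric bridge
    return (1.0 - beta) * log_f0(x) + beta * log_f1(x)

x = rng.normal(0.0, 5.0, n_walkers)               # exact samples from the prior
logw = np.zeros(n_walkers)
for b_prev, b in zip(betas[:-1], betas[1:]):
    logw += log_f(x, b) - log_f(x, b_prev)        # weight increment
    prop = x + rng.normal(0.0, step, n_walkers)   # Metropolis move at beta = b
    accept = np.log(rng.random(n_walkers)) < log_f(prop, b) - log_f(x, b)
    x = np.where(accept, prop, x)

log_ratio = np.logaddexp.reduce(logw) - np.log(n_walkers)
print("AIS estimate of log(Z1/Z0):", log_ratio)
print("exact value:", np.log(0.5 / 5.0))          # ratio of Gaussian norms
```

    The same weights can be reused to form weighted averages of any quantity evaluated at the final configurations, which is how equilibrium averages are estimated in the method reviewed above.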

  2. Object motion computation for the initiation of smooth pursuit eye movements in humans.

    PubMed

    Wallace, Julian M; Stone, Leland S; Masson, Guillaume S

    2005-04-01

    Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For type II diamond (where the direction of true object motion is dramatically different from the vector average of the 1-dimensional edge motions, i.e., VA not equal IOC = 2DFT), the ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated with the introduction of more 2D information to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
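    The vector-average bias can be reproduced with a small worked example: each edge signals only the velocity component along its normal, and averaging those 1D signals recovers the true direction for a type I configuration but not for a type II configuration. The edge orientations and velocity below are illustrative choices, not the stimuli of the study.

```python
# Worked example of vector averaging (VA) versus true object motion: each edge
# of a moving diamond only signals the velocity component along its normal.
# Edge orientations and the velocity are illustrative, not the study's stimuli.
import numpy as np

def vector_average_direction(v, edge_normal_angles_deg):
    v = np.asarray(v, dtype=float)
    comps = []
    for ang in np.deg2rad(edge_normal_angles_deg):
        n = np.array([np.cos(ang), np.sin(ang)])  # unit normal of one edge
        comps.append((v @ n) * n)                 # 1D motion signalled by that edge
    va = np.mean(comps, axis=0)
    return np.degrees(np.arctan2(va[1], va[0]))

v = [1.0, 0.0]                                    # true motion: rightward (0 deg)

# Type I: edge normals symmetric about the motion direction -> VA = true
print("type I  VA direction:", vector_average_direction(v, [+45.0, -45.0]))

# Type II: both normals tilted to the same side -> VA is biased away from 0
print("type II VA direction:", vector_average_direction(v, [+70.0, +20.0]))
print("true direction: 0.0")
```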

  3. Direct Measurements of Smartphone Screen-Time: Relationships with Demographics and Sleep.

    PubMed

    Christensen, Matthew A; Bettencourt, Laura; Kaye, Leanne; Moturu, Sai T; Nguyen, Kaylin T; Olgin, Jeffrey E; Pletcher, Mark J; Marcus, Gregory M

    2016-01-01

    Smartphones are increasingly integrated into everyday life, but frequency of use has not yet been objectively measured and compared to demographics, health information, and in particular, sleep quality. The aim of this study was to characterize smartphone use by measuring screen-time directly, determine factors that are associated with increased screen-time, and to test the hypothesis that increased screen-time is associated with poor sleep. We performed a cross-sectional analysis in a subset of 653 participants enrolled in the Health eHeart Study, an internet-based longitudinal cohort study open to any interested adult (≥ 18 years). Smartphone screen-time (the number of minutes in each hour the screen was on) was measured continuously via smartphone application. For each participant, total and average screen-time were computed over 30-day windows. Average screen-time specifically during self-reported bedtime hours and sleeping period was also computed. Demographics, medical information, and sleep habits (Pittsburgh Sleep Quality Index-PSQI) were obtained by survey. Linear regression was used to obtain effect estimates. Total screen-time over 30 days was a median 38.4 hours (IQR 21.4 to 61.3) and average screen-time over 30 days was a median 3.7 minutes per hour (IQR 2.2 to 5.5). Younger age, self-reported race/ethnicity of Black and "Other" were associated with longer average screen-time after adjustment for potential confounders. Longer average screen-time was associated with shorter sleep duration and worse sleep-efficiency. Longer average screen-times during bedtime and the sleeping period were associated with poor sleep quality, decreased sleep efficiency, and longer sleep onset latency. These findings on actual smartphone screen-time build upon prior work based on self-report and confirm that adults spend a substantial amount of time using their smartphones. Screen-time differs across age and race, but is similar across socio-economic strata suggesting that cultural factors may drive smartphone use. Screen-time is associated with poor sleep. These findings cannot support conclusions on causation. Effect-cause remains a possibility: poor sleep may lead to increased screen-time. However, exposure to smartphone screens, particularly around bedtime, may negatively impact sleep.

  4. Light-Frame Wall Systems: Performance and Predictability.

    Treesearch

    David S. Gromala

    1983-01-01

    This paper compares results of all wall tests with analytical predictions of performance. Conventional wood-stud walls of one configuration failed at bending loads that were 4 to 6 times the design load. The computer model overpredicted wall strength by an average of 10 percent and deflection by an average of 6 percent.

  5. Allocation of Internal Medicine Resident Time in a Swiss Hospital: A Time and Motion Study of Day and Evening Shifts.

    PubMed

    Wenger, Nathalie; Méan, Marie; Castioni, Julien; Marques-Vidal, Pedro; Waeber, Gérard; Garnier, Antoine

    2017-04-18

    Little current evidence documents how internal medicine residents spend their time at work, particularly with regard to the proportions of time spent in direct patient care versus using computers. To describe how residents allocate their time during day and evening hospital shifts. Time and motion study. Internal medicine residency at a university hospital in Switzerland, May to July 2015. 36 internal medicine residents with an average of 29 months of postgraduate training. Trained observers recorded the residents' activities using a tablet-based application. Twenty-two activities were categorized as directly related to patients, indirectly related to patients, communication, academic, nonmedical tasks, and transition. In addition, the presence of a patient or colleague and use of a computer or telephone during each activity was recorded. Residents were observed for a total of 696.7 hours. Day shifts lasted 11.6 hours (1.6 hours more than scheduled). During these shifts, activities indirectly related to patients accounted for 52.4% of the time, and activities directly related to patients accounted for 28.0%. Residents spent an average of 1.7 hours with patients, 5.2 hours using computers, and 13 minutes doing both. Time spent using a computer was scattered throughout the day, with the heaviest use after 6:00 p.m. The study involved a small sample from 1 institution. At this Swiss teaching hospital, internal medicine residents spent more time at work than scheduled. Activities indirectly related to patients predominated, and about half the workday was spent using a computer. Information Technology Department and Department of Internal Medicine of Lausanne University Hospital.

  6. A straightforward method to compute average stochastic oscillations from data samples.

    PubMed

    Júlvez, Jorge

    2015-10-19

    Many biological systems exhibit sustained stochastic oscillations in their steady state. Assessing these oscillations is usually a challenging task due to the potential variability of the amplitude and frequency of the oscillations over time. As a result of this variability, when several stochastic replications are averaged, the oscillations are flattened and can be overlooked. This can easily lead to the erroneous conclusion that the system reaches a constant steady state. This paper proposes a straightforward method to detect and assess stochastic oscillations. The basis of the method is the use of polar coordinates for systems with two species, and cylindrical coordinates for systems with more than two species. By slightly modifying these coordinate systems, it is possible to compute the total angular distance run by the system and the average Euclidean distance to a reference point. This allows us to compute confidence intervals, both for the average angular speed and for the distance to a reference point, from a set of replications. The use of polar (or cylindrical) coordinates provides a new perspective of the system dynamics. The mean trajectory that can be obtained by averaging the usual Cartesian coordinates of the samples informs about the trajectory of the center of mass of the replications. In contrast to such a mean Cartesian trajectory, the mean polar trajectory can be used to compute the average circular motion of those replications, and therefore can yield evidence about sustained steady-state oscillations. Both the coordinate transformation and the computation of confidence intervals can be carried out efficiently. This results in an efficient method to evaluate stochastic oscillations.
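    The gist of the method can be sketched as follows: replications of a noisy two-species oscillator are centred on a reference point, the phase angle is unwrapped to obtain the total angular distance run, and the average angular speed and radius are summarized with simple normal-approximation confidence intervals. The oscillator model and parameters are assumptions, not systems from the paper.

```python
# Sketch of the polar-coordinate diagnostic: noisy two-species oscillations are
# centred on a reference point, the phase angle is unwrapped to give the total
# angular distance run, and angular speed and radius are averaged over
# replications with normal-approximation confidence intervals.
import numpy as np

rng = np.random.default_rng(6)
n_rep, n_steps, dt = 50, 2000, 0.01
ref = np.array([0.0, 0.0])                       # assumed reference point

def replication():
    xy = np.empty((n_steps, 2))
    x, y = 1.0, 0.0
    for k in range(n_steps):
        # noisy harmonic oscillator (Euler-Maruyama step), arbitrary model
        dx = y * dt + 0.05 * np.sqrt(dt) * rng.normal()
        dy = -x * dt + 0.05 * np.sqrt(dt) * rng.normal()
        x, y = x + dx, y + dy
        xy[k] = x, y
    return xy

speeds, radii = [], []
for _ in range(n_rep):
    xy = replication() - ref
    theta = np.unwrap(np.arctan2(xy[:, 1], xy[:, 0]))  # total angular distance
    speeds.append((theta[-1] - theta[0]) / (n_steps * dt))
    radii.append(np.mean(np.hypot(xy[:, 0], xy[:, 1])))

for name, vals in (("angular speed", speeds), ("radius", radii)):
    m, s = np.mean(vals), np.std(vals, ddof=1) / np.sqrt(n_rep)
    print(f"{name}: {m:.3f} +/- {1.96 * s:.3f} (95% CI)")
```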

  7. Improved workflow for quantification of left ventricular volumes and mass using free-breathing motion corrected cine imaging.

    PubMed

    Cross, Russell; Olivieri, Laura; O'Brien, Kendall; Kellman, Peter; Xue, Hui; Hansen, Michael

    2016-02-25

    Traditional cine imaging for cardiac functional assessment requires breath-holding, which can be problematic in some situations. Free-breathing techniques have relied on multiple averages or real-time imaging, producing images that can be spatially and/or temporally blurred. To overcome this, methods have been developed to acquire real-time images over multiple cardiac cycles, which are subsequently motion corrected and reformatted to yield a single image series displaying one cardiac cycle with high temporal and spatial resolution. Application of these algorithms has required significant additional reconstruction time. The use of distributed computing was recently proposed as a way to improve clinical workflow with such algorithms. In this study, we have deployed a distributed computing version of motion corrected re-binning reconstruction for free-breathing evaluation of cardiac function. Twenty five patients and 25 volunteers underwent cardiovascular magnetic resonance (CMR) for evaluation of left ventricular end-systolic volume (ESV), end-diastolic volume (EDV), and end-diastolic mass. Measurements using motion corrected re-binning were compared to those using breath-held SSFP and to free-breathing SSFP with multiple averages, and were performed by two independent observers. Pearson correlation coefficients and Bland-Altman plots tested agreement across techniques. Concordance correlation coefficient and Bland-Altman analysis tested inter-observer variability. Total scan plus reconstruction times were tested for significant differences using paired t-test. Measured volumes and mass obtained by motion corrected re-binning and by averaged free-breathing SSFP compared favorably to those obtained by breath-held SSFP (r = 0.9863/0.9813 for EDV, 0.9550/0.9685 for ESV, 0.9952/0.9771 for mass). Inter-observer variability was good with concordance correlation coefficients between observers across all acquisition types suggesting substantial agreement. Both motion corrected re-binning and averaged free-breathing SSFP acquisition and reconstruction times were shorter than breath-held SSFP techniques (p < 0.0001). On average, motion corrected re-binning required 3 min less than breath-held SSFP imaging, a 37% reduction in acquisition and reconstruction time. The motion corrected re-binning image reconstruction technique provides robust cardiac imaging that can be used for quantification that compares favorably to breath-held SSFP as well as multiple average free-breathing SSFP, but can be obtained in a fraction of the time when using cloud-based distributed computing reconstruction.

  8. Time averaging of NMR chemical shifts in the MLF peptide in the solid state.

    PubMed

    De Gortari, Itzam; Portella, Guillem; Salvatella, Xavier; Bajaj, Vikram S; van der Wel, Patrick C A; Yates, Jonathan R; Segall, Matthew D; Pickard, Chris J; Payne, Mike C; Vendruscolo, Michele

    2010-05-05

    Since experimental measurements of NMR chemical shifts provide time- and ensemble-averaged values, we investigated how these effects should be included when chemical shifts are computed using density functional theory (DFT). We measured the chemical shifts of the N-formyl-L-methionyl-L-leucyl-L-phenylalanine-OMe (MLF) peptide in the solid state, and then used the X-ray structure to calculate the (13)C chemical shifts using the gauge including projector augmented wave (GIPAW) method, which accounts for the periodic nature of the crystal structure, obtaining an overall accuracy of 4.2 ppm. In order to understand the origin of the difference between experimental and calculated chemical shifts, we carried out first-principles molecular dynamics simulations to characterize the molecular motion of the MLF peptide on the picosecond time scale. We found that (13)C chemical shifts experience very rapid fluctuations of more than 20 ppm that are averaged out over less than 200 fs. Taking account of these fluctuations in the calculation of the chemical shifts resulted in an accuracy of 3.3 ppm. To investigate the effects of averaging over longer time scales, we sampled the rotameric states populated by the MLF peptides in the solid state by performing a total of 5 μs of classical molecular dynamics simulations. By averaging the chemical shifts over these rotameric states, we increased the accuracy of the chemical shift calculations to 3.0 ppm, with less than 1 ppm error in 10 out of 22 cases. These results suggest that better DFT-based predictions of chemical shifts of peptides and proteins will be achieved by developing improved computational strategies capable of taking into account the averaging process up to the millisecond time scale on which the chemical shift measurements report.

  9. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray Y-MP using an average of 3.63 processors.
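    The core Lanczos recurrence itself fits in a few lines; the sketch below omits the factorization, reorthogonalization, and vectorization machinery discussed above and simply checks that the extreme Ritz values of the tridiagonal matrix approximate the extreme eigenvalues of a random symmetric test matrix.

```python
# Bare-bones Lanczos recurrence (no reorthogonalization or shift-invert, unlike
# a production structural-analysis solver): build a small tridiagonal matrix
# whose extreme Ritz values approximate the extreme eigenvalues of A.
import numpy as np

rng = np.random.default_rng(7)
n, m = 500, 60                          # matrix size, number of Lanczos steps
B = rng.normal(size=(n, n))
A = (B + B.T) / 2.0                     # symmetric test matrix

alpha, beta = np.zeros(m), np.zeros(m - 1)
q_prev = np.zeros(n)
q = rng.normal(size=n)
q /= np.linalg.norm(q)
for j in range(m):
    w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0.0)
    alpha[j] = q @ w
    w -= alpha[j] * q
    if j < m - 1:
        beta[j] = np.linalg.norm(w)
        q_prev, q = q, w / beta[j]

T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
ritz = np.linalg.eigvalsh(T)
exact = np.linalg.eigvalsh(A)
print("largest eigenvalue:  Lanczos", ritz[-1], " exact", exact[-1])
print("smallest eigenvalue: Lanczos", ritz[0], " exact", exact[0])
```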

  10. 40 CFR 63.1192 - What recordkeeping requirements must I meet?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... detection system alarms. Include the date and time of the alarm, when corrective actions were initiated, the... operating temperature and results of incinerator inspections. For all periods when the average temperature... microfilm, on a computer, on computer disks, on magnetic tape disks, or on microfiche. (e) Report the...

  11. 40 CFR 63.1192 - What recordkeeping requirements must I meet?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... detection system alarms. Include the date and time of the alarm, when corrective actions were initiated, the... operating temperature and results of incinerator inspections. For all periods when the average temperature... microfilm, on a computer, on computer disks, on magnetic tape disks, or on microfiche. (e) Report the...

  12. 40 CFR 63.1192 - What recordkeeping requirements must I meet?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... detection system alarms. Include the date and time of the alarm, when corrective actions were initiated, the... operating temperature and results of incinerator inspections. For all periods when the average temperature... microfilm, on a computer, on computer disks, on magnetic tape disks, or on microfiche. (e) Report the...

  13. 40 CFR 63.1192 - What recordkeeping requirements must I meet?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... detection system alarms. Include the date and time of the alarm, when corrective actions were initiated, the... operating temperature and results of incinerator inspections. For all periods when the average temperature... microfilm, on a computer, on computer disks, on magnetic tape disks, or on microfiche. (e) Report the...

  14. 40 CFR 63.1192 - What recordkeeping requirements must I meet?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... detection system alarms. Include the date and time of the alarm, when corrective actions were initiated, the... operating temperature and results of incinerator inspections. For all periods when the average temperature... microfilm, on a computer, on computer disks, on magnetic tape disks, or on microfiche. (e) Report the...

  15. 20 CFR 226.62 - Computing average monthly compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Computing average monthly compensation. 226... RETIREMENT ACT COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Years of Service and Average Monthly Compensation § 226.62 Computing average monthly compensation. The employee's average monthly compensation is...

  16. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
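    A direct, segment-averaged estimate can be sketched under one common convention, B(f1, f2) = E[X(f1) X(f2) Y*(f1 + f2)]; the paper's exact definition, symmetry reductions, and variance estimator are not reproduced, and the quadratically coupled test signals below are assumptions.

```python
# Direct (segment-averaged) cross-bispectrum estimate under one common
# convention, B(f1, f2) = E[X(f1) X(f2) conj(Y(f1 + f2))]. The quadratic
# coupling between x and y produces a peak at the tone's frequency pair.
import numpy as np

rng = np.random.default_rng(8)
n_seg, seg_len = 64, 256
t = np.arange(seg_len)

B = np.zeros((seg_len // 2, seg_len // 2), dtype=complex)
for _ in range(n_seg):
    phase = rng.uniform(0.0, 2.0 * np.pi)
    x = np.cos(2 * np.pi * 0.1 * t + phase) + 0.5 * rng.normal(size=seg_len)
    y = x**2 + 0.5 * rng.normal(size=seg_len)       # quadratic interaction
    X = np.fft.fft(x * np.hanning(seg_len))
    Y = np.fft.fft(y * np.hanning(seg_len))
    for i in range(seg_len // 2):
        for j in range(seg_len // 2):
            if i + j < seg_len:
                B[i, j] += X[i] * X[j] * np.conj(Y[i + j])
B /= n_seg

i_max, j_max = np.unravel_index(np.argmax(np.abs(B)), B.shape)
print("peak |B| near bins:", i_max, j_max)   # close to (0.1, 0.1) cycles/sample
```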

  17. Introduction to Computing: Lab Manual. Faculty Guide [and] Student Guide.

    ERIC Educational Resources Information Center

    Frasca, Joseph W.

    This lab manual is designed to accompany a college course introducing students to computing. The exercises are designed to be completed by the average student in a supervised 2-hour block of time at a computer lab over 15 weeks. The intent of each lab session is to introduce a topic and have the student feel comfortable with the use of the machine…

  18. Real time display Fourier-domain OCT using multi-thread parallel computing with data vectorization

    NASA Astrophysics Data System (ADS)

    Eom, Tae Joong; Kim, Hoon Seop; Kim, Chul Min; Lee, Yeung Lak; Choi, Eun-Seo

    2011-03-01

    We demonstrate a real-time display of processed OCT images using multi-thread parallel computing with a quad-core CPU of a personal computer. The data of each A-line are treated as one vector to maximize the data translation rate between the cores of the CPU and RAM stored image data. A display rate of 29.9 frames/sec for processed OCT data (4096 FFT-size x 500 A-scans) is achieved in our system using a wavelength swept source with 52-kHz swept frequency. The data processing times of the OCT image and a Doppler OCT image with a 4-time average are 23.8 msec and 91.4 msec.

  19. Power strain imaging based on vibro-elastography techniques

    NASA Astrophysics Data System (ADS)

    Wen, Xu; Salcudean, S. E.

    2007-03-01

    This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded in time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Then tissue strain is determined by the least-squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently by using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with high signal-to-noise ratio in real time. This approach has also been tested on a few patient datasets of the prostate region, and the results are encouraging.
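    The processing chain described here can be sketched with SciPy's Welch estimator: power spectra of displacement sequences at several depths, the square root of the average power over the excitation band as the displacement measure, and a least-squares slope over depth as the strain estimate. The signals, depths, and 0-20 Hz band are illustrative assumptions.

```python
# Sketch of the power-strain idea: Welch power spectra of displacement time
# sequences, square root of the average power over the excitation band as the
# displacement measure, and a least-squares gradient over depth as the strain.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(9)
fs, n_samp, n_depth = 1000.0, 8192, 20
t = np.arange(n_samp) / fs
depth = np.linspace(0.0, 40.0, n_depth)                 # assumed depths [mm]

# Hypothetical displacement sequences: amplitude decreases linearly with depth
amp = 1.0 - 0.015 * depth
disp = (amp[:, None] * np.sin(2 * np.pi * 8.0 * t)[None, :]
        + 0.05 * rng.normal(size=(n_depth, n_samp)))

f, pxx = welch(disp, fs=fs, nperseg=1024, axis=-1)
band = (f >= 0.0) & (f <= 20.0)                         # excitation band
disp_measure = np.sqrt(pxx[:, band].mean(axis=1))       # ~ motion amplitude

# Strain: least-squares slope of the displacement measure versus depth
strain = np.polyfit(depth, disp_measure, 1)[0]
print("estimated amplitude gradient (strain measure):", strain)
```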

  20. Time and Teaching

    NASA Astrophysics Data System (ADS)

    Zielinski, Theresa Julia; Brooks, David W.; Crippen, Kent J.; March, Joe L.

    2001-06-01

    Time management is an important issue for teachers and students. This article discusses teachers' use of time from the perspective of curriculum and instruction. Average high school students spend fewer than 5 hours per week in outside-of-class study; average college students spend about 20 hours. Procrastination, often viewed in a negative light by teachers, usually pays off so well for college students that seniors become better at it than freshmen. Three suggestions for designing instruction are: test early and often; do not waste the best students' time in an effort to improve overall performance; and use engaging activities that motivate students to give of their time. The impact of computers on curricula is a double-edged sword. Time must be devoted to teaching the use of applications, but the programs reduce busywork. Will this turn out to be a simple tradeoff, or will the programs make us much more efficient so that less time is required? Will computer programs ultimately lead to an expanded criterion for expertise, thus demanding even more time to become an expert? These issues are described and suggestions for controlling time during instruction are provided.

  1. Sitting Time in Adults 65 Years and Over: Behavior, Knowledge, and Intentions to Change.

    PubMed

    Alley, Stephanie; van Uffelen, Jannique G Z; Duncan, Mitch J; De Cocker, Katrien; Schoeppe, Stephanie; Rebar, Amanda L; Vandelanotte, Corneel

    2018-04-01

    This study examined sitting time, knowledge, and intentions to change sitting time in older adults. An online survey was completed by 494 Australians aged 65+. Average daily sitting was high (9.0 hr). Daily sitting time was the highest during TV (3.3 hr), computer (2.1 hr), and leisure (1.7 hr). A regression analysis demonstrated that women were more knowledgeable about the health risks of sitting compared to men. The percentage of older adults intending to sit less were the highest for TV (24%), leisure (24%), and computer (19%) sitting time. Regression analyses demonstrated that intentions varied by gender (for TV sitting), education (leisure and work sitting), body mass index (computer, leisure, and transport sitting), and physical activity (TV, computer, and leisure sitting). Interventions should target older adults' TV, computer, and leisure time sitting, with a focus on intentions in older males and older adults with low education, those who are active, and those with a normal weight.

  2. Associations between parental rules, style of communication and children's screen time.

    PubMed

    Bjelland, Mona; Soenens, Bart; Bere, Elling; Kovács, Éva; Lien, Nanna; Maes, Lea; Manios, Yannis; Moschonis, George; te Velde, Saskia J

    2015-10-01

    Research suggests an inverse association between parental rules and screen time in pre-adolescents, and that parents' style of communication with their children is related to the children's time spent watching TV. The aims of this study were to examine associations of parental rules and parental style of communication with children's screen time and perceived excessive screen time in five European countries. UP4FUN was a multi-centre, cluster randomised controlled trial with pre- and post-test measurements in each of five countries: Belgium, Germany, Greece, Hungary and Norway. Questionnaires were completed by the children at school and the parent questionnaire was brought home. Three structural equation models were tested based on measures of screen time and parental style of communication from the pre-test questionnaires. Of the 152 schools invited, 62 (41 %) schools agreed to participate. In total 3325 children (average age 11.2 years and 51 % girls) and 3038 parents (81 % mothers) completed the pre-test questionnaire. The average TV/DVD times across the countries were between 1.5 and 1.8 h/day, while less time was used for computer/games console (0.9-1.4 h/day). The children's perceived parental style of communication was quite consistent for TV/DVD and computer/games console. The presence of rules was significantly associated with less time watching TV/DVD and use of computer/games console time. Moreover, the use of an autonomy-supportive style was negatively related to both time watching TV/DVD and use of computer/games console time. The use of a controlling style was related positively to perceived excessive time used on TV/DVD and excessive time used on computer/games console. With a few exceptions, results were similar across the five countries. This study suggests that an autonomy-supportive style of communicating rules for TV/DVD or computer/games console use is negatively related to children's time watching TV/DVD and use of computer/games console time. In contrast, a controlling style is associated with more screen time and with more perceived excessive screen time in particular. Longitudinal research is needed to further examine effects of parental style of communication on children's screen time as well as possible reciprocal effects. International Standard Randomized Controlled Trial Number Register, registration number: ISRCTN34562078. Date applied: 29/07/2011; date assigned: 11/10/2011.

  3. Large deviation probabilities for correlated Gaussian stochastic processes and daily temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Kantz, Holger

    2016-04-01

    As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short-memory) and auto-regressive fractionally integrated moving-average (long-memory) processes do not have an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. Although short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
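    A small Monte Carlo experiment illustrates the effect of short-range correlation on these probabilities: the sketch below estimates P(|sample mean| > a) versus window length for iid Gaussian noise and for an AR(1) process with unit marginal variance. The threshold and AR coefficient are assumptions, and the long-memory (ARFIMA) case of the study is not reproduced.

```python
# Monte Carlo sketch of large deviation probabilities P(|sample mean| > a) as a
# function of the averaging window, comparing iid Gaussian noise with an AR(1)
# (short-memory) process. Correlation visibly slows the decay with window size.
import numpy as np

rng = np.random.default_rng(10)
a, phi, n_rep = 0.3, 0.8, 20_000
windows = [25, 50, 100, 200, 400]

def ar1(n, size):
    x = np.zeros((size, n))
    # innovations scaled so the stationary marginal variance is one
    innov = rng.normal(0.0, np.sqrt(1.0 - phi**2), (size, n))
    for k in range(1, n):
        x[:, k] = phi * x[:, k - 1] + innov[:, k]
    return x

for n in windows:
    p_iid = np.mean(np.abs(rng.normal(0.0, 1.0, (n_rep, n)).mean(axis=1)) > a)
    p_ar1 = np.mean(np.abs(ar1(n, n_rep).mean(axis=1)) > a)
    print(f"n={n:4d}  P_iid={p_iid:.4f}  P_AR1={p_ar1:.4f}")
```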

  4. A daily huddle facilitates patient transports from a neonatal intensive care unit

    PubMed Central

    Hughes Driscoll, Colleen; El Metwally, Dina

    2014-01-01

    To improve hospital access for expectant women and newborns in the state of Maryland, a quality improvement team reviewed the patient flow characteristics of our neonatal intensive care unit. We identified inefficiencies in patient discharges, including delays in patient transports. Several patient transport delays were caused by late preparation and delivery of the patient transfer summary. Baseline data collection revealed that transfer summaries were prepared on-time by the resident or nurse practitioner only 41% of the time on average, while the same transfer summaries were signed on-time by the neonatologist 5% of the time on average. Our aim was to improve the rate of on-time transfer summaries to 50% over a four month time period. We performed two PDSA cycles based on feedback from our quality improvement team. In the first cycle, we instituted a daily huddle to increase opportunities for communication about patient transports. In the second cycle, we increased computer access for residents and nurse practitioners preparing the transfer summaries. The on-time summary preparation by residents/nurse practitioners improved to an average of 72% over a nine month period. The same summaries were signed on-time by a neonatologist 26% of the time on average over a nine month period. In conclusion, institution of a daily huddle combined with augmented computer resources significantly increased the percentage of on-time transfer summaries. Current data show a trend toward improved ability to accept patient referrals. Further data collection and analysis is needed to determine the impact of these interventions on access to hospital care for expectant women and newborns in our state. PMID:26734275

  5. Lagrangian Descriptors: A Method for Revealing Phase Space Structures of General Time Dependent Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Mancho, Ana M.; Wiggins, Stephen; Curbelo, Jezabel; Mendoza, Carolina

    2013-11-01

    Lagrangian descriptors are a recent technique that reveals geometrical structures in phase space and is valid for aperiodically time-dependent dynamical systems. We discuss a general methodology for constructing them and a "heuristic argument" that explains why this method is successful. We support this argument by explicit calculations on a benchmark problem. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors against both finite-time Lyapunov exponents (FTLEs) and finite-time averages of certain components of the vector field ("time averages"). In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We thank CESGA for computing facilities. This research was supported by MINECO grants: MTM2011-26696, I-Math C3-0104, ICMAT Severo Ochoa project SEV-2011-0087, and CSIC grant OCEANTECH. SW acknowledges the support of the ONR (Grant No. N00014-01-1-0769).
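    A common arc-length variant of a Lagrangian descriptor (the M function, the integral of the speed along trajectories forward and backward over a finite window) can be sketched for a time-dependent double-gyre flow; the flow, grid, and integration window below are illustrative choices rather than the paper's benchmarks.

```python
# Arc-length Lagrangian descriptor M for a time-dependent double-gyre flow:
# integrate the speed along each trajectory forward and backward over a finite
# window. Sharp ridges and valleys of M mark phase-space structures.
import numpy as np

A, eps, om = 0.1, 0.25, 2.0 * np.pi / 10.0        # assumed flow parameters

def vel(t, x, y):
    f = eps * np.sin(om * t) * x**2 + (1.0 - 2.0 * eps * np.sin(om * t)) * x
    dfdx = 2.0 * eps * np.sin(om * t) * x + (1.0 - 2.0 * eps * np.sin(om * t))
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def descriptor(x0, y0, t0=0.0, tau=15.0, dt=0.05):
    """M(x0, y0): arc length accumulated forward and backward over [t0-tau, t0+tau]."""
    M = np.zeros_like(x0)
    for sign in (+1.0, -1.0):
        x, y, t = x0.copy(), y0.copy(), t0
        for _ in range(int(tau / dt)):
            u, v = vel(t, x, y)                    # simple Euler step
            x, y, t = x + sign * dt * u, y + sign * dt * v, t + sign * dt
            M += dt * np.hypot(u, v)               # accumulate arc length
    return M

xg, yg = np.meshgrid(np.linspace(0.0, 2.0, 201), np.linspace(0.0, 1.0, 101))
M = descriptor(xg, yg)
print("descriptor field:", M.shape, " min", M.min(), " max", M.max())
```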

  6. 5 CFR 842.407 - Proration of annuity for part-time service.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... service is computed in accordance with § 842.403, using the average pay based on the annual rate of basic pay for full-time service. This amount is then multiplied by the proration factor. The result is the...

  7. A comment on Baker et al. 'The time dependence of an atom-vacancy encounter due to the vacancy mechanism of diffusion'

    NASA Astrophysics Data System (ADS)

    Dasenbrock-Gammon, Nathan; Zacate, Matthew O.

    2017-05-01

    Baker et al. derived time-dependent expressions for calculating the average number of jumps per encounter and displacement probabilities for vacancy diffusion in crystal lattice systems with infinitesimal vacancy concentrations. As shown in this work, their formulation is readily expanded to include finite vacancy concentrations, which allows the calculation of concentration-dependent, time-averaged quantities. This is useful because it provides a computationally efficient method to express lineshapes of nuclear spectroscopic techniques through the use of stochastic fluctuation models.

  8. Direct Measurements of Smartphone Screen-Time: Relationships with Demographics and Sleep

    PubMed Central

    Christensen, Matthew A.; Bettencourt, Laura; Kaye, Leanne; Moturu, Sai T.; Nguyen, Kaylin T.; Olgin, Jeffrey E.; Pletcher, Mark J.; Marcus, Gregory M.

    2016-01-01

    Background Smartphones are increasingly integrated into everyday life, but frequency of use has not yet been objectively measured and compared to demographics, health information, and in particular, sleep quality. Aims The aim of this study was to characterize smartphone use by measuring screen-time directly, determine factors that are associated with increased screen-time, and to test the hypothesis that increased screen-time is associated with poor sleep. Methods We performed a cross-sectional analysis in a subset of 653 participants enrolled in the Health eHeart Study, an internet-based longitudinal cohort study open to any interested adult (≥ 18 years). Smartphone screen-time (the number of minutes in each hour the screen was on) was measured continuously via smartphone application. For each participant, total and average screen-time were computed over 30-day windows. Average screen-time specifically during self-reported bedtime hours and sleeping period was also computed. Demographics, medical information, and sleep habits (Pittsburgh Sleep Quality Index–PSQI) were obtained by survey. Linear regression was used to obtain effect estimates. Results Total screen-time over 30 days was a median 38.4 hours (IQR 21.4 to 61.3) and average screen-time over 30 days was a median 3.7 minutes per hour (IQR 2.2 to 5.5). Younger age, self-reported race/ethnicity of Black and "Other" were associated with longer average screen-time after adjustment for potential confounders. Longer average screen-time was associated with shorter sleep duration and worse sleep-efficiency. Longer average screen-times during bedtime and the sleeping period were associated with poor sleep quality, decreased sleep efficiency, and longer sleep onset latency. Conclusions These findings on actual smartphone screen-time build upon prior work based on self-report and confirm that adults spend a substantial amount of time using their smartphones. Screen-time differs across age and race, but is similar across socio-economic strata suggesting that cultural factors may drive smartphone use. Screen-time is associated with poor sleep. These findings cannot support conclusions on causation. Effect-cause remains a possibility: poor sleep may lead to increased screen-time. However, exposure to smartphone screens, particularly around bedtime, may negatively impact sleep. PMID:27829040

  9. Television viewing, computer game playing, and Internet use and self-reported time to bed and time out of bed in secondary-school children.

    PubMed

    Van den Bulck, Jan

    2004-02-01

    To investigate the relationship between the presence of a television set, a gaming computer, and/or an Internet connection in the room of adolescents and television viewing, computer game playing, and Internet use on the one hand, and time to bed, time up, time spent in bed, and overall tiredness in first- and fourth-year secondary-school children on the other hand. A random sample of students from 15 schools in Flanders, Belgium, yielded 2546 children who completed a questionnaire with questions about media presence in bedrooms; volume of television viewing, computer game playing, and Internet use; time to bed and time up on average weekdays and average weekend days; and questions regarding the level of tiredness in the morning, at school, after a day at school, and after the weekend. Children with a television set in their rooms went to bed significantly later on weekdays and weekend days and got up significantly later on weekend days. Overall, they spent less time in bed on weekdays. Children with a gaming computer in their rooms went to bed significantly later on weekdays. On weekdays, they spent significantly less time in bed. Children who watched more television went to bed later on weekdays and weekend days and got up later on weekend days. They spent less time in bed on weekdays. They reported higher overall levels of being tired. Children who spent more time playing computer games went to bed later on weekdays and weekend days and got up later on weekend days. On weekdays, they actually got up significantly earlier. They spent less time in bed on weekdays and reported higher levels of tiredness. Children who spent more time using the Internet went to bed significantly later during the week and during the weekend. They got up later on weekend days. They spent less time in bed during the week and reported higher levels of tiredness. Going out was also significantly related to sleeping later and less. Concerns about media use should not be limited to television. Computer game playing and Internet use are related to sleep behavior as well. Leisure activities that are unstructured seem to be negatively related to good sleep patterns. Imposing more structure (eg, end times) might reduce impact.

  10. Dynamic Average-Value Modeling of Doubly-Fed Induction Generator Wind Energy Conversion Systems

    NASA Astrophysics Data System (ADS)

    Shahab, Azin

    In a Doubly-fed Induction Generator (DFIG) wind energy conversion system, the rotor of a wound-rotor induction generator is connected to the grid via a partial-scale ac/ac power electronic converter, which controls the rotor frequency and speed. In this research, detailed models of the DFIG wind energy conversion system with a Sinusoidal Pulse-Width Modulation (SPWM) scheme and an Optimal Pulse-Width Modulation (OPWM) scheme for the power electronic converter are developed in PSCAD/EMTDC. As computer simulation using the detailed models tends to be computationally intensive, time consuming, and sometimes impractical in terms of speed, two modified approaches (switching-function modeling and average-value modeling) are proposed to reduce the simulation execution time. The results demonstrate that the two proposed approaches reduce the simulation execution time while the simulation results remain close to those obtained using the detailed model simulation.

  11. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers is to develop variant scheduling algorithms to achieve optimality, and such algorithms have shown good performance for task scheduling with regard to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to compute, for each task, the average value of its sorted list of completion times. Then, the maximum of these averages is identified. Finally, the task with the maximum average is allocated to the machine that gives the minimum completion time. The allocated task is then removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
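    One plausible reading of these steps is sketched below; the per-task statistic (the middle of the sorted completion-time list), the expected-time-to-compute matrix, machine ready times, and tie-breaking are all assumptions and may differ from the paper's exact definitions.

```python
# One plausible reading of the Sort-Mid steps described above: for each
# unscheduled task, sort its completion times over all machines and take the
# middle ("mid") value; pick the task with the maximum statistic and assign it
# to the machine with the minimum completion time. The ETC matrix and ready
# times are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(11)
n_tasks, n_machines = 12, 4
etc = rng.uniform(5.0, 50.0, (n_tasks, n_machines))   # expected time to compute

ready = np.zeros(n_machines)                          # machine ready times
unscheduled = set(range(n_tasks))
schedule = []

while unscheduled:
    tasks = sorted(unscheduled)
    # completion time of each unscheduled task on each machine
    completion = ready[None, :] + etc[tasks, :]
    stat = np.median(completion, axis=1)              # middle of the sorted list
    pick = tasks[int(np.argmax(stat))]                # task with maximum statistic
    machine = int(np.argmin(ready + etc[pick]))       # minimum completion time
    ready[machine] += etc[pick, machine]
    schedule.append((pick, machine))
    unscheduled.remove(pick)

makespan = ready.max()
utilization = ready.sum() / (n_machines * makespan)
print("schedule:", schedule)
print("makespan:", makespan, " utilization:", utilization)
```

    Replacing the median with the plain average of the sorted completion times gives another reading of the abstract's wording; the scheduling loop is unchanged.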

  12. Sort-Mid tasks scheduling algorithm in grid computing

    PubMed Central

    Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.

    2014-01-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers is to develop variant scheduling algorithms to achieve optimality, and such algorithms have shown good performance for task scheduling with regard to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to compute, for each task, the average value of its sorted list of completion times. Then, the maximum of these averages is identified. Finally, the task with the maximum average is allocated to the machine that gives the minimum completion time. The allocated task is then removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937

  13. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... your average monthly wage, we consider all the wages, compensation, self-employment income, and deemed... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing...

  14. 20 CFR 404.221 - Computing your average monthly wage.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... your average monthly wage, we consider all the wages, compensation, self-employment income, and deemed... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Computing your average monthly wage. 404.221... DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Average-Monthly-Wage Method of Computing...

  15. 29 CFR 548.300 - Introductory statement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Authorized Basic Rates § 548... has determined that they are substantially equivalent to the straight-time average hourly earnings of...

  16. 29 CFR 548.300 - Introductory statement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... AUTHORIZATION OF ESTABLISHED BASIC RATES FOR COMPUTING OVERTIME PAY Interpretations Authorized Basic Rates § 548... has determined that they are substantially equivalent to the straight-time average hourly earnings of...

  17. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Bilsky, A. V.; Lozhkin, V. A.; Markovich, D. M.; Tokarev, M. P.

    2013-04-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART.

  18. Improvements to Busquet's Non LTE algorithm in NRL's Hydro code

    NASA Astrophysics Data System (ADS)

    Klapisch, M.; Colombant, D.

    1996-11-01

    Implementation of the non-LTE model RADIOM (M. Busquet, Phys. Fluids B, 5, 4191 (1993)) in NRL's RAD2D hydro code in conservative form was reported previously (M. Klapisch et al., Bull. Am. Phys. Soc., 40, 1806 (1995)). While the results were satisfactory, the algorithm was slow and did not always converge. We describe here modifications that address these two shortcomings. The new method is quicker and more stable than the original, and it also gives information about the validity of the fitting. It turns out that the number and distribution of groups in the multigroup diffusion opacity tables, which form the basis for computing radiation effects on the ionization balance in RADIOM, have a large influence on the robustness of the algorithm. These modifications give insight into the algorithm and allow us to check that the obtained average charge state is the true average. In addition, code optimization greatly reduced computing time: the ratio of non-LTE to LTE computing time is now between 1.5 and 2.

  19. Modelling NOX concentrations through CFD-RANS in an urban hot-spot using high resolution traffic emissions and meteorology from a mesoscale model

    NASA Astrophysics Data System (ADS)

    Sanchez, Beatriz; Santiago, Jose Luis; Martilli, Alberto; Martin, Fernando; Borge, Rafael; Quaassdorff, Christina; de la Paz, David

    2017-08-01

    Air quality management requires more detailed studies of air pollution at the urban and local scale over long periods of time. This work focuses on obtaining the spatial distribution of NOx concentration averaged over several days in a heavily trafficked urban area of Madrid (Spain) using a computational fluid dynamics (CFD) model. A methodology based on a weighted average of CFD simulations is applied, computing the time evolution of NOx dispersion as a sequence of steady-state scenarios that take the actual atmospheric conditions into account. The emission inputs are estimated from a traffic emission model, and the meteorological information is derived from a mesoscale model. Finally, the computed concentration map correlates well with 72 passive samplers deployed in the research area. This work reveals the potential of combining urban mesoscale simulations with detailed traffic emissions to provide accurate maps of pollutant concentrations at the microscale using CFD simulations.
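
    The weighted-average step described above reduces to a small computation once the steady-state scenario maps exist. The Python sketch below is a generic illustration under assumed inputs (per-scenario concentration maps and the number of hours each scenario represents); it is not the paper's actual weighting scheme.

```python
import numpy as np

def time_averaged_concentration(scenario_maps, hours_per_scenario):
    """Weighted average of steady-state CFD concentration maps (hedged sketch).

    scenario_maps:      (n_scenarios, ny, nx) steady-state NOx maps, assumed given.
    hours_per_scenario: hours of the study period represented by each scenario,
                        used here as the averaging weight.
    """
    w = np.asarray(hours_per_scenario, dtype=float)
    w /= w.sum()                                     # fraction of time per scenario
    return np.tensordot(w, np.asarray(scenario_maps, dtype=float), axes=1)

# toy usage: three scenarios on a 4 x 5 grid
maps = np.random.rand(3, 4, 5)
avg_map = time_averaged_concentration(maps, hours_per_scenario=[10, 30, 20])
```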

  20. Simulation of Synthetic Jets in Quiescent Air Using Unsteady Reynolds Averaged Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Turkel, Eli

    2006-01-01

    We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for the simulation of a synthetic jet created by a single diaphragm piezoelectric actuator in quiescent air. This configuration was designated as Case 1 for the CFDVAL2004 workshop held at Williamsburg, Virginia, in March 2004. Time-averaged and instantaneous data for this case were obtained at NASA Langley Research Center, using multiple measurement techniques. Computational results for this case using one-equation Spalart-Allmaras and two-equation Menter's turbulence models are presented along with the experimental data. The effects of grid refinement, preconditioning, and time-step variation are also examined in this paper.

  1. Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM

    NASA Astrophysics Data System (ADS)

    Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng

    2015-07-01

    We introduce and investigate the feasibility of a novel iterative blind phase-noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation is performed through the combination of frequency-domain symbol decision-aided estimation and a time-average approximation of the ICI phase noise. An additional initial decision process with a suitable threshold is introduced in order to suppress decision-error symbols. The proposed scheme is shown to be effective in removing ICI for a simulated CO-OFDM system with a 16-QAM modulation format. At the cost of slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at relatively wide laser linewidths and high OSNR.

  2. Enhancements to the Redmine Database Metrics Plug in

    DTIC Science & Technology

    2017-08-01

    management web application has been adopted within the US Army Research Laboratory’s Computational and Information Sciences Directorate as a database...Metrics Plug-in by Terry C Jameson Computational and Information Sciences Directorate, ARL Approved for public... information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and

  3. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
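
    As a complement to the abstract, here is a minimal Python sketch of an exact finite-horizon dynamic program of the kind described above, with a per-step workload cost, a reconfiguration cost, and a position-count constraint; the cost matrices, the feasibility rule, and the fixed initial configuration are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def exact_dp(workload_cost, reconfig_cost, positions, max_positions):
    """Hedged sketch of an exact finite-horizon DP for sector configuration.

    workload_cost:  (T, C) cost of operating configuration c during step t.
    reconfig_cost:  (C, C) cost of switching from configuration i to j.
    positions:      (C,)   number of control positions each configuration uses.
    max_positions:  (T,)   limit on control positions at each time step.
    Returns the optimal total cost and configuration sequence, assuming the
    system starts in configuration 0.
    """
    T, C = workload_cost.shape
    value = np.zeros((T + 1, C))            # value[t, prev] = cost-to-go from step t
    choice = np.zeros((T, C), dtype=int)

    for t in range(T - 1, -1, -1):          # backward induction
        feasible = positions <= max_positions[t]
        for prev in range(C):
            costs = np.where(feasible,
                             workload_cost[t] + reconfig_cost[prev] + value[t + 1],
                             np.inf)
            choice[t, prev] = int(np.argmin(costs))
            value[t, prev] = costs[choice[t, prev]]

    seq, prev = [], 0                       # forward pass: recover the optimal sequence
    for t in range(T):
        prev = choice[t, prev]
        seq.append(prev)
    return value[0, 0], seq

# toy usage: 4 time steps, 3 candidate configurations
rng = np.random.default_rng(1)
cost, seq = exact_dp(workload_cost=rng.uniform(0, 10, (4, 3)),
                     reconfig_cost=np.array([[0, 2, 2], [2, 0, 2], [2, 2, 0]], float),
                     positions=np.array([1, 2, 3]),
                     max_positions=np.array([3, 3, 2, 2]))
```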

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Fuke, E-mail: wufuke@mail.hust.edu.cn; Tian, Tianhai, E-mail: tianhai.tian@sci.monash.edu.au; Rawlings, James B., E-mail: james.rawlings@wisc.edu

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two-time scales, which yields the modified stochastic simulation algorithm (SSA). For the chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in the stochastic chemical kinetics, the CLE is seen as the approximation of the SSA, the limit averaging system can be treated as the approximation of the slow reactions. As an application, we examine the reduction of computation complexity for the gene regulatory networks with two-time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. It demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of the weak convergence.

  5. Short-term effects of playing computer games on attention.

    PubMed

    Tahiroglu, Aysegul Yolga; Celik, Gonca Gul; Avci, Ayse; Seydaoglu, Gulsah; Uzel, Mehtap; Altunbas, Handan

    2010-05-01

    The main aim of the present study is to investigate the short-term cognitive effects of computer games in children with different psychiatric disorders and normal controls. One hundred one children were recruited for the study (aged between 9 and 12 years). All participants played a motor-racing game on the computer for 1 hour. The TBAG form of the Stroop task was administered to all participants twice, before playing and immediately after playing the game. Participants whose posttest scores improved relative to their pretest scores used the computer for an average of 0.67 +/- 1.1 hr/day, whereas average daily computer use was 1.6 +/- 1.4 hr/day and 1.3 +/- 0.9 hr/day for participants with worse or unaltered scores, respectively. According to the regression model, male gender, younger age, duration of daily computer use, and ADHD inattentive type were found to be independent risk factors for worsened posttest scores. Time spent playing computer games can exert a short-term effect on attention as measured by the Stroop test.

  6. Computation of backwater and discharge at width constrictions of heavily vegetated flood plains

    USGS Publications Warehouse

    Schneider, V.R.; Board, J.W.; Colson, B.E.; Lee, F.N.; Druffel, Leroy

    1977-01-01

    The U.S. Geological Survey cooperated with the Federal Highway Administration and the State Highway Departments of Mississippi, Alabama, and Louisiana to develop a proposed method for computing backwater and discharge at width constrictions of heavily vegetated flood plains. Data were collected at 20 single-opening sites for 31 floods. Flood-plain width varied from 4 to 14 times the bridge-opening width. The recurrence intervals of peak discharge ranged from a 2-year flood to greater than a 100-year flood, with a median interval of 6 years. Measured backwater ranged from 0.39 to 3.16 feet. Backwater computed by the present standard Geological Survey method averaged 29 percent less than the measured values, and that computed by the currently used Federal Highway Administration method averaged 47 percent less than the measured values. Discharge computed by the Survey method averaged 21 percent more than the measured values. Analysis of the data showed that the flood-plain widths and the Manning's roughness coefficients are larger than those used to develop the standard methods. A method to compute backwater and discharge more accurately was developed. The difference between the contracted and natural water-surface profiles computed using standard step-backwater procedures is defined as backwater. The energy loss term in the step-backwater procedure is computed as the product of the geometric mean of the energy slopes and the flow distance in the reach, which was derived from potential flow theory. The mean error was 1 percent when using the proposed method for computing backwater and 3 percent for computing discharge. (Woodard-USGS)
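
    To make the energy-loss relation concrete, here is a small Python sketch of the term described above, the geometric mean of the two cross-section energy slopes multiplied by the flow distance through the reach; the variable names and the use of feet for the worked numbers are illustrative assumptions.

```python
import math

def reach_friction_loss(slope_up, slope_down, flow_distance):
    """Energy loss across a reach (hedged sketch of the term described above):
    the geometric mean of the two section energy slopes times the flow distance.
    Units are whatever consistent units the slopes and distance are given in."""
    return math.sqrt(slope_up * slope_down) * flow_distance

# toy usage: energy slopes of 0.0008 and 0.0012 over a 250 ft reach
print(reach_friction_loss(0.0008, 0.0012, 250.0))  # ~0.24 ft of head loss
```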

  7. Computed tomographic analysis of temporal maxillary stability and pterygomaxillary generate formation following pediatric Le Fort III distraction advancement.

    PubMed

    Hopper, Richard A; Sandercoe, Gavin; Woo, Albert; Watts, Robyn; Kelley, Patrick; Ettinger, Russell E; Saltzman, Babette

    2010-11-01

    Le Fort III distraction requires generation of bone in the pterygomaxillary region. The authors performed retrospective digital analysis on temporal fine-cut computed tomographic images to quantify both radiographic evidence of pterygomaxillary region bone formation and relative maxillary stability. Fifteen patients with syndromic midface hypoplasia were included in the study. The average age of the patients was 8.7 years; 11 had either Crouzon or Apert syndrome. The average displacement of the maxilla during distraction was 16.2 mm (range, 7 to 31 mm). Digital analysis was performed on fine-cut computed tomographic scans before surgery, at device removal, and at annual follow-up. Seven patients also had mid-consolidation computed tomographic scans. Relative maxillary stability and density of radiographic bone in the pterygomaxillary region were calculated between each scan. There was no evidence of clinically significant maxillary relapse, rotation, or growth between the end of consolidation and 1-year follow-up, other than a relatively small 2-mm subnasal maxillary vertical growth. There was an average radiographic ossification of 0.5 mm/mm advancement at the time of device removal, with a 25th percentile value of 0.3 mm/mm. The time during consolidation that each patient reached the 25th percentile of pterygomaxillary region bone density observed in this series of clinically stable advancements ranged from 1.3 to 9.8 weeks (average, 3.7 weeks). There was high variability in the amount of bone formed in the pterygomaxillary region associated with clinical stability of the advanced Le Fort III segment. These data suggest that a subsection of patients generate the minimal amount of pterygomaxillary region bone formation associated with advancement stability as early as 4 weeks into consolidation.

  8. [Evaluation of production and clinical working time of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays for complete denture].

    PubMed

    Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X

    2017-02-18

    To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensional printed custom trays and conventional custom trays, and to demonstrate the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays in clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this prospective, single-blind, randomized, self-controlled clinical trial. Two custom trays were fabricated for each participant. One custom tray was fabricated using the functional suitable denture (FSD) system through a CAD/CAM process, and the other was fabricated manually using conventional methods. Final impressions were then taken using both custom trays, and these final impressions were used to fabricate complete dentures. The technician production time of the custom trays and the clinical working time for taking the final impression were recorded. The average times spent fabricating the three-dimensional printed custom trays using the FSD system and fabricating the conventional custom trays manually were (28.6±2.9) min and (31.1±5.7) min, respectively. The average times spent making the final impression with the three-dimensional printed custom trays and with the conventional custom trays were (23.4±11.5) min and (25.4±13.0) min, respectively. There was a significant difference in both the technician fabrication time and the clinical working time between the three-dimensional printed custom trays made with the FSD system and the conventional custom trays fabricated manually (P<0.05). The average times spent fabricating the three-dimensional printed custom trays and making the final impression with them are less than those of the conventional custom trays fabricated manually, which shows that the FSD three-dimensional printed custom trays are less time-consuming in both the clinical and laboratory processes than the conventional custom trays. In addition, when custom trays are manufactured by three-dimensional printing, there is no need to pour a preliminary cast after taking the primary impression, so impression material and model material can be saved. For complete denture restoration, manufacturing custom trays using the FSD system is worth popularizing.

  9. Practical Algorithms for the Longest Common Extension Problem

    NASA Astrophysics Data System (ADS)

    Ilie, Lucian; Tinta, Liviu

    The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
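
    The first, preprocessing-free algorithm amounts to a direct character-by-character comparison. The following Python sketch shows that idea in its simplest form; it is an illustration of the general approach only, not the authors' optimized implementation.

```python
def longest_common_extension(s, i, j):
    """Direct, preprocessing-free LCE: compare characters starting at
    positions i and j of s until they differ or the string ends."""
    k = 0
    n = len(s)
    while i + k < n and j + k < n and s[i + k] == s[j + k]:
        k += 1
    return k

# toy usage
print(longest_common_extension("abracadabra", 0, 7))  # 4, i.e. "abra"
```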

  10. Intelligent Systems for Assessing Aging Changes: Home-Based, Unobtrusive, and Continuous Assessment of Aging

    PubMed Central

    Maxwell, Shoshana A.; Mattek, Nora; Hayes, Tamara L.; Dodge, Hiroko; Pavel, Misha; Jimison, Holly B.; Wild, Katherine; Boise, Linda; Zitzelberger, Tracy A.

    2011-01-01

    Objectives. To describe a longitudinal community cohort study, Intelligent Systems for Assessing Aging Changes, that has deployed an unobtrusive home-based assessment platform in many seniors' homes in the community. Methods. Several types of sensors have been installed in the homes of 265 elderly persons for an average of 33 months. Metrics assessed by the sensors include total daily activity, time out of home, and walking speed. Participants were given a computer as well as training, and computer usage was monitored. Participants are assessed annually with health and function questionnaires, physical examinations, and neuropsychological testing. Results. Mean age was 83.3 years, mean years of education was 15.5, and 73% of the cohort were women. During a 4-week snapshot, participants left their home twice a day on average for a total of 208 min per day. Mean in-home walking speed was 61.0 cm/s. Participants spent 43% of days on the computer averaging 76 min per day. Discussion. These results demonstrate for the first time the feasibility of engaging seniors in a large-scale deployment of in-home activity assessment technology and the successful collection of these activity metrics. We plan to use this platform to determine if continuous unobtrusive monitoring may detect incident cognitive decline. PMID:21743050

  11. Effects of diluting medium and holding time on sperm motility analysis by CASA in ram.

    PubMed

    Mostafapor, Somayeh; Farrokhi Ardebili, Farhad

    2014-01-01

    The aim of this study was to evaluate the effects of diluting medium and holding time on various motility parameters using computer-assisted sperm analysis (CASA). Semen samples were collected from three Ghezel rams. Samples were diluted in seminal plasma (SP), phosphate-buffered saline (PBS) containing 1% bovine serum albumin (BSA), and Bioexcell. The motility parameters computed and recorded by CASA included curvilinear velocity (VCL), straight-line velocity (VSL), average path velocity (VAP), straightness (STR), linearity (LIN), amplitude of lateral head displacement (ALH), and beat cross frequency (BCF). In all diluents, the averages of all three sperm velocity parameters decreased over time, but the decline was steeper in SP. Average ALH differed significantly among the diluents, being higher in Bioexcell than in SP and PBS. Average LIN of sperm diluted in Bioexcell was lower than in the other two diluents at all three time points. The motility parameters of sperm diluted in Bioexcell and PBS differed considerably from those of sperm diluted in SP. Based on these results, Bioexcell preserves sperm motility better than the other diluents; however, since SP is the physiological environment for sperm, motility parameters evaluated in Bioexcell and PBS may not be directly comparable with those evaluated in SP.
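
    For readers unfamiliar with the CASA kinematic parameters named above, the sketch below shows the standard textbook definitions of VCL, VSL, VAP, LIN, and STR computed from a sampled head-position track; it is a generic illustration under assumed inputs, not the CASA software's implementation.

```python
import numpy as np

def casa_velocity_params(track_xy, dt, smooth_window=5):
    """Standard CASA-style kinematics from a sampled sperm-head track (sketch).

    track_xy: (N, 2) head positions (e.g. in micrometres), sampled every dt seconds.
    """
    track = np.asarray(track_xy, dtype=float)
    total_time = (len(track) - 1) * dt

    # VCL: length of the point-to-point (curvilinear) path per unit time
    vcl = np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1)) / total_time
    # VSL: straight-line distance from first to last point per unit time
    vsl = np.linalg.norm(track[-1] - track[0]) / total_time
    # VAP: length of a smoothed ("average") path per unit time
    kernel = np.ones(smooth_window) / smooth_window
    avg_path = np.column_stack([np.convolve(track[:, k], kernel, mode="valid")
                                for k in range(2)])
    vap = (np.sum(np.linalg.norm(np.diff(avg_path, axis=0), axis=1))
           / ((len(avg_path) - 1) * dt))

    lin = vsl / vcl   # linearity
    stra = vsl / vap  # straightness
    return vcl, vsl, vap, lin, stra

# toy usage: a synthetic wiggly track sampled at 50 Hz
t = np.arange(0, 1, 0.02)
track = np.column_stack([40 * t, 3 * np.sin(2 * np.pi * 2 * t)])
print(casa_velocity_params(track, dt=0.02))
```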

  12. Highly-resolved numerical simulations of bed-load transport in a turbulent open-channel flow

    NASA Astrophysics Data System (ADS)

    Vowinckel, Bernhard; Kempe, Tobias; Nikora, Vladimir; Jain, Ramandeep; Fröhlich, Jochen

    2015-11-01

    The study presents the analysis of phase-resolving Direct Numerical Simulations of a horizontal turbulent open-channel flow laden with a large number of spherical particles. These particles have a mobility close to their threshold of incipient motion and are transported in bed-load mode. The coupling of the fluid phase with the particles is realized by an Immersed Boundary Method. The Double-Averaging Methodology is applied for the first time, convoluting the data into a handy set of quantities averaged in time and space to describe the most prominent flow features. In addition, a systematic study elucidates the impact of mobility and sediment supply on the pattern formation of particle clusters in a very large computational domain. A detailed description of fluid quantities links the developed particle patterns to the enhancement of turbulence and to a modified hydraulic resistance. Conditional averaging is applied to erosion events, providing the processes involved in incipient particle motion. Furthermore, the detection of moving particle clusters as well as their surrounding flow field is addressed by a moving-frame analysis. Funded by German Research Foundation (DFG), project FR 1593/5-2, computational time provided by ZIH Dresden, Germany, and JSC Juelich, Germany.

  13. Time-averaged current analysis of a thunderstorm using ground-based measurements

    NASA Astrophysics Data System (ADS)

    Driscoll, Kevin T.; Blakeslee, Richard J.; Koshak, William J.

    1994-05-01

    The amount of upward current provided to the ionosphere by a thunderstorm that appeared over the Kennedy Space Center (KSC) on July 11, 1978, is reexamined using an analytic equation that describes a bipolar thunderstorm's current contribution to the global circuit in terms of its generator current, lightning currents, the altitudes of its charge centers, and the conductivity profile of the atmosphere. Ground-based measurements, which were obtained from a network of electric field mills positioned at various distances from the thunderstorm, were used to characterize the electrical activity inside the thundercloud. The location of the lightning discharges, the type of lightning, and the amount of charge neutralized during this thunderstorm were computed through a least squares inversion of the measured changes in the electric fields following each lightning discharge. These measurements provided the information necessary to implement the analytic equation, and consequently, a time-averaged estimate of this thunderstorm's current contribution to the global circuit was calculated. From these results the amount of conduction current supplied to the ionosphere by this small thunderstorm was computed to be less than 25% of the time-averaged generator current that flowed between the two vertically displaced charge centers.

  14. Minimizing the extra-oral time in autogeneous tooth transplantation: use of computer-aided rapid prototyping (CARP) as a duplicate model tooth.

    PubMed

    Lee, Seung-Jong; Kim, Euiseong

    2012-08-01

    Maintaining healthy periodontal ligament cells on the root surface of the donor tooth and achieving intimate surface contact between the donor tooth and the recipient bone are the key factors for successful tooth transplantation. To achieve these goals, a duplicate donor tooth model fabricated using the computer-aided rapid prototyping (CARP) technique can be utilized to reduce the extra-oral time. Briefly, a three-dimensional Digital Imaging and Communications in Medicine (DICOM) image with the real dimensions of the donor tooth was obtained from a computed tomography (CT) scan, and a life-sized resin tooth model was fabricated. Dimensional errors between the real tooth, the 3D CT image model, and the CARP model were calculated, and the extra-oral time was recorded during autotransplantation of the teeth. The average extra-oral time was 7 min 25 sec, with a range of immediate to 25 min, in cases in which extra-oral root canal treatment was not performed, and 9 min 15 sec when extra-oral root canal treatment was performed. The average radiographic distances between the root surface and the alveolar bone were 1.17 mm and 1.35 mm at the mesial cervix and apex, and 0.98 mm and 1.26 mm at the distal cervix and apex. When the dimensional errors between the real tooth, the 3D CT image model, and the CARP model were measured in cadavers, the average absolute error between the real teeth and the CARP model was 0.291 mm. These data indicate that CARP may be of value in minimizing the extra-oral time and the gap between the donor tooth and the recipient alveolar bone in tooth transplantation.

  15. USSR and Eastern Europe Scientific Abstracts, Cybernetics, Computers and Automation Technology, Number 29.

    DTIC Science & Technology

    1978-01-17

    approach to designing computers: Formal mathematical methods were applied and computers themselves began to be widely used in designing other...capital, labor resources and the funds of consumers. Analysis of the model indicates that at the present time the average complexity of production of...ALGORITHMIC COMPLETENESS AND COMPLEXITY OF MICROPROGRAMS Kiev KIBERNETIKA in Russian No 3, May/Jun 77 pp 1-15 manuscript received 22 Dec 76 G0LUNK0V

  16. A users manual for a computer program which calculates time optimal geocentric transfers using solar or nuclear electric and high thrust propulsion

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.; Edelbaum, T. N.; Malchow, H. L.

    1974-01-01

    This manual is a guide for using a computer program which calculates time optimal trajectories for high- and low-thrust geocentric transfers. Either SEP or NEP may be assumed and a one or two impulse, fixed total delta V, initial high thrust phase may be included. Also a single impulse of specified delta V may be included after the low thrust phase. The low thrust phase utilizes equinoctial orbital elements to avoid the classical singularities and Kryloff-Boguliuboff averaging to help insure more rapid computation time. The program is written in FORTRAN 4 in double precision for use on an IBM 360 computer. The manual includes a description of the problem treated, input/output information, examples of runs, and source code listings.

  17. A Computational Fluid-Dynamics Assessment of the Improved Performance of Aerodynamic Rain Gauges

    NASA Astrophysics Data System (ADS)

    Colli, Matteo; Pollock, Michael; Stagnaro, Mattia; Lanza, Luca G.; Dutton, Mark; O'Connell, Enda

    2018-02-01

    The airflow surrounding any catching-type rain gauge when impacted by wind is deformed by the presence of the gauge body, resulting in the acceleration of wind above the orifice of the gauge, which deflects raindrops and snowflakes away from the collector (the wind-induced undercatch). The method of mounting a gauge with the collector at or below the level of the ground, or the use of windshields to mitigate this effect, is often not practicable. The physical shape of a gauge has a significant impact on its collection efficiency. In this study, we show that appropriate "aerodynamic" shapes are able to reduce the deformation of the airflow, which can reduce undercatch. We have employed computational fluid-dynamic simulations to evaluate the time-averaged airflow realized around "aerodynamic" rain gauge shapes when impacted by wind. Terms of comparison are provided by the results obtained for two standard "conventional" rain gauge shapes. The simulations have been run for different wind speeds and are based on a time-averaged Reynolds-Averaged Navier-Stokes model. The shape of the aerodynamic gauges is shown to have a positive impact on the time-averaged airflow patterns observed around the orifice compared to the conventional shapes. Furthermore, the turbulent air velocity fields for the aerodynamic shapes present "recirculating" structures, which may improve the particle-catching capabilities of the gauge collector.

  18. 47 CFR 69.153 - Presubscribed interexchange carrier charge (PICC).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CARRIER SERVICES (CONTINUED) ACCESS CHARGES Computation of Charges for Price Cap Local Exchange Carriers... to recover revenues totaling Average Price Cap CMT Revenues per Line month times the number of base...

  19. OSA Imaging and Applied Optics Congress Support

    DTIC Science & Technology

    2017-02-16

    ranged from theoretical to experimental demonstration and verification of the latest advances in computational imaging research . This meeting covered...Approved OMB No. 0704-0188 The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time ...Applied Optics Congress was a four-day meeting that encompassed the latest advances in computational imaging research . emphasizing integration of

  20. 78 FR 59775 - Blueberry Promotion, Research and Information Order; Assessment Rate Increase

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-30

    ... demand. \\6\\ The econometric model used statistical methods with time series data to measure how strongly... been over 15 times greater than the costs. At the opposite end of the spectrum in the supply response, the average BCR was computed to be 5.36, implying that the benefits of the USHBC were over five times...

  1. Gravity and magma induced spreading of Mount Etna volcano revealed by satellite radar interferometry

    NASA Technical Reports Server (NTRS)

    Lundgren, P.; Casu, F.; Manzo, M.; Pepe, A.; Berardino, P.; Sansosti, E.; Lanari, R.

    2004-01-01

    Mount Etna underwent a cycle of eruptive activity over the past ten years. Here we compute ground displacement maps and deformation time series from more than 400 radar interferograms to reveal Mount Etna's average and time varying surface deformation from 1992 to 2001.

  2. A Computer Program for the Generation of ARIMA Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Noles, Keith O.

    1977-01-01

    The autoregressive integrated moving average (ARIMA) model has been applied to time series data in psychological and educational research. A program is described that generates ARIMA data of a known order. The program enables researchers to explore statistical properties of ARIMA data and simulate systems producing time-dependent observations.…
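
    As an illustration of the idea (not the program described in the abstract), the following Python sketch generates ARIMA(p, d, q) data of a known order by simulating an ARMA recursion and then integrating it d times; the burn-in length and the coefficient values in the usage line are arbitrary choices.

```python
import numpy as np

def simulate_arima(phi, theta, d, n, sigma=1.0, seed=0):
    """Generate n observations from an ARIMA(p, d, q) process (hedged sketch).

    phi:   AR coefficients (length p)
    theta: MA coefficients (length q)
    d:     number of times the simulated ARMA series is summed (integrated)
    """
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(theta)
    burn = 200                               # discard the start-up transient
    eps = rng.normal(0.0, sigma, n + burn + q)
    x = np.zeros(n + burn)
    for t in range(n + burn):
        ar = sum(phi[i] * x[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(theta[j] * eps[t + q - 1 - j] for j in range(q))
        x[t] = ar + ma + eps[t + q]          # current shock plus AR and MA terms
    x = x[burn:]
    for _ in range(d):                       # integrate d times
        x = np.cumsum(x)
    return x

# toy usage: ARIMA(1, 1, 1) series of length 500
series = simulate_arima(phi=[0.6], theta=[0.3], d=1, n=500)
```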

  3. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    NASA Astrophysics Data System (ADS)

    Goluskin, David

    2018-04-01

    We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of x y^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.

  4. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless..., average terrain elevation must be calculated by computer using elevations from a 30 second point or better..., if the results differ significantly from the computer derived averages. (a) Radial average terrain...

  5. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless..., average terrain elevation must be calculated by computer using elevations from a 30 second point or better..., if the results differ significantly from the computer derived averages. (a) Radial average terrain...

  6. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    NASA Astrophysics Data System (ADS)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination, and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo procedure, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from a peaked to a spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease in performance is found above Tc independently of the workload. The globally optimized computational resource allocation and network routing define a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
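
    To make the resource-assignment exploration concrete, here is a toy Python sketch of a Metropolis Monte Carlo search over task-to-node assignments, where total latency plays the role of the energy and moves are accepted with the usual exp(-Δ/T) rule; the latency function, the move set, and all parameters are illustrative assumptions rather than the model used in the paper.

```python
import math
import random

def metropolis_assignment(tasks, nodes, latency, temperature, steps=10_000, seed=0):
    """Metropolis Monte Carlo over task-to-node assignments (hedged toy sketch).

    latency(assignment) is an assumed stand-in for the paper's
    network/computation model and returns the quantity to minimize."""
    rng = random.Random(seed)
    assign = [rng.randrange(nodes) for _ in range(tasks)]
    energy = latency(assign)
    for _ in range(steps):
        t = rng.randrange(tasks)
        old = assign[t]
        assign[t] = rng.randrange(nodes)        # propose moving one task
        new_energy = latency(assign)
        delta = new_energy - energy
        if delta <= 0 or rng.random() < math.exp(-delta / temperature):
            energy = new_energy                 # accept the move
        else:
            assign[t] = old                     # reject: restore previous assignment
    return assign, energy

# toy usage: "latency" taken as the load imbalance across 4 nodes
def imbalance(assign, nodes=4):
    loads = [assign.count(n) for n in range(nodes)]
    return max(loads) - min(loads)

print(metropolis_assignment(tasks=20, nodes=4, latency=imbalance, temperature=0.5))
```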

  7. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations

    NASA Astrophysics Data System (ADS)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-01

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  8. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    PubMed

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  9. Reinforcement learning for resource allocation in LEO satellite networks.

    PubMed

    Usaha, Wipawee; Barria, Javier A

    2007-06-01

    In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibited as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.

  10. Noise reduction in single time frame optical DNA maps

    PubMed Central

    Müller, Vilhelm; Westerlund, Fredrik

    2017-01-01

    In optical DNA mapping technologies sequence-specific intensity variations (DNA barcodes) along stretched and stained DNA molecules are produced. These “fingerprints” of the underlying DNA sequence have a resolution of the order one kilobasepairs and the stretching of the DNA molecules are performed by surface adsorption or nano-channel setups. A post-processing challenge for nano-channel based methods, due to local and global random movement of the DNA molecule during imaging, is how to align different time frames in order to produce reproducible time-averaged DNA barcodes. The current solutions to this challenge are computationally rather slow. With high-throughput applications in mind, we here introduce a parameter-free method for filtering a single time frame noisy barcode (snap-shot optical map), measured in a fraction of a second. By using only a single time frame barcode we circumvent the need for post-processing alignment. We demonstrate that our method is successful at providing filtered barcodes which are less noisy and more similar to time averaged barcodes. The method is based on the application of a low-pass filter on a single noisy barcode using the width of the Point Spread Function of the system as a unique, and known, filtering parameter. We find that after applying our method, the Pearson correlation coefficient (a real number in the range from -1 to 1) between the single time-frame barcode and the time average of the aligned kymograph increases significantly, roughly by 0.2 on average. By comparing to a database of more than 3000 theoretical plasmid barcodes we show that the capabilities to identify plasmids is improved by filtering single time-frame barcodes compared to the unfiltered analogues. Since snap-shot experiments and computational time using our method both are less than a second, this study opens up for high throughput optical DNA mapping with improved reproducibility. PMID:28640821
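
    The core filtering idea described above can be sketched very compactly. The Python example below low-pass filters a single noisy time-frame barcode using the known width of the system's Point Spread Function as the only filtering parameter; the choice of a Gaussian smoothing kernel is an assumed stand-in for the paper's exact filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filter_single_frame_barcode(intensity, psf_sigma_px):
    """Parameter-free smoothing of a single-frame barcode (hedged sketch):
    the only 'knob' is the PSF width of the imaging system, in pixels."""
    return gaussian_filter1d(np.asarray(intensity, dtype=float), sigma=psf_sigma_px)

# toy usage: a noisy 1D barcode and a PSF of ~3 pixels standard deviation
noisy = np.random.rand(512)
smoothed = filter_single_frame_barcode(noisy, psf_sigma_px=3.0)
```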

  11. Propagation of gaseous detonation waves in a spatially inhomogeneous reactive medium

    NASA Astrophysics Data System (ADS)

    Mi, XiaoCheng; Higgins, Andrew J.; Ng, Hoi Dick; Kiyanda, Charles B.; Nikiforakis, Nikolaos

    2017-05-01

    Detonation propagation in a compressible medium wherein the energy release has been made spatially inhomogeneous is examined via numerical simulation. The inhomogeneity is introduced via step functions in the reaction progress variable, with the local value of energy release correspondingly increased so as to maintain the same average energy density in the medium and thus a constant Chapman-Jouguet (CJ) detonation velocity. A one-step Arrhenius rate governs the rate of energy release in the reactive zones. The resulting dynamics of a detonation propagating in such systems with one-dimensional layers and two-dimensional squares are simulated using a Godunov-type finite-volume scheme. The resulting wave dynamics are analyzed by computing the average wave velocity and one-dimensional averaged wave structure. In the case of sufficiently inhomogeneous media wherein the spacing between reactive zones is greater than the inherent reaction zone length, average wave speeds significantly greater than the corresponding CJ speed of the homogenized medium are obtained. If the shock transit time between reactive zones is less than the reaction time scale, then the classical CJ detonation velocity is recovered. The spatiotemporal averaged structure of the waves in these systems is analyzed via a Favre-averaging technique, with terms associated with the thermal and mechanical fluctuations being explicitly computed. The analysis of the averaged wave structure identifies the super-CJ detonations as weak detonations owing to the existence of mechanical nonequilibrium at the effective sonic point embedded within the wave structure. The correspondence of the super-CJ behavior identified in this study with real detonation phenomena that may be observed in experiments is discussed.

  12. Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation.

    PubMed

    Safdari, Hadiseh; Cherstvy, Andrey G; Chechkin, Aleksei V; Bodrova, Anna; Metzler, Ralf

    2017-01-01

    We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation, both at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, for superdiffusive UDSBM the ballistic regime is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken under what conditions the overdamped limit indeed provides a correct description, even in the long time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense in this nonstationary system is broken, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.

  13. Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation

    NASA Astrophysics Data System (ADS)

    Safdari, Hadiseh; Cherstvy, Andrey G.; Chechkin, Aleksei V.; Bodrova, Anna; Metzler, Ralf

    2017-01-01

    We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation, both at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, for superdiffusive UDSBM the ballistic regime is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken under what conditions the overdamped limit indeed provides a correct description, even in the long time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense in this nonstationary system is broken, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.
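
    The two central observables discussed above can be computed with a few lines of code. The following Python sketch evaluates the ensemble-averaged MSD and the time-averaged MSD from a set of sampled trajectories; the array layout and the single-lag convention are illustrative assumptions, and the sketch is not tied to the UDSBM model itself.

```python
import numpy as np

def msd_and_tamsd(trajectories, dt, lag):
    """Ensemble-averaged MSD and single-lag time-averaged MSD (hedged sketch).

    trajectories: (n_particles, n_steps) array of 1D positions sampled every dt.
    lag:          lag index (lag * dt is the lag time) for the time-averaged MSD.
    """
    x = np.asarray(trajectories, dtype=float)
    # ensemble-averaged MSD as a function of time, <(x(t) - x(0))^2>
    emsd = np.mean((x - x[:, :1]) ** 2, axis=0)
    # time-averaged MSD at lag*dt, averaged along each individual trajectory;
    # the spread of these values across trajectories quantifies nonergodicity
    tamsd = np.mean((x[:, lag:] - x[:, :-lag]) ** 2, axis=1)
    return emsd, tamsd
```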

  14. Nordic Sea Level - Analysis of PSMSL RLR Tide Gauge data

    NASA Astrophysics Data System (ADS)

    Knudsen, Per; Andersen, Ole

    2015-04-01

    Tide gauge data from the Nordic region covering the period from 1920 to 2000 are evaluated. 63 stations having RLR data for at least 40 years have been used. Each tide gauge record was averaged to annual values after the mean seasonal signal was removed from the monthly averages. Some stations lack data, especially before around 1950. Hence, to compute representative sea level trends for the 1920-2000 period, a procedure for filling the voids with estimated sea level values is needed. To fill the voids in the tide gauge records, a reconstruction method that utilizes EOFs in an iterative manner was applied. Subsequently, the trends were computed. The estimated trends range from about -8 mm/year to 2 mm/year, reflecting both post-glacial uplift and sea level rise. An evaluation of the first EOFs shows that the first EOF clearly describes the trends in the time series. EOF #2 and #3 describe differences in the inter-annual sea level variability within the Baltic Sea and differences between the Baltic and the North Atlantic / Norwegian seas, respectively.
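
    As a rough illustration of the gap-filling and trend steps described above, here is a minimal Python sketch of an iterative EOF (SVD-based) infill followed by least-squares trend estimation; this is a generic DINEOF-style simplification under assumed inputs, not the authors' exact reconstruction procedure.

```python
import numpy as np

def eof_infill(data, n_modes=3, n_iter=50):
    """Iteratively fill missing values using a truncated EOF reconstruction (sketch).

    data: (n_years, n_stations) annual sea level anomalies with NaN for voids.
    """
    x = np.array(data, dtype=float)
    mask = np.isnan(x)
    x[mask] = 0.0                                    # initial guess for the voids
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
        x[mask] = recon[mask]                        # update only the missing values
    return x

def station_trends(filled, years):
    """Linear trend at each station (e.g. mm/yr) via least squares."""
    return np.polyfit(years, filled, deg=1)[0]
```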

  15. Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques.

    PubMed

    Aquino, Arturo; Gegundez-Arias, Manuel Emilio; Marin, Diego

    2010-11-01

    Optic disc (OD) detection is an important step in developing systems for automated diagnosis of various serious ophthalmic pathologies. This paper presents a new template-based methodology for segmenting the OD from digital retinal images. This methodology uses morphological and edge detection techniques followed by the Circular Hough Transform to obtain a circular OD boundary approximation. It requires a pixel located within the OD as initial information. For this purpose, a location methodology based on a voting-type algorithm is also proposed. The algorithms were evaluated on the 1200 images of the publicly available MESSIDOR database. The location procedure succeeded in 99% of cases, taking an average computational time of 1.67 s with a standard deviation of 0.14 s. On the other hand, the segmentation algorithm achieved an average common-area overlap between automated segmentations and true OD regions of 86%. The average computational time was 5.69 s with a standard deviation of 0.54 s. Moreover, a discussion of the advantages and disadvantages of the models most commonly used for OD segmentation is also presented in this paper.
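
    The final boundary-approximation step described above can be illustrated with OpenCV's Circular Hough Transform. The sketch below is a generic example under assumed preprocessing and radius limits; it is not the paper's pipeline or parameter choices.

```python
import cv2

def approximate_od_boundary(green_channel, min_r=40, max_r=90):
    """Circular OD boundary approximation via the Circular Hough Transform (sketch).

    green_channel: 8-bit single-channel fundus image (assumed input).
    Returns (x, y, radius) of the strongest circle, or None if none is found.
    """
    # morphological closing to suppress blood vessels before circle detection
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    smoothed = cv2.morphologyEx(green_channel, cv2.MORPH_CLOSE, kernel)
    # HoughCircles runs its own internal edge detection (param1 = Canny threshold)
    circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                               param1=80, param2=20,
                               minRadius=min_r, maxRadius=max_r)
    return None if circles is None else circles[0, 0]
```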

  16. Implementation and validation of a wake model for low-speed forward flight

    NASA Technical Reports Server (NTRS)

    Komerath, Narayanan M.; Schreiber, Olivier A.

    1987-01-01

    The computer implementation and the calculation of the induced velocities produced by a wake model consisting of a trailing vortex system defined from a prescribed time-averaged downwash distribution are detailed. Induced velocities are computed by approximating each spiral turn by a pair of large straight vortex segments positioned at critical points relative to where the induced velocity is required. A remainder term for the rest of the spiral is added. This approach reduces computation time compared to classical models in which each spiral turn is broken down into small straight vortex segments. The model includes features such as harmonic variation of circulation, downwash outside the blade and/or outside the tip path plane, velocity induced by blade bound vorticity with harmonic variation of circulation, and time averaging. The influence of various options and parameters on the results is investigated, and the results are compared to experimental field measurements, with which reasonable agreement is obtained. The capabilities of the model as well as its possible extensions are studied. The performance of the model in predicting the recently acquired NASA Langley inflow database for a four-bladed rotor is compared to that of the Scully Free Wake code, a well-established program that requires much greater computational resources. It is found that the two codes predict the experimental data with essentially the same accuracy and show the same trends.

  17. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor. Copyright © 2016 the American Physiological Society.
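
    The surrogate-modelling idea described above (train a fast regressor on synthetic anatomies, with the physics-based solver providing the targets) can be sketched generically in Python. The feature names, the placeholder training data, and the gradient-boosting model family below are all illustrative assumptions; the paper's own model and features differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
# hypothetical geometric features of synthetic coronary lesions (assumptions)
features = np.column_stack([
    rng.uniform(0.2, 0.9, n),    # e.g. degree of stenosis
    rng.uniform(1.5, 4.5, n),    # e.g. reference lumen diameter in mm
    rng.uniform(5.0, 40.0, n),   # e.g. lesion length in mm
])
# placeholder targets; in the approach described above these would come from
# running the physics-based (CFD) model on each synthetic anatomy
ffr_from_physics_model = rng.uniform(0.5, 1.0, n)

model = GradientBoostingRegressor().fit(features, ffr_from_physics_model)
ffr_prediction = model.predict(features[:1])   # near real-time evaluation at query time
```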

  18. Pornography consumption, sexual experiences, lifestyles, and self-rated health among male adolescents in Sweden.

    PubMed

    Mattebo, Magdalena; Tydén, Tanja; Häggström-Nordin, Elisabet; Nilsson, Kent W; Larsson, Margareta

    2013-09-01

    To describe patterns of pornography use among high school boys and to investigate differences between frequent, average, and nonfrequent users of pornography with respect to sexual experiences, lifestyles, and self-rated health. A population-based classroom survey among 16-year-old boys (n = 477), from 53 randomly selected high school classes in 2 towns in mid-Sweden. Almost all boys, 96% (n = 453), had watched pornography. Frequent users of pornography (everyday) (10%, n = 47) differed from average users (63%, n = 292) and nonfrequent users (27%, n = 126). Frequent users versus average users and nonfrequent users had more sexual experiences, such as one night stands (45, 32, 25%, respectively) and sex with friends more than 10 times (13, 10, 2%). A higher proportion of frequent users spent more than 10 straight hours at the computer several times a week (32, 5, 8%) and reported more relationship problems with peers (38, 22, 21%), truancy at least once a week (11, 6, 5%), obesity (13, 3, 3%), use of oral tobacco (36, 29, 20%), and use of alcohol (77, 70, 52%) versus average and nonfrequent users. One third of frequent users watched more pornography than they actually wanted. There were no differences between the groups regarding physical and psychological self-rated health. The boys, defined as frequent users of pornography, were more sexually experienced, spent more time at the computer, and reported an unhealthier lifestyle compared with average and nonfrequent users. No differences regarding self-rated health were detected even though obesity was twice as common among frequent users.

  19. Improving the Reliability of Student Scores from Speeded Assessments: An Illustration of Conditional Item Response Theory Using a Computer-Administered Measure of Vocabulary.

    PubMed

    Petscher, Yaacov; Mitchell, Alison M; Foorman, Barbara R

    2015-01-01

    A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate the ability of an individual. Considering that tests of passage and list fluency are being adapted to a computer administration format, it is possible that accounting for individual differences in response times may be an increasingly feasible option to strengthen the precision of individual scores. The present research evaluated the differential reliability of scores when using classical test theory and item response theory as compared to a conditional item response model which includes response time as an item parameter. Results indicated that the precision of student ability scores increased by an average of 5 % when using the conditional item response model, with greater improvements for those who were average or high ability. Implications for measurement models of speeded assessments are discussed.

  20. Improving the Reliability of Student Scores from Speeded Assessments: An Illustration of Conditional Item Response Theory Using a Computer-Administered Measure of Vocabulary

    PubMed Central

    Petscher, Yaacov; Mitchell, Alison M.; Foorman, Barbara R.

    2016-01-01

    A growing body of literature suggests that response latency, the amount of time it takes an individual to respond to an item, may be an important factor to consider when using assessment data to estimate the ability of an individual. Considering that tests of passage and list fluency are being adapted to a computer administration format, it is possible that accounting for individual differences in response times may be an increasingly feasible option to strengthen the precision of individual scores. The present research evaluated the differential reliability of scores when using classical test theory and item response theory as compared to a conditional item response model which includes response time as an item parameter. Results indicated that the precision of student ability scores increased by an average of 5 % when using the conditional item response model, with greater improvements for those who were average or high ability. Implications for measurement models of speeded assessments are discussed. PMID:27721568

  1. Direct Numerical Simulation of Pebble Bed Flows: Database Development and Investigation of Low-Frequency Temporal Instabilities

    DOE PAGES

    Fick, Lambert H.; Merzari, Elia; Hassan, Yassin A.

    2017-02-20

    Computational analyses of fluid flow through packed pebble bed domains using the Reynolds-averaged Navier-Stokes framework have had limited success in the past. Because of a lack of high-fidelity experimental or computational data, optimization of Reynolds-averaged closure models for these geometries has not been extensively developed. In the present study, direct numerical simulation was employed to develop a high-fidelity database that can be used for optimizing Reynolds-averaged closure models for pebble bed flows. A face-centered cubic domain with periodic boundaries was used. Flow was simulated at a Reynolds number of 9308 and cross-verified by using available quasi-DNS data. During the simulations, low-frequency instability modes were observed that affected the stationary solution. Furthermore, these instabilities were investigated by using the method of proper orthogonal decomposition, and a correlation was found between the time-dependent asymmetry of the averaged velocity profile data and the behavior of the highest energy eigenmodes.

  2. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    NASA Astrophysics Data System (ADS)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
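    For context, the sketch below shows the simpler, sum-preserving quantized pairwise gossip on an undirected graph; it is not the authors' surplus-based directed-graph update, which additionally keeps a local surplus variable and transmits it with one bit, but it illustrates how integer states can reach quantized average consensus without ever leaving the integers.

```python
# Sketch of sum-preserving quantized pairwise gossip on an UNDIRECTED graph.
# This is the simpler baseline the surplus-based algorithm generalizes beyond;
# it is NOT the authors' directed-graph update rule.
import random

def quantized_gossip(values, edges, steps=10_000, seed=1):
    """values: list of integer states; edges: list of undirected (i, j) pairs."""
    x = list(values)
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.choice(edges)
        s = x[i] + x[j]                  # pairwise sum is preserved exactly
        x[i], x[j] = s // 2, s - s // 2  # split into floor/ceil halves
    return x

# Ring of 5 nodes with integer states summing to 20 (true average = 4).
values = [10, 0, 7, 2, 1]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(quantized_gossip(values, edges))   # states end up within 1 of the average
```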

  3. Direct Numerical Simulation of Pebble Bed Flows: Database Development and Investigation of Low-Frequency Temporal Instabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fick, Lambert H.; Merzari, Elia; Hassan, Yassin A.

    Computational analyses of fluid flow through packed pebble bed domains using the Reynolds-averaged Navier-Stokes framework have had limited success in the past. Because of a lack of high-fidelity experimental or computational data, optimization of Reynolds-averaged closure models for these geometries has not been extensively developed. In the present study, direct numerical simulation was employed to develop a high-fidelity database that can be used for optimizing Reynolds-averaged closure models for pebble bed flows. A face-centered cubic domain with periodic boundaries was used. Flow was simulated at a Reynolds number of 9308 and cross-verified by using available quasi-DNS data. During the simulations, low-frequency instability modes were observed that affected the stationary solution. Furthermore, these instabilities were investigated by using the method of proper orthogonal decomposition, and a correlation was found between the time-dependent asymmetry of the averaged velocity profile data and the behavior of the highest energy eigenmodes.

  4. A Biophysico-computational Perspective of Breast Cancer Pathogenesis and Treatment Response

    DTIC Science & Technology

    2006-03-01

    Principal Investigator: Valerie M. Weaver, Ph.D.; Grant Number: W81XWH-05-1-0330.

  5. Intercomparison of Recent Anomaly Time-Series of OLR as Observed by CERES and Computed Using AIRS Products

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Molnar, Gyula; Iredell, Lena; Loeb, Norman G.

    2011-01-01

    This paper compares recent spatial and temporal anomaly time series of OLR as observed by CERES and computed based on AIRS retrieved surface and atmospheric geophysical parameters over the 7 year time period September 2002 through February 2010. This time period is marked by a substantial decrease of OLR, on the order of +/-0.1 W/sq m/yr, averaged over the globe, and very large spatial variations of changes in OLR in the tropics, with local values ranging from -2.8 W/sq m/yr to +3.1 W/sq m/yr. Global and Tropical OLR both began to decrease significantly at the onset of a strong La Niña in mid-2007. Late 2009 is characterized by a strong El Niño, with a corresponding change in sign of both Tropical and Global OLR anomalies. The spatial patterns of the 7 year short term changes in AIRS and CERES OLR have a spatial correlation of 0.97 and slopes of the linear least squares fits of anomaly time series averaged over different spatial regions agree on the order of +/-0.01 W/sq m/yr. This essentially perfect agreement of OLR anomaly time series derived from observations by two different instruments, determined in totally independent and different manners, implies that both sets of results must be highly stable. This agreement also validates the anomaly time series of the AIRS derived products used to compute OLR and furthermore indicates that anomaly time series of AIRS derived products can be used to explain the factors contributing to anomaly time series of OLR.

  6. HPC on Competitive Cloud Resources

    NASA Astrophysics Data System (ADS)

    Bientinesi, Paolo; Iakymchuk, Roman; Napper, Jeff

    Computing as a utility has reached the mainstream. Scientists can now easily rent time on large commercial clusters that can be expanded and reduced on demand in real time. However, current commercial cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of the web applications that have been the focus of cloud computing vendors. In this chapter we demonstrate through empirical evaluation the computational efficiency of high-performance numerical applications in a commercial cloud environment when resources are shared under high contention. Using the Linpack benchmark as a case study, we show that cache utilization becomes highly unpredictable and similarly affects computation time. For some problems, not only is it more efficient to underutilize resources, but the solution can be reached sooner in real time (wall time). We also show that the smallest, cheapest (64-bit) instance in the studied environment offers the best price-to-performance ratio. In light of the high contention we witness, we believe that alternative definitions of efficiency for commercial cloud environments should be introduced where strong performance guarantees do not exist. Concepts like average and expected performance, expected execution time, expected cost to completion, and variance measures--traditionally ignored in the high-performance computing context--should now complement or even substitute the standard definitions of efficiency.

  7. SUBSURFACE RESIDENCE TIMES AS AN ALGORITHM FOR AQUIFER SENSITIVITY MAPPING: TESTING THE CONCEPT WITH ANALYTIC ELEMENT GROUND WATER MODELS IN THE CONTENTNEA CREEK BASIN, NORTH CAROLINA, USA

    EPA Science Inventory

    The objective of this research is to test the utility of simple functions of spatially integrated and temporally averaged ground water residence times in shallow "groundwatersheds" with field observations and more detailed computer simulations. The residence time of water in the...

  8. Time Orientation and Human Performance

    DTIC Science & Technology

    2004-06-01

    Published in Work with Computing Systems 2004, H.M. Khalid, M.G. Helander, A.W. Yeo (Editors). Kuala Lumpur: Damai Sciences. Keywords: time orientation, human performance, multi-tasking. With increased globalization, understanding the various cultures and people's attitudes and behaviours is crucial...

  9. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    In this current study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. This study examined four sets of simulations. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple reference frames approach for a complete turbine stage with two different interface models: a steady circumferential average approach called a mixing plane model, and a time accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds Averaged Navier-Stokes finite volume solver (RANS) with a standard k-epsilon turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed approximately at the mid axial chord of the NGV suction surface. The first vortex is a casing passage vortex which occurs at the corner formed by the NGV suction surface and the casing. This vortex is created by the interaction of the passage flow and the radially inward flow, while the second vortex, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure side corner of the blade tip. The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement in the relative frame of reference; the boundary conditions for the computations were obtained from inlet flow measurements performed in the AFTRF. A complete turbine stage, including an NGV and a rotor row, was simulated using the RANS solver with the SST k-omega turbulence model, with two different computational models for the interface between the rotating component and the stationary component. The first interface model, the circumferentially averaged mixing plane model, was solved for a fixed position of the rotor blades relative to the NGV in the stationary frame of reference. The information transferred between the NGV and rotor domains is obtained by averaging across the entire interface. The quasi-steady state flow characteristics of the AFTRF can be obtained from this interface model. After the model was validated with the existing experimental data, this model was used not only to investigate the flow characteristics in the turbine stage but also the effects of using pressure side rotor tip extensions. The tip leakage flow fields simulated from this model and from the linear cascade model show similar trends. More detailed understanding of unsteady characteristics of a turbine flow field can be obtained using the second type of interface model, the time accurate sliding mesh model. The potential flow interactions, wake characteristics, their effects on secondary flow formation, and the wake mixing process in a rotor passage were examined using this model. 
Furthermore, turbine stage efficiency and effects of tip clearance height on the turbine stage efficiency were also investigated. A comparison between the results from the circumferential average model and the time accurate flow model results is presented. It was found that the circumferential average model cannot accurately simulate flow interaction characteristics on the interface plane between the NGV trailing edge and the rotor leading edge. However, the circumferential average model does give accurate flow characteristics in the NGV domain and the rotor domain with less computational time and computer memory requirements. In contrast, the time accurate flow simulation can predict all unsteady flow characteristics occurring in the turbine stage, but with high computational resource requirements. (Abstract shortened by UMI.)

  10. Simulation of multistage turbine flows

    NASA Technical Reports Server (NTRS)

    Adamczyk, John J.; Mulac, Richard A.

    1987-01-01

    A flow model has been developed for analyzing multistage turbomachinery flows. This model, referred to as the average passage flow model, describes the time-averaged flow field within a typical passage of a blade row embedded within a multistage configuration. Computer resource requirements, supporting empirical modeling, formulation code development, and multitasking and storage are discussed. Illustrations from simulations of the space shuttle main engine (SSME) fuel turbine performed to date are given.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callahan, M.A.

    Three major issues to be dealt with over the next ten years in the exposure assessment field are: consistency in terminology, the impact of computer technology on the choice of data and modeling, and conceptual issues such as the use of time-weighted averages.

  12. Can time-averaged flow boundary conditions be used to meet the clinical timeline for Fontan surgical planning?

    PubMed

    Wei, Zhenglun Alan; Trusty, Phillip M; Tree, Mike; Haggerty, Christopher M; Tang, Elaine; Fogel, Mark; Yoganathan, Ajit P

    2017-01-04

    Cardiovascular simulations have great potential as a clinical tool for planning and evaluating patient-specific treatment strategies for those suffering from congenital heart diseases, specifically Fontan patients. However, several bottlenecks have delayed wider deployment of the simulations for clinical use; the main obstacle is simulation cost. Currently, time-averaged clinical flow measurements are utilized as numerical boundary conditions (BCs) in order to reduce the computational power and time needed to offer surgical planning within a clinical time frame. Nevertheless, pulsatile blood flow is observed in vivo, and its significant impact on numerical simulations has been demonstrated. Therefore, it is imperative to carry out a comprehensive study analyzing the sensitivity of using time-averaged BCs. In this study, sensitivity is evaluated based on the discrepancies between hemodynamic metrics calculated using time-averaged and pulsatile BCs; smaller discrepancies indicate less sensitivity. The current study incorporates a comparison between 3D patient-specific CFD simulations using both the time-averaged and pulsatile BCs for 101 Fontan patients. The sensitivity analysis involves two clinically important hemodynamic metrics: hepatic flow distribution (HFD) and indexed power loss (iPL). Paired demographic group comparisons revealed that HFD sensitivity is significantly different between single and bilateral superior vena cava cohorts but no other demographic discrepancies were observed for HFD or iPL. Multivariate regression analyses show that the best predictors for sensitivity involve flow pulsatilities, time-averaged flow rates, and geometric characteristics of the Fontan connection. These predictors provide patient-specific guidelines to determine the effectiveness of analyzing patient-specific surgical options with time-averaged BCs within a clinical time frame. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. 20 CFR 404.232 - Computing your average monthly wage under the guaranteed alternative.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your average monthly wage under the... OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Guaranteed Alternative for People Reaching Age 62 After 1978 But Before 1984 § 404.232 Computing your average monthly...

  14. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.

  15. Talking with the alien: interaction with computers in the GP consultation.

    PubMed

    Dowell, Anthony; Stubbe, Maria; Scott-Dowell, Kathy; Macdonald, Lindsay; Dew, Kevin

    2013-01-01

    This study examines New Zealand GPs' interaction with computers in routine consultations. Twenty-eight video-recorded consultations from 10 GPs were analysed in micro-detail to explore: (i) how doctors divide their time and attention between computer and patient; (ii) the different roles ascribed to the computer; and (iii) how computer use influences the interactional flow of the consultation. All GPs engaged with the computer in some way for at least 20% of each consultation, and on average spent 12% of time totally focussed on the computer. Patterns of use varied; most GPs inputted all or most notes during the consultation, but a few set aside dedicated time afterwards. The computer acted as an additional participant enacting roles like information repository and legitimiser of decisions. Computer use also altered some of the normal 'rules of engagement' between doctor and patient. Long silences and turning away interrupted the smooth flow of conversation, but various 'multitasking' strategies allowed GPs to remain engaged with patients during episodes of computer use (e.g. signposting, online commentary, verbalising while typing, social chat). Conclusions were that use of computers has many benefits but also significantly influences the fine detail of the GP consultation. Doctors must consciously develop strategies to manage this impact.

  16. A low computation cost method for seizure prediction.

    PubMed

    Zhang, Yanli; Zhou, Weidong; Yuan, Qi; Wu, Qi

    2014-10-01

    The dynamic changes of electroencephalograph (EEG) signals in the period prior to epileptic seizures play a major role in seizure prediction. This paper proposes a low-computation seizure prediction algorithm that combines a fractal dimension with a machine learning algorithm. The presented seizure prediction algorithm extracts the Higuchi fractal dimension (HFD) of EEG signals as features to classify the patient's preictal or interictal state with Bayesian linear discriminant analysis (BLDA) as a classifier. The outputs of BLDA are smoothed by a Kalman filter to reduce possible sporadic and isolated false alarms, and then the final prediction results are produced using a thresholding procedure. The algorithm was evaluated on the intracranial EEG recordings of 21 patients in the Freiburg EEG database. For seizure occurrence periods of 30 min and 50 min, our algorithm obtained an average sensitivity of 86.95% and 89.33%, an average false prediction rate of 0.20/h, and an average prediction time of 24.47 min and 39.39 min, respectively. The results confirm that changes of HFD can serve as a precursor of ictal activities and be used for distinguishing between interictal and preictal epochs. Both HFD and the BLDA classifier have a low computational complexity. All of these make the proposed algorithm suitable for real-time seizure prediction. Copyright © 2014 Elsevier B.V. All rights reserved.
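    A minimal sketch of the Higuchi fractal dimension feature that the predictor described above extracts from EEG windows; kmax and the white-noise test signal are illustrative choices, and this is a generic textbook implementation rather than the paper's code.

```python
# Minimal Higuchi fractal dimension (HFD) sketch; kmax is an illustrative choice.
import numpy as np

def higuchi_fd(x, kmax=10):
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)              # subsampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)  # Higuchi's normalization factor
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    # Slope of log(L(k)) versus log(1/k) estimates the fractal dimension.
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(1000)))   # white noise: HFD close to 2
```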

  17. Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick J.; Wang, Qiqi

    2018-02-01

    Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.

  18. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  19. Computational and experimental studies of LEBUs at high device Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Bertelrud, Arild; Watson, R. D.

    1988-01-01

    The present paper summarizes computational and experimental studies for large-eddy breakup devices (LEBUs). LEBU optimization (using a computational approach considering compressibility, Reynolds number, and the unsteadiness of the flow) and experiments with LEBUs at high Reynolds numbers in flight are discussed. The measurements include streamwise as well as spanwise distributions of local skin friction. The unsteady flows around the LEBU devices and far downstream are characterized by strain-gage measurements on the devices and hot-wire readings downstream. Computations are made with available time-averaged and quasi-stationary techniques to find suitable device profiles with minimum drag.

  20. Subgrid or Reynolds stress-modeling for three-dimensional turbulence computations

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.

    1975-01-01

    A review is given of recent advances in two distinct computational methods for evaluating turbulence fields, namely, statistical Reynolds stress modeling and turbulence simulation, where large eddies are followed in time. It is shown that evaluation of the mean Reynolds stresses, rather than use of a scalar eddy viscosity, permits an explanation of streamline curvature effects found in several experiments. Turbulence simulation, with a new volume averaging technique and third-order accurate finite-difference computing is shown to predict the decay of isotropic turbulence in incompressible flow with rather modest computer storage requirements, even at Reynolds numbers of aerodynamic interest.

  1. The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.

    PubMed

    Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P

    2014-01-01

    To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent error of the mobile phone application and the oculoplastic surgeons' estimates were calculated compared with computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval[CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeon visual interpretations were highly inaccurate, highly variable, and usually underestimated the field vision loss.

  2. Computer versus lecture: a comparison of two methods of teaching oral medication administration in a nursing skills laboratory.

    PubMed

    Jeffries, P R

    2001-10-01

    The purpose of this study was to compare the effectiveness of both an interactive, multimedia CD-ROM and a traditional lecture for teaching oral medication administration to nursing students. A randomized pretest/posttest experimental design was used. Forty-two junior baccalaureate nursing students beginning their fundamentals nursing course were recruited for this study at a large university in the midwestern United States. The students ranged in age from 19 to 45. Seventy-three percent reported having average computer skills and experience, while 15% reported poor to below average skills. Two methods were compared for teaching oral medication administration--a scripted lecture with black and white overhead transparencies, in addition to an 18-minute videotape on medication administration, and an interactive, multimedia CD-ROM program, covering the same content. There were no significant (p < .05) baseline differences between the computer and lecture groups by education or computer skills. Results showed significant differences between the two groups in cognitive gains and student satisfaction (p = .01), with the computer group demonstrating higher student satisfaction and more cognitive gains than the lecture group. The groups were similar in their ability to demonstrate the skill correctly. Importantly, time on task using the CD-ROM was less, with 96% of the learners completing the program in 2 hours or less, compared to 3 hours of class time for the lecture group.

  3. Excessive computer game playing among Norwegian adults: self-reported consequences of playing and association with mental health problems.

    PubMed

    Wenzel, H G; Bakken, I J; Johansson, A; Götestam, K G; Øren, Anita

    2009-12-01

    Computer games are the most advanced form of gaming. For most people, the playing is an uncomplicated leisure activity; however, for a minority the gaming becomes excessive and is associated with negative consequences. The aim of the present study was to investigate computer game-playing behaviour in the general adult Norwegian population, and to explore mental health problems and self-reported consequences of playing. The survey includes 3,405 adults 16 to 74 years old (Norway 2007, response rate 35.3%). Overall, 65.5% of the respondents reported having ever played computer games (16-29 years, 93.9%; 30-39 years, 85.0%; 40-59 years, 56.2%; 60-74 years, 25.7%). Among 2,170 players, 89.8% reported playing less than 1 hr. as a daily average over the last month, 5.0% played 1-2 hr. daily, 3.1% played 2-4 hr. daily, and 2.2% reported playing > 4 hr. daily. The strongest risk factor for playing > 4 hr. daily was being an online player, followed by male gender, and single marital status. Reported negative consequences of computer game playing increased strongly with average daily playing time. Furthermore, prevalence of self-reported sleeping problems, depression, suicide ideations, anxiety, obsessions/ compulsions, and alcohol/substance abuse increased with increasing playing time. This study showed that adult populations should also be included in research on computer game-playing behaviour and its consequences.

  4. Radiological risk assessment of cosmic radiation at aviation altitudes (a trip from Houston Intercontinental Airport to Lagos International Airport).

    PubMed

    Enyinna, Paschal Ikenna

    2016-01-01

    Radiological risk parameters associated with aircrew members traveling from Houston Intercontinental Airport to Lagos International Airport have been computed using computer software called EPCARD (version 3.2). The mean annual effective dose of radiation was computed to be 2.94 mSv/year. This result is above the standard permissible limit of 1 mSv/year set for the public and pregnant aircrew members but below the limit set for occupationally exposed workers. The risk of cancer mortality and excess career time cancer risk computed ranged from 3.5 × 10^-5 to 24.5 × 10^-5 (with average of 14.7 × 10^-5) and 7 × 10^-4 to 49 × 10^-4 (with average of 29.4 × 10^-4), respectively. Passengers and aircrew members should be aware of the extra cosmic radiation doses taken in during flights. All aircraft operators should monitor radiation doses incurred during aviation trips.

  5. Radiological risk assessment of cosmic radiation at aviation altitudes (a trip from Houston Intercontinental Airport to Lagos International Airport)

    PubMed Central

    Enyinna, Paschal Ikenna

    2016-01-01

    Radiological risk parameters associated with aircrew members traveling from Houston Intercontinental Airport to Lagos International Airport have been computed using computer software called EPCARD (version 3.2). The mean annual effective dose of radiation was computed to be 2.94 mSv/year. This result is above the standard permissible limit of 1 mSv/year set for the public and pregnant aircrew members but below the limit set for occupationally exposed workers. The Risk of cancer mortality and excess career time cancer risk computed ranged from 3.5 × 10−5 to 24.5 × 10−5 (with average of 14.7 × 10−5) and 7 × 10−4 to 49 × 10−4 (with average of 29.4 × 10−4). Passengers and aircrew members should be aware of the extra cosmic radiation doses taken in during flights. All aircraft operators should monitor radiation doses incurred during aviation trips. PMID:27651568

  6. Fast Algorithms for Mining Co-evolving Time Series

    DTIC Science & Technology

    2011-09-01

    ...[Keogh et al., 2001, 2004] and (b) forecasting, like an autoregressive integrated moving average (ARIMA) model and related methods [Box et al., 1994]... We develop models to mine time series with missing values, to extract compact representations from time sequences, to segment the sequences, and to do forecasting. For large-scale data, we propose algorithms for learning time series models, in particular including Linear Dynamical...

  7. Numerical simulation of turbulence in the presence of shear

    NASA Technical Reports Server (NTRS)

    Shaanan, S.; Ferziger, J. H.; Reynolds, W. C.

    1975-01-01

    Numerical calculations of the large eddy structure of turbulent flows are presented, using the averaged Navier-Stokes equations, where averages are taken over spatial regions small compared to the size of the computational grid. The subgrid components of motion are modeled by a local eddy-viscosity model. A new fourth-order-accurate finite-difference scheme is proposed to represent the nonlinear averaged advective term. This scheme exhibits several advantages over existing schemes: (1) it is compact, extending only one point away in each direction from the point to which it is applied; (2) it gives better resolution for high wave-number waves in the solution of the Poisson equation; and (3) it reduces programming complexity and computation time. Examples worked out in detail are the decay of isotropic turbulence, homogeneous turbulent shear flow, and homogeneous turbulent shear flow with system rotation.

  8. Simulation study of entropy production in the one-dimensional Vlasov system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie

    2016-07-15

    The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy productions in cases of the random field, the linear Landau damping, and the bump-on-tail instability are computed with the coarse-grain averaged distribution function. The computed entropy production is converged with increasing length of coarse-grain average. When the distribution function differs slightly from a Maxwellian distribution, the converged value agrees with the result computed by using the definition of thermodynamic entropy. The length of the coarse-grain average to compute the coarse-grain averaged distribution function is discussed.
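    For reference, a hedged sketch of the quantities involved: the coarse-grained entropy computed from the cell-averaged distribution and the entropy production between two times. The notation below is generic (cell averaging over Δx Δv) and is not taken from the paper.

```latex
% Coarse-grained entropy of the 1-D Vlasov distribution f(x, v, t);
% \bar{f} is f averaged over phase-space cells of size \Delta x \, \Delta v.
S(t) = -\int \bar{f}(x, v, t)\, \ln \bar{f}(x, v, t)\; \mathrm{d}x\, \mathrm{d}v ,
\qquad
\Delta S(t) = S(t) - S(0) \;\ge\; 0 .
```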

  9. Scale Dependence of Statistics of Spatially Averaged Rain Rate Seen in TOGA COARE Comparison with Predictions from a Stochastic Model

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, T. L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    A characteristic feature of rainfall statistics is that they in general depend on the space and time scales over which rain data are averaged. As a part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model allows one to compute the second moment statistics of space- and time-averaged rain rate, which can be fitted to satellite or rain gauge data to determine the four model parameters appearing in the precipitation spectrum: an overall strength parameter, a characteristic length separating the long and short wavelength regimes, a characteristic relaxation time for decay of the autocorrelation of the instantaneous local rain rate, and a certain 'fractal' power law exponent. For area-averaged instantaneous rain rate, this exponent governs the power law dependence of these statistics on the averaging length scale L predicted by the model in the limit of small L. In particular, the variance of rain rate averaged over an L × L area exhibits a power law singularity as L → 0. In the present work the model is used to investigate how the statistics of area-averaged rain rate over the tropical Western Pacific, measured with shipborne radar during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment) and gridded on a 2 km grid, depend on the size of the spatial averaging scale. Good agreement is found between the data and predictions from the model over a wide range of averaging length scales.
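    The small-scale behaviour summarized above can be written schematically as follows; the exponent symbol and prefactor are illustrative placeholders, since the abstract only states that the variance follows a power law as the averaging length goes to zero.

```latex
% Schematic scaling of the variance of rain rate averaged over an L x L box,
% with \nu > 0 an effective ('fractal') exponent and C a strength prefactor.
\operatorname{Var}\!\left[\bar{R}_{L}\right] \;\sim\; C\, L^{-\nu}
\qquad \text{as } L \to 0 .
```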

  10. Intra- and Inter-Fractional Variation Prediction of Lung Tumors Using Fuzzy Deep Learning

    PubMed Central

    Park, Seonyeong; Lee, Suk Jin; Weiss, Elisabeth

    2016-01-01

    Tumor movements should be accurately predicted to improve delivery accuracy and reduce unnecessary radiation exposure to healthy tissue during radiotherapy. The tumor movements pertaining to respiration are divided into intra-fractional variation occurring in a single treatment session and inter-fractional variation arising between different sessions. Most studies of patients’ respiration movements deal with intra-fractional variation. Previous studies on inter-fractional variation are hardly mathematized and cannot predict movements well due to inconstant variation. Moreover, the computation time of the prediction should be reduced. To overcome these limitations, we propose a new predictor for intra- and inter-fractional data variation, called intra- and inter-fraction fuzzy deep learning (IIFDL), where FDL, equipped with breathing clustering, predicts the movement accurately and decreases the computation time. Through the experimental results, we validated that the IIFDL improved root-mean-square error (RMSE) by 29.98% and prediction overshoot by 70.93%, compared with existing methods. The results also showed that the IIFDL enhanced the average RMSE and overshoot by 59.73% and 83.27%, respectively. In addition, the average computation time of IIFDL was 1.54 ms for both intra- and inter-fractional variation, which was much smaller than the existing methods. Therefore, the proposed IIFDL might achieve real-time estimation as well as better tracking techniques in radiotherapy. PMID:27170914

  11. Resource Constrained Planning of Multiple Projects with Separable Activities

    NASA Astrophysics Data System (ADS)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource-constrained planning problem for multiple projects with separable activities. The problem is to construct a plan for processing the activities subject to resource availability with time windows. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples; in particular, as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.
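    As a hedged illustration of the kind of lower bound that combines time and resource constraints (the second of the three improvements listed above), the sketch below takes the maximum of a critical-path bound and a total-work-over-capacity bound; the construction is generic and not the paper's exact bound.

```python
# Hedged sketch of a lower bound on project completion time for a partial
# schedule; a generic construction, not the paper's exact bounding rule.
import math

def lower_bound(current_time, remaining_critical_path, remaining_work, capacity):
    """remaining_work: total activity-resource units still to be processed;
    capacity: resource units available per time unit."""
    time_bound = current_time + remaining_critical_path          # precedence only
    resource_bound = current_time + math.ceil(remaining_work / capacity)  # capacity only
    return max(time_bound, resource_bound)

# Example: 12 units of work left, capacity 3 per time unit, critical path of 3.
print(lower_bound(current_time=5, remaining_critical_path=3,
                  remaining_work=12, capacity=3))   # -> max(8, 9) = 9
```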

  12. Mobility in hospital work: towards a pervasive computing hospital environment.

    PubMed

    Morán, Elisa B; Tentori, Monica; González, Víctor M; Favela, Jesus; Martínez-Garcia, Ana I

    2007-01-01

    Handheld computers are increasingly being used by hospital workers. With the integration of wireless networks into hospital information systems, handheld computers can provide the basis for a pervasive computing hospital environment; to develop this designers need empirical information to understand how hospital workers interact with information while moving around. To characterise the medical phenomena we report the results of a workplace study conducted in a hospital. We found that individuals spend about half of their time at their base location, where most of their interactions occur. On average, our informants spent 23% of their time performing information management tasks, followed by coordination (17.08%), clinical case assessment (15.35%) and direct patient care (12.6%). We discuss how our results offer insights for the design of pervasive computing technology, and directions for further research and development in this field such as transferring information between heterogeneous devices and integration of the physical and digital domains.

  13. Detailed T1-Weighted Profiles from the Human Cortex Measured in Vivo at 3 Tesla MRI.

    PubMed

    Ferguson, Bart; Petridou, Natalia; Fracasso, Alessio; van den Heuvel, Martijn P; Brouwer, Rachel M; Hulshoff Pol, Hilleke E; Kahn, René S; Mandl, René C W

    2018-04-01

    Studies into cortical thickness in psychiatric diseases based on T1-weighted MRI frequently report on aberrations in the cerebral cortex. Due to limitations in image resolution for studies conducted at conventional MRI field strengths (e.g. 3 Tesla (T)) this information cannot be used to establish which of the cortical layers may be implicated. Here we propose a new analysis method that computes one high-resolution average cortical profile per brain region extracting myeloarchitectural information from T1-weighted MRI scans that are routinely acquired at a conventional field strength. To assess this new method, we acquired standard T1-weighted scans at 3 T and compared them with state-of-the-art ultra-high resolution T1-weighted scans optimised for intracortical myelin contrast acquired at 7 T. Average cortical profiles were computed for seven different brain regions. Besides a qualitative comparison between the 3 T scans, 7 T scans, and results from literature, we tested if the results from dynamic time warping-based clustering are similar for the cortical profiles computed from 7 T and 3 T data. In addition, we quantitatively compared cortical profiles computed for V1, V2 and V7 for both 7 T and 3 T data using a priori information on their relative myelin concentration. Although qualitative comparisons show that at an individual level average profiles computed for 7 T have more pronounced features than 3 T profiles the results from the quantitative analyses suggest that average cortical profiles computed from T1-weighted scans acquired at 3 T indeed contain myeloarchitectural information similar to profiles computed from the scans acquired at 7 T. The proposed method therefore provides a step forward to study cortical myeloarchitecture in vivo at conventional magnetic field strength both in health and disease.
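    The dynamic time warping distance underlying the profile clustering mentioned in this record can be sketched with a textbook O(nm) dynamic program; the implementation below is generic and not the study's specific variant, and the example curves are synthetic.

```python
# Textbook dynamic time warping (DTW) distance between two 1-D profiles.
# Generic O(n*m) implementation, not the study's specific clustering variant.
import numpy as np

def dtw_distance(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two cortical-profile-like curves sampled at different numbers of depths.
p1 = np.linspace(0, 1, 50) ** 2
p2 = np.linspace(0, 1, 60) ** 2
print(dtw_distance(p1, p2))   # small distance: similar shapes despite lengths
```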

  14. Neuron Design in Neuromorphic Computing Systems and Its Application in Wireless Communications

    DTIC Science & Technology

    2017-03-01

    ...for data representation using hardware spike-timing-dependent encoding for neuromorphic processors; (b) explore the applications of neuromorphic... The envisioned architecture will serve as the foundation for unprecedented capabilities in real-time applications such as the MIMO channel estimation that...

  15. The effect of data structures on INGRES performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Creighton, J.R.

    1987-01-01

    Computer experiments were conducted to determine the effect of using Heap, ISAM, Hash and B-tree data structures for INGRES relations. Average times for retrieve, append and update were determined for searches by unique key and non-key data. The experiments were conducted on relations of approximately 1000 tuples of 332 byte width. Multiple operations were performed, where appropriate, to obtain average times. Simple models of the data structures are presented and shown to be consistent with experimental results. The models can be used to predict performance, and to select the appropriate data structure for various applications.

  16. Time Average Holography Study of Human Tympanic Membrane with Altered Middle Ear Ossicular Chain

    NASA Astrophysics Data System (ADS)

    Cheng, Jeffrey T.; Ravicz, Michael E.; Rosowski, John J.; Hulli, Nesim; Hernandez-Montes, Maria S.; Furlong, Cosme

    2009-02-01

    Computer-assisted time average holographic interferometry was used to study the vibration of the human tympanic membrane (TM) in cadaveric temporal bones before and after alterations of the ossicular chain. Simultaneous laser Doppler vibrometer measurements of stapes velocity were performed to estimate the conductive hearing loss caused by ossicular alterations. The quantified TM motion described from holographic images was correlated with stapes velocity to define relations between TM motion and stapes velocity in various ossicular disorders. The results suggest that motions of the TM are relatively uncoupled from stapes motion at frequencies above 1000 Hz.

  17. Acceleration of high resolution temperature based optimization for hyperthermia treatment planning using element grouping.

    PubMed

    Kok, H P; de Greef, M; Bel, A; Crezee, J

    2009-08-01

    In regional hyperthermia, optimization is useful to obtain adequate applicator settings. A speed-up of the previously published method for high resolution temperature based optimization is proposed. Element grouping as described in literature uses selected voxel sets instead of single voxels to reduce computation time. Elements which achieve their maximum heating potential for approximately the same phase/amplitude setting are grouped. To form groups, eigenvalues and eigenvectors of precomputed temperature matrices are used. At high resolution temperature matrices are unknown and temperatures are estimated using low resolution (1 cm) computations and the high resolution (2 mm) temperature distribution computed for low resolution optimized settings using zooming. This technique can be applied to estimate an upper bound for high resolution eigenvalues. The heating potential of elements was estimated using these upper bounds. Correlations between elements were estimated with low resolution eigenvalues and eigenvectors, since high resolution eigenvectors remain unknown. Four different grouping criteria were applied. Constraints were set to the average group temperatures. Element grouping was applied for five patients and optimal settings for the AMC-8 system were determined. Without element grouping the average computation times for five and ten runs were 7.1 and 14.4 h, respectively. Strict grouping criteria were necessary to prevent an unacceptable exceeding of the normal tissue constraints (up to approximately 2 degrees C), caused by constraining average instead of maximum temperatures. When strict criteria were applied, speed-up factors of 1.8-2.1 and 2.6-3.5 were achieved for five and ten runs, respectively, depending on the grouping criteria. When many runs are performed, the speed-up factor will converge to 4.3-8.5, which is the average reduction factor of the constraints and depends on the grouping criteria. Tumor temperatures were comparable. Maximum exceeding of the constraint in a hot spot was 0.24-0.34 degree C; average maximum exceeding over all five patients was 0.09-0.21 degree C, which is acceptable. High resolution temperature based optimization using element grouping can achieve a speed-up factor of 4-8, without large deviations from the conventional method.

  18. TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, X

    2016-06-15

    Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, which has a computation speed largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images are used as testing data and a leave-one-out validation is performed. Each sCT generated is compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full time employee of Elekta, Inc.
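    A minimal sketch of the patch-based non-local weighted fusion step described above: each candidate atlas patch votes for a CT value with a weight that decays with its dissimilarity to the patient patch. The Gaussian weighting, patch size, and bandwidth are assumptions for illustration, not the paper's settings, and the PatchMatch candidate search is not shown.

```python
# Hedged sketch of patch-based non-local weighted fusion for one sCT voxel.
# Weighting scheme, patch size, and bandwidth h are illustrative assumptions.
import numpy as np

def fuse_sct_value(patient_patch, atlas_patches, atlas_ct_values, h=50.0):
    """patient_patch: (k,) MR patch around the target voxel (average-atlas space);
    atlas_patches: (n, k) candidate MR patches from the pre-aligned atlases;
    atlas_ct_values: (n,) CT intensity at each candidate patch centre."""
    d2 = ((atlas_patches - patient_patch) ** 2).sum(axis=1)   # patch dissimilarity
    w = np.exp(-d2 / (h * h))                                 # non-local weights
    return float((w * atlas_ct_values).sum() / w.sum())       # weighted CT estimate

# Toy example: three candidate patches, the first being the closest match.
patient = np.array([100.0, 110.0, 120.0])
candidates = np.array([[101, 111, 119], [140, 150, 160], [60, 70, 80]], float)
ct_vals = np.array([40.0, 300.0, -200.0])              # HU at candidate centres
print(fuse_sct_value(patient, candidates, ct_vals))    # dominated by the 40 HU match
```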

  19. Less Daily Computer Use is Related to Smaller Hippocampal Volumes in Cognitively Intact Elderly.

    PubMed

    Silbert, Lisa C; Dodge, Hiroko H; Lahna, David; Promjunyakul, Nutta-On; Austin, Daniel; Mattek, Nora; Erten-Lyons, Deniz; Kaye, Jeffrey A

    2016-01-01

    Computer use is becoming a common activity in the daily life of older individuals and declines over time in those with mild cognitive impairment (MCI). The relationship between daily computer use (DCU) and imaging markers of neurodegeneration is unknown. The objective of this study was to examine the relationship between average DCU and volumetric markers of neurodegeneration on brain MRI. Cognitively intact volunteers enrolled in the Intelligent Systems for Assessing Aging Change study underwent MRI. Total in-home computer use per day was calculated using mouse movement detection and averaged over a one-month period surrounding the MRI. Spearman's rank order correlation (univariate analysis) and linear regression models (multivariate analysis) examined hippocampal, gray matter (GM), white matter hyperintensity (WMH), and ventricular cerebral spinal fluid (vCSF) volumes in relation to DCU. A voxel-based morphometry analysis identified relationships between regional GM density and DCU. Twenty-seven cognitively intact participants used their computer for 51.3 minutes per day on average. Less DCU was associated with smaller hippocampal volumes (r = 0.48, p = 0.01), but not total GM, WMH, or vCSF volumes. After adjusting for age, education, and gender, less DCU remained associated with smaller hippocampal volume (p = 0.01). Voxel-wise analysis demonstrated that less daily computer use was associated with decreased GM density in the bilateral hippocampi and temporal lobes. Less daily computer use is associated with smaller brain volume in regions that are integral to memory function and known to be involved early with Alzheimer's pathology and conversion to dementia. Continuous monitoring of daily computer use may detect signs of preclinical neurodegeneration in older individuals at risk for dementia.

  20. Performance comparison analysis library communication cluster system using merge sort

    NASA Astrophysics Data System (ADS)

    Wulandari, D. A. R.; Ramadhan, M. E.

    2018-04-01

    Computing began with single processors; to reduce computing time, multi-processor systems were introduced. This second paradigm is known as parallel computing, of which the cluster is one example. A cluster requires a communication protocol for processing, such as the Message Passing Interface (MPI). MPI has several library implementations, among them OpenMPI and MPICH2. The performance of a cluster machine depends on how well the performance characteristics of the communication library match the characteristics of the problem, so this study aims to analyze the comparative performance of communication libraries in handling a parallel computing process. The case studies in this research are MPICH2 and OpenMPI, exercised on a sorting problem (merge sort) to characterize the performance of the cluster system. The research method is to implement OpenMPI and MPICH2 on a Linux-based cluster of five virtual machines and then analyze system performance under different test scenarios using three parameters: execution time, speedup, and efficiency. The results show that, as data size grows, the average speedup and efficiency of both OpenMPI and MPICH2 tend to increase but then decrease at large data sizes; an increased data size does not necessarily increase speedup and efficiency, only execution time, as observed for example at a data size of 100000. The two libraries also differ in raw execution time: at a data size of 1000, for example, the average execution time was 0.009721 with MPICH2 and 0.003895 with OpenMPI. OpenMPI can also customize communication to the application's needs.
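    The three parameters reported in this record have the standard definitions sketched below (speedup as serial time over parallel time, efficiency as speedup per process); the timing numbers in the usage example are illustrative and not measurements from the study.

```python
# Standard parallel-performance metrics: speedup S_p = T_1 / T_p, efficiency E_p = S_p / p.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

# Illustrative numbers only (not taken from the study): merge sort of one data
# size timed on 1 process versus 5 virtual machines.
t1, t5 = 1.25, 0.40
print(speedup(t1, t5), efficiency(t1, t5, 5))   # 3.125, 0.625
```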

  1. A Comparison of Alternative Approaches to the Analysis of Interrupted Time-Series.

    ERIC Educational Resources Information Center

    Harrop, John W.; Velicer, Wayne F.

    1985-01-01

    Computer-generated data representative of 16 Autoregressive Integrated Moving Average (ARIMA) models were used to compare the results of interrupted time-series analysis using: (1) the known model identification, (2) an assumed (1,0,0) model, and (3) an assumed (3,0,0) model as an approximation to the General Transformation approach. (Author/BW)
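    A hedged sketch of what fitting one of the assumed models to an interrupted series might look like today with statsmodels (assumed available); the simulated data, the level-shift intervention, and the (1,0,0) order are illustrative, and this is not the 1985 study's General Transformation procedure.

```python
# Hedged sketch of an interrupted time-series fit with an assumed (1,0,0) model,
# using statsmodels (assumed available). The simulated data are illustrative only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
n, break_point, shift = 120, 60, 2.0

# AR(1) noise plus a level shift after the interruption.
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + e[t]
y[break_point:] += shift

intervention = (np.arange(n) >= break_point).astype(float)  # 0 before, 1 after

fit = ARIMA(y, exog=intervention, order=(1, 0, 0)).fit()
print(fit.params)   # AR coefficient and estimated intervention effect (about 2.0)
```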

  2. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by...) of this chapter, average terrain elevation must be calculated by computer using elevations from a 30... also be done manually, if the results differ significantly from the computer derived averages. (a...

  3. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by...) of this chapter, average terrain elevation must be calculated by computer using elevations from a 30... also be done manually, if the results differ significantly from the computer derived averages. (a...

  4. 47 CFR 1.959 - Computation of average terrain elevation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Section 1.959 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by...) of this chapter, average terrain elevation must be calculated by computer using elevations from a 30... also be done manually, if the results differ significantly from the computer derived averages. (a...

  5. Validity of questionnaire self‐reports on computer, mouse and keyboard usage during a four‐week period

    PubMed Central

    Mikkelsen, Sigurd; Vilstrup, Imogen; Lassen, Christina Funch; Kryger, Ann Isabel; Thomsen, Jane Frølund; Andersen, Johan Hviid

    2007-01-01

    Objective To examine the validity and potential biases in self‐reports of computer, mouse and keyboard usage times, compared with objective recordings. Methods A study population of 1211 people was asked in a questionnaire to estimate the average time they had worked with computer, mouse and keyboard during the past four working weeks. During the same period, a software program recorded these activities objectively. The study was part of a one‐year follow‐up study from 2000–1 of musculoskeletal outcomes among Danish computer workers. Results Self‐reports on computer, mouse and keyboard usage times were positively associated with objectively measured activity, but the validity was low. Self‐reports explained only between a quarter and a third of the variance of objectively measured activity, and were even lower for one measure (keyboard time). Self‐reports overestimated usage times. Overestimation was large at low levels and declined with increasing levels of objectively measured activity. Mouse usage time proportion was an exception with a near 1:1 relation. Variability in objectively measured activity, arm pain, gender and age influenced self‐reports in a systematic way, but the effects were modest and sometimes in different directions. Conclusion Self‐reported durations of computer activities are positively associated with objective measures but they are quite inaccurate. Studies using self‐reports to establish relations between computer work times and musculoskeletal pain could be biased and lead to falsely increased or decreased risk estimates. PMID:17387136

  6. Validity of questionnaire self-reports on computer, mouse and keyboard usage during a four-week period.

    PubMed

    Mikkelsen, Sigurd; Vilstrup, Imogen; Lassen, Christina Funch; Kryger, Ann Isabel; Thomsen, Jane Frølund; Andersen, Johan Hviid

    2007-08-01

    To examine the validity and potential biases in self-reports of computer, mouse and keyboard usage times, compared with objective recordings. A study population of 1211 people was asked in a questionnaire to estimate the average time they had worked with computer, mouse and keyboard during the past four working weeks. During the same period, a software program recorded these activities objectively. The study was part of a one-year follow-up study from 2000-1 of musculoskeletal outcomes among Danish computer workers. Self-reports on computer, mouse and keyboard usage times were positively associated with objectively measured activity, but the validity was low. Self-reports explained only between a quarter and a third of the variance of objectively measured activity, and were even lower for one measure (keyboard time). Self-reports overestimated usage times. Overestimation was large at low levels and declined with increasing levels of objectively measured activity. Mouse usage time proportion was an exception with a near 1:1 relation. Variability in objectively measured activity, arm pain, gender and age influenced self-reports in a systematic way, but the effects were modest and sometimes in different directions. Self-reported durations of computer activities are positively associated with objective measures but they are quite inaccurate. Studies using self-reports to establish relations between computer work times and musculoskeletal pain could be biased and lead to falsely increased or decreased risk estimates.

  7. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input including a preset tolerance against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input including a preset tolerance against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
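
    A simplified sketch of the two-pass validation logic described in this abstract (not the patented implementation; the tolerance handling and fallback behaviour here are assumptions):

    ```python
    # Two-pass sensor validation: average, deviation-check, re-average the good inputs,
    # and fall back to the value closest to the last validated measurement on a fault.

    def validate(inputs, tolerance, last_valid):
        """Return (validated_value, suspect_flags) for one scan of sensor inputs."""
        first_avg = sum(inputs) / len(inputs)
        suspect = [abs(x - first_avg) > tolerance for x in inputs]
        good = [x for x, s in zip(inputs, suspect) if not s]
        if len(good) >= 2:
            second_avg = sum(good) / len(good)
            if all(abs(x - second_avg) <= tolerance for x in good):
                return second_avg, suspect          # validated measurement
        # Validation fault: use the input that deviates least from the last valid value.
        fallback = min(inputs, key=lambda x: abs(x - last_valid))
        return fallback, suspect

    print(validate([10.1, 10.2, 10.0, 14.7], tolerance=2.0, last_valid=10.1))
    ```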

  8. Time-Accurate Computations of Isolated Circular Synthetic Jets in Crossflow

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Schaeffler, N. W.; Milanovic, I. M.; Zaman, K. B. M. Q.

    2007-01-01

    Results from unsteady Reynolds-averaged Navier-Stokes computations are described for two different synthetic jet flows issuing into a turbulent boundary layer crossflow through a circular orifice. In one case the jet effect is mostly contained within the boundary layer, while in the other case the jet effect extends beyond the boundary layer edge. Both cases have momentum flux ratios less than 2. Several numerical parameters are investigated, and some lessons learned regarding the CFD methods for computing these types of flow fields are summarized. Results in both cases are compared to experiment.

  9. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2016-01-01

    A simple approach for computing the acceleration and velocity of a structure from measured strain is proposed in this study. First, the deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an Autoregressive Moving Average model. From the deflection, slope, and frequencies of the structure, the acceleration and velocity of the structure can be obtained using the proposed approach. Keywords: shape sensing, fiber optic strain sensor, system equivalent reduction and expansion process.

  10. Embedding medical student computer tutorials into a busy emergency department.

    PubMed

    Pusic, Martin V; Pachev, George S; MacDonald, Wendy A

    2007-02-01

    To explore medical students' use of computer tutorials embedded in a busy clinical setting; to demonstrate that such tutorials can increase knowledge gain over and above that attributable to the clinical rotation itself. Six tutorials were installed on a computer placed in a central area in an emergency department. Each tutorial was made up of between 33 and 85 screens of information that include text, graphics, animations, and questions. They were designed to be brief (10 minutes), focused, interactive, and immediately relevant. The authors evaluated the intervention using quantitative research methods, including usage tracking, surveys of faculty and students, and a randomized pretest-posttest study. Over 46 weeks, 95 medical students used the tutorials 544 times, for an overall average of 1.7 times a day. The median time spent on completed tutorials was 11 minutes (average [SD], 14 [+/-12] minutes). Seventy-four students completed the randomized study. They completed 65% of the assigned tutorials, resulting in improved examination scores compared with the control (effect size, 0.39; 95% confidence interval = 0.15 to 0.62). Students were positively disposed to the tutorials, ranking them as "valuable." Fifty-four percent preferred the tutorials to small group teaching sessions with a preceptor. The faculty was also positive about the tutorials, although they did not appear to integrate the tutorials directly into their teaching. Medical students on rotation in a busy clinical setting can and will use appropriately presented computer tutorials. The tutorials are effective in raising examination scores.

  11. Gateway Portal

    DTIC Science & Technology

    2004-03-01

    using standard Internet technologies with no additional client software required. Furthermore, using a portable...

  12. The Solar Swan Dive.

    ERIC Educational Resources Information Center

    Dilsaver, John S.; Siler, Joseph R.

    1991-01-01

    Solutions for a problem in which the time necessary for an object to fall into the sun from the average distance from the earth to the sun are presented. Both calculus- and noncalculus-based solutions are presented. A sample computer solution is included. (CW)

  13. Computation of discharge using the index-velocity method in tidally affected areas

    USGS Publications Warehouse

    Ruhl, Catherine A.; Simpson, Michael R.

    2005-01-01

    Computation of a discharge time-series in a tidally affected area is a two-step process. First, the cross-sectional area is computed on the basis of measured water levels and the mean cross-sectional velocity is computed on the basis of the measured index velocity. Then discharge is calculated as the product of the area and mean velocity. Daily mean discharge is computed as the daily average of the low-pass filtered discharge. The Sacramento-San Joaquin River Delta and San Francisco Bay, California, is an area that is strongly influenced by the tides, and therefore is used as an example of how this methodology is used.
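
    A minimal sketch of the index-velocity method described above; the stage-area and index-to-mean-velocity ratings are hypothetical placeholders, since in practice both are calibrated from field measurements:

    ```python
    # Index-velocity discharge computation: Q = A(stage) * V_mean(index velocity).
    # Both rating functions below are made-up linear examples, not the Delta ratings.

    def cross_sectional_area(stage_m: float) -> float:
        """Hypothetical stage-area rating: area in m^2 as a function of water level."""
        return 120.0 + 35.0 * stage_m

    def mean_velocity(index_velocity_ms: float) -> float:
        """Hypothetical linear index-velocity rating."""
        return 0.05 + 0.92 * index_velocity_ms

    def discharge(stage_m: float, index_velocity_ms: float) -> float:
        """Instantaneous discharge Q = A * V_mean (m^3/s); may be negative on flood tide."""
        return cross_sectional_area(stage_m) * mean_velocity(index_velocity_ms)

    print(discharge(stage_m=1.8, index_velocity_ms=0.6))
    ```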

  14. An advanced analysis and modelling the air pollutant concentration temporal dynamics in atmosphere of the industrial cities: Odessa city

    NASA Astrophysics Data System (ADS)

    Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Ternovsky, V. B.; Serga, I. N.; Bykowszczenko, N.

    2017-10-01

    Results of analysis and modelling of the air pollutant (nitrogen dioxide) concentration temporal dynamics in the atmosphere of the industrial city of Odessa are presented for the first time, based on computations using nonlinear methods from chaos theory and dynamical systems theory. A chaotic behaviour is discovered and investigated. To reconstruct the corresponding strange chaotic attractor, the time delay and embedding dimension are computed. The former is determined by the methods of autocorrelation function and average mutual information, and the latter is calculated by means of the correlation dimension method and the algorithm of false nearest neighbours. It is shown that low-dimensional chaos exists in the nitrogen dioxide concentration time series under investigation. Further, the Lyapunov exponent spectrum, Kaplan-Yorke dimension and Kolmogorov entropy are computed.
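
    A small sketch of the delay-coordinate embedding step used in this kind of attractor reconstruction; the delay and embedding dimension values below are arbitrary, standing in for those obtained from the autocorrelation/mutual-information and false-nearest-neighbour analyses:

    ```python
    import numpy as np

    # Build m-dimensional delay vectors [x(t), x(t+tau), ..., x(t+(m-1)tau)] from a series.
    def delay_embed(x: np.ndarray, m: int, tau: int) -> np.ndarray:
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

    # Toy concentration-like series standing in for the measured NO2 record.
    series = np.sin(0.07 * np.arange(2000)) + 0.1 * np.random.default_rng(0).standard_normal(2000)
    print(delay_embed(series, m=4, tau=12).shape)   # (1964, 4)
    ```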

  15. TRIO: Burst Buffer Based I/O Orchestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Oral, H Sarp; Pritchard, Michael

    The growing computing power on leadership HPC systems is often accompanied by ever-escalating failure rates. Checkpointing is a common defensive mechanism used by scientific applications for failure recovery. However, directly writing the large and bursty checkpointing dataset to the parallel filesystem can incur significant I/O contention on storage servers. Such contention in turn degrades the raw bandwidth utilization of storage servers and prolongs the average job I/O time of concurrent applications. Recently, burst buffers have been proposed as an intermediate layer to absorb the bursty I/O traffic from compute nodes to the storage backend, but an I/O orchestration mechanism is still desired to efficiently move checkpointing data from burst buffers to the storage backend. In this paper, we propose a burst buffer based I/O orchestration framework, named TRIO, to intercept and reshape the bursty writes for better sequential write traffic to storage servers. Meanwhile, TRIO coordinates the flushing orders among concurrent burst buffers to alleviate the contention on storage server bandwidth. Our experimental results reveal that TRIO can deliver 30.5% higher bandwidth and reduce the average job I/O time by 37% on average for data-intensive applications in various checkpointing scenarios.

  16. 5 CFR 550.707 - Computation of severance pay fund.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... pay for standby duty regularly varies throughout the year, compute the average standby duty premium...), compute the weekly average percentage, and multiply that percentage by the weekly scheduled rate of pay in... hours in a pay status (excluding overtime hours) and multiply that average by the hourly rate of basic...

  17. Real-time control system for adaptive resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flath, L; An, J; Brase, J

    2000-07-24

    Sustained operation of high average power solid-state lasers currently requires an adaptive resonator to produce the optimal beam quality. We describe the architecture of a real-time adaptive control system for correcting intra-cavity aberrations in a heat capacity laser. Image data collected from a wavefront sensor are processed and used to control phase with a high-spatial-resolution deformable mirror. Our controller takes advantage of recent developments in low-cost, high-performance processor technology. A desktop-based computational engine and object-oriented software architecture replaces the high-cost rack-mount embedded computers of previous systems.

  18. A high-efficiency real-time digital signal averager for time-of-flight mass spectrometry.

    PubMed

    Wang, Yinan; Xu, Hui; Li, Qingjiang; Li, Nan; Huang, Zhengxu; Zhou, Zhen; Liu, Husheng; Sun, Zhaolin; Xu, Xin; Yu, Hongqi; Liu, Haijun; Li, David D-U; Wang, Xi; Dong, Xiuzhen; Gao, Wei

    2013-05-30

    Analog-to-digital converter (ADC)-based acquisition systems are widely applied in time-of-flight mass spectrometers (TOFMS) due to their ability to record the signal intensity of all ions within the same pulse. However, the acquisition system raises the requirement for data throughput, along with increasing the conversion rate and resolution of the ADC. It is therefore of considerable interest to develop a high-performance real-time acquisition system, which can relieve the limitation of data throughput. We present in this work a high-efficiency real-time digital signal averager, consisting of a signal conditioner, a data conversion module and a signal processing module. Two optimization strategies are implemented using field programmable gate arrays (FPGAs) to enhance the efficiency of the real-time processing. A pipeline procedure is used to reduce the time consumption of the accumulation strategy. To realize continuous data transfer, a high-efficiency transmission strategy is developed, based on a ping-pong procedure. The digital signal averager features good responsiveness, analog bandwidth and dynamic performance. The optimal effective number of bits reaches 6.7 bits. For a 32 µs record length, the averager can realize 100% efficiency with an extraction frequency below 31.23 kHz by modifying the number of accumulation steps. In unit time, the averager yields superior signal-to-noise ratio (SNR) compared with data accumulation in a computer. The digital signal averager is combined with a vacuum ultraviolet single-photon ionization time-of-flight mass spectrometer (VUV-SPI-TOFMS). The efficiency of the real-time processing is tested by analyzing the volatile organic compounds (VOCs) from ordinary printed materials. In these experiments, 22 kinds of compounds are detected, and the dynamic range exceeds 3 orders of magnitude. Copyright © 2013 John Wiley & Sons, Ltd.
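
    A toy illustration of the benefit of on-board signal averaging that motivates such hardware: averaging N repeated transients improves the signal-to-noise ratio by roughly the square root of N. The waveform and noise level are arbitrary; this is not a model of the FPGA pipeline described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 32e-6, 2048)                          # 32 us record
    peak = np.exp(-0.5 * ((t - 10e-6) / 0.2e-6) ** 2)          # one ion "peak"

    def snr(n_averages: int) -> float:
        # Average n noisy repetitions of the same transient and compare peak to baseline noise.
        shots = peak + rng.normal(0.0, 0.5, size=(n_averages, t.size))
        avg = shots.mean(axis=0)
        return avg.max() / avg[t > 20e-6].std()

    print(f"SNR x1 ~ {snr(1):.1f}, SNR x100 ~ {snr(100):.1f}")
    ```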

  19. Neural pulse frequency modulation of an exponentially correlated Gaussian process

    NASA Technical Reports Server (NTRS)

    Hutchinson, C. E.; Chon, Y.-T.

    1976-01-01

    The effect of NPFM (Neural Pulse Frequency Modulation) on a stationary Gaussian input, namely an exponentially correlated Gaussian input, is investigated with special emphasis on the determination of the average number of pulses in unit time, known also as the average frequency of pulse occurrence. For some classes of stationary input processes where the formulation of the appropriate multidimensional Markov diffusion model of the input-plus-NPFM system is possible, the average impulse frequency may be obtained by a generalization of the approach adopted. The results are approximate and numerical, but are in close agreement with Monte Carlo computer simulation results.

  20. Nonintrusive performance measurement of a gas turbine engine in real time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeSilva, Upul P.; Claussen, Heiko

    Performance of a gas turbine engine is monitored by computing a mass flow rate through the engine. Acoustic time-of-flight measurements are taken between acoustic transmitters and receivers in the flow path of the engine. The measurements are processed to determine average speeds of sound and gas flow velocities along those lines-of-sound. A volumetric flow rate in the flow path is computed using the gas flow velocities together with a representation of the flow path geometry. A gas density in the flow path is computed using the speeds of sound and a measured static pressure. The mass flow rate is calculated from the gas density and the volumetric flow rate.

  1. The time resolution of the St Petersburg paradox

    PubMed Central

    Peters, Ole

    2011-01-01

    A resolution of the St Petersburg paradox is presented. In contrast to the standard resolution, utility is not required. Instead, the time-average performance of the lottery is computed. The final result can be phrased mathematically identically to Daniel Bernoulli's resolution, which uses logarithmic utility, but is derived using a conceptually different argument. The advantage of the time resolution is the elimination of arbitrary utility functions. PMID:22042904
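
    An illustrative simulation of the time-average perspective (not Peters' analytic derivation): play the lottery repeatedly at a fixed ticket price and track the realized per-round growth factor of wealth. The price, starting wealth, and payout convention used here are assumptions.

    ```python
    import random

    def st_petersburg_payout(rng: random.Random) -> float:
        # Payout starts at 1 and doubles on every consecutive "heads"; expected value is infinite.
        payout = 1.0
        while rng.random() < 0.5:
            payout *= 2.0
        return payout

    def time_average_growth(wealth: float, price: float, rounds: int, seed: int = 1) -> float:
        # Geometric per-round growth factor of wealth over many repeated plays.
        rng = random.Random(seed)
        w = wealth
        for _ in range(rounds):
            w = w - price + st_petersburg_payout(rng)
        return (w / wealth) ** (1.0 / rounds)

    print(time_average_growth(wealth=100.0, price=4.0, rounds=10_000))
    ```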

  2. Cascading Oscillators in Decoding Speech: Reflection of a Cortical Computation Principle

    DTIC Science & Technology

    2016-09-06

    Combining an experimental paradigm based on Ghitza and Greenberg (2009) for speech with the approach of Farbood et al. (2013) to timing in key...Fuglsang, 2015). A model was developed which uses modulation spectrograms to construct an oscillating time-series synchronized with the slowly varying...

  3. Susceptible-infected-susceptible epidemics on networks with general infection and cure times.

    PubMed

    Cator, E; van de Bovenkamp, R; Van Mieghem, P

    2013-06-01

    The classical, continuous-time susceptible-infected-susceptible (SIS) Markov epidemic model on an arbitrary network is extended to incorporate infection and curing or recovery times each characterized by a general distribution (rather than an exponential distribution as in Markov processes). This extension, called the generalized SIS (GSIS) model, is believed to have a much larger applicability to real-world epidemics (such as information spread in online social networks, real diseases, malware spread in computer networks, etc.) that likely do not feature exponential times. While the exact governing equations for the GSIS model are difficult to deduce due to their non-Markovian nature, accurate mean-field equations are derived that resemble our previous N-intertwined mean-field approximation (NIMFA) and so allow us to transfer the whole analytic machinery of the NIMFA to the GSIS model. In particular, we establish the criterion to compute the epidemic threshold in the GSIS model. Moreover, we show that the average number of infection attempts during a recovery time is the more natural key parameter, instead of the effective infection rate in the classical, continuous-time SIS Markov model. The relative simplicity of our mean-field results enables us to treat more general types of SIS epidemics, while offering an easier key parameter to measure the average activity of those general viral agents.
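
    A Monte Carlo sketch of the key parameter highlighted above, the average number of infection attempts made during one generally distributed recovery time; the example distributions are arbitrary choices, not taken from the paper.

    ```python
    import random

    def attempts_during_recovery(draw_recovery, draw_attempt_gap, trials=100_000, seed=0):
        """Estimate the mean number of infection attempts that fit inside one recovery time."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            recovery = draw_recovery(rng)
            t, n = draw_attempt_gap(rng), 0
            while t <= recovery:
                n += 1
                t += draw_attempt_gap(rng)
            total += n
        return total / trials

    # Example: Weibull-distributed recovery times, exponential attempt gaps with rate 2.
    print(attempts_during_recovery(lambda r: r.weibullvariate(1.0, 1.5),
                                   lambda r: r.expovariate(2.0)))
    ```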

  4. Susceptible-infected-susceptible epidemics on networks with general infection and cure times

    NASA Astrophysics Data System (ADS)

    Cator, E.; van de Bovenkamp, R.; Van Mieghem, P.

    2013-06-01

    The classical, continuous-time susceptible-infected-susceptible (SIS) Markov epidemic model on an arbitrary network is extended to incorporate infection and curing or recovery times each characterized by a general distribution (rather than an exponential distribution as in Markov processes). This extension, called the generalized SIS (GSIS) model, is believed to have a much larger applicability to real-world epidemics (such as information spread in online social networks, real diseases, malware spread in computer networks, etc.) that likely do not feature exponential times. While the exact governing equations for the GSIS model are difficult to deduce due to their non-Markovian nature, accurate mean-field equations are derived that resemble our previous N-intertwined mean-field approximation (NIMFA) and so allow us to transfer the whole analytic machinery of the NIMFA to the GSIS model. In particular, we establish the criterion to compute the epidemic threshold in the GSIS model. Moreover, we show that the average number of infection attempts during a recovery time is the more natural key parameter, instead of the effective infection rate in the classical, continuous-time SIS Markov model. The relative simplicity of our mean-field results enables us to treat more general types of SIS epidemics, while offering an easier key parameter to measure the average activity of those general viral agents.

  5. On simulating flow with multiple time scales using a method of averages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, L.G.

    1997-12-31

    The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.

  6. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

    Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.

  7. The SPINDLE Disruption-Tolerant Networking System

    DTIC Science & Technology

    2007-11-01

    average availability (AA). The AA metric attempts to measure the average fraction of time in the near future that the link will be available for use...Each link’s AA is epidemically disseminated to all nodes. Path costs are computed using the topology learned through this dissemination, with cost of a...link l set to (1 − AA(l)) + c (a small constant factor that makes routing favor a smaller number of hops when all links have an AA of 1). Additional details

  8. SIMULATING ATMOSPHERIC EXPOSURE IN A NATIONAL RISK ASSESSMENT USING AN INNOVATIVE METEOROLOGICAL SAMPLING SCHEME

    EPA Science Inventory

    Multimedia risk assessments require the temporal integration of atmospheric concentration and deposition with other media modules. However, providing an extended time series of estimates is computationally expensive. An alternative approach is to substitute long-term average a...

  9. Near real-time digital holographic microscope based on GPU parallel computing

    NASA Astrophysics Data System (ADS)

    Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan

    2018-01-01

    A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing technology based on the compute unified device architecture (CUDA) and digital holographic microscopy are combined. Compared with other holographic microscopes, which have to perform reconstruction in multiple focal planes and are therefore time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with the parallel computing technology based on CUDA, so it is especially suitable for measurements of particle fields at the micrometer and nanometer scale. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at the micrometer scale, and the average velocity error is lower than 10%. With graphics processing units (GPUs), the computing time for 100 reconstruction planes (512×512 grids) is less than 120 ms, while it is 4.9 s using the traditional CPU-based reconstruction method. The reconstruction speed has thus been raised by a factor of 40. In other words, the system can handle holograms at 8.3 frames per second, and near real-time measurement and display of the particle velocity field are realized. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved by further optimization of software and hardware. Keywords: digital holographic microscope,

  10. Design and performance of limestone drains to increase pH and remove metals from acidic mine drainage, Chapter 2

    USGS Publications Warehouse

    Cravotta,, Charles A.; Watzlaf, George R.

    2002-01-01

    Data on the construction characteristics and the composition of influent and effluent at 13 underground, limestone-filled drains in Pennsylvania and Maryland are reported to evaluate the design and performance of limestone drains for the attenuation of acidity and dissolved metals in acidic mine drainage. On the basis of the initial mass of limestone, dimensions of the drains, and average flow rates, the initial porosity and average detention time for each drain were computed. Calculated porosity ranged from 0.12 to 0.50 with corresponding detention times at average flow from 1.3 to 33 h. The effectiveness of treatment was dependent on influent chemistry, detention time, and limestone purity. At two sites where influent contained elevated dissolved Al (>5 mg/liter), drain performance declined rapidly; elsewhere the drains consistently produced near-neutral effluent, even when influent contained small concentrations of dissolved Fe^+ (<5 mg/liter). Rates of limestone dissolution computed on the basis of average long-term Ca ion flux normalized by initial mass and purity of limestone at each of the drains ranged from 0.008 to 0.079 year⁻¹. Data for alkalinity concentration and flux during 11-day closed-container tests using an initial mass of 4 kg crushed limestone and a solution volume of 2.3 liter yielded dissolution rate constants that were comparable to these long-term field rates. An analytical method is proposed using closed-container test data to evaluate long-term performance (longevity) or to estimate the mass of limestone needed for a limestone treatment. This method considers flow rate, influent alkalinity, steady-state alkalinity of effluent, and desired effluent alkalinity or detention time at a future time(s), and applies first-order rate laws for limestone dissolution (continuous) and production of alkalinity (bounded).
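
    A back-of-the-envelope sketch of the porosity and detention-time calculation described above; all input values are illustrative rather than data from the 13 drains:

    ```python
    # Detention time = water-filled pore volume / flow rate, with porosity estimated from
    # the limestone mass, an assumed particle density, and the drain's bulk volume.

    def detention_time_hours(limestone_mass_kg: float,
                             bulk_volume_m3: float,
                             flow_lpm: float,
                             particle_density_kg_m3: float = 2650.0) -> float:
        solids_volume_m3 = limestone_mass_kg / particle_density_kg_m3
        porosity = 1.0 - solids_volume_m3 / bulk_volume_m3
        pore_volume_l = porosity * bulk_volume_m3 * 1000.0
        return pore_volume_l / flow_lpm / 60.0

    # Hypothetical drain: 200 t of limestone in a 150 m^3 trench, 100 L/min average flow.
    print(detention_time_hours(limestone_mass_kg=200_000, bulk_volume_m3=150.0, flow_lpm=100.0))
    ```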

  11. Discrete element analysis is a valid method for computing joint contact stress in the hip before and after acetabular fracture.

    PubMed

    Townsend, Kevin C; Thomas-Aitken, Holly D; Rudert, M James; Kern, Andrew M; Willey, Michael C; Anderson, Donald D; Goetz, Jessica E

    2018-01-23

    Evaluation of abnormalities in joint contact stress that develop after inaccurate reduction of an acetabular fracture may provide a potential means for predicting the risk of developing post-traumatic osteoarthritis. Discrete element analysis (DEA) is a computational technique for calculating intra-articular contact stress distributions in a fraction of the time required to obtain the same information using the more commonly employed finite element analysis technique. The goal of this work was to validate the accuracy of DEA-computed contact stress against physical measurements of contact stress made in cadaveric hips using Tekscan sensors. Four static loading tests in a variety of poses from heel-strike to toe-off were performed in two different cadaveric hip specimens with the acetabulum intact and again with an intentionally malreduced posterior wall acetabular fracture. DEA-computed contact stress was compared on a point-by-point basis to stress measured from the physical experiments. There was good agreement between computed and measured contact stress over the entire contact area (correlation coefficients ranged from 0.88 to 0.99). DEA-computed peak contact stress was within an average of 0.5 MPa (range 0.2-0.8 MPa) of the Tekscan peak stress for intact hips, and within an average of 0.6 MPa (range 0-1.6 MPa) for fractured cases. DEA-computed contact areas were within an average of 33% of the Tekscan-measured areas (range: 1.4-60%). These results indicate that the DEA methodology is a valid method for accurately estimating contact stress in both intact and fractured hips. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Numerical Simulation of Forced and Free-to-Roll Delta-Wing Motions

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Schiff, Lewis B.

    1996-01-01

    The three-dimensional, Reynolds-averaged, Navier-Stokes (RANS) equations are used to numerically simulate nonsteady vortical flow about a 65-deg sweep delta wing at 30-deg angle of attack. Two large-amplitude, high-rate, forced-roll motions, and a damped free-to-roll motion are presented. The free-to-roll motion is computed by coupling the time-dependent RANS equations to the flight dynamic equation of motion. The computed results are in good agreement with the forces, moments, and roll-angle time histories. Vortex breakdown is present in each case. Significant time lags in the vortex breakdown motions relative to the body motions strongly influence the dynamic forces and moments.

  13. Finite element concepts in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    Finite element theory was employed to establish an implicit numerical solution algorithm for the time averaged unsteady Navier-Stokes equations. Both the multidimensional and a time-split form of the algorithm were considered, the latter of particular interest for problem specification on a regular mesh. A Newton matrix iteration procedure is outlined for solving the resultant nonlinear algebraic equation systems. Multidimensional discretization procedures are discussed with emphasis on automated generation of specific nonuniform solution grids and accounting of curved surfaces. The time-split algorithm was evaluated with regards to accuracy and convergence properties for hyperbolic equations on rectangular coordinates. An overall assessment of the viability of the finite element concept for computational aerodynamics is made.

  14. OceanXtremes: Scalable Anomaly Detection in Oceanographic Time-Series

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Armstrong, E. M.; Chin, T. M.; Gill, K. M.; Greguska, F. R., III; Huang, T.; Jacob, J. C.; Quach, N.

    2016-12-01

    The oceanographic community must meet the challenge to rapidly identify features and anomalies in complex and voluminous observations to further science and improve decision support. Given this data-intensive reality, we are developing an anomaly detection system, called OceanXtremes, powered by an intelligent, elastic Cloud-based analytic service backend that enables execution of domain-specific, multi-scale anomaly and feature detection algorithms across the entire archive of 15 to 30-year ocean science datasets. Our parallel analytics engine is extending the NEXUS system and exploits multiple open-source technologies: Apache Cassandra as a distributed spatial "tile" cache, Apache Spark for in-memory parallel computation, and Apache Solr for spatial search and storing pre-computed tile statistics and other metadata. OceanXtremes provides these key capabilities: Parallel generation (Spark on a compute cluster) of 15 to 30-year Ocean Climatologies (e.g. sea surface temperature or SST) in hours or overnight, using simple pixel averages or customizable Gaussian-weighted "smoothing" over latitude, longitude, and time; Parallel pre-computation, tiling, and caching of anomaly fields (daily variables minus a chosen climatology) with pre-computed tile statistics; Parallel detection (over the time-series of tiles) of anomalies or phenomena by regional area-averages exceeding a specified threshold (e.g. high SST in El Nino or SST "blob" regions), or more complex, custom data mining algorithms; Shared discovery and exploration of ocean phenomena and anomalies (facet search using Solr), along with unexpected correlations between key measured variables; Scalable execution for all capabilities on a hybrid Cloud, using our on-premise OpenStack Cloud cluster or at Amazon. The key idea is that the parallel data-mining operations will be run "near" the ocean data archives (a local "network" hop) so that we can efficiently access the thousands of files making up a three decade time-series. The presentation will cover the architecture of OceanXtremes, parallelization of the climatology computation and anomaly detection algorithms using Spark, example results for SST and other time-series, and parallel performance metrics.
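
    A conceptual sketch in plain NumPy (not the NEXUS/Spark implementation) of the two core operations: a per-pixel climatology, the anomaly fields derived from it, and a simple threshold test on a region average. The data cube and index window are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sst = rng.normal(15.0, 2.0, size=(365, 90, 180))        # toy daily SST cube: (time, lat, lon)

    climatology = sst.mean(axis=0)                          # simple pixel average over time
    anomaly = sst - climatology                             # daily anomaly fields

    # Flag days whose region-averaged anomaly stands out (e.g. an SST "blob" candidate).
    region = anomaly[:, 40:50, 80:100]                      # hypothetical lat/lon index window
    region_mean = region.mean(axis=(1, 2))
    print(np.flatnonzero(region_mean > 2 * region_mean.std())[:10])
    ```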

  15. Benchmarking hardware architecture candidates for the NFIRAOS real-time controller

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm; Kerley, Dan; Herriot, Glen; Véran, Jean-Pierre

    2014-07-01

    As a part of the trade study for the Narrow Field Infrared Adaptive Optics System, the adaptive optics system for the Thirty Meter Telescope, we investigated the feasibility of performing real-time control computation using a Linux operating system and Intel Xeon E5 CPUs. We also investigated a Xeon Phi based architecture which allows higher levels of parallelism. This paper summarizes both the CPU based real-time controller architecture and the Xeon Phi based RTC. The Intel Xeon E5 CPU solution meets the requirements and performs the computation for one AO cycle in an average of 767 microseconds. The Xeon Phi solution did not meet the 1200 microsecond time requirement and also suffered from unpredictable execution times. More detailed benchmark results are reported for both architectures.
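
    A simple timing harness in the spirit of this benchmark: measure the mean and worst-case latency of a stand-in compute step (a matrix-vector multiply with arbitrary sizes, not the NFIRAOS reconstructor):

    ```python
    import time
    import numpy as np

    # One "AO cycle" stood in for by a reconstruction-like matrix-vector product.
    recon_matrix = np.random.rand(4096, 7000).astype(np.float32)
    slopes = np.random.rand(7000).astype(np.float32)

    latencies = []
    for _ in range(200):
        t0 = time.perf_counter()
        _ = recon_matrix @ slopes
        latencies.append((time.perf_counter() - t0) * 1e6)   # microseconds

    # Average latency tells only half the story; the maximum captures execution-time jitter.
    print(f"mean = {np.mean(latencies):.0f} us, max = {np.max(latencies):.0f} us")
    ```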

  16. 26 CFR 1.163-10T - Qualified residence interest (temporary).

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... general. (ii)Example. (g)Selection of method. (h)Average balance. (1)Average balance defined. (2)Average balance reported by lender. (3)Average balance computed on a daily basis. (i)In general. (ii)Example. (4)Average balance computed using the interest rate. (i)In general. (ii)Points and prepaid interest. (iii...

  17. The relationship between playing computer or video games with mental health and social relationships among students in guidance schools, Kermanshah.

    PubMed

    Reshadat, S; Ghasemi, S R; Ahmadian, M; RajabiGilan, N

    2014-01-09

    Computer or video games are a popular recreational activity and playing them may constitute a large part of leisure time. This cross-sectional study aimed to evaluate the relationship between playing computer or video games with mental health and social relationships among students in guidance schools in Kermanshah, Islamic Republic of Iran, in 2012. Our total sample was 573 students and our tool was the General Health Questionnaire (GHQ-28) and social relationships questionnaires. Survey respondents reported spending an average of 71.07 (SD 72.1) min/day on computer or video games. There was a significant relationship between time spent playing games and general mental health (P < 0.04) and depression (P < 0.03). There was also a significant difference between playing and not playing computer or video games with social relationships and their subscales, including trans-local relationships (P < 0.0001) and association relationships (P < 0.01) among all participants. There was also a significant relationship between social relationships and time spent playing games (P < 0.02) and its dimensions, except for family relationships.

  18. Computational Study of Axisymmetric Off-Design Nozzle Flows

    NASA Technical Reports Server (NTRS)

    DalBello, Teryn; Georgiadis, Nicholas; Yoder, Dennis; Keith, Theo

    2003-01-01

    Computational Fluid Dynamics (CFD) analyses of axisymmetric circular-arc boattail nozzles operating off-design at transonic Mach numbers have been completed. These computations span the very difficult transonic flight regime with shock-induced separations and strong adverse pressure gradients. External afterbody and internal nozzle pressure distributions computed with the Wind code are compared with experimental data. A range of turbulence models was examined, including the Explicit Algebraic Stress model. Computations have been completed at freestream Mach numbers of 0.9 and 1.2, and nozzle pressure ratios (NPR) of 4 and 6. Calculations completed with variable time-stepping (steady-state) did not converge to a true steady-state solution. Calculations obtained using constant time-stepping (time-accurate) indicate smaller variations in flow properties compared with the steady-state solutions. This failure to converge to a steady-state solution was the result of using variable time-stepping with large-scale separations present in the flow. Nevertheless, time-averaged boattail surface pressure coefficients and internal nozzle pressures show reasonable agreement with experimental data. The SST turbulence model demonstrates the best overall agreement with experimental data.

  19. Computation of Asteroid Proper Elements on the Grid

    NASA Astrophysics Data System (ADS)

    Novakovic, B.; Balaz, A.; Knezevic, Z.; Potocnik, M.

    2009-12-01

    A procedure for gridification of the computation of asteroid proper orbital elements is described. The need to speed up the time-consuming computations and make them more efficient is justified by the large increase of observational data expected from the next generation of all-sky surveys. We give the basic notion of proper elements and of the contemporary theories and methods used to compute them for different populations of objects. Proper elements for nearly 70,000 asteroids have been derived since the beginning of the use of the Grid infrastructure for this purpose. The average time for the catalog updates is significantly shortened with respect to the time needed with stand-alone workstations. We also present the basics of Grid computing, the concepts of Grid middleware and its Workload Management System. The practical steps we undertook to efficiently gridify our application are described in full detail. We present the results of a comprehensive testing of the performance of different Grid sites, and offer some practical conclusions based on the benchmark results and on our experience. Finally, we propose some possibilities for future work.

  20. PAB3D: Its History in the Use of Turbulence Models in the Simulation of Jet and Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Pao, S. Paul; Hunter, Craig A.; Deere, Karen A.; Massey, Steven J.; Elmiligui, Alaa

    2006-01-01

    This is a review paper for PAB3D s history in the implementation of turbulence models for simulating jet and nozzle flows. We describe different turbulence models used in the simulation of subsonic and supersonic jet and nozzle flows. The time-averaged simulations use modified linear or nonlinear two-equation models to account for supersonic flow as well as high temperature mixing. Two multiscale-type turbulence models are used for unsteady flow simulations. These models require modifications to the Reynolds Averaged Navier-Stokes (RANS) equations. The first scheme is a hybrid RANS/LES model utilizing the two-equation (k-epsilon) model with a RANS/LES transition function, dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes (PANS) formulation. All of these models are implemented in the three-dimensional Navier-Stokes code PAB3D. This paper discusses computational methods, code implementation, computed results for a wide range of nozzle configurations at various operating conditions, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions.

  1. Computational imaging of light in flight

    NASA Astrophysics Data System (ADS)

    Hullin, Matthias B.

    2014-10-01

    Many computer vision tasks are hindered by image formation itself, a process that is governed by the so-called plenoptic integral. By averaging light falling into the lens over space, angle, wavelength and time, a great deal of information is irreversibly lost. The emerging idea of transient imaging operates on a time resolution fast enough to resolve non-stationary light distributions in real-world scenes. It enables the discrimination of light contributions by the optical path length from light source to receiver, a dimension unavailable in mainstream imaging to date. Until recently, such measurements used to require high-end optical equipment and could only be acquired under extremely restricted lab conditions. To address this challenge, we introduced a family of computational imaging techniques operating on standard time-of-flight image sensors, for the first time allowing the user to "film" light in flight in an affordable, practical and portable way. Just as impulse responses have proven a valuable tool in almost every branch of science and engineering, we expect light-in-flight analysis to impact a wide variety of applications in computer vision and beyond.

  2. An exact computational method for performance analysis of sequential test algorithms for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Lacy, Fred; Carriere, Patrick

    2015-05-01

    Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as port scanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.

  3. Analytic computation of average energy of neutrons inducing fission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Alexander Rich

    2016-08-12

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
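
    A numerical sketch of the fission-rate-weighted average that such a calculation evaluates, ⟨E⟩ = ∫ E φ(E) σf(E) dE / ∫ φ(E) σf(E) dE; the spectrum shape and cross section below are simple stand-ins, not the BeRP-ball flux or evaluated nuclear data:

    ```python
    import numpy as np

    # Toy flux (Watt-like spectrum shape) and a toy 1/v-like fission cross section;
    # the average energy of neutrons inducing fission weights E by phi(E)*sigma_f(E).
    E = np.linspace(0.01, 10.0, 5000)                          # MeV
    phi = np.exp(-E / 0.988) * np.sinh(np.sqrt(2.249 * E))     # spectrum shape (arbitrary norm)
    sigma_f = 1.0 + 0.5 / np.sqrt(E)                           # placeholder cross section (barns)

    weights = phi * sigma_f
    E_avg = np.trapz(E * weights, E) / np.trapz(weights, E)
    print(f"average energy of neutrons inducing fission ~ {E_avg:.2f} MeV")
    ```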

  4. Viking Afterbody Heating Computations and Comparisons to Flight Data

    NASA Technical Reports Server (NTRS)

    Edquist, Karl T.; Wright, Michael J.; Allen, Gary A., Jr.

    2006-01-01

    Computational fluid dynamics predictions of Viking Lander 1 entry vehicle afterbody heating are compared to flight data. The analysis includes a derivation of heat flux from temperature data at two base cover locations, as well as a discussion of available reconstructed entry trajectories. Based on the raw temperature-time history data, convective heat flux is derived to be 0.63-1.10 W/cm2 for the aluminum base cover at the time of thermocouple failure. Peak heat flux at the fiberglass base cover thermocouple is estimated to be 0.54-0.76 W/cm2, occurring 16 seconds after peak stagnation point heat flux. Navier-Stokes computational solutions are obtained with two separate codes using an 8- species Mars gas model in chemical and thermal non-equilibrium. Flowfield solutions using local time-stepping did not result in converged heating at either thermocouple location. A global time-stepping approach improved the computational stability, but steady state heat flux was not reached for either base cover location. Both thermocouple locations lie within a separated flow region of the base cover that is likely unsteady. Heat flux computations averaged over the solution history are generally below the flight data and do not vary smoothly over time for both base cover locations. Possible reasons for the mismatch between flight data and flowfield solutions include underestimated conduction effects and limitations of the computational methods.

  5. Viking Afterbody Heating Computations and Comparisons to Flight Data

    NASA Technical Reports Server (NTRS)

    Edquist, Karl T.; Wright, Michael J.; Allen, Gary A., Jr.

    2006-01-01

    Computational fluid dynamics predictions of Viking Lander 1 entry vehicle afterbody heating are compared to flight data. The analysis includes a derivation of heat flux from temperature data at two base cover locations, as well as a discussion of available reconstructed entry trajectories. Based on the raw temperature-time history data, convective heat flux is derived to be 0.63-1.10 W/sq cm for the aluminum base cover at the time of thermocouple failure. Peak heat flux at the fiberglass base cover thermocouple is estimated to be 0.54-0.76 W/sq cm, occurring 16 seconds after peak stagnation point heat flux. Navier-Stokes computational solutions are obtained with two separate codes using an 8-species Mars gas model in chemical and thermal non-equilibrium. Flowfield solutions using local time-stepping did not result in converged heating at either thermocouple location. A global time-stepping approach improved the computational stability, but steady state heat flux was not reached for either base cover location. Both thermocouple locations lie within a separated flow region of the base cover that is likely unsteady. Heat flux computations averaged over the solution history are generally below the flight data and do not vary smoothly over time for both base cover locations. Possible reasons for the mismatch between flight data and flowfield solutions include underestimated conduction effects and limitations of the computational methods.

  6. An effective chaos-geometric computational approach to analysis and prediction of evolutionary dynamics of the environmental systems: Atmospheric pollution dynamics

    NASA Astrophysics Data System (ADS)

    Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Bunyakova, Yu Ya; Florko, T. A.; Agayar, E. V.; Solyanikova, E. P.

    2017-10-01

    The present paper concerns the results of a computational study of the dynamics of atmospheric pollutant (nitrogen dioxide, sulphur dioxide, etc.) concentrations in the atmosphere of industrial cities (Odessa) using the methods of dynamical systems and chaos theory. A chaotic behaviour in the nitrogen dioxide and sulphurous anhydride concentration time series at several sites of the Odessa city is numerically investigated. As usual, to reconstruct the corresponding attractor, the time delay and embedding dimension are needed. The former is determined by the methods of autocorrelation function and average mutual information, and the latter is calculated by means of a correlation dimension method and the algorithm of false nearest neighbours. Further, the Lyapunov exponent spectrum, Kaplan-Yorke dimension and Kolmogorov entropy are computed. The existence of low-dimensional chaos in the time series of the atmospheric pollutant concentrations has been found.

  7. Daily and Long Term Variations of Out-Door Gamma Dose Rate in Khorasan Province, Iran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toossi, M. T. Bahreyni; Bayani, SH.

    2008-08-07

    In Iran before 1996, only a few hot spots had been identified and no systematic study had been envisaged. Since then, preparation of an outdoor environmental gamma radiation map of Iran has been defined as a long-term goal in our center; at the same time, simultaneous monitoring of outdoor gamma levels in Khorasan was also proposed. A Rados area monitoring system (AAM-90), including 10 intelligent RD-02 detectors and all associated components, was purchased. From 2003, seven stations have gradually been set up in Khorasan. For all seven stations, the monthly average and the one-hour daily average over four time intervals have been computed. Statistically, no significant differences have been observed; this is also true for the monthly averages. The overall average dose rate for the present seven stations varies from 0.11 µSv·h⁻¹ for Ferdows to 0.04 µSv·h⁻¹ for Dargaz. Based on our data, a 50-minute sample in any time interval is an accurate sample size for estimating the outdoor gamma dose rate.

  8. Partially-Averaged Navier-Stokes (PANS) approach for study of fluid flow and heat transfer characteristics in Czochralski melt

    NASA Astrophysics Data System (ADS)

    Verma, Sudeep; Dewan, Anupam

    2018-01-01

    The Partially-Averaged Navier-Stokes (PANS) approach has been applied for the first time to model turbulent flow and heat transfer in an ideal Czochralski setup with realistic boundary conditions. This method provides a variable level of resolution, ranging from Reynolds-Averaged Navier-Stokes (RANS) modelling to Direct Numerical Simulation (DNS), based on the filter control parameter. For the present case, a low-Re PANS model has been developed for Czochralski melt flow, which includes the effect of Coriolis, centrifugal, buoyancy and surface-tension-induced forces. The aim of the present study is to assess the improvement in results on switching from the unsteady RANS (URANS) approach to PANS modelling on the same computational mesh. The PANS-computed results were found to be in good agreement with the reported experimental, DNS and Large Eddy Simulation (LES) data. A clear improvement in computational accuracy is observed in switching from the URANS approach to the PANS methodology. The computed results further improved with a reduction in the PANS filter width. Further, the capability of the PANS model to capture key characteristics of the Czochralski crystal growth is also highlighted. It was observed that the PANS model was able to resolve the three-dimensional turbulent nature of the melt, characteristic flow structures arising due to flow instabilities, and the generation of thermal plumes and vortices in the Czochralski melt.

  9. Co Modeling and Co Synthesis of Safety Critical Multi threaded Embedded Software for Multi Core Embedded Platforms

    DTIC Science & Technology

    2017-03-20

    computation, Prime Implicates, Boolean Abstraction, real-time embedded software, software synthesis, correct by construction software design, model...types for time-dependent data-flow networks". J.-P. Talpin, P. Jouvelot, S. Shukla. ACM-IEEE Conference on Methods and Models for System Design...

  10. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricular (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and the manual gold standard. Comparison of clinically relevant parameters computed using registered data and the manual gold standard show a good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows an accuracy, a robustness and a computation speed adequate for use in a clinical environment.

  11. The turbulent mean-flow, Reynolds-stress, and heat flux equations in mass-averaged dependent variables

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Rose, W. C.

    1973-01-01

    The time-dependent, turbulent mean-flow, Reynolds stress, and heat flux equations in mass-averaged dependent variables are presented. These equations are given in conservative form for both generalized orthogonal and axisymmetric coordinates. For the case of small viscosity and thermal conductivity fluctuations, these equations are considerably simpler than the general Reynolds system of dependent variables for a compressible fluid and permit a more direct extension of low speed turbulence modeling to computer codes describing high speed turbulence fields.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.; Gross, R.; Goble, W

    The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.
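
    As a hedged illustration of the failure-rate-only calculation referred to above, the simplified IEC 61508-style expression for a single (1oo1) device with dangerous undetected failure rate λ_DU and proof-test interval T_I is commonly written as:

    $$ \mathrm{PFD}_{avg} \approx \frac{1}{T_I}\int_0^{T_I}\bigl(1 - e^{-\lambda_{DU}\,t}\bigr)\,dt \approx \frac{\lambda_{DU}\,T_I}{2} \quad (\lambda_{DU}\,T_I \ll 1) $$

    The paper's point is that maintenance practice alters the effective failure rate and proof-test effectiveness, so this idealized expression alone can misstate the achieved SIL.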

  13. Linear optical quantum computing in a single spatial mode.

    PubMed

    Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A

    2013-10-11

    We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.

  14. Possibilities of forecasting hypercholesterinemia in pilots

    NASA Technical Reports Server (NTRS)

    Vivilov, P.

    1980-01-01

    The dependence of the frequency of hypercholesterinemia on the age, average annual flying time, functional category, qualification class, and flying specialty of 300 pilots was investigated. The risk probability coefficient of hypercholesterinemia was computed. An evaluation table was developed which gives an 84% probability of forecasting the risk of hypercholesterinemia.

  15. What Makes Industries Strategic

    DTIC Science & Technology

    1990-01-01

    1988, America’s dollar GNP per employee fell below the average of the next six largest market economies for the first time in this century (chart...manufacturing value added divided by full-time equivalent employees (with and without SIC 35, which contains computers). Chart 2. Productivity in...been available in English. Employees at Convex, a mini-supercomputer maker, had to learn the Japanese alphabet before they realized the opportunity

  16. Logistic characteristics of phonon transport in silicon thin film: the S-curve

    NASA Astrophysics Data System (ADS)

    Yilbas, B. S.; Mansoor, S. Bin

    2013-10-01

    The logistic characteristics of the averaged heat flux are investigated across the thin film by incorporating the S-curve. The temporal behaviour of the heat flux vector is computed using the Boltzmann transport equation. The dispersion relations are introduced to account for the frequency-dependent phonon transport across the film. The influence of film width on the characteristics of the averaged heat flux is also examined. It is found that the temporal behaviour of the averaged heat flux follows the S-curve, and that the S-curve characteristics change for different film widths. The time to reach 95% of the steady value of the averaged heat flux is shorter for films with small widths, which is attributed to the ballistic behaviour of phonons in the film.
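
    A generic logistic (S-curve) fit of the kind referred to in the abstract can be written as follows; the symbols are illustrative and are not taken from the paper:

    $$ \bar{q}(t) \;\approx\; \frac{\bar{q}_{ss}}{1 + \exp\!\left(-\dfrac{t - t_0}{\tau}\right)} $$

    Here q̄_ss is the steady-state averaged heat flux, t_0 the midpoint time and τ the rise time scale; under this form the 95% rise time quoted above corresponds to roughly t_0 + 3τ.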

  17. Optical near-field analysis of spherical metals: Application of the FDTD method combined with the ADE method.

    PubMed

    Yamaguchi, Takashi; Hinata, Takashi

    2007-09-03

    The time-average energy density of the optical near-field generated around a metallic sphere is computed using the finite-difference time-domain method. To check the accuracy, the numerical results are compared with the rigorous solutions given by Mie theory. The Lorentz-Drude model, which is coupled with Maxwell's equations via the equations of motion of an electron, is applied to simulate the dispersion relation of metallic materials. The distributions of the optical near-field generated around a metallic hemisphere and a metallic spheroid are also computed, and strong optical near-fields are obtained at their rims.
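
    For context, a commonly used form of the Lorentz-Drude permittivity and of the auxiliary differential equation (ADE) that couples each pole to Maxwell's equations is sketched below; sign conventions vary with the assumed time dependence, and the pole parameters here are placeholders rather than values from the paper.

    $$ \varepsilon_r(\omega) = \varepsilon_\infty + \sum_j \frac{\Delta\varepsilon_j\,\omega_j^2}{\omega_j^2 - \omega^2 - i\,\Gamma_j\,\omega}, \qquad \frac{d^2 P_j}{dt^2} + \Gamma_j\,\frac{dP_j}{dt} + \omega_j^2\,P_j = \varepsilon_0\,\Delta\varepsilon_j\,\omega_j^2\,E $$

    The Drude (free-electron) contribution is the ω_j → 0 limit of such a pole; in the FDTD loop each polarization P_j is updated alongside the E and H fields at every time step.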

  18. [Computer-supported patient history: a workplace analysis].

    PubMed

    Schubiger, G; Weber, D; Winiker, H; Desgrandchamps, D; Imahorn, P

    1995-04-29

    Since 1991, an extensive computer network has been developed and implemented at the Cantonal Hospital of Lucerne. The medical applications include computer aided management of patient charts, medical correspondence, and compilation of diagnosis statistics according to the ICD-9 code. In 1992, the system was introduced as a pilot project in the departments of pediatrics and pediatric surgery of the Lucerne Children's Hospital. This new system has been prospectively evaluated using a workplace analysis. The time taken to complete patient charts and surgical reports was recorded for 14 days before and after the introduction of the computerized system. This analysis was performed for both physicians and secretarial staff. The time delay between the discharge of the patient and the mailing of the discharge letter to the family doctor was also recorded. By conventional means, the average time for the physician to generate a patient chart (26 minutes, n = 119) was slightly lower than the time needed with the computer system (28 minutes, n = 177). However, for a discharge letter, the time needed by the physician was reduced by one third with the computer system and by more than one half for the secretarial staff (32 and 66 minutes conventionally; 22 and 24 minutes respectively with the computer system; p < 0.0001). The time required for the generation of surgical reports was reduced from 17 to 13 minutes per patient and the processing time by secretaries from 37 to 14 minutes. The time delay between the discharge of the patient and the mailing of the discharge letter was reduced by 50% from 7.6 to 3.9 days.(ABSTRACT TRUNCATED AT 250 WORDS)

  19. Turbulent transport measurements with a laser Doppler velocimeter.

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.

    1972-01-01

    The power spectrum of phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters the time-average spectrum contains information only for times shorter than the Lagrangian-integral time scale of the turbulence. To examine the long-time behavior, one must use either extremely small scattering angles, much-longer-wavelength radiation, or a different mode of signal analysis, e.g., FM detection.

  20. Computer-enhanced robotic telesurgery minimizes esophageal perforation during Heller myotomy.

    PubMed

    Melvin, W Scott; Dundon, John M; Talamini, Mark; Horgan, Santiago

    2005-10-01

    Laparoscopic Heller myotomy has emerged as the treatment of choice for achalasia. However, intraoperative esophageal perforation remains a significant complication. Computer-enhanced operative techniques have the potential to improve outcomes for certain operative procedures. Robotic, computer-enhanced laparoscopic telemanipulators using 3-dimensional magnified imaging and motion scaling are uniquely designed to facilitate certain operations requiring fine-tissue manipulation. We hypothesized that computer-enhanced robotic Heller myotomy would reduce intraoperative complications compared with laparoscopic techniques. All patients undergoing an operation for achalasia at 3 institutions with a robotic surgery system (DaVinci; Intuitive Surgical Corporation, Sunnyvale, Calif) were followed-up prospectively. Demographics, perioperative course, complications, and hospital stay were recorded. Follow-up evaluation was obtained via a standardized symptom survey, office visits, and medical records. Data were compared with preoperative symptoms using a Mann-Whitney U test, and operating times were compared using the ANOVA test. Between August 2000 and August 2004, 104 patients underwent a robotic Heller myotomy with partial fundoplication. There were 53 women and 51 men. All patients were symptomatic. The operative time was 140.55 minutes overall, but improved from 162.63 minutes to 113.50 minutes from 2000-2002 to 2003-2004 (P = .0001). There were no esophageal perforations. There were 8 minor complications and 1 patient required conversion to an open operation. Sixty-six (62.3%) patients were discharged on the first postoperative day, and the average hospital stay was 1.5 days. A symptom survey was completed by 79 of 104 patients (76%) at follow-up evaluation. Symptoms improved in all patients, with an average follow-up symptom score of 0.48 compared with 5.0 before the operation (P = .0001). Forty-three of the 79 patients from whom follow-up data were collected had a minimum follow-up period of 1 year. The follow-up period averaged 16 months. No patients required reoperation. Computer-enhanced robotic laparoscopic techniques provide a clear advantage over standard laparoscopy for the operative treatment of achalasia. We have shown in this large series that Heller myotomy can be completed using this technology without esophageal perforation. The application of computer-enhanced operative techniques appears to provide superior outcomes in selected procedures.

  1. The Computer as a Teaching Aid for Eleventh Grade Mathematics: A Comparison Study.

    ERIC Educational Resources Information Center

    Kieren, Thomas Ervin

    To determine the effect of learning computer programming and the use of a computer on mathematical achievement of eleventh grade students, for each of two years, average and above average students were randomly assigned to an experimental and control group. The experimental group wrote computer programs and used the output from the computer in…

  2. Computer usage and task-switching during resident's working day: Disruptive or not?

    PubMed

    Méan, Marie; Garnier, Antoine; Wenger, Nathalie; Castioni, Julien; Waeber, Gérard; Marques-Vidal, Pedro

    2017-01-01

    Recent implementation of electronic health records (EHR) has dramatically changed medical ward organization. While residents in general internal medicine use EHR systems for half of their working time, whether computer usage impacts residents' workflow remains uncertain. We aimed to observe the frequency of task-switches occurring during residents' work and to assess whether computer usage was associated with task-switching. In a large Swiss academic university hospital, we conducted, between May 26 and July 24, 2015, a time-motion study to assess how residents in general internal medicine organize their working day. We observed 49 day and 17 evening shifts of 36 residents, amounting to 697 working hours. During day shifts, residents spent 5.4 hours using a computer (mean total working time: 11.6 hours per day). On average, residents switched 15 times per hour from one task to another. Task-switching peaked between 8:00-9:00 and 16:00-17:00. Task-switching was not associated with residents' characteristics, and no association was found between task-switching and extra hours (Spearman r = 0.220, p = 0.137 for day and r = 0.483, p = 0.058 for evening shifts). Computer usage occurred more frequently at the beginning or end of day shifts and was associated with decreased overall task-switching. Task-switching occurs very frequently during a resident's working day. Despite the fact that residents used a computer for half of their working time, computer usage was associated with decreased task-switching. Whether frequent task-switches and computer usage impact the quality of patient care and residents' work must be evaluated in further studies.

  3. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

    EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates the emissions in most cases, because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. The method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.
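
    The pairing of dynamic time warping with hierarchical clustering lends itself to a compact sketch. The snippet below is a generic illustration under assumed inputs (synthetic second-by-second speed profiles); it is not the authors' EPA-MOVES pipeline, and the function names and the medoid-based representative selection are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between
    two second-by-second speed profiles (1-D numpy arrays)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def cluster_speed_profiles(profiles, n_clusters=4):
    """Hierarchically cluster speed profiles by DTW similarity and return
    cluster labels plus one representative (medoid) profile per cluster,
    which could then serve as a link driving schedule."""
    k = len(profiles)
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            dist[i, j] = dist[j, i] = dtw_distance(profiles[i], profiles[j])
    Z = linkage(squareform(dist), method="average")          # hierarchical step
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    reps = []
    for c in range(1, n_clusters + 1):
        members = np.where(labels == c)[0]
        # medoid = member with the smallest summed DTW distance to the others
        medoid = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
        reps.append(profiles[medoid])
    return labels, reps

# Hypothetical usage: 30 synthetic 120-s speed profiles (m/s), one per vehicle
rng = np.random.default_rng(0)
profiles = [np.clip(10 + np.cumsum(rng.normal(0, 0.5, 120)), 0, None) for _ in range(30)]
labels, reps = cluster_speed_profiles(profiles)
```

    Average linkage on the condensed DTW distance matrix mirrors the hierarchical step described above; a representative profile per cluster could then be supplied to MOVES as a link driving schedule.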

  4. A novel approach to estimate emissions from large transportation networks: Hierarchical clustering-based link-driving-schedules for EPA-MOVES using dynamic time warping measures

    DOE PAGES

    Aziz, H. M. Abdul; Ukkusuri, Satish V.

    2017-06-29

    EPA-MOVES (Motor Vehicle Emission Simulator) is often integrated with traffic simulators to assess emission levels of large-scale urban networks with signalized intersections. High variations in speed profiles exist in the context of congested urban networks with signalized intersections. The traditional average-speed-based emission estimation technique with EPA-MOVES provides faster execution but underestimates the emissions in most cases, because it ignores the speed variation in congested networks with signalized intersections. In contrast, the atomic second-by-second speed profile (i.e., the trajectory of each vehicle)-based technique provides accurate emissions at the cost of excessive computational power and time. We addressed this issue by developing a novel method to determine the link-driving-schedules (LDSs) for the EPA-MOVES tool. Our research developed a hierarchical clustering technique with dynamic time warping similarity measures (HC-DTW) to find the LDS for EPA-MOVES that is capable of producing emission estimates better than the average-speed-based technique with execution time faster than the atomic speed profile approach. We applied the HC-DTW on sample data from a signalized corridor and found that HC-DTW can significantly reduce computational time without compromising accuracy. The developed technique can substantially contribute to the EPA-MOVES-based emission estimation process for large-scale urban transportation networks by reducing the computational time with reasonably accurate estimates. The method is highly appropriate for transportation networks with higher variation in speed, such as signalized intersections. Lastly, experimental results show error differences ranging from 2% to 8% for most pollutants except PM10.

  5. Aeroacoustic Simulation of a Nose Landing Gear in an Open Jet Facility Using FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.; Khorrami, Mehdi R.; Carlson, Jan-Renee

    2012-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions compare favorably with the measured data. Unsteady flowfield data obtained from the FUN3D code are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the sound pressure levels at microphones placed in the farfield. Significant improvement in the predicted noise levels is obtained when the flowfield data from the open-jet UFAFF simulations are used, as compared to the case using flowfield data from the closed-wall BART configuration.

  6. Automated Quantification of Pneumothorax in CT

    PubMed Central

    Do, Synho; Salvaggio, Kristen; Gupta, Supriya; Kalra, Mannudeep; Ali, Nabeel U.; Pien, Homer

    2012-01-01

    An automated, computer-aided diagnosis (CAD) algorithm for the quantification of pneumothoraces from Multidetector Computed Tomography (MDCT) images has been developed. Algorithm performance was evaluated through comparison to manual segmentation by expert radiologists. A combination of two-dimensional and three-dimensional processing techniques was incorporated to reduce required processing time by two-thirds (as compared to similar techniques). Volumetric measurements on relative pneumothorax size were obtained and the overall performance of the automated method shows an average error of just below 1%. PMID:23082091

  7. A Big Data Approach to Analyzing Market Volatility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng; Bethel, E. Wes; Gu, Ming

    2013-06-05

    Understanding the microstructure of the financial market requires the processing of a vast amount of data related to individual trades, and sometimes even multiple levels of quotes. Analyzing such a large volume of data requires tremendous computing power that is not easily available to financial academics and regulators. Fortunately, publicly funded High Performance Computing (HPC) power is widely available at the National Laboratories in the US. In this paper we demonstrate that HPC resources and the techniques of data-intensive science can be used to greatly accelerate the computation of an early warning indicator called Volume-synchronized Probability of Informed Trading (VPIN). The test data used in this study contain five and a half years' worth of trading data for about 100 of the most liquid futures contracts, include about 3 billion trades, and take 140 GB as text files. By using (1) a more efficient file format for storing the trading records, (2) more effective data structures and algorithms, and (3) parallelizing the computations, we are able to explore 16,000 different ways of computing VPIN in less than 20 hours on a 32-core IBM DataPlex machine. Our test demonstrates that a modest computer is sufficient to monitor a vast number of trading activities in real-time – an ability that could be valuable to regulators. Our test results also confirm that VPIN is a strong predictor of liquidity-induced volatility. With appropriate parameter choices, the false positive rates are about 7% averaged over all the futures contracts in the test data set. More specifically, when VPIN values rise above a threshold (CDF > 0.99), the volatility in the subsequent time windows is higher than the average in 93% of the cases.

  8. Moving toward climate-informed agricultural decision support - can we use PRISM data for more than just monthly averages?

    USDA-ARS?s Scientific Manuscript database

    Decision support systems/models for agriculture are varied in target application and complexity, ranging from simple worksheets to near real-time forecast systems requiring significant computational and manpower resources. Until recently, most such decision support systems have been constructed with...

  9. Mid-1974 Population Estimates for Nonmetropolitan Communities in Arizona.

    ERIC Educational Resources Information Center

    Scott, Harold; Williams, Valerie C.

    Rural Arizona population estimates were determined for 67 communities by computing a ratio of 1970 population to a 1970 population indicator and then multiplying the resultant persons per indicator times the 1974 value of the specific indicator. The indicators employed were: average daily elementary school enrollment (Arizona Department of…
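
    In symbols, the described ratio-estimation step amounts to:

    $$ \hat{P}_{1974} = \frac{P_{1970}}{I_{1970}} \times I_{1974} $$

    where P is the community population and I the chosen indicator (for example, average daily school enrollment).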

  10. New York Bight Study. Report 1. Hydrodynamic Modeling

    DTIC Science & Technology

    1994-08-01

    function of time. Values of these parameters, averaged daily, were computed from meteorological data recorded at the John F. Kennedy (JFK) Airport for...Island Sound exchange coefficient values were obtained as before from meteorological data collected at the JFK Airport. They are shown in Figures 62-63

  11. Relation between Video Game Addiction and Interfamily Relationships on Primary School Students

    ERIC Educational Resources Information Center

    Zorbaz, Selen Demirtas; Ulas, Ozlem; Kizildag, Seval

    2015-01-01

    This study seeks to analyze whether or not the following three variables of "Discouraging Family Relations," "Supportive Family Relations," "Total Time Spent on the Computer," and "Grade Point Average (GPA)" predict elementary school students' video game addiction rates, and whether or not there exists a…

  12. Rectal temperature-based death time estimation in infants.

    PubMed

    Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato

    2016-03-01

    In determining the time of death in infants based on rectal temperature, the same methods used in adults are generally applied. However, whether the methods for adults are suitable for infants is unclear. In this study, we examined the following 3 methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. With Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were apparently inferior to the results of Ohno's method. The corrective factor was set within the range of 0.7-1.3 in Henssge's method, and a modified program was newly developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set as the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. There was a possibility that the influence of thermal isolation on the actual infants was stronger than that previously shown by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with a certain latitude, considering other circumstances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
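
    For orientation, the Marshall and Hoare double exponential cooling model underlying Henssge's method is usually written in the normalized form below; the constants quoted are the commonly used standard values, not figures taken from this study.

    $$ Q(t) = \frac{T_r(t) - T_a}{T_0 - T_a} = A\,e^{Bt} - (A - 1)\,e^{\frac{A}{A-1}Bt} $$

    Here A ≈ 1.25 under standard conditions, T_0 is the rectal temperature at death (about 37.2 °C), T_a the ambient temperature, and B a negative constant obtained from body weight and Henssge's corrective factor; the modified program described above effectively widens the admissible range of that corrective factor.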

  13. Determining GPS average performance metrics

    NASA Technical Reports Server (NTRS)

    Moore, G. V.

    1995-01-01

    Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible, relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions, and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to: predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location, and understand performance trends among various users.

  14. A Lagrangian dynamic subgrid-scale model of turbulence

    NASA Technical Reports Server (NTRS)

    Meneveau, C.; Lund, T. S.; Cabot, W.

    1994-01-01

    A new formulation of the dynamic subgrid-scale model is tested in which the error associated with the Germano identity is minimized over flow pathlines rather than over directions of statistical homogeneity. This procedure allows the application of the dynamic model with averaging to flows in complex geometries that do not possess homogeneous directions. The characteristic Lagrangian time scale over which the averaging is performed is chosen such that the model is purely dissipative, guaranteeing numerical stability when coupled with the Smagorinsky model. The formulation is tested successfully in forced and decaying isotropic turbulence and in fully developed and transitional channel flow. In homogeneous flows, the results are similar to those of the volume-averaged dynamic model, while in channel flow, the predictions are superior to those of the plane-averaged dynamic model. The relationship between the averaged terms in the model and vortical structures (worms) that appear in the LES is investigated. Computational overhead is kept small (about 10 percent above the CPU requirements of the volume or plane-averaged dynamic model) by using an approximate scheme to advance the Lagrangian tracking through first-order Euler time integration and linear interpolation in space.
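
    In the published form of this model, the Lagrangian averages are obtained from two relaxation-transport equations advanced along pathlines; the relations below are reproduced as a hedged summary rather than quoted from the report.

    $$ C_s^2 = \frac{\mathcal{I}_{LM}}{\mathcal{I}_{MM}}, \qquad \frac{\partial \mathcal{I}_{LM}}{\partial t} + \bar{u}\cdot\nabla \mathcal{I}_{LM} = \frac{1}{T}\bigl(L_{ij}M_{ij} - \mathcal{I}_{LM}\bigr), \qquad \frac{\partial \mathcal{I}_{MM}}{\partial t} + \bar{u}\cdot\nabla \mathcal{I}_{MM} = \frac{1}{T}\bigl(M_{ij}M_{ij} - \mathcal{I}_{MM}\bigr) $$

    The relaxation time is of the form T ∝ Δ (I_LM I_MM)^{-1/8}, which makes the model coefficient purely dissipative; the first-order Euler update and linear interpolation mentioned above are what keep the pathline tracking inexpensive.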

  15. Functional Connectivity Parcellation of the Human Thalamus by Independent Component Analysis.

    PubMed

    Zhang, Sheng; Li, Chiang-Shan R

    2017-11-01

    As a key structure to relay and integrate information, the thalamus supports multiple cognitive and affective functions through the connectivity between its subnuclei and cortical and subcortical regions. Although extant studies have largely described thalamic regional functions in anatomical terms, evidence is accumulating to suggest a more complex picture of subareal activities and connectivities of the thalamus. In this study, we aimed to parcellate the thalamus and examine the whole-brain connectivity of its functional clusters. With resting state functional magnetic resonance imaging data from 96 adults, we used independent component analysis (ICA) to parcellate the thalamus into 10 components. On the basis of the independence assumption, ICA helps to identify how subclusters overlap spatially. Whole-brain functional connectivity of each subdivision was computed from the independent component's time course (ICtc), a unique time series that represents the IC. For comparison, we computed seed-region-based functional connectivity using the averaged time course across all voxels within a thalamic subdivision. The results showed that, at p < 10^-6, corrected, 49% of voxels on average overlapped among subdivisions. Compared with seed-region analysis, ICtc analysis revealed patterns of connectivity that were more clearly distinguished between thalamic clusters. ICtc analysis demonstrated thalamic connectivity to the primary motor cortex, which has eluded the seed-region analysis as well as previous studies based on averaged time series, and clarified thalamic connectivity to the hippocampus, caudate nucleus, and precuneus. The new findings elucidate the functional organization of the thalamus and suggest that ICA clustering in combination with ICtc, rather than seed-region analysis, better distinguishes whole-brain connectivities among functional clusters of a brain region.
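
    A minimal sketch of the contrast between ICtc-based and seed-averaged connectivity is given below. It assumes hypothetical inputs (thal_ts and brain_ts as time-by-voxel arrays), uses scikit-learn's FastICA for a spatial ICA of the thalamic voxels, and is not the processing pipeline used in the study; the threshold for selecting a component's voxels is an illustrative choice.

```python
import numpy as np
from sklearn.decomposition import FastICA

def corr_with_brain(tc, brain_ts):
    """Pearson correlation of one time course with every brain voxel.
    tc: (n_timepoints,), brain_ts: (n_timepoints, n_brain_voxels)."""
    tc = (tc - tc.mean()) / tc.std()
    bz = (brain_ts - brain_ts.mean(axis=0)) / brain_ts.std(axis=0)
    return bz.T @ tc / len(tc)

def ica_parcellation_connectivity(thal_ts, brain_ts, n_components=10, z_thresh=2.0):
    """Spatial ICA of thalamic voxels, then whole-brain connectivity computed
    two ways: from each IC time course (ICtc) and from the averaged time
    course of that component's supra-threshold voxels (seed-region style)."""
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    spatial_maps = ica.fit_transform(thal_ts.T)   # (n_thal_voxels, n_components)
    ic_timecourses = ica.mixing_                  # (n_timepoints, n_components)
    ictc_maps, seed_maps = [], []
    for c in range(n_components):
        smap = spatial_maps[:, c]
        smap = (smap - smap.mean()) / smap.std()  # z-score the component map
        ictc_maps.append(corr_with_brain(ic_timecourses[:, c], brain_ts))
        # seed-region analogue: average the raw time series of voxels whose
        # component weight exceeds an (assumed) threshold
        seed = thal_ts[:, np.abs(smap) > z_thresh].mean(axis=1)
        seed_maps.append(corr_with_brain(seed, brain_ts))
    return np.array(ictc_maps), np.array(seed_maps)

# Hypothetical usage with synthetic data: 300 timepoints, 500 thalamic voxels,
# 2000 brain voxels
rng = np.random.default_rng(0)
thal_ts = rng.normal(size=(300, 500))
brain_ts = rng.normal(size=(300, 2000))
ictc_maps, seed_maps = ica_parcellation_connectivity(thal_ts, brain_ts)
```

    The two returned map sets correspond to the ICtc and seed-region analyses compared in the abstract.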

  16. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of direct great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.

  17. 12 CFR 563c.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... fiscal year is at least 10 percent lower than the average of the income for the last five fiscal years, such average income should be substituted for purposes of the computation. Any loss years should be omitted for purposes of computing average income. ...

  18. On the timing problem in optical PPM communications.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1971-01-01

    Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on specification of timing accuracy, and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of significant importance is shown to be the presence of a residual, or irreducible error probability, due entirely to the timing system, that cannot be overcome by the data channel.

  19. Evaluation of a modified Fitts law brain-computer interface target acquisition task in able and motor disabled individuals

    NASA Astrophysics Data System (ADS)

    Felton, E. A.; Radwin, R. G.; Wilson, J. A.; Williams, J. C.

    2009-10-01

    A brain-computer interface (BCI) is a communication system that takes recorded brain signals and translates them into real-time actions, in this case movement of a cursor on a computer screen. This work applied Fitts' law to the evaluation of performance on a target acquisition task during sensorimotor rhythm-based BCI training. Fitts' law, which has been used as a predictor of movement time in studies of human movement, was used here to determine the information transfer rate, which was based on target acquisition time and target difficulty. The information transfer rate was used to make comparisons between control modalities and subject groups on the same task. Data were analyzed from eight able-bodied and five motor disabled participants who wore an electrode cap that recorded and translated their electroencephalogram (EEG) signals into computer cursor movements. Direct comparisons were made between able-bodied and disabled subjects, and between EEG and joystick cursor control in able-bodied subjects. Fitts' law aptly described the relationship between movement time and index of difficulty for each task movement direction when evaluated separately and averaged together. This study showed that Fitts' law can be successfully applied to computer cursor movement controlled by neural signals.
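
    In the Shannon formulation commonly used for such evaluations (the paper's exact variant may differ), the index of difficulty, movement time and throughput are related by:

    $$ ID = \log_2\!\left(\frac{D}{W} + 1\right), \qquad MT = a + b\,ID, \qquad \text{throughput} \approx \frac{ID}{MT}\ \text{[bits/s]} $$

    Here D is the distance to the target, W its width, and a, b are fitted constants; averaging ID/MT over movement directions yields the information transfer rate used to compare EEG and joystick cursor control.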

  20. Hill Problem Analytical Theory to the Order Four. Application to the Computation of Frozen Orbits around Planetary Satellites

    NASA Technical Reports Server (NTRS)

    Lara, Martin; Palacian, Jesus F.

    2007-01-01

    Frozen orbits of the Hill problem are determined in the doubly averaged problem, where short- and long-period terms are removed by means of Lie transforms. The computation of initial conditions of the corresponding quasi-periodic solutions in the non-averaged problem is straightforward because the perturbation method used provides the explicit equations of the transformation that connects the averaged and non-averaged models. A fourth-order analytical theory proves necessary for the accurate computation of quasi-periodic frozen orbits.

  1. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, the average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods, with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
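
    The Dice overlap ratio quoted above has a one-line definition; the sketch below is a generic implementation for binary label volumes, not code from the paper.

```python
import numpy as np

def dice(seg, gt):
    """Dice overlap between a binary segmentation and its ground truth."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

# Hypothetical usage with two toy 3D masks that half-overlap
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros_like(a); b[1:3, 1:3, 2:4] = True
print(dice(a, b))  # 0.5 for this toy example
```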

  2. Two-dimensional Euler and Navier-Stokes time-accurate simulations of fan rotor flows

    NASA Technical Reports Server (NTRS)

    Boretti, A. A.

    1990-01-01

    Two numerical methods are presented which describe the unsteady flow field in the blade-to-blade plane of an axial fan rotor. These methods solve the compressible, time-dependent Euler and the compressible, turbulent, time-dependent Navier-Stokes conservation equations for mass, momentum, and energy. The Navier-Stokes equations are written in Favre-averaged form and are closed with an approximate two-equation turbulence model with low-Reynolds-number and compressibility effects included. The unsteady aerodynamic component is obtained by superposing inflow or outflow unsteadiness on the steady conditions through time-dependent boundary conditions. The integration in space is performed by using a finite volume scheme, and the integration in time is performed by using k-stage Runge-Kutta schemes, k = 2,5. The numerical integration algorithm reduces the computational cost of an unsteady simulation involving high-frequency disturbances in both CPU time and memory requirements. Less than 200 sec of CPU time are required to advance the Euler equations on a computational grid of about 2000 cells through 10,000 time steps on a CRAY Y-MP computer, with a required memory of less than 0.3 megawords.

  3. Uncertainty propagation by using spectral methods: A practical application to a two-dimensional turbulence fluid model

    NASA Astrophysics Data System (ADS)

    Riva, Fabio; Milanese, Lucio; Ricci, Paolo

    2017-10-01

    To reduce the computational cost of the uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple to apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
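
    Schematically (with notation chosen here, not taken from the paper), the solution is expanded in tensor products of Chebyshev polynomials over space, time and the uncertain parameter, all mapped to [-1, 1]:

    $$ u(x, t; \lambda) \;\approx\; \sum_{i=0}^{N_x}\sum_{j=0}^{N_t}\sum_{k=0}^{N_\lambda} a_{ijk}\, T_i(\hat{x})\, T_j(\hat{t})\, T_k(\hat{\lambda}) $$

    Substituting this ansatz into the model equations and requiring the weighted residual to vanish against the same basis yields the nonlinear algebraic system for the coefficients a_{ijk}; because the parameter dependence is then explicit, statistics of the output quantities follow from a single reduced-cost solve rather than from repeated simulations.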

  4. Computational aerodynamics development and outlook /Dryden Lecture in Research for 1979/

    NASA Technical Reports Server (NTRS)

    Chapman, D. R.

    1979-01-01

    Some past developments and current examples of computational aerodynamics are briefly reviewed. An assessment is made of the requirements on future computer memory and speed imposed by advanced numerical simulations, giving emphasis to the Reynolds averaged Navier-Stokes equations and to turbulent eddy simulations. Experimental scales of turbulence structure are used to determine the mesh spacings required to adequately resolve turbulent energy and shear. Assessment also is made of the changing market environment for developing future large computers, and of the projections of micro-electronics memory and logic technology that affect future computer capability. From the two assessments, estimates are formed of the future time scale in which various advanced types of aerodynamic flow simulations could become feasible. Areas of research judged especially relevant to future developments are noted.

  5. Rubidium frequency standard test program for NAVSTAR GPS

    NASA Technical Reports Server (NTRS)

    Koide, F.; Dederich, D. J.

    1978-01-01

    Test data from the RFS program in the production phase, together with the computer automation used, are presented as an essential element in the evaluation of RFS performance in a simulated spacecraft environment. Typical production test data are discussed for stabilities at averaging times from 1 to 100,000 seconds and for a simulated time-error accumulation test. Design considerations in developing the RFS test systems for the production acceptance test are also discussed.

  6. Simulating Nonequilibrium Radiation via Orthogonal Polynomial Refinement

    DTIC Science & Technology

    2015-01-07

    measured by the preprocessing time, computer memory space, and average query time. In many search procedures for the number of points np of a data set, a...analytic expression for the radiative flux density is possible by the commonly accepted local thermal equilibrium (LTE) approximation. A semi...Vol. 227, pp. 9463-9476, 2008. 10. Galvez, M., Ray-Tracing model for radiation transport in three-dimensional LTE system, App. Physics, Vol. 38

  7. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation

    PubMed Central

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, over 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints. PMID:27579033
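
    The core idea, correlating each trial with a template over a range of time shifts and feeding the resulting series to a classifier, can be sketched as follows. The template, sampling rate and shift range are illustrative assumptions, not the parameters of the published system.

```python
import numpy as np

def time_shift_correlations(epoch, template, max_shift=25):
    """Correlate one EEG epoch with a P300 template at every shift in
    [-max_shift, +max_shift] samples; the correlation series is then
    used as the input feature vector of a classifier (an ANN in the paper)."""
    feats = []
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(template, s)            # circular shift, fine for a sketch
        feats.append(np.corrcoef(epoch, shifted)[0, 1])
    return np.array(feats)

# Hypothetical usage: 1-s epochs at an assumed 250 Hz sampling rate
fs = 250
t = np.arange(fs) / fs
template = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # idealized P300 peak near 300 ms
rng = np.random.default_rng(1)
epoch = np.roll(template, 8) + 0.3 * rng.normal(size=fs)  # jittered, noisy single trial
features = time_shift_correlations(epoch, template)
```

    In the published system the shift-correlation series serve as the ANN input nodes, and the four LED stimuli form the output classes.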

  8. Time-Shift Correlation Algorithm for P300 Event Related Potential Brain-Computer Interface Implementation.

    PubMed

    Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien

    2016-01-01

    A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were collected as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the humanoid robot. When the humanoid robot walked in a spacious area, the FM was used to control the robot with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, over 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. In addition, the accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints.

  9. Distribution of tunnelling times for quantum electron transport.

    PubMed

    Rudge, Samuel L; Kosov, Daniel S

    2016-03-28

    In electron transport, the tunnelling time is the time taken for an electron to tunnel out of a system after it has tunnelled in. We define the tunnelling time distribution for quantum processes in a dissipative environment and develop a practical approach for calculating it, where the environment is described by the general Markovian master equation. We illustrate the theory by using the rate equation to compute the tunnelling time distribution for electron transport through a molecular junction. The tunnelling time distribution is exponential, which indicates that Markovian quantum tunnelling is a Poissonian statistical process. The tunnelling time distribution is used not only to study the quantum statistics of tunnelling along the average electric current but also to analyse extreme quantum events where an electron jumps against the applied voltage bias. The average tunnelling time shows distinctly different temperature dependence for p- and n-type molecular junctions and therefore provides a sensitive tool to probe the alignment of molecular orbitals relative to the electrode Fermi energy.
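
    Consistent with the exponential distribution reported above, a Poissonian tunnelling process with rate Γ has the waiting-time density and mean

    $$ w(\tau) = \Gamma\, e^{-\Gamma \tau}, \qquad \langle \tau \rangle = \frac{1}{\Gamma} $$

    so the average tunnelling time is simply the inverse of the tunnelling rate set by the junction and its environment.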

  10. Online adaptation of a c-VEP Brain-Computer Interface (BCI) based on error-related potentials and unsupervised learning.

    PubMed

    Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin

    2012-01-01

    The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.

  11. THE EFFECTS OF MAINTENANCE ACTIONS ON THE PFDavg OF SPRING OPERATED PRESSURE RELIEF VALVES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.; Gross, R.

    2014-04-01

    The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.

  12. The Effects of Maintenance Actions on the PFDavg of Spring Operated Pressure Relief Valves

    DOE PAGES

    Harris, S.; Gross, R.; Goble, W; ...

    2015-12-01

    The safety integrity level (SIL) of equipment used in safety instrumented functions is determined by the average probability of failure on demand (PFDavg) computed at the time of periodic inspection and maintenance, i.e., the time of proof testing. The computation of PFDavg is generally based solely on predictions or estimates of the assumed constant failure rate of the equipment. However, PFDavg is also affected by maintenance actions (or lack thereof) taken by the end user. This paper shows how maintenance actions can affect the PFDavg of spring operated pressure relief valves (SOPRV) and how these maintenance actions may be accounted for in the computation of the PFDavg metric. The method provides a means for quantifying the effects of changes in maintenance practices and shows how these changes impact plant safety.

  13. Warp-averaging event-related potentials.

    PubMed

    Wang, K; Begleiter, H; Porjesz, B

    2001-10-01

    To align the repeated single trials of the event-related potential (ERP) in order to obtain an improved estimate of the ERP. A new implementation of dynamic time warping is applied to compute a warp-average of the single trials. The trilinear modeling method is applied to filter the single trials prior to alignment. Alignment is based on normalized signals and their estimated derivatives. These features reduce the misalignment caused by aligning random alpha waves, by explaining amplitude differences as latency differences, or by the seemingly small amplitudes of some components. Simulations and applications to visually evoked potentials show significant improvement over some commonly used methods. The new implementation of dynamic time warping can be used to align the major components (P1, N1, P2, N2, P3) of the repeated single trials. The average of the aligned single trials is an improved estimate of the ERP. This could lead to more accurate results in subsequent analysis.
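
    The dynamic time warping underlying the alignment is the standard dynamic-programming recurrence, stated here generically rather than in the paper's notation:

    $$ D(i, j) = d(x_i, y_j) + \min\bigl\{ D(i-1, j),\; D(i, j-1),\; D(i-1, j-1) \bigr\}, \qquad D(0, 0) = 0 $$

    Backtracking through D yields the warping path that maps the latencies of one trial onto another; averaging the warped trials then gives the improved ERP estimate described above.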

  14. Program for narrow-band analysis of aircraft flyover noise using ensemble averaging techniques

    NASA Technical Reports Server (NTRS)

    Gridley, D.

    1982-01-01

    A package of computer programs was developed for analyzing acoustic data from an aircraft flyover. The package assumes the aircraft is flying at constant altitude and constant velocity in a fixed attitude over a linear array of ground microphones. Aircraft position is provided by radar and an option exists for including the effects of the aircraft's rigid-body attitude relative to the flight path. Time synchronization between radar and acoustic recording stations permits ensemble averaging techniques to be applied to the acoustic data thereby increasing the statistical accuracy of the acoustic results. Measured layered meteorological data obtained during the flyovers are used to compute propagation effects through the atmosphere. Final results are narrow-band spectra and directivities corrected for the flight environment to an equivalent static condition at a specified radius.

  15. Property Grids for the Kansas High Plains Aquifer from Water Well Drillers' Logs

    NASA Astrophysics Data System (ADS)

    Bohling, G.; Adkins-Heljeson, D.; Wilson, B. B.

    2017-12-01

    Like a number of state and provincial geological agencies, the Kansas Geological Survey hosts a database of water well drillers' logs, containing the records of sediments and lithologies characterized during drilling. At the moment, the KGS database contains records associated with over 90,000 wells statewide. Over 60,000 of these wells are within the High Plains aquifer (HPA) in Kansas, with the corresponding logs containing descriptions of over 500,000 individual depth intervals. We will present grids of hydrogeological properties for the Kansas HPA developed from this extensive, but highly qualitative, data resource. The process of converting the logs into quantitative form consists of first translating the vast number of unique (and often idiosyncratic) sediment descriptions into a fairly comprehensive set of standardized lithology codes and then mapping the standardized lithologies into a smaller number of property categories. A grid is superimposed on the region and the proportion of each property category is computed within each grid cell, with category proportions in empty grid cells computed by interpolation. Grids of properties such as hydraulic conductivity and specific yield are then computed based on the category proportion grids and category-specific property values. A two-dimensional grid is employed for this large-scale, regional application, with category proportions averaged between two surfaces, such as bedrock and the water table at a particular time (to estimate transmissivity at that time) or water tables at two different times (to estimate specific yield over the intervening time period). We have employed a sequence of water tables for different years, based on annual measurements from an extensive network of wells, providing an assessment of temporal variations in the vertically averaged aquifer properties resulting from water level variations (primarily declines) over time.
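
    A minimal sketch of the cell-proportion step is given below; the table schema (well coordinates, interval tops and bottoms, a standardized category column) and the cell size are hypothetical, and the interpolation of empty cells and the mapping to hydraulic properties are omitted.

```python
import numpy as np
import pandas as pd

# Hypothetical log table: one row per described depth interval, with columns
# well_id, x, y, top, bottom, category (a standardized lithology-derived class)
def cell_category_proportions(logs: pd.DataFrame, cell_size=1000.0):
    """Thickness-weighted proportion of each property category per map cell."""
    logs = logs.assign(
        thickness=logs["bottom"] - logs["top"],
        ix=np.floor(logs["x"] / cell_size).astype(int),
        iy=np.floor(logs["y"] / cell_size).astype(int),
    )
    thick = logs.groupby(["ix", "iy", "category"])["thickness"].sum()
    return thick / thick.groupby(level=["ix", "iy"]).transform("sum")
```

    Dividing each cell's described thickness per category by the total described thickness in that cell gives the proportions that are subsequently combined with category-specific property values such as hydraulic conductivity or specific yield.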

  16. Feasibility of school-based computer-assisted robotic gaming technology for upper limb rehabilitation of children with cerebral palsy.

    PubMed

    Preston, Nick; Weightman, Andrew; Gallagher, Justin; Holt, Raymond; Clarke, Michael; Mon-Williams, Mark; Levesley, Martin; Bhakta, Bipinchandra

    2016-01-01

    We investigated the feasibility of using computer-assisted arm rehabilitation (CAAR) computer games in schools. Outcomes were children's preference for single-player or dual-player mode, and changes in arm activity and kinematics. Nine boys and two girls with cerebral palsy (6-12 years, mean 9 years) played assistive technology computer games in single-user mode or with school friends in an AB-BA design. Preference was determined by recording the time spent playing each mode and by qualitative feedback. We used the ABILHAND-kids and the Canadian Occupational Performance Measure to evaluate activity limitation, and a portable laptop-based device to capture arm kinematics. No difference was recorded between single-user and dual-user modes (median daily use 9.27 versus 11.2 min, p = 0.214). Children reported that dual-user mode was preferable. There were no changes in activity limitation (ABILHAND-kids, p = 0.424; COPM, p = 0.484), but we found significant improvements in hand speed (p = 0.028), smoothness (p = 0.005) and accuracy (p = 0.007). School timetables prohibit extensive use of rehabilitation technology, but there is potential for its short-term use to supplement a rehabilitation program. The restricted access to the rehabilitation games was sufficient to improve arm kinematics but not arm activity. Implications for Rehabilitation: School premises and teaching staff present no obstacles to the installation of rehabilitation gaming technology. Twelve minutes per day is the average amount of time that the school timetable permits children to use rehabilitation gaming equipment (without disruption to academic attendance). The use of rehabilitation gaming technology for an average of 12 minutes daily does not appear to benefit children's functional performance, but there are improvements in the kinematics of children's upper limb.

  17. Computer-assisted surgery of the paranasal sinuses: technical and clinical experience with 368 patients, using the Vector Vision Compact system.

    PubMed

    Stelter, K; Andratschke, M; Leunig, A; Hagedorn, H

    2006-12-01

    This paper presents our experience with a navigation system for functional endoscopic sinus surgery. In this study, we took particular note of the surgical indications and risks and the measurement precision and preparation time required, and we present one brief case report as an example. Between 2000 and 2004, we performed functional endoscopic sinus surgery on 368 patients at the Ludwig Maximilians University, Munich, Germany. We used the Vector Vision Compact system (BrainLAB) with laser registration. The indications for surgery ranged from severe nasal polyps and chronic sinusitis to malignant tumours of the paranasal sinuses and skull base. The time needed for data preparation was less than five minutes. The time required for preparation and patient registration depended on the method used and the experience of the user; in the later cases, it took 11 minutes on average, using Z-Touch registration. The clinical plausibility test produced an average deviation of 1.3 mm. The complications of system use comprised intra-operative re-registration (18 per cent) and complete failure (5 per cent). Despite the assistance of an accurately working computer, the anterior ethmoidal artery was incised in one case. However, in all 368 cases, we experienced no cerebrospinal fluid leaks, optic nerve lesions, retrobulbar haematomas or intracerebral bleeding. There were no deaths. From our experience with computer-guided surgical procedures, we conclude that computer-guided navigational systems are so accurate that the risk of misleading the surgeon is minimal. In the future, their use in certain specialized procedures will be not only sensible but mandatory. We recommend their use not only in difficult surgical situations but also in routine procedures and for surgical training.

  18. Cost analysis for computer supported multiple-choice paper examinations

    PubMed Central

    Mandel, Alexander; Hörnlein, Alexander; Ifland, Marianus; Lüneburg, Edeltraud; Deckert, Jürgen; Puppe, Frank

    2011-01-01

    Introduction: Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support, or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, the quality, promptness and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. Methods: For the past three semesters, a computer program for the input and formatting of MC questions in medical and other paper-based examinations has been in use and under continuous improvement at Wuerzburg University. Eleven medical examinations in the winter semester (WS) 2009/10, twelve in the summer semester (SS) 2010 and thirteen in WS 2010/11 were conducted with the program and evaluated automatically. For the last two semesters the remaining manual workload was recorded. Results: For an average examination with about 140 participants and about 35 questions, the effort for formatting and the subsequent analysis (including adjustments to the analysis) was 5-7 hours for exams without complications in WS 2009/10, about 2 hours in SS 2010 and about 1.5 hours in WS 2010/11. Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours in WS 2010/11. Discussion: For conventional multiple-choice exams, computer-supported formatting and evaluation of paper-based exams offers a significant time saving for lecturers compared with manual correction of paper-based exams; compared with purely electronically conducted exams, it requires a much simpler technical infrastructure and fewer staff during the exam. PMID:22205913

  19. Cost analysis for computer supported multiple-choice paper examinations.

    PubMed

    Mandel, Alexander; Hörnlein, Alexander; Ifland, Marianus; Lüneburg, Edeltraud; Deckert, Jürgen; Puppe, Frank

    2011-01-01

    Multiple-choice examinations are still fundamental for assessment in medical degree programs. In addition to content-related research, the optimization of the technical procedure is an important question. Medical examiners face three options: paper-based examinations with or without computer support, or completely electronic examinations. Critical aspects are the effort for formatting, the logistic effort during the actual examination, the quality, promptness and effort of the correction, the time for making the documents available for inspection by the students, and the statistical analysis of the examination results. For the past three semesters, a computer program for the input and formatting of MC questions in medical and other paper-based examinations has been in use and under continuous improvement at Wuerzburg University. Eleven medical examinations in the winter semester (WS) 2009/10, twelve in the summer semester (SS) 2010 and thirteen in WS 2010/11 were conducted with the program and evaluated automatically. For the last two semesters the remaining manual workload was recorded. For an average examination with about 140 participants and about 35 questions, the effort for formatting and the subsequent analysis (including adjustments to the analysis) was 5-7 hours for exams without complications in WS 2009/10, about 2 hours in SS 2010 and about 1.5 hours in WS 2010/11. Including exams with complications, the average time was about 3 hours per exam in SS 2010 and 2.67 hours in WS 2010/11. For conventional multiple-choice exams, computer-supported formatting and evaluation of paper-based exams offers a significant time saving for lecturers compared with manual correction of paper-based exams; compared with purely electronically conducted exams, it requires a much simpler technical infrastructure and fewer staff during the exam.

  20. HiFiVS Modeling of Flow Diverter Deployment Enables Hemodynamic Characterization of Complex Intracranial Aneurysm Cases

    PubMed Central

    Xiang, Jianping; Damiano, Robert J.; Lin, Ning; Snyder, Kenneth V.; Siddiqui, Adnan H.; Levy, Elad I.; Meng, Hui

    2016-01-01

    Object Flow diversion via the Pipeline Embolization Device (PED) represents the most recent advancement in endovascular therapy of intracranial aneurysms. This exploratory study aims at a proof of concept for an advanced device-modeling tool in conjunction with computational fluid dynamics (CFD) to evaluate flow modification effects by PED in real treatment cases. Methods We performed computational modeling of three PED-treated complex aneurysm cases. Case I had a fusiform vertebral aneurysm treated with a single PED. Case II had a giant internal carotid artery (ICA) aneurysm treated with 2 PEDs. Case III consisted of two tandem ICA aneurysms (a and b) treated by a single PED. Our recently developed high-fidelity virtual stenting (HiFiVS) technique was used to recapitulate the clinical deployment process of PEDs in silico for these three cases. Pre- and post-treatment aneurysmal hemodynamics were analyzed using CFD simulation. Changes in aneurysmal flow velocity, inflow rate, and wall shear stress (WSS) (quantifying flow reduction) and turnover time (quantifying stasis) were calculated and compared with clinical outcome. Results In Case I (occluded within the first 3 months), the aneurysm experienced the most drastic aneurysmal flow reduction after PED placement: the aneurysmal average velocity, inflow rate and average WSS were decreased by 76.3%, 82.5% and 74.0%, respectively, while the turnover time was increased to 572.1% of its pre-treatment value. In Case II (occluded at 6 months), aneurysmal average velocity, inflow rate and average WSS were decreased by 39.4%, 38.6%, and 59.1%, respectively, and turnover time increased to 163.0%. In Case III, Aneurysm III-a (occluded at 6 months) experienced decreases of 38.0%, 28.4%, and 50.9% in aneurysmal average velocity, inflow rate and average WSS, respectively, and an increase to 139.6% in turnover time, quite similar to Case II. Surprisingly, the adjacent Aneurysm III-b experienced more substantial flow reduction (decreases of 77.7%, 53.0%, and 84.4% in average velocity, inflow rate and average WSS, respectively, and an increase to 213.0% in turnover time) than Aneurysm III-a, which qualitatively agreed with angiographic observation at 3-month follow-up. However, Aneurysm III-b remained patent at both 6 months and 9 months. A closer examination of the vascular anatomy of Case III revealed blood draining to the ophthalmic artery off Aneurysm III-b, which may have prevented its complete thrombosis. Conclusion This proof-of-concept study demonstrates that HiFiVS modeling of flow diverter deployment enables detailed characterization of hemodynamic alteration by PED placement. Post-treatment aneurysmal flow reduction may be correlated with aneurysm occlusion outcome. However, predicting aneurysm treatment outcome by flow diverters also requires consideration of other factors, including vascular anatomy. PMID:26090829
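    The flow-reduction percentages reported above are simple paired comparisons of pre- and post-treatment CFD quantities. The sketch below reproduces that bookkeeping with made-up numbers (not the study's data); turnover time is expressed as a percentage of its pre-treatment value, as in the abstract.

      def percent_change(pre, post):
          """Signed percentage change of a post-treatment quantity relative to pre-treatment."""
          return 100.0 * (post - pre) / pre

      # Hypothetical aneurysmal averages before/after flow-diverter placement.
      pre  = {"velocity": 0.21, "inflow_rate": 1.8, "wss": 3.4}
      post = {"velocity": 0.05, "inflow_rate": 0.3, "wss": 0.9}
      for name in pre:
          print(f"{name}: {percent_change(pre[name], post[name]):+.1f}%")

      # Turnover time (stasis) is reported as a ratio of its pre-treatment value:
      print("turnover time, % of pre-treatment:", 100.0 * 2.6 / 0.45)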

  1. Time diary and questionnaire assessment of factors associated with academic and personal success among university undergraduates.

    PubMed

    George, Darren; Dixon, Sinikka; Stansal, Emory; Gelb, Shannon Lund; Pheri, Tabitha

    2008-01-01

    A sample of 231 students attending a private liberal arts university in central Alberta, Canada, completed a 5-day time diary and a 71-item questionnaire assessing the influence of personal, cognitive, and attitudinal factors on success. The authors used 3 success measures: cumulative grade point average (GPA), Personal Success--each participant's rating of congruence between stated goals and progress toward those goals--and Total Success--a measure that weighted GPA and Personal Success equally. The greatest predictors of GPA were time-management skills, intelligence, time spent studying, computer ownership, less time spent in passive leisure, and a healthy diet. Predictors of Personal Success scores were clearly defined goals, overall health, personal spirituality, and time-management skills. Predictors of Total Success scores were clearly defined goals, time-management skills, less time spent in passive leisure, healthy diet, waking up early, computer ownership, and less time spent sleeping. Results suggest alternatives to traditional predictors of academic success.

  2. An algorithm for the Italian atomic time scale

    NASA Technical Reports Server (NTRS)

    Cordara, F.; Vizio, G.; Tavella, P.; Pettiti, V.

    1994-01-01

    During the past twenty years, the time scale at the IEN has been realized by a commercial cesium clock, selected from an ensemble of five, whose rate has been continuously steered towards UTC to maintain a long term agreement within 3 x 10(exp -13). A time scale algorithm, suitable for a small clock ensemble and capable of improving the medium and long term stability of the IEN time scale, has recently been designed, with care taken to reduce the effects of seasonal variations and sudden frequency anomalies in the individual cesium clocks. The new time scale, TA(IEN), is obtained as a weighted average of the clock ensemble, computed once a day from the time comparisons between the local reference UTC(IEN) and the individual clocks. It is planned to also include in the computation ten cesium clocks maintained in other Italian laboratories, to further improve its reliability and long term stability. To implement this algorithm, a personal computer program in Quick Basic has been prepared and tested at the IEN time and frequency laboratory. Results obtained by applying this algorithm to real clock data covering a period of about two years are presented.
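    At its core, a time scale of this kind is a weighted average of clock readings taken relative to the local reference. The sketch below shows one daily computation step under simplifying assumptions (it is not the IEN algorithm, which also handles steering, weight updates and anomaly rejection); x_i is the measured difference UTC(IEN) - clock_i and w_i is a stability-based weight.

      def ensemble_time(offsets, weights):
          """One daily update of a free-running ensemble time scale.
          offsets: time differences UTC(local) - clock_i, in nanoseconds.
          weights: non-negative stability weights (not necessarily normalized)."""
          total_w = sum(weights)
          return sum(w * x for w, x in zip(weights, offsets)) / total_w

      # Hypothetical readings from a five-clock cesium ensemble (ns).
      offsets = [12.3, -4.1, 8.7, -0.9, 3.2]
      weights = [1.0, 2.5, 1.8, 3.0, 0.7]  # e.g. inverse Allan variance over recent months
      print("ensemble minus local reference:", ensemble_time(offsets, weights), "ns")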

  3. Optimal control of lift/drag ratios on a rotating cylinder

    NASA Technical Reports Server (NTRS)

    Ou, Yuh-Roung; Burns, John A.

    1992-01-01

    We present the numerical solution to a problem of maximizing the lift to drag ratio by rotating a circular cylinder in a two-dimensional viscous incompressible flow. This problem is viewed as a test case for the newly developing theoretical and computational methods for control of fluid dynamic systems. We show that the time averaged lift to drag ratio for a fixed finite-time interval achieves its maximum value at an optimal rotation rate that depends on the time interval.

  4. A portable data-logging system for industrial hygiene personal chlorine monitoring.

    PubMed

    Langhorst, M L; Illes, S P

    1986-02-01

    The combination of suitable portable sensors or instruments with small microprocessor-based data-logger units has made it possible to obtain detailed monitoring data for many health and environmental applications. Following data acquisition in field use, the logged data may be transferred to a desk-top personal computer for complete flexibility in manipulation of data and formatting of results. A system has been assembled from commercial components and demonstrated for chlorine personal monitoring applications. The system consists of personal chlorine sensors, a Metrosonics data-logger and reader unit, and an Apple II Plus personal computer. The computer software was developed to handle sensor calibration, data evaluation and reduction, report formatting and long-term storage of raw data on a disk. This system makes it possible to generate time-concentration profiles, evaluate dose above a threshold, quantitate short-term excursions and summarize time-weighted average (TWA) results. Field data from plant trials demonstrated feasibility of use, ruggedness and reliability. No significant differences were found between the time-weighted average chlorine concentrations determined by the sensor/logger system and two other methods: the sulfamic acid bubbler reference method and the 3M Poroplastic diffusional dosimeter. The sensor/data-logger system, however, provided far more information than the other two methods in terms of peak excursions, TWAs and exposure doses. For industrial hygiene applications, the system allows better definition of employee exposures, particularly for chemicals with acute as well as chronic health effects.(ABSTRACT TRUNCATED AT 250 WORDS)
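    The time-weighted average that such a logger reports is straightforward to reproduce: each logged concentration is weighted by the duration of its sampling interval and the dose is normalized to the 8-hour shift. The sketch below uses hypothetical chlorine readings, not plant data, and assumes the log spans the full shift.

      def twa_8hr(samples):
          """samples: (concentration_ppm, duration_min) pairs covering an 8-hour shift.
          Returns the 8-hour time-weighted average concentration in ppm."""
          dose = sum(conc * minutes for conc, minutes in samples)  # ppm-minutes
          return dose / (8 * 60)

      # Hypothetical log: low background with one short excursion (480 min total).
      log = [(0.1, 300), (1.5, 15), (0.2, 165)]
      print("8-hr TWA:", round(twa_8hr(log), 3), "ppm")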

  5. Extending computer technology to hospice research: interactive pentablet measurement of symptoms by hospice cancer patients in their homes.

    PubMed

    Wilkie, Diana J; Kim, Young Ok; Suarez, Marie L; Dauw, Colleen M; Stapleton, Stephen J; Gorman, Geraldine; Storfjell, Judith; Zhao, Zhongsheng

    2009-07-01

    We aimed to determine the acceptability and feasibility of a pentablet-based software program, PAINReportIt-Plus, as a means for patients with cancer in home hospice to report their symptoms and differences in acceptability by demographic variables. Of the 131 participants (mean age = 59 +/- 13, 58% women, 48.1% African American), 44% had never used a computer, but all participants easily used the computerized tool and reported an average computer acceptability score of 10.3 +/- 1.8, indicating high acceptability. Participants required an average of 19.1 +/- 9.5 minutes to complete the pain section, 9.8 +/- 6.5 minutes for the medication section, and 4.8 +/- 2.3 minutes for the symptom section. The acceptability scores were not statistically different by demographic variables but time to complete the tool differed by racial/ethnic groups. Our findings demonstrate that terminally ill patients with cancer are willing and able to utilize computer pentablet technology to record and describe their pain and other symptoms. Visibility of pain and distress is the first step necessary for the hospice team to develop a care plan for improving control of noxious symptoms.

  6. A model for closing the inviscid form of the average-passage equation system

    NASA Technical Reports Server (NTRS)

    Adamczyk, J. J.; Mulac, R. A.; Celestina, M. L.

    1985-01-01

    A mathematical model is proposed for closing or mathematically completing the system of equations which describes the time-averaged flow field through the blade passages of multistage turbomachinery. These equations, referred to as the average-passage equation system, govern a conceptual model which has proven useful in turbomachinery aerodynamic design and analysis. The closure model is developed so as to ensure consistency between these equations and the axisymmetric through-flow equations. The closure model was incorporated into a computer code for use in simulating the flow field about a high speed counter-rotating propeller and a high speed fan stage. Results from these simulations are presented.

  7. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.

    2001-01-01

    A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.

  8. Evaluation of MOSTAS computer code for predicting dynamic loads in two-bladed wind turbines

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.

    1979-01-01

    Calculated dynamic blade loads are compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multiblade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The results obtained with this approximate analysis do not reproduce the dynamic blade load amplification observed at or close to resonance conditions. The results of the second version, which accounts for periodic coefficients while solving the equations by time history integration, compare well with the measured data.

  9. A satellite snow depth multi-year average derived from SSM/I for the high latitude regions

    USGS Publications Warehouse

    Biancamaria, S.; Mognard, N.M.; Boone, A.; Grippa, M.; Josberger, E.G.

    2008-01-01

    The hydrological cycle for high latitude regions is inherently linked with the seasonal snowpack. Thus, accurately monitoring the snow depth and the associated areal coverage are critical issues for monitoring the global climate system. Passive microwave satellite measurements provide an optimal means to monitor the snowpack over the arctic region. While the temporal evolution of snow extent can be observed globally from microwave radiometers, the determination of the corresponding snow depth is more difficult. A dynamic algorithm that accounts for the dependence of the microwave scattering on the snow grain size has been developed to estimate snow depth from Special Sensor Microwave/Imager (SSM/I) brightness temperatures and was validated over the U.S. Great Plains and Western Siberia. The purpose of this study is to assess the dynamic algorithm performance over the entire high latitude (land) region by computing a snow depth multi-year field for the time period 1987-1995. This multi-year average is compared to the Global Soil Wetness Project-Phase 2 (GSWP2) snow depth computed from several state-of-the-art land surface schemes and averaged over the same time period. The multi-year average obtained by the dynamic algorithm is in good agreement with the GSWP2 snow depth field (the correlation coefficient for January is 0.55). The static algorithm, which assumes a constant snow grain size in space and time, does not correlate with the GSWP2 snow depth field (the correlation coefficient with GSWP2 data for January is -0.03), but exhibits a very high anti-correlation with the NCEP average January air temperature field (correlation coefficient -0.77), the deepest satellite snowpack being located in the coldest regions, where the snow grain size may be significantly larger than the average value used in the static algorithm. The dynamic algorithm performs better over Eurasia (with a correlation coefficient with GSWP2 snow depth equal to 0.65) than over North America (where the correlation coefficient decreases to 0.29). © 2007 Elsevier Inc. All rights reserved.
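    The comparison statistic used above, a multi-year January mean at each grid point followed by a correlation between two gridded fields, can be sketched as follows. The arrays are synthetic stand-ins (random numbers shaped like years x grid points), not SSM/I or GSWP2 data.

      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical January snow depths (cm): 9 years (1987-1995) x 5000 grid points.
      retrieval = rng.gamma(2.0, 20.0, size=(9, 5000))
      model = retrieval + rng.normal(0.0, 15.0, size=retrieval.shape)

      retrieval_mean = retrieval.mean(axis=0)  # multi-year average per grid point
      model_mean = model.mean(axis=0)
      r = np.corrcoef(retrieval_mean, model_mean)[0, 1]
      print("spatial correlation of multi-year January means:", round(r, 2))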

  10. Method and apparatus for converting static in-ground vehicle scales into weigh-in-motion systems

    DOEpatents

    Muhs, Jeffrey D.; Scudiere, Matthew B.; Jordan, John K.

    2002-01-01

    An apparatus and method for converting in-ground static weighing scales for vehicles to weigh-in-motion systems. The apparatus upon conversion includes the existing in-ground static scale, peripheral switches and an electronic module for automatic computation of the weight. By monitoring the velocity, tire position, axle spacing, and real time output from existing static scales as a vehicle drives over the scales, the system determines when an axle of a vehicle is on the scale at a given time, monitors the combined weight output from any given axle combination on the scale(s) at any given time, and from these measurements automatically computes the weight of each individual axle and gross vehicle weight by an integration, integration approximation, and/or signal averaging technique.
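    The weight computation described in the patent amounts to averaging (or integrating) the scale's real-time output over the window in which a known axle or axle combination is on the platform, with the window boundaries derived from the peripheral switches and the measured velocity. A simplified sketch with a synthetic scale signal:

      import numpy as np

      def axle_weight(signal_kg, on_scale_mask):
          """Average the scale output over samples where a known axle is on the platform."""
          return float(signal_kg[on_scale_mask].mean())  # signal averaging

      # Synthetic 100 Hz scale output while a two-axle vehicle crosses (kg).
      t = np.arange(0.0, 4.0, 0.01)
      signal = np.where((t > 0.5) & (t < 1.5), 5000.0, 0.0)
      signal += np.where((t > 2.0) & (t < 3.0), 7000.0, 0.0)
      signal += np.random.default_rng(1).normal(0.0, 50.0, t.size)  # dynamic noise

      w1 = axle_weight(signal, (t > 0.6) & (t < 1.4))  # switch-derived window, axle 1
      w2 = axle_weight(signal, (t > 2.1) & (t < 2.9))  # switch-derived window, axle 2
      print("axle weights:", round(w1), round(w2), "kg; gross:", round(w1 + w2), "kg")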

  11. Wireless Cloud Computing on Guided Missile Destroyers: A Business Case Analysis

    DTIC Science & Technology

    2013-06-01


  12. Childhood Obesity: A Growing Phenomenon for Physical Educators

    ERIC Educational Resources Information Center

    Green, Gregory; Reese, Shirley A.

    2006-01-01

    The greatest health risk facing children today is obesity. The prevalence of childhood obesity in the United States has risen dramatically in the past several decades. Because children on the average spend up to five or six hours a day involved in sedentary activities, including excessive time watching television, using the computer and playing…

  13. Trends in Media Use

    ERIC Educational Resources Information Center

    Roberts, Donald F.; Foehr, Ulla G.

    2008-01-01

    American youth are awash in media. They have television sets in their bedrooms, personal computers in their family rooms, and digital music players and cell phones in their backpacks. They spend more time with media than any single activity other than sleeping, with the average American eight- to eighteen-year-old reporting more than six hours of…

  14. Salary Compression: A Time-Series Ratio Analysis of ARL Position Classifications

    ERIC Educational Resources Information Center

    Seaman, Scott

    2007-01-01

    Although salary compression has previously been identified in such professional schools as engineering, business, and computer science, there is now evidence of salary compression among Association of Research Libraries members. Using salary data from the "ARL Annual Salary Survey", this study analyzes average annual salaries from 1994-1995…

  15. 40 CFR 61.67 - Emission tests.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... being tested is operating at the maximum production rate at which the equipment will be operated and... cross section. The sample is to be extracted at a rate proportional to the gas velocity at the sampling... apply. The average is to be computed on a time weighted basis. (iii) For gas streams containing more...

  16. Obesity and Breast Cancer

    DTIC Science & Technology

    2005-07-01

    Abstract fragments only: measured markers include serum INS, IGF-I and binding proteins, triglycerides, HDL cholesterol, total and free steroids, sex hormone binding globulin, adiponectin, and leptin. Subject terms: Bioinformatics, Biostatistics, Computer Science, Digital Mammography, Magnetic Resonance Imaging, Tissue Arrays, Gene Polymorphisms, Animal Models, Clinical.

  17. Scheduling periodic jobs that allow imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1990-01-01

    The problem of scheduling periodic jobs in hard real-time systems that support imprecise computations is discussed. Two workload models of imprecise computations are presented. These models differ from traditional models in that a task may be terminated any time after it has produced an acceptable result. Each task is logically decomposed into a mandatory part followed by an optional part. In a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines the result produced by the mandatory part to reduce the error in the result. Applications are classified as type N and type C, according to the undesirable effects of errors. The two workload models characterize the two types of applications. The optional parts of the tasks in a type-N job need never be completed. The resulting quality of each type-N job is measured in terms of the average error in the results over several consecutive periods. A class of preemptive, priority-driven algorithms that leads to feasible schedules with small average error is described and evaluated.
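    A minimal sketch of the type-N quality metric under simplifying assumptions (a fixed per-period processor budget; error taken as the unexecuted fraction of the optional part) is shown below. It illustrates the workload model only, not the scheduling algorithms evaluated in the paper.

      def average_error_type_n(periods, mandatory, optional, budget_per_period):
          """Average error of a type-N periodic task over consecutive periods.
          The mandatory part must always complete; the optional part gets the rest."""
          errors = []
          for _ in range(periods):
              remaining = budget_per_period - mandatory
              if remaining < 0:
                  raise ValueError("infeasible: mandatory part exceeds the period budget")
              done_optional = min(optional, remaining)
              errors.append((optional - done_optional) / optional)
          return sum(errors) / len(errors)

      # Hypothetical task: 3 ms mandatory, 5 ms optional, 6 ms of processor time per period.
      print("average error:", average_error_type_n(10, mandatory=3, optional=5,
                                                   budget_per_period=6))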

  18. Augmented neural networks and problem structure-based heuristics for the bin-packing problem

    NASA Astrophysics Data System (ADS)

    Kasap, Nihat; Agarwal, Anurag

    2012-08-01

    In this article, we report on a research project where we applied augmented-neural-networks (AugNNs) approach for solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP, in which subproblems are solved using a combination of AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems on which such problem structure-based heuristics could be applied. We empirically show the effectiveness of the AugNN and the decomposition approach on many benchmark problems in the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality and the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66% and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
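    For readers unfamiliar with priority-rule heuristics for the BPP, the first-fit-decreasing rule below is a typical baseline of the kind such a metaheuristic iterates over and perturbs; it is a generic illustration, not the authors' AugNN procedure.

      def first_fit_decreasing(items, capacity):
          """Sort items by decreasing size and place each into the first bin that fits."""
          residuals = []  # remaining capacity of each open bin
          packing = []    # items assigned to each bin
          for size in sorted(items, reverse=True):
              for i, res in enumerate(residuals):
                  if size <= res:
                      residuals[i] -= size
                      packing[i].append(size)
                      break
              else:  # no open bin fits: open a new one
                  residuals.append(capacity - size)
                  packing.append([size])
          return packing

      items = [4, 8, 1, 4, 2, 1, 7, 3, 6, 5]
      print(len(first_fit_decreasing(items, capacity=10)), "bins used")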

  19. Space lab system analysis

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Rives, T. B.

    1987-01-01

    An analytical study of the HOSC Generic Peripheral processing system was conducted. The results are summarized and indicate that the maximum delay in performing screen change requests should be less than 2.5 sec, occurring for a slow VAX host to video screen I/O rate of 50 KBps. This delay is due to the average I/O rate from the video terminals to their host computer. Software structure of the main computers and the host computers will have greater impact on screen change or refresh response times. The HOSC data system model was updated by a newly coded PASCAL-based simulation program, which was installed on the HOSC VAX system. This model is described and documented. Suggestions are offered to fine-tune the performance of the Ethernet interconnection network. Suggestions for using the Nutcracker by Excelan to trace itinerant packets that appear on the network from time to time were offered in discussions with HOSC personnel. Several visits were made to the HOSC facility to install and demonstrate the simulation model.

  20. Structured Overlapping Grid Simulations of Contra-rotating Open Rotor Noise

    NASA Technical Reports Server (NTRS)

    Housman, Jeffrey A.; Kiris, Cetin C.

    2015-01-01

    Computational simulations using structured overlapping grids with the Launch Ascent and Vehicle Aerodynamics (LAVA) solver framework are presented for predicting tonal noise generated by a contra-rotating open rotor (CROR) propulsion system. A coupled Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) numerical approach is applied. Three-dimensional time-accurate hybrid Reynolds Averaged Navier-Stokes/Large Eddy Simulation (RANS/LES) CFD simulations are performed in the inertial frame, including dynamic moving grids, using a higher-order accurate finite difference discretization on structured overlapping grids. A higher-order accurate free-stream preserving metric discretization with discrete enforcement of the Geometric Conservation Law (GCL) on moving curvilinear grids is used to create an accurate, efficient, and stable numerical scheme. The aeroacoustic analysis is based on a permeable surface Ffowcs Williams-Hawkings (FW-H) approach, evaluated in the frequency domain. A time-step sensitivity study was performed using only the forward row of blades to determine an adequate time-step. The numerical approach is validated against existing wind tunnel measurements.

  1. Evaluating the impact of computer-generated rounding reports on physician workflow in the nursing home: a feasibility time-motion study.

    PubMed

    Thorpe-Jamison, Patrice T; Culley, Colleen M; Perera, Subashan; Handler, Steven M

    2013-05-01

    To determine the feasibility and impact of a computer-generated rounding report on physician rounding time and perceived barriers to providing clinical care in the nursing home (NH) setting. Three NHs located in Pittsburgh, PA. Ten attending NH physicians. Time-motion method to record the time taken to gather data (pre-rounding), to evaluate patients (rounding), and document their findings/develop an assessment and plan (post-rounding). Additionally, surveys were used to determine the physicians' perception of barriers to providing optimal clinical care, as well as physician satisfaction before and after the use of a computer-generated rounding report. Ten physicians were observed during half-day sessions both before and 4 weeks after they were introduced to a computer-generated rounding report. A total of 69 distinct patients were evaluated during the 20 physician observation sessions. Each physician evaluated, on average, four patients before implementation and three patients after implementation. The observations showed a significant increase (P = .03) in the pre-rounding time, and no significant difference in the rounding (P = .09) or post-rounding times (P = .29). Physicians reported that information was more accessible (P = .03) following the implementation of the computer-generated rounding report. Most (80%) physicians stated that they would prefer to use the computer-generated rounding report rather than the paper-based process. The present study provides preliminary data suggesting that the use of a computer-generated rounding report can decrease some perceived barriers to providing optimal care in the NH. Although the rounding report did not improve rounding time efficiency, most NH physicians would prefer to use the computer-generated report rather than the current paper-based process. Improving the accuracy and harmonization of medication information with the electronic medication administration record and rounding reports, as well as improving facility network speeds might improve the effectiveness of this technology. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  2. The Stagger-grid: A grid of 3D stellar atmosphere models. II. Horizontal and temporal averaging and spectral line formation

    NASA Astrophysics Data System (ADS)

    Magic, Z.; Collet, R.; Hayek, W.; Asplund, M.

    2013-12-01

    Aims: We study the implications of averaging methods with different reference depth scales for 3D hydrodynamical model atmospheres computed with the Stagger-code. The temporally and spatially averaged (hereafter denoted as ⟨3D⟩) models are explored in the light of local thermodynamic equilibrium (LTE) spectral line formation by comparing spectrum calculations using full 3D atmosphere structures with those from ⟨3D⟩ averages. Methods: We explored methods for computing mean ⟨3D⟩ stratifications from the Stagger-grid time-dependent 3D radiative hydrodynamical atmosphere models by considering four different reference depth scales (geometrical depth, column-mass density, and two optical depth scales). Furthermore, we investigated the influence of alternative averages (logarithmic, enforced hydrostatic equilibrium, flux-weighted temperatures). For the line formation we computed curves of growth for Fe i and Fe ii lines in LTE. Results: The resulting ⟨3D⟩ stratifications for the four reference depth scales can be very different. We typically find that in the upper atmosphere and in the superadiabatic region just below the optical surface, where the temperature and density fluctuations are highest, the differences become considerable and increase for higher Teff, lower log g, and lower [Fe / H]. The differential comparison of spectral line formation shows distinctive differences depending on which ⟨3D⟩ model is applied. The averages over layers of constant column-mass density yield the best mean ⟨3D⟩ representation of the full 3D models for LTE line formation, while the averages on layers at constant geometrical height are the least appropriate. Unexpectedly, the usually preferred averages over layers of constant optical depth are prone to increasing interference by reversed granulation towards higher effective temperature, in particular at low metallicity. Appendix A is available in electronic form at http://www.aanda.org. Mean ⟨3D⟩ models are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A8 as well as at http://www.stagger-stars.net
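    The best-performing choice above, averaging on surfaces of constant column-mass density, can be sketched for a single snapshot as follows: interpolate each column onto a common column-mass scale, then average horizontally. The arrays are synthetic; the actual Stagger-grid pipeline also handles temporal averaging and the alternative averages mentioned in the abstract.

      import numpy as np

      def average_on_column_mass(temperature, column_mass, cm_ref):
          """temperature, column_mass: (nx, ny, nz) arrays, column_mass increasing with depth.
          cm_ref: 1D reference column-mass scale defining the <3D> stratification.
          Interpolate each column onto cm_ref, then average horizontally."""
          nx, ny, _ = temperature.shape
          t_interp = np.empty((nx, ny, cm_ref.size))
          for i in range(nx):
              for j in range(ny):
                  t_interp[i, j] = np.interp(cm_ref, column_mass[i, j], temperature[i, j])
          return t_interp.mean(axis=(0, 1))

      # Tiny synthetic atmosphere: 8 x 8 columns, 40 depth points.
      rng = np.random.default_rng(2)
      cm = np.cumsum(rng.uniform(0.5, 1.5, size=(8, 8, 40)), axis=2)
      temp = 4000.0 + 150.0 * np.log(cm) + rng.normal(0.0, 20.0, size=cm.shape)
      cm_ref = np.geomspace(cm[..., 0].max(), cm[..., -1].min(), 40)
      print(average_on_column_mass(temp, cm, cm_ref)[:5])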

  3. Cone-beam computed tomography fusion and navigation for real-time positron emission tomography-guided biopsies and ablations: a feasibility study.

    PubMed

    Abi-Jaoudeh, Nadine; Mielekamp, Peter; Noordhoek, Niels; Venkatesan, Aradhana M; Millo, Corina; Radaelli, Alessandro; Carelsen, Bart; Wood, Bradford J

    2012-06-01

    To describe a novel technique for multimodality positron emission tomography (PET) fusion-guided interventions that combines cone-beam computed tomography (CT) with PET/CT before the procedure. Subjects were selected among patients scheduled for a biopsy or ablation procedure. The lesions were not visible with conventional imaging methods or did not have uniform uptake on PET. Clinical success was defined by adequate histopathologic specimens for molecular profiling or diagnosis and by lack of enhancement on follow-up imaging for ablation procedures. Time to target (time elapsed between the completion of the initial cone-beam CT scan and first tissue sample or treatment), total procedure time (time from the moment the patient was on the table until the patient was off the table), and number of times the needle was repositioned were recorded. Seven patients underwent eight procedures (two ablations and six biopsies). Registration and procedures were completed successfully in all cases. Clinical success was achieved in all biopsy procedures and in one of the two ablation procedures. The needle was repositioned once in one biopsy procedure only. On average, the time to target was 38 minutes (range 13-54 min). Total procedure time was 95 minutes (range 51-240 min, which includes composite ablation). On average, fluoroscopy time was 2.5 minutes (range 1.3-6.2 min). An integrated cone-beam CT software platform can enable PET-guided biopsies and ablation procedures without the need for additional specialized hardware. Copyright © 2012 SIR. Published by Elsevier Inc. All rights reserved.

  4. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  5. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
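    The two planar averages contrasted in these studies are simple reductions of a 3D field; with axes ordered (x, y, z), a horizontal (x-y) average leaves a vertical profile and a vertical (y-z) average leaves a radial profile. A sketch with a synthetic array (not the simulation data):

      import numpy as np

      rng = np.random.default_rng(3)
      b_y = rng.normal(size=(64, 64, 64))  # synthetic azimuthal field on a shearing-box grid

      b_y_xy = b_y.mean(axis=(0, 1))  # horizontal average: one value per z, <B_y>(z)
      b_y_yz = b_y.mean(axis=(1, 2))  # vertical average: one value per x, <B_y>(x)
      print(b_y_xy.shape, b_y_yz.shape)  # (64,) and (64,)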

  6. N2O eddy covariance fluxes: From field measurements to flux calculation

    NASA Astrophysics Data System (ADS)

    Lognoul, Margaux; Debacq, Alain; Heinesch, Bernard; Aubinet, Marc

    2017-04-01

    From March to October 2016, we performed eddy covariance measurements in a sugar beet crop at the Lonzée Terrestrial Observatory (LTO, candidate ICOS site) in Belgium. N2O and H2O atmospheric concentrations were measured at 10 Hz using a quantum-cascade laser spectrometer (Aerodyne Research, Inc.) and combined with the 3D wind speed components measured with a sonic anemometer (Gill HS-50). Flux computation was carried out using the EddyPro Software (LI-COR) with a focus on adaptations needed for tracers like N2O. Data filtering and quality control were performed according to Vickers and Mahrt (1997) and Mauder and Foken (2004). The flags were adapted to N2O time series. In this presentation, the different computation steps will be discussed. More specifically: 1) Considering that a large proportion of N2O fluxes are small (within ± 0.5 nmol m-2 s-1), the classical stationarity test might lead to excessive data filtering, and in such cases some researchers have chosen to use the running mean (RM) as a detrend method over block averaging (BA) and to filter data otherwise. For our dataset, BA mean fluxes combined with the stationarity test did not significantly differ from RM fluxes when the averaging window was 300 s or larger, but were significantly larger otherwise, suggesting that significant eddies occurred at the 5-min timescale and that they were not accounted for with a shorter averaging window. 2) The determination of time-lag in the case of N2O fluxes can become tricky for two reasons: (1) the signal amplitude can differ from one time period to the next, making it difficult to use the method of covariance maximization, and (2) an additional clock drift can appear if the spectrometer is not logging on the same computer as the anemometer. In our case, the N2O signal was strong enough to solve both problems and to perform time-lag compensation according to the covariance maximization, with a default value equal to the mode of the lag distribution. The automatic time-lag optimization suggested by EddyPro was not used as it gave inconsistent values for our dataset. 3) The effect of high frequency spectral correction was also investigated by comparing different in-situ methods to evaluate how using spectra or co-spectra - averaged or not - can affect results. Finally, a preliminary analysis of N2O flux dynamics will be presented.
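    The time-lag determination by covariance maximization mentioned in point 2 can be sketched as follows, using synthetic series and a hypothetical lag window; production flux software handles many additional details (detrending, despiking, physical lag limits).

      import numpy as np

      def lag_by_covariance_maximization(w, c, max_lag):
          """Return the lag (in samples) maximizing |cov(w, c shifted by lag)|,
          assuming the scalar signal c lags behind the vertical wind w."""
          best_lag, best_cov = 0, 0.0
          for lag in range(max_lag + 1):
              n = len(w) - lag
              cov = np.cov(w[:n], c[lag:])[0, 1]
              if abs(cov) > abs(best_cov):
                  best_lag, best_cov = lag, cov
          return best_lag

      # Synthetic 10 Hz series: scalar signal delayed by 23 samples (2.3 s tube delay).
      rng = np.random.default_rng(4)
      w = rng.normal(size=18000)                           # 30 min of vertical wind
      c = np.roll(w, 23) + 0.5 * rng.normal(size=w.size)   # delayed, noisy N2O proxy
      print("estimated lag:", lag_by_covariance_maximization(w, c, max_lag=50), "samples")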

  7. Cartesian-Grid Simulations of a Canard-Controlled Missile with a Free-Spinning Tail

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The proposed paper presents a series of simulations of a geometrically complex, canard-controlled, supersonic missile with free-spinning tail fins. Time-dependent simulations were performed using an inviscid Cartesian-grid-based method with results compared to both experimental data and high-resolution Navier-Stokes computations. At fixed free stream conditions and canard deflections, the tail spin rate was iteratively determined such that the net rolling moment on the empennage is zero. This rate corresponds to the time-asymptotic rate of the free-to-spin fin system. After obtaining spin-averaged aerodynamic coefficients for the missile, the investigation seeks a fixed-tail approximation to the spin-averaged aerodynamic coefficients, and examines the validity of this approximation over a variety of freestream conditions.

  8. Time-averaged aerodynamic loads on the vane sets of the 40- by 80-foot and 80- by 120-foot wind tunnel complex

    NASA Technical Reports Server (NTRS)

    Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.

    1987-01-01

    Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.

  9. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  10. Computer Activities for Persons With Dementia.

    PubMed

    Tak, Sunghee H; Zhang, Hongmei; Patel, Hetal; Hong, Song Hee

    2015-06-01

    The study examined participants' experiences and individual characteristics during a 7-week computer activity program for persons with dementia. The descriptive study with a mixed-methods design collected 612 observational logs of computer sessions from 27 study participants, including individual interviews before and after the program. Quantitative data analysis included descriptive statistics, correlational coefficients, t-test, and chi-square. Content analysis was used to analyze qualitative data. Each participant averaged 23 sessions and 591 min over the 7 weeks. Computer activities included slide shows with music, games, internet use, and emailing. On average, participants had a high score of intensity in engagement per session. Women attended significantly more sessions than men. Higher education level was associated with a higher number of different activities used per session and more time spent on online games. Older participants felt more tired. Feeling tired was significantly correlated with a higher number of weeks with only one session attendance per week. Taking more anticholinergic medications was significantly associated with a higher percentage of sessions with disengagement. The findings were significant at p < .05. Qualitative content analysis indicated that tailoring computer activities to an individual's needs and functioning is critical. All participants needed technical assistance. A framework for tailoring computer activities may provide guidance on developing and maintaining treatment fidelity of tailored computer activity interventions among persons with dementia. Practice guidelines and education protocols may assist caregivers and service providers to integrate computer activities into homes and aging services settings. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Effective Alphas and Mixing for Disks with Gravitational Instabilities: Convergence Testing in Global 3D Simulations

    NASA Astrophysics Data System (ADS)

    Michael, Scott A.; Steiman-Cameron, T.; Durisen, R.; Boley, A.

    2008-05-01

    Using 3D simulations of a cooling disk undergoing gravitational instabilities (GIs), we compute the effective Shakura and Sunyaev (1973) alphas due to gravitational torques and compare them to predictions from an analytic local theory for thin disks by Gammie (2001). Our goal is to determine how accurately a locally defined alpha can characterize mass and angular momentum transport by GIs in disks. Cases are considered both with cooling by an imposed constant global cooling time (Mejia et al. 2005) and with realistic radiative transfer (Boley et al. 2007). Grid spacing in the azimuthal direction is varied to investigate how the computed alpha is affected by numerical resolution. The azimuthal direction is particularly important, because higher resolution in azimuth allows GI power to spread to higher-order (multi-armed) modes that behave more locally. We find that, in many important respects, the transport of mass and angular momentum by GIs is an intrinsically global phenomenon. Effective alphas are variable on a dynamic time scale over global spatial scales. Nevertheless, preliminary results at the highest resolutions for an imposed cooling time show that our computed alphas, though systematically higher, tend on average to follow Gammie's prediction to within perhaps a factor of two. Our computed alphas include only gravitational stresses, while in Gammie's treatment the effective alpha is due equally to hydrodynamic (Reynolds) and gravitational stresses. So Gammie's prediction may significantly underestimate the true average stresses in a GI-active disk. Our effective alphas appear to be reasonably well converged for 256 and 512 azimuthal zones. We also have a high-resolution simulation under way to test the extent of radial mixing by GIs of gas and its entrained dust for comparison with Stardust observations. Results will be presented if available at the time of the meeting.

  12. Turbulent transport measurements with a laser Doppler velocimeter

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Angus, J. C.; Dunning, J. W., Jr.

    1972-01-01

    The power spectrum of phototube current from a laser Doppler velocimeter operating in the heterodyne mode has been computed. The spectrum is obtained in terms of the space time correlation function of the fluid. The spectral width and shape predicted by the theory are in agreement with experiment. For normal operating parameters the time average spectrum contains information only for times shorter than the Lagrangian integral time scale of the turbulence. To examine the long time behavior, one must use either extremely small scattering angles, much longer wavelength radiation or a different mode of signal analysis, e.g., FM detection.

  13. A Simplified Model for Detonation Based Pressure-Gain Combustors

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    2010-01-01

    A time-dependent model is presented which simulates the essential physics of a detonative or otherwise constant volume, pressure-gain combustor for gas turbine applications. The model utilizes simple, global thermodynamic relations to determine an assumed instantaneous and uniform post-combustion state in one of many envisioned tubes comprising the device. A simple, second order, non-upwinding computational fluid dynamic algorithm is then used to compute the (continuous) flowfield properties during the blowdown and refill stages of the periodic cycle which each tube undergoes. The exhausted flow is averaged to provide mixed total pressure and enthalpy which may be used as a cycle performance metric for benefits analysis. The simplicity of the model allows for nearly instantaneous results when implemented on a personal computer. The results compare favorably with higher resolution numerical codes which are more difficult to configure, and more time consuming to operate.
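    The exit-averaging step described above, collapsing the unsteady, non-uniform tube exhaust into a single mixed total pressure and enthalpy, can be illustrated with a mass-flux-weighted time average over one cycle. The histories below are synthetic placeholders, and the weighting choice is one common convention rather than necessarily the one used in the model.

      import numpy as np

      def mass_averaged(quantity, mass_flux, dt):
          """Mass-flux-weighted time average of an exit quantity over one cycle."""
          return np.sum(quantity * mass_flux * dt) / np.sum(mass_flux * dt)

      # Synthetic one-cycle exit histories on a uniform time grid: blowdown then refill.
      t = np.linspace(0.0, 1.0, 200)
      mdot = np.clip(np.sin(2.0 * np.pi * t), 0.0, None) + 0.05  # kg/s
      p_total = 3.0e5 + 1.5e5 * np.exp(-5.0 * t)                 # Pa
      h_total = 1.2e6 + 2.0e5 * np.exp(-5.0 * t)                 # J/kg

      dt = t[1] - t[0]
      print("mixed total pressure [bar]:", mass_averaged(p_total, mdot, dt) / 1.0e5)
      print("mixed total enthalpy [MJ/kg]:", mass_averaged(h_total, mdot, dt) / 1.0e6)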

  14. GRAMPS: An Automated Ambulatory Geriatric Record

    PubMed Central

    Hammond, Kenric W.; King, Carol A.; Date, Vishvanath V.; Prather, Robert J.; Loo, Lawrence; Siddiqui, Khwaja

    1988-01-01

    GRAMPS (Geriatric Record and Multidisciplinary Planning System) is an interactive MUMPS system developed for VA outpatient use. It allows physicians to effectively document care in problem-oriented format with structured narrative and free text, eliminating handwritten input. We evaluated the system in a one-year controlled cohort study. When the computer was used, appointment times averaged 8.2 minutes longer (32.6 vs. 24.4 minutes) compared to control visits with the same physicians. Computer use was associated with better quality of care as measured in the management of a common problem, hypertension, as well as decreased overall costs of care. When a faster computer was installed, data entry times improved, suggesting that slower processing had accounted for a substantial portion of the observed difference in appointment lengths. The GRAMPS system was well accepted by providers. The modular design used in GRAMPS has been extended to medical-care applications in Nursing and Mental Health.

  15. Unsteady Analysis of Separated Aerodynamic Flows Using an Unstructured Multigrid Algorithm

    NASA Technical Reports Server (NTRS)

    Pelaez, Juan; Mavriplis, Dimitri J.; Kandil, Osama

    2001-01-01

    An implicit method for the computation of unsteady flows on unstructured grids is presented. The resulting nonlinear system of equations is solved at each time step using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Validation of the code using a one-equation turbulence model is performed for the well-known case of flow over a cylinder. A Detached Eddy Simulation model is also implemented and its performance compared to the one equation Spalart-Allmaras Reynolds Averaged Navier-Stokes (RANS) turbulence model. Validation cases using DES and RANS include flow over a sphere and flow over a NACA 0012 wing including massive stall regimes. The project was driven by the ultimate goal of computing separated flows of aerodynamic interest, such as massive stall or flows over complex non-streamlined geometries.

  16. Computer/Mobile Device Screen Time of Children and Their Eye Care Behavior: The Roles of Risk Perception and Parenting.

    PubMed

    Chang, Fong-Ching; Chiu, Chiung-Hui; Chen, Ping-Hung; Miao, Nae-Fang; Chiang, Jeng-Tung; Chuang, Hung-Yi

    2018-03-01

    This study assessed the computer/mobile device screen time and eye care behavior of children and examined the roles of risk perception and parental practices. Data were obtained from a sample of 2,454 child-parent dyads recruited from 30 primary schools in Taipei city and New Taipei city, Taiwan, in 2016. Self-administered questionnaires were collected from students and parents. Fifth-grade students spend more time on new media (computer/smartphone/tablet: 16 hours a week) than on traditional media (television: 10 hours a week). The average daily screen time (3.5 hours) for these children exceeded the American Academy of Pediatrics recommendations (≤2 hours). Multivariate analysis results showed that after controlling for demographic factors, the parents with higher levels of risk perception and parental efficacy were more likely to mediate their child's eye care behavior. Children who reported lower academic performance, who were from non-intact families, reported lower levels of risk perception of mobile device use, had parents who spent more time using computers and mobile devices, and had lower levels of parental mediation were more likely to spend more time using computers and mobile devices; whereas children who reported higher academic performance, higher levels of risk perception, and higher levels of parental mediation were more likely to engage in higher levels of eye care behavior. Risk perception by children and parental practices are associated with the amount of screen time that children regularly engage in and their level of eye care behavior.

  17. 7 CFR 993.159 - Payments for services performed with respect to reserve tonnage prunes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... overhead costs, which include those for supervision, indirect labor, fuel, power and water, taxes and... tonnage prunes. The Committee will compute the average industry cost for holding reserve pool prunes by... choose to exclude the high and low data in computing an industry average. The industry average costs may...

  18. 7 CFR 993.159 - Payments for services performed with respect to reserve tonnage prunes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... overhead costs, which include those for supervision, indirect labor, fuel, power and water, taxes and... tonnage prunes. The Committee will compute the average industry cost for holding reserve pool prunes by... choose to exclude the high and low data in computing an industry average. The industry average costs may...

  19. 7 CFR 993.159 - Payments for services performed with respect to reserve tonnage prunes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... overhead costs, which include those for supervision, indirect labor, fuel, power and water, taxes and... tonnage prunes. The Committee will compute the average industry cost for holding reserve pool prunes by... choose to exclude the high and low data in computing an industry average. The industry average costs may...

  20. Evaluation of transit-time and electromagnetic flow measurement in a chronically instrumented nonhuman primate model.

    PubMed

    Koenig, S C; Reister, C A; Schaub, J; Swope, R D; Ewert, D; Fanton, J W

    1996-01-01

    The Physiology Research Branch at Brooks AFB conducts both human and nonhuman primate experiments to determine the effects of microgravity and hypergravity on the cardiovascular system and to identify the particular mechanisms that invoke these responses. Primary investigative efforts in our nonhuman primate model require the determination of total peripheral resistance, systemic arterial compliance, and pressure-volume loop characteristics. These calculations require beat-to-beat measurement of aortic flow. This study evaluated accuracy, linearity, biocompatibility, and anatomical features of commercially available electromagnetic (EMF) and transit-time flow measurement techniques. Five rhesus monkeys were instrumented with either EMF (3 subjects) or transit-time (2 subjects) flow sensors encircling the proximal ascending aorta. Cardiac outputs computed from these transducers taken over ranges of 0.5 to 2.0 L/min were compared to values obtained using thermodilution. In vivo experiments demonstrated that the EMF probe produced an average error of 15% (r = .896) and 8.6% average linearity per reading, and the transit-time flow probe produced an average error of 6% (r = .955) and 5.3% average linearity per reading. Postoperative performance and biocompatibility of the probes were maintained throughout the study. The transit-time sensors provided the advantages of greater accuracy, smaller size, and lighter weight than the EMF probes. In conclusion, the characteristic features and performance of the transit-time sensors were superior to those of the EMF sensors in this study.

  1. Evaluation of transit-time and electromagnetic flow measurement in a chronically instrumented nonhuman primate model

    NASA Technical Reports Server (NTRS)

    Koenig, S. C.; Reister, C. A.; Schaub, J.; Swope, R. D.; Ewert, D.; Fanton, J. W.; Convertino, V. A. (Principal Investigator)

    1996-01-01

    The Physiology Research Branch at Brooks AFB conducts both human and nonhuman primate experiments to determine the effects of microgravity and hypergravity on the cardiovascular system and to identify the particular mechanisms that invoke these responses. Primary investigative efforts in our nonhuman primate model require the determination of total peripheral resistance, systemic arterial compliance, and pressure-volume loop characteristics. These calculations require beat-to-beat measurement of aortic flow. This study evaluated accuracy, linearity, biocompatibility, and anatomical features of commercially available electromagnetic (EMF) and transit-time flow measurement techniques. Five rhesus monkeys were instrumented with either EMF (3 subjects) or transit-time (2 subjects) flow sensors encircling the proximal ascending aorta. Cardiac outputs computed from these transducers taken over ranges of 0.5 to 2.0 L/min were compared to values obtained using thermodilution. In vivo experiments demonstrated that the EMF probe produced an average error of 15% (r = .896) and 8.6% average linearity per reading, and the transit-time flow probe produced an average error of 6% (r = .955) and 5.3% average linearity per reading. Postoperative performance and biocompatibility of the probes were maintained throughout the study. The transit-time sensors provided the advantages of greater accuracy, smaller size, and lighter weight than the EMF probes. In conclusion, the characteristic features and performance of the transit-time sensors were superior to those of the EMF sensors in this study.
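
    The error and correlation figures quoted above are straightforward to reproduce for any set of paired readings. The sketch below is illustrative only: the arrays are made-up stand-ins for the probe and thermodilution cardiac outputs, not the study's data.

```python
import numpy as np

# Hypothetical paired cardiac-output readings (L/min); the study compared each
# probe against thermodilution over a 0.5-2.0 L/min range.
probe = np.array([0.55, 0.92, 1.30, 1.71, 1.95])
thermodilution = np.array([0.52, 1.00, 1.25, 1.80, 2.05])

# Average absolute percent error of the probe relative to the reference.
avg_pct_error = np.mean(np.abs(probe - thermodilution) / thermodilution) * 100

# Pearson correlation coefficient (the "r" quoted in the abstract).
r = np.corrcoef(probe, thermodilution)[0, 1]

print(f"average error = {avg_pct_error:.1f}%, r = {r:.3f}")
```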

  2. BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark.

    PubMed

    Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung

    2016-05-01

    Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today's data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG's simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact.

  3. Effect of solar proton events in 1978 and 1979 on the odd nitrogen abundance in the middle atmosphere

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Meade, Paul E.

    1988-01-01

    Daily average solar proton flux data for 1978 and 1979 are used in a proton energy degradation scheme to derive ion pair production rates and atomic nitrogen production rates. The latter are computed in a form suitable for inclusion in an atmospheric, two-dimensional, time-dependent photochemical model. Odd nitrogen distributions are computed from the model, including atomic nitrogen production from solar protons, and are compared with baseline distributions. The comparisons show that the average effect of the solar protons in 1978 and 1979 was to cause changes in odd nitrogen only above 10 mbar and only at latitudes above about 50 deg in both hemispheres. The influence of the solar proton-produced odd nitrogen on the local abundance of odd nitrogen depends primarily on the background odd nitrogen abundance as well as the altitude and season.

  4. User's Guide for ERB 7 Matrix. Volume 1: Experiment Description and Quality Control Report for Year 1

    NASA Technical Reports Server (NTRS)

    Tighe, R. J.; Shen, M. Y. H.

    1984-01-01

    The Nimbus 7 ERB MATRIX Tape is a computer program in which radiances and irradiances are converted into fluxes which are used to compute the basic scientific output parameters, emitted flux, albedo, and net radiation. They are spatially averaged and presented as time averages over one-day, six-day, and monthly periods. MATRIX data for the period November 16, 1978 through October 31, 1979 are presented. Described are the Earth Radiation Budget experiment, the Science Quality Control Report, Items checked by the MATRIX Science Quality Control Program, and Science Quality Control Data Analysis Report. Additional material from the detailed scientific quality control of the tapes which may be very useful to a user of the MATRIX tapes is included. Known errors and data problems and some suggestions on how to use the data for further climatologic and atmospheric physics studies are also discussed.

  5. Application of a time-magnitude prediction model for earthquakes

    NASA Astrophysics Data System (ADS)

    An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He

    2007-06-01

    In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process of strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model once its parameters are computed. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and that the time period should be as long as possible.

  6. Analysis of a dual-reflector antenna system using physical optics and digital computers

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1972-01-01

    The application of physical-optics diffraction theory to a deployable dual-reflector geometry is discussed. The methods employed are not restricted to the Conical-Gregorian antenna, but apply in a general way to dual and even multiple reflector systems. Complex vector wave methods are used in the Fresnel and Fraunhofer regions of the reflectors. Field amplitude, phase, polarization data, and time average Poynting vectors are obtained via an IBM 360/91 digital computer. Focal region characteristics are plotted with the aid of a CalComp plotter. Comparison between the GSFC Huygens wavelet approach, JPL measurements, and JPL computer results based on the near field spherical wave expansion method are made wherever possible.

  7. The Effort to Reduce a Muscle Fatigue Through Gymnastics Relaxation and Ergonomic Approach for Computer Users in Central Building State University of Medan

    NASA Astrophysics Data System (ADS)

    Gultom, Syamsul; Darma Sitepu, Indra; Hasibuan, Nurman

    2018-03-01

    Fatigue due to long and continuous computer usage can lead to problems of dominant fatigue associated with decreased performance and work motivation. Specific targets in the first phase of this research have been achieved: (1) identifying complaints of workers using computers, using the Bourdon Wiersma test kit, and (2) finding the right relaxation and work-posture design as a solution to reduce muscle fatigue in computer-based workers. The type of research used in this study is the research and development method, which aims to produce new products or refine existing ones. The final products are a prototype back-holder, a monitor filter, and a relaxation exercise routine, together with a manual explaining how to perform the exercise while in front of the computer, to lower the fatigue level of computer users in Unimed's Administration Center. In the first phase, observations and interviews were conducted and the fatigue level of employees using computers at Unimed's Administration Center was identified using the Bourdon Wiersma test, with the following results: (1) the average velocity time of respondents in BAUK, BAAK, and BAPSI after working, with a speed-interpretation value of 8.4 (WS 13), was in a good enough category; (2) the average accuracy of respondents in BAUK, BAAK, and BAPSI after working, with an accuracy-interpretation value of 5.5 (WS 8), was in a doubtful category, showing that computer users at the Unimed Administration Center experienced significant tiredness; and (3) the consistency of the average fatigue measurements of computer users at Unimed's Administration Center after working, with a consistency-interpretation value of 5.5 (WS 8), was also in a doubtful category, which means that computer users at the Unimed Administration Center suffered extreme fatigue. In phase II, based on the results of the first phase, the researchers offer solutions such as the back-holder prototype, the monitor filter, and a properly designed relaxation exercise to reduce the fatigue level. Furthermore, in order to maximize the exercise itself, a manual will be given to employees who regularly work in front of computers at Unimed's Administration Center.

  8. CFD Analysis and Design Optimization Using Parallel Computers

    NASA Technical Reports Server (NTRS)

    Martinelli, Luigi; Alonso, Juan Jose; Jameson, Antony; Reuther, James

    1997-01-01

    A versatile and efficient multi-block method is presented for the simulation of both steady and unsteady flow, as well as aerodynamic design optimization of complete aircraft configurations. The compressible Euler and Reynolds Averaged Navier-Stokes (RANS) equations are discretized using a high resolution scheme on body-fitted structured meshes. An efficient multigrid implicit scheme is implemented for time-accurate flow calculations. Optimum aerodynamic shape design is achieved at very low cost using an adjoint formulation. The method is implemented on parallel computing systems using the MPI message passing interface standard to ensure portability. The results demonstrate that, by combining highly efficient algorithms with parallel computing, it is possible to perform detailed steady and unsteady analysis as well as automatic design for complex configurations using the present generation of parallel computers.

  9. Applying mathematical modeling to create job rotation schedules for minimizing occupational noise exposure.

    PubMed

    Tharmmaphornphilas, Wipawee; Green, Benjamin; Carnahan, Brian J; Norman, Bryan A

    2003-01-01

    This research developed worker schedules by using administrative controls and a computer programming model to reduce the likelihood of worker hearing loss. By rotating the workers through different jobs during the day it was possible to reduce their exposure to hazardous noise levels. Computer simulations were made based on data collected in a real setting. Worker schedules currently used at the site are compared with proposed worker schedules from the computer simulations. For the worker assignment plans found by the computer model, the authors calculate a significant decrease in time-weighted average (TWA) sound level exposure. The maximum daily dose that any worker is exposed to is reduced by 58.8%, and the maximum TWA value for the workers is reduced by 3.8 dB from the current schedule.
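
    The daily noise dose and time-weighted average (TWA) behind these comparisons follow the standard OSHA formulas with a 5-dB exchange rate and a 90-dB criterion level: D = 100 * sum(C_i / T_i) with T_i = 8 / 2^((L_i - 90) / 5), and TWA = 16.61 * log10(D / 100) + 90. A minimal sketch assuming those formulas and a hypothetical two-station rotation (not the site data used in the paper):

```python
import math

def reference_duration_hours(level_db):
    """Allowed exposure time at a given sound level (OSHA 5-dB exchange rate, 90-dB criterion)."""
    return 8.0 / (2.0 ** ((level_db - 90.0) / 5.0))

def daily_dose(schedule):
    """schedule: list of (sound_level_dB, hours_at_that_level) for one worker's day."""
    return 100.0 * sum(hours / reference_duration_hours(level) for level, hours in schedule)

def twa(schedule):
    """Time-weighted average sound level for the day."""
    return 16.61 * math.log10(daily_dose(schedule) / 100.0) + 90.0

# Hypothetical rotation: 4 h at a 95-dB station, 4 h at an 85-dB station.
rotated = [(95.0, 4.0), (85.0, 4.0)]
print(f"dose = {daily_dose(rotated):.1f}%, TWA = {twa(rotated):.1f} dB")
```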

  10. Critical Anatomy Relative to the Sacral Suture: A Postoperative Imaging Study After Robotic Sacrocolpopexy.

    PubMed

    Crisp, Catrina C; Herfel, Charles V; Pauls, Rachel N; Westermann, Lauren B; Kleeman, Steven D

    2016-01-01

    This study aimed to characterize pertinent anatomy relative to the sacral suture placed at the time of robotic sacrocolpopexy using postoperative computed tomography and magnetic resonance imaging. A vascular clip was placed at the base of the sacral suture at the time of robotic sacrocolpopexy. Six weeks postoperatively, subjects returned for a computed tomography scan and magnetic resonance imaging. Ten subjects completed the study. The middle sacral artery and vein coursed midline or to the left of midline in all the subjects. The left common iliac vein was an average of 26 mm from the sacral suture. To the right of the suture, the right common iliac artery was 18 mm away. Following the right common iliac artery to its bifurcation, the right internal iliac was on average 10 mm from the suture. The bifurcations of the inferior vena cava and the aorta were 33 mm and 54 mm further cephalad, respectively. The right ureter, on average, was 18 mm from the suture. The thickness of the anterior longitudinal ligament was 2 mm. The mean angle of descent of the sacrum was 70 degrees. Lastly, we found that 70% of the time, a vertebral body was directly below the suture; the disc was noted in 30%. We describe critical anatomy surrounding the sacral suture placed during robotic sacrocolpopexy. Proximity of both vascular and urologic structures within 10 to 18 mm, as well as anterior ligament thickness of only 2 mm, highlights the importance of adequate exposure, careful dissection, and surgeon expertise.

  11. Parallel Hough Transform-Based Straight Line Detection and Its FPGA Implementation in Embedded Vision

    PubMed Central

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-01-01

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness. PMID:23867746

  12. Parallel Hough Transform-based straight line detection and its FPGA implementation in embedded vision.

    PubMed

    Lu, Xiaofeng; Song, Li; Shen, Sumin; He, Kang; Yu, Songyu; Ling, Nam

    2013-07-17

    Hough Transform has been widely used for straight line detection in low-definition and still images, but it suffers from execution time and resource requirements. Field Programmable Gate Arrays (FPGA) provide a competitive alternative for hardware acceleration to reap tremendous computing performance. In this paper, we propose a novel parallel Hough Transform (PHT) and FPGA architecture-associated framework for real-time straight line detection in high-definition videos. A resource-optimized Canny edge detection method with enhanced non-maximum suppression conditions is presented to suppress most possible false edges and obtain more accurate candidate edge pixels for subsequent accelerated computation. Then, a novel PHT algorithm exploiting spatial angle-level parallelism is proposed to upgrade computational accuracy by improving the minimum computational step. Moreover, the FPGA based multi-level pipelined PHT architecture optimized by spatial parallelism ensures real-time computation for 1,024 × 768 resolution videos without any off-chip memory consumption. This framework is evaluated on ALTERA DE2-115 FPGA evaluation platform at a maximum frequency of 200 MHz, and it can calculate straight line parameters in 15.59 ms on the average for one frame. Qualitative and quantitative evaluation results have validated the system performance regarding data throughput, memory bandwidth, resource, speed and robustness.
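
    The accumulation step that the two records above parallelize on an FPGA can be written directly in a few lines of NumPy. This is a plain sequential reference for the standard (rho, theta) Hough transform, not the authors' pipelined PHT architecture:

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Accumulate votes in (rho, theta) space for a binary edge image."""
    h, w = edge_mask.shape
    thetas = np.deg2rad(np.arange(n_theta))            # 0..179 degrees
    diag = int(np.ceil(np.hypot(h, w)))
    accumulator = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for theta_idx, theta in enumerate(thetas):
        rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(accumulator[:, theta_idx], rho, 1)   # vote for each edge pixel
    return accumulator, thetas

# Tiny example: a diagonal line of edge pixels.
img = np.eye(64, dtype=bool)
acc, thetas = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print("strongest line at theta =", np.rad2deg(thetas[theta_idx]), "degrees")
```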

  13. Multiprocessing on supercomputers for computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Mehta, Unmeel B.

    1990-01-01

    Very little use is made of the multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) in computational aerodynamics to significantly improve turnaround time. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, the improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) through multi-tasking is applied via a strategy which requires relatively minor modifications to an existing code for a single processor. Essentially, this approach maps the available memory to multiple processors, exploiting the C-FORTRAN-Unix interface. The existing single-processor code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor. As a demonstration of this approach, a Multiple Processor Multiple Grid (MPMG) code is developed. It is capable of using nine processors, and can be easily extended to a larger number of processors. This code solves the three-dimensional, Reynolds-averaged, thin-layer and slender-layer Navier-Stokes equations with an implicit, approximately factored and diagonalized method. The solver is applied to a generic oblique-wing aircraft problem on a four-processor Cray-2 computer. A tricubic interpolation scheme is developed to increase the accuracy of coupling of overlapped grids. For the oblique-wing aircraft problem, a speedup of two in elapsed (turnaround) time is observed in a saturated time-sharing environment.

  14. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-Averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time up to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computed errors based on CFL number and sub-iterations, evaluating the frequency content of the unsteady pressures, and evaluating oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.
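
    Power spectral densities of unsteady pressure-tap signals, of the kind compared against the TDT data above, are commonly estimated with Welch's method. The sketch below uses a synthetic signal and an assumed sampling rate; it is not tied to the FUN3D output format or the actual tap data.

```python
import numpy as np
from scipy.signal import welch

fs = 10_000.0                              # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "pressure tap" signal: broadband noise plus a 350 Hz oscillation.
p = 0.05 * np.sin(2 * np.pi * 350 * t) + 0.01 * np.random.randn(t.size)

f, psd = welch(p, fs=fs, nperseg=2048)     # Welch estimate of the power spectral density
peak_hz = f[np.argmax(psd)]
print(f"dominant frequency in the PSD ≈ {peak_hz:.0f} Hz")
```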

  15. Unenhanced Cone Beam Computed Tomography and Fusion Imaging in Direct Percutaneous Sac Injection for Treatment of Type II Endoleak: Technical Note

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrafiello, Gianpaolo, E-mail: gcarraf@gmail.com; Ierardi, Anna Maria; Radaelli, Alessandro

    Aim: To evaluate safety, feasibility, technical success, and clinical success of direct percutaneous sac injection (DPSI) for the treatment of type II endoleaks (T2EL) using anatomical landmarks on cone beam computed tomography (CBCT) and fusion imaging (FI). Materials and Methods: Eight patients with T2EL were treated with DPSI using CBCT as imaging guidance. Anatomical landmarks on unenhanced CBCT were used for referencing T2EL location in the first five patients, while FI between unenhanced CBCT and pre-procedural computed tomography angiography (CTA) was used in the remaining three patients. Embolization was performed with thrombin, glue, and ethylene–vinyl alcohol copolymer. Technical and clinical success, iodinated contrast utilization, procedural time, fluoroscopy time, and mean radiation dose were registered. Results: DPSI was technically successful in all patients: the needle was correctly positioned at the first attempt in six patients, while in two of the first five patients the needle was repositioned once. Neither minor nor major complications were registered. Average procedural time was 45 min and the average administered iodinated contrast was 13 ml. Mean radiation dose of the procedure was 60.43 Gy cm² and mean fluoroscopy time was 18 min. Clinical success was achieved in all patients (mean follow-up of 36 months): no sign of T2EL was reported in seven patients until the last CT follow-up, while it persisted in one patient with stability of sac diameter. Conclusions: DPSI using unenhanced CBCT and FI is feasible and provides the interventional radiologist with an accurate and safe alternative to endovascular treatment with limited iodinated contrast utilization.

  16. Classification of functional near-infrared spectroscopy signals corresponding to the right- and left-wrist motor imagery for development of a brain-computer interface.

    PubMed

    Naseer, Noman; Hong, Keum-Shik

    2013-10-11

    This paper presents a study on functional near-infrared spectroscopy (fNIRS) indicating that the hemodynamic responses of the right- and left-wrist motor imageries have distinct patterns that can be classified using a linear classifier for the purpose of developing a brain-computer interface (BCI). Ten healthy participants were instructed to imagine kinesthetically the right- or left-wrist flexion indicated on a computer screen. Signals from the right and left primary motor cortices were acquired simultaneously using a multi-channel continuous-wave fNIRS system. Using two distinct features (the mean and the slope of change in the oxygenated hemoglobin concentration), the linear discriminant analysis classifier was used to classify the right- and left-wrist motor imageries resulting in average classification accuracies of 73.35% and 83.0%, respectively, during the 10s task period. Moreover, when the analysis time was confined to the 2-7s span within the overall 10s task period, the average classification accuracies were improved to 77.56% and 87.28%, respectively. These results demonstrate the feasibility of an fNIRS-based BCI and the enhanced performance of the classifier by removing the initial 2s span and/or the time span after the peak value.
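
    The classifier described uses only two features per trial, the mean and the slope of the oxygenated-hemoglobin change over the analysis window. A minimal sketch of that kind of pipeline with scikit-learn, run on synthetic trials rather than fNIRS recordings, and with an assumed 0.1 s sampling interval:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mean_slope_features(hbo_window, dt=0.1):
    """Mean and least-squares slope of an oxy-hemoglobin time course."""
    t = np.arange(hbo_window.size) * dt
    slope = np.polyfit(t, hbo_window, 1)[0]
    return np.array([hbo_window.mean(), slope])

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.1)
# Synthetic trials: class 0 ("right wrist") rises faster than class 1 ("left wrist").
X = np.array([mean_slope_features(0.5 * t * (1 - c) + 0.2 * t * c + rng.normal(0, 0.3, t.size))
              for c in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```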

  17. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    PubMed

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
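
    The idea of a migration-velocity-adaptive moving average can be sketched by letting the averaging window shrink for fast-migrating (sharp) peaks and grow for slow ones. The inverse-velocity window scaling and the parameter names below are illustrative assumptions, not the authors' exact rule.

```python
import numpy as np

def adaptive_moving_average(signal, times, velocity, base_window_s=2.0):
    """Smooth an electropherogram with a window that narrows as migration velocity grows.

    velocity: estimated migration velocity at each point (arbitrary units);
    the inverse-velocity scaling of the window is an illustrative assumption.
    """
    dt = times[1] - times[0]
    v_ref = np.median(velocity)
    smoothed = np.empty_like(signal)
    for i in range(signal.size):
        # faster analytes -> sharper peaks -> shorter averaging window
        win_s = base_window_s * v_ref / max(velocity[i], 1e-12)
        half = max(1, int(round(win_s / (2 * dt))))
        lo, hi = max(0, i - half), min(signal.size, i + half + 1)
        smoothed[i] = signal[lo:hi].mean()
    return smoothed

# Tiny demo: noisy trace with an early (fast) and a late (slow) peak.
t = np.linspace(0, 100, 2000)
trace = np.exp(-((t - 20) / 0.8) ** 2) + np.exp(-((t - 70) / 3.0) ** 2) + 0.05 * np.random.randn(t.size)
v = np.where(t < 45, 2.0, 0.7)            # crude per-region velocity estimate
clean = adaptive_moving_average(trace, t, v)
```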

  18. Speaking Math--A Voice Input, Speech Output Calculator for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Flanagan, Sara; Joshi, Gauri S.; Sheikh, Waseem; Schleppenbach, Dave

    2011-01-01

    This project explored a newly developed computer-based voice input, speech output (VISO) calculator. Three high school students with visual impairments educated at a state school for the blind and visually impaired participated in the study. The time they took to complete assessments and the average number of attempts per problem were recorded…

  19. 5 CFR 839.1115 - What is an actuarial reduction?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...? An actuarial reduction allows you to receive benefits without having to pay an amount due in a lump sum. OPM reduces your annuity in a way that, on average, allows the Fund to recover the amount of the... have to pay at that time. To compute an actuarial reduction, OPM divides the lump sum amount by the...

  20. 5 CFR 839.1115 - What is an actuarial reduction?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...? An actuarial reduction allows you to receive benefits without having to pay an amount due in a lump sum. OPM reduces your annuity in a way that, on average, allows the Fund to recover the amount of the... have to pay at that time. To compute an actuarial reduction, OPM divides the lump sum amount by the...

  1. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., COMPETITIVE MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Risk Basis § 417.588 Computation... 42 Public Health 3 2012-10-01 2012-10-01 false Computation of adjusted average per capita cost (AAPCC). 417.588 Section 417.588 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF...

  2. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Risk Basis § 417.588 Computation of... 42 Public Health 3 2011-10-01 2011-10-01 false Computation of adjusted average per capita cost (AAPCC). 417.588 Section 417.588 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF...

  3. 42 CFR 417.588 - Computation of adjusted average per capita cost (AAPCC).

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Risk Basis § 417.588 Computation of... 42 Public Health 3 2010-10-01 2010-10-01 false Computation of adjusted average per capita cost (AAPCC). 417.588 Section 417.588 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF...

  4. From standard alpha-stable Lévy motions to horizontal visibility networks: dependence of multifractal and Laplacian spectrum

    NASA Astrophysics Data System (ADS)

    Zou, Hai-Long; Yu, Zu-Guo; Anh, Vo; Ma, Yuan-Lin

    2018-05-01

    In recent years, researchers have proposed several methods to transform time series (such as those of fractional Brownian motion) into complex networks. In this paper, we construct horizontal visibility networks (HVNs) based on the α-stable Lévy motion. We aim to study how the multifractal and Laplacian spectra of the transformed networks depend on the parameters α and β of the α-stable Lévy motion. First, we employ the sandbox algorithm to compute the mass exponents and multifractal spectrum to investigate the multifractality of these HVNs. Then we perform least squares fits to find possible relations of the average fractal dimension, the average information dimension, and the average correlation dimension against α, using several methods of model selection. We also investigate possible dependence relations on α of the eigenvalues and energy calculated from the Laplacian and normalized Laplacian operators of the constructed HVNs. All of these constructions and estimates will help us to evaluate the validity and usefulness of the mappings between time series and networks, especially between time series of α-stable Lévy motions and HVNs.
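
    The horizontal visibility rule behind an HVN connects samples i and j whenever every sample strictly between them lies below both. A direct sketch of that construction is shown below; a Gaussian random walk stands in for the α-stable Lévy motion of the paper, and the multifractal and spectral analysis is not reproduced.

```python
import numpy as np

def horizontal_visibility_edges(x):
    """Return the edge list (i, j) of the horizontal visibility graph of series x."""
    edges = []
    n = len(x)
    for i in range(n - 1):
        edges.append((i, i + 1))               # neighbours always see each other
        ceiling = x[i + 1]                     # running max of the samples between i and j
        for j in range(i + 2, n):
            if x[i] > ceiling and x[j] > ceiling:
                edges.append((i, j))           # every intermediate sample is lower than both
            ceiling = max(ceiling, x[j])
            if ceiling >= x[i]:                # nothing further right can still be visible
                break
    return edges

# Demo on a short random walk standing in for the alpha-stable Lévy motion.
rng = np.random.default_rng(1)
series = np.cumsum(rng.standard_normal(200))
edges = horizontal_visibility_edges(series)
degrees = np.bincount(np.array(edges).ravel(), minlength=len(series))
print("average degree:", degrees.mean())
```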

  5. Efficient sensitivity analysis method for chaotic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Haitao, E-mail: liaoht@cae.ac.cn

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  6. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
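
    The recasting step, writing the time-averaged quantity as an extra differential equation, can be illustrated by augmenting the state with an accumulator s(t) satisfying ds/dt = g(x), so that the time average equals s(T)/T. The sketch below does only this bookkeeping for the z-average of the Lorenz system; it does not implement the shadowing or sensitivity machinery of the paper.

```python
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def augmented_lorenz(t, state):
    """Lorenz system plus an accumulator s with ds/dt = z, so s(T)/T is the time-averaged z."""
    x, y, z, s = state
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z, z]

T = 200.0
sol = solve_ivp(augmented_lorenz, (0.0, T), [1.0, 1.0, 1.0, 0.0], rtol=1e-8, atol=1e-8)
z_time_average = sol.y[3, -1] / T          # the recast time-averaged quantity
print(f"time-averaged z over [0, {T:.0f}] ≈ {z_time_average:.3f}")
```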

  7. Natural convection in a vertical plane channel: DNS results for high Grashof numbers

    NASA Astrophysics Data System (ADS)

    Kiš, P.; Herwig, H.

    2014-07-01

    The turbulent natural convection of a gas (Pr = 0.71) between two vertical infinite walls at different but constant temperatures is investigated by means of direct numerical simulation for a wide range of Grashof numbers (6.0 × 10⁶ > Gr > 1.0 × 10³). The maximum Grashof number is almost one order of magnitude higher than those of computations reported in the literature so far. Results for the turbulent transport equations are presented and compared to previous studies, with special attention to the study of Versteegh and Nieuwstadt (Int J Heat Fluid Flow 19:135-149, 1998). All turbulence statistics are available on the TUHH homepage (http://www.tu-harburg.de/tt/dnsdatabase/dbindex.en.html). Accuracy considerations are based on the time-averaged balance equations for kinetic and thermal energy. With the second law of thermodynamics, Nusselt numbers can be determined by evaluating time-averaged wall temperature gradients as well as by a volumetric time-averaged integration. Comparing the results of both approaches leads to a direct measure of the physical consistency.

  8. Quantifying Uncertainty from Computational Factors in Simulations of a Model Ballistic System

    DTIC Science & Technology

    2017-08-01

    Comparison of runs 6–9 with the corresponding simulations from the stop time study (Tables 22 and 23) show that the restart series produces...

  9. An iteration algorithm for optimal network flows

    NASA Astrophysics Data System (ADS)

    Woong, C. J.

    1983-09-01

    A packet switching network has the desirable feature of rapidly handling short (bursty) messages of the type often found in computer communication systems. In evaluating packet switching networks, the average time delay per packet is one of the most important measures of performance. The problem of message routing to minimize time delay is analyzed here using two approaches, called "successive saturation" and "max-slack", for various traffic requirement matrices and networks with fixed topology and link capacities.
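
    The average per-packet delay that such routing schemes minimize is commonly written with Kleinrock's M/M/1 formula, T = (1/γ) Σ λ_i / (C_i − λ_i). A minimal evaluation under that assumption, with hypothetical link flows and capacities (not values from the paper):

```python
def average_packet_delay(link_flows, link_capacities, total_arrival_rate):
    """Kleinrock's average delay: T = (1/gamma) * sum(lambda_i / (C_i - lambda_i)).

    link_flows and link_capacities are in packets/s; every link must be below capacity.
    """
    assert all(f < c for f, c in zip(link_flows, link_capacities)), "a link is saturated"
    return sum(f / (c - f) for f, c in zip(link_flows, link_capacities)) / total_arrival_rate

# Hypothetical 3-link network carrying 40 packets/s of external traffic.
flows = [30.0, 25.0, 15.0]          # packets/s routed over each link
capacities = [50.0, 40.0, 60.0]     # packets/s link capacities
print(f"average delay ≈ {average_packet_delay(flows, capacities, 40.0):.3f} s")
```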

  10. Medium-induced gluon radiation and colour decoherence beyond the soft approximation

    NASA Astrophysics Data System (ADS)

    Apolinário, Liliana; Armesto, Néstor; Milhano, José Guilherme; Salgado, Carlos A.

    2015-02-01

    We derive the in-medium gluon radiation spectrum off a quark within the path integral formalism at finite energies, including all next-to-eikonal corrections in the propagators of quarks and gluons. Results are computed for finite formation times, including interference with vacuum amplitudes. By rewriting the medium averages in a convenient manner we present the spectrum in terms of dipole cross sections and a colour decoherence parameter with the same physical origin as that found in previous studies of the antenna radiation. This factorisation allows us to present a simple physical picture of the medium-induced radiation for any value of the formation time, that is of interest for a probabilistic implementation of the modified parton shower. Known results are recovered for the particular cases of soft radiation and eikonal quark and for the case of a very long medium, with length much larger than the average formation times for medium-induced radiation. Technical details of the computation of the relevant n-point functions in colour space and of the required path integrals in transverse space are provided. The final result completes the calculation of all finite energy corrections for the radiation off a quark in a QCD medium that exist in the small angle approximation and for a recoilless medium.

  11. MaPLE: A MapReduce Pipeline for Lattice-based Evaluation and Its Application to SNOMED CT

    PubMed Central

    Zhang, Guo-Qiang; Zhu, Wei; Sun, Mengmeng; Tao, Shiqiang; Bodenreider, Olivier; Cui, Licong

    2015-01-01

    Non-lattice fragments are often indicative of structural anomalies in ontological systems and, as such, represent possible areas of focus for subsequent quality assurance work. However, extracting the non-lattice fragments in large ontological systems is computationally expensive if not prohibitive, using a traditional sequential approach. In this paper we present a general MapReduce pipeline, called MaPLE (MapReduce Pipeline for Lattice-based Evaluation), for extracting non-lattice fragments in large partially ordered sets and demonstrate its applicability in ontology quality assurance. Using MaPLE in a 30-node Hadoop local cloud, we systematically extracted non-lattice fragments in 8 SNOMED CT versions from 2009 to 2014 (each containing over 300k concepts), with an average total computing time of less than 3 hours per version. With dramatically reduced time, MaPLE makes it feasible not only to perform exhaustive structural analysis of large ontological hierarchies, but also to systematically track structural changes between versions. Our change analysis showed that the average change rates on the non-lattice pairs are up to 38.6 times higher than the change rates of the background structure (concept nodes). This demonstrates that fragments around non-lattice pairs exhibit significantly higher rates of change in the process of ontological evolution. PMID:25705725

  12. MaPLE: A MapReduce Pipeline for Lattice-based Evaluation and Its Application to SNOMED CT.

    PubMed

    Zhang, Guo-Qiang; Zhu, Wei; Sun, Mengmeng; Tao, Shiqiang; Bodenreider, Olivier; Cui, Licong

    2014-10-01

    Non-lattice fragments are often indicative of structural anomalies in ontological systems and, as such, represent possible areas of focus for subsequent quality assurance work. However, extracting the non-lattice fragments in large ontological systems is computationally expensive if not prohibitive, using a traditional sequential approach. In this paper we present a general MapReduce pipeline, called MaPLE (MapReduce Pipeline for Lattice-based Evaluation), for extracting non-lattice fragments in large partially ordered sets and demonstrate its applicability in ontology quality assurance. Using MaPLE in a 30-node Hadoop local cloud, we systematically extracted non-lattice fragments in 8 SNOMED CT versions from 2009 to 2014 (each containing over 300k concepts), with an average total computing time of less than 3 hours per version. With dramatically reduced time, MaPLE makes it feasible not only to perform exhaustive structural analysis of large ontological hierarchies, but also to systematically track structural changes between versions. Our change analysis showed that the average change rates on the non-lattice pairs are up to 38.6 times higher than the change rates of the background structure (concept nodes). This demonstrates that fragments around non-lattice pairs exhibit significantly higher rates of change in the process of ontological evolution.
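
    A pair of concepts forms a non-lattice fragment when it has no unique minimal common ancestor (dually, no unique maximal common descendant) in the subsumption hierarchy. The toy in-memory check below illustrates that property on a hypothetical is-a graph; MaPLE's contribution is distributing this kind of test over an ontology of SNOMED CT scale, which the sketch does not attempt.

```python
from functools import lru_cache

# Toy is-a hierarchy: concept -> set of parents. "root" is the top concept.
PARENTS = {
    "root": set(),
    "A": {"root"}, "B": {"root"},
    "C": {"A", "B"}, "D": {"A", "B"},   # C and D share two incomparable ancestors
}

@lru_cache(maxsize=None)
def ancestors(node):
    """All strict ancestors of a concept (transitive closure of is-a)."""
    result = set()
    for p in PARENTS[node]:
        result |= {p} | ancestors(p)
    return frozenset(result)

def is_non_lattice_pair(a, b):
    """True if a and b lack a unique minimal common ancestor."""
    common = ancestors(a) & ancestors(b)
    minimal = {c for c in common if not any(c in ancestors(d) for d in common if d != c)}
    return len(minimal) > 1

print(is_non_lattice_pair("C", "D"))   # True: both A and B are minimal common ancestors
```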

  13. Real-time LMR control parameter generation using advanced adaptive synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, R.W.; Mott, J.E.

    1990-01-01

    The reactor delta T, the difference between the average core inlet and outlet temperatures, for the liquid-sodium-cooled Experimental Breeder Reactor 2 is empirically synthesized in real time from a multitude of examples of past reactor operation. The real-time empirical synthesis is based on system state analysis (SSA) technology embodied in software on the EBR 2 data acquisition computer. Before the real-time system is put into operation, a selection of reactor plant measurements is made which is predictable over long periods encompassing plant shutdowns, core reconfigurations, core load changes, and plant startups. A serial data link to a personal computer containing SSA software allows the rapid verification of the predictability of these plant measurements via graphical means. After the selection is made, the real-time synthesis provides a fault-tolerant estimate of the reactor delta T accurate to ±1%. 5 refs., 7 figs.

  14. Characterization and Computational Modeling of Minor Phases in Alloy LSHR

    NASA Technical Reports Server (NTRS)

    Jou, Herng-Jeng; Olson, Gregory; Gabb, Timothy; Garg, Anita; Miller, Derek

    2012-01-01

    The minor phases of powder metallurgy disk superalloy LSHR were studied. Samples were consistently heat treated at three different temperatures for long times to approach equilibrium. Additional heat treatments were also performed for shorter times, to assess minor phase kinetics in non-equilibrium conditions. Minor phases including MC carbides, M23C6 carbides, M3B2 borides, and sigma were identified. Their average sizes and total area fractions were determined. CALPHAD thermodynamics databases and PrecipiCalc™, a computational precipitation modeling tool, were employed with Ni-base thermodynamics and diffusion databases to model and simulate the phase microstructural evolution observed in the experiments, with an objective to identify the model limitations and the directions of model enhancement.

  15. Detached-Eddy Simulations of Separated Flow Around Wings With Ice Accretions: Year One Report

    NASA Technical Reports Server (NTRS)

    Choo, Yung K. (Technical Monitor); Thompson, David; Mogili, Prasad

    2004-01-01

    A computational investigation was performed to assess the effectiveness of Detached-Eddy Simulation (DES) as a tool for predicting icing effects. The AVUS code, a public domain flow solver, was employed to compute solutions for an iced wing configuration using DES and steady Reynolds Averaged Navier-Stokes (RANS) equation methodologies. The configuration was an extruded GLC305/944-ice shape section with a rectangular planform. The model was mounted between two walls so no tip effects were considered. The numerical results were validated by comparison with experimental data for the same configuration. The time-averaged DES computations showed some improvement in lift and drag results near stall when compared to steady RANS results. However, comparisons of the flow field details did not show the level of agreement suggested by the integrated quantities. Based on our results, we believe that DES may prove useful in a limited sense to provide analysis of iced wing configurations when there is significant flow separation, e.g., near stall, where steady RANS computations are demonstrably ineffective. However, more validation is needed to determine what role DES can play as part of an overall icing effects prediction strategy. We conclude the report with an assessment of existing computational tools for application to the iced wing problem and a discussion of issues that merit further study.

  16. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    NASA Astrophysics Data System (ADS)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. The proposed CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. We propose a CAD method whose principal idea is the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach using a fast deinterlacing algorithm rather than relying on the CAD algorithm alone. The proposed hybrid approach of switching between the conventional schemes (LA and MELA) and our CAD was proposed to reduce the overall computational load. A reliable condition to be used for switching the schemes was presented after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
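
    The simplest branch of the selection scheme, line averaging, fills each missing line of a field with the mean of the known lines above and below it. A minimal sketch of that branch only (the MELA and CAD branches are not reproduced here):

```python
import numpy as np

def line_average_deinterlace(field, top_field=True):
    """Fill the missing lines of one interlaced field by averaging the lines above and below."""
    h, w = field.shape
    frame = np.zeros((2 * h, w), dtype=field.dtype)
    offset = 0 if top_field else 1
    frame[offset::2] = field                       # copy the known (sampled) lines
    upper = frame[offset:2 * h - 2:2]              # known line above each interior gap
    lower = frame[offset + 2::2]                   # known line below each interior gap
    frame[offset + 1:2 * h - 1:2] = ((upper.astype(np.float32) + lower) / 2).astype(field.dtype)
    # boundary line with only one known neighbour: replicate it
    if top_field:
        frame[-1] = frame[-2]
    else:
        frame[0] = frame[1]
    return frame

# Toy 4-line field widened to an 8-line progressive frame.
field = np.arange(4 * 6, dtype=np.uint8).reshape(4, 6)
print(line_average_deinterlace(field).shape)       # (8, 6)
```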

  17. Efficient methods for implementation of multi-level nonrigid mass-preserving image registration on GPUs and multi-threaded CPUs.

    PubMed

    Ellingwood, Nathan D; Yin, Youbing; Smith, Matthew; Lin, Ching-Long

    2016-04-01

    Faster and more accurate methods for registration of images are important for research involved in conducting population-based studies that utilize medical imaging, as well as improvements for use in clinical applications. We present a novel computation- and memory-efficient multi-level method on graphics processing units (GPU) for performing registration of two computed tomography (CT) volumetric lung images. We developed a computation- and memory-efficient Diffeomorphic Multi-level B-Spline Transform Composite (DMTC) method to implement nonrigid mass-preserving registration of two CT lung images on GPU. The framework consists of a hierarchy of B-Spline control grids of increasing resolution. A similarity criterion known as the sum of squared tissue volume difference (SSTVD) was adopted to preserve lung tissue mass. The use of SSTVD consists of the calculation of the tissue volume, the Jacobian, and their derivatives, which makes its implementation on GPU challenging due to memory constraints. The use of the DMTC method enabled reduced computation and memory storage of variables with minimal communication between GPU and Central Processing Unit (CPU) due to the ability to pre-compute values. The method was assessed on six healthy human subjects. Resultant GPU-generated displacement fields were compared against the previously validated CPU counterpart fields, showing good agreement with an average normalized root mean square error (nRMS) of 0.044 ± 0.015. Runtime and performance speedup are compared between single-threaded CPU, multi-threaded CPU, and GPU algorithms. Best performance speedup occurs at the highest resolution in the GPU implementation for the SSTVD cost and cost gradient computations, with a speedup of 112 times that of the single-threaded CPU version and 11 times over the twelve-threaded version when considering average time per iteration using a Nvidia Tesla K20X GPU. The proposed GPU-based DMTC method outperforms its multi-threaded CPU version in terms of runtime. Total registration time was reduced to 2.9 min on the GPU version, compared to 12.8 min on the twelve-threaded CPU version and 112.5 min on a single-threaded CPU. Furthermore, the GPU implementation discussed in this work can be adapted for use with other cost functions that require calculation of the first derivatives.

  18. Real-time line matching from stereo images using a nonparametric transform of spatial relations and texture information

    NASA Astrophysics Data System (ADS)

    Park, Jonghee; Yoon, Kuk-Jin

    2015-02-01

    We propose a real-time line matching method for stereo systems. To achieve real-time performance while retaining a high level of matching precision, we first propose a nonparametric transform to represent the spatial relations between neighboring lines and nearby textures as a binary stream. Since the length of a line can vary across images, the matching costs between lines are computed within an overlap area (OA) based on the binary stream. The OA is determined for each line pair by employing the properties of a rectified image pair. Finally, the line correspondence is determined using a winner-takes-all method with a left-right consistency check. To reduce the computational time requirements further, we filter out unreliable matching candidates in advance based on their rectification properties. The performance of the proposed method was compared with state-of-the-art methods in terms of the computational time, matching precision, and recall. The proposed method required 47 ms to match lines from an image pair in the KITTI dataset with an average precision of 95%. We also verified the proposed method under image blur, illumination variation, and viewpoint changes.
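
    With spatial relations and texture encoded as a binary stream, the matching cost over the overlap area reduces to a normalized Hamming distance, followed by winner-takes-all selection with a left-right consistency check. The sketch below is schematic: the random bit vectors stand in for the paper's descriptors, and the overlap handling is deliberately simplified.

```python
import numpy as np

def hamming_cost(desc_left, desc_right):
    """Matching cost between two binary descriptors, restricted to their common length."""
    overlap = min(desc_left.size, desc_right.size)     # crude stand-in for the overlap area
    a, b = desc_left[:overlap], desc_right[:overlap]
    return np.count_nonzero(a != b) / overlap          # normalized Hamming distance

def match_lines(left_descs, right_descs, max_cost=0.3):
    """Winner-takes-all matching with a left-right consistency check."""
    cost = np.array([[hamming_cost(l, r) for r in right_descs] for l in left_descs])
    best_r = cost.argmin(axis=1)                       # best right candidate per left line
    best_l = cost.argmin(axis=0)                       # best left candidate per right line
    return [(i, j) for i, j in enumerate(best_r)
            if best_l[j] == i and cost[i, j] <= max_cost]

rng = np.random.default_rng(2)
left = [rng.integers(0, 2, 256, dtype=np.uint8) for _ in range(5)]
# Right descriptors: the same lines with ~5% of bits flipped (simulated viewpoint change).
right = [np.bitwise_xor(d, (rng.random(256) < 0.05).astype(np.uint8)) for d in left]
print(match_lines(left, right))                        # expected: [(0, 0), (1, 1), ...]
```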

  19. Effect of Variations in IRU Integration Time Interval On Accuracy of Aqua Attitude Estimation

    NASA Technical Reports Server (NTRS)

    Natanson, G. A.; Tracewell, Dave

    2003-01-01

    During Aqua launch support, attitude analysts noticed several anomalies in Onboard Computer (OBC) rates and in rates computed by the ground Attitude Determination System (ADS). These included: 1) periodic jumps in the OBC pitch rate every 2 minutes; 2) spikes in the ADS pitch rate every 4 minutes; 3) close agreement between pitch rates computed by the ADS and those derived from telemetered OBC quaternions (in contrast to the step-wise pattern observed for telemetered OBC rates); and 4) spikes of +/- 10 milliseconds in telemetered IRU integration time every 4 minutes (despite the fact that telemetered time tags of any two sequential IRU measurements were always 1 second apart from each other). An analysis presented in the paper explains this anomalous behavior by a small average offset of about 0.5 +/- 0.05 microseconds in the time interval between two sequential accumulated angle measurements. It is shown that the error in the estimated pitch angle due to the OBC neglecting the aforementioned variations in the integration time interval is within +/- 2 arcseconds. Ground attitude solutions are found to be accurate enough to see the effect of the variations on the accuracy of the estimated pitch angle.

  20. The Role of Parents and Related Factors on Adolescent Computer Use

    PubMed Central

    Epstein, Jennifer A.

    2012-01-01

    Background: Research suggested the importance of parents in their adolescents' computer activity. Spending too much time on the computer for recreational purposes, in particular, has been found to be related to areas of public health concern in children and adolescents, including obesity and substance use. Design and Methods: The goal of the research was to determine the association between recreational computer use and potentially linked factors (parental monitoring, social influences to use computers including parents, age of first computer use, self-control, and particular internet activities). Participants (aged 13-17 years and residing in the United States) were recruited via the Internet to complete an anonymous survey online using a survey tool. The target sample of 200 participants who completed the survey was achieved. The sample's average age was 16, and 63% were girls. Results: A set of regressions was run with recreational computer use as the dependent variable. Conclusions: Less parental monitoring, younger age at first computer use, listening to or downloading music from the internet more frequently, using the internet for educational purposes less frequently, and parents' use of the computer for pleasure were related to spending a greater percentage of time on non-school computer use. These findings suggest the importance of parental monitoring and parental computer use on their children's own computer use, and the influence of some internet activities on adolescent computer use. Finally, programs aimed at parents to help them increase the age at which their children start using computers and learn how to place limits on recreational computer use are needed. PMID:25170449

  1. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    NASA Technical Reports Server (NTRS)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhance an electromagnetics code (CHARGE) so that it can effectively model antenna problems; apply lessons learned from high-order/spectral solutions of swirling 3D jets to the electromagnetics project; transition a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; develop and demonstrate improved radiation-absorbing boundary conditions for high-order CEM; and extend the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  2. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.

  3. Adaptive allocation of decisionmaking responsibility between human and computer in multitask situations

    NASA Technical Reports Server (NTRS)

    Chu, Y.-Y.; Rouse, W. B.

    1979-01-01

    As human and computer come to have overlapping decisionmaking abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decisionmaker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing theory formulation of multitask decisionmaking is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decisionmaking situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.
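
    The performance measures named above (average waiting time and server occupancy) have simple closed forms in the most basic single-server queueing model. The sketch below is only a generic M/M/1 illustration with assumed rates, not the paper's multitask formulation:

        # Generic single-server (M/M/1) illustration with assumed parameters.
        arrival_rate = 0.8    # tasks per unit time arriving at the human decisionmaker (assumed)
        service_rate = 1.0    # tasks per unit time the human can complete (assumed)

        occupancy = arrival_rate / service_rate                 # fraction of time the server is busy
        avg_wait = occupancy / (service_rate - arrival_rate)    # mean time a task spends waiting in queue

        print(f"occupancy = {occupancy:.2f}, average waiting time = {avg_wait:.2f}")
        # A threshold policy of the kind proposed would switch the computer aid on whenever the
        # queue (and hence the predicted waiting cost) exceeds a workload limit, and off again below it.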

  4. Real-Time Classification of Hand Motions Using Ultrasound Imaging of Forearm Muscles.

    PubMed

    Akhlaghi, Nima; Baker, Clayton A; Lahlou, Mohamed; Zafar, Hozaifah; Murthy, Karthik G; Rangwala, Huzefa S; Kosecka, Jana; Joiner, Wilsaan M; Pancrazio, Joseph J; Sikdar, Siddhartha

    2016-08-01

    Surface electromyography (sEMG) has been the predominant method for sensing electrical activity for a number of applications involving muscle-computer interfaces, including myoelectric control of prostheses and rehabilitation robots. Ultrasound imaging for sensing mechanical deformation of functional muscle compartments can overcome several limitations of sEMG, including the inability to differentiate between deep contiguous muscle compartments, low signal-to-noise ratio, and lack of a robust graded signal. The objective of this study was to evaluate the feasibility of real-time graded control using a computationally efficient method to differentiate between complex hand motions based on ultrasound imaging of forearm muscles. Dynamic ultrasound images of the forearm muscles were obtained from six able-bodied volunteers and analyzed to map muscle activity based on the deformation of the contracting muscles during different hand motions. Each participant performed 15 different hand motions, including digit flexion, different grips (i.e., power grasp and pinch grip), and grips in combination with wrist pronation. During the training phase, we generated a database of activity patterns corresponding to different hand motions for each participant. During the testing phase, novel activity patterns were classified using a nearest neighbor classification algorithm based on that database. The average classification accuracy was 91%. Real-time image-based control of a virtual hand showed an average classification accuracy of 92%. Our results demonstrate the feasibility of using ultrasound imaging as a robust muscle-computer interface. Potential clinical applications include control of multiarticulated prosthetic hands, stroke rehabilitation, and fundamental investigations of motor control and biomechanics.
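
    The classification step described above is a plain nearest-neighbor lookup against each participant's database of activity patterns. A minimal sketch, with the image size, distance metric, and data layout all assumed for illustration:

        import numpy as np

        def nearest_neighbor_label(pattern, database):
            # pattern: 2-D activity map; database: list of (activity_map, motion_label) pairs.
            flat = pattern.ravel()
            dists = [np.linalg.norm(flat - ref.ravel()) for ref, _ in database]
            return database[int(np.argmin(dists))][1]

        # toy usage: three stored motions, one noisy query
        rng = np.random.default_rng(1)
        templates = {m: rng.random((32, 32)) for m in ("power_grasp", "pinch_grip", "index_flexion")}
        database = [(img, name) for name, img in templates.items()]
        query = templates["pinch_grip"] + 0.05 * rng.standard_normal((32, 32))
        print(nearest_neighbor_label(query, database))    # -> "pinch_grip"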

  5. Evaluating MRI based vascular wall motion as a biomarker of Fontan hemodynamic performance

    NASA Astrophysics Data System (ADS)

    Menon, Prahlad G.; Hong, Haifa

    2015-03-01

    The Fontan procedure for single-ventricle heart disease involves creation of pathways to divert venous blood from the superior and inferior venae cavae (SVC, IVC) directly into the pulmonary arteries (PA), bypassing the right ventricle. For optimal surgical outcomes, venous flow energy loss in the resulting vascular construction must be minimized, and ensuring a close-to-equal flow distribution from the Fontan conduit connecting the IVC to the left and right PA is paramount. This requires patient-specific hemodynamic evaluation using computational fluid dynamics (CFD) simulations, which are often time and resource intensive, limiting their applicability for real-time patient management in the clinic. In this study, we report preliminary efforts at identifying a new non-invasive imaging-based surrogate for CFD-simulated hemodynamics. We establish correlations between computed hemodynamic criteria from CFD modeling and cumulative wall displacement characteristics of the Fontan conduit quantified from cine cardiovascular MRI segmentations over time (i.e., 20 cardiac phases gated from the start of ventricular systole) in 5 unique Fontan surgical connections. To focus on diameter variations while discounting side-to-side swaying motion of the Fontan conduit, the difference between its instantaneous regional expansion and inward contraction (averaged across the conduit) was computed and analyzed. Maximum conduit-averaged expansion over the cardiac cycle showed a weak, statistically non-significant correlation with the anatomy-specific diametric offset between the axes of the IVC and SVC (r2 = 0.13, p = 0.55), a known factor correlated with Fontan energy loss and IVC-to-PA flow distribution. Investigation in a larger study cohort is needed to establish stronger statistical correlations.

  6. Computer program for analysis of hemodynamic response to head-up tilt test

    NASA Astrophysics Data System (ADS)

    Świątek, Eliza; Cybulski, Gerard; Koźluk, Edward; Piątkowska, Agnieszka; Niewiadomski, Wiktor

    2014-11-01

    The aim of this work was to create a computer program, written in the MATLAB environment, which enables the visualization and analysis of hemodynamic parameters recorded during a passive tilt test using the CNS Task Force Monitor System. The application was created to help in the assessment of the relationship between the values and dynamics of changes of the selected parameters and the risk of orthostatic syncope. The signal analysis included: R-R intervals (RRI), heart rate (HR), systolic blood pressure (sBP), diastolic blood pressure (dBP), mean blood pressure (mBP), stroke volume (SV), stroke index (SI), cardiac output (CO), cardiac index (CI), total peripheral resistance (TPR), total peripheral resistance index (TPRI), left ventricular ejection time (LVET) and thoracic fluid content (TFC). The program enables the user to visualize waveforms for a selected parameter and to perform smoothing with selected moving-average parameters. It allows one to construct a graph of means for any range, and a Poincaré plot for a selected time range. The program automatically determines the average value of the parameter before the tilt, its minimum and maximum values immediately after the change of position, and the times of their occurrence. It is possible to correct the automatically detected points manually. For the R-R interval, it determines the acceleration index (AI) and the brake index (BI). It is possible to save the calculated values to an XLS file with a name specified by the user. The application has a user-friendly graphical interface and can run on a computer that has no MATLAB software.
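
    Two of the core operations the program performs, moving-average smoothing and locating the pre-tilt baseline plus the post-tilt extrema, are easy to sketch. The stand-in below is Python rather than the authors' MATLAB, and the window length, signal, and tilt time are assumed for illustration:

        import numpy as np

        def moving_average(x, window=5):
            # Centered moving-average smoothing of a beat-to-beat series.
            return np.convolve(x, np.ones(window) / window, mode="same")

        def tilt_response(signal, t, t_tilt):
            # Pre-tilt mean plus the post-tilt minimum/maximum values and their times.
            pre, post, t_post = signal[t < t_tilt], signal[t >= t_tilt], t[t >= t_tilt]
            i_min, i_max = int(np.argmin(post)), int(np.argmax(post))
            return {"baseline": float(pre.mean()),
                    "min": (float(post[i_min]), float(t_post[i_min])),
                    "max": (float(post[i_max]), float(t_post[i_max]))}

        # toy usage: heart rate dipping then overshooting after a tilt at t = 60 s
        t = np.arange(0.0, 120.0, 0.5)
        hr = 70 + np.where(t < 60, 0.0,
                           10 * np.exp(-(t - 63) ** 2 / 20) - 5 * np.exp(-(t - 61) ** 2 / 5))
        print(tilt_response(moving_average(hr), t, 60.0))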

  7. Computation of Asteroid Proper Elements: Recent Advances

    NASA Astrophysics Data System (ADS)

    Knežević, Z.

    2017-12-01

    The recent advances in computation of asteroid proper elements are briefly reviewed. Although not representing real breakthroughs in the computation and stability assessment of proper elements, these advances can still be considered important improvements offering solutions to some practical problems encountered in the past. The problem of obtaining unrealistic values of the perihelion frequency for very-low-eccentricity orbits is solved by computing frequencies using the frequency-modified Fourier transform. The synthetic resonant proper elements adjusted to a given secular resonance helped to prove the existence of the Astraea asteroid family. The preliminary assessment of the stability with time of proper elements computed by means of the analytical theory provides a good indication of their poorer performance with respect to their synthetic counterparts, and advocates in favor of ceasing their regular maintenance; the final decision should, however, be taken on the basis of a more comprehensive and reliable direct estimate of their individual and sample-average deviations from constancy.

  8. Correlation based networks of equity returns sampled at different time horizons

    NASA Astrophysics Data System (ADS)

    Tumminello, M.; di Matteo, T.; Aste, T.; Mantegna, R. N.

    2007-01-01

    We investigate the planar maximally filtered graphs of the portfolio of the 300 most capitalized stocks traded at the New York Stock Exchange during the time period 2001-2003. Topological properties such as the average length of shortest paths, the betweenness and the degree are computed on different planar maximally filtered graphs generated by sampling the returns at different time horizons ranging from 5 min up to one trading day. This analysis confirms that the selected stocks compose a hierarchical system progressively structuring as the sampling time horizon increases. Finally, a cluster formation, associated with economic sectors, is quantitatively investigated.

  9. Masked and unmasked error-related potentials during continuous control and feedback

    NASA Astrophysics Data System (ADS)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain-computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory that results in a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's kappa, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked-error and unmasked-error classes revealed results at chance level (average Cohen's kappa, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.

  10. SPARSE—A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure

    PubMed Central

    Davis, Sean L.; Sen, Oishik; Udaykumar, H. S.

    2017-01-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian–Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles. PMID:28413341

  11. SPARSE-A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure.

    PubMed

    Davis, Sean L; Jacobs, Gustaaf B; Sen, Oishik; Udaykumar, H S

    2017-03-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian-Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles.

  12. Implementation and evaluation of various demons deformable image registration algorithms on a GPU.

    PubMed

    Gu, Xuejun; Pan, Hubert; Liang, Yun; Castillo, Richard; Yang, Deshan; Choi, Dongju; Castillo, Edward; Majumdar, Amitava; Guerrero, Thomas; Jiang, Steve B

    2010-01-07

    Online adaptive radiation therapy (ART) promises the ability to deliver an optimal treatment in response to daily patient anatomic variation. A major technical barrier for the clinical implementation of online ART is the requirement of rapid image segmentation. Deformable image registration (DIR) has been used as an automated segmentation method to transfer tumor/organ contours from the planning image to daily images. However, the current computational time of DIR is insufficient for online ART. In this work, this issue is addressed by using computer graphics processing units (GPUs). A gray-scale-based DIR algorithm called demons and five of its variants were implemented on GPUs using the compute unified device architecture (CUDA) programming environment. The spatial accuracy of these algorithms was evaluated over five sets of pulmonary 4D CT images with an average size of 256 x 256 x 100 and more than 1100 expert-determined landmark point pairs each. For all the testing scenarios presented in this paper, the GPU-based DIR computation required around 7 to 11 s to yield an average 3D error ranging from 1.5 to 1.8 mm. It is interesting to find that the original passive-force demons algorithm outperforms the subsequently proposed variants when accuracy, efficiency, and ease of implementation are considered together.
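
    For reference, the core of the classic passive-force demons scheme is a per-voxel update driven by the intensity difference and the fixed-image gradient, followed by Gaussian smoothing of the displacement field. The 2-D numpy sketch below is a minimal stand-in for illustration only; the paper's implementation is CUDA-based and multi-resolution, and the parameters here are assumed:

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def demons_step(fixed, moving, u, v, sigma=1.0):
            # One passive-force demons iteration; (u, v) is the current displacement field (x, y).
            yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
            warped = map_coordinates(moving, [yy + v, xx + u], order=1)   # moving image under current field
            gy, gx = np.gradient(fixed)                                   # "passive" forces use the fixed-image gradient
            diff = warped - fixed
            denom = gx**2 + gy**2 + diff**2
            denom[denom == 0] = 1e-12
            u = gaussian_filter(u - diff * gx / denom, sigma)             # Gaussian smoothing regularizes the field
            v = gaussian_filter(v - diff * gy / denom, sigma)
            return u, v

        # toy usage: one update step on a pair of shifted blobs
        yy, xx = np.mgrid[0:64, 0:64].astype(float)
        fixed = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 40.0)
        moving = np.exp(-((xx - 35) ** 2 + (yy - 32) ** 2) / 40.0)
        u, v = demons_step(fixed, moving, np.zeros_like(fixed), np.zeros_like(fixed))
        print(u.shape, float(np.abs(u).max()))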

  13. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for the sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine-grid solutions and experimental data. In addition, the grid adaptation strategy discussed here simplifies the grid generation process and results in improved computational efficiency of CFD simulations.

  14. Tight Bounds for Minimax Grid Matching, with Applications to the Average Case Analysis of Algorithms.

    DTIC Science & Technology

    1986-05-01

    MIT Laboratory for Computer Science technical report MIT/LCS/TM-298: Tight Bounds for Minimax Grid Matching, with Applications to the Average Case Analysis of Algorithms (interim research report, May 1986).

  15. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    PubMed

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
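
    A compact way to see the Grassmann Average idea is the iteration below: represent each zero-mean observation by its direction and magnitude, then repeatedly average the sign-aligned directions. This is a minimal numpy sketch of the un-trimmed GA only; the initialization, convergence tolerance, and sanity check are assumed choices, and the robust/trimmed variants are not shown:

        import numpy as np

        def grassmann_average(X, n_iter=100, tol=1e-10):
            # X: (n_samples, n_features) zero-mean data; returns a unit vector spanning the average subspace.
            norms = np.linalg.norm(X, axis=1)
            keep = norms > 0
            U, w = X[keep] / norms[keep, None], norms[keep]      # unit directions and weights
            q = U[0].copy()
            for _ in range(n_iter):
                signs = np.sign(U @ q)
                signs[signs == 0] = 1.0
                q_new = (signs * w) @ U                          # weighted average of sign-aligned directions
                q_new /= np.linalg.norm(q_new)
                if abs(float(q_new @ q)) > 1 - tol:              # subspace has stopped moving
                    return q_new
                q = q_new
            return q

        # sanity check on well-behaved Gaussian data: GA agrees with the leading principal direction
        rng = np.random.default_rng(0)
        X = rng.standard_normal((2000, 2)) @ np.diag([3.0, 0.5])
        X -= X.mean(axis=0)
        print(grassmann_average(X))    # close to +/- [1, 0]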

  16. Mobile GPU-based implementation of automatic analysis method for long-term ECG.

    PubMed

    Fan, Xiaomao; Yao, Qihang; Li, Ye; Chen, Runge; Cai, Yunpeng

    2018-05-03

    Long-term electrocardiogram (ECG) is one of the important diagnostic assistant approaches in capturing intermittent cardiac arrhythmias. The combination of miniaturized wearable holters and healthcare platforms enables people to have their cardiac condition monitored at home. The high computational burden created by concurrent processing of numerous holter data poses a serious challenge to the healthcare platform. An alternative solution is to shift the analysis tasks from healthcare platforms to the mobile computing devices. However, long-term ECG data processing is quite time consuming due to the limited computational power of the mobile central processing unit (CPU). This paper aimed to propose a novel parallel automatic ECG analysis algorithm which exploited the mobile graphics processing unit (GPU) to reduce the response time for processing long-term ECG data. By studying the architecture of the sequential automatic ECG analysis algorithm, we parallelized the time-consuming parts and reorganized the entire pipeline in the parallel algorithm to fully utilize the heterogeneous computing resources of the CPU and GPU. The experimental results showed that the average execution time of the proposed algorithm on a clinical long-term ECG dataset (duration 23.0 ± 1.0 h per signal) is 1.215 ± 0.140 s, which achieved an average speedup of 5.81 ± 0.39× compared with the sequential algorithm, without compromising analysis accuracy. Meanwhile, the battery energy consumption of the automatic ECG analysis algorithm was reduced by 64.16%. Excluding energy consumption from data loading, 79.44% of the energy consumption could be saved, which alleviates the problem of limited battery working hours for mobile devices. The reduction of response time and battery energy consumption in ECG analysis not only brings a better quality of experience to holter users, but also makes it possible to use mobile devices as ECG terminals for healthcare professionals such as physicians and health advisers, enabling them to inspect patient ECG recordings onsite efficiently without the need for a high-quality wide-area network environment.

  17. A Three-Dimensional Statistical Average Skull: Application of Biometric Morphing in Generating Missing Anatomy.

    PubMed

    Teshima, Tara Lynn; Patel, Vaibhav; Mainprize, James G; Edwards, Glenn; Antonyshyn, Oleh M

    2015-07-01

    The utilization of three-dimensional modeling technology in craniomaxillofacial surgery has grown exponentially during the last decade. Future development, however, is hindered by the lack of a normative three-dimensional anatomic dataset and a statistical mean three-dimensional virtual model. The purpose of this study is to develop and validate a protocol to generate a statistical three-dimensional virtual model based on a normative dataset of adult skulls. Two hundred adult skull CT images were reviewed. The average three-dimensional skull was computed by processing each CT image in the series using a thin-plate spline geometric morphometric protocol. Our statistical average three-dimensional skull was validated by reconstructing patient-specific topography in cranial defects. The experiment was repeated 4 times. In each case, computer-generated cranioplasties were compared directly to the original intact skull, and the errors describing the difference between the prediction and the original were calculated. A normative database of 33 adult human skulls was collected. Using 21 anthropometric landmark points, a protocol for three-dimensional skull landmarking and data reduction was developed and a statistical average three-dimensional skull was generated. Our results show that the root mean square error (RMSE) for restoration of a known defect using the native best-match skull, our statistical average skull, and the worst-match skull was 0.58, 0.74, and 4.4 mm, respectively. The ability to statistically average craniofacial surface topography will be a valuable instrument for deriving missing anatomy in complex craniofacial defects and deficiencies as well as in evaluating morphologic results of surgery.

  18. Validation of US3D for Capsule Aerodynamics using 05-CA Wind Tunnel Test Data

    NASA Technical Reports Server (NTRS)

    Schwing, Alan

    2012-01-01

    Several comparisons of computational fluid dynamics to wind tunnel test data are shown for the purpose of code validation. The wind tunnel test, 05-CA, uses a 7.66% model of NASA's Multi-Purpose Crew Vehicle in the 11-foot test section of the Ames Unitary Plan Wind Tunnel. A variety of freestream conditions over four Mach numbers and three angles of attack are considered. Test data comparisons include time-averaged integrated forces and moments, time-averaged static pressure ports on the surface, and Strouhal number. The applicability of the US3D code to subsonic and transonic flow over a bluff body is assessed on a comprehensive data set. The close agreement validates US3D for highly separated flows similar to those examined here.

  19. A preliminary evaluation of nearshore extreme sea level and wave models for fringing reef environments

    NASA Astrophysics Data System (ADS)

    Hoeke, R. K.; Reyns, J.; O'Grady, J.; Becker, J. M.; Merrifield, M. A.; Roelvink, J. A.

    2016-02-01

    Oceanic islands are widely perceived as vulnerable to sea level rise and are characterized by steep nearshore topography and fringing reefs. In such settings, nearshore dynamics and (non-tidal) water level variability tend to be dominated by wind-wave processes. These processes are highly sensitive to reef morphology and roughness and to the regional wave climate. Thus sea level extremes tend to be highly localized, and their likelihood can be expected to change in the future (beyond simple extrapolation of sea level rise scenarios): e.g., sea level rise may increase the effective mean depth of reef crests and flats, and ocean acidification and/or increased temperatures may lead to changes in reef structure. The problem is sufficiently complex that analytic or numerical approaches are necessary to estimate current hazards and explore potential future changes. In this study, we evaluate the capacity of several analytic/empirical approaches and phase-averaged and phase-resolved numerical models at sites in the insular tropical Pacific. We consider their ability to predict time-averaged wave setup and instantaneous water level exceedance probability (or dynamic wave run-up) as well as their computational cost; where possible, we compare the model results with in situ observations from a number of previous studies. Preliminary results indicate that analytic approaches are by far the most computationally efficient, but tend to perform poorly when alongshore-straight and parallel morphology cannot be assumed. Phase-averaged models tend to perform well with respect to wave setup in such situations, but are unable to predict processes related to individual waves or wave groups, such as infragravity motions or wave run-up. Phase-resolved models tend to perform best, but come at high computational cost, an important consideration when exploring possible future scenarios. A new approach combining an unstructured computational grid with a quasi-phase-averaged approach (i.e., only phase-resolving motions below a frequency cutoff) shows promise as a good compromise between computational efficiency and resolving processes such as wave run-up and overtopping in more complex bathymetric situations.

  20. 17 CFR 210.8-04 - Financial statements of businesses acquired or to be acquired.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... interests for the most recent fiscal year is at least 10 percent lower than the average of the income for the last five fiscal years, such average income should be substituted for purposes of the computation. Any loss years should be omitted for purposes of computing average income. (c)(1) If none of the...

  1. Stochastic simulation and analysis of biomolecular reaction networks

    PubMed Central

    Frazier, John M; Chushak, Yaroslav; Foy, Brent

    2009-01-01

    Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of the time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of the time averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796

  2. Traveltime and longitudinal dispersion in Illinois streams

    USGS Publications Warehouse

    Graf, Julia B.

    1986-01-01

    Twenty-seven measurements of traveltime and longitudinal dispersion in 10 Illinois streams made from 1975 to 1982 provide data needed for estimating traveltime of peak concentration of a conservative solute, traveltime of the leading edge of a solute cloud, peak concentration resulting from injection of a given quantity of solute, and passage time of solute past a given point on a stream. These four variables can be estimated graphically for each stream from distance of travel and either discharge at the downstream end of the reach or flow-duration frequency. From equations developed from field measurements, the traveltime and dispersion characteristics also can be estimated for other unregulated streams in Illinois that have drainage areas less than about 1,500 square miles. For unmeasured streams, traveltime of peak concentration and of the leading edge of the cloud are related to discharge at the downstream end of the reach and to distance of travel. For both measured and unmeasured streams, peak concentration and passage time are best estimated from the relation of each to traveltime. In measured streams, dispersion efficiency is greater than that predicted by Fickian diffusion theory. The rate of decrease in peak concentration with traveltime is about equal to the rate of increase in passage time. Average velocity in a stream reach, given by the velocity of the center of solute mass in that reach, can be estimated from an equation developed from measured values. The equation relates average reach velocity to discharge at the downstream end of the reach. Average reach velocities computed for 9 of the 10 streams from available equations that are based on hydraulic-geometry relations are high relative to measured values. The estimating equation developed from measured velocities provides estimates of average reach velocity that are closer to measured velocities than are those computed using equations developed from hydraulic-geometry relations.

  3. Assessing the effects of manual dexterity and playing computer games on catheter-wire manipulation for inexperienced operators.

    PubMed

    Alsafi, Z; Hameed, Y; Amin, P; Shamsad, S; Raja, U; Alsafi, A; Hamady, M S

    2017-09-01

    To investigate the effect of playing computer games and manual dexterity on catheter-wire manipulation in a mechanical aortic model. Medical student volunteers filled in a preprocedure questionnaire assessing their exposure to computer games. Their manual dexterity was measured using a smartphone game. They were then shown a video clip demonstrating renal artery cannulation and were asked to reproduce this. All attempts were timed. Two-tailed Student's t-test was used to compare continuous data, while Fisher's exact test was used for categorical data. Fifty students aged 18-22 years took part in the study. Forty-six completed the task at an average of 168 seconds (range 103-301 seconds). There was no significant difference in the dexterity score or time to cannulate the renal artery between male and female students. Students who played computer games for >10 hours per week had better dexterity scores than those who did not play computer games: 9.1 versus 10.2 seconds (p=0.0237). Four of 19 students who did not play computer games failed to complete the task, while all of those who played computer games regularly completed the task (p=0.0168). Playing computer games is associated with better manual dexterity and ability to complete a basic interventional radiology task for novices. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  4. Monte Carlo based method for fluorescence tomographic imaging with lifetime multiplexing using time gates

    PubMed Central

    Chen, Jin; Venugopal, Vivek; Intes, Xavier

    2011-01-01

    Time-resolved fluorescence optical tomography allows 3-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiply scattered photons is required. In this article, we derive a computationally efficient Monte Carlo based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5 nmol of ICG and IRDye™ 800CW, respectively, embedded in a diffusive medium mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610

  5. Effect of the time window on the heat-conduction information filtering model

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Song, Wen-Jun; Hou, Lei; Zhang, Yi-Lu; Liu, Jian-Guo

    2014-05-01

    Recommendation systems have been proposed to identify the potential tastes and preferences of ordinary users online; however, an understanding of how the time window affects their performance has been missing, which is critical for saving memory and decreasing computational complexity. In this paper, by gradually expanding the time window, we investigate the impact of the time window on the heat-conduction information filtering model with ten similarity measures. The experimental results on the benchmark dataset Netflix indicate that by only using approximately 11.11% of the most recent rating records, the accuracy could be improved by an average of 33.16% and the diversity could be improved by 30.62%. In addition, the recommendation performance on the MovieLens dataset could be preserved by only considering approximately 10.91% of the most recent records. While improving or preserving recommendation performance, our findings have significant practical value, largely reducing the computational time and shortening the required data storage space.
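
    The preprocessing step implied above is simply restricting the model to the most recent fraction of rating records before computing recommendations. A minimal sketch with synthetic data follows; the quantile-based cutoff and the data layout are assumptions, and the fraction matches the one quoted for Netflix:

        import numpy as np

        def recent_window(timestamps, frac=0.1111):
            # Boolean mask selecting the most recent `frac` of records by timestamp.
            cutoff = np.quantile(timestamps, 1.0 - frac)
            return timestamps >= cutoff

        # toy usage: 100,000 synthetic rating timestamps
        rng = np.random.default_rng(0)
        ts = rng.integers(0, 1_000_000, size=100_000)
        mask = recent_window(ts)
        print(mask.mean())    # ~0.11; the filtering model is then trained on records[mask] only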

  6. Time constant determination for electrical equivalent of biological cells

    NASA Astrophysics Data System (ADS)

    Dubey, Ashutosh Kumar; Dutta-Gupta, Shourya; Kumar, Ravi; Tewari, Abhishek; Basu, Bikramjit

    2009-04-01

    The interaction of electric fields with biological cells is of significant interest in various biophysical and biomedical applications. In order to study this important aspect, it is necessary to evaluate the time constant so as to estimate the response time of living cells in an electric field (E-field). In the present study, the time constant is evaluated by considering an electrical analog of spherical cells and assuming realistic values for the capacitance and resistivity of the cell/nuclear membrane, cytoplasm, and nucleus. In addition, the resistance of the cytoplasm and nucleoplasm was computed based on simple geometrical considerations. Importantly, the analysis from first principles shows that the average value of the time constant would be around 2-3 μs, assuming the theoretical capacitance values and the analytically computed resistance values. The implication of our analytical solution is discussed in reference to cellular adaptation processes such as atrophy/hypertrophy, as well as the variation in the electrical transport properties of the cellular membrane/cytoplasm/nuclear membrane/nucleoplasm.
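
    The order of magnitude is easy to reproduce from the electrical-analog picture alone: the time constant of an RC equivalent circuit is tau = R * C, and resistances of a few hundred kilo-ohms combined with membrane capacitances of a few picofarads land in the microsecond range. The values below are assumed purely for illustration and are not the paper's parameters:

        # tau = R * C for an equivalent RC circuit (assumed, illustrative values only)
        R = 200e3      # ohms   (assumed equivalent cytoplasmic/nucleoplasmic resistance)
        C = 12.5e-12   # farads (assumed equivalent membrane capacitance, ~pF for a ~10 um cell)
        tau = R * C
        print(f"tau = {tau * 1e6:.2f} microseconds")   # 2.50 microseconds, the order reported above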

  7. Using self-organizing maps to infill missing data in hydro-meteorological time series from the Logone catchment, Lake Chad basin.

    PubMed

    Nkiaka, E; Nawaz, N R; Lovett, J C

    2016-07-01

    Hydro-meteorological data are an important asset that can enhance management of water resources. But existing records often contain gaps, leading to uncertainties and so compromising their use. Although many methods exist for infilling data gaps in hydro-meteorological time series, many of them require inputs from neighbouring stations, which are often not available, while other methods are computationally demanding. Computing techniques such as artificial intelligence can be used to address this challenge. Self-organizing maps (SOMs), which are a type of artificial neural network, were used for infilling gaps in a hydro-meteorological time series from a Sudano-Sahel catchment. The coefficients of determination obtained were all above 0.75 for the rainfall time series and above 0.65 for the river discharge time series, while the average topographic errors were 0.008 and 0.02, respectively. These results indicate that SOMs are a robust and efficient method for infilling missing gaps in hydro-meteorological time series.
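
    The infilling idea can be sketched in a few lines: train a SOM on complete records, then, for a record with a missing value, find the best-matching unit using only the observed components and copy the missing component from that unit's codebook vector. The numpy sketch below is a generic illustration; the grid size, learning-rate and neighbourhood schedules, and the synthetic data are assumptions rather than the study's settings:

        import numpy as np

        def train_som(X, grid=(6, 6), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
            rng = np.random.default_rng(seed)
            W = rng.random((grid[0] * grid[1], X.shape[1]))        # codebook vectors
            gy, gx = np.divmod(np.arange(W.shape[0]), grid[1])     # node coordinates on the map
            for t in range(n_iter):
                x = X[rng.integers(len(X))]
                bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))   # best-matching unit
                frac = t / n_iter
                lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
                d2 = (gy - gy[bmu]) ** 2 + (gx - gx[bmu]) ** 2
                h = np.exp(-d2 / (2 * sigma ** 2))[:, None]        # neighbourhood function
                W += lr * h * (x - W)
            return W

        def infill(record, W):
            # record: 1-D array with np.nan at the missing positions.
            obs = ~np.isnan(record)
            bmu = int(np.argmin(((W[:, obs] - record[obs]) ** 2).sum(axis=1)))
            filled = record.copy()
            filled[~obs] = W[bmu, ~obs]
            return filled

        # toy usage: rainfall/discharge-like pairs (scaled to [0, 1]) with one missing discharge value
        rng = np.random.default_rng(1)
        rain = rng.random(500)
        data = np.column_stack([rain, 0.7 * rain + 0.1 * rng.random(500)])
        W = train_som(data)
        print(infill(np.array([0.6, np.nan]), W))    # second entry near 0.7 * 0.6 + 0.05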

  8. Feasibility of Coherent and Incoherent Backscatter Experiments from the AMPS Laboratory. Technical Section

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    A computer program simulated the spectrum which resulted when a radar signal was transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered from the ionosphere, which is statistical in nature. Many estimates of any property of the ionosphere can be made. Their average value will approach the average property of the ionosphere which is being measured. Due to the statistical nature of the spectrum itself, the estimators will vary about this average. The square root of the variance about this average is called the standard deviation, an estimate of the error which exists in any particular radar measurement. In order to determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.

  9. Hybrid EEG-EOG brain-computer interface system for practical machine control.

    PubMed

    Punsawad, Yunyong; Wongsawat, Yodchanan; Parnichkun, Manukid

    2010-01-01

    Practical issues such as accuracy with various subjects, the number of sensors, and the time needed for training are important problems of existing brain-computer interface (BCI) systems. In this paper, we propose a hybrid framework for the BCI system that can make machine control more practical. The electrooculogram (EOG) is employed to control the machine in the left and right directions, while the electroencephalogram (EEG) is employed to control the forward, no-action, and complete-stop motions of the machine. By using only 2-channel biosignals, an average classification accuracy of more than 95% can be achieved.

  10. Choice: 36 band feature selection software with applications to multispectral pattern recognition

    NASA Technical Reports Server (NTRS)

    Jones, W. C.

    1973-01-01

    Feature selection software was developed at the Earth Resources Laboratory that is capable of inputting up to 36 channels and selecting channel subsets according to several criteria based on divergence. One of the criteria used is compatible with the table look-up classifier requirements. The software indicates which channel subset best separates (based on average divergence) each class from all other classes. The software employs an exhaustive search technique, and the computer time is not prohibitive. A typical task to select the best 4 of 22 channels for 12 classes takes 9 minutes on a Univac 1108 computer.
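
    The exhaustive search itself is straightforward to sketch: enumerate every channel subset of the requested size and keep the one with the highest average divergence. The separability function below is only a placeholder (a mean of squared class-mean differences, not the original divergence criterion), and the class summaries are synthetic; choosing 4 of 22 channels means scoring C(22,4) = 7315 subsets:

        import itertools
        import numpy as np

        def average_divergence(channels, class_means):
            # Placeholder separability score for a channel subset (assumed form):
            # mean squared difference of class means over all class pairs.
            idx = list(channels)
            scores = [np.sum((a[idx] - b[idx]) ** 2)
                      for a, b in itertools.combinations(class_means, 2)]
            return float(np.mean(scores))

        def best_subset(n_channels, subset_size, class_means):
            return max(itertools.combinations(range(n_channels), subset_size),
                       key=lambda s: average_divergence(s, class_means))

        # toy usage: 12 classes, each summarized by a 22-channel mean vector
        rng = np.random.default_rng(0)
        class_means = [rng.random(22) for _ in range(12)]
        print(best_subset(22, 4, class_means))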

  11. The Effect of Experimental Variables on Industrial X-Ray Micro-Computed Tomography Sensitivity

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Rauser, Richard W.

    2014-01-01

    A study was performed on the effect of experimental variables on radiographic sensitivity (image quality) in x-ray micro-computed tomography images for a high density thin wall metallic cylinder containing micro-EDM holes. Image quality was evaluated in terms of signal-to-noise ratio, flaw detectability, and feature sharpness. The variables included: day-to-day reproducibility, current, integration time, voltage, filtering, number of frame averages, number of projection views, beam width, effective object radius, binning, orientation of sample, acquisition angle range (180deg to 360deg), and directional versus transmission tube.

  12. Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.

    PubMed

    Zheng, Mo; Li, Xiaoxia; Guo, Li

    2013-04-01

    The reactive force field (ReaxFF), a recent and novel bond-order potential, allows reactive molecular dynamics (ReaxFF MD) simulations to model larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and a time-step one order of magnitude smaller than that of classical MD, all of which pose significant computational challenges to reaching spatio-temporal scales of nanometers and nanoseconds. Very recent advances in graphics processing units (GPUs) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations but also an opportunity to cope with the computing-power and memory demands that ReaxFF MD imposes on computer hardware. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program, with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with an NVIDIA C2050 GPU for coal pyrolysis simulation systems with atoms ranging from 1378 to 27,283. GMD-Reax achieved speedups as high as 12 times over van Duin et al.'s FORTRAN codes in LAMMPS on 8 CPU cores and 6 times over the LAMMPS C codes based on PuReMD, in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. 40 CFR Appendix N to Part 50 - Interpretation of the National Ambient Air Quality Standards for PM2.5

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... midnight to midnight (local standard time) that are used in NAAQS computations. Designated monitors are... accordance with part 58 of this chapter. Design values are the metrics (i.e., statistics) that are compared... (referred to as the “annual standard design value”). If spatial averaging has been approved by EPA for a...

  14. Intelligent Command and Control Demonstration Setup and Presentation Instructions

    DTIC Science & Technology

    2017-12-01

    Report by Laurel C Sadler and Somiya Metu, Computational and Information Sciences Directorate, US Army Research Laboratory, giving setup and presentation instructions for an intelligent command and control demonstration.

  15. Unitary Transformations in 3D Vector Representation of Qutrit States

    DTIC Science & Technology

    2018-03-12

    Report by Vinod K Mishra, Computational and Information Sciences Directorate, ARL, on unitary transformations in the 3D vector representation of qutrit states; approved for public release.

  16. Coexistence of Named Data Networking (NDN) and Software-Defined Networking (SDN)

    DTIC Science & Technology

    2017-09-01

    Report by Vinod Mishra, Computational and Information Sciences Directorate, ARL, on the coexistence of Named Data Networking (NDN) and Software-Defined Networking (SDN).

  17. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    PubMed

    Schwemmer, Michael A; Fairhall, Adrienne L; Denève, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denève and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denève, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks including irregular, Poisson-like spike times, and a tight balance between excitation and inhibition. These results significantly increase the biological plausibility of the spike-based approach to network computation, and uncover how several components of biological networks may work together to efficiently carry out computation. Copyright © 2015 the authors.

  18. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    PubMed

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

    To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied to the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF to its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17 to 0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
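
    The reconstruction idea, representing deformation vector fields by a few PCA coefficients and then optimizing those coefficients until the computed projection matches the measured one, can be illustrated end to end with a deliberately tiny toy problem. Everything below (1-D "volumes", a shift-style DVF, a subsampling "projection", and a brute-force one-parameter search) is an assumed stand-in, not the authors' GPU implementation:

        import numpy as np

        x = np.linspace(0.0, 1.0, 200)
        reference = np.exp(-(x - 0.5) ** 2 / 0.002)               # reference-phase "volume"

        def warp(img, dvf):                                       # apply a 1-D displacement field
            return np.interp(x - dvf, x, img)

        def project(img):                                         # stand-in for an x-ray projection
            return img[::10]

        # training DVFs at N-1 breathing phases -> PCA (mean DVF plus leading eigen-DVF)
        train_dvfs = np.array([a * np.sin(np.pi * x) for a in (0.02, 0.04, 0.06, 0.08)])
        mean_dvf = train_dvfs.mean(axis=0)
        _, _, Vt = np.linalg.svd(train_dvfs - mean_dvf, full_matrices=False)
        basis = Vt[0]

        # "measured" projection from an unseen phase with larger breathing amplitude
        measured = project(warp(reference, 0.12 * np.sin(np.pi * x)))

        # optimize the single PCA coefficient so the computed projection matches the measured one
        coeffs = np.linspace(-3.0, 3.0, 601)
        errors = [np.sum((project(warp(reference, mean_dvf + c * basis)) - measured) ** 2)
                  for c in coeffs]
        c_best = float(coeffs[int(np.argmin(errors))])
        print(c_best)   # the recovered coefficient reproduces the larger-amplitude motion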

  19. Real-Time Patient Survey Data During Routine Clinical Activities for Rapid-Cycle Quality Improvement

    PubMed Central

    Jones, Robert E

    2015-01-01

    Background Surveying patients is increasingly important for evaluating and improving health care delivery, but practical survey strategies for use during routine care activities have not been available. Objective We examined the feasibility of conducting routine patient surveys in a primary care clinic using commercially available technology (Web-based survey creation, deployment on tablet computers, cloud-based management of survey data) to expedite and enhance several steps in data collection and management for rapid quality improvement cycles. Methods We used a Web-based data management tool (survey creation, deployment on tablet computers, real-time data accumulation and display of survey results) to conduct four patient surveys during routine clinic sessions over a one-month period. Each survey consisted of three questions and focused on a specific patient care domain (dental care, waiting room experience, care access/continuity, Internet connectivity). Results Of the 727 available patients during clinic survey days, 316 patients (43.4%) attempted the survey, and 293 (40.3%) completed it. For the four 3-question surveys, the average time per survey was 40.4 seconds overall, with a range of 5.4 to 20.3 seconds for individual questions. Yes/No questions took less time than multiple choice questions (average 9.6 seconds versus 14.0 seconds). Average response time showed no clear pattern by order of questions or by proctor strategy, but monotonically increased with the number of words in the question (<20 words, 21-30 words, >30 words): 8.0, 11.8, and 16.8 seconds, respectively. Conclusions This technology-enabled data management system helped capture patient opinions and accelerate the turnaround of survey data, with minimal impact on a busy primary care clinic. This new model of patient survey data management is feasible and sustainable in a busy office setting, supports and engages clinicians in the quality improvement process, and harmonizes with the vision of a learning health care system. PMID:25768807

  20. Real-time patient survey data during routine clinical activities for rapid-cycle quality improvement.

    PubMed

    Wofford, James Lucius; Campos, Claudia L; Jones, Robert E; Stevens, Sheila F

    2015-03-12

    Surveying patients is increasingly important for evaluating and improving health care delivery, but practical survey strategies for use during routine care activities have not been available. We examined the feasibility of conducting routine patient surveys in a primary care clinic using commercially available technology (Web-based survey creation, deployment on tablet computers, cloud-based management of survey data) to expedite and enhance several steps in data collection and management for rapid quality improvement cycles. We used a Web-based data management tool (survey creation, deployment on tablet computers, real-time data accumulation and display of survey results) to conduct four patient surveys during routine clinic sessions over a one-month period. Each survey consisted of three questions and focused on a specific patient care domain (dental care, waiting room experience, care access/continuity, Internet connectivity). Of the 727 available patients during clinic survey days, 316 patients (43.4%) attempted the survey, and 293 (40.3%) completed it. For the four 3-question surveys, the average time per survey was 40.4 seconds overall, with a range of 5.4 to 20.3 seconds for individual questions. Yes/No questions took less time than multiple choice questions (average 9.6 seconds versus 14.0 seconds). Average response time showed no clear pattern by order of questions or by proctor strategy, but monotonically increased with the number of words in the question (<20 words, 21-30 words, >30 words): 8.0, 11.8, and 16.8 seconds, respectively. This technology-enabled data management system helped capture patient opinions and accelerate the turnaround of survey data, with minimal impact on a busy primary care clinic. This new model of patient survey data management is feasible and sustainable in a busy office setting, supports and engages clinicians in the quality improvement process, and harmonizes with the vision of a learning health care system.

  1. Space-time interpolation of satellite winds in the tropics

    NASA Astrophysics Data System (ADS)

    Patoux, Jérôme; Levy, Gad

    2013-09-01

    A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove any or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.
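
    The following sketch illustrates the kind of space-time weighted averaging discussed above, using the 510 km and 72 h windows quoted in the abstract as kernel length scales; the Gaussian weighting, the function name space_time_average, and the synthetic samples are illustrative assumptions, not the authors' interpolator.

    import numpy as np

    # Illustrative Gaussian space-time weighting (not the authors' exact scheme).
    def space_time_average(obs_xy_km, obs_t_h, obs_val, grid_xy_km, grid_t_h,
                           l_space_km=510.0, l_time_h=72.0):
        """Weighted average of scattered samples around one analysis point."""
        d2_space = np.sum((obs_xy_km - grid_xy_km) ** 2, axis=1)
        d2_time = (obs_t_h - grid_t_h) ** 2
        w = np.exp(-d2_space / (2 * l_space_km ** 2) - d2_time / (2 * l_time_h ** 2))
        return np.sum(w * obs_val) / np.sum(w)

    # Synthetic swath-like samples around the analysis point (0 km, 0 km, 0 h).
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1000, 1000, size=(500, 2))     # positions [km]
    t = rng.uniform(-100, 100, size=500)             # times [h]
    val = np.sin(xy[:, 0] / 500.0) + 0.1 * rng.standard_normal(500)
    print(space_time_average(xy, t, val, np.zeros(2), 0.0))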

  2. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  3. Numerical simulations of the flow with the prescribed displacement of the airfoil and comparison with experiment

    NASA Astrophysics Data System (ADS)

    Řidký, V.; Šidlof, P.; Vlček, V.

    2013-04-01

    The work is devoted to comparing measured data with the results of numerical simulations. The mathematical model is a laminar (turbulence-free) model for incompressible flow; the experiment observed the behavior of a NACA0015 airfoil in an airflow. The numerical solution uses the OpenFOAM computational package, open-source software based on the finite volume method. In the numerical solution, the displacement of the airfoil is prescribed so that it corresponds to the experiment. The velocity at a point close to the airfoil surface is compared with experimental data obtained from interferographic measurements of the velocity field. The numerical solution is computed on a 3D mesh composed of about 1 million orthogonal hexahedral elements. The time step is limited by the Courant number. Parallel computations are run on the supercomputers of the CIV at the Technical University in Prague (HAL and FOX) and on a computer cluster of the Faculty of Mechatronics in Liberec (HYDRA). The run time is fixed at five periods; the results from the fifth period and the average over all periods are then compared with experiment.
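
    Since the abstract notes that the time step is limited by the Courant number, a minimal sketch of a CFL-limited time step for an explicit scheme follows; the mesh spacing, velocity scale, and Courant limit are placeholder values, not taken from the paper.

    # Courant-limited explicit time step: dt <= Co_max * dx_min / |u|_max.
    def cfl_time_step(dx_min, u_max, co_max=0.9):
        return co_max * dx_min / u_max

    dx_min = 1.0e-3   # smallest cell size [m] (hypothetical)
    u_max = 30.0      # peak flow speed near the airfoil [m/s] (hypothetical)
    print(f"dt <= {cfl_time_step(dx_min, u_max):.3e} s")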

  4. An experimental and computational investigation of the flow field about a transonic airfoil in supercritical flow with turbulent boundary-layer separation

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Okuno, A. F.; Levy, L. L., Jr.; Mcdevitt, J. B.; Seegmiller, H. L.

    1976-01-01

    A combined experimental and computational research program is described for testing and guiding turbulence modeling within regions of separation induced by shock waves incident on turbulent boundary layers. Specifically, studies are made of the separated flow over the rear portion of an 18%-thick circular-arc airfoil at zero angle of attack in high-Reynolds-number supercritical flow. The measurements include distributions of surface static pressure and local skin friction. The instruments employed include high-frequency-response pressure cells and a large array of surface hot-wire skin-friction gages. Computations at the experimental flow conditions are made using time-dependent solutions of the ensemble-averaged Navier-Stokes equations, plus additional equations for the turbulence modeling.

  5. Fresnel-region fields and antenna noise-temperature calculations for advanced microwave sounding units

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1982-01-01

    A transition from the antenna noise temperature formulation for extended noise sources in the far-field or Fraunhofer-region of an antenna to one of the intermediate near field or Fresnel-region is discussed. The effort is directed toward microwave antenna simulations and high-speed digital computer analysis of radiometric sounding units used to obtain water vapor and temperature profiles of the atmosphere. Fresnel-region fields are compared at various distances from the aperture. The antenna noise temperature contribution of an annular noise source is computed in the Fresnel-region (D squared/16 lambda) for a 13.2 cm diameter offset-paraboloid aperture at 60 GHz. The time-average Poynting vector is used to effect the computation.
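
    As a quick numerical check of the Fresnel-region distance D²/(16λ) mentioned in the abstract, the short computation below evaluates it for the 13.2 cm aperture at 60 GHz; it is a back-of-the-envelope sketch, not part of the original analysis.

    # D^2 / (16 * lambda) for a 13.2 cm aperture at 60 GHz (values from the abstract).
    c = 299_792_458.0        # speed of light [m/s]
    f = 60e9                 # frequency [Hz]
    D = 0.132                # aperture diameter [m]
    lam = c / f              # wavelength, about 5 mm
    print(f"D^2/(16*lambda) = {D**2 / (16 * lam):.3f} m")   # roughly 0.22 m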

  6. Analysis hierarchical model for discrete event systems

    NASA Astrophysics Data System (ADS)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical, discrete-event-network model for robotic systems. In the hierarchical approach, the Petri net is analysed as a network spanning the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed here using the Visual Object Net ++ package, which is relatively simple and easy to use and produces representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementing the hierarchical discrete-event model as a real-time system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Because Petri models can be simplified for general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, and discrete-event systems are a pragmatic tool for modelling industrial systems. To capture auxiliary times, the Petri model of the transport stream is divided into hierarchical levels and its sections are analysed successively. Simulation of the proposed robotic system with timed Petri nets offers the opportunity to examine its timing: from transport and transmission times obtained by spot measurements, graphs are produced showing the average time for the transport activity for individual sets of finished products.
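
    A minimal timed-Petri-net sketch is given below to make the average-transport-time idea concrete; the places, transitions, and 5 s transport delay are hypothetical and do not reproduce the Visual Object Net ++ model described in the paper.

    import heapq

    # Hypothetical two-transition timed Petri net: parts queue for a single
    # transport resource; we report the average time a part spends in transport.
    places = {"queue": 3, "machine_free": 1, "in_transit": 0, "done": 0}
    # (name, input places, output places, firing delay in seconds)
    transitions = [
        ("start_transport", ["queue", "machine_free"], ["in_transit"], 0.0),
        ("finish_transport", ["in_transit"], ["machine_free", "done"], 5.0),
    ]

    def enabled(t):
        return all(places[p] > 0 for p in t[1])

    clock, events, start_times, durations = 0.0, [], [], []
    while True:
        fired = False
        for t in transitions:                      # fire every enabled transition once
            if enabled(t):
                for p in t[1]:
                    places[p] -= 1                 # consume input tokens
                heapq.heappush(events, (clock + t[3], t[0], t[2]))
                if t[0] == "start_transport":
                    start_times.append(clock)
                fired = True
        if not fired and not events:
            break
        if events:                                 # advance to the next completion
            clock, name, outs = heapq.heappop(events)
            for p in outs:
                places[p] += 1                     # deposit output tokens
            if name == "finish_transport":
                durations.append(clock - start_times[len(durations)])

    print("average transport time:", sum(durations) / len(durations))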

  7. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  8. Large-Amplitude, High-Rate Roll Oscillations of a 65 deg Delta Wing at High Incidence

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Schiff, Lewis B.

    2000-01-01

    The IAR/WL 65 deg delta wing experimental results provide both detailed pressure measurements and a wide range of flow conditions, from simple attached flow, through fully developed vortex and vortex-burst flow, up to fully stalled flow at very high incidence. Computational unsteady aerodynamics researchers can therefore use them at different levels of validation of the corresponding codes. In this section a range of CFD results is provided for the 65 deg delta wing at selected flow conditions. The time-dependent, three-dimensional, Reynolds-averaged Navier-Stokes (RANS) equations are used to numerically simulate the unsteady vortical flow. Two sting angles, two large-amplitude, high-rate, forced-roll motions, and a damped free-to-roll motion are presented. The free-to-roll motion is computed by coupling the time-dependent RANS equations to the flight-dynamics equation of motion. The computed results are compared with experimental pressures, forces, moments and roll-angle time histories. In addition, surface and off-surface flow particle streaks are also presented.

  9. Size and emotion averaging: costs of dividing attention after all.

    PubMed

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  10. BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark

    PubMed Central

    Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung

    2016-01-01

    Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today’s data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG’s simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact. PMID:27390389

  11. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, upstates may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions e.g., distributions). The design process in the development of a computer system is an expensive and long term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
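
    A minimal sketch of the expected steady-state reward rate computation follows, for a two-state availability model with reward 1 in the up state and 0 in the down state; the failure and repair rates are hypothetical.

    import numpy as np

    # Two-state availability model: "up" (reward 1) and "down" (reward 0).
    lam, mu = 1.0 / 1000.0, 1.0 / 10.0     # hypothetical failure / repair rates [1/h]
    Q = np.array([[-lam,  lam],            # CTMC generator matrix
                  [  mu,  -mu]])
    reward = np.array([1.0, 0.0])          # reward rate attached to each state

    # Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
    # with the normalization condition.
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(2)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)

    print("expected steady-state reward rate:", pi @ reward)   # ~ mu / (lam + mu)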

  12. Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers

    NASA Astrophysics Data System (ADS)

    Sendersky, Dmitry

    2000-10-01

    The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.

  13. SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series

    USGS Publications Warehouse

    Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory

    2018-03-07

    This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
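
    The sketch below illustrates a 7Q10-style statistic on synthetic daily flows: the annual minimum of the 7-day moving-average flow, summarized at the 10% non-exceedance level. SWToolbox fits a frequency distribution to the annual minima, so the empirical quantile used here is a simplification, and the discharge record is synthetic.

    import numpy as np
    import pandas as pd

    # Synthetic 20-year daily discharge record [cfs].
    rng = np.random.default_rng(1)
    days = pd.date_range("1990-01-01", "2009-12-31", freq="D")
    seasonal = 10 + 8 * np.sin(2 * np.pi * days.dayofyear / 365.25)
    flow = pd.Series(seasonal + rng.gamma(2.0, 2.0, len(days)), index=days)

    seven_day = flow.rolling(7).mean()                    # 7-day moving average
    annual_min = seven_day.groupby(seven_day.index.year).min().dropna()

    # Empirical 10% non-exceedance quantile of the annual minima as a 7Q10 stand-in
    # (SWToolbox instead fits a frequency distribution to these minima).
    q7_10 = np.quantile(annual_min, 0.10)
    print(f"annual 7-day minima: n={len(annual_min)}, 7Q10 estimate: {q7_10:.2f} cfs")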

  14. Acceleration and Velocity Sensing from Measured Strain

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi; Truax, Roger

    2015-01-01

    A simple approach for computing acceleration and velocity of a structure from the strain is proposed in this study. First, deflection and slope of the structure are computed from the strain using a two-step theory. Frequencies of the structure are computed from the time histories of strain using a parameter estimation technique together with an autoregressive moving average model. From deflection, slope, and frequencies of the structure, acceleration and velocity of the structure can be obtained using the proposed approach. Simple harmonic motion is assumed for the acceleration computations, and the central difference equation with a linear autoregressive model is used for the computations of velocity. A cantilevered rectangular wing model is used to validate the simple approach. The quality of the computed deflection, acceleration, and velocity values is independent of the number of fibers. The central difference equation with a linear autoregressive model proposed in this study follows the target response with reasonable accuracy. Therefore, the handicap of the backward difference equation, phase shift, is successfully overcome.
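
    A minimal sketch of the two estimates described above is shown below for a synthetic single-mode deflection history: acceleration from the simple harmonic assumption and velocity from a central difference. The autoregressive smoothing step used in the paper is omitted, and the frequency and amplitude are hypothetical.

    import numpy as np

    f_hz = 2.0                                 # modal frequency [Hz] (hypothetical)
    dt = 0.001
    t = np.arange(0.0, 2.0, dt)
    x = 0.01 * np.sin(2 * np.pi * f_hz * t)    # synthetic single-mode deflection [m]

    accel_shm = -(2 * np.pi * f_hz) ** 2 * x   # simple-harmonic-motion assumption
    vel_cd = np.gradient(x, dt)                # central differences in the interior

    # Compare the velocity with the analytic derivative of the synthetic signal.
    vel_true = 0.01 * 2 * np.pi * f_hz * np.cos(2 * np.pi * f_hz * t)
    print("max central-difference velocity error:", np.max(np.abs(vel_cd - vel_true)))
    print("peak SHM acceleration estimate [m/s^2]:", np.max(np.abs(accel_shm)))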

  15. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal-processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
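
    As one example of the kind of repeated lagged calculation the SR workflow automates, the sketch below evaluates second- and third-order structure functions of a synthetic 10 Hz temperature record over several lag times using vectorized differencing; it is illustrative only and not one of the authors' algorithms.

    import numpy as np

    fs = 10.0                                       # sampling frequency [Hz]
    rng = np.random.default_rng(2)
    T = np.cumsum(rng.standard_normal(30 * 60 * int(fs))) * 0.01   # synthetic 30-min trace

    def structure_functions(x, lag_samples):
        d = x[lag_samples:] - x[:-lag_samples]      # lagged differences, no Python loop
        return np.mean(d ** 2), np.mean(d ** 3)

    for lag_s in (0.1, 0.5, 1.0, 2.0):
        s2, s3 = structure_functions(T, int(lag_s * fs))
        print(f"lag {lag_s:>4.1f} s: S2 = {s2:.4f}, S3 = {s3:+.4f}")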

  16. Computer simulation: A modern day crystal ball?

    NASA Technical Reports Server (NTRS)

    Sham, Michael; Siprelle, Andrew

    1994-01-01

    It has long been the desire of managers to be able to look into the future and predict the outcome of decisions. With the advent of computer simulation and the tremendous capability provided by personal computers, that desire can now be realized. This paper presents an overview of computer simulation and modeling, and discusses the capabilities of Extend. Extend is an iconic-driven Macintosh-based software tool that brings the power of simulation to the average computer user. An example of an Extend based model is presented in the form of the Space Transportation System (STS) Processing Model. The STS Processing Model produces eight shuttle launches per year, yet it takes only about ten minutes to run. In addition, statistical data such as facility utilization, wait times, and processing bottlenecks are produced. The addition or deletion of resources, such as orbiters or facilities, can be easily modeled and their impact analyzed. Through the use of computer simulation, it is possible to look into the future to see the impact of today's decisions.

  17. Use of artificial intelligence to analyze clinical database reduces workload on surgical house staff.

    PubMed

    Grossi, E A; Steinberg, B M; LeBoutillier, M; Coppa, G F; Roses, D F

    1994-08-01

    The current quantity and diversity of hospital clinical, laboratory, and pharmacy records have resulted in a glut of information, which can be overwhelming to house staff. This study was performed to measure the impact of artificial intelligence analysis of such data on the junior surgical house staff's workload, time for direct patient care, and quality of life. A personal computer was interfaced with the hospital computerized patient data system. Artificial intelligence algorithms were applied to retrieve and condense laboratory values, microbiology reports, and medication orders. Unusual laboratory tests were reported without artificial intelligence filtering. A survey of 23 junior house staff showed a requirement for a total of 30.75 man-hours per day, an average of 184.5 minutes per service twice a day for five surgical services each with an average of 40.7 patients, to manually produce a report in contrast to a total of 3.4 man-hours, an average of 20.5 minutes on the same basis (88.9% reduction, p < 0.001), to computer generate and distribute a similarly useful report. Two thirds of the residents reported an increased ability to perform patient care. Current medical practice has created an explosion of information, which is a burden for surgical house staff. Artificial intelligence preprocessing of the hospital database information focuses attention, eliminates superfluous data, and significantly reduces surgical house staff clerical work, allowing more time for education, research, and patient care.

  18. A Well-Tempered Hybrid Method for Solving Challenging Time-Dependent Density Functional Theory (TDDFT) Systems.

    PubMed

    Kasper, Joseph M; Williams-Young, David B; Vecharynski, Eugene; Yang, Chao; Li, Xiaosong

    2018-04-10

    The time-dependent Hartree-Fock (TDHF) and time-dependent density functional theory (TDDFT) equations allow one to probe electronic resonances of a system quickly and inexpensively. However, the iterative solution of the eigenvalue problem can be challenging or impossible to converge, using standard methods such as the Davidson algorithm for spectrally dense regions in the interior of the spectrum, as are common in X-ray absorption spectroscopy (XAS). More robust solvers, such as the generalized preconditioned locally harmonic residual (GPLHR) method, can alleviate this problem, but at the expense of higher average computational cost. A hybrid method is proposed which adapts to the problem in order to maximize computational performance while providing the superior convergence of GPLHR. In addition, a modification to the GPLHR algorithm is proposed to adaptively choose the shift parameter to enforce a convergence of states above a predefined energy threshold.

  19. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
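
    A minimal sketch of the time-decomposition idea follows: running integrals from several independent (here synthetic) trajectories are averaged and fit to a double-exponential saturation curve, with each point weighted by the inverse of the across-trajectory standard deviation. The exact functional form and weighting may differ from the paper's.

    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp(t, A, alpha, tau1, tau2):
        # Double-exponential saturation curve for the running Green-Kubo integral.
        return A * (alpha * tau1 * (1 - np.exp(-t / tau1))
                    + (1 - alpha) * tau2 * (1 - np.exp(-t / tau2)))

    # Synthetic "running integrals" from 20 independent trajectories, with noise
    # that grows with time, standing in for real pressure-correlation integrals.
    rng = np.random.default_rng(3)
    t = np.linspace(0.01, 20.0, 400)
    true = double_exp(t, 1.0, 0.6, 0.5, 5.0)
    runs = true + 0.02 * np.sqrt(t) * rng.standard_normal((20, t.size))

    mean_integral = runs.mean(axis=0)
    std_integral = runs.std(axis=0, ddof=1)

    # Weight each point by 1/std so the noisy long-time tail counts for less.
    popt, _ = curve_fit(double_exp, t, mean_integral, p0=[1.0, 0.5, 1.0, 10.0],
                        sigma=std_integral, absolute_sigma=True, maxfev=10000)
    A, alpha, tau1, tau2 = popt
    print("estimated long-time plateau (viscosity surrogate):",
          A * (alpha * tau1 + (1 - alpha) * tau2))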

  20. Runway Scheduling Using Generalized Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Montoya, Justin; Wood, Zachary; Rathinam, Sivakumar

    2011-01-01

    A generalized dynamic programming method for finding a set of Pareto optimal solutions for a runway scheduling problem is introduced. The algorithm generates a set of runway flight sequences that are optimal for both runway throughput and delay. Realistic time-based operational constraints are considered, including miles-in-trail separation, runway crossings, and wake vortex separation. The authors also model divergent runway takeoff operations to allow for reduced wake vortex separation. A modeled Dallas/Fort Worth International airport and three baseline heuristics are used to illustrate preliminary benefits of using the generalized dynamic programming method. Simulated traffic levels ranged from 10 aircraft to 30 aircraft with each test case spanning 15 minutes. The optimal solution shows a 40-70 percent decrease in the expected delay per aircraft over the baseline schedulers. Computational results suggest that the algorithm is promising for real-time application with an average computation time of 4.5 seconds. For even faster computation times, two heuristics are developed. As compared to the optimal, the heuristics are within 5% of the expected delay per aircraft and 1% of the expected number of runway operations per hour and can be 100x faster.
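
    The sketch below shows a much simplified, single-objective cousin of the scheduling idea: dynamic programming over subsets of operations on one runway, minimizing the time of the last operation subject to ready times and pairwise separations. The ready times and separation matrix are hypothetical, and none of the paper's multi-objective or heuristic machinery is included.

    import numpy as np

    ready = np.array([0.0, 10.0, 20.0, 25.0, 30.0, 40.0])        # ready times [s]
    n = len(ready)
    rng = np.random.default_rng(4)
    sep = rng.integers(60, 120, size=(n, n)).astype(float)       # sep[i][j]: i then j [s]

    INF = float("inf")
    dp = [[INF] * n for _ in range(1 << n)]                      # dp[mask][last]
    for i in range(n):
        dp[1 << i][i] = ready[i]

    for mask in range(1 << n):
        for last in range(n):
            t = dp[mask][last]
            if t == INF:
                continue
            for j in range(n):
                if mask & (1 << j):
                    continue
                cand = max(ready[j], t + sep[last][j])           # honor ready time + separation
                if cand < dp[mask | (1 << j)][j]:
                    dp[mask | (1 << j)][j] = cand

    best = min(dp[(1 << n) - 1])                                 # optimal makespan
    fcfs = ready[0]
    for i in range(1, n):                                        # first-come-first-served baseline
        fcfs = max(ready[i], fcfs + sep[i - 1][i])
    print(f"optimal makespan: {best:.0f} s, first-come-first-served: {fcfs:.0f} s")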

  1. SU-E-T-614: Plan Averaging for Multi-Criteria Navigation of Step-And-Shoot IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, M; Gao, H; Craft, D

    2015-06-15

    Purpose: Step-and-shoot IMRT is fundamentally discrete in nature, while multi-criteria optimization (MCO) is fundamentally continuous: the MCO planning consists of continuous sliding across the Pareto surface (the set of plans which represent the tradeoffs between organ-at-risk doses and target doses). In order to achieve close to real-time dose display during this sliding, it is desired that averaged plans share many of the same apertures as the pre-computed plans, since dose computation for apertures generated on-the-fly would be expensive. We propose a method to ensure that neighboring plans on a Pareto surface share many apertures. Methods: Our baseline step-and-shoot sequencing method is that of K. Engel (a method which minimizes the number of segments while guaranteeing the minimum number of monitor units), which we customize to sequence a set of Pareto optimal plans simultaneously. We also add an error tolerance to study the relationship between the number of shared apertures, the total number of apertures needed, and the quality of the fluence map re-creation. Results: We run tests for a 2D Pareto surface trading off rectum and bladder dose versus target coverage for a clinical prostate case. We find that if we enforce exact fluence map recreation, we are not able to achieve much sharing of apertures across plans. The total number of apertures for all seven beams and 4 plans without sharing is 217. With sharing and a 2% error tolerance, this number is reduced to 158 (73%). Conclusion: With the proposed method, total number of apertures can be decreased by 42% (averaging) with no increment of total MU, when an error tolerance of 5% is allowed. With this large amount of sharing, dose computations for averaged plans which occur during Pareto navigation will be much faster, leading to a real-time what-you-see-is-what-you-get Pareto navigation experience. Minghao Guo and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)

  2. Efficient Fourier-based algorithms for time-periodic unsteady problems

    NASA Astrophysics Data System (ADS)

    Gopinath, Arathi Kamath

    2007-12-01

    This dissertation work proposes two algorithms for the simulation of time-periodic unsteady problems via the solution of Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. These algorithms use a Fourier representation in time and hence solve for the periodic state directly without resolving transients (which consume most of the resources in a time-accurate scheme). In contrast to conventional Fourier-based techniques which solve the governing equations in frequency space, the new algorithms perform all the calculations in the time domain, and hence require minimal modifications to an existing solver. The complete space-time solution is obtained by iterating in a fifth pseudo-time dimension. Various time-periodic problems such as helicopter rotors, wind turbines, turbomachinery and flapping-wings can be simulated using the Time Spectral method. The algorithm is first validated using pitching airfoil/wing test cases. The method is further extended to turbomachinery problems, and computational results verified by comparison with a time-accurate calculation. The technique can be very memory intensive for large problems, since the solution is computed (and hence stored) simultaneously at all time levels. Often, the blade counts of a turbomachine are rescaled such that a periodic fraction of the annulus can be solved. This approximation enables the solution to be obtained at a fraction of the cost of a full-scale time-accurate solution. For a viscous computation over a three-dimensional single-stage rescaled compressor, an order of magnitude savings is achieved. The second algorithm, the reduced-order Harmonic Balance method is applicable only to turbomachinery flows, and offers even larger computational savings than the Time Spectral method. It simulates the true geometry of the turbomachine using only one blade passage per blade row as the computational domain. In each blade row of the turbomachine, only the dominant frequencies are resolved, namely, combinations of neighbor's blade passing. An appropriate set of frequencies can be chosen by the analyst/designer based on a trade-off between accuracy and computational resources available. A cost comparison with a time-accurate computation for an Euler calculation on a two-dimensional multi-stage compressor obtained an order of magnitude savings, and a RANS calculation on a three-dimensional single-stage compressor achieved two orders of magnitude savings, with comparable accuracy.
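
    The Time Spectral approach rests on representing a periodic solution by its values at N time instances and evaluating time derivatives spectrally. The sketch below shows that underlying operation on a scalar signal, differentiating a periodic sample set via the FFT and comparing with the analytic derivative; it is illustrative only and not the solver described above.

    import numpy as np

    N = 16                                   # number of time instances per period
    T = 2.0 * np.pi                          # period
    t = np.arange(N) * T / N
    u = np.sin(3 * t) + 0.5 * np.cos(t)      # periodic "solution" at the N instances

    k = np.fft.fftfreq(N, d=T / N) * 2 * np.pi        # angular frequencies
    dudt_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    dudt_exact = 3 * np.cos(3 * t) - 0.5 * np.sin(t)

    print("max spectral-derivative error:", np.max(np.abs(dudt_spectral - dudt_exact)))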

  3. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.

  4. Comments regarding two upwind methods for solving two-dimensional external flows using unstructured grids

    NASA Technical Reports Server (NTRS)

    Kleb, W. L.

    1994-01-01

    Steady flow over the leading portion of a multicomponent airfoil section is studied using computational fluid dynamics (CFD) employing an unstructured grid. To simplify the problem, only the inviscid terms are retained from the Reynolds-averaged Navier-Stokes equations - leaving the Euler equations. The algorithm is derived using the finite-volume approach, incorporating explicit time-marching of the unsteady Euler equations to a time-asymptotic, steady-state solution. The inviscid fluxes are obtained through either of two approximate Riemann solvers: Roe's flux difference splitting or van Leer's flux vector splitting. Results are presented which contrast the solutions given by the two flux functions as a function of Mach number and grid resolution. Additional information is presented concerning code verification techniques, flow recirculation regions, convergence histories, and computational resources.

  5. An Outcome and Cost Analysis Comparing Single-Level Minimally Invasive Transforaminal Lumbar Interbody Fusion Using Intraoperative Fluoroscopy versus Computed Tomography-Guided Navigation.

    PubMed

    Khanna, Ryan; McDevitt, Joseph L; Abecassis, Zachary A; Smith, Zachary A; Koski, Tyler R; Fessler, Richard G; Dahdaleh, Nader S

    2016-10-01

    Minimally invasive transforaminal lumbar interbody fusion (TLIF) has undergone significant evolution since its conception as a fusion technique to treat lumbar spondylosis. Minimally invasive TLIF is commonly performed using intraoperative two-dimensional fluoroscopic x-rays. However, intraoperative computed tomography (CT)-based navigation during minimally invasive TLIF is gaining popularity for improvements in visualizing anatomy and reducing intraoperative radiation to surgeons and operating room staff. This is the first study to compare clinical outcomes and cost between these 2 imaging techniques during minimally invasive TLIF. For comparison, 28 patients who underwent single-level minimally invasive TLIF using fluoroscopy were matched to 28 patients undergoing single-level minimally invasive TLIF using CT navigation based on race, sex, age, smoking status, payer type, and medical comorbidities (Charlson Comorbidity Index). The minimum follow-up time was 6 months. The 2 groups were compared in regard to clinical outcomes and hospital reimbursement from the payer perspective. Average surgery time, anesthesia time, and hospital length of stay were similar for both groups, but average estimated blood loss was lower in the fluoroscopy group compared with the CT navigation group (154 mL vs. 262 mL; P = 0.016). Oswestry Disability Index, back visual analog scale, and leg visual analog scale scores similarly improved in both groups (P > 0.05) at 6-month follow-up. Cost analysis showed that average hospital payments were similar in the fluoroscopy versus the CT navigation groups ($32,347 vs. $32,656; P = 0.925) as well as payments for the operating room (P = 0.868). Single-level minimally invasive TLIF performed with fluoroscopy versus CT navigation showed similar clinical outcomes and cost at 6 months. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Robust averaging protects decisions from noise in neural computations

    PubMed Central

    Herce Castañón, Santiago; Solomon, Joshua A.; Vandormael, Hildward

    2017-01-01

    An ideal observer will give equivalent weight to sources of information that are equally reliable. However, when averaging visual information, human observers tend to downweight or discount features that are relatively outlying or deviant (‘robust averaging’). Why humans adopt an integration policy that discards important decision information remains unknown. Here, observers were asked to judge the average tilt in a circular array of high-contrast gratings, relative to an orientation boundary defined by a central reference grating. Observers showed robust averaging of orientation, but the extent to which they did so was a positive predictor of their overall performance. Using computational simulations, we show that although robust averaging is suboptimal for a perfect integrator, it paradoxically enhances performance in the presence of “late” noise, i.e. which corrupts decisions during integration. In other words, robust decision strategies increase the brain’s resilience to noise arising in neural computations during decision-making. PMID:28841644
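
    A small Monte Carlo sketch of the argument follows: tilts pass through a bounded transducer tanh(gain*tilt) and the decision is the sign of their average plus late noise; a high gain saturates outlying tilts, which is one way to mimic robust averaging. All parameter values are ad hoc choices for illustration, not the authors', though with these settings the high-gain rule tends to do better once late noise is added.

    import numpy as np

    rng = np.random.default_rng(5)
    n_trials, n_items, mu, sigma = 100_000, 8, 0.2, 1.0

    def accuracy(gain, late_sd):
        signs = rng.choice([-1.0, 1.0], size=n_trials)             # true category
        tilts = signs[:, None] * mu + sigma * rng.standard_normal((n_trials, n_items))
        dv = np.tanh(gain * tilts).mean(axis=1)                    # bounded transduction
        dv = dv + late_sd * rng.standard_normal(n_trials)          # "late" decision noise
        return np.mean(np.sign(dv) == signs)

    for late_sd in (0.0, 0.5):
        print(f"late noise sd={late_sd}: "
              f"near-linear gain 0.2 -> {accuracy(0.2, late_sd):.3f}, "
              f"robust gain 3.0 -> {accuracy(3.0, late_sd):.3f}")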

  7. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
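
    The identification scheme above builds on exponentially weighted recursive least squares; the sketch below shows that building block alone (plain RLS with a forgetting factor, without the kernel or TARMA machinery), tracking the slowly drifting coefficient of a synthetic first-order autoregression.

    import numpy as np

    rng = np.random.default_rng(6)
    N = 2000
    a_true = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(N) / N)   # time-varying AR(1) coefficient
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = a_true[t] * y[t - 1] + 0.1 * rng.standard_normal()

    lam = 0.98                      # forgetting factor (memory of roughly 1/(1-lam) samples)
    theta, P = np.zeros(1), np.eye(1) * 100.0
    estimates = np.zeros(N)
    for t in range(1, N):
        phi = np.array([y[t - 1]])                  # regressor
        k = P @ phi / (lam + phi @ P @ phi)         # gain
        theta = theta + k * (y[t] - phi @ theta)    # parameter update
        P = (P - np.outer(k, phi) @ P) / lam        # covariance update
        estimates[t] = theta[0]

    print("final tracking error:", abs(estimates[-1] - a_true[-1]))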

  8. Unsteady Analysis of Blade and Tip Heat Transfer as Influenced by the Upstream Momentum and Thermal Wakes

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Rigby, David L.; Steinthorsson, Erlendur; Heidmann, James D.; Fabian, John C.

    2008-01-01

    The effect of the upstream wake on the blade heat transfer has been numerically examined. The geometry and the flow conditions of the first-stage turbine blade of GE's E3 engine with a tip clearance equal to 2 percent of the span were utilized. Based on numerical calculations of the vane, a set of wake boundary conditions was approximated and subsequently imposed upon the downstream blade. This set consisted of the momentum and thermal wakes as well as the variation in modeled turbulence quantities of turbulence intensity and the length scale. Using a one-blade periodic domain, the distributions of unsteady heat transfer rate on the turbine blade and its tip, as affected by the wake, were determined. The heat transfer coefficient distribution was computed using the wall heat flux and the adiabatic wall temperature to desensitize the heat transfer coefficient to the wall temperature. For the determination of the wall heat flux and the adiabatic wall temperatures, two sets of computations were required. The results were used in a phase-locked manner to compute the unsteady or steady heat transfer coefficients. It has been found that the unsteady wake has some effect on the distribution of the time-averaged heat transfer coefficient on the blade and that this distribution is different from the distribution obtainable from a steady computation. This difference was found to be as large as 20 percent of the average heat transfer on the blade surface. On the tip surface, the difference is comparatively smaller and can be as large as four percent of the average.

  9. Optimization of Computational Performance and Accuracy in 3-D Transient CFD Model for CFB Hydrodynamics Predictions

    NASA Astrophysics Data System (ADS)

    Rampidis, I.; Nikolopoulos, A.; Koukouzas, N.; Grammelis, P.; Kakaras, E.

    2007-09-01

    This work aims to present a pure 3-D CFD model, accurate and efficient, for the simulation of pilot-scale CFB hydrodynamics. The accuracy of the model was investigated as a function of the numerical parameters, in order to derive an optimum model setup with respect to computational cost. The necessity of an in-depth examination of hydrodynamics emerges from the trend to scale up CFBCs. This scale-up brings forward numerous design problems and uncertainties, which can be successfully elucidated by CFD techniques. Deriving guidelines for setting up a computationally efficient model is important as the scale of CFBs grows fast, while computational power is limited. However, the question of optimum efficiency has not been investigated thoroughly in the literature, as authors have been more concerned with the accuracy and validity of their models. The objective of this work is to investigate the parameters that influence the efficiency and accuracy of CFB computational fluid dynamics models, find the optimum set of these parameters, and thus establish this technique as a competitive method for the simulation and design of industrial, large-scale beds, where the computational cost is otherwise prohibitive. During the tests performed in this work, the influence of the turbulence modeling approach, the temporal and spatial resolution, and the discretization schemes was investigated on a 1.2 MWth CFB test rig. Using Fourier analysis, dominant frequencies were extracted in order to estimate the adequate time period for the averaging of all instantaneous values. Agreement with the experimental measurements was very good. The basic differences between the predictions that arose from the various model setups were pointed out and analyzed. The results showed that a model with high-order space discretization schemes applied on a coarse grid, with averaging of the instantaneous scalar values over a 20 s period, adequately described the transient hydrodynamic behaviour of a pilot CFB while the computational cost was kept low. Flow patterns inside the bed, such as the core-annulus flow and the transport of clusters, were at least qualitatively captured.

  10. The Influence of Aircraft Speed Variations on Sensible Heat-Flux Measurements by Different Airborne Systems

    NASA Astrophysics Data System (ADS)

    Martin, Sabrina; Bange, Jens

    2014-01-01

    Crawford et al. (Boundary-Layer Meteorol 66:237-245, 1993) showed that the time average is inappropriate for airborne eddy-covariance flux calculations. The aircraft's ground speed through a turbulent field is not constant. One reason can be a correlation with vertical air motion, so that some types of structures are sampled more densely than others. To avoid this, the time-sampled data are adjusted for the varying ground speed so that the modified estimates are equivalent to spatially sampled data. A comparison of sensible heat-flux calculations using temporal and spatial averaging methods is presented and discussed. Data from three airborne measurement systems, among them the Helipod and the Dornier 128-6, are used for the analysis. These systems vary in size, weight and aerodynamic characteristics: the first is a small unmanned aerial vehicle (UAV), the Helipod is a helicopter-borne turbulence probe, and the Dornier 128-6 is a manned research aircraft. The systematic bias anticipated in covariance computations due to speed variations was not found when averaging over Dornier, Helipod, or UAV flight legs. However, the random differences between spatially and temporally averaged fluxes were found to be up to 30% on the individual flight legs.
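
    A minimal sketch of the temporal versus ground-speed-weighted (spatial) eddy-covariance averaging follows, applied to synthetic vertical-wind and temperature fluctuations along a flight leg; the imposed correlation between ground speed and vertical wind, and all numerical values, are artificial choices used only to make the two estimates differ.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 20_000
    w = 0.8 * rng.standard_normal(n)                  # vertical wind fluctuations [m/s]
    T = 0.3 * w + 0.4 * rng.standard_normal(n)        # temperature fluctuations [K]
    ground_speed = 65.0 - 2.0 * w                     # aircraft slows slightly in updrafts [m/s]

    def covariance(w, T, weights):
        wm = np.average(w, weights=weights)
        Tm = np.average(T, weights=weights)
        return np.average((w - wm) * (T - Tm), weights=weights)

    rho, cp = 1.2, 1005.0                             # air density, specific heat capacity
    H_time = rho * cp * covariance(w, T, np.ones(n))          # temporal average
    H_space = rho * cp * covariance(w, T, ground_speed)       # distance-weighted average
    print(f"temporal H = {H_time:.1f} W/m^2, spatial H = {H_space:.1f} W/m^2")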

  11. Investigation of advanced counterrotation blade configuration concepts for high speed turboprop systems. Task 4: Advanced fan section aerodynamic analysis computer program user's manual

    NASA Technical Reports Server (NTRS)

    Crook, Andrew J.; Delaney, Robert A.

    1992-01-01

    The computer program user's manual for the ADPACAPES (Advanced Ducted Propfan Analysis Code-Average Passage Engine Simulation) program is included. The objective of the computer program is development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes meeting the requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. The efficiency of the solution procedure was shown to be the same as the original analysis.

  12. Computerized histories facilitate patient care in a termination of pregnancy clinic: the use of a small computer to obtain and reproduce patient information.

    PubMed

    Lilford, R J; Bingham, P; Bourne, G L; Chard, T

    1985-04-01

    An inexpensive microcomputer has been programmed to obtain histories from patients attending a pregnancy termination clinic. The system is nurse-interactive; yes/no and multiple-choice questions are answered on the visual display unit by a light pen. Proper nouns and discursive text are typed at the computer keyboard. A neatly formatted summary of the history is then provided by an interfaced printer. The history follows a branching pattern; of the 370 questions included in the program, only 68 are answered in the course of an average history. The program contains numerous error traps and the user may request explanations of questions which are not immediately understood. The system was designed to ensure that no factors of anaesthetic or medical importance would be overlooked in the busy out-patient clinic. The computer provides a much more complete history with an average of 42 more items of information than the pre-existing manual system. This system is demanding of nursing time and possible conversion to a patient-interactive system is discussed. A confidential questionnaire revealed a high degree of consumer acceptance.

  13. Aeroacoustic Simulations of a Nose Landing Gear Using FUN3D on Pointwise Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Rhoads, John; Lockard, David P.

    2015-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise(TradeMark) grid generation software are used for these simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these simulations. Solutions are also presented for a wall function model coupled to the standard turbulence model. Time-averaged and instantaneous solutions obtained on these Pointwise grids are compared with the measured data and previous numerical solutions. The resulting CFD solutions are used as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels in the flyover and sideline directions. The computed noise levels compare well with previous CFD solutions and experimental data.

  14. Cloud-In-Cell modeling of shocked particle-laden flows at a ``SPARSE'' cost

    NASA Astrophysics Data System (ADS)

    Taverniers, Soren; Jacobs, Gustaaf; Sen, Oishik; Udaykumar, H. S.

    2017-11-01

    A common tool for enabling process-scale simulations of shocked particle-laden flows is Eulerian-Lagrangian Particle-Source-In-Cell (PSIC) modeling where each particle is traced in its Lagrangian frame and treated as a mathematical point. Its dynamics are governed by Stokes drag corrected for high Reynolds and Mach numbers. The computational burden is often reduced further through a ``Cloud-In-Cell'' (CIC) approach which amalgamates groups of physical particles into computational ``macro-particles''. CIC does not account for subgrid particle fluctuations, leading to erroneous predictions of cloud dynamics. A Subgrid Particle-Averaged Reynolds-Stress Equivalent (SPARSE) model is proposed that incorporates subgrid interphase velocity and temperature perturbations. A bivariate Gaussian source distribution, whose covariance captures the cloud's deformation to first order, accounts for the particles' momentum and energy influence on the carrier gas. SPARSE is validated by conducting tests on the interaction of a particle cloud with the accelerated flow behind a shock. The cloud's average dynamics and its deformation over time predicted with SPARSE converge to their counterparts computed with reference PSIC models as the number of Gaussians is increased from 1 to 16. This work was supported by AFOSR Grant No. FA9550-16-1-0008.
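
    The sketch below illustrates the Lagrangian point-particle update underlying PSIC-type models: Stokes drag with a Schiller-Naumann Reynolds-number correction (the Mach-number correction mentioned in the abstract is omitted), integrated with forward Euler for one particle relaxing toward a suddenly accelerated post-shock gas. The particle and gas properties are hypothetical.

    import numpy as np

    rho_p, d = 2500.0, 50e-6          # particle density [kg/m^3], diameter [m]
    rho_f, mu = 1.2, 1.8e-5           # gas density [kg/m^3], dynamic viscosity [Pa s]
    u_gas = 150.0                     # post-shock gas velocity [m/s]
    tau_p = rho_p * d**2 / (18.0 * mu)   # Stokes response time

    u_p, t, dt = 0.0, 0.0, 1e-5
    while t < 5 * tau_p:
        re = rho_f * abs(u_gas - u_p) * d / mu
        f_corr = 1.0 + 0.15 * re**0.687            # Schiller-Naumann drag correction
        u_p += dt * f_corr * (u_gas - u_p) / tau_p # forward Euler drag update
        t += dt

    print(f"tau_p = {tau_p*1e3:.2f} ms, particle velocity after 5*tau_p: {u_p:.1f} m/s")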

  15. Application of the aeroacoustic analogy to a shrouded, subsonic, radial fan

    NASA Astrophysics Data System (ADS)

    Buccieri, Bryan M.; Richards, Christopher M.

    2016-12-01

    A study was conducted to investigate the predictive capability of computational aeroacoustics with respect to a shrouded, subsonic, radial fan. A three dimensional unsteady fluid dynamics simulation was conducted to produce aerodynamic data used as the acoustic source for an aeroacoustics simulation. Two acoustic models were developed: one modeling the forces on the rotating fan blades as a set of rotating dipoles located at the center of mass of each fan blade and one modeling the forces on the stationary fan shroud as a field of distributed stationary dipoles. Predicted acoustic response was compared to experimental data measured at two operating speeds using three different outlet restrictions. The blade source model predicted overall far field sound power levels within 5 dB averaged over the six different operating conditions while the shroud model predicted overall far field sound power levels within 7 dB averaged over the same conditions. Doubling the density of the computational fluids mesh and using a scale adaptive simulation turbulence model increased broadband noise accuracy. However, computation time doubled and the accuracy of the overall sound power level prediction improved by only 1 dB.

  16. Numerical Prediction Methods (Reynolds-Averaged Navier-Stokes Simulations of Transonic Separated Flows)

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel; Lomax, Harvard

    1981-01-01

    During the past five years, numerous pioneering archival publications have appeared that have presented computer solutions of the mass-weighted, time-averaged Navier-Stokes equations for transonic problems pertinent to the aircraft industry. These solutions have been pathfinders of developments that could evolve into a major new technological capability, namely the computational Navier-Stokes technology, for the aircraft industry. So far these simulations have demonstrated that computational techniques, and computer capabilities have advanced to the point where it is possible to solve forms of the Navier-Stokes equations for transonic research problems. At present there are two major shortcomings of the technology: limited computer speed and memory, and difficulties in turbulence modelling and in computation of complex three-dimensional geometries. These limitations and difficulties are the pacing items of the continuing developments, although the one item that will most likely turn out to be the most crucial to the progress of this technology is turbulence modelling. The objective of this presentation is to discuss the state of the art of this technology and suggest possible future areas of research. We now discuss some of the flow conditions for which the Navier-Stokes equations appear to be required. On an airfoil there are four different types of interaction of a shock wave with a boundary layer: (1) shock-boundary-layer interaction with no separation, (2) shock-induced turbulent separation with immediate reattachment (we refer to this as a shock-induced separation bubble), (3) shock-induced turbulent separation without reattachment, and (4) shock-induced separation bubble with trailing edge separation.

  17. Flavin Charge Transfer Transitions Assist DNA Photolyase Electron Transfer

    NASA Astrophysics Data System (ADS)

    Skourtis, Spiros S.; Prytkova, Tatiana; Beratan, David N.

    2007-12-01

    This contribution describes molecular dynamics, semi-empirical and ab-initio studies of the primary photo-induced electron transfer reaction in DNA photolyase. DNA photolyases are FADH--containing proteins that repair UV-damaged DNA by photo-induced electron transfer. A DNA photolyase recognizes and binds to cyclobutane pyrimidine dimer lesions of DNA. The protein repairs a bound lesion by transferring an electron to the lesion from FADH-, upon photo-excitation of FADH- with 350-450 nm light. We compute the lowest singlet excited states of FADH- in DNA photolyase using INDO/S configuration interaction, time-dependent density-functional, and time-dependent Hartree-Fock methods. The calculations identify the lowest singlet excited state of FADH- that is populated after photo-excitation and that acts as the electron donor. For this donor state we compute conformationally-averaged tunneling matrix elements to empty electron-acceptor states of a thymine dimer bound to photolyase. The conformational averaging involves different FADH--thymine dimer conformations obtained from molecular dynamics simulations of the solvated protein with a thymine dimer docked in its active site. The tunneling matrix element computations use INDO/S-level Green's function, energy splitting, and Generalized Mulliken-Hush methods. These calculations indicate that photo-excitation of FADH- causes a π→π* charge-transfer transition that shifts electron density to the side of the flavin isoalloxazine ring that is adjacent to the docked thymine dimer. This shift in electron density enhances the FADH--to-dimer electronic coupling, thus inducing rapid electron transfer.

  18. Ground Boundary Conditions for Thermal Convection Over Horizontal Surfaces at High Rayleigh Numbers

    NASA Astrophysics Data System (ADS)

    Hanjalić, K.; Hrebtov, M.

    2016-07-01

    We present "wall functions" for treating the ground boundary conditions in the computation of thermal convection over horizontal surfaces at high Rayleigh numbers using coarse numerical grids. The functions are formulated for an algebraic-flux model closed by transport equations for the turbulence kinetic energy, its dissipation rate and scalar variance, but could also be applied to other turbulence models. The three-equation algebraic-flux model, solved in a T-RANS mode ("Transient" Reynolds-averaged Navier-Stokes, based on triple decomposition), was shown earlier to reproduce well a number of generic buoyancy-driven flows over heated surfaces, albeit by integrating equations up to the wall. Here we show that by using a set of wall functions satisfactory results are found for the ensemble-averaged properties even on a very coarse computational grid. This is illustrated by the computations of the time evolution of a penetrative mixed layer and Rayleigh-Bénard (open-ended, 4:4:1 domain) convection, using 10 × 10 × 100 and 10 × 10 × 20 grids, compared also with finer grids (e.g. 60 × 60 × 100), as well as with one-dimensional treatment using 1 × 1 × 100 and 1 × 1 × 20 nodes. The approach is deemed functional for simulations of a convective boundary layer and mesoscale atmospheric flows, and pollutant transport over realistic complex hilly terrain with heat islands, urban and natural canopies, for diurnal cycles, or subjected to other time and space variations in ground conditions and stratification.

  19. Validation of 3D multimodality roadmapping in interventional neuroradiology

    NASA Astrophysics Data System (ADS)

    Ruijters, Daniel; Homan, Robert; Mielekamp, Peter; van de Haar, Peter; Babic, Drazenko

    2011-08-01

    Three-dimensional multimodality roadmapping is entering clinical routine utilization for neuro-vascular treatment. Its purpose is to navigate intra-arterial and intra-venous endovascular devices through complex vascular anatomy by fusing pre-operative computed tomography (CT) or magnetic resonance (MR) with the live fluoroscopy image. The fused image presents the real-time position of the intra-vascular devices together with the patient's 3D vascular morphology and its soft-tissue context. This paper investigates the effectiveness, accuracy, robustness and computation times of the described methods in order to assess their suitability for the intended clinical purpose: accurate interventional navigation. The mutual information-based 3D-3D registration proved to be of sub-voxel accuracy and yielded an average registration error of 0.515 mm and the live machine-based 2D-3D registration delivered an average error of less than 0.2 mm. The capture range of the image-based 3D-3D registration was investigated to characterize its robustness, and yielded an extent of 35 mm and 25° for >80% of the datasets for registration of 3D rotational angiography (3DRA) with CT, and 15 mm and 20° for >80% of the datasets for registration of 3DRA with MR data. The image-based 3D-3D registration could be computed within 8 s, while applying the machine-based 2D-3D registration only took 1.5 µs, which makes them very suitable for interventional use.
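
    For orientation, a generic mutual-information estimate between two co-registered volumes can be computed from their joint intensity histogram, as in the minimal sketch below; this is a simplified stand-in for the similarity measure driving an image-based 3D-3D registration, not the vendor implementation, and the bin count and toy volumes are assumptions.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=64):
        """Mutual information (in bits) between two co-registered image volumes,
        estimated from their joint intensity histogram."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                          # avoid log(0)
        return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

    # Toy example: MI of a volume with a noisy copy of itself is high.
    rng = np.random.default_rng(0)
    vol = rng.normal(size=(32, 32, 32))
    print(mutual_information(vol, vol + 0.1 * rng.normal(size=vol.shape)))
    ```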

  20. Robust estimation of event-related potentials via particle filter.

    PubMed

    Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito

    2016-03-01

    In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
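
    A minimal bootstrap particle filter for a random-walk trend observed in additive noise, loosely analogous to the trend-model ERP component described above, is sketched below; the number of particles, noise variances and synthetic signal are assumptions, and this is not the authors' implementation.

    ```python
    import numpy as np

    def bootstrap_particle_filter(y, n_particles=400, q=0.01, r=1.0, seed=0):
        """Minimal bootstrap particle filter for a random-walk trend x_t observed
        as y_t = x_t + noise; returns the filtered mean at each time step."""
        rng = np.random.default_rng(seed)
        particles = np.zeros(n_particles)
        estimates = np.empty(len(y))
        for t, obs in enumerate(y):
            particles += rng.normal(0.0, np.sqrt(q), n_particles)   # propagate (trend model)
            w = np.exp(-0.5 * (obs - particles) ** 2 / r)            # Gaussian likelihood
            w /= w.sum()
            estimates[t] = np.dot(w, particles)                      # posterior mean
            idx = rng.choice(n_particles, n_particles, p=w)          # multinomial resampling
            particles = particles[idx]
        return estimates

    # Toy use: recover a slow trend buried in noise (a stand-in for an ERP in background EEG).
    t = np.linspace(0, 1, 300)
    true = np.exp(-(t - 0.5) ** 2 / 0.01)
    noisy = true + np.random.default_rng(1).normal(0, 1.0, t.size)
    est = bootstrap_particle_filter(noisy)
    ```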

  1. Inputs for subject-specific computational fluid dynamics simulation of blood flow in the mouse aorta.

    PubMed

    Van Doormaal, Mark; Zhou, Yu-Qing; Zhang, Xiaoli; Steinman, David A; Henkelman, R Mark

    2014-10-01

    Mouse models are an important means of exploring relationships between blood hemodynamics and eventual plaque formation. We have developed a mouse model of aortic regurgitation (AR) that produces large changes in plaque burden with changes in hemodynamics [Zhou et al., 2010, "Aortic Regurgitation Dramatically Alters the Distribution of Atherosclerotic Lesions and Enhances Atherogenesis in Mice," Arterioscler. Thromb. Vasc. Biol., 30(6), pp. 1181-1188]. In this paper, we explore the amount of detail needed for realistic computational fluid dynamics (CFD) calculations in this experimental model. The CFD calculations use inputs based on experimental measurements from ultrasound (US), micro computed tomography (CT), and both anatomical magnetic resonance imaging (MRI) and phase contrast MRI (PC-MRI). The adequacy of five different levels of model complexity is evaluated by demonstrating their impact on relative residence time (RRT) outputs: (a) subject-specific CT data from a single mouse; (b) subject-specific CT centerlines with radii from US; (c) same as (b) but with MRI-derived centerlines; (d) averaged CT centerlines with averaged vessel radii and branching vessels; and (e) same as (d) but with averaged MRI centerlines. The paper concludes by demonstrating the necessity of subject-specific geometry and recommends as inputs the use of CT or anatomical MRI for establishing the aortic centerlines, M-mode US for scaling the aortic diameters, and a combination of PC-MRI and Doppler US for estimating the spatial and temporal characteristics of the input waveforms.

  2. Persistent collective trend in stock markets

    NASA Astrophysics Data System (ADS)

    Balogh, Emeric; Simonsen, Ingve; Nagy, Bálint Zs.; Néda, Zoltán

    2010-12-01

    Empirical evidence is given for a significant difference in the collective trend of share prices during periods when the stock index is rising and when it is falling. Data on the Dow Jones Industrial Average and its stock components are studied between 1991 and 2008. Pearson-type correlations are computed between the stocks and averaged over stock pairs and time. The results indicate a general trend: whenever the stock index is falling, the stock prices change in a more correlated manner than when the index is rising. A thorough statistical analysis of the data shows that the observed difference is significant, suggesting a constant fear factor among stockholders.
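
    The core computation, the average Pearson correlation over all stock pairs evaluated separately for rising and falling index days, can be sketched as follows on synthetic returns; the data below are illustrative, not the DJIA series used in the study.

    ```python
    import numpy as np

    def mean_pairwise_correlation(returns):
        """Average Pearson correlation over all distinct stock pairs.
        `returns` has shape (n_days, n_stocks)."""
        c = np.corrcoef(returns, rowvar=False)
        iu = np.triu_indices_from(c, k=1)
        return c[iu].mean()

    # Hypothetical daily returns for an index and its components.
    rng = np.random.default_rng(0)
    index_ret = rng.normal(0, 0.01, 1000)
    stock_ret = index_ret[:, None] + rng.normal(0, 0.02, (1000, 30))

    up, down = index_ret > 0, index_ret < 0
    print("rising days: ", mean_pairwise_correlation(stock_ret[up]))
    print("falling days:", mean_pairwise_correlation(stock_ret[down]))
    ```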

  3. Two-dimensional Lagrangian simulation of suspended sediment

    USGS Publications Warehouse

    Schoellhamer, David H.

    1988-01-01

    A two-dimensional laterally averaged model for suspended sediment transport in steady gradually varied flow that is based on the Lagrangian reference frame is presented. The layered Lagrangian transport model (LLTM) for suspended sediment computes laterally averaged suspended-sediment concentrations. The elevations of nearly horizontal streamlines and the simulation time step are selected to optimize model stability and efficiency. The computational elements are parcels of water that are moved along the streamlines in the Lagrangian sense and are mixed with neighboring parcels. Three applications show that the LLTM can accurately simulate theoretical and empirical nonequilibrium suspended sediment distributions and slug injections of suspended sediment in a laboratory flume.

  4. Estimating Oxygen Needs for Childhood Pneumonia in Developing Country Health Systems: A New Model for Expecting the Unexpected

    PubMed Central

    Bradley, Beverly D.; Howie, Stephen R. C.; Chan, Timothy C. Y.; Cheng, Yu-Ling

    2014-01-01

    Background Planning for the reliable and cost-effective supply of a health service commodity such as medical oxygen requires an understanding of the dynamic need or ‘demand’ for the commodity over time. In developing country health systems, however, collecting longitudinal clinical data for forecasting purposes is very difficult. Furthermore, approaches to estimating demand for supplies based on annual averages can underestimate demand some of the time by missing temporal variability. Methods A discrete event simulation model was developed to estimate variable demand for a health service commodity using the important example of medical oxygen for childhood pneumonia. The model is based on five key factors affecting oxygen demand: annual pneumonia admission rate, hypoxaemia prevalence, degree of seasonality, treatment duration, and oxygen flow rate. These parameters were varied over a wide range of values to generate simulation results for different settings. Total oxygen volume, peak patient load, and hours spent above average-based demand estimates were computed for both low and high seasons. Findings Oxygen demand estimates based on annual average values of demand factors can often severely underestimate actual demand. For scenarios with high hypoxaemia prevalence and degree of seasonality, demand can exceed average levels up to 68% of the time. Even for typical scenarios, demand may exceed three times the average level for several hours per day. Peak patient load is sensitive to hypoxaemia prevalence, whereas time spent at such peak loads is strongly influenced by degree of seasonality. Conclusion A theoretical study is presented whereby a simulation approach to estimating oxygen demand is used to better capture temporal variability compared to standard average-based approaches. This approach provides better grounds for health service planning, including decision-making around technologies for oxygen delivery. Beyond oxygen, this approach is widely applicable to other areas of resource and technology planning in developing country health systems. PMID:24587089
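
    A much-simplified Monte Carlo sketch of the same idea (not the authors' discrete event simulation) is shown below; it illustrates how seasonality and hypoxaemia prevalence push demand above an annual-average estimate. All parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Illustrative values for the five demand factors named in the abstract.
    annual_admissions = 1000        # pneumonia admissions per year
    hypox_prev = 0.3                # fraction of admissions needing oxygen
    seasonality = 0.5               # +/-50% sinusoidal variation in admission rate
    los_days = 3                    # treatment duration per hypoxaemic child
    flow_lpm = 1.0                  # oxygen flow rate, litres per minute

    days = np.arange(365)
    daily_rate = annual_admissions / 365 * (1 + seasonality * np.sin(2 * np.pi * days / 365))
    admissions = rng.poisson(daily_rate)
    need_o2 = rng.binomial(admissions, hypox_prev)

    # Patient load: each oxygen-requiring admission occupies supply for `los_days`.
    load = np.zeros(365 + los_days)
    for d, n in enumerate(need_o2):
        load[d:d + los_days] += n
    load = load[:365]

    avg_based = load.mean()
    print(f"average load {avg_based:.1f}, peak load {load.max():.0f}, "
          f"fraction of days above average {np.mean(load > avg_based):.0%}, "
          f"peak O2 demand {load.max() * flow_lpm * 60 * 24 / 1000:.0f} m3/day")
    ```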

  5. Implementation and evaluation of an efficient secure computation system using ‘R’ for healthcare statistics

    PubMed Central

    Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi

    2014-01-01

    Background and objective While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Materials and methods Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software ‘R’ by effectively combining secret-sharing-based secure computation with original computation. Results Testing confirmed that our system could correctly complete computation of average and unbiased variance of approximately 50 000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. Discussion If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using ‘R’ that works interactively while secure computation protocols generally require a significant amount of processing time. Conclusions We propose a secure statistical analysis system using ‘R’ for medical data that effectively integrates secret-sharing-based secure computation and original computation. PMID:24763677
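
    A minimal additive secret-sharing example, showing how several parties can jointly compute an average without any of them seeing an individual record, is sketched below; it illustrates the general principle only and is not the authors' 'R'-integrated protocol. The record values are hypothetical.

    ```python
    import random

    PRIME = 2**61 - 1  # arithmetic is done modulo a large prime

    def share(value, n_parties=3):
        """Split an integer into n additive shares that sum to value mod PRIME."""
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    # Each record is shared; parties sum their own shares locally, and only the
    # aggregate is reconstructed -- no individual record is ever revealed.
    records = [120, 95, 143, 88, 101]             # hypothetical claim amounts
    party_shares = list(zip(*[share(v) for v in records]))
    partial_sums = [sum(s) % PRIME for s in party_shares]
    total = sum(partial_sums) % PRIME
    print("secure average:", total / len(records))
    ```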

  6. Implementation and evaluation of an efficient secure computation system using 'R' for healthcare statistics.

    PubMed

    Chida, Koji; Morohashi, Gembu; Fuji, Hitoshi; Magata, Fumihiko; Fujimura, Akiko; Hamada, Koki; Ikarashi, Dai; Yamamoto, Ryuichi

    2014-10-01

    While the secondary use of medical data has gained attention, its adoption has been constrained due to protection of patient privacy. Making medical data secure by de-identification can be problematic, especially when the data concerns rare diseases. We require rigorous security management measures. Using secure computation, an approach from cryptography, our system can compute various statistics over encrypted medical records without decrypting them. An issue of secure computation is that the amount of processing time required is immense. We implemented a system that securely computes healthcare statistics from the statistical computing software 'R' by effectively combining secret-sharing-based secure computation with original computation. Testing confirmed that our system could correctly complete computation of average and unbiased variance of approximately 50,000 records of dummy insurance claim data in a little over a second. Computation including conditional expressions and/or comparison of values, for example, t test and median, could also be correctly completed in several tens of seconds to a few minutes. If medical records are simply encrypted, the risk of leaks exists because decryption is usually required during statistical analysis. Our system possesses high-level security because medical records remain in encrypted state even during statistical analysis. Also, our system can securely compute some basic statistics with conditional expressions using 'R' that works interactively while secure computation protocols generally require a significant amount of processing time. We propose a secure statistical analysis system using 'R' for medical data that effectively integrates secret-sharing-based secure computation and original computation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  7. High-Resolution Computed Tomography and Pulmonary Function Findings of Occupational Arsenic Exposure in Workers.

    PubMed

    Ergün, Recai; Evcik, Ender; Ergün, Dilek; Ergan, Begüm; Özkan, Esin; Gündüz, Özge

    2017-05-05

    Few studies have evaluated non-malignant pulmonary disease after occupational arsenic exposure. To investigate the effects of occupational arsenic exposure on the lung by high-resolution computed tomography and pulmonary function tests. Retrospective cross-sectional study. In this study, 256 workers with suspected respiratory occupational arsenic exposure were included, with an average age of 32.9±7.8 years and an average of 3.5±2.7 working years. Hair and urinary arsenic levels were analysed. High-resolution computed tomography and pulmonary function tests were done. In workers with occupational arsenic exposure, high-resolution computed tomography showed 18.8% pulmonary involvement. Among those with pulmonary involvement, pulmonary nodules were the most frequently seen lesion (64.5%); the other findings were diffuse interstitial lung disease (18.8%), bronchiectasis (12.5%), and bullae-emphysema (27.1%). Patients with pulmonary involvement were older and had smoked more. Pulmonary involvement was 5.2 times more frequent in patients with arsenic-related skin lesions. Diffusing capacity of the lung for carbon monoxide was significantly lower in patients with pulmonary involvement. Besides lung cancer, chronic occupational inhalation of arsenic may cause non-malignant pulmonary findings such as bronchiectasis, pulmonary nodules and diffuse interstitial lung disease. Therefore, to detect pulmonary involvement in the early stages, workers with occupational arsenic exposure should be followed with diffusion testing and high-resolution computed tomography.

  8. Flow Mapping in a Gas-Solid Riser via Computer Automated Radioactive Particle Tracking (CARPT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muthanna Al-Dahhan; Milorad P. Dudukovic; Satish Bhusarapu

    2005-06-04

    Statement of the Problem: Developing and disseminating a general and experimentally validated model of turbulent multiphase fluid dynamics suitable for engineering design purposes in industrial-scale applications of riser reactors and pneumatic conveying requires collecting reliable data on solids trajectories, velocities (averaged and instantaneous), solids holdup distribution and solids fluxes in the riser as a function of operating conditions. Such data are currently not available on the same system. The Multiphase Fluid Dynamics Research Consortium (MFDRC) was established to address these issues on a chosen example of a circulating fluidized bed (CFB) reactor, which is widely used in the petroleum and chemical industries, including coal combustion. This project addresses the lack of reliable data needed to advance CFB technology. Project Objectives: The objective of this project is to advance the understanding of the solids flow pattern and mixing in a well-developed flow region of a gas-solid riser, operated at different gas flow rates and solids loadings, using state-of-the-art non-intrusive measurements. This work creates insight and a reliable database for local solids fluid-dynamic quantities in a pilot-plant-scale CFB, which can then be used to validate and develop phenomenological models for the riser. This study also attempts to provide benchmark data for validation of Computational Fluid Dynamics (CFD) codes and their current closures. Technical Approach: The non-invasive Computer Automated Radioactive Particle Tracking (CARPT) technique provides the complete Eulerian solids flow field (time-averaged velocity map and various turbulence parameters such as the Reynolds stresses, turbulent kinetic energy, and eddy diffusivities). It also gives directly the Lagrangian information on solids flow and yields the true solids residence time distribution (RTD). Another radiation-based technique, Computed Tomography (CT), yields detailed time-averaged local holdup profiles at various planes. Together, these two techniques provide the needed local solids flow dynamics for the same setup under identical operating conditions, and the data obtained can be used as a benchmark for development and refinement of appropriate riser models. For these reasons the two techniques were applied in this study to a fully developed section of the riser. To derive global mixing information in the riser, an accurate solids RTD is needed and was obtained by monitoring the entry and exit of a single radioactive tracer. Other global parameters such as the Cycle Time Distribution (CTD), overall solids holdup in the riser, and solids recycle percentage at the bottom section of the riser were evaluated from different solids travel-time distributions. In addition, a novel method was applied to measure the overall solids mass flux accurately and in situ.

  9. Computational modeling of radiofrequency ablation: evaluation on ex vivo data using ultrasound monitoring

    NASA Astrophysics Data System (ADS)

    Audigier, Chloé; Kim, Younsu; Dillow, Austin; Boctor, Emad M.

    2017-03-01

    Radiofrequency ablation (RFA) is the most widely used minimally invasive ablative therapy for liver cancer, but it is challenged by a lack of patient-specific monitoring. Inter-patient tissue variability and the presence of blood vessels make the outcome of RFA difficult to predict. A monitoring tool that could be personalized for a given patient during the intervention would help achieve complete tumor ablation. However, clinicians do not have access to such a tool, which results in incomplete treatment and a large number of recurrences. Computational models can simulate the phenomena and mechanisms governing this therapy: the temperature evolution as well as the resulting ablation can be modeled. When combined with intraoperative measurements, computational modeling becomes an accurate and powerful tool to gain quantitative understanding and to enable improvements in the ongoing clinical setting. This paper shows how computational models of RFA can be evaluated using intra-operative measurements. First, simulations are used to demonstrate the feasibility of the method, which is then evaluated on two ex vivo datasets. RFA is simulated on a simplified geometry to generate realistic longitudinal temperature maps and the resulting necrosis. Computed temperatures are compared with the temperature evolution recorded by thermometers and with temperatures monitored by ultrasound (US) in a 2D plane containing the ablation tip. Two ablations were performed on two cadaveric bovine livers, yielding an average error of 2.2 °C between the computed and thermistor temperatures, and average errors of 1.4 °C and 2.7 °C between the computed and US-monitored temperatures at two different time points during the ablation (t = 240 s and t = 900 s).

  10. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  11. [A wireless smart home system based on brain-computer interface of steady state visual evoked potential].

    PubMed

    Zhao, Li; Xing, Xiao; Guo, Xuhong; Liu, Zehua; He, Yang

    2014-10-01

    A brain-computer interface (BCI) system achieves communication and control between humans and computers or other electronic equipment using electroencephalogram (EEG) signals. This paper describes the working theory of a wireless smart home system based on BCI technology. A single-chip microcomputer drives LED-based visual stimulation of the eyes to elicit the steady-state visual evoked potential (SSVEP). Power spectral analysis built on the LabVIEW platform then processes the EEG recorded under the different stimulation frequencies in real time and translates it into different control instructions. These instructions are received by wireless transceiver equipment that controls the household appliances, achieving intelligent control of the specified devices. The experimental results showed that the correct rate for the 10 subjects reached 100%, and the average control time for a single device was 4 seconds; the design therefore achieves the intended purpose of a smart home system.
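
    The heart of SSVEP-based control is picking the stimulation frequency with the largest spectral power in the recorded EEG. A minimal sketch is given below; the sampling rate, candidate frequencies and synthetic EEG segment are assumptions, not details of the described system.

    ```python
    import numpy as np

    def detect_ssvep(eeg, fs, stim_freqs, band=0.5):
        """Pick the stimulation frequency whose spectral power (fundamental) is
        largest in a single-channel EEG segment."""
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        power = np.abs(np.fft.rfft(eeg)) ** 2
        scores = [power[(freqs > f - band) & (freqs < f + band)].sum() for f in stim_freqs]
        return stim_freqs[int(np.argmax(scores))]

    # Toy segment: 9 Hz flicker response plus noise, 4 s at 256 Hz sampling.
    fs, t = 256, np.arange(0, 4, 1 / 256)
    eeg = np.sin(2 * np.pi * 9 * t) + np.random.default_rng(0).normal(0, 2, t.size)
    print(detect_ssvep(eeg, fs, [7, 9, 11, 13]))   # -> 9, mapped to one appliance command
    ```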

  12. Nodding off or switching off? The use of popular media as a sleep aid in secondary-school children.

    PubMed

    Eggermont, Steven; Van den Bulck, Jan

    2006-01-01

    To describe the use of media as a sleep aid in adolescents and relate this to their sleep routines and feelings of tiredness. A questionnaire about using media as a sleep aid, media presence in bedrooms, time to bed and time out of bed on average weekdays and average weekend days, and questions regarding level of tiredness in the morning, at school, after a day at school and after the weekend was completed by 2546 seventh and 10th grade children in a random sample of 15 schools. Of the adolescents, 36.7% reported watching television to help them fall asleep. In total, 28.2% of the boys and 14.7% of the girls used computer games as a sleep aid. Music was used to fall asleep by 60.2% of the adolescents in this sample. About half of the adolescents read books to fall asleep. Except for reading books, using media as a sleep aid is negatively related to respondents' time to bed on weekdays, their number of hours of sleep per week and their self-reported level of tiredness. Using media as a sleep aid appears to be common practice among adolescents. Those who reported using music, television, and computer games more often as a sleeping aid slept fewer hours and were significantly more tired.

  13. Whole-annulus aeroelasticity analysis of a 17-bladerow WRF compressor using an unstructured Navier Stokes solver

    NASA Astrophysics Data System (ADS)

    Wu, X.; Vahdati, M.; Sayma, A.; Imregun, M.

    2005-03-01

    This paper describes a large-scale aeroelasticity computation for an aero-engine core compressor. The computational domain includes all 17 bladerows, resulting in a mesh with over 68 million points. The Favre-averaged Navier Stokes equations are used to represent the flow in a non-linear time-accurate fashion on unstructured meshes of mixed elements. The structural model of the first two rotor bladerows is based on a standard finite element representation. The fluid mesh is moved at each time step according to the structural motion so that changes in blade aerodynamic damping and flow unsteadiness can be accommodated automatically. An efficient domain decomposition technique, where special care was taken to balance the memory requirement across processors, was developed as part of the work. The calculation was conducted in parallel mode on 128 CPUs of an SGI Origin 3000. Ten vibration cycles were obtained using over 2.2 CPU years, though the elapsed time was a week only. Steady-state flow measurements and predictions were found to be in good agreement. A comparison of the averaged unsteady flow and the steady-state flow revealed some discrepancies. It was concluded that, in due course, the methodology would be adopted by industry to perform routine numerical simulations of the unsteady flow through entire compressor assemblies with vibrating blades not only to minimise engine and rig tests but also to improve performance predictions.

  14. EEG/ERP adaptive noise canceller design with controlled search space (CSS) approach in cuckoo and other optimization algorithms.

    PubMed

    Ahirwal, M K; Kumar, Anil; Singh, G K

    2013-01-01

    This paper explores the migration of adaptive filtering with swarm intelligence/evolutionary techniques employed in the field of electroencephalogram/event-related potential noise cancellation or extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceller. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancellers with traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-squares algorithms are also implemented to compare the results. ERP signals such as simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used, due to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 s and 1.73E-01, respectively. Traditional algorithms take negligible time but are unable to offer good shape preservation of the ERP, with an average computational time and shape measure of 1.41E-02 s and 2.60E+00, respectively.
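
    For reference, a classic LMS adaptive noise canceller, one of the traditional baselines mentioned above, can be sketched in a few lines; the filter length, step size and synthetic signals are assumptions, and the swarm-optimized, controlled-search-space designs are not reproduced here.

    ```python
    import numpy as np

    def lms_canceller(primary, reference, n_taps=8, mu=0.01):
        """Classic LMS adaptive noise canceller: `primary` = ERP + correlated noise,
        `reference` = noise-only channel; returns the cleaned (error) signal."""
        w = np.zeros(n_taps)
        out = np.zeros(len(primary))
        for n in range(n_taps, len(primary)):
            x = reference[n - n_taps:n][::-1]      # most recent reference samples
            y = w @ x                              # noise estimate
            e = primary[n] - y                     # error = ERP estimate
            w += 2 * mu * e * x                    # LMS weight update
            out[n] = e
        return out

    # Toy use: a simulated evoked potential buried in a filtered noise source.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1000)
    erp = np.exp(-(t - 0.3) ** 2 / 0.002)
    noise = rng.normal(0, 1, t.size)
    primary = erp + np.convolve(noise, [0.5, 0.3, 0.2], mode="same")
    cleaned = lms_canceller(primary, noise)
    ```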

  15. Automated ventilator testing.

    PubMed

    Ghaly, J; Smith, A L

    1994-06-01

    A new era has arrived for the Biomedical Engineering Department at the Royal Women's Hospital in Melbourne. We have developed a system to qualitatively test for intermittent or unconfirmed faults associated with Bear Cub ventilators. Where previous testing has been inadequate, computer logging is now used to interface with the RT200 Timeter Calibration Analyser (TCA) and obtain a real-time display of data, which can be stored and graphed. Using Quick Basic version 4.5, it was possible to establish communication between the TCA and an IBM-compatible computer such that meaningful displays of machine performance were produced. From the parameters measured it has been possible to obtain data on Peak Pressure, Inspiratory to Expiratory ratio (I:E ratio), Peak Flow and Rate. Monitoring is not limited to these parameters, though these were selected for our particular needs. These parameters are plotted in two ways: (1) compressed average versus time, with up to 24 hours on one screen; and (2) raw data, with 36 minutes displayed on each screen. The compressed data give an overview which allows easy identification of intermittent faults. The uncompressed data confirm that the averaged signal is a realistic representation of the situation. One of the major benefits of this type of data analysis is that ventilator performance may be monitored over a long period of time without requiring the presence of a service technician. It also allows individual ventilator performance to be graphically compared to that of other ventilators.
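
    The compressed 24-hour overview amounts to block-averaging the logged samples; a minimal sketch is given below, with the breath rate and block size chosen purely for illustration.

    ```python
    import numpy as np

    def compress_average(samples, block_size):
        """Block-average a long logged signal (e.g. peak-pressure samples) so that
        24 h of data fits on one screen while intermittent deviations still show up."""
        n = len(samples) // block_size * block_size
        return samples[:n].reshape(-1, block_size).mean(axis=1)

    # Hypothetical: one peak-pressure reading per breath at 30 breaths/min for 24 h,
    # compressed to ~1440 points (one per minute) for the overview plot.
    readings = np.random.default_rng(0).normal(20, 1, 30 * 60 * 24)
    overview = compress_average(readings, 30)
    ```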

  16. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm of time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 272 is considered as an example.
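
    A minimal simulation of the two model classes discussed, moving-average and autoregressive processes, is sketched below; it is a generic illustration, not the paper's FORTRAN algorithm.

    ```python
    import numpy as np

    def simulate_arma(n, phi=(), theta=(), sigma=1.0, seed=0):
        """Simulate an ARMA(p, q) process:
        x_t = sum_k phi_k x_{t-k} + e_t + sum_k theta_k e_{t-k}."""
        rng = np.random.default_rng(seed)
        e = rng.normal(0.0, sigma, n)
        x = np.zeros(n)
        for t in range(n):
            ar = sum(p * x[t - k - 1] for k, p in enumerate(phi) if t - k - 1 >= 0)
            ma = sum(q * e[t - k - 1] for k, q in enumerate(theta) if t - k - 1 >= 0)
            x[t] = ar + ma + e[t]
        return x

    # Pure moving-average and pure autoregressive examples, the two model classes above.
    ma1 = simulate_arma(500, theta=(0.8,))
    ar1 = simulate_arma(500, phi=(0.9,))
    ```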

  17. Propagation of Statistical Noise Through a Two-Qubit Maximum Likelihood Tomography

    DTIC Science & Technology

    2018-04-01

    Daniel E Jones, Brian T Kirby, and Michael Brodsky, Computational and Information Sciences Directorate, ARL.

  18. Computational Biomathematics: Toward Optimal Control of Complex Biological Systems

    DTIC Science & Technology

    2016-09-26

    ...equations seems daunting. However, we are currently working on parameter estimation methods that show some promise. In this approach, we generate data from...

  19. Long-term predictive capability of erosion models

    NASA Technical Reports Server (NTRS)

    Veerabhadra, P.; Buckley, D. H.

    1983-01-01

    A brief overview is presented of long-term cavitation and liquid-impingement erosion and of the modeling methods proposed by different investigators, including the curve-fit approach. A table was prepared to highlight the number of variables each model requires in order to compute the erosion-versus-time curves. A power-law relation based on the average erosion rate is suggested, which may solve several modeling problems.
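
    Such a power-law relation, erosion ≈ A·t^n, can be fitted by linear regression in log-log space; the sketch below uses hypothetical measurements, not data from the cited models.

    ```python
    import numpy as np

    # Hypothetical cumulative-erosion measurements (mass loss vs exposure time).
    t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])            # hours
    erosion = np.array([0.02, 0.05, 0.12, 0.27, 0.60, 1.35])  # mg

    # Fit erosion = A * t**n by linear regression in log-log space;
    # the average erosion rate is then erosion(t)/t = A * t**(n-1).
    n, logA = np.polyfit(np.log(t), np.log(erosion), 1)
    A = np.exp(logA)
    predict = lambda time: A * time ** n
    print(f"erosion ≈ {A:.3f} * t^{n:.2f}; extrapolated 100 h loss: {predict(100):.2f} mg")
    ```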

  20. Simulation of Unsteady Flows Using an Unstructured Navier-Stokes Solver on Moving and Stationary Grids

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Vatsa, Veer N.; Atkins, Harold L.

    2005-01-01

    We apply an unsteady Reynolds-averaged Navier-Stokes (URANS) solver for unstructured grids to unsteady flows on moving and stationary grids. Example problems considered are relevant to active flow control and to stability and control. Computational results are presented using the Spalart-Allmaras turbulence model and are compared to experimental data. The effects of grid and time-step refinement are examined.

  1. Coupled-Flow Simulation of HP-LP Turbines Has Resulted in Significant Fuel Savings

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2001-01-01

    Our objective was to create a high-fidelity Navier-Stokes computer simulation of the flow through the turbines of a modern high-bypass-ratio turbofan engine. The simulation would have to capture the aerodynamic interactions between closely coupled high- and low-pressure turbines. A computer simulation of the flow in the GE90 turbofan engine's high-pressure (HP) and low-pressure (LP) turbines was created at GE Aircraft Engines under contract with the NASA Glenn Research Center. The three-dimensional steady-state computer simulation was performed using Glenn's average-passage approach named APNASA. The areas upstream and downstream of each blade row mutually interact with each other during engine operation. The embedded blade row operating conditions are modeled since the average passage equations in APNASA actively include the effects of the adjacent blade rows. The turbine airfoils, platforms, and casing are actively cooled by compressor bleed air. Hot gas leaks around the tips of rotors through labyrinth seals. The flow exiting the high-work HP turbines is partially transonic and, therefore, has a strong shock system in the transition region. The simulation was done using 121 processors of a Silicon Graphics Origin 2000 (NAS O2K) cluster at the NASA Ames Research Center, with a parallel efficiency of 87 percent in 15 hr. The typical average-passage analysis mesh size per blade row was 280 by 45 by 55, or approximately 700,000 grid points. The total number of blade rows was 18 for a combined HP and LP turbine system, including the struts in the transition duct and the exit guide vane, which together contain 12.6 million grid points. Design cycle turnaround time requirements ran typically from 24 to 48 hr of wall clock time. The number of iterations for convergence was 10,000 at 8.03 × 10^-5 s/iteration/grid point (NAS O2K). Parallel processing by up to 40 processors is required to meet the design cycle time constraints. This is the first-ever flow simulation of coupled HP and LP turbines. In addition, it includes the struts in the transition duct and the exit guide vanes.

  2. Far field and wavefront characterization of a high-power semiconductor laser for free space optical communications

    NASA Technical Reports Server (NTRS)

    Cornwell, Donald M., Jr.; Saif, Babak N.

    1991-01-01

    The spatial pointing angle and far field beamwidth of a high-power semiconductor laser are characterized as a function of CW power and also as a function of temperature. The time-averaged spatial pointing angle and spatial lobe width were measured under intensity-modulated conditions. The measured pointing deviations are determined to be well within the pointing requirements of the NASA Laser Communications Transceiver (LCT) program. A computer-controlled Mach-Zehnder phase-shifter interferometer is used to characterize the wavefront quality of the laser. The rms phase error over the entire pupil was measured as a function of CW output power. Time-averaged measurements of the wavefront quality are also made under intensity-modulated conditions. The measured rms phase errors are determined to be well within the wavefront quality requirements of the LCT program.

  3. Comprehensive time average digital holographic vibrometry

    NASA Astrophysics Data System (ADS)

    Psota, Pavel; Lédl, Vít; Doleček, Roman; Mokrý, Pavel; Vojtíšek, Petr; Václavík, Jan

    2016-12-01

    This paper presents a method that simultaneously deals with drawbacks of time-average digital holography: limited measurement range, limited spatial resolution, and quantitative analysis of the measured Bessel fringe patterns. When the frequency of the reference wave is shifted by an integer multiple of frequency at which the object oscillates, the measurement range of the method can be shifted either to smaller or to larger vibration amplitudes. In addition, phase modulation of the reference wave is used to obtain a sequence of phase-modulated fringe patterns. Such fringe patterns can be combined by means of phase-shifting algorithms, and amplitudes of vibrations can be straightforwardly computed. This approach independently calculates the amplitude values in every single pixel. The frequency shift and phase modulation are realized by proper control of Bragg cells and therefore no additional hardware is required.

  4. Acoustic analysis of speech variables during depression and after improvement.

    PubMed

    Nilsonne, A

    1987-09-01

    Speech recordings were made of 16 depressed patients during depression and after clinical improvement. The recordings were analyzed using a computer program which extracts acoustic parameters from the fundamental frequency contour of the voice. The percent pause time, the standard deviation of the voice fundamental frequency distribution, the standard deviation of the rate of change of the voice fundamental frequency and the average speed of voice change were found to correlate to the clinical state of the patient. The mean fundamental frequency, the total reading time and the average rate of change of the voice fundamental frequency did not differ between the depressed and the improved group. The acoustic measures were more strongly correlated to the clinical state of the patient as measured by global depression scores than to single depressive symptoms such as retardation or agitation.

  5. Frequency domain averaging based experimental evaluation of gear fault without tachometer for fluctuating speed conditions

    NASA Astrophysics Data System (ADS)

    Sharma, Vikas; Parey, Anand

    2017-02-01

    Gear fault diagnosis under fluctuating speeds is challenging because of the dynamic behavior of the forces involved, and many industrial applications employ gearboxes that operate under such conditions. For gearbox diagnostics, various vibration-based signal processing techniques are commonly employed, such as the FFT, time synchronous averaging and time-frequency based wavelet transforms; assumptions about the data or computational complexity often limit their use. In this paper, fault diagnosis of a gearbox under fluctuating speeds is performed by frequency domain averaging (FDA) of intrinsic mode functions (IMFs) after their dynamic time warping (DTW). This not only attenuates the effect of fluctuating speeds but also extracts weak fault features that are masked in the vibration signal. Signals were acquired experimentally from a Drivetrain Diagnostic Simulator for different gear health conditions (healthy pinion, and pinions with a cracked, chipped or missing tooth) and were analyzed for different fluctuating speed profiles. Kurtosis was calculated for the IMFs of the acquired vibration signals before and after DTW. The application of FDA then highlights the fault frequencies present in the FFT of the faulty gears. The results suggest that the proposed approach is more effective for fault diagnosis under fluctuating speed.
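
    A compact sketch of the warp-then-average idea is given below: segments (standing in for IMFs from successive shaft revolutions) are aligned to a reference by dynamic time warping and their magnitude spectra are then averaged. It is a simplified stand-in for the authors' processing chain, and all signals and lengths are assumptions.

    ```python
    import numpy as np

    def dtw_path(a, b):
        """Dynamic-time-warping alignment path between two 1-D signals."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                    cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        i, j, path = n, m, []                      # backtrack the optimal path
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return path[::-1]

    def warp_to_reference(sig, ref):
        """Resample `sig` onto the time base of `ref` along the DTW path."""
        warped = np.zeros(len(ref))
        counts = np.zeros(len(ref))
        for i, j in dtw_path(ref, sig):
            warped[i] += sig[j]
            counts[i] += 1
        return warped / np.maximum(counts, 1)

    def fda(segments, ref):
        """Frequency-domain averaging of segments after warping each onto `ref`."""
        spectra = [np.abs(np.fft.rfft(warp_to_reference(s, ref))) for s in segments]
        return np.mean(spectra, axis=0)

    # Toy use: three segments of slightly different length (fluctuating speed).
    segs = [np.sin(2 * np.pi * 5 * np.linspace(0, 1, 120 + 10 * k)) for k in range(3)]
    avg_spectrum = fda(segs[1:], ref=segs[0])
    ```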

  6. Comparison of patient-specific instruments with standard surgical instruments in determining glenoid component position: a randomized prospective clinical trial.

    PubMed

    Hendel, Michael D; Bryan, Jason A; Barsoum, Wael K; Rodriguez, Eric J; Brems, John J; Evans, Peter J; Iannotti, Joseph P

    2012-12-05

    Glenoid component malposition for anatomic shoulder replacement may result in complications. The purpose of this study was to define the efficacy of a new surgical method to place the glenoid component. Thirty-one patients were randomized for glenoid component placement with use of either novel three-dimensional computed tomographic scan planning software combined with patient-specific instrumentation (the glenoid positioning system group), or conventional computed tomographic scan, preoperative planning, and surgical technique, utilizing instruments provided by the implant manufacturer (the standard surgical group). The desired position of the component was determined preoperatively. Postoperatively, a computed tomographic scan was used to define and compare the actual implant location with the preoperative plan. In the standard surgical group, the average preoperative glenoid retroversion was -11.3° (range, -39° to 17°). In the glenoid positioning system group, the average glenoid retroversion was -14.8° (range, -27° to 7°). When the standard surgical group was compared with the glenoid positioning system group, patient-specific instrumentation technology significantly decreased (p < 0.05) the average deviation of implant position for inclination and medial-lateral offset. Overall, the average deviation in version was 6.9° in the standard surgical group and 4.3° in the glenoid positioning system group. The average deviation in inclination was 11.6° in the standard surgical group and 2.9° in the glenoid positioning system group. The greatest benefit of patient-specific instrumentation was observed in patients with retroversion in excess of 16°; the average deviation was 10° in the standard surgical group and 1.2° in the glenoid positioning system group (p < 0.001). Preoperative planning and patient-specific instrumentation use resulted in a significant improvement in the selection and use of the optimal type of implant and a significant reduction in the frequency of malpositioned glenoid implants. Novel three-dimensional preoperative planning, coupled with patient and implant-specific instrumentation, allows the surgeon to better define the preoperative pathology, select the optimal implant design and location, and then accurately execute the plan at the time of surgery.

  7. Multiscale modelling of hydraulic conductivity in vuggy porous media

    PubMed Central

    Daly, K. R.; Roose, T.

    2014-01-01

    Flow in both saturated and non-saturated vuggy porous media, i.e. soil, is inherently multiscale. The complex microporous structure of the soil aggregates and the wider vugs provides a multitude of flow pathways and has received significant attention from the X-ray computed tomography (CT) community with a constant drive to image at higher resolution. Using multiscale homogenization, we derive averaged equations to study the effects of the microscale structure on the macroscopic flow. The averaged model captures the underlying geometry through a series of cell problems and is verified through direct comparison to numerical simulations of the full structure. These methods offer significant reductions in computation time and allow us to perform three-dimensional calculations with complex geometries on a desktop PC. The results show that the surface roughness of the aggregate has a significantly greater effect on the flow than the microstructure within the aggregate. Hence, this is the region in which the resolution of X-ray CT for image-based modelling has the greatest impact. PMID:24511248

  8. Effects of May through July 2015 storm events on suspended sediment loads, sediment trapping efficiency, and storage capacity of John Redmond Reservoir, east-central Kansas

    USGS Publications Warehouse

    Foster, Guy M.

    2016-06-20

    The U.S. Geological Survey, in cooperation with the Kansas Water Office, computed the suspended-sediment inflows and retention in John Redmond Reservoir during May through July 2015. Computations relied upon previously published turbidity-suspended sediment relations at water-quality monitoring sites located upstream and downstream from the reservoir. During the 3-month period, approximately 872,000 tons of sediment entered the reservoir, and 57,000 tons were released through the reservoir outlet. The average monthly trapping efficiency during this period was 93 percent, and monthly averages ranged from 83 to 97 percent. During the study period, an estimated 980 acre-feet of storage was lost, over 2.4 times the design annual sedimentation rate of the reservoir. Storm inflows during the 3-month analysis period reduced reservoir storage in the conservation pool approximately 1.6 percent. This indicates that large inflows, coupled with minimal releases, can have substantial effects on reservoir storage and lifespan.
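
    Trapping efficiency follows directly from the inflow and outflow loads; in the sketch below the three-month totals match the published figures, but the monthly split is illustrative only.

    ```python
    # Monthly sediment trapping efficiency from the upstream (inflow) and
    # downstream (outflow) suspended-sediment loads; the monthly breakdown is
    # hypothetical, chosen only so the totals match the 872,000/57,000 ton figures.
    inflow_tons = {"May": 500_000, "June": 250_000, "July": 122_000}
    outflow_tons = {"May": 15_000, "June": 25_000, "July": 17_000}

    for month in inflow_tons:
        te = 100 * (inflow_tons[month] - outflow_tons[month]) / inflow_tons[month]
        print(f"{month}: trapping efficiency {te:.0f}%")

    total_in, total_out = sum(inflow_tons.values()), sum(outflow_tons.values())
    print(f"period: {100 * (total_in - total_out) / total_in:.0f}% trapped")
    ```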

  9. Blanket activation and afterheat for the Compact Reversed-Field Pinch Reactor

    NASA Astrophysics Data System (ADS)

    Davidson, J. W.; Battat, M. E.

    A detailed assessment has been made of the activation and afterheat for a Compact Reversed-Field Pinch Reactor (CRFPR) blanket using a two-dimensional model that included the limiter, the vacuum ducts, and the manifolds and headers for cooling the limiter and the first and second walls. Region-averaged, multigroup fluxes and prompt gamma-ray/neutron heating rates were calculated using the two-dimensional, discrete-ordinates code TRISM. Activation and depletion calculations were performed with the code FORIG using one-group cross sections generated with the TRISM region-averaged fluxes. Afterheat calculations were performed for regions near the plasma, i.e., the limiter, first wall, etc. assuming a 10-day irradiation. Decay heats were computed for decay periods up to 100 minutes. For the activation calculations, the irradiation period was taken to be one year and blanket activity inventories were computed for decay times to 4 x 10 years. These activities were also calculated as the toxicity-weighted biological hazard potential (BHP).

  10. Neural signatures of attention: insights from decoding population activity patterns.

    PubMed

    Sapountzis, Panagiotis; Gregoriou, Georgia G

    2018-01-01

    Understanding brain function and the computations that individual neurons and neuronal ensembles carry out during cognitive functions is one of the biggest challenges in neuroscientific research. To this end, invasive electrophysiological studies have provided important insights by recording the activity of single neurons in behaving animals. To average out noise, responses are typically averaged across repetitions and across neurons that are usually recorded on different days. However, the brain makes decisions on short time scales based on limited exposure to sensory stimulation by interpreting responses of populations of neurons on a moment to moment basis. Recent studies have employed machine-learning algorithms in attention and other cognitive tasks to decode the information content of distributed activity patterns across neuronal ensembles on a single trial basis. Here, we review results from studies that have used pattern-classification decoding approaches to explore the population representation of cognitive functions. These studies have offered significant insights into population coding mechanisms. Moreover, we discuss how such advances can aid the development of cognitive brain-computer interfaces.
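
    A minimal pattern-classification decoding example on synthetic single-trial spike counts is sketched below, assuming scikit-learn is available; it illustrates the general approach, not any particular study reviewed here.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical single-trial data: spike counts of 50 neurons on 200 trials,
    # labels = attended location (0/1). Decoding accuracy well above chance implies
    # the population carries attention information on a trial-by-trial basis.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 200)
    tuning = rng.normal(0, 1, 50)                         # per-neuron attention modulation
    rates = rng.poisson(5 + 2 * np.outer(labels, tuning).clip(min=0))

    acc = cross_val_score(LogisticRegression(max_iter=1000), rates, labels, cv=5)
    print("decoding accuracy:", acc.mean())
    ```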

  11. Impact of computer-assisted data collection, evaluation and management on the cancer genetic counselor's time providing patient care.

    PubMed

    Cohen, Stephanie A; McIlvried, Dawn E

    2011-06-01

    Cancer genetic counseling sessions traditionally encompass collecting medical and family history information, evaluating that information for the likelihood of a genetic predisposition for a hereditary cancer syndrome, conveying that information to the patient, offering genetic testing when appropriate, obtaining consent and subsequently documenting the encounter with a clinic note and pedigree. Software programs exist to collect family and medical history information electronically, intending to improve the efficiency and simplicity of collecting, managing and storing these data. This study compares the genetic counselor's time spent on cancer genetic counseling tasks in a traditional model and in one using computer-assisted data collection, which is then used to generate a pedigree, risk assessment and consult note. Genetic counselor time spent collecting family and medical history and providing face-to-face counseling for a new patient session decreased from an average of 85 min to 69 min when using computer-assisted data collection. However, there was no statistically significant change in overall genetic counselor time across all aspects of the genetic counseling process, due to an increased amount of time spent generating an electronic pedigree and consult note. Improvements in the computer program's technical design would potentially minimize data manipulation. Certain aspects of this program, such as electronic collection of family history and risk assessment, appear effective in improving cancer genetic counseling efficiency while others, such as generating an electronic pedigree and consult note, do not.

  12. Sorting signed permutations by inversions in O(nlogn) time.

    PubMed

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.

  13. Zero-block mode decision algorithm for H.264/AVC.

    PubMed

    Lee, Yu-Ming; Lin, Yinyi

    2009-03-01

    In a previous paper, we proposed a zero-block intermode decision algorithm for H.264 video coding based upon the number of zero-blocks of 4 x 4 DCT coefficients between the current macroblock and the co-located macroblock. The proposed algorithm can achieve significant improvement in computation, but the computation performance is limited for high bit-rate coding. To improve computation efficiency, in this paper, we suggest an enhanced zero-block decision algorithm, which uses an early zero-block detection method to compute the number of zero-blocks instead of direct DCT and quantization (DCT/Q) calculation and incorporates two adequate decision methods for the semi-stationary and nonstationary regions of a video sequence. In addition, the zero-block decision algorithm is also applied to intramode prediction in the P frame. The enhanced zero-block decision algorithm brings a reduction of, on average, 27% of the total encoding time compared to the zero-block decision algorithm.

  14. SoftWAXS: a computational tool for modeling wide-angle X-ray solution scattering from biomolecules.

    PubMed

    Bardhan, Jaydeep; Park, Sanghyun; Makowski, Lee

    2009-10-01

    This paper describes a computational approach to estimating wide-angle X-ray solution scattering (WAXS) from proteins, which has been implemented in a computer program called SoftWAXS. The accuracy and efficiency of SoftWAXS are analyzed for analytically solvable model problems as well as for proteins. Key features of the approach include a numerical procedure for performing the required spherical averaging and explicit representation of the solute-solvent boundary and the surface of the hydration layer. These features allow the Fourier transform of the excluded volume and hydration layer to be computed directly and with high accuracy. This approach will allow future investigation of different treatments of the electron density in the hydration shell. Numerical results illustrate the differences between this approach to modeling the excluded volume and a widely used model that treats the excluded-volume function as a sum of Gaussians representing the individual atomic excluded volumes. Comparison of the results obtained here with those from explicit-solvent molecular dynamics clarifies shortcomings inherent to the representation of solvent as a time-averaged electron-density profile. In addition, an assessment is made of how the calculated scattering patterns depend on input parameters such as the solute-atom radii, the width of the hydration shell and the hydration-layer contrast. These results suggest that obtaining predictive calculations of high-resolution WAXS patterns may require sophisticated treatments of solvent.
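
    As a point of reference for the spherical-averaging step, the Debye formula gives the exact orientational average for discrete scatterers with q-independent form factors; the sketch below illustrates that formula and is not the SoftWAXS procedure (which additionally treats the excluded volume and hydration layer).

    ```python
    import numpy as np

    def debye_intensity(coords, f, q_values):
        """Orientationally averaged scattering intensity I(q) for point scatterers at
        `coords` with q-independent form factors `f`, via the Debye formula:
        I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij)."""
        diff = coords[:, None, :] - coords[None, :, :]
        r = np.sqrt((diff ** 2).sum(-1))
        ff = np.outer(f, f)
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r), with sinc(0) = 1.
        return np.array([(ff * np.sinc(q * r / np.pi)).sum() for q in q_values])

    # Toy example: four dummy "atoms" with unit form factors.
    coords = np.array([[0, 0, 0], [3, 0, 0], [0, 3, 0], [0, 0, 3]], float)
    iq = debye_intensity(coords, np.ones(4), np.linspace(0.01, 2.0, 50))
    ```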

  15. Using Multiple Endmember Spectral Mixture Analysis of MODIS Data for Computing the Fire Potential Index in Southern California

    NASA Astrophysics Data System (ADS)

    Schneider, P.; Roberts, D. A.

    2007-12-01

    The Fire Potential Index (FPI) is currently the only operationally used wildfire susceptibility index in the United States that incorporates remote sensing data in addition to meteorological information. Its remote sensing component utilizes relative greenness (RG) derived from an NDVI time series as a proxy for computing the ratio of live to dead vegetation. This study investigates the potential of Multiple Endmember Spectral Mixture Analysis (MESMA) as a more direct and physically reasonable way of computing the live ratio and applying it to the computation of the FPI. A time series of 16-day reflectance composites of Moderate Resolution Imaging Spectroradiometer (MODIS) data was used to perform the analysis. Endmember selection for green vegetation (GV), non-photosynthetic vegetation (NPV) and soil was performed in two stages. First, a subset of suitable endmembers was selected from an extensive library of reference and image spectra for each class using Endmember Average Root Mean Square Error (EAR), Minimum Average Spectral Angle (MASA) and a count-based technique. Second, the most appropriate endmembers for the specific data set were selected from the subset by running a series of 2-endmember models on representative images and choosing the ones that modeled the majority of pixels. The final set of endmembers was used for running MESMA on southern California MODIS composites from 2000 to 2006. 3- and 4-endmember models were considered. The best model was chosen on a per-pixel basis according to the minimum root mean square error of the models at each level of complexity. Endmember fractions were normalized by the shade endmember to generate realistic fractions of GV and NPV. In order to validate the MESMA-derived GV fractions, they were compared against live ratio estimates from RG. A significant spatial and temporal relationship between both measures was found, indicating that the GV fraction has the potential to substitute for RG in computing the FPI. To further test this hypothesis, the live ratio estimates obtained from MESMA were used to compute daily FPI maps for southern California from 2001 to 2006. A validation with historical wildfire data from the MODIS Active Fire product was carried out over the same time period using logistic regression. Initial results show that MESMA-derived GV fraction can be used successfully for generating FPI maps of southern California.

  16. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system that uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This increases the computational burden but also introduces a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation, and they map readily onto parallel digital architectures, which suggests new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures. Typically, optical systems (like 4f correlators) are limited to phase-only implementations with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter-bank implementations are possible, and they have the advantage of time averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.
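
    The inner-product idea can be pictured with a generic sketch: score an input against a bank of reference templates by normalized inner products and keep the best match. This is only an illustrative stand-in; the dissertation's vector inner product operators and "Super Image" construction are not reproduced here.

      import numpy as np

      def filter_bank_response(image, templates):
          """Normalized inner products of an image with each template in a bank."""
          x = image.ravel().astype(float)
          x /= np.linalg.norm(x) + 1e-12
          scores = []
          for t in templates:
              v = t.ravel().astype(float)
              v /= np.linalg.norm(v) + 1e-12
              scores.append(float(x @ v))       # one inner product per filter
          scores = np.array(scores)
          return int(scores.argmax()), scores

      # toy usage: templates are rotated versions of a simple bar pattern
      base = np.zeros((8, 8))
      base[:, 3:5] = 1.0
      templates = [np.rot90(base, k) for k in range(4)]
      best, scores = filter_bank_response(np.rot90(base, 1), templates)
      print(best, scores)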

  17. 14 CFR Appendix A to Part 187 - Methodology for Computation of Fees for Certification Services Performed Outside the United States

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... hours of a U.S. Federal Government employee. This result in the hourly government paid cost of an... average annual leave hours and 1,800 average annual hours available for work for computer manpower...

  18. Average chewing pattern improvements following Disclusion Time reduction.

    PubMed

    Kerstein, Robert B; Radke, John

    2017-05-01

    Studies involving electrognathographic (EGN) recordings of chewing improvements obtained following occlusal adjustment therapy are rare, as most studies lack 'chewing' within the research. The objectives of this study were to determine whether reducing a long Disclusion Time to a short Disclusion Time with the immediate complete anterior guidance development (ICAGD) coronoplasty in symptomatic subjects altered their average chewing pattern (ACP) and their muscle function. Twenty-nine muscularly symptomatic subjects underwent simultaneous EMG and EGN recordings of right and left gum chewing, before and after the ICAGD coronoplasty. Differences in the mean Disclusion Time, the mean muscle contraction cycle, and the mean ACP resulting from ICAGD were assessed with Student's paired t-test (α = 0.05). Disclusion Time reductions from ICAGD were significant (2.11 to 0.45 s, p = 0.0000). Post-ICAGD muscle changes were significant in the mean area (p = 0.000001), the peak amplitude (p = 0.00005), the time to peak contraction (p < 0.000004), the time to 50% peak contraction (p < 0.00001), and in the decreased number of silent periods per side (right p < 0.0000002; left p < 0.0000006). Post-ICAGD ACP changes were also significant; the terminal chewing position became closer to centric occlusion (p < 0.002), the maximum and average chewing velocities increased (p < 0.002; p < 0.00005), and the opening and closing times, the cycle time, and the occlusal contact time all decreased (p < 0.004-0.0001). The ACP shape, speed, consistency, muscular coordination, and vertical opening can be significantly improved in muscularly dysfunctional TMD patients within one week of undergoing the ICAGD enameloplasty. Computer-measured and guided occlusal adjustments quickly and physiologically improved chewing, without requiring the patients to wear pre- or post-treatment appliances.

  19. Computer-assisted Behavioral Therapy and Contingency Management for Cannabis Use Disorder

    PubMed Central

    Budney, Alan J.; Stanger, Catherine; Tilford, J. Mick; Scherer, Emily; Brown, Pamela C.; Li, Zhongze; Li, Zhigang; Walker, Denise

    2015-01-01

    Computer-assisted behavioral treatments hold promise for enhancing access to and reducing costs of treatments for substance use disorders. This study assessed the efficacy of a computer-assisted version of an efficacious, multicomponent treatment for cannabis use disorders (CUD), i.e., motivational enhancement therapy, cognitive-behavioral therapy, and abstinence-based contingency management (MET/CBT/CM). An initial cost comparison was also performed. Seventy-five adult participants, 59% African Americans, seeking treatment for CUD received either MET only (BRIEF), therapist-delivered MET/CBT/CM (THERAPIST), or computer-delivered MET/CBT/CM (COMPUTER). During treatment, the THERAPIST and COMPUTER conditions engendered longer durations of continuous cannabis abstinence than BRIEF (p < .05), but did not differ from each other. Abstinence rates and reduction in days of use over time were maintained in COMPUTER at least as well as in THERAPIST. COMPUTER averaged approximately $130 (p < .05) less per case than THERAPIST in therapist costs, which offset most of the costs of CM. Results add to promising findings that illustrate the potential for computer-assisted delivery methods to enhance access to evidence-based care, reduce costs, and possibly improve outcomes. The observed maintenance effects and the cost findings require replication in larger clinical trials. PMID:25938629

  20. Computational Fluid-Dynamic Analysis after Carotid Endarterectomy: Patch Graft versus Direct Suture Closure.

    PubMed

    Domanin, Maurizio; Buora, Adelaide; Scardulla, Francesco; Guerciotti, Bruno; Forzenigo, Laura; Biondetti, Pietro; Vergara, Christian

    2017-10-01

    Closure technique after carotid endarterectomy (CEA) remains an issue of debate. Routine use of patch graft (PG) has been advocated to reduce restenosis, stroke, and death, but its protective effect, particularly against late restenosis, is less evident, and recent studies call this thesis into question. This study aims to compare PG and direct suture (DS) by means of computational fluid dynamics (CFD). To identify carotid regions with flow recirculation more prone to restenosis development, we analyzed the time-averaged oscillatory shear index (OSI) and the relative residence time (RRT), which are well-known indices correlated with plaque formation. CFD was performed in 12 patients (13 carotids) who underwent surgery for stenosis >70%, 9 with PG and 4 with DS. Flow conditions were modeled using patient-specific boundary conditions derived from Doppler ultrasound and geometries from magnetic resonance angiography. The mean value of the spatially averaged OSI was 0.07 for the PG group and 0.03 for the DS group; the percentage of area with OSI above a threshold of 0.2 was 10.1% and 3.7%, respectively. The mean of the spatially averaged RRT values was 4.4 1/Pa for the PG group and 1.6 1/Pa for the DS group; the percentage of area with RRT above a threshold of 4 1/Pa was 22.5% and 6.5%, respectively. Both OSI and RRT values were higher, and the areas of disturbed flow wider, when PG was preferred to DS. The highest absolute values computed by means of CFD were observed when PG was used indiscriminately, regardless of carotid diameter. DS does not seem to create negative hemodynamic conditions with potential adverse effects on long-term outcomes, in particular when CEA is performed at the common carotid artery and/or the bulb, or when the internal carotid artery (ICA) diameter is greater than 5.0 mm. Copyright © 2017 Elsevier Inc. All rights reserved.
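
    The two indices are standard post-processing quantities; as a minimal sketch (not the authors' CFD pipeline), they can be evaluated at one wall point from a wall-shear-stress vector time series over a cardiac cycle as OSI = 0.5(1 - |∫τ dt| / ∫|τ| dt) and RRT = 1 / ((1 - 2·OSI)·TAWSS).

      import numpy as np

      def osi_rrt(tau, dt):
          """OSI and RRT from a WSS vector time series at one wall point.

          tau : (T, 3) wall shear stress vectors [Pa] over one cardiac cycle
          dt  : sampling interval [s]
          """
          int_vec = tau.sum(axis=0) * dt                        # integral of tau
          int_mag = np.linalg.norm(tau, axis=1).sum() * dt      # integral of |tau|
          period = len(tau) * dt
          tawss = int_mag / period                              # time-averaged WSS [Pa]
          osi = 0.5 * (1.0 - np.linalg.norm(int_vec) / int_mag)
          rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)               # [1/Pa]
          return osi, rrt

      # toy usage: oscillating WSS with a small mean component (hypothetical values)
      t = np.linspace(0.0, 1.0, 200, endpoint=False)
      tau = np.stack([0.5 + 2.0 * np.sin(2 * np.pi * t),
                      np.zeros_like(t), np.zeros_like(t)], axis=1)
      print(osi_rrt(tau, t[1] - t[0]))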

  1. A general statistical test for correlations in a finite-length time series.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-06-07

    The statistical properties of the autocorrelation function from a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test the existence of correlations in a time series. The statistical test is verified by computer simulations and an application to single-molecule fluorescence spectroscopy is discussed.
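
    The two estimators contrasted here can be written in a few lines; the sketch below is a generic comparison (direct lag sums versus a circular FFT estimate via the Wiener-Khinchin theorem) applied to an i.i.d. series, not the paper's derivation of the variance expressions.

      import numpy as np

      def acf_direct(x, max_lag):
          """Direct (moving-average style) autocorrelation estimate."""
          x = np.asarray(x, float) - np.mean(x)
          n = len(x)
          var = np.dot(x, x) / n
          return np.array([np.dot(x[:n - k], x[k:]) / (n * var)
                           for k in range(max_lag + 1)])

      def acf_fft(x, max_lag):
          """FFT-based (circular) autocorrelation estimate."""
          x = np.asarray(x, float) - np.mean(x)
          spectrum = np.abs(np.fft.rfft(x)) ** 2
          acov = np.fft.irfft(spectrum, n=len(x)) / len(x)
          return acov[:max_lag + 1] / acov[0]

      # i.i.d. noise: both estimates should scatter around zero for lags > 0
      rng = np.random.default_rng(1)
      x = rng.normal(size=4096)
      print(acf_direct(x, 5))
      print(acf_fft(x, 5))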

  2. 26 CFR 1.163-10T - Qualified residence interest (temporary).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... mortgage interest received in trade or business from individuals) reports the average balance of a secured... general. (ii)Example. (g)Selection of method. (h)Average balance. (1)Average balance defined. (2)Average...)Examples. (5)Average balance computed using average of beginning and ending balance. (i)In general. (ii...

  3. Runway exit designs for capacity improvement demonstrations. Phase 2: Computer model development

    NASA Technical Reports Server (NTRS)

    Trani, A. A.; Hobeika, A. G.; Kim, B. J.; Nunna, V.; Zhong, C.

    1992-01-01

    The development of a computer simulation/optimization model is described to: (1) estimate the optimal locations of existing and proposed runway turnoffs; and (2) estimate the geometric design requirements associated with newly developed high-speed turnoffs. The model, named REDIM 2.0, represents a stand-alone application to be used by airport planners, designers, and researchers alike to estimate optimal turnoff locations. The main procedures implemented in the software package are described in detail, and possible applications are illustrated using six major runway scenarios. The main output of the computer program is the estimation of the weighted average runway occupancy time for a user-defined aircraft population. The location and geometric characteristics of each turnoff are also provided to the user.
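
    The headline output is a simple weighted average; a minimal sketch with hypothetical aircraft-mix fractions and per-class occupancy times (not REDIM 2.0 output) is:

      # Weighted average runway occupancy time (ROT) for an aircraft population.
      # The mix fractions and per-class ROTs are hypothetical placeholders.
      aircraft_mix = {              # class: (fraction of operations, mean ROT [s])
          "small": (0.30, 42.0),
          "large": (0.55, 50.0),
          "heavy": (0.15, 58.0),
      }

      weighted_sum = sum(frac * rot for frac, rot in aircraft_mix.values())
      total_fraction = sum(frac for frac, _ in aircraft_mix.values())
      print(f"weighted average ROT = {weighted_sum / total_fraction:.1f} s")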

  4. Numerical Simulation of Rolling-Airframes Using a Multi-Level Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A supersonic rolling missile with two synchronous canard control surfaces is analyzed using an automated, inviscid, Cartesian method. Sequential-static and time-dependent dynamic simulations of the complete motion are computed for canard dither schedules for level flight, pitch, and yaw maneuvers. The dynamic simulations are compared directly against both high-resolution viscous simulations and relevant experimental data, and are also used to compute dynamic stability derivatives. The results show that both the body roll rate and the canard dither motion influence the roll-averaged forces and moments on the body. At the relatively low roll rates analyzed in the current work, these dynamic effects are modest; however, the dynamic computations are effective in predicting the dynamic stability derivatives, which can be significant for highly maneuverable missiles.

  5. Idle waves in high-performance computing

    NASA Astrophysics Data System (ADS)

    Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre

    2015-01-01

    The vast majority of parallel scientific applications distributes computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through the processes of scientific applications with local information exchange between neighboring processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study describes the large number of processes in parallel scientific applications as a continuous medium. This work is also a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.

  6. A Simple Method for Automated Equilibration Detection in Molecular Simulations.

    PubMed

    Chodera, John D

    2016-04-12

    Molecular simulations intended to compute equilibrium properties are often initiated from configurations that are highly atypical of equilibrium samples, a practice which can generate a distinct initial transient in mechanical observables computed from the simulation trajectory. Traditional practice in simulation data analysis recommends this initial portion be discarded to equilibration, but no simple, general, and automated procedure for this process exists. Here, we suggest a conceptually simple automated procedure that does not make strict assumptions about the distribution of the observable of interest in which the equilibration time is chosen to maximize the number of effectively uncorrelated samples in the production timespan used to compute equilibrium averages. We present a simple Python reference implementation of this procedure and demonstrate its utility on typical molecular simulation data.
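
    The procedure amounts to scanning candidate equilibration times t0 and keeping the one that maximizes N_eff(t0) = (T - t0) / g(t0), where g is the statistical inefficiency of the remaining production region. The sketch below is a minimal stand-in, using a simple truncated-autocorrelation estimate of g rather than the authors' reference implementation.

      import numpy as np

      def statistical_inefficiency(x):
          """g = 1 + 2 * sum of positive-lag autocorrelations (truncated)."""
          x = np.asarray(x, float) - np.mean(x)
          n = len(x)
          var = np.dot(x, x) / n
          if var == 0.0:
              return 1.0
          g = 1.0
          for k in range(1, n // 2):
              c = np.dot(x[:n - k], x[k:]) / ((n - k) * var)
              if c <= 0.0:
                  break
              g += 2.0 * c * (1.0 - k / n)
          return max(g, 1.0)

      def detect_equilibration(x):
          """Choose t0 maximizing N_eff = (n - t0) / g(x[t0:])."""
          n = len(x)
          best_t0, best_neff = 0, 0.0
          for t0 in range(0, n - 10, max(1, n // 100)):   # coarse scan for brevity
              neff = (n - t0) / statistical_inefficiency(x[t0:])
              if neff > best_neff:
                  best_t0, best_neff = t0, neff
          return best_t0, best_neff

      # toy usage: an exponential transient decaying into correlated noise
      rng = np.random.default_rng(2)
      noise = np.convolve(rng.normal(size=2000), np.ones(5) / 5, mode="same")
      series = 5.0 * np.exp(-np.arange(2000) / 100.0) + noise
      print(detect_equilibration(series))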

  7. A simple method for automated equilibration detection in molecular simulations

    PubMed Central

    Chodera, John D.

    2016-01-01

    Molecular simulations intended to compute equilibrium properties are often initiated from configurations that are highly atypical of equilibrium samples, a practice which can generate a distinct initial transient in mechanical observables computed from the simulation trajectory. Traditional practice in simulation data analysis recommends this initial portion be discarded to equilibration, but no simple, general, and automated procedure for this process exists. Here, we suggest a conceptually simple automated procedure that does not make strict assumptions about the distribution of the observable of interest, in which the equilibration time is chosen to maximize the number of effectively uncorrelated samples in the production timespan used to compute equilibrium averages. We present a simple Python reference implementation of this procedure, and demonstrate its utility on typical molecular simulation data. PMID:26771390

  8. Closed-form solutions of performability. [in computer systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1982-01-01

    It is noted that if computing system performance is degradable then system evaluation must deal simultaneously with aspects of both performance and reliability. One approach is the evaluation of a system's performability which, relative to a specified performance variable Y, generally requires solution of the probability distribution function of Y. The feasibility of closed-form solutions of performability when Y is continuous is examined. In particular, the modeling of a degradable buffer/multiprocessor system is considered whose performance Y is the (normalized) average throughput rate realized during a bounded interval of time. Employing an approximate decomposition of the model, it is shown that a closed-form solution can indeed be obtained.

  9. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    PubMed

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from the instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes for daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed that IDM closely followed the seasonal trend of the tower-derived GPP, with an average RMSE of 1.63 g C m^-2 day^-1 and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
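
    The overestimation by SADM is essentially Jensen's inequality: for a saturating (concave) response, the response of the daily-average input exceeds the average of the diurnally varying response. A toy illustration, assuming a rectangular-hyperbola light response and a half-sine PAR cycle rather than the coupled photosynthesis-stomatal conductance model, is shown below.

      import numpy as np

      def light_response(par, a_max=30.0, alpha=0.05):
          """Saturating (rectangular-hyperbola) response to PAR; units are arbitrary."""
          return a_max * alpha * par / (a_max + alpha * par)

      # hypothetical diurnal PAR cycle: half-sine over a 12-hour day, zero at night
      hours = np.linspace(0.0, 24.0, 24 * 60)
      par = np.where((hours > 6.0) & (hours < 18.0),
                     1500.0 * np.sin(np.pi * (hours - 6.0) / 12.0), 0.0)

      integrated = light_response(par).mean()        # average of the diurnal response
      daily_avg_input = light_response(par.mean())   # response of the daily-average input
      print(f"integrated: {integrated:.2f}, daily-average input: {daily_avg_input:.2f}")
      # the daily-average-input estimate is the larger of the two, as expected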

  10. Hot, Hot, Hot Computer Careers.

    ERIC Educational Resources Information Center

    Basta, Nicholas

    1988-01-01

    Discusses the increasing need for electrical, electronic, and computer engineers and scientists. Provides the current status of the computer industry and average salaries. Considers computer chip manufacture and the current chip shortage. (MVL)

  11. Real-time operation without a real-time operating system for instrument control and data acquisition

    NASA Astrophysics Data System (ADS)

    Klein, Randolf; Poglitsch, Albrecht; Fumi, Fabio; Geis, Norbert; Hamidouche, Murad; Hoenle, Rainer; Looney, Leslie; Raab, Walfried; Viehhauser, Werner

    2004-09-01

    We are building the Field-Imaging Far-Infrared Line Spectrometer (FIFI LS) for the US-German airborne observatory SOFIA. The detector read-out system is driven by a clock signal at a certain frequency. This signal has to be provided, and all other sub-systems have to work synchronously to this clock. The data generated by the instrument have to be received by a computer in a timely manner. Usually these requirements are met with a real-time operating system (RTOS). In this presentation we want to show how we meet these demands differently, avoiding the stiffness of an RTOS. Digital I/O cards with a large buffer separate the asynchronously working computers from the synchronously working instrument. The advantage is that the data-processing computers do not need to process the data in real time; it is sufficient that the computer can process the incoming data stream on average. But since the data are read in synchronously, the problem of relating commands and responses (data) has to be solved: the data arrive at a fixed rate, and the receiving I/O card buffers them until the computer can access them. To relate the data to commands sent previously, the data are tagged by counters in the read-out electronics. These counters count the system's heartbeat and signals derived from it. The heartbeat and control signals synchronous with the heartbeat are sent by an I/O card working as a pattern generator. Its buffer is continuously programmed with a pattern which is clocked out on the control lines. A counter in the I/O card keeps track of the number of pattern words clocked out. By reading this counter, the computer knows the state of the instrument or the meaning of the data that will arrive with a certain time-tag.
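
    The counter-tagging idea can be pictured with a toy model (not the FIFI LS electronics or software): a heartbeat counter advances on every read-out cycle, each buffered sample carries the counter value, and the asynchronous consumer later matches those tags to the heartbeat values at which its commands took effect.

      from collections import deque

      heartbeat = 0
      command_log = {}       # heartbeat value at which each command took effect
      data_fifo = deque()    # stands in for the buffered digital I/O card

      def send_command(name):
          command_log[heartbeat] = name

      def clock_tick(sample):
          """One synchronous read-out cycle: tag the sample with the heartbeat count."""
          global heartbeat
          data_fifo.append((heartbeat, sample))
          heartbeat += 1

      send_command("grating_position_A")       # hypothetical command names
      for i in range(3):
          clock_tick(sample=100 + i)
      send_command("grating_position_B")
      for i in range(3):
          clock_tick(sample=200 + i)

      # the consumer drains the FIFO later, at its own pace, yet can still
      # attribute every sample to the command active when it was taken
      active = None
      for tag, sample in data_fifo:
          active = command_log.get(tag, active)
          print(tag, sample, active)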

  12. Bridge-scour analysis using the water surface profile (WSPRO) model

    USGS Publications Warehouse

    Mueller, David S.; ,

    1993-01-01

    A program was developed to extract hydraulic information required for bridge-scour computations from the Water-Surface Profile computation model (WSPRO). The program is written in compiled BASIC and is menu driven. Using only ground points, the program can compute average ground elevation, cross-sectional area below a specified datum, or create a Drawing Exchange Format (DXF) file of the cross section. Using both ground points and hydraulic information from the equal-conveyance tubes computed by WSPRO, the program can compute hydraulic parameters at a user-specified station or in a user-specified subsection of the cross section. The program can identify the maximum velocity in a cross section and the velocity and depth at a user-specified station, and it can also compute the average velocity, average depth, average ground elevation, width perpendicular to the flow, cross-sectional area of flow, and discharge in a subsection of the cross section. The program does not include any help or suggestions as to what data should be extracted; therefore, the user must understand the scour equations and associated variables to be able to extract the proper information from the WSPRO output.
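
    For the ground-points-only computations, a minimal sketch (a hypothetical helper, not the WSPRO post-processor itself) of average ground elevation and flow area below a datum by trapezoidal integration over (station, elevation) pairs is:

      import numpy as np

      def _trapz(y, x):
          """Trapezoidal integral of y(x) over irregularly spaced stations."""
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

      def section_properties(stations, elevations, datum):
          """Average ground elevation and flow area below `datum` for one section.

          Ground is treated as piecewise linear; crossings of the datum between
          points are handled only approximately in this sketch.
          """
          stations = np.asarray(stations, float)
          elevations = np.asarray(elevations, float)
          width = stations[-1] - stations[0]
          avg_ground = _trapz(elevations, stations) / width
          depth = np.clip(datum - elevations, 0.0, None)
          area = _trapz(depth, stations)
          return avg_ground, area

      # toy cross section: a trapezoidal channel with the datum at elevation 10
      sta = [0.0, 10.0, 20.0, 40.0, 50.0, 60.0]
      elv = [12.0, 10.0, 6.0, 6.0, 10.0, 12.0]
      print(section_properties(sta, elv, datum=10.0))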

  13. Light-weight Parallel Python Tools for Earth System Modeling Workflows

    NASA Astrophysics Data System (ADS)

    Mickelson, S. A.; Paul, K.; Xu, H.; Dennis, J.; Brown, D. I.

    2015-12-01

    With the growth in computing power over the last 30 years, earth system modeling codes have become increasingly data-intensive. As an example, it is expected that the data required for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) will increase by more than 10x to an expected 25PB per climate model. Faced with this daunting challenge, developers of the Community Earth System Model (CESM) have chosen to change the format of their data for long-term storage from time-slice to time-series, in order to reduce the required download bandwidth needed for later analysis and post-processing by climate scientists. Hence, efficient tools are required to (1) perform the transformation of the data from time-slice to time-series format and to (2) compute climatology statistics, needed for many diagnostic computations, on the resulting time-series data. To address the first of these two challenges, we have developed a parallel Python tool for converting time-slice model output to time-series format. To address the second of these challenges, we have developed a parallel Python tool to perform fast time-averaging of time-series data. These tools are designed to be light-weight, be easy to install, have very few dependencies, and can be easily inserted into the Earth system modeling workflow with negligible disruption. In this work, we present the motivation, approach, and testing results of these two light-weight parallel Python tools, as well as our plans for future research and development.
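
    As a rough illustration of the second task, the sketch below computes a monthly climatology (a time average by calendar month) from time-series variables, with one worker process per variable. It is a toy decomposition with random data and placeholder variable names, not the parallel Python tools described here.

      import numpy as np
      from multiprocessing import Pool

      def monthly_climatology(data, months):
          """Average a (time, lat, lon) array by calendar month (1-12)."""
          months = np.asarray(months)
          return np.stack([data[months == m].mean(axis=0) for m in range(1, 13)])

      def _worker(args):
          name, data, months = args
          return name, monthly_climatology(data, months)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          months = np.tile(np.arange(1, 13), 10)    # ten years of monthly means
          fields = {name: rng.normal(size=(120, 96, 144))
                    for name in ("TS", "PRECT")}    # placeholder variable names
          # one task per variable: a simple, embarrassingly parallel decomposition
          with Pool(processes=2) as pool:
              tasks = [(name, data, months) for name, data in fields.items()]
              climatologies = dict(pool.map(_worker, tasks))
          print({name: clim.shape for name, clim in climatologies.items()})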

  14. Approximating lens power.

    PubMed

    Kaye, Stephen B

    2009-04-01

    To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, correlating with biological treatment variables, and for developing analyses which require a scalar equivalent representation of refractive power.
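
    The "average of a function" definition can be illustrated numerically: P_avg = (1/(b - a)) ∫ P(θ) dθ over the meridian angle. The sketch below applies it to a simple paraxial sphero-cylinder profile, for which the average reduces to the spherical equivalent; the paper's formulae, which include oblique incidence and spherical aberration, are more involved and are not reproduced here.

      import numpy as np

      def average_of_function(f, a, b, n=10001):
          """Average value of f on [a, b]: (1/(b - a)) * integral of f."""
          x = np.linspace(a, b, n)
          y = f(x)
          integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
          return integral / (b - a)

      # hypothetical prescription: -2.00 / -1.50 x 20 (paraxial power by meridian)
      S, C, axis = -2.00, -1.50, np.deg2rad(20.0)
      power = lambda theta: S + C * np.sin(theta - axis) ** 2

      avg_power = average_of_function(power, 0.0, np.pi)
      print(f"average meridional power = {avg_power:.3f} D "
            f"(spherical equivalent = {S + C / 2:.3f} D)")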

  15. Comparison of wing-span averaging effects on lift, rolling moment, and bending moment for two span load distributions and for two turbulence representations

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1978-01-01

    An analytical method of computing the averaging effect of wing-span size on the loading of a wing induced by random turbulence was adapted for use on a digital electronic computer. The turbulence input was assumed to have a Dryden power spectral density. The computations were made for lift, rolling moment, and bending moment for two span load distributions, rectangular and elliptic. Data are presented to show the wing-span averaging effect for wing-span ratios encompassing current airplane sizes. The rectangular wing-span loading showed a slightly greater averaging effect than did the elliptic loading. In the frequency range most bothersome to airplane passengers, the wing-span averaging effect can reduce the normal lift load, and thus the acceleration, by about 7 percent for a typical medium-sized transport. Some calculations were made to evaluate the effect of using a Von Karman turbulence representation. These results showed that using the Von Karman representation generally resulted in a span averaging effect about 3 percent larger.
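
    For reference, the Dryden vertical-gust power spectral density commonly used as this kind of turbulence input can be evaluated as in the sketch below; the span-averaging transfer functions of the report itself are not reproduced, and the parameter values are placeholders.

      import numpy as np

      def dryden_vertical_psd(omega, sigma, L):
          """Dryden vertical-gust PSD versus spatial frequency omega [rad/m].

          sigma : RMS gust velocity [m/s]
          L     : turbulence scale length [m]
          """
          x = (L * omega) ** 2
          return sigma ** 2 * (L / np.pi) * (1.0 + 3.0 * x) / (1.0 + x) ** 2

      # hypothetical parameters: moderate turbulence, 530 m scale length
      omega = np.logspace(-4, -1, 200)
      psd = dryden_vertical_psd(omega, sigma=1.5, L=530.0)
      print(psd[:3])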

  16. Extraction of prostatic lumina and automated recognition for prostatic calculus image using PCA-SVM.

    PubMed

    Wang, Zhuocai; Xu, Xiangmin; Ding, Xiaojun; Xiao, Hui; Huang, Yusheng; Liu, Jian; Xing, Xiaofen; Wang, Hua; Liao, D Joshua

    2011-01-01

    Identification of prostatic calculi is an important basis for determining the tissue origin. Computation-assisted diagnosis of prostatic calculi may have promising potential but is currently still little studied. We studied the extraction of prostatic lumina and the automated recognition of calculus images. Extraction of lumina from prostate histology images was based on local entropy and Otsu thresholding; recognition used PCA-SVM based on the texture features of prostatic calculi. The SVM classifier showed an average time of 0.1432 seconds, an average training accuracy of 100%, an average test accuracy of 93.12%, a sensitivity of 87.74%, and a specificity of 94.82%. We concluded that the algorithm, based on texture features and PCA-SVM, can recognize the concentric structure and visualized features easily. Therefore, this method is effective for the automated recognition of prostatic calculi.
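
    A generic PCA + SVM pipeline of the kind described can be sketched with scikit-learn; the feature vectors, labels, and hyperparameters below are placeholders, not the authors' texture features or settings.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # hypothetical data: one texture-feature vector per lumen region,
      # labelled 1 (calculus present) or 0 (absent)
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0.0, 1.0, size=(100, 24)),
                     rng.normal(0.8, 1.0, size=(100, 24))])
      y = np.r_[np.zeros(100), np.ones(100)]

      clf = make_pipeline(StandardScaler(),
                          PCA(n_components=8),        # reduce the texture features
                          SVC(kernel="rbf", C=1.0))   # classify in the reduced space
      scores = cross_val_score(clf, X, y, cv=5)
      print(f"cross-validated accuracy: {scores.mean():.3f}")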

  17. Statistical average estimates of high latitude field-aligned currents from the STARE and SABRE coherent VHF radar systems

    NASA Astrophysics Data System (ADS)

    Kosch, M. J.; Nielsen, E.

    Two bistatic VHF radar systems, STARE and SABRE, have been employed to estimate ionospheric electric fields in the geomagnetic latitude range 61.1 - 69.3° (geographic latitude range 63.8 - 72.6°) over northern Scandinavia. 173 days of good backscatter from all four radars have been analysed during the period 1982 to 1986, from which the average ionospheric divergence electric field versus latitude and time is calculated. The average magnetic field-aligned currents are computed using an AE-dependent empirical model of the ionospheric conductance. Statistical Birkeland current estimates are presented for high and low values of the Kp and AE indices as well as positive and negative orientations of the IMF B z component. The results compare very favourably to other ground-based and satellite measurements.

  18. Predictors and Health Consequences of Screen-Time Change During Adolescence—1993 Pelotas (Brazil) Birth Cohort Study

    PubMed Central

    Dumith, Samuel Carvalho; Garcia, Leandro Martin Totaro; da Silva, Kelly Samara; Menezes, Ana Maria Baptista; Hallal, Pedro Curi

    2012-01-01

    Purpose To investigate screen-time change from early to mid adolescence, its predictors, and its influence on body fat, blood pressure, and leisure-time physical activity. Methods We used data from a longitudinal prospective study conducted among participants of the 1993 Pelotas (Brazil) Birth Cohort Study. At baseline, adolescents were, on average, 11 years old; they were visited again at age 15 years. Screen time was self-reported, accounting for the time spent watching television, playing video games, and using the computer. Several predictors were examined, and the effect of screen-time change on selected health outcomes was also analyzed. Results Screen time increased on average by 60 min/d from 11 to 15 years of age for the 4,218 adolescents studied. The groups that presented the highest increases in screen time were males, the wealthiest, those whose mothers had higher education, and adolescents with a history of school failure. There were positive associations between screen-time change and body mass index, skinfold thickness, waist circumference, and leisure-time physical activity at 15 years of age. Conclusions Screen time increased from early to mid adolescence. This increment was higher among boys and the wealthiest adolescents. Increases in screen time affected body composition, with negative implications for adiposity. PMID:23283154

  19. An experimental and numerical investigation of shock-wave induced turbulent boundary-layer separation at hypersonic speeds

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.; Horstman, C. C.; Rubesin, M. W.; Coakley, T. J.; Kussoy, M. I.

    1975-01-01

    An experiment designed to test and guide computations of the interaction of an impinging shock wave with a turbulent boundary layer is described. Detailed mean flow-field and surface data are presented for two shock strengths which resulted in attached and separated flows, respectively. Numerical computations, employing the complete time-averaged Navier-Stokes equations along with algebraic eddy-viscosity and turbulent Prandtl number models to describe shear stress and heat flux, are used to illustrate the dependence of the computations on the particulars of the turbulence models. Models appropriate for zero-pressure-gradient flows predicted the overall features of the flow fields, but were deficient in predicting many of the details of the interaction regions. Improvements to the turbulence model parameters were sought through a combination of detailed data analysis and computer simulations which tested the sensitivity of the solutions to model parameter changes. Computer simulations using these improvements are presented and discussed.

  20. Empirical comparison of heuristic load distribution in point-to-point multicomputer networks

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.

    1990-01-01

    The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.
