Sample records for selected time points

  1. Selecting the most appropriate time points to profile in high-throughput studies

    PubMed Central

    Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv

    2017-01-01

Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values at the non-selected points. Further, even though the selection is based only on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time-series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
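The TPS idea can be illustrated with a toy greedy selector: a minimal sketch, not the authors' algorithm (which uses spline fitting and a more careful search), that keeps the time points whose linear-interpolation reconstruction of a dense pilot series has the smallest error. The function name and the interpolation choice are illustrative assumptions.

```python
import numpy as np

def select_time_points(times, expr, k):
    """Greedily choose k sample times (endpoints always kept) that minimize
    the error of linearly interpolating the remaining points.
    `expr` is a (genes x timepoints) array of dense pilot measurements."""
    chosen = [0, len(times) - 1]          # always keep first and last point
    while len(chosen) < k:
        best, best_err = None, np.inf
        for cand in range(len(times)):
            if cand in chosen:
                continue
            trial = sorted(chosen + [cand])
            # reconstruct every gene's curve from the trial points only
            recon = np.array([np.interp(times, times[trial], g[trial])
                              for g in expr])
            err = np.mean((recon - expr) ** 2)
            if err < best_err:
                best, best_err = cand, err
        chosen.append(best)
    return sorted(chosen)
```

For example, with a dense pilot grid `times = np.linspace(0, 1, 11)` and a couple of gene trajectories stacked as rows, `select_time_points(times, expr, 5)` returns five indices at which sampling best preserves the full curves.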

  2. Objective evaluation of female feet and leg joint conformation at time of selection and post first parity in swine.

    PubMed

    Stock, J D; Calderón Díaz, J A; Rothschild, M F; Mote, B E; Stalder, K J

    2018-06-09

Feet and legs of replacement females were objectively evaluated at selection, i.e. approximately 150 days of age (n=319), and post first parity, i.e. any time after weaning of the first litter and before 2nd parturition (n=277), to 1) compare feet and leg joint angle ranges between selection and post first parity; 2) identify feet and leg joint angle differences between selection and the first three weeks of second gestation; 3) identify feet and leg joint angle differences between farms and gestation days during second gestation; and 4) obtain genetic variance components for conformation angles for the two time points measured. Angles for the carpal joint (knee), metacarpophalangeal joint (front pastern), metatarsophalangeal joint (rear pastern), tarsal joint (hock), and rear stance were measured using image analysis software. Between selection and post first parity, significant differences were observed for all joints measured (P < 0.05). Knee, front and rear pastern angles were less (more flexion), and hock angles were greater (less flexion), as age progressed (P < 0.05), while the rear stance angle was less (feet further under center) at selection than post first parity (only including measures during the first three weeks of second gestation). Using only post first parity leg conformation information, farm was a significant source of variation for front and rear pastern and rear stance angle measurements (P < 0.05). Knee angle was less (more flexion) (P < 0.05) as gestation age progressed. Heritability estimates were low to moderate (0.04-0.35) for all traits measured across time points. Genetic correlations between the same joints at different time points were high (> 0.8) for the front leg joints and low (< 0.2) for the rear leg joints.
High genetic correlations between time points indicate that the trait can be considered the same at either time point, whereas low genetic correlations indicate that the trait at different time points should be considered as two separate traits. Minimal change in the front leg suggests conformation traits that persist between selection and post first parity, while larger changes in the rear leg indicate that rear leg conformation traits should be evaluated at multiple time periods.

  3. Time of travel of solutes in selected reaches of the Sandusky River Basin, Ohio, 1972 and 1973

    USGS Publications Warehouse

    Westfall, Arthur O.

    1976-01-01

A time-of-travel study of a 106-mile (171-kilometer) reach of the Sandusky River and a 39-mile (63-kilometer) reach of Tymochtee Creek was made to determine the time required for water released from Killdeer Reservoir on Tymochtee Creek to reach selected downstream points. In general, two dye sample runs were made through each subreach to define the time-discharge relation for approximating travel times at selected discharges within the measured range, and time-discharge graphs are presented for 38 subreaches. Graphs of dye dispersion and variation in relation to time are given for three selected sampling sites. For estimating travel times and velocities between points in the study reach, tables for selected flow durations are given. Duration curves of daily discharge for four index stations are presented to indicate the low-flow characteristics and for use in shaping downward extensions of the time-discharge curves.

  4. 76 FR 41454 - Caribbean Fishery Management Council; Scoping Meetings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-14

... based on alternative selected in Action 3(a) and time series of landings data as defined in Action 1(a) ... (Puerto Rico, St. Thomas/St. John, St. Croix) based on the preferred management reference point time series selected by the Council in Actions 1(a) and 2(a). Alternative 2A. Use a mid-point or equidistant ...

  5. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  6. Thermophysical Properties of Selected Rocks.

    DTIC Science & Technology

    1974-04-01

... the region below the melting point. Selected values are for Dresser basalt based on the data of Navarro and DeWitt [86] and of Marovelli and Veith [5] ... to ΔT = T2 - T1, q is the rate of heat flow, A is the cross-sectional area of the specimen, and Δx is the distance between points of temperature ... heater provides a constant heat, q, per unit time and length, and the temperature at a point in the specimen is recorded as a function of time. The ...
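The relation garbled in the excerpt above is presumably the standard steady-state (divided-bar style) conductivity formula; a plausible reconstruction, not verified against the original report, is:

```latex
k = \frac{q\,\Delta x}{A\,\Delta T}, \qquad \Delta T = T_2 - T_1
```

where k is the thermal conductivity, q the rate of heat flow, A the cross-sectional area of the specimen, and Δx the distance between the two temperature measurement points; the transient line-source method mentioned afterwards instead records temperature at a fixed point as a function of time.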

  7. Building a Lego wall: Sequential action selection.

    PubMed

    Arnold, Amy; Wing, Alan M; Rotshtein, Pia

    2017-05-01

    The present study draws together two distinct lines of enquiry into the selection and control of sequential action: motor sequence production and action selection in everyday tasks. Participants were asked to build 2 different Lego walls. The walls were designed to have hierarchical structures with shared and dissociated colors and spatial components. Participants built 1 wall at a time, under low and high load cognitive states. Selection times for correctly completed trials were measured using 3-dimensional motion tracking. The paradigm enabled precise measurement of the timing of actions, while using real objects to create an end product. The experiment demonstrated that action selection was slowed at decision boundary points, relative to boundaries where no between-wall decision was required. Decision points also affected selection time prior to the actual selection window. Dual-task conditions increased selection errors. Errors mostly occurred at boundaries between chunks and especially when these required decisions. The data support hierarchical control of sequenced behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

8. Automated corresponding point candidate selection for image registration using wavelet transformation neural network with rotation invariant inputs and context information about neighboring candidates

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Suezaki, Masashi; Sueyasu, Hideki; Arai, Kohei

    2003-03-01

An automated method that can select corresponding point candidates is developed. This method has the following three features: 1) employment of the RIN-net for corresponding point candidate selection; 2) employment of multiresolution analysis with the Haar wavelet transformation to improve selection accuracy and noise tolerance; 3) employment of context information about corresponding point candidates for screening of selected candidates. Here, 'RIN-net' means a back-propagation-trained feed-forward 3-layer artificial neural network that takes rotation invariants as input data. In our system, pseudo-Zernike moments are employed as the rotation invariants. The RIN-net has an N × N pixel field of view (FOV). Some experiments are conducted to evaluate the corresponding point candidate selection capability of the proposed method using various kinds of remotely sensed images. The experimental results show that the proposed method achieves fewer training patterns, less training time, and higher selection accuracy than the conventional method.

  9. Getting to the point: Rapid point selection and variable density InSAR time series for urban deformation monitoring

    NASA Astrophysics Data System (ADS)

    Spaans, K.; Hooper, A. J.

    2017-12-01

    The short revisit time and high data acquisition rates of current satellites have resulted in increased interest in the development of deformation monitoring and rapid disaster response capability, using InSAR. Fast, efficient data processing methodologies are required to deliver the timely results necessary for this, and also to limit computing resources required to process the large quantities of data being acquired. Contrary to volcano or earthquake applications, urban monitoring requires high resolution processing, in order to differentiate movements between buildings, or between buildings and the surrounding land. Here we present Rapid time series InSAR (RapidSAR), a method that can efficiently update high resolution time series of interferograms, and demonstrate its effectiveness over urban areas. The RapidSAR method estimates the coherence of pixels on an interferogram-by-interferogram basis. This allows for rapid ingestion of newly acquired images without the need to reprocess the earlier acquired part of the time series. The coherence estimate is based on ensembles of neighbouring pixels with similar amplitude behaviour through time, which are identified on an initial set of interferograms, and need be re-evaluated only occasionally. By taking into account scattering properties of points during coherence estimation, a high quality coherence estimate is achieved, allowing point selection at full resolution. The individual point selection maximizes the amount of information that can be extracted from each interferogram, as no selection compromise has to be reached between high and low coherence interferograms. In other words, points do not have to be coherent throughout the time series to contribute to the deformation time series. We demonstrate the effectiveness of our method over urban areas in the UK. 
We show that the algorithm successfully extracts high-density time series from full-resolution Sentinel-1 interferograms and distinguishes clearly between buildings and surrounding vegetation or streets. The fact that new interferograms can be processed separately from the remainder of the time series helps manage the high data volumes, both in space and time, generated by current missions.

  10. Can prospect theory explain risk-seeking behavior by terminally ill patients?

    PubMed

    Rasiel, Emma B; Weinfurt, Kevin P; Schulman, Kevin A

    2005-01-01

    Patients with life-threatening conditions sometimes appear to make risky treatment decisions as their condition declines, contradicting the risk-averse behavior predicted by expected utility theory. Prospect theory accommodates such decisions by describing how individuals evaluate outcomes relative to a reference point and how they exhibit risk-seeking behavior over losses relative to that point. The authors show that a patient's reference point for his or her health is a key factor in determining which treatment option the patient selects, and they examine under what circumstances the more risky option is selected. The authors argue that patients' reference points may take time to adjust following a change in diagnosis, with implications for predicting under what circumstances a patient may select experimental or conventional therapies or select no treatment.
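The reference-point dependence described here comes from the prospect-theory value function. A small illustration follows, using the standard Tversky and Kahneman (1992) parameter estimates; the abstract itself specifies no functional form, so this is purely expository.

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of outcome x measured relative to the
    reference point (x > 0 gain, x < 0 loss).  Parameters are the
    Tversky & Kahneman (1992) estimates; lam > 1 encodes loss aversion."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# A shift in the reference point can flip how the same health state is
# evaluated: relative to a low baseline it is a mild gain, relative to a
# high (pre-diagnosis) baseline it is a steep loss.
outcome = 0.3                     # hypothetical health state on a 0-1 scale
print(pt_value(outcome - 0.2))    # reference below outcome: valued as a gain
print(pt_value(outcome - 0.6))    # reference above outcome: valued as a loss
```

Because losses loom larger than gains (`lam = 2.25`), patients whose reference point still sits at their pre-diagnosis health evaluate most options as losses, where the value function is convex and risk-seeking choices become more attractive.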

  11. Bayesian change point analysis of abundance trends for pelagic fishes in the upper San Francisco Estuary.

    PubMed

    Thomson, James R; Kimmerer, Wim J; Brown, Larry R; Newman, Ken B; Mac Nally, Ralph; Bennett, William A; Feyrer, Frederick; Fleishman, Erica

    2010-07-01

We examined trends in abundance of four pelagic fish species (delta smelt, longfin smelt, striped bass, and threadfin shad) in the upper San Francisco Estuary, California, USA, over 40 years using Bayesian change point models. Change point models identify times of abrupt or unusual changes in absolute abundance (step changes) or in rates of change in abundance (trend changes). We coupled Bayesian model selection with linear regression splines to identify biotic or abiotic covariates with the strongest associations with abundances of each species. We then refitted change point models conditional on the selected covariates to explore whether those covariates could explain statistical trends or change points in species abundances. We also fitted a multispecies change point model that identified change points common to all species. All models included hierarchical structures to model data uncertainties, including observation errors and missing covariate values. There were step declines in abundances of all four species in the early 2000s, with a likely common decline in 2002. Abiotic variables, including water clarity, position of the 2‰ isohaline (X2), and the volume of freshwater exported from the estuary, explained some variation in species' abundances over the time series, but no selected covariates could explain statistically the post-2000 change points for any species.
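A toy version of the step-change detection used here can be written in a few lines. This is a deliberate simplification of the paper's hierarchical models: it assumes a single change in the mean of a Gaussian series with known variance and flat priors, which is enough to show how a posterior over change-point locations arises.

```python
import numpy as np

def change_point_posterior(y, sigma=1.0):
    """Posterior over the location of a single step change in the mean of a
    Gaussian series (known sigma, flat priors) -- a toy stand-in for the
    hierarchical Bayesian change point models described above."""
    n = len(y)
    loglik = np.full(n, -np.inf)
    for t in range(1, n):                       # change between t-1 and t
        m1, m2 = y[:t].mean(), y[t:].mean()
        resid = np.concatenate([y[:t] - m1, y[t:] - m2])
        loglik[t] = -0.5 * np.sum(resid ** 2) / sigma ** 2
    post = np.exp(loglik - loglik.max())        # stabilise before normalising
    return post / post.sum()
```

Applied to an abundance-like series with an abrupt drop, the posterior mass concentrates on the year of the step, which is the quantity the multispecies model in the abstract pools across species.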

  12. Topological photonic crystal with equifrequency Weyl points

    NASA Astrophysics Data System (ADS)

    Wang, Luyang; Jian, Shao-Kai; Yao, Hong

    2016-06-01

    Weyl points in three-dimensional photonic crystals behave as monopoles of Berry flux in momentum space. Here, based on general symmetry analysis, we show that a minimal number of four symmetry-related (consequently equifrequency) Weyl points can be realized in time-reversal invariant photonic crystals. We further propose an experimentally feasible way to modify double-gyroid photonic crystals to realize four equifrequency Weyl points, which is explicitly confirmed by our first-principle photonic band-structure calculations. Remarkably, photonic crystals with equifrequency Weyl points are qualitatively advantageous in applications including angular selectivity, frequency selectivity, invisibility cloaking, and three-dimensional imaging.

  13. Using Multivariate Regression Model with Least Absolute Shrinkage and Selection Operator (LASSO) to Predict the Incidence of Xerostomia after Intensity-Modulated Radiotherapy for Head and Neck Cancer

    PubMed Central

    Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

Purpose: The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials: Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Results: Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by the Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R2 was satisfactory and corresponded well with the expected values.
Conclusions: Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT. PMID:24586971
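The core LASSO selection step can be sketched in a self-contained way. The following is a generic L1-penalised logistic regression fitted by proximal gradient descent, not the authors' bootstrapped procedure, and the data are synthetic stand-ins for the study's prognostic factors (none of the real factor names or values appear here).

```python
import numpy as np

def lasso_logistic(X, y, lam=0.1, lr=0.1, n_iter=2000):
    """L1-penalised logistic regression by proximal gradient descent.
    The soft-threshold step drives uninformative coefficients to exactly
    zero, which is how LASSO prunes a prognostic-factor list."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        z = X @ w
        grad = X.T @ (1.0 / (1.0 + np.exp(-z)) - y) / n   # logistic gradient
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Synthetic demo: only the first column actually predicts the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] > 0).astype(float)
w = lasso_logistic(X, y)
print("coefficients kept:", np.flatnonzero(w))
```

In the study's setting the columns would be candidate factors such as mean dose and age, and the surviving non-zero coefficients would form the selected factor subsets reported for the 3- and 12-month endpoints.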

  14. Using multivariate regression model with least absolute shrinkage and selection operator (LASSO) to predict the incidence of Xerostomia after intensity-modulated radiotherapy for head and neck cancer.

    PubMed

    Lee, Tsair-Fwu; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

    The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3(+) xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R(2), chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R(2) was satisfactory and corresponded well with the expected values. 
Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT.

  15. Effective Detection of Low-luminosity GEO Objects Using Population and Motion Predictions

    DTIC Science & Technology

    2012-01-01

... more assumptions made on the time, and then tracks all the points where most fragments will be in geocentric equatorial inertial coordinates over time ... population. A couple of candidate points in geocentric equatorial inertial coordinates can be selected with consideration that bright stars will not be ... geocentric equatorial inertial coordinates. Third, motion of fragments passing through the specified single point in geocentric equatorial ...

  16. A geostatistical methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S

    2013-04-01

This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method is used that, in each step, selects the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer, with the objective of selecting, from a set of monitoring positions and times, those that minimize spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the existing monitoring program, which consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
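The combination of a static Kalman filter with sequential variance minimisation can be sketched as a greedy selection over a prior covariance matrix. This is an illustrative reduction of the method, with a made-up covariance structure and observation noise; the real design uses a geostatistically estimated space-time covariance.

```python
import numpy as np

def greedy_select(cov, n_pick, noise=1e-2):
    """Sequentially pick monitoring points: at each step choose the point
    whose (noisy) observation most reduces the summed posterior variance.
    `cov` plays the role of the space-time prior covariance matrix."""
    P = cov.copy()
    chosen = []
    for _ in range(n_pick):
        best, best_var = None, np.inf
        for j in range(P.shape[0]):
            if j in chosen:
                continue
            gain = P[:, j] / (P[j, j] + noise)       # static Kalman gain column
            Pj = P - np.outer(gain, P[j, :])         # trial posterior covariance
            if np.trace(Pj) < best_var:
                best, best_var = j, np.trace(Pj)
        gain = P[:, best] / (P[best, best] + noise)
        P = P - np.outer(gain, P[best, :])           # commit the best update
        chosen.append(best)
    return chosen, P
```

Points left unselected once the trace stops dropping meaningfully correspond to the "redundant" monitoring points identified in the aquifer redesign.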

  17. Optical control of multi-stage thin film solar cell production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jian; Levi, Dean H.; Contreras, Miguel A.

    2016-05-17

Embodiments include methods of depositing and controlling the deposition of a film in multiple stages. The disclosed deposition and deposition control methods include the optical monitoring of a deposition matrix to determine a time when at least one transition point occurs. In certain embodiments, the transition point or transition points are a stoichiometry point. Methods may also include controlling the length of time in which material is deposited during a deposition stage, or controlling the amount of the first, second or subsequent materials deposited during any deposition stage, in response to a determination of the time when a selected transition point occurs.

  18. A selective array activation method for the generation of a focused source considering listening position.

    PubMed

    Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann

    2012-02-01

A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by a time-reversed acoustic focusing solution, its use as a virtual source is limited due to artifacts caused by convergent waves traveling towards the focusing point. This paper proposes an array activation method to reduce the artifacts for a selected listening point inside an array of arbitrary shape. Results show that the energy of convergent waves can be reduced by up to 60 dB over a large region including the selected listening point. © 2012 Acoustical Society of America

  19. Time-Referenced Effects of an Internal vs. External Focus of Attention on Muscular Activity and Compensatory Variability

    PubMed Central

    Hossner, Ernst-Joachim; Ehrlenspiel, Felix

    2010-01-01

The paralysis-by-analysis phenomenon, i.e., that attending to the execution of one's movement impairs performance, has gathered a lot of attention over recent years (see Wulf, 2007, for a review). Explanations of this phenomenon, e.g., the hypotheses of constrained action (Wulf et al., 2001) or of step-by-step execution (Masters, 1992; Beilock et al., 2002), however, do not refer to the underlying mechanisms at the level of sensorimotor control. For this purpose, a “nodal-point hypothesis” is presented here with the core assumptions that skilled motor behavior is internally based on sensorimotor chains of nodal points, that attending to intermediate nodal points leads to a muscular re-freezing of the motor system at exactly and exclusively these points in time, and that this re-freezing is accompanied by the disruption of compensatory processes, resulting in an overall decrease of motor performance. Two experiments, on lever sequencing and basketball free throws, respectively, are reported that successfully tested these time-referenced predictions, i.e., showing that muscular activity is selectively increased and compensatory variability selectively decreased at movement-related nodal points if these points are in the focus of attention. PMID:21833285

  20. International Aviation (Selected Articles)

    DTIC Science & Technology

    1991-09-11

THE ANALYSIS OF DYNAMIC FORCES IN AVIATION STRUCTURES. Following the development of test manufacturing projects for many types of aircraft ... type water troughs. All the main equipment embodies automated measurement controls. It is capable of obtaining test data and curves in real time ... results from thousands of calculations, and decisions were made to select the imaginary origin point to act as the turbulence flow origination point ...

  1. Feature-based attention to unconscious shapes and colors.

    PubMed

    Schmidt, Filipp; Schmidt, Thomas

    2010-08-01

    Two experiments employed feature-based attention to modulate the impact of completely masked primes on subsequent pointing responses. Participants processed a color cue to select a pair of possible pointing targets out of multiple targets on the basis of their color, and then pointed to the one of those two targets with a prespecified shape. All target pairs were preceded by prime pairs triggering either the correct or the opposite response. The time interval between cue and primes was varied to modulate the time course of feature-based attentional selection. In a second experiment, the roles of color and shape were switched. Pointing trajectories showed large priming effects that were amplified by feature-based attention, indicating that attention modulated the earliest phases of motor output. Priming effects as well as their attentional modulation occurred even though participants remained unable to identify the primes, indicating distinct processes underlying visual awareness, attention, and response control.

  2. SOP: parallel surrogate global optimization with Pareto center selection for computationally expensive single objective problems

    DOE PAGES

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    2016-02-02

This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
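SOP's non-dominated sorting over its two objectives can be illustrated with a minimal first-front extraction. The sign convention below (both objectives minimised, distance negated) is an assumption for illustration; the full algorithm adds candidate generation, surrogate ranking, and tabu tenure on top of this.

```python
def first_front(f1, f2):
    """Indices of non-dominated points for two minimised objectives.
    In SOP's terms, f1 would be the expensive function value and
    f2 the negated minimum distance to previously evaluated points,
    so the front trades off exploitation against exploration."""
    n = len(f1)
    keep = []
    for i in range(n):
        dominated = any(
            (f1[j] <= f1[i] and f2[j] <= f2[i]) and
            (f1[j] < f1[i] or f2[j] < f2[i])
            for j in range(n)
        )
        if not dominated:
            keep.append(i)
    return keep
```

A point survives only if no other point is at least as good on both objectives and strictly better on one; the P centers are then drawn from these sorted fronts.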

  3. Using learned under-sampling pattern for increasing speed of cardiac cine MRI based on compressive sensing principles

    NASA Astrophysics Data System (ADS)

    Zamani, Pooria; Kayvanrad, Mohammad; Soltanian-Zadeh, Hamid

    2012-12-01

    This article presents a compressive sensing approach for reducing data acquisition time in cardiac cine magnetic resonance imaging (MRI). In cardiac cine MRI, several images are acquired throughout the cardiac cycle, each of which is reconstructed from the raw data acquired in the Fourier transform domain, traditionally called k-space. In the proposed approach, a majority, e.g., 62.5%, of the k-space lines (trajectories) are acquired at the odd time points and a minority, e.g., 37.5%, of the k-space lines are acquired at the even time points of the cardiac cycle. Optimal data acquisition at the even time points is learned from the data acquired at the odd time points. To this end, statistical features of the k-space data at the odd time points are clustered by fuzzy c-means and the results are considered as the states of Markov chains. The resulting data is used to train hidden Markov models and find their transition matrices. Then, the trajectories corresponding to transition matrices far from an identity matrix are selected for data acquisition. At the end, an iterative thresholding algorithm is used to reconstruct the images from the under-sampled k-space datasets. The proposed approaches for selecting the k-space trajectories and reconstructing the images generate more accurate images compared to alternative methods. The proposed under-sampling approach achieves an acceleration factor of 2 for cardiac cine MRI.

  4. Plastic catalytic pyrolysis to fuels as tertiary polymer recycling method: effect of process conditions.

    PubMed

    Gulab, Hussain; Jan, Muhammad Rasul; Shah, Jasmin; Manos, George

    2010-01-01

This paper presents results regarding the effect of various process conditions on the performance of a zeolite catalyst in the pyrolysis of high-density polyethylene. The results show that polymer catalytic degradation can be operated at relatively low catalyst content, reducing the cost of a potential industrial process. As the polymer-to-catalyst mass ratio increases, the system becomes less active, but high temperatures compensate for this activity loss, resulting in high conversion values at usual batch times and even higher yields of liquid products due to less overcracking. The results also show that a high flow rate of carrier gas causes evaporation of liquid products, distorting the results, as was obvious from liquid yield results at different reaction times as well as the corresponding boiling point distributions. Furthermore, results are presented regarding temperature effects on liquid selectivity. Similar values resulted from different final reactor temperatures, which is attributed to the batch operation of the experimental equipment, since polymer and catalyst both undergo the same temperature profile, which is identical up to a specific time independent of the final temperature. Obviously, this common temperature step determines the selectivity to specific products. However, selectivity to specific products is affected by the temperature, as shown in the corresponding boiling point distributions, with higher temperatures showing an increased selectivity to middle boiling point components (C(8)-C(9)) and lower temperatures an increased selectivity to heavy components (C(14)-C(18)).

  5. Evenly spaced Detrended Fluctuation Analysis: Selecting the number of points for the diffusion plot

    NASA Astrophysics Data System (ADS)

    Liddy, Joshua J.; Haddad, Jeffrey M.

    2018-02-01

    Detrended Fluctuation Analysis (DFA) has become a widely used tool for examining the correlation structure of a time series and has provided insights into neuromuscular health and disease states. As the popularity of DFA in the human behavioral sciences has grown, understanding its limitations and how to properly determine its parameters is becoming increasingly important. DFA examines the correlation structure of variability in a time series by computing α, the slope of the log SD vs. log n diffusion plot. When using the traditional DFA algorithm, the timescales, n, are often selected as a set of integers between a minimum and maximum length based on the number of data points in the time series. This produces non-uniformly distributed values of n on a logarithmic scale, which influences the estimation of α due to a disproportionate weighting of the long-timescale regions of the diffusion plot. Recently, the evenly spaced DFA and evenly spaced average DFA algorithms were introduced. Both algorithms compute α by selecting k points for the diffusion plot based on the minimum and maximum timescales of interest, and they improve the consistency of α estimates for simulated fractional Gaussian noise and fractional Brownian motion time series. Two issues remain unaddressed: (1) how to select k and (2) whether the evenly spaced DFA algorithms show similar benefits when assessing human behavioral data. We manipulated k and examined its effects on the accuracy, consistency, and confidence limits of α in simulated and experimental time series. We demonstrate that the accuracy and consistency of α are relatively unaffected by the selection of k. However, the confidence limits of α narrow as k increases, dramatically reducing measurement uncertainty for single trials. We provide guidelines for selecting k and discuss potential uses of the evenly spaced DFA algorithms when assessing human behavioral data.
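
The selection of k timescales evenly spaced in log scale can be sketched as follows (a minimal illustration of the idea, not the authors' code; the function name is hypothetical):

```python
import numpy as np

def evenly_spaced_timescales(n_min, n_max, k):
    """Select k timescales evenly spaced in log scale between n_min and n_max.

    Duplicates produced by rounding are dropped, so fewer than k unique
    values may be returned for narrow ranges.
    """
    n = np.logspace(np.log10(n_min), np.log10(n_max), num=k)
    return np.unique(np.round(n).astype(int))
```

For example, evenly_spaced_timescales(4, 64, 5) returns [4, 8, 16, 32, 64]: each adjacent pair differs by the same factor, so the points are uniformly spaced along the log n axis of the diffusion plot.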

  6. Systematic identification of an integrative network module during senescence from time-series gene expression.

    PubMed

    Park, Chihyun; Yun, So Jeong; Ryu, Sung Jin; Lee, Soyoung; Lee, Young-Sam; Yoon, Youngmi; Park, Sang Chul

    2017-03-15

    Cellular senescence irreversibly arrests the growth of human diploid cells. In addition, recent studies have indicated that senescence is a multi-step, evolving process related to important complex biological processes. Most studies analyzed only the genes and their functions representing each senescence phase, without considering gene-level interactions and continuously perturbed genes. It is necessary to reveal the genotypic mechanism inferred by the affected genes and their interactions underlying the senescence process. We suggest a novel computational approach to identify an integrative network which profiles an underlying genotypic signature from time-series gene expression data. The relatively perturbed genes were selected for each time point based on a proposed scoring measure termed the perturbation score. The selected genes were then integrated with protein-protein interactions to construct time-point-specific networks. From these networks, the edges conserved across time points were extracted to form the common network, and a statistical test was performed to demonstrate that the network could explain the phenotypic alteration. As a result, it was confirmed that the difference in average perturbation scores of the common networks at the two time points could explain the phenotypic alteration. We also performed functional enrichment on the common network and identified a high association with phenotypic alteration. Remarkably, we observed that the identified cell-cycle-specific common network played an important role in replicative senescence as a key regulator. Until now, network analysis of time-series gene expression data has focused on how topological structure changes over time. Conversely, we focused on the structure that is conserved while its context changes over time, and showed that it can explain the phenotypic changes. We expect that the proposed method will help to elucidate the biological mechanisms unrevealed by existing approaches.
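
The conserved-edge extraction at the core of this approach is a set intersection across the time-point-specific networks. A minimal sketch (the perturbation-score gene selection is not reproduced; undirected edges are represented as frozensets):

```python
def common_network(networks):
    """Return the edges conserved across all time-point-specific networks.

    Each network is a set of undirected edges, each edge a frozenset of
    two node identifiers.
    """
    common = set(networks[0])
    for net in networks[1:]:
        common &= net
    return common
```

The statistical testing and enrichment steps would then operate on the returned edge set.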

  7. Trajectory data privacy protection based on differential privacy mechanism

    NASA Astrophysics Data System (ADS)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

    In this paper, we propose a trajectory data privacy protection scheme based on a differential privacy mechanism. In the proposed scheme, the algorithm first selects the points to be protected from the user's trajectory data; second, it forms a polygon from each protected point and the adjacent, frequently accessed points selected from the accessing-point database, and calculates the polygon centroids; finally, noise is added to the polygon centroids by the differential privacy method, the centroids replace the protected points, and the algorithm constructs and issues the new trajectory data. The experiments show that the proposed algorithms run quickly, the privacy protection of the scheme is effective, and the data usability of the scheme is high.
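
The centroid-plus-noise step can be sketched as below. This is a simplified illustration rather than the paper's algorithm: the centroid is a plain vertex average (not the area-weighted polygon centroid), and the sensitivity is a placeholder that would have to be derived from the actual location domain.

```python
import numpy as np

def polygon_centroid(points):
    """Vertex-average centroid of a polygon given as (x, y) pairs."""
    pts = np.asarray(points, dtype=float)
    return pts.mean(axis=0)

def perturb_centroid(centroid, epsilon, sensitivity=1.0, rng=None):
    """Add Laplace noise calibrated for epsilon-differential privacy."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon  # smaller epsilon -> larger noise
    return centroid + rng.laplace(0.0, scale, size=centroid.shape)
```

Passing rng = np.random.default_rng(seed) makes the perturbation reproducible for experiments; in deployment a fresh generator would be used.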

  8. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from that supplier so that the inventory level is kept as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and present several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.

  9. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve of a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem over the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection for DNS DoS flooding attacks is provided to illustrate an application of the proposed algorithm.
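
A minimal sketch of the two ingredients follows. The top-k step here uses a heap rather than the paper's quicksort variation, and the knee detector, absent the prior distribution the authors exploit, simply picks the point farthest from the chord joining the endpoints of the sorted curve.

```python
import heapq

def top_k_desc(values, k):
    """Partial sort: return only the k largest values, in descending order."""
    return heapq.nlargest(k, values)

def knee_index(sorted_desc):
    """Index of the point farthest from the chord joining the endpoints."""
    n = len(sorted_desc)
    x0, y0 = 0, sorted_desc[0]
    x1, y1 = n - 1, sorted_desc[-1]
    norm = ((y1 - y0) ** 2 + (x1 - x0) ** 2) ** 0.5
    best, best_i = -1.0, 0
    for i, y in enumerate(sorted_desc):
        # Perpendicular distance from (i, y) to the endpoint chord.
        d = abs((y1 - y0) * i - (x1 - x0) * (y - y0)) / norm
        if d > best:
            best, best_i = d, i
    return best_i
```

On an L-shaped curve the maximum-distance point sits at the bend, which is the knee the sorted-curve applications look for.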

  10. Compensatable muon collider calorimeter with manageable backgrounds

    DOEpatents

    Raja, Rajendran

    2015-02-17

    A method and system for reducing background noise in a particle collider comprises identifying an interaction point among a plurality of particles within a particle collider associated with a detector element, defining a trigger start time for each pixel as the time taken for light to travel from the interaction point to the pixel and a trigger stop time as a selected time after the trigger start time, and collecting only detections that occur between the trigger start time and the trigger stop time, in order to thereafter compensate the result from the particle collider and reduce unwanted background detection.

  11. Self-Regulated Learning in Younger and Older Adults: Does Aging Affect Metacognitive Control?

    PubMed Central

    Price, Jodi; Hertzog, Christopher; Dunlosky, John

    2011-01-01

    Two experiments examined whether younger and older adults’ self-regulated study (item selection and study time) conformed to the region of proximal learning (RPL) model when studying normatively easy, medium, and difficult vocabulary pairs. Experiment 2 manipulated the value of recalling different pairs and provided learning goals for words recalled and points earned. Younger and older adults in both experiments selected items for study in an easy-to-difficult order, indicating the RPL model applies to older adults’ self-regulated study. Individuals allocated more time to difficult items, but prioritized easier items when given less time or point values favoring difficult items. Older adults studied more items for longer but realized lower recall than did younger adults. Older adults’ lower memory self-efficacy and perceived control correlated with their greater item restudy and avoidance of difficult items with high point values. Results are discussed in terms of RPL and agenda-based regulation models. PMID:19866382

  12. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, a measure of the maximum distance between a pair of feature point tracks.

  13. [Demonstration plan used in the study of human reproduction in the district of Sao Paulo. 1967].

    PubMed

    Silva, Eunice Pinho de Castro

    2006-10-01

    This work presents the sampling procedure used to obtain the sample for a "Human Reproduction Study in the District of São Paulo" (Brazil), carried out by the Department of Applied Statistics of the "Faculdade de Higiene e Saúde Pública da Universidade de São Paulo". The procedure was designed to cope with limitations in cost and time and with the lack of a frame that could be used to draw a probability sample within the fixed term and budget. It consisted of two-stage sampling with dwelling units as primary units and women as secondary units. At the first stage, stratified sampling was used, with sub-districts taken as strata. To select primary units, points ("starting points") were selected on the maps of the sub-districts by a procedure similar to the so-called "square grid", though differing from it in several respects. Fixed rules established a correspondence between each selected "starting point" and a set of three dwelling units in which at least one woman of the target population lived. In selected dwelling units where more than one woman of the target population lived, sub-sampling was used to select one of them; in this selection, each woman living in the dwelling unit had an equal probability of selection. Several "no-answer" cases and the corresponding instructions to be followed by the interviewers are also presented.

  14. Computer language for identifying chemicals with comprehensive two-dimensional gas chromatography and mass spectrometry.

    PubMed

    Reichenbach, Stephen E; Kottapalli, Visweswara; Ni, Mingtian; Visvanathan, Arvind

    2005-04-15

    This paper describes a language for expressing criteria for chemical identification with comprehensive two-dimensional gas chromatography paired with mass spectrometry (GC x GC-MS) and presents computer-based tools implementing the language. The Computer Language for Identifying Chemicals (CLIC) allows expressions that describe rules (or constraints) for selecting chemical peaks or data points based on multidimensional chromatographic properties and mass spectral characteristics. CLIC offers chromatographic functions of retention times, functions of mass spectra, numbers for quantitative and relational evaluation, and logical and arithmetic operators. The language is demonstrated with the compound-class selection rules described by Welthagen et al. [W. Welthagen, J. Schnelle-Kreis, R. Zimmermann, J. Chromatogr. A 1019 (2003) 233-249]. A software implementation of CLIC provides a calculator-like graphical user interface (GUI) for building and applying selection expressions. From the selection calculator, expressions can be used to select chromatographic peaks that meet the criteria or to create selection chromatograms that mask data points inconsistent with the criteria. Selection expressions can be combined with graphical, geometric constraints in the retention-time plane as a powerful component of chemical identification with template matching, or used to speed and improve mass spectrum library searches.

  15. Genetic algorithms applied to the scheduling of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Sponsler, Jeffrey L.

    1989-01-01

    A prototype system employing a genetic algorithm (GA) has been developed to support the scheduling of the Hubble Space Telescope. A non-standard knowledge structure is used and appropriate genetic operators have been created. Several different crossover styles (random point selection, evolving points, and smart point selection) are tested and the best GA is compared with a neural network (NN) based optimizer. The smart crossover operator produces the best results and the GA system is able to evolve complete schedules using it. The GA is not as time-efficient as the NN system and the NN solutions tend to be better.
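
Of the three crossover styles tested, only random point selection is generic enough to sketch without the telescope-specific knowledge structure. A baseline single-point crossover (hypothetical, not the prototype's operator) looks like:

```python
import random

def single_point_crossover(parent_a, parent_b, rng):
    """Swap the tails of two equal-length parent sequences at a random cut."""
    cut = rng.randint(1, len(parent_a) - 1)  # cut strictly inside the sequence
    child_a = parent_a[:cut] + parent_b[cut:]
    child_b = parent_b[:cut] + parent_a[cut:]
    return child_a, child_b
```

The "smart" operator in the paper additionally chooses the cut so that scheduling constraints remain satisfied, which is presumably what lets it outperform blind cuts.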

  16. The Impact of Presentation Format on Younger and Older Adults' Self-Regulated Learning.

    PubMed

    Price, Jodi

    2017-01-01

    Background/Study Context: Self-regulated learning involves deciding what to study and for how long. Debate surrounds whether individuals' selections are influenced more by item complexity, point values, or if instead people select in a left-to-right reading order, ignoring item complexity and value. The present study manipulated whether point values and presentation format favored selection of simple or complex Chinese-English pairs to assess the impact on younger and older adults' selection behaviors. One hundred and five younger (M age = 20.26, SD = 2.38) and 102 older adults (M age = 70.28, SD = 6.37) participated in the experiment. Participants studied four different 3 × 3 grids (two per trial), each containing three simple, three medium, and three complex Chinese-English vocabulary pairs presented in either a simple-first or complex-first order, depending on condition. Point values were assigned in either a 2-4-8 or 8-4-2 order so that either simple or complex items were favored. Points did not influence the order in which either age group selected items, whereas presentation format did. Younger and older adults selected more simple or complex items when they appeared in the first column. However, older adults selected and allocated more time to simpler items but recalled less overall than did younger adults. Memory beliefs and working memory capacity predicted study time allocation, but not item selection, behaviors. Presentation format must be considered when evaluating which theory of self-regulated learning best accounts for younger and older adults' study behaviors and whether there are age-related differences in self-regulated learning. The results of the present study combine with others to support the importance of also considering the role of external factors (e.g., working memory capacity and memory beliefs) in each age group's self-regulated learning decisions.

  17. Attention flexibly trades off across points in time.

    PubMed

    Denison, Rachel N; Heeger, David J; Carrasco, Marisa

    2017-08-01

    Sensory signals continuously enter the brain, raising the question of how perceptual systems handle this constant flow of input. Attention to an anticipated point in time can prioritize visual information at that time. However, how we voluntarily attend across time when there are successive task-relevant stimuli has been barely investigated. We developed a novel experimental protocol that allowed us to assess, for the first time, both the benefits and costs of voluntary temporal attention when perceiving a short sequence of two or three visual targets with predictable timing. We found that when humans directed attention to a cued point in time, their ability to perceive orientation was better at that time but also worse earlier and later. These perceptual tradeoffs across time are analogous to those found across space for spatial attention. We concluded that voluntary attention is limited, and selective, across time.

  18. Instance-based learning: integrating sampling and repeated decisions from experience.

    PubMed

    Gonzalez, Cleotilde; Dutt, Varun

    2011-10-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options, which causes them to earn or lose money. In the repeated-choice paradigm, participants select 1 of the 2 options a fixed number of times and receive immediate outcome feedback that affects their earnings. These 2 experimental paradigms have been studied independently, and different cognitive processes have often been assumed to take place in each, as represented in widely diverse computational models. We demonstrate that behavior in these 2 paradigms relies upon common cognitive processes proposed by the instance-based learning theory (IBLT; Gonzalez, Lerch, & Lebiere, 2003) and that the stopping point is the only difference between the 2 paradigms. A single cognitive model based on IBLT (with an added stopping-point rule in the sampling paradigm) captures human choices and predicts the sequence of choice selections across both paradigms. We integrate the paradigms through quantitative model comparison, in which IBLT outperforms the best models created for each paradigm separately. We discuss the implications for the psychology of decision making. © 2011 American Psychological Association

  19. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points; the terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root-finding method not infrequently diverges if the starting point is far from the root; in these regions, however, the current method merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root-finding method, since both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for the coefficients of linear equations. The current method, which does not require the solution of linear equations, instead requires more time for additional function and gradient evaluations. The classic trade-off of time for space separates the two methods.
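
Far from a root, the method reduces to steepest descent with an adaptive step, which can be sketched as below. This baseline omits the paper's eigenvector construction and accelerated near-root step; it simply minimizes the squared residual of the system with a backtracking line search, and all names are hypothetical.

```python
def numerical_grad(g, x, h=1e-6):
    """Central-difference gradient of a scalar function g at point x (a list)."""
    return [(g(x[:i] + [x[i] + h] + x[i + 1:]) -
             g(x[:i] + [x[i] - h] + x[i + 1:])) / (2 * h)
            for i in range(len(x))]

def solve_system(F, x0, tol=1e-8, max_iter=2000):
    """Steepest descent on g(x) = ||F(x)||^2 with backtracking step halving."""
    g = lambda x: sum(f * f for f in F(x))
    x = list(x0)
    for _ in range(max_iter):
        gx = g(x)
        if gx < tol:
            break
        grad = numerical_grad(g, x)
        step = 1.0
        # Halve the step until the residual norm actually decreases.
        while step > 1e-12:
            trial = [xi - step * gi for xi, gi in zip(x, grad)]
            if g(trial) < gx:
                x = trial
                break
            step *= 0.5
        else:
            break  # no descent step found
    return x
```

For F(x, y) = (x² + y² - 4, x - y) and a start at (3, 1), the iteration settles near the root (√2, √2).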

  20. Expression analysis of selected classes of circulating exosomal miRNAs in soccer players as an indicator of adaptation to physical activity

    PubMed Central

    Jastrzębski, Zbigniew; Kiszałkiewicz, Justyna; Brzeziański, Michał; Pastuszak-Lewandoska, Dorota; Radzimińki, Łukasz; Brzeziańska-Lasota, Ewa; Jegier, Anna

    2017-01-01

    Recent studies have shown that, depending on the type of training and its duration, the expression levels of selected circulating myomiRNAs (c-miR-27a,b, c-miR-29a,b,c, c-miR-133a) differ and correlate with physiological indicators of adaptation to physical activity. The aim was to analyse the expression of selected classes of miRNAs in soccer players during different periods of their training cycle. The study involved 22 soccer players aged 17-18 years. The multi-stage 20-m shuttle run test was used to estimate VO2 max among the soccer players. Serum samples were collected at baseline (time point I), after one week (time point II), and after 2 months of training (time point III). Analysis of the relative quantification (RQ) levels of three exosomal myomiRNAs, c-miRNA-27b, c-miR-29a, and c-miR-133, was performed by quantitative polymerase chain reaction (qPCR) at three time points: before training, after 1 week of training, and after completion of two months of competition-season training. The expression analysis showed low expression levels (relative to references) of all evaluated myomiRNAs before the training cycle. Analyses performed after a week of the training cycle and after completion of the entire training cycle showed elevated expression of all tested myomiRNAs. Statistical analysis revealed significant differences between the first and second time points for c-miR-27b and c-miR-29a; between the first and third time points for c-miR-27b and c-miR-29a; and between the second and third time points for c-miR-27b. Statistical analysis also showed a positive correlation between the levels of c-miR-29a and VO2 max. Two months of training affected the expression of c-miR-27b and c-miR-29a in soccer players. The increased expression of c-miR-27b and c-miR-29a with training could indicate their probable role in the adaptation process that takes place in the muscular system. Possibly, the expression of c-miR-29a will be found to be involved in cardiorespiratory fitness in future research. PMID:29472735

  1. The method ADAMONT v1.0 for statistical adjustment of climate projections applicable to energy balance land surface models

    NASA Astrophysics Data System (ADS)

    Verfaillie, Deborah; Déqué, Michel; Morin, Samuel; Lafaysse, Matthieu

    2017-11-01

    We introduce the method ADAMONT v1.0 to adjust and disaggregate daily climate projections from a regional climate model (RCM) using an observational dataset at hourly time resolution. The method uses a refined quantile mapping approach for statistical adjustment and an analogous method for sub-daily disaggregation. The method ultimately produces adjusted hourly time series of temperature, precipitation, wind speed, humidity, and short- and longwave radiation, which can in turn be used to force any energy balance land surface model. While the method is generic and can be employed for any appropriate observation time series, here we focus on the description and evaluation of the method in the French mountainous regions. The observational dataset used here is the SAFRAN meteorological reanalysis, which covers the entire French Alps split into 23 massifs, within which meteorological conditions are provided for several 300 m elevation bands. In order to evaluate the skills of the method itself, it is applied to the ALADIN-Climate v5 RCM using the ERA-Interim reanalysis as boundary conditions, for the time period from 1980 to 2010. Results of the ADAMONT method are compared to the SAFRAN reanalysis itself. Various evaluation criteria are used for temperature and precipitation but also snow depth, which is computed by the SURFEX/ISBA-Crocus model using the meteorological driving data from either the adjusted RCM data or the SAFRAN reanalysis itself. The evaluation addresses in particular the time transferability of the method (using various learning/application time periods), the impact of the RCM grid point selection procedure for each massif/altitude band configuration, and the intervariable consistency of the adjusted meteorological data generated by the method. Results show that the performance of the method is satisfactory, with similar or even better evaluation metrics than alternative methods. 
However, results for air temperature are generally better than for precipitation. Results in terms of snow depth are satisfactory, which can be viewed as indicating a reasonably good intervariable consistency of the meteorological data produced by the method. In terms of temporal transferability (evaluated over time periods of only 15 years), results depend on the learning period. In terms of RCM grid point selection, a complex technique that takes into account altitudinal as well as horizontal proximity to the SAFRAN massif centre point/altitude couples generally degrades evaluation metrics at high altitudes compared with a simpler selection method based on horizontal distance alone.
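
The quantile mapping at the heart of such adjustment methods can be sketched with empirical quantiles. This is a generic illustration, not the refined ADAMONT variant, which additionally handles seasons, wet/dry-day distinctions, and variable-specific details.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Empirical quantile mapping.

    Learn the quantile-to-quantile transfer from a learning period
    (model_hist vs. obs_hist), then map new model values onto the
    observed distribution by linear interpolation between quantile knots.
    """
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    return np.interp(model_new, model_q, obs_q)
```

If the model is biased by a constant offset over the learning period, the mapping removes exactly that offset for new values inside the calibrated range; values outside the range are clamped to the endpoint quantiles by np.interp.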

  2. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits recorded at multiple time points in life, the use of univariate and multivariate animal models is limited by the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for each analyzed trait separately via an adaptively penalized likelihood criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of order three for body weight (BWE) and body length (BL), and of order two for body depth (BD). Using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only the heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time points exceeded 0.5, whether for single or pairwise time points; moreover, correlations between early and late growth time points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.
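
The Legendre covariates used in such random regression models can be sketched as below (a generic illustration; the paper's exact age standardization and normalization constants are not reproduced):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(ages, order):
    """Legendre polynomial covariates for a random regression model.

    Ages are standardized to [-1, 1] over their observed range before
    evaluating polynomials 0..order. Returns an (n_records, order + 1)
    design matrix.
    """
    a = np.asarray(ages, dtype=float)
    t = 2.0 * (a - a.min()) / (a.max() - a.min()) - 1.0
    # Column k holds the k-th Legendre polynomial evaluated at t.
    return np.column_stack(
        [legendre.legval(t, [0] * k + [1]) for k in range(order + 1)]
    )
```

For body weight, order three yields a four-column design matrix per record; the random regression coefficients on these columns then carry the additive genetic covariance structure across ages.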

  3. Investigating the Accuracy of Point Clouds Generated for Rock Surfaces

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.

    2016-12-01

    Point clouds produced by different techniques are widely used to model rocks and to obtain properties of rock surfaces such as roughness, volume and area. These point clouds can be generated by laser scanning and by close-range photogrammetry. Laser scanning is the most common method: the laser scanner produces a 3D point cloud at regular intervals. In close-range photogrammetry, a point cloud can be produced from photographs taken under appropriate conditions, supported by developing hardware and software technology. Much photogrammetric software, open source or otherwise, currently supports point cloud generation. The two methods are close to each other in terms of accuracy: sufficient accuracy in the mm to cm range can be obtained with a qualified digital camera or laser scanner, and in both methods field work is completed in less time than with conventional techniques. In close-range photogrammetry, any part of a rock surface can be completely represented owing to overlapping oblique photographs. Despite the proximity of the resulting data, the two methods differ considerably in cost. In this study, whether point clouds produced from photographs can be used instead of point clouds produced by a laser scanner is investigated. For this purpose, rock surfaces with complex and irregular shapes located on the İstanbul Technical University Ayazaga Campus were selected as the study object. The selected object is a mixture of different rock types and consists of both partly weathered and fresh parts. The study was performed on part of a 30 m x 10 m rock surface. 2D (area-based) and 3D (volume-based) analyses were performed for several regions selected from the point clouds of the surface models. The analyses showed that the point clouds from the two methods are similar and can be used as alternatives to each other. This demonstrates that point clouds produced from photographs, which are economical and can be generated in less time, can be used in several kinds of studies instead of point clouds produced by a laser scanner.

  4. Real time three dimensional sensing system

    DOEpatents

    Gordon, S.J.

    1996-12-31

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane. 7 figs.

  5. Real time three dimensional sensing system

    DOEpatents

    Gordon, Steven J.

    1996-01-01

    The invention is a three dimensional sensing system which utilizes two flexibly located cameras for receiving and recording visual information with respect to a sensed object illuminated by a series of light planes. Each pixel of each image is converted to a digital word and the words are grouped into stripes, each stripe comprising contiguous pixels. One pixel of each stripe in one image is selected and an epi-polar line of that point is drawn in the other image. The three dimensional coordinate of each selected point is determined by determining the point on said epi-polar line which also lies on a stripe in the second image and which is closest to a known light plane.

  6. Topological photonic crystal with ideal Weyl points

    NASA Astrophysics Data System (ADS)

    Wang, Luyang; Jian, Shao-Kai; Yao, Hong

    Weyl points in three-dimensional photonic crystals behave as monopoles of Berry flux in momentum space. Here, based on symmetry analysis, we show that a minimal number of symmetry-related Weyl points can be realized in time-reversal-invariant photonic crystals. We propose to realize these ``ideal'' Weyl points in modified double-gyroid photonic crystals, a proposal confirmed by our first-principles photonic band-structure calculations. Photonic crystals with ideal Weyl points are qualitatively advantageous in applications such as angular and frequency selectivity, broadband invisibility cloaking, and broadband 3D imaging.

  7. Age-related changes in the function and structure of the peripheral sensory pathway in mice.

    PubMed

    Canta, Annalisa; Chiorazzi, Alessia; Carozzi, Valentina Alda; Meregalli, Cristina; Oggioni, Norberto; Bossi, Mario; Rodriguez-Menendez, Virginia; Avezza, Federica; Crippa, Luca; Lombardi, Raffaella; de Vito, Giuseppe; Piazza, Vincenzo; Cavaletti, Guido; Marmiroli, Paola

    2016-09-01

    This study aims to describe the changes occurring in the entire peripheral nervous system sensory pathway over a 2-year observation period in a cohort of C57BL/6 mice. The neurophysiological studies evidenced significant differences across the selected time points corresponding to childhood, young adulthood, adulthood, and aging (i.e., 1, 7, 15, and 25 months of age), with a parabolic course as a function of time. The pathological assessment demonstrated signs of age-related changes from the age of 7 months, with a remarkable increase in both peripheral nerves and dorsal root ganglia at the subsequent time points. These changes mainly involved the myelin sheaths, as also confirmed by Rotating-Polarization Coherent Anti-Stokes Raman Scattering microscopy analysis. Evident changes were also present in the morphometric analysis of the peripheral nerves, dorsal root ganglia neurons, and skin biopsies. This extensive, multimodal characterization of peripheral nervous system changes in aging provides the background for future mechanistic studies, allowing the selection of the most appropriate time points and readouts according to the investigation aims. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers

    NASA Astrophysics Data System (ADS)

    Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément

    2015-07-01

    3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
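As a rough sketch of components (i)-(ii) of such a framework — not the authors' implementation, and using a fixed neighbourhood size rather than the optimal per-point neighbourhoods the paper advocates — the common eigenvalue-based geometric features can be computed from k-nearest-neighbour covariances:

```python
import numpy as np

def eigen_features(points, k=10):
    """Linearity, planarity and sphericity per point from the covariance
    eigenvalues of its k nearest neighbours (fixed k for simplicity; the
    paper instead selects an optimal neighbourhood per point)."""
    feats = []
    for p in points:
        nbrs = points[np.argsort(np.linalg.norm(points - p, axis=1))[:k]]
        l1, l2, l3 = sorted(np.linalg.eigvalsh(np.cov(nbrs.T)), reverse=True)
        feats.append(((l1 - l2) / l1, (l2 - l3) / l1, l3 / l1))
    return np.array(feats)

rng = np.random.default_rng(1)
# A noisy planar patch: planarity should dominate, sphericity stay near zero.
plane = np.c_[rng.uniform(size=(200, 2)), rng.normal(scale=1e-3, size=200)]
feats = eigen_features(plane, k=20).mean(axis=0)
print(feats)  # [linearity, planarity, sphericity]
```

Such features would then feed components (iii)-(iv), feature selection and classification.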

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krityakierne, Tipaluck; Akhtar, Taimoor; Shoemaker, Christine A.

    This paper presents a parallel surrogate-based global optimization method for computationally expensive objective functions that is more effective for larger numbers of processors. To reach this goal, we integrated concepts from multi-objective optimization and tabu search into single-objective surrogate optimization. Our proposed derivative-free algorithm, called SOP, uses non-dominated sorting of points for which the expensive function has been previously evaluated. The two objectives are the expensive function value of the point and the minimum distance of the point to previously evaluated points. Based on the results of non-dominated sorting, P points from the sorted fronts are selected as centers from which many candidate points are generated by random perturbations. Based on surrogate approximation, the best candidate point is subsequently selected for expensive evaluation for each of the P centers, with simultaneous computation on P processors. Centers that previously did not generate good solutions are tabu with a given tenure. We show almost sure convergence of this algorithm under some conditions. The performance of SOP is compared with two RBF-based methods. The test results show that SOP is an efficient method that can reduce the time required to find a good near-optimal solution. In a number of cases the efficiency of SOP is so good that SOP with 8 processors found an accurate answer in less wall-clock time than the other algorithms did with 32 processors.
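The non-dominated sorting at the heart of SOP can be sketched as follows (a toy illustration, not the published implementation): each previously evaluated point carries two objectives to minimise, its expensive function value and the negated minimum distance to the other evaluated points, so that both low values and isolation are rewarded.

```python
import numpy as np

def non_dominated_fronts(objs):
    """Successive non-dominated fronts under minimisation of all columns,
    best front first (an O(n^2) sketch of the sorting step SOP uses)."""
    objs = np.asarray(objs, dtype=float)
    remaining = list(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Evaluated points and their expensive f values; the second objective is the
# negated minimum distance to the other evaluated points.
X = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0], [0.52, 0.48]])
f = np.array([3.0, 1.0, 2.5, 1.1])
mindist = np.array([np.delete(np.linalg.norm(X - x, axis=1), i).min()
                    for i, x in enumerate(X)])
fronts = non_dominated_fronts(np.c_[f, -mindist])
print(fronts[0])  # indices eligible as perturbation centers
```

P centers would then be drawn from the leading fronts, skipping any center currently on the tabu list.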

  10. A Statistical Guide to the Design of Deep Mutational Scanning Experiments

    PubMed Central

    Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia

    2016-01-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
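A minimal sketch of the underlying fitness estimation (the standard log-linear estimator for bulk competitions, not the paper's full machinery) shows how time points and sequencing depth enter: each sampled time point contributes one read ratio, and depth only controls the counting noise on it.

```python
import numpy as np

def estimate_s(times, mut_counts, wt_counts):
    """Selection coefficient as the least-squares slope of
    log(mutant/wild-type reads) against time."""
    y = np.log(np.asarray(mut_counts, dtype=float) / np.asarray(wt_counts, dtype=float))
    slope, _ = np.polyfit(np.asarray(times, dtype=float), y, 1)
    return float(slope)

rng = np.random.default_rng(2)
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])          # sampled time points
wt = np.full_like(t, 1_000_000)                  # wild-type reads per sample
mut = rng.poisson(0.01 * np.exp(0.1 * t) * wt)   # mutant reads, true s = 0.1
print(f"estimated s = {estimate_s(t, mut, wt):.3f}")
```

Because the slope's variance scales with the spread of the sampled times, clustering points at the start and end of a fixed-duration experiment — as the paper recommends — directly tightens the estimate.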

  11. Helping the decision maker effectively promote various experts' views into various optimal solutions to China's institutional problem of health care provider selection through the organization of a pilot health care provider research system.

    PubMed

    Tang, Liyang

    2013-04-04

    The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized in China's health care system to efficiently collect, from various experts, the data needed to determine that optimal solution. The purpose of this study was therefore to apply an optimal implementation methodology to help the decision maker effectively promote various experts' views into various optimal solutions to this problem, with the support of the pilot system. After establishing the general framework of China's institutional problem of health care provider selection, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), conducted for the first time in China through the pilot health care provider research system, and adopted the analytic network process (ANP) implementation methodology to analyze the survey dataset. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented approach was optimal from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was optimal from the points of view of nurses, officials in medical insurance agencies, and health care researchers. The data collected through the pilot health care provider research system in the 2009 to 2010 national expert survey could thus help the decision maker effectively promote various experts' views into various optimal solutions to China's institutional problem of health care provider selection.

  12. Gene selection with multiple ordering criteria.

    PubMed

    Chen, James J; Tsai, Chen-An; Tzeng, Shengli; Chen, Chun-Houh

    2007-03-05

    A microarray study may select different differentially expressed gene sets because of different selection criteria. For example, the fold-change and p-value are two commonly known criteria to select differentially expressed genes under two experimental conditions. These two selection criteria often result in incompatible selected gene sets. Also, in a two-factor, say, treatment by time experiment, the investigator may be interested in one gene list that responds to both treatment and time effects. We propose three layer ranking algorithms, point-admissible, line-admissible (convex), and Pareto, to provide a preference gene list from multiple gene lists generated by different ranking criteria. Using the public colon data as an example, the layer ranking algorithms are applied to the three univariate ranking criteria, fold-change, p-value, and frequency of selections by the SVM-RFE classifier. A simulation experiment shows that for experiments with small or moderate sample sizes (less than 20 per group) and detecting a 4-fold change or less, the two-dimensional (p-value and fold-change) convex layer ranking selects differentially expressed genes with generally lower FDR and higher power than the standard p-value ranking. Three applications are presented. The first application illustrates a use of the layer rankings to potentially improve predictive accuracy. The second application illustrates an application to a two-factor experiment involving two dose levels and two time points. The layer rankings are applied to selecting differentially expressed genes relating to the dose and time effects. In the third application, the layer rankings are applied to a benchmark data set consisting of three dilution concentrations to provide a ranking system from a long list of differentially expressed genes generated from the three dilution concentrations. 
The layer ranking algorithms are useful to help investigators in selecting the most promising genes from multiple gene lists generated by different filter, normalization, or analysis methods for various objectives.
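The Pareto variant of the layer rankings can be sketched by repeatedly peeling non-dominated fronts from a two-criteria gene list — an illustration consistent with the description above, not the authors' code; the point- and line-admissible variants differ in the admissible set used at each peel.

```python
import numpy as np

def pareto_layers(pvals, fold_changes):
    """Peel successive Pareto layers, preferring small p-value and large
    absolute fold-change (an O(n^2) illustration of Pareto layer ranking)."""
    objs = np.c_[np.asarray(pvals, dtype=float), -np.abs(fold_changes)]
    remaining = list(range(len(objs)))
    layers = []
    while remaining:
        layer = [i for i in remaining
                 if not any(np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i])
                            for j in remaining if j != i)]
        layers.append(sorted(layer))
        remaining = [i for i in remaining if i not in layer]
    return layers

p  = np.array([1e-5, 0.04, 1e-4, 0.30])   # per-gene p-values
fc = np.array([1.2,  4.0,  3.5,  1.1])    # per-gene fold-changes
print(pareto_layers(p, fc))
```

Genes in the first layer are preferred under every monotone weighting of the two criteria, which is what makes the layer order usable as a single preference list.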

  13. [Professor DONG Gui-rong's experience for the treatment of peripheral facial paralysis].

    PubMed

    Cao, Lian-Ying; Shen, Te-Li; Zhang, Wei; Chen, Si-Hui

    2012-05-01

    Professor DONG Gui-rong's theoretical principles and key manipulation points for treating peripheral facial paralysis are introduced in detail from the angles of syndrome differentiation, timing, acupoint prescription, and needling methods. For syndrome differentiation and timing, the professor emphasized choosing the right treatment timing, following the symptoms, and treating by stages; in addition, it is necessary to identify the cause and nature of the disease so as to treat tendons and muscles in combination. For acupoint prescription and needling methods, he proposed that acupoint selection should combine distal and local points, made full use of Baihui (GV 20) to regulate the yang qi of the whole body, and paid close attention to needling methods and staged treatment. In the late stage of peripheral facial paralysis, selecting Back-shu points on the basis of syndrome differentiation to regulate zang-fu function should achieve a better therapeutic effect.

  14. Structural Controllability of Temporal Networks with a Single Switching Controller

    PubMed Central

    Yao, Peng; Hou, Bao-Yu; Pan, Yu-Jian; Li, Xiang

    2017-01-01

    Temporal networks, whose topology evolves with time, are an important class of complex networks. The temporal trees of a temporal network describe the necessary edges sustaining the network as well as their active time points. Using a switching controller that properly selects its location over time, temporal trees are exploited to improve the controllability of the network, so that more nodes are controlled within the limited time. Several switching strategies for efficiently selecting the location of the controller are designed and verified on synthetic and empirical temporal networks, achieving better control performance. PMID:28107538

  15. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling and the related parameter estimation, but few studies consider the selection of optimal sampling times for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, estimating parameters for models from only a few available samples is of significant practical value. For signal transduction, the sampling intervals are usually unevenly distributed and chosen heuristically. In this paper, we investigate an approach that guides the selection of time points in an optimal way so as to minimize the variance of the parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from becoming stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
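For a single-parameter toy model, the variance-minimisation objective reduces to maximising Fisher information, which can be illustrated by brute force (a stand-in for the paper's quantum-inspired evolutionary search, which tackles the same objective for full pathway models):

```python
import itertools
import numpy as np

def d_optimal_times(candidates, m, k0=1.0):
    """Choose m sampling times maximising the Fisher information for the
    rate of y(t) = exp(-k t) at a nominal k0 (brute-force D-optimal design)."""
    def info(ts):
        ts = np.asarray(ts, dtype=float)
        sens = ts * np.exp(-k0 * ts)      # |dy/dk| at k0
        return float(np.sum(sens ** 2))
    return max(itertools.combinations(candidates, m), key=info)

grid = [0.1, 0.5, 1.0, 2.0, 4.0, 8.0]
print(d_optimal_times(grid, 3))  # informative times cluster near t = 1/k0
```

For realistic multi-parameter pathway models the search space explodes combinatorially, which is why a heuristic optimizer replaces the exhaustive scan.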

  16. A uniform energy consumption algorithm for wireless sensor and actuator networks based on dynamic polling point selection.

    PubMed

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2013-12-19

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation.
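A greatly simplified, hypothetical sketch of the energy-aware, delay-constrained selection idea (the paper's algorithm additionally rebuilds multi-hop routing trees rooted at the polling points and updates its choices dynamically): admit sensor nodes as polling points in decreasing residual energy while the actuator's tour stays within the delay budget.

```python
import numpy as np

def tour_length(points):
    """Length of visiting points in order and returning to the start."""
    pts = np.asarray(points)
    return float(np.linalg.norm(np.diff(np.vstack([pts, pts[:1]]), axis=0), axis=1).sum())

def select_polling_points(positions, energies, max_tour):
    """Greedy sketch: add nodes by decreasing residual energy while the
    actuator tour length respects the delay constraint."""
    chosen = []
    for i in np.argsort(energies)[::-1]:
        trial = chosen + [i]
        if tour_length(positions[trial]) <= max_tour:
            chosen = trial
    return chosen

rng = np.random.default_rng(3)
pos = rng.uniform(0, 100, size=(10, 2))      # sensor node locations
energy = rng.uniform(0, 1, size=10)          # residual energies
picked = select_polling_points(pos, energy, max_tour=150.0)
print(picked, round(tour_length(pos[picked]), 1))
```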

  17. Optimizing the sequence of diameter distributions and selection harvests for uneven-aged stand management

    Treesearch

    Robert G. Haight; J. Douglas Brodie; Darius M. Adams

    1985-01-01

    The determination of an optimal sequence of diameter distributions and selection harvests for uneven-aged stand management is formulated as a discrete-time optimal-control problem with bounded control variables and free-terminal point. An efficient programming technique utilizing gradients provides solutions that are stable and interpretable on the basis of economic...

  18. A Statistical Guide to the Design of Deep Mutational Scanning Experiments.

    PubMed

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia

    2016-09-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. Copyright © 2016 by the Genetics Society of America.

  19. A High-Resolution Capability for Large-Eddy Simulation of Jet Flows

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.

    2011-01-01

    A large-eddy simulation (LES) code that utilizes high-resolution numerical schemes is described and applied to a compressible jet flow. The code is written in a general manner such that the accuracy/resolution of the simulation can be selected by the user. Time discretization is performed using a family of low-dispersion Runge-Kutta schemes, selectable from first- to fourth-order. Spatial discretization is performed using central differencing schemes. Both standard schemes, second- to twelfth-order (3 to 13 point stencils) and Dispersion Relation Preserving schemes from 7 to 13 point stencils are available. The code is written in Fortran 90 and uses hybrid MPI/OpenMP parallelization. The code is applied to the simulation of a Mach 0.9 jet flow. Four-stage third-order Runge-Kutta time stepping and the 13 point DRP spatial discretization scheme of Bogey and Bailly are used. The high resolution numerics used allows for the use of relatively sparse grids. Three levels of grid resolution are examined, 3.5, 6.5, and 9.2 million points. Mean flow, first-order turbulent statistics and turbulent spectra are reported. Good agreement with experimental data for mean flow and first-order turbulent statistics is shown.
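As an illustration of the standard central schemes such a code offers (the DRP stencils of Bogey and Bailly use different coefficients, optimised for low dispersion error rather than formal order), a sixth-order 7-point central difference on a periodic grid can be written as:

```python
import numpy as np

def ddx_central6(f, h):
    """Sixth-order standard central first derivative on a periodic grid,
    stencil offsets -3..+3 with coefficients (-1, 9, -45, 0, 45, -9, 1)/60h."""
    c = np.array([-1.0, 9.0, -45.0, 0.0, 45.0, -9.0, 1.0]) / (60.0 * h)
    # np.roll(f, 3 - k) aligns f[i + (k - 3)] with index i.
    return sum(ck * np.roll(f, 3 - k) for k, ck in enumerate(c))

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
err = np.max(np.abs(ddx_central6(np.sin(x), x[1] - x[0]) - np.cos(x)))
print(f"max error: {err:.2e}")
```

The rapid error decay of high-order stencils is what allows the relatively sparse grids mentioned above.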

  20. Causality in time-neutral cosmologies

    NASA Astrophysics Data System (ADS)

    Kent, Adrian

    1999-02-01

    Gell-Mann and Hartle (GMH) have recently considered time-neutral cosmological models in which the initial and final conditions are independently specified, and several authors have investigated experimental tests of such models. We point out here that GMH time-neutral models can allow superluminal signaling, in the sense that it can be possible for observers in those cosmologies, by detecting and exploiting regularities in the final state, to construct devices which send and receive signals between space-like separated points. In suitable cosmologies, any single superluminal message can be transmitted with probability arbitrarily close to one by the use of redundant signals. However, the outcome probabilities of quantum measurements generally depend on precisely which past and future measurements take place. As the transmission of any signal relies on quantum measurements, its transmission probability is similarly context dependent. As a result, the standard superluminal signaling paradoxes do not apply. Despite their unusual features, the models are internally consistent. These results illustrate an interesting conceptual point. The standard view of Minkowski causality is not an absolutely indispensable part of the mathematical formalism of relativistic quantum theory. It is contingent on the empirical observation that naturally occurring ensembles can be naturally pre-selected but not post-selected.

  1. Differences in Kindergartners' Participation and Regulation Strategies across Time and Instructional Contexts

    ERIC Educational Resources Information Center

    Neitzel, Carin; Connor, Lisa

    2017-01-01

    This study addressed questions about the function of children's various participation and regulation strategies in different instructional contexts and at different points in time in school. The developmental trajectories of kindergartners' academic participation and regulation strategy selection and use across the school year in teacher-directed…

  2. Vehicle Routing Problem Using Genetic Algorithm with Multi Compartment on Vegetable Distribution

    NASA Astrophysics Data System (ADS)

    Kurnia, Hari; Gustri Wahyuni, Elyza; Cergas Pembrani, Elang; Gardini, Syifa Tri; Kurnia Aditya, Silfa

    2018-03-01

    A problem often faced by industries that manage and distribute vegetables is how to distribute them so that their quality is properly maintained. The issues encountered include selecting an optimal route with little travel time, the so-called Traveling Salesman Problem (TSP). These problems can be modeled as a Vehicle Routing Problem (VRP) solved with a genetic algorithm using rank-based selection, order-based crossover, and order-based mutation on the selected chromosomes. This study is limited to 20 market points, 2 warehouse points (multi-compartment), and 5 vehicles. For one distribution run, a vehicle can serve only 4 market points from one particular warehouse, and can carry only a 100 kg capacity.
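Order-based crossover on route chromosomes can be sketched as follows (a common OX variant; the paper's exact operator details are not given in the abstract). It preserves the relative visiting order of both parents, which is why it suits permutation-encoded routes:

```python
def order_crossover(p1, p2, i, j):
    """Order crossover (OX variant): keep p1[i:j] in place, fill the
    remaining positions with the other customers in the order they
    appear in p2."""
    hole = set(p1[i:j])
    filler = [g for g in p2 if g not in hole]
    return filler[:i] + p1[i:j] + filler[i:]

p1 = [1, 2, 3, 4, 5, 6, 7, 8]     # two parent routes over 8 market points
p2 = [3, 7, 5, 1, 6, 8, 2, 4]
child = order_crossover(p1, p2, 2, 5)
print(child)  # a valid permutation inheriting p1's middle segment
```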

  3. Evaluation of the leap motion controller as a new contact-free pointing device.

    PubMed

    Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard

    2014-12-24

    This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.

  4. Evaluation of the Leap Motion Controller as a New Contact-Free Pointing Device

    PubMed Central

    Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard

    2015-01-01

    This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8 % for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC. PMID:25609043
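The Fitts'-law quantities behind such an analysis are straightforward: an index of difficulty in bits and the resulting throughput. A minimal sketch using the Shannon formulation (ISO-style evaluations average an effective-width variant over many trials; the movement times below are hypothetical, chosen only to mirror the reported factor of two):

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty (Shannon formulation), in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    """Bits per second for one pointing condition."""
    return index_of_difficulty(distance, width) / movement_time

mouse_tp = throughput(512, 32, 0.8)   # hypothetical movement time (s)
lmc_tp = throughput(512, 32, 1.6)     # "twice as large" as the mouse
print(f"mouse: {mouse_tp:.2f} bit/s, LMC: {lmc_tp:.2f} bit/s")
```

At equal task difficulty, doubling the movement time halves the throughput, which is the quantitative content of the comparison above.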

  5. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
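The AIC test used for model selection penalises the extra parameters of the double-source model, accepting it only when the misfit reduction justifies them. For least-squares misfits this can be sketched as (residual and parameter-count values below are hypothetical, for illustration only):

```python
import math

def aic_least_squares(rss, n, k):
    """AIC for a least-squares fit (n residuals, k free parameters),
    up to an additive constant shared by the compared models."""
    return n * math.log(rss / n) + 2 * k

n = 400                                        # hypothetical waveform samples
aic_single = aic_least_squares(rss=8.0, n=n, k=10)
aic_double = aic_least_squares(rss=4.0, n=n, k=20)
print("prefer double source" if aic_double < aic_single else "prefer single source")
```

The model with the lower AIC is retained; for simple events the single-source misfit is already small, so the penalty term keeps the simpler model.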

  6. Challenges in early clinical development of adjuvanted vaccines.

    PubMed

    Della Cioppa, Giovanni; Jonsdottir, Ingileif; Lewis, David

    2015-06-08

    A three-step approach to the early development of adjuvanted vaccine candidates is proposed, the goal of which is to allow ample space for exploratory and hypothesis-generating human experiments and to select dose(s) and dosing schedule(s) to bring into full development. Although the proposed approach is more extensive than the traditional early development program, the authors suggest that by addressing key questions upfront the overall time, size and cost of development will be reduced and the probability of public health advancement enhanced. The immunogenicity end-points chosen for early development should be critically selected: an established immunological parameter with a well characterized assay should be selected as primary end-point for dose and schedule finding; exploratory information-rich end-points should be limited in number and based on pre-defined hypothesis generating plans, including system biology and pathway analyses. Building a pharmacodynamic profile is an important aspect of early development: to this end, multiple early (within 24h) and late (up to one year) sampling is necessary, which can be accomplished by sampling subgroups of subjects at different time points. In most cases the final target population, even if vulnerable, should be considered for inclusion in early development. In order to obtain the multiple formulations necessary for the dose and schedule finding, "bed-side mixing" of various components of the vaccine is often necessary: this is a complex and underestimated area that deserves serious research and logistical support. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions.

    PubMed

    Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R

    2017-06-01

    To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
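Manual point registration of the kind compared here ultimately solves a rigid-alignment problem between corresponding reference points. A standard least-squares solution (the Kabsch algorithm — a generic sketch, not the Logiq E9's internal method) together with the residual displacement used as the accuracy measure:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _S, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflection
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(4)
ct = rng.uniform(0, 100, size=(6, 3))           # reference points in CT (mm)
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
us = ct @ R_true.T + np.array([5.0, -3.0, 2.0])  # same points seen in US
R, t = kabsch(ct, us)
residual = np.linalg.norm(ct @ R.T + t - us, axis=1).mean()
print(f"mean residual displacement: {residual:.2e} mm")
```

With noise-free correspondences the residual is numerically zero; in practice, point-picking error in both modalities produces the millimetre-scale displacements reported above.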

  8. Terrain modeling for real-time simulation

    NASA Astrophysics Data System (ADS)

    Devarajan, Venkat; McArthur, Donald E.

    1993-10-01

    There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real time. One approach to meeting this requirement is to drape photo-texture over a planar-polygon model of the terrain. The real-time system then computes, for each pixel of the output image, an address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons that may be displayed in each scene. The trade-off between these conflicting requirements must be made in real time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LOD) and then select LODs for display as a function of range. This approach can lead both to anomalies in the displayed scene and to inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons organized as a tree, with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, some nodes drop below the limit of perception and may be deleted, while new nodes must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm, which are useful for optimizing system performance with a limited display capability.
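    The tree-pruning idea described above can be sketched in a few lines. This is a hedged illustration, not the authors' algorithm: a node is rendered whenever its geometric error, scaled by distance to the eye point, falls below a perceptual tolerance; otherwise its children are visited.

```python
import math

class Node:
    """One terrain patch in the nested-polygon tree."""
    def __init__(self, center, geometric_error, children=()):
        self.center = center                    # (x, y, z) of the patch center
        self.geometric_error = geometric_error  # worst-case surface error of this LOD (m)
        self.children = list(children)

def select_lod(node, eye, tolerance):
    """Return the set of tree nodes to render for this eye position."""
    dist = math.dist(node.center, eye)
    # Perceived error shrinks with distance: keep this node if coarse is good enough,
    # or if it is a leaf and no finer detail exists.
    if not node.children or node.geometric_error / max(dist, 1e-9) < tolerance:
        return [node]
    selected = []
    for child in node.children:
        selected.extend(select_lod(child, eye, tolerance))
    return selected

# A distant eye sees only the coarse root; a close eye gets the finer leaf patches.
leaves = [Node((-5.0, 0.0, 0.0), 1.0), Node((5.0, 0.0, 0.0), 1.0)]
root = Node((0.0, 0.0, 0.0), 10.0, leaves)
far_set = select_lod(root, (0.0, 0.0, 1000.0), tolerance=0.02)
near_set = select_lod(root, (0.0, 0.0, 10.0), tolerance=0.02)
```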

  9. Helping the decision maker effectively promote various experts’ views into various optimal solutions to China’s institutional problem of health care provider selection through the organization of a pilot health care provider research system

    PubMed Central

    2013-01-01

    Background The main aim of China's Health Care System Reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized within China's health care system that can efficiently collect, from various experts, the data needed to determine the optimal solution to this problem. The purpose of this study was therefore to apply an optimal implementation methodology to help the decision maker effectively promote various experts' views into various optimal solutions to this problem with the support of this pilot system. Methods After the general framework of China's institutional problem of health care provider selection was established, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009 to 2010 national expert survey (n = 3,914), conducted through the organization of a pilot health care provider research system for the first time in China, and the analytic network process (ANP) implementation methodology was adopted to analyze the survey dataset. Results The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented health care provider approach was the optimal solution from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was the optimal solution from the points of view of nurses, officials in medical insurance agencies, and health care researchers. 
Conclusions The data collected through a pilot health care provider research system in the 2009 to 2010 national expert survey could help the decision maker effectively promote various experts’ views into various optimal solutions to China’s institutional problem of health care provider selection. PMID:23557082

  10. IgV gene intraclonal diversification and clonal evolution in B-cell chronic lymphocytic leukaemia.

    PubMed

    Bagnara, Davide; Callea, Vincenzo; Stelitano, Caterina; Morabito, Fortunato; Fabris, Sonia; Neri, Antonino; Zanardi, Sabrina; Ghiotto, Fabio; Ciccone, Ermanno; Grossi, Carlo Enrico; Fais, Franco

    2006-04-01

    Intraclonal diversification of immunoglobulin (Ig) variable (V) genes was evaluated in leukaemic cells from a B-cell chronic lymphocytic leukaemia (B-CLL) case at four time points over a 2-year period. Intraclonal heterogeneity was analysed by sequencing 305 molecular clones derived from polymerase chain reaction amplification of B-CLL cell IgV heavy (H) and light (L) chain gene rearrangements. Sequences were compared to evaluate intraclonal variation and the nature of somatic mutations. Although IgV intraclonal variation was detected at all time points, its level decreased with time, and a parallel emergence of two more highly represented V(H)DJ(H) clones was observed. They differed by nine nucleotide substitutions, only one of which caused a conservative amino acid replacement. In addition, one V(L)J(L) rearrangement became more represented over time. Analyses of somatic mutations suggest antigen selection and impairment of negative selection of neoplastic cells. In addition, a genealogical tree representing a model of clonal evolution of the neoplastic cells was created. It is of note that, during the period of study, the patient showed clinical progression of disease. We conclude that antigen stimulation and somatic hypermutation may participate in disease progression through the selection and expansion of neoplastic subclone(s).

  11. Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings

    NASA Astrophysics Data System (ADS)

    Hodgkinson, P.; Holmes, K. J.; Hore, P. J.

    Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
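    The Cramér-Rao machinery above can be illustrated with a small numerical sketch. The signal model here (a unit-amplitude sin(πJt) modulation damped by an assumed relaxation decay exp(-R2·t), with white Gaussian noise) and all parameter values are assumptions for illustration, not the paper's settings; with these numbers, concentrating the scans at the first node t = 1/J gives a lower bound on var(J) than spreading the same number of scans uniformly.

```python
import math

def crlb_J(times, J, R2, sigma=1.0):
    """Cramer-Rao lower bound on var(J_hat) for s(t) = sin(pi*J*t) * exp(-R2*t)."""
    # Fisher information: (1/sigma^2) * sum over samples of (ds/dJ at t_k)^2,
    # with ds/dJ = pi * t * cos(pi*J*t) * exp(-R2*t).
    fisher = sum(
        (math.pi * t * math.cos(math.pi * J * t) * math.exp(-R2 * t)) ** 2
        for t in times) / sigma ** 2
    return 1.0 / fisher

J, R2 = 10.0, 10.0                            # assumed: 10 Hz coupling, 10 s^-1 decay
uniform = [0.01 * k for k in range(1, 33)]    # 32 equally spaced samples
node = [1.0 / J] * 32                         # all 32 scans at the first node t = 1/J
bound_uniform = crlb_J(uniform, J, R2)
bound_node = crlb_J(node, J, R2)
```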

  12. Effect of target color and scanning geometry on terrestrial LiDAR point-cloud noise and plane fitting

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Martinez, Aaron

    2018-01-01

    Point-cloud coordinate information derived from terrestrial Light Detection And Ranging (LiDAR) is important for several applications in surveying and civil engineering. Plane fitting and segmentation of target surfaces is an important step in several applications, such as the monitoring of structures. Reliable parametric modeling and segmentation rely on the underlying quality of the point-cloud. Therefore, understanding how point-cloud errors affect the fitting of planes and segmentation is important. Point-cloud intensity, which accompanies the point-cloud data, often goes hand-in-hand with point-cloud noise. This study uses industrial particle boards painted with eight different colors (black, white, grey, red, green, blue, brown, and yellow) and two different sheens (flat and semi-gloss) to explore how noise and plane residuals vary with scanning geometry (i.e., distance and incidence angle) and target color. Results show that darker colors, such as black and brown, can produce point clouds that are several times noisier than bright targets, such as white. In addition, a semi-gloss sheen reduces the noise of dark targets by a factor of about 2-3. The study of plane residuals with scanning geometry reveals that, in many of the cases tested, residuals decrease with increasing incidence angle, which can assist in understanding the distribution of plane residuals in a dataset. Finally, a scheme is developed to derive survey guidelines based on the data collected in this experiment. Three examples demonstrate that users should consider instrument specifications, the required precision of plane residuals, required point spacing, target color, and target sheen when selecting scanning locations. Outcomes of this study can help users select appropriate instrumentation and improve the planning of terrestrial LiDAR data acquisition.
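    A minimal sketch of the plane-fitting step discussed above, using only the standard library: fit z = ax + by + c by least squares (normal equations solved by Cramer's rule) and report the residual RMS as a simple proxy for point-cloud noise. This is illustrative, not the authors' processing pipeline.

```python
import math
import random

def fit_plane(pts):
    """Least-squares fit of z = a*x + b*y + c; returns (a, b, c) and residual RMS."""
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts); sz = sum(p[2] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts); syy = sum(p[1] * p[1] for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    sxz = sum(p[0] * p[2] for p in pts); syz = sum(p[1] * p[2] for p in pts)
    # Normal equations M @ [a, b, c] = r, solved by Cramer's rule.
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(M)
    coeffs = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for j in range(3):
            Mi[j][i] = r[j]          # replace column i with the right-hand side
        coeffs.append(det3(Mi) / d)
    a, b, c = coeffs
    rms = math.sqrt(sum((p[2] - (a * p[0] + b * p[1] + c)) ** 2 for p in pts) / n)
    return (a, b, c), rms

# Synthetic "scan" of a tilted board with small range noise.
random.seed(0)
pts = []
for _ in range(200):
    x, y = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    z = 0.3 * x - 0.2 * y + 1.0 + random.uniform(-0.01, 0.01)
    pts.append((x, y, z))
(plane_a, plane_b, plane_c), noise_rms = fit_plane(pts)
```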

  13. Earth observing system instrument pointing control modeling for polar orbiting platforms

    NASA Technical Reports Server (NTRS)

    Briggs, H. C.; Kia, T.; Mccabe, S. A.; Bell, C. E.

    1987-01-01

    An approach to instrument pointing control performance assessment for large multi-instrument platforms is described. First, instrument pointing requirements and reference platform control systems for the Eos Polar Platforms are reviewed. Performance modeling tools are then described, including NASTRAN models of two large platforms, a modal selection procedure utilizing a balanced realization method, and reduced-order platform models with core and instrument pointing control loops added. Time-history simulations of instrument pointing and stability performance in response to commanded slewing of adjacent instruments demonstrate the limits of tolerable slew activity. Simplified models of rigid-body responses are also developed for comparison. Instrument pointing control methods required in addition to the core platform control system to meet instrument pointing requirements are considered.

  14. Microgravity cursor control device evaluation for Space Station Freedom workstations

    NASA Technical Reports Server (NTRS)

    Adam, Susan; Holden, Kritina L.; Gillan, Douglas; Rudisill, Marianne

    1991-01-01

    This research addressed direct manipulation interface (cursor control device) usability in microgravity. The data discussed are from KC-135 flights. The task included pointing and dragging movements over a variety of angles and distances. Detailed error and completion-time data provided researchers with information regarding cursor control device shape, selection button arrangement, sensitivity, selection modes, and considerations for future research.

  15. ROLE OF TIMING IN ASSESSMENT OF NERVE REGENERATION

    PubMed Central

    BRENNER, MICHAEL J.; MORADZADEH, ARASH; MYCKATYN, TERENCE M.; TUNG, THOMAS H. H.; MENDEZ, ALLEN B.; HUNTER, DANIEL A.; MACKINNON, SUSAN E.

    2014-01-01

    Small animal models are indispensable for research on nerve injury and reconstruction, but their superlative regenerative potential may confound experimental interpretation. This study investigated time-dependent neuroregenerative phenomena in rodents. Forty-six Lewis rats were randomized to three nerve allograft groups treated with 2 mg/(kg day) tacrolimus; 5 mg/(kg day) Cyclosporine A; or placebo injection. Nerves were subjected to histomorphometric and walking track analysis at serial time points. Tacrolimus increased fiber density, percent neural tissue, and nerve fiber count and accelerated functional recovery at 40 days, but these differences were undetectable by 70 days. Serial walking track analysis showed a similar pattern of recovery. A ‘blow-through’ effect is observed in rodents whereby an advancing nerve front overcomes an experimental defect given sufficient time, rendering experimental groups indistinguishable at late time points. Selection of validated time points and corroboration in higher animal models are essential prerequisites for the clinical application of basic research on nerve regeneration. PMID:18381659

  16. Optical fiber biocompatible sensors for monitoring selective treatment of tumors via thermal ablation

    NASA Astrophysics Data System (ADS)

    Tosi, Daniele; Poeggel, Sven; Dinesh, Duraibabu B.; Macchi, Edoardo G.; Gallati, Mario; Braschi, Giovanni; Leen, Gabriel; Lewis, Elfed

    2015-09-01

    Thermal ablation (TA) is an interventional procedure for the selective treatment of tumors that allows minimally invasive outpatient care. The lack of real-time control of TA is one of its main weaknesses. Miniature, biocompatible optical fiber sensors are applied to achieve dense, multi-parameter monitoring that can substantially improve the control of TA. Ex vivo measurements performed on porcine liver tissue are reported, reproducing radiofrequency ablation of hepatocellular carcinoma. Our measurement campaign has a two-fold focus: (1) dual pressure-temperature measurement with a single probe; and (2) distributed thermal measurement to estimate point-by-point cell mortality.

  17. Time as a dimension of the sample design in national-scale forest inventories

    Treesearch

    Francis Roesch; Paul Van Deusen

    2013-01-01

    Historically, the goal of forest inventories has been to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design was with selection probabilities based on land area observed at a discrete point in time. Time was not...

  18. A Uniform Energy Consumption Algorithm for Wireless Sensor and Actuator Networks Based on Dynamic Polling Point Selection

    PubMed Central

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2014-01-01

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation. PMID:24451455

  19. Volumetric Trends Associated with MR-guided Stereotactic Laser Amygdalohippocampectomy in Mesial Temporal Lobe Epilepsy

    PubMed Central

    Patel, Nitesh V; Sundararajan, Sri; Keller, Irwin; Danish, Shabbar

    2018-01-01

    Objective: Magnetic resonance (MR)-guided stereotactic laser amygdalohippocampectomy is a minimally invasive procedure for the treatment of refractory epilepsy in patients with mesial temporal sclerosis. Limited data exist on the post-ablation volumetric trends associated with the procedure. Methods: Ten patients with mesial temporal sclerosis underwent MR-guided stereotactic laser amygdalohippocampectomy. Three independent raters computed ablation volumes at the following time points: pre-ablation (PreA), immediate post-ablation (IPA), 24 hours post-ablation (24PA), first follow-up post-ablation (FPA), and greater than three months follow-up post-ablation (>3MPA), using OsiriX DICOM Viewer (Pixmeo, Bernex, Switzerland). Statistical trends in post-ablation volumes were determined for these time points. Results: MR-guided stereotactic laser amygdalohippocampectomy produces a rapid rise and distinct peak in post-ablation volume immediately following the procedure. IPA volumes are significantly higher than those at all other time points. Comparing individual time points within each rater's dataset (intra-rater), a significant difference was seen between the IPA time point and all others. There was no statistical difference among the 24PA, FPA, and >3MPA time points. A correlation analysis demonstrated the strongest correlations at the 24PA (r=0.97), FPA (r=0.95), and >3MPA (r=0.99) time points, with a weaker correlation at IPA (r=0.92). Conclusion: MR-guided stereotactic laser amygdalohippocampectomy produces a maximal increase in post-ablation volume immediately following the procedure, which decreases and stabilizes by 24 hours post-procedure and beyond three months of follow-up. Based on the correlation analysis, the lower inter-rater reliability at the IPA time point suggests it may be less accurate to assess volume at this time point. We recommend that post-ablation volume assessments be made at least 24 hours after selective laser ablation of the amygdalohippocampal complex (SLAH).

  20. Quantifying Selection with Pool-Seq Time Series Data.

    PubMed

    Taus, Thomas; Futschik, Andreas; Schlötterer, Christian

    2017-11-01

    Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations, several studies have proposed experimental parameters to optimize the identification of selection targets. No such recommendations are available for the underlying parameters, selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates of selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and the sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without a major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
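    As a hedged illustration of estimating selection from frequency time series (a deterministic haploid toy model, not the authors' method): under constant selection s, the logit of the allele frequency grows linearly with time, logit(p_t) = logit(p_0) + s·t, so s can be read off as the slope of a straight-line fit to logit-transformed frequencies.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def estimate_s(gens, freqs):
    """Slope of logit(p) vs. generation = selection coefficient s (haploid model)."""
    y = [logit(p) for p in freqs]
    n = len(gens)
    mt, my = sum(gens) / n, sum(y) / n
    return (sum((t - mt) * (yi - my) for t, yi in zip(gens, y))
            / sum((t - mt) ** 2 for t in gens))

# Deterministic trajectory with s = 0.05 starting from p0 = 0.1, sampled
# every 10 generations; the fit recovers s exactly (no drift, no noise).
s_true, l0 = 0.05, logit(0.1)
gens = list(range(0, 60, 10))
freqs = [1.0 / (1.0 + math.exp(-(l0 + s_true * t))) for t in gens]
s_hat = estimate_s(gens, freqs)
```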

  1. Ergonomic comparison of operating a built-in touch-pad pointing device and a trackball mouse on posture and muscle activity.

    PubMed

    Lee, Tzu-Hsien

    2005-12-01

    This study examined the effects of operating a built-in touch-pad pointing device and a trackball mouse on participants' completion times, hand positions during operation, postural angles, and muscle activities. 8 young men were asked to perform a cursor travel task on a notebook computer using both 60- and 80-cm high table conditions. Analysis showed that the trackball mouse significantly decreased completion times. Participants selected a hand position farther from the table edge and larger elbow angle for the trackball mouse than for the built-in touch-pad pointing device. Participants' neck, thoracic, and arm angles, or splenius capitis, trapezius, deltoid, and erector spinae muscle activities were not significantly affected by the devices, but table height significantly affected participants' completion times, hand positions, and postural angles.

  2. Dynamic nigrostriatal dopamine biases action selection

    PubMed Central

    Howard, Christopher D.; Li, Hao; Geddes, Claire E.; Jin, Xin

    2017-01-01

    Summary Dopamine is thought to play a critical role in reinforcement learning and goal-directed behavior, but its function in action selection remains largely unknown. Here, we demonstrate that nigrostriatal dopamine biases ongoing action selection. When mice were trained to dynamically switch the action selected at different time points, changes in firing rate of nigrostriatal dopamine neurons, as well as dopamine signaling in the dorsal striatum, were found to be associated with action selection. This dopamine profile is specific to behavioral choice, scalable with interval duration, and doesn’t reflect reward prediction error, timing, or value as single factors alone. Genetic deletion of NMDA receptors on dopamine or striatal neurons, or optogenetic manipulation of dopamine concentration, alters dopamine signaling and biases action selection. These results unveil a crucial role of nigrostriatal dopamine in integrating diverse information for regulating upcoming actions and have important implications for neurological disorders including Parkinson’s disease and substance dependence. PMID:28285820

  3. Dynamic Nigrostriatal Dopamine Biases Action Selection.

    PubMed

    Howard, Christopher D; Li, Hao; Geddes, Claire E; Jin, Xin

    2017-03-22

    Dopamine is thought to play a critical role in reinforcement learning and goal-directed behavior, but its function in action selection remains largely unknown. Here we demonstrate that nigrostriatal dopamine biases ongoing action selection. When mice were trained to dynamically switch the action selected at different time points, changes in firing rate of nigrostriatal dopamine neurons, as well as dopamine signaling in the dorsal striatum, were found to be associated with action selection. This dopamine profile is specific to behavioral choice, scalable with interval duration, and doesn't reflect reward prediction error, timing, or value as single factors alone. Genetic deletion of NMDA receptors on dopamine or striatal neurons or optogenetic manipulation of dopamine concentration alters dopamine signaling and biases action selection. These results unveil a crucial role of nigrostriatal dopamine in integrating diverse information for regulating upcoming actions, and they have important implications for neurological disorders, including Parkinson's disease and substance dependence. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A Personalized Predictive Framework for Multivariate Clinical Time Series via Adaptive Model Selection.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2017-11-01

    Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model just from the patient's own data. To address these problems we propose, develop, and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model from a pool of many possible models and consequently combines the advantages of population, patient-specific, and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach to supporting personalized time series prediction and that it outperforms predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
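    The model-switching idea can be sketched with assumed interfaces (this is not the authors' implementation): at each step, forecast with whichever model in the pool has the lowest mean squared error over a recent window of observations.

```python
from collections import deque

class Switcher:
    """Pool of forecasting models; the best recent performer makes each prediction."""

    def __init__(self, models, window=5):
        self.models = models                               # callables: history -> forecast
        self.errors = [deque(maxlen=window) for _ in models]

    def _score(self, i):
        d = self.errors[i]
        return sum(d) / len(d) if d else 0.0               # untried models get the benefit of the doubt

    def predict(self, history):
        best = min(range(len(self.models)), key=self._score)
        return best, self.models[best](history)

    def update(self, history, actual):
        # Score every model on the step just observed, so switching stays current.
        for i, m in enumerate(self.models):
            self.errors[i].append((m(history) - actual) ** 2)

# Toy demo: a "population" model (a constant) vs. a "patient-specific" model
# (last observed value) on a steadily rising series; the switcher learns to
# prefer the patient-specific model.
series = list(range(20))
models = [lambda h: 10.0, lambda h: float(h[-1])]
sw = Switcher(models, window=5)
for t in range(1, 11):
    sw.update(series[:t], series[t])
best, pred = sw.predict(series[:11])
```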

  5. Theme: Supervised Experience.

    ERIC Educational Resources Information Center

    Cox, David E.; And Others

    1991-01-01

    Includes "It's Time to Stop Quibbling over the Acronym" (Cox); "Information Rich--Experience Poor" (Elliot et al.); "Supervised Agricultural Experience Selection Process" (Yokum, Boggs); "Point System" (Fraze, Vaughn); "Urban Diversity Rural Style" (Morgan, Henry); "Nonoccupational Supervised Experience" (Croom); "Reflecting Industry" (Miller);…

  6. Laser altimetry simulator. Version 3.0: User's guide

    NASA Technical Reports Server (NTRS)

    Abshire, James B.; Mcgarry, Jan F.; Pacini, Linda K.; Blair, J. Bryan; Elman, Gregory C.

    1994-01-01

    A numerical simulator of a pulsed, direct detection laser altimeter has been developed to investigate the performance of space-based laser altimeters operating over surfaces with various height profiles. The simulator calculates the laser's optical intensity waveform as it propagates to and is reflected from the terrain surface and is collected by the receiver telescope. It also calculates the signal and noise waveforms output from the receiver's optical detector and waveform digitizer. Both avalanche photodiode and photomultiplier detectors may be selected. Parameters of the detected signal, including energy, the 50 percent rise-time point, the mean timing point, and the centroid, can be collected into histograms and statistics calculated after a number of laser firings. The laser altimeter can be selected to be fixed over the terrain at any altitude. Alternatively, it can move between laser shots to simulate the terrain profile measured with the laser altimeter.

  7. Expected Utility Distributions for Flexible, Contingent Execution

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Washington, Richard

    2000-01-01

    This paper presents a method for using expected utility distributions in the execution of flexible, contingent plans. A utility distribution maps the possible start times of an action to the expected utility of the plan suffix starting with that action. The contingent plan encodes a tree of possible courses of action and includes flexible temporal constraints and resource constraints. When execution reaches a branch point, the eligible option with the highest expected utility at that point in time is selected. The utility distributions make this selection sensitive to the runtime context, yet still efficient. Our approach uses predictions of action duration uncertainty as well as expectations of resource usage and availability to determine when an action can execute and with what probability. Execution windows and probabilities inevitably change as execution proceeds, but such changes do not invalidate the cached utility distributions; thus, dynamic updating of utility information is minimized.
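    A minimal sketch of branch selection with utility distributions (the option names and numbers are hypothetical): each option maps possible start times to the expected utility of its plan suffix, and at a branch point the eligible option with the highest utility at the current time is chosen.

```python
import bisect

def utility_at(dist, t):
    """Step-wise lookup: dist is a sorted list of (start_time, expected_utility)."""
    times = [pt for pt, _ in dist]
    i = bisect.bisect_right(times, t) - 1
    return dist[i][1] if i >= 0 else float("-inf")   # not yet eligible before first start

def select_option(options, now):
    """Pick the eligible option with the highest expected utility at time `now`."""
    return max(options, key=lambda name: utility_at(options[name], now))

# Hypothetical branch point: the two options' utilities decay differently
# as the start time slips later.
options = {
    "drive_to_rock": [(0, 8.0), (10, 6.0), (20, 1.0)],
    "take_panorama": [(0, 5.0), (10, 5.0), (20, 4.0)],
}
early_choice = select_option(options, 5)    # driving is best while its utility is high
late_choice = select_option(options, 25)    # the panorama degrades more gracefully
```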

  8. Comparison of the effects of firocoxib, carprofen and vedaprofen in a sodium urate crystal induced synovitis model of arthritis in dogs.

    PubMed

    Hazewinkel, Herman A W; van den Brom, Walter E; Theyse, Lars F H; Pollmeier, Matthias; Hanson, Peter D

    2008-02-01

    A randomized, placebo-controlled, four-period cross-over laboratory study involving eight dogs was conducted to confirm the effective analgesic dose of firocoxib, a selective COX-2 inhibitor, in a synovitis model of arthritis. Firocoxib was compared to vedaprofen and carprofen, and the effect, defined as a change in weight bearing measured via peak ground reaction, was evaluated at treatment dose levels. A lameness score on a five point scale was also assigned to the affected limb. Peak vertical ground reaction force was considered to be the most relevant measurement in this study. The firocoxib treatment group performed significantly better than placebo at the 3 h post-treatment time point and significantly better than placebo and carprofen at the 7 h post-treatment time point. Improvement in lameness score was also significantly better in the dogs treated with firocoxib than placebo and carprofen at both the 3 and 7 h post-treatment time points.

  9. Minimum average 7-day, 10-year flows in the Hudson River basin, New York, with release-flow data on Rondout and Ashokan reservoirs

    USGS Publications Warehouse

    Archer, Roger J.

    1978-01-01

    Minimum average 7-day, 10-year flows at 67 gaging stations and 173 partial-record stations in the Hudson River basin are given in tabular form. Variation of the 7-day, 10-year low flow from point to point in selected reaches, and the corresponding times of travel, are shown graphically for Wawayanda Creek, Wallkill River, Woodbury-Moodna Creek, and the Fishkill Creek basins. The 7-day, 10-year low flow for the Saw Kill basin, and estimates of the 7-day, 10-year low flow of the Roeliff Jansen Kill at Ancram and of Birch Creek at Pine Hill, are given. Summaries of discharge from Rondout and Ashokan Reservoirs, in Ulster County, are also included. Minimum average 7-day, 10-year flows for gaging stations with 10 years or more of record were determined by log-Pearson Type III computation; those for partial-record stations were developed by correlating discharge measurements made at the partial-record stations with discharge data from appropriate long-term gaging stations. The variation in low flows from point to point within the selected subbasins was estimated from available data and regional regression formulas. Time of travel at these flows in the four subbasins was estimated from available data and Boning's equations.
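    The first step of such a 7-day low-flow computation can be sketched as follows (illustrative; fitting the log-Pearson Type III distribution to the annual minima is omitted here): the annual minimum 7-day flow is the smallest 7-day moving average of daily discharge.

```python
def min_7day_mean(daily_flows):
    """Smallest 7-day moving average of daily discharge (one step toward 7Q10)."""
    if len(daily_flows) < 7:
        raise ValueError("need at least 7 daily values")
    window = sum(daily_flows[:7])      # running 7-day total
    best = window
    for i in range(7, len(daily_flows)):
        window += daily_flows[i] - daily_flows[i - 7]
        best = min(best, window)
    return best / 7.0

# A year with a week-long drought of 2 cfs embedded in 10 cfs baseflow.
flows = [10.0] * 150 + [2.0] * 7 + [10.0] * 150
low = min_7day_mean(flows)
```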

  10. Spitzer Imaging of Planck-Herschel Dusty Proto-Clusters at z=2-3

    NASA Astrophysics Data System (ADS)

    Cooray, Asantha; Ma, Jingzhe; Greenslade, Joshua; Kubo, Mariko; Nayyeri, Hooshang; Clements, David; Cheng, Tai-An

    2018-05-01

    We have recently introduced a new proto-cluster selection technique combining Herschel/SPIRE imaging data with the Planck/HFI all-sky survey point source catalog. These sources are identified as Planck point sources coinciding with clumps of over-dense Herschel sources whose far-IR colors are comparable to z=0 ULIRGs redshifted to z=2 to 3. The selection is sensitive to dusty starbursts and obscured QSOs, and we have recovered a couple of the known proto-clusters and close to 30 new proto-clusters. The candidate proto-clusters selected with this technique have far-IR flux densities several times higher than those that are optically selected, such as with LBG selection, implying that the member galaxies are in a special phase of heightened dusty starburst and dusty QSO activity. This far-IR luminous phase may be short but is likely a necessary piece for understanding the whole stellar mass assembly history of clusters. Moreover, our proto-clusters are missed in optical selections, suggesting that optically selected proto-clusters alone do not provide adequate statistics and that a comparison of far-IR and optically selected clusters may reveal the importance of dusty stellar mass assembly. Here, we propose IRAC observations of six of the highest-priority new proto-clusters, to establish the validity of the technique and to determine the total stellar mass through SED models. For a modest observing time, this science program will have a substantial impact on an upcoming topic in cosmology, with implications for observations with JWST and WFIRST to understand mass assembly in the universe.

  11. Fast, adaptive summation of point forces in the two-dimensional Poisson equation

    NASA Technical Reports Server (NTRS)

    Van Dommelen, Leon; Rundensteiner, Elke A.

    1989-01-01

    A comparatively simple procedure is presented for the direct summation of the velocity field induced by point vortices which significantly reduces the required number of operations by replacing selected partial sums with asymptotic series. Tables are presented which demonstrate the speed of this algorithm: doubling the number of vortices merely doubles the computational time, whereas current methods increase it by a factor of 4. The procedure is not restricted to the solution of the Poisson equation; it may be applied to other problems involving groups of points in which the interaction between elements of different groups can be simplified when the distance between the groups is sufficiently great.
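
The grouping idea can be illustrated with a minimal far-field expansion for 2-D point vortices in complex notation. This is a generic multipole sketch under our own truncation choices, not the paper's algorithm (`numpy` assumed):

```python
import numpy as np

def direct_sum(z_targets, z_vortices, gamma):
    # O(N*M) direct evaluation of sum_k gamma_k / (z - z_k)
    # (the 1/(2*pi*i) prefactor of the conjugate velocity is omitted).
    return np.array([np.sum(gamma / (z - z_vortices)) for z in z_targets])

def farfield_sum(z_targets, z_vortices, gamma, order=8):
    # Replace the partial sum over a distant cluster by a truncated
    # asymptotic (multipole) series about the cluster centroid c:
    #   sum_k gamma_k/(z - z_k) = sum_p M_p / (z - c)^(p+1),
    #   M_p = sum_k gamma_k (z_k - c)^p,  valid for |z_k - c| < |z - c|.
    c = z_vortices.mean()
    p = np.arange(order)
    moments = np.array([np.sum(gamma * (z_vortices - c) ** k) for k in p])
    return np.array([np.sum(moments / (z - c) ** (p + 1)) for z in z_targets])
```

The saving comes from computing the `order` moments once per distant group instead of summing over every vortex in it for every target.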

  12. Automatic extraction of the mid-sagittal plane using an ICP variant

    NASA Astrophysics Data System (ADS)

    Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus

    2008-03-01

    Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used to define standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach to mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically about it. However, manual selection of landmarks is a tedious, time-consuming and error-prone task which requires great care. To overcome this drawback, it was previously suggested to use the iterative closest point (ICP) algorithm: after an initial mirroring of the data points on a default mirror plane, the mirrored data points are registered iteratively to the model points using rigid transforms, and finally a reflection transform approximating the cumulative transform is extracted. In this work, we present an ICP variant that iteratively optimizes the reflection parameters directly. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets, our method showed a better ability to match homologous areas.
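
The closed-form building block can be sketched as an orthogonal Procrustes step with the determinant constrained to −1 (our own formulation of the kind of least-squares reflection fit the authors describe; `numpy` assumed):

```python
import numpy as np

def fit_reflection(P, Q):
    """Least-squares reflection (orthogonal matrix with det = -1) plus
    translation mapping data points P (N x 3) onto model points Q (N x 3),
    via Kabsch/Procrustes with the determinant constrained to -1."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    M = (Q - cq).T @ (P - cp)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(M)
    # Flip the smallest singular direction so that det(R) = -1 (a reflection).
    D = np.diag([1.0, 1.0, -np.linalg.det(U @ Vt)])
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t
```

With noiseless, non-degenerate data generated by a true reflection, the transform is recovered exactly; inside an ICP loop the same step replaces the usual rigid-transform update.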

  13. Exploring Selective Exposure and Confirmation Bias as Processes Underlying Employee Work Happiness: An Intervention Study.

    PubMed

    Williams, Paige; Kern, Margaret L; Waters, Lea

    2016-01-01

    Employee psychological capital (PsyCap), perceptions of organizational virtue (OV), and work happiness have been shown to be associated within and over time. This study examines selective exposure and confirmation bias as potential processes underlying PsyCap, OV, and work happiness associations. As part of a quasi-experimental study design, school staff (N = 69) completed surveys at three time points. After the first assessment, some staff (n = 51) completed a positive psychology training intervention. Results of descriptive statistics, correlation, and regression analyses on the intervention group provide some support for selective exposure and confirmation bias as explanatory mechanisms. In focusing on the processes through which employee attitudes may influence work happiness this study advances theoretical understanding, specifically of selective exposure and confirmation bias in a field study context.

  14. TaqMan based real time PCR assay targeting EML4-ALK fusion transcripts in NSCLC.

    PubMed

    Robesova, Blanka; Bajerova, Monika; Liskova, Kvetoslava; Skrickova, Jana; Tomiskova, Marcela; Pospisilova, Sarka; Mayer, Jiri; Dvorakova, Dana

    2014-07-01

    Lung cancer with the ALK rearrangement constitutes only a small fraction of patients with non-small cell lung cancer (NSCLC). However, in the era of molecular-targeted therapy, efficient patient selection is crucial for successful treatment. In this context, an effective method for EML4-ALK detection is necessary. We developed a new highly sensitive variant specific TaqMan based real time PCR assay applicable to RNA from formalin-fixed paraffin-embedded tissue (FFPE). This assay was used to analyze the EML4-ALK gene in 96 non-selected NSCLC specimens and compared with two other methods (end-point PCR and break-apart FISH). EML4-ALK was detected in 33/96 (34%) specimens using variant specific real time PCR, but in only 23/96 (24%) using end-point PCR. All real time PCR positive samples were confirmed with direct sequencing. A total of 46 specimens were subsequently analyzed by all three detection methods. Using variant specific real time PCR we identified the EML4-ALK transcript in 17/46 (37%) specimens, using end-point PCR in 13/46 (28%) specimens, and positive ALK rearrangement by FISH was detected in 8/46 (17.4%) specimens. Moreover, using variant specific real time PCR, 5 specimens showed more than one EML4-ALK variant simultaneously (in 2 cases the variants 1+3a+3b, in 2 specimens the variants 1+3a and in 1 specimen the variants 1+3b). In 1 of the 96 cases, the EML4-ALK fusion gene and an EGFR mutation were detected simultaneously. All simultaneous genetic variants were confirmed using end-point PCR and direct sequencing. Our variant specific real time PCR assay is highly sensitive, fast, affordable, applicable to FFPE, and appears to be a valuable tool for the rapid prescreening of NSCLC patients in clinical practice, so that most patients who could benefit from targeted therapy can be identified. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Atrial septal pacing in small dogs: a pilot study.

    PubMed

    Jones, Ashley E; Estrada, Amara H; Pariaut, Romain; Sosa-Samper, Ivan; Shih, Andre C; Mincey, Brandy D; Moïse, N Sydney

    2014-09-01

    To determine the feasibility of atrial septal pacing via a delivery catheter-guided small non-retracting helix pacing lead. Six healthy beagles (8.3-12.9 kg). Using single-plane fluoroscopic guidance, Medtronic® 3830 SelectSecure leads were affixed to the atrial septum via a Medtronic® Attain Select® II standard 90 Left Heart delivery catheter. Pacing threshold and lead impedance were measured at implantation. The Wenckebach point was tested via atrial pacing up to 220 paced pulses per minute (ppm). Thoracic radiographs were performed following implantation to identify the lead position, and repeated at 24 h, 1 month, and 3 months post-operatively. Macro-lead dislodgement occurred in two dogs at 24 h and in three dogs at one month post-implantation. Lead impedance, measured at the time of implantation, ranged from 583 to 1421 Ω. The Wenckebach point was >220 ppm in four of the six dogs; the remaining two dogs had Wenckebach points of 120 and 190 ppm. This pilot study suggests the selected implantation technique and lead system were inadequate for secure placement in the atrial septum of these dogs. Possible reasons for the inadequate stability include unsuitable lead design for this location, inadequate lead slack at the time of implantation, and inadequate seating of the lead, as evidenced by low impedance at the time of implantation. Other implantation techniques and/or pacing leads should be investigated to determine the optimal way of pacing the atria in small-breed dogs that are prone to sinus node dysfunction. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Real-time measurement of quality during the compaction of subgrade soils.

    DOT National Transportation Integrated Search

    2012-12-01

    Conventional quality control of subgrade soils during their compaction is usually performed by monitoring moisture content and dry density at a few discrete locations. However, randomly selected points do not adequately represent the entire compacted...

  17. The fourth dimension in FIA

    Treesearch

    Francis A. Roesch

    2012-01-01

    In the past, the goal of forest inventory was to determine the extent of the timber resource. Predictions of how the resource was changing were made by comparing differences between successive inventories. The general view of the associated sample design included selection probabilities based on land area observed at a discrete point in time. That is, time was not...

  18. Acceptability of Adaptations for Struggling Writers: A National Survey with Primary-Grade Teachers

    ERIC Educational Resources Information Center

    Graham, Steve; Harris, Karen R.; Bartlett, Brendan J.; Popadopoulou, Eleni; Santoro, Julia

    2016-01-01

    One hundred twenty-five primary-grade teachers randomly selected from across the United States indicated how frequently they made 20 instructional adaptations for the struggling writers in their classroom. The measure of frequency ranged from never, several times a year, monthly, weekly, several times a week, and daily. Using a 6-point Likert-type…

  19. The TJO-OAdM Robotic Observatory: the scheduler

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Casteels, Kevin; Ribas, Ignasi; Francisco, Xavier

    2010-07-01

    The Joan Oró Telescope at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory operating under completely unattended control, owing to the isolation of the site. Robotic operation is mandatory for its routine use. The level of robotization of an observatory is given by its reliability in responding to environment changes and by the human interaction required to handle possible alarms. These two points establish the level of human attendance needed to ensure low risk at any time. There is another key point in deciding how the system performs as a robot: the capability to adapt the scheduled observations to actual conditions. The scheduler is a fundamental element in achieving a fully intelligent response at any time. Its main task is mid- and short-term time optimization, and it has a direct effect on the scientific return achieved by the observatory. We present a description of the scheduler developed for the TJO - OAdM, which is separated into two parts. First, a pre-scheduler makes a preliminary selection of objects from the available projects according to their observability; this process is carried out before the beginning of the night following different selection criteria. Second, a dynamic scheduler is executed each time a target observation is complete and a new one must be scheduled. The latter enables selection of the best target in real time according to actual environment conditions and the set of priorities.
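
The dynamic-scheduling step can be sketched as a simple filter-then-rank selection. The target fields and condition names below are illustrative, not the TJO - OAdM schema:

```python
def pick_next_target(targets, conditions):
    """Dynamic-scheduler step: from the pre-selected target list, return the
    highest-priority target observable under the *current* conditions.
    Fields (elevation, max_seeing, priority) are hypothetical examples of
    the constraints such a scheduler would check; returns None if nothing
    is observable."""
    observable = [
        t for t in targets
        if t["elevation"] >= conditions["min_elevation"]   # above horizon limit
        and conditions["seeing"] <= t["max_seeing"]        # seeing good enough
    ]
    return max(observable, key=lambda t: t["priority"], default=None)
```

A real implementation would re-evaluate such constraints every time an observation completes, which is exactly what lets the schedule adapt to changing conditions.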

  20. The waiting time problem in a model hominin population.

    PubMed

    Sanford, John; Brewer, Wesley; Smith, Franzine; Baumgardner, John

    2015-09-17

    Functional information is normally communicated using specific, context-dependent strings of symbolic characters. This is true within the human realm (texts and computer programs), and also within the biological realm (nucleic acids and proteins). In biology, strings of nucleotides encode much of the information within living cells. How do such information-bearing nucleotide strings arise and become established? This paper uses comprehensive numerical simulation to understand what types of nucleotide strings can realistically be established via the mutation/selection process, given a reasonable timeframe. The program Mendel's Accountant realistically simulates the mutation/selection process, and was modified so that a starting string of nucleotides could be specified, and a corresponding target string of nucleotides could be specified. We simulated a classic pre-human hominin population of at least 10,000 individuals, with a generation time of 20 years, and with very strong selection (50% selective elimination). Random point mutations were generated within the starting string. Whenever an instance of the target string arose, all individuals carrying the target string were assigned a specified reproductive advantage. When natural selection had successfully amplified an instance of the target string to the point of fixation, the experiment was halted, and the waiting time statistics were tabulated. Using this methodology we tested the effect of mutation rate, string length, fitness benefit, and population size on waiting time to fixation. Biologically realistic numerical simulations revealed that a population of this type required inordinately long waiting times to establish even the shortest nucleotide strings. To establish a string of two nucleotides required on average 84 million years. To establish a string of five nucleotides required on average 2 billion years. 
We found that waiting times were reduced by higher mutation rates, stronger fitness benefits, and larger population sizes. However, even using the most generous feasible parameter settings, the waiting time required to establish any specific nucleotide string within this type of population was consistently prohibitive. We show that the waiting time problem is a significant constraint on the macroevolution of the classic hominin population: routine establishment of specific beneficial strings of two or more nucleotides becomes very problematic.
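
The population-genetic core of such an experiment can be sketched with a toy Wright-Fisher simulation (a generic textbook model, not Mendel's Accountant). It estimates the fixation probability of a newly arisen beneficial variant, which, together with the rate at which the target string arises, determines the expected waiting time:

```python
import numpy as np

def fixation_probability(N, s, replicates, seed=0):
    """Toy Wright-Fisher simulation: starting from a single copy of a
    beneficial variant with selection coefficient s in a population of N,
    estimate the probability it fixes rather than being lost by drift.
    Diffusion theory predicts roughly 2s for small s and large N."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(replicates):
        k = 1
        while 0 < k < N:
            p = k * (1.0 + s) / (N + k * s)  # selection-weighted frequency
            k = rng.binomial(N, p)           # next generation's copy count
        fixed += (k == N)
    return fixed / replicates
```

The expected waiting time is then roughly 1 / (rate at which the target string arises per generation × fixation probability), which is why most newly arisen strings, being lost by drift, inflate the waiting time so strongly.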

  1. How should Fitts' Law be applied to human-computer interaction?

    NASA Technical Reports Server (NTRS)

    Gillan, D. J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L.

    1992-01-01

    The paper challenges the notion that any Fitts' Law model can be applied generally to human-computer interaction, and proposes instead that applying Fitts' Law requires knowledge of the users' sequence of movements, direction of movement, and typical movement amplitudes as well as target sizes. Two experiments examined a text selection task with sequences of controlled movements (point-click and point-drag). For the point-click sequence, a Fitts' Law model that used the diagonal across the text object in the direction of pointing (rather than the horizontal extent of the text object) as the target size provided the best fit for the pointing time data, whereas for the point-drag sequence, a Fitts' Law model that used the vertical size of the text object as the target size gave the best fit. Dragging times were fitted well by Fitts' Law models that used either the vertical or horizontal size of the terminal character in the text object. Additional results of note were that pointing in the point-click sequence was consistently faster than in the point-drag sequence, and that pointing in either sequence was consistently faster than dragging. The discussion centres around the need to define task characteristics before applying Fitts' Law to an interface design or analysis, analyses of pointing and of dragging, and implications for interface design.
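
The models compared in the study are variants of one formula differing only in which extent of the target counts as the width W. A minimal sketch using the Shannon formulation common in HCI (coefficients a and b are illustrative):

```python
import math

def fitts_mt(a, b, amplitude, width):
    """Predicted movement time under the Shannon formulation of Fitts' law:
    MT = a + b * log2(A / W + 1).  Which extent of the target to use as W
    (horizontal, vertical, or the diagonal in the direction of pointing)
    is exactly the modeling choice the study examines."""
    return a + b * math.log2(amplitude / width + 1.0)
```

For a fixed amplitude, a larger effective width in the direction of motion lowers the index of difficulty and hence the predicted time, which is why the best-fitting choice of W differed between the point-click and point-drag sequences.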

  2. Detecting P and S-wave of Mt. Rinjani seismic based on a locally stationary autoregressive (LSAR) model

    NASA Astrophysics Data System (ADS)

    Nurhaida, Subanar, Abdurakhman, Abadi, Agus Maman

    2017-08-01

    Seismic data are usually modelled using autoregressive processes. The aim of this paper is to find the arrival times of the seismic waves of Mt. Rinjani in Indonesia. Kitagawa's algorithm is used to detect the seismic P and S-waves. The Householder transformation used in the algorithm makes it effective at finding the number of change points and the parameters of the autoregressive models. The results show that applying the Box-Cox transformation at the variable selection stage makes the algorithm work well in detecting the change points. Furthermore, when the basic span of the subinterval is set to 200 seconds and the maximum AR order is 20, there are 8 change points, occurring at 1601, 2001, 7401, 7601, 7801, 8001, 8201 and 9601. Finally, the P and S-wave arrival times are detected at times 1671 and 2045, respectively, using a precise detection algorithm.
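
The change-point idea can be illustrated with a much-simplified two-segment scan — least-squares AR(1) fits and a log-variance cost, not Kitagawa's Householder-based algorithm, and the function names are our own:

```python
import numpy as np

def ar1_noise_var(x):
    """Residual (innovation) variance of a least-squares AR(1) fit to x."""
    X, y = x[:-1], x[1:]
    phi = (X @ y) / (X @ X)          # least-squares AR(1) coefficient
    r = y - phi * X
    return float(r @ r) / len(r)

def best_split(x, margin=30):
    """Scan candidate change points and pick the split that minimizes the
    summed Gaussian log-variance cost of independent AR(1) fits on each
    side -- a crude stand-in for selecting between locally stationary AR
    models on subintervals."""
    n = len(x)
    costs = {
        c: c * np.log(ar1_noise_var(x[:c])) + (n - c) * np.log(ar1_noise_var(x[c:]))
        for c in range(margin, n - margin)
    }
    return min(costs, key=costs.get)
```

A wave arrival changes both the AR dynamics and the innovation variance of the trace, which is what makes such a cost drop sharply at the true change point.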

  3. Differences in finger localisation performance of patients with finger agnosia.

    PubMed

    Anema, Helen A; Kessels, Roy P C; de Haan, Edward H F; Kappelle, L Jaap; Leijten, Frans S; van Zandvoort, Martine J E; Dijkerman, H Chris

    2008-09-17

    Several neuropsychological studies have suggested parallel processing of somatosensory input when localising a tactile stimulus on one's own body by pointing towards it (body schema) and when localising the touched location by pointing to it on a map of a hand (body image). Usually these reports describe patients with impaired detection but intact sensorimotor localisation. This study examined three patients with a lesion of the angular gyrus who had intact somatosensory processing but selectively disturbed finger identification (finger agnosia). These patients performed normally when pointing towards the touched finger on their own hand, but failed to indicate this finger on a drawing of a hand or to name it. Similar defects in the perception of other body parts were not observed. The findings provide converging evidence for the dissociation between body image and body schema and, more importantly, reveal for the first time that this distinction is also present in higher-order cognitive processes selectively for the fingers.

  4. Temporally selective attention modulates early perceptual processing: event-related potential evidence.

    PubMed

    Sanders, Lisa D; Astheimer, Lori B

    2008-05-01

    Some of the most important information we encounter changes so rapidly that our perceptual systems cannot process all of it in detail. Spatially selective attention is critical for perception when more information than can be processed in detail is presented simultaneously at distinct locations. When presented with complex, rapidly changing information, listeners may need to selectively attend to specific times rather than to locations. We present evidence that listeners can direct selective attention to time points that differ by as little as 500 msec, and that doing so improves target detection, affects baseline neural activity preceding stimulus presentation, and modulates auditory evoked potentials at a perceptually early stage. These data demonstrate that attentional modulation of early perceptual processing is temporally precise and that listeners can flexibly allocate temporally selective attention over short intervals, making it a viable mechanism for preferentially processing the most relevant segments in rapidly changing streams.

  5. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgmans, Mark Christiaan, E-mail: m.c.burgmans@lumc.nl; Harder, J. Michiel den, E-mail: chiel.den.harder@gmail.com; Meershoek, Philippa, E-mail: P.Meershoek@lumc.nl

    Purpose: To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods: CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results: Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion: The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.

  6. Study of Huizhou architecture component point cloud in surface reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wang, Guangyin; Ma, Jixiang; Wu, Yulu; Zhang, Guangbin

    2017-06-01

    Surface reconstruction software suffers from complicated operation on point cloud data, too many interaction definitions, and overly stringent requirements on input data; it has therefore not been widely adopted so far. This paper selects the distinctive chuandou wooden beam framework of Huizhou architecture as the research object and presents a complete implementation pipeline from point cloud data acquisition, through point cloud preprocessing, to surface reconstruction. First, the acquired point cloud data are preprocessed, including segmentation and filtering. Second, the surface normals are deduced directly from the point cloud dataset. Finally, surface reconstruction is studied using the Greedy Projection Triangulation algorithm. Compared with models produced by general three-dimensional surface reconstruction software, the results show that the proposed scheme is smoother, more time-efficient and more portable.

  7. Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2018-05-01

    To present and evaluate a straightforward implementation of a marker-less respiratory motion-tracking process utilizing the Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author-written C# code, the respiratory motion of a subject was tracked by recording depth values obtained at user-selected points on the subject, with each point representing one pixel on the depth image. As a patient breathes, specific anatomical points on the chest/abdomen will move slightly within the depth image across pixels. By tracking how depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained based on the changing depth values of the selected pixel, in a fully marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare the respiratory traces obtained by each, using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile range (IQR) values were obtained comparing times correlated with specific amplitude and phase percentages against each product. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correspond to less than 1 mm of couch movement and would create an offset of half an acquired slice. By tracking the depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect, with results comparable to those of the Varian RPM and Anzai belt. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
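
The pixel-depth idea reduces to two small steps — read one pixel's depth across frames, then bin by amplitude. A minimal sketch with synthetic frames (`numpy` assumed; the frame layout and function names are illustrative, not the authors' C# implementation):

```python
import numpy as np

def pixel_trace(depth_frames, row, col):
    """Respiratory trace: the depth value of one user-selected pixel across
    a stack of depth frames (array of shape [n_frames, height, width])."""
    return depth_frames[:, row, col].astype(float)

def amplitude_bins(trace, n_bins=10):
    """Assign each frame to an amplitude bin between the trace minimum
    (end-exhale) and maximum (end-inhale), as in amplitude-based gating."""
    lo, hi = trace.min(), trace.max()
    frac = (trace - lo) / (hi - lo)
    return np.minimum((frac * n_bins).astype(int), n_bins - 1)
```

Phase-based binning would instead divide each breathing cycle (peak to peak) into equal time fractions; both reduce to simple operations on the one-pixel trace.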

  8. Selection bias between 2 Medicare capitated benefit programs.

    PubMed

    Leutz, Walter; Brody, Kathleen K; Nonnenkamp, Lucy L; Perrin, Nancy A

    2007-04-01

    To assess enrollment selection bias between a standard Medicare health maintenance organization (HMO) and a higher-priced social health maintenance organization (SHMO) offering full prescription drug and unique home-based and community-based benefits and to assess how adverse selection was handled through SHMO finances. Kaiser Permanente Northwest offered the dual-choice option in the greater Portland region from 1985 to 2002. Analysis focused on 3 "choice points" when options were clear and highlighted for beneficiaries. Data collected included age and sex, utilization 1 year before and after the choice points, health status data at enrollment (1999-2002 only), mortality, and cost and revenues. Data were extracted from health plan databases. Hospital, pharmacy, and nursing facility utilization for 1 year before and after the choice points are compared for HMO and SHMO choosers. Health and functional status data are compared from 1999 to 2002. Utilization and mortality data are controlled by age and sex. SHMO joiners evidenced adverse selection, while healthier members tended to stay in the HMO, with leaner benefits. Despite adverse selection, the health plan maintained margins in the SHMO, assisted by frailty-adjusted Medicare payments and member premiums. This high-low option strategy sought to offer the "right care at the right time" and may be a model for managed care organizations to serve aging and disabled beneficiaries under Medicare's new special needs plan option.

  9. Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty.

    PubMed

    Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang

    2015-01-01

    Natural disasters have occurred frequently in recent years, causing huge casualties and property losses, and emergency logistics problems are accordingly receiving more and more attention. This paper studies the emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective function is the maximization of the time-satisfaction degree. To overcome these drawbacks of incomplete information and uncertain time, this paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable and optimal path. The original model is then simplified under the scenario that the vehicle only follows the optimal path from the emergency logistics center to the affected point, and solved using Lingo software. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method.
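
The grey-theory road-evaluation step can be illustrated with a standard grey relational grade computation (a textbook formulation; the criteria values are illustrative, not the paper's exact model):

```python
import numpy as np

def grey_relational_grades(matrix, rho=0.5):
    """Rank alternatives (rows: candidate paths; columns: benefit-type
    criteria, larger = better) by their grey relational grade against the
    ideal reference (best value per criterion) -- the standard grey
    relational analysis scheme with distinguishing coefficient rho."""
    m = np.asarray(matrix, dtype=float)
    # normalize each criterion to [0, 1] (assumes no constant column)
    norm = (m - m.min(axis=0)) / (m.max(axis=0) - m.min(axis=0))
    ref = norm.max(axis=0)               # ideal reference sequence
    delta = np.abs(norm - ref)           # deviation from the ideal
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coef.mean(axis=1)             # grade = mean coefficient per path
```

The path with the highest grade is the one the model would treat as the reliable, optimal route to the affected point.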

  10. Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty

    PubMed Central

    Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang

    2015-01-01

    Natural disasters have occurred frequently in recent years, causing huge casualties and property losses, and emergency logistics problems are accordingly receiving more and more attention. This paper studies the emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective function is the maximization of the time-satisfaction degree. To overcome these drawbacks of incomplete information and uncertain time, this paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable and optimal path. The original model is then simplified under the scenario that the vehicle only follows the optimal path from the emergency logistics center to the affected point, and solved using Lingo software. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method. PMID:26417946

  11. Real Time Correction of Aircraft Flight Configuration

    NASA Technical Reports Server (NTRS)

    Schipper, John F. (Inventor)

    2009-01-01

    Method and system for monitoring and analyzing, in real time, the variation with time of an aircraft flight parameter. A time-dependent recovery band, defined by first and second recovery band boundaries that are spaced apart at at least one time point, is constructed for a selected flight parameter and for a selected recovery time interval length Δt(FP;rec). A flight parameter having a value FP(t = t_p) at a time t = t_p is likely to be able to recover to a reference flight parameter value FP(t′;ref), lying in a band of reference flight parameter values FP(t′;ref;CB), within a time interval given by t_p ≤ t′ ≤ t_p + Δt(FP;rec), if (or only if) the flight parameter value lies between the first and second recovery band boundary traces.
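
Stripped of patent notation, the recoverability test reduces to a band-membership check. A minimal sketch with the boundary traces as plain functions of time (names hypothetical):

```python
def is_recoverable(fp_value, t_p, lower_band, upper_band):
    """Recovery-band test sketched from the description above: the flight
    parameter value FP(t_p) is deemed likely to recover within the chosen
    interval if it lies between the first (lower_band) and second
    (upper_band) recovery-band boundary traces at time t_p."""
    return lower_band(t_p) <= fp_value <= upper_band(t_p)
```

In a monitoring loop, the two boundary traces would be rebuilt for each selected flight parameter and recovery interval length, then sampled at the current time.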

  12. An efficient method for the prediction of deleterious multiple-point mutations in the secondary structure of RNAs using suboptimal folding solutions

    PubMed Central

    Churkin, Alexander; Barash, Danny

    2008-01-01

    Background: RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive, since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects, based on stability considerations, only those mutations that are likely to be conformationally rearranging. The approach is best examined using the dot plot representation of RNA secondary structure. Results: Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion: A highly efficient addition to RNAmute that is as user friendly as the original application but facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure. A complete explanation of the application, called MultiRNAmute, is available at [1]. PMID:18445289
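
The combinatorial blow-up motivating the method is easy to make concrete (standard counting, not code from RNAmute):

```python
import math

def n_point_mutants(n, m, alphabet=4):
    """Number of distinct m-point mutants of a length-n nucleotide sequence:
    choose m positions, then one of (alphabet - 1) substitutions at each.
    Grows like O(n^m), which is why exhaustive enumeration is infeasible."""
    return math.comb(n, m) * (alphabet - 1) ** m
```

For n = 100 and m = 3 this is C(100,3) · 3³ = 4,365,900 candidate sequences to fold, which is why filtering candidates by the stability information in the precomputed suboptimal solutions pays off.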

  13. hp-Adaptive time integration based on the BDF for viscous flows

    NASA Astrophysics Data System (ADS)

    Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.

    2015-06-01

    This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selection to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the selected integration method operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
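
The h-adaptivity loop (local error estimate plus error controller) can be sketched for the lowest-order member, BDF1 (backward Euler), on the linear test equation y′ = −λy, where the implicit solve has a closed form. Step doubling supplies the error estimate; this shows only the stepsize control, not the paper's order selection:

```python
import math

def bdf1_adaptive(lam, t_end, y0=1.0, h0=0.1, tol=1e-4):
    """h-adaptive BDF1 (backward Euler) for y' = -lam * y.  The local error
    is estimated by comparing one step of size h with two steps of size h/2
    (step doubling); h is rescaled from the first-order error model
    err ~ C * h**2, with a 0.9 safety factor and growth/shrink limits."""
    t, y, h = 0.0, y0, h0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        full = y / (1.0 + h * lam)                # one implicit step (closed form)
        half = y / (1.0 + 0.5 * h * lam) ** 2     # two half steps
        err = abs(full - half)                    # local error estimate
        if err <= tol:                            # accept (keep the better value)
            t, y = t + h, half
        # controller: err ~ C h^2  =>  h_new = h * sqrt(tol / err), clipped
        h *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / max(err, 1e-16))))
    return y
```

Rejected steps (err > tol) do not advance t; they only shrink h, mirroring the rejection mechanism described in the paper.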

  14. Instance-Based Learning: Integrating Sampling and Repeated Decisions from Experience

    ERIC Educational Resources Information Center

    Gonzalez, Cleotilde; Dutt, Varun

    2011-01-01

    In decisions from experience, there are 2 experimental paradigms: sampling and repeated-choice. In the sampling paradigm, participants sample between 2 options as many times as they want (i.e., the stopping point is variable), observe the outcome with no real consequences each time, and finally select 1 of the 2 options that cause them to earn or…

  15. Exploring Selective Exposure and Confirmation Bias as Processes Underlying Employee Work Happiness: An Intervention Study

    PubMed Central

    Williams, Paige; Kern, Margaret L.; Waters, Lea

    2016-01-01

    Employee psychological capital (PsyCap), perceptions of organizational virtue (OV), and work happiness have been shown to be associated within and over time. This study examines selective exposure and confirmation bias as potential processes underlying PsyCap, OV, and work happiness associations. As part of a quasi-experimental study design, school staff (N = 69) completed surveys at three time points. After the first assessment, some staff (n = 51) completed a positive psychology training intervention. Results of descriptive statistics, correlation, and regression analyses on the intervention group provide some support for selective exposure and confirmation bias as explanatory mechanisms. In focusing on the processes through which employee attitudes may influence work happiness, this study advances theoretical understanding, specifically of selective exposure and confirmation bias in a field study context. PMID:27378978

  16. [Proposal of a costing method for the provision of sterilization in a public hospital].

    PubMed

    Bauler, S; Combe, C; Piallat, M; Laurencin, C; Hida, H

    2011-07-01

    To refine the billing to institutions whose sterilization operations are outsourced, a sterilization cost approach was developed. The aim of the study is to determine the value of a sterilization unit (one point "S") that evolves according to investments, quantities processed, and types of instrumentation or packaging. The preparation step was selected from all sub-processes of sterilization to determine the value of one point S. The preparation times of large and small sterilized containers and of pouches were recorded. The reference time corresponds to one pouch (equal to one point S). Simultaneously, the annual operating cost of sterilization was defined and divided into several areas of expenditure: employees, equipment and building depreciation, supplies, and maintenance. A total of 136 crossing times of containers were measured. The time to prepare a pouch was estimated at one minute (one S). A small container represents four S and a large container represents 10 S. By dividing the operating cost of sterilization by the total number of sterilization points over a given period, the cost of one S can be determined. This method differs from the traditional costing method in sterilization services by considering each item of expenditure. This point S will be the basis for billing subcontracts to other institutions. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
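    The billing arithmetic described (pouch = 1 S, small container = 4 S, large container = 10 S; cost of one S = operating cost divided by total S points produced) can be sketched as follows. The annual cost and production quantities below are hypothetical, chosen only to illustrate the calculation:

```python
# Point weights from the abstract: pouch = 1 S, small container = 4 S,
# large container = 10 S.
S_WEIGHTS = {"pouch": 1, "small_container": 4, "large_container": 10}

def cost_per_point(annual_operating_cost: float, production: dict) -> float:
    """Cost of one sterilization point S: operating cost divided by the
    total number of S points produced over the period."""
    total_points = sum(S_WEIGHTS[item] * qty for item, qty in production.items())
    return annual_operating_cost / total_points

# Hypothetical year: 50,000 pouches, 8,000 small and 3,000 large containers,
# against a 300,000-euro operating cost -> 112,000 S points in total.
unit_cost = cost_per_point(300_000.0, {"pouch": 50_000,
                                       "small_container": 8_000,
                                       "large_container": 3_000})
```

    A subcontract invoice is then simply the number of S points processed for that institution multiplied by `unit_cost`.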

  17. Molecular clock of HIV-1 envelope genes under early immune selection

    DOE PAGES

    Park, Sung Yong; Love, Tanzy M. T.; Perelson, Alan S.; ...

    2016-06-01

    Here, the molecular clock hypothesis that genes or proteins evolve at a constant rate is a key tool to reveal phylogenetic relationships among species. Using the molecular clock, we can trace an infection back to transmission using HIV-1 sequences from a single time point. Whether or not a strict molecular clock applies to HIV-1’s early evolution in the presence of immune selection has not yet been fully examined.

  18. Effects of Selective Logging on Birds in the Sierra de Coalcoman, Sierra Madre del Sur, Michoacan, Western Mexico

    Treesearch

    Jose Fernando Villaseñor; Neyra Sosa; Laura Villaseñor

    2005-01-01

    In order to determine the effects of selective logging on pine-oak forest bird communities in central-western Mexico, we gathered information through 10-min point counts in plots without wood extraction and in sites logged at different times in the past (1, 4, and 8 years). We did not find evidence to argue for effects of logging on bird communities; the study plots...

  19. Molecular clock of HIV-1 envelope genes under early immune selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Sung Yong; Love, Tanzy M. T.; Perelson, Alan S.

    Here, the molecular clock hypothesis that genes or proteins evolve at a constant rate is a key tool to reveal phylogenetic relationships among species. Using the molecular clock, we can trace an infection back to transmission using HIV-1 sequences from a single time point. Whether or not a strict molecular clock applies to HIV-1’s early evolution in the presence of immune selection has not yet been fully examined.

  20. Minimizing Statistical Bias with Queries.

    DTIC Science & Technology

    1995-09-14

    method for optimally selecting these points would offer enormous savings in time and money. An active learning system will typically attempt to select data...research in active learning assumes that the second term of Equation 2 is approximately zero, that is, that the learner is unbiased. If this is the case...outperforms the variance-minimizing algorithm and random exploration. and effective strategy for active learning. I have given empirical evidence that, with

  1. Quasi-simultaneous Measurements of Ionic Currents by Vibrating Probe and pH Distribution by Ion-selective Microelectrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaacs, H.S.; Lamaka, S.V.; Taryba, M.

    2011-01-01

    This work reports a new methodology to measure quasi-simultaneously the local electric fields and the distribution of specific ions in a solution via selective microelectrodes. The field produced by the net electric current was detected using the scanning vibrating electrode technique (SVET) with quasi-simultaneous measurements of pH with an ion-selective microelectrode (pH-SME). The measurements were performed in a validation cell providing a 48 μm diameter Pt wire cross section as a source of electric current. The time lag between acquiring each current density and pH data point was 1.5 s, due to the response time of the pH-SME. Quasi-simultaneous SVET-pH measurements that correlate electrochemical oxidation-reduction processes with acid-base chemical equilibria are reported for the first time. No cross-talk between the vibrating microelectrode and the ion-selective microelectrode could be detected under the given experimental conditions.

  2. Analyzing the effect of selected control policy measures and sociodemographic factors on alcoholic beverage consumption in Europe within the AMPHORA project: statistical methods.

    PubMed

    Baccini, Michela; Carreras, Giulia

    2014-10-01

    This paper describes the methods used to investigate variations in total alcoholic beverage consumption as related to selected control intervention policies and other socioeconomic factors (unplanned factors) within 12 European countries involved in the AMPHORA project. The analysis presented several critical points: the presence of missing values, strong correlation among the unplanned factors, and long-term waves or trends in both the time series of alcohol consumption and the time series of the main explanatory variables. These difficulties were addressed by implementing a multiple imputation procedure for filling in missing values, then specifying for each country a multiple regression model which accounted for time trend, policy measures and a limited set of unplanned factors, selected in advance on the basis of sociological and statistical considerations. This approach allowed estimating the "net" effect of the selected control policies on alcohol consumption, but not the association between each unplanned factor and the outcome.

  3. Pathloss Calculation Using the Transmission Line Matrix and Finite Difference Time Domain Methods With Coarse Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James; Kuruganti, Teja

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for freespace propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.

  4. The study of infrared target recognition at sea background based on visual attention computational model

    NASA Astrophysics Data System (ADS)

    Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing

    2009-07-01

    Infrared images at sea background are notorious for their low signal-to-noise ratio; therefore, target recognition in infrared images through traditional methods is very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image processing techniques are combined in a manner that utilizes the strengths of both. The visual attention algorithm automatically searches for salient regions, represents them by a set of winner points, and marks them as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate the filtering and segmentation; a labeling operation is then applied selectively as required. Making use of the labeled information, we obtain from the final segmentation result the positional information of the region of interest, label its centroid on the corresponding original image, and complete the localization of the target. The processing time depends not on the size of the image but on the salient regions, so the time consumed is greatly reduced. The method was used to recognize several kinds of real infrared images, and the experimental results reveal the effectiveness of the algorithm presented in this paper.

  5. Changes in Food Selectivity in Children with Autism Spectrum Disorder.

    PubMed

    Bandini, Linda G; Curtin, Carol; Phillips, Sarah; Anderson, Sarah E; Maslin, Melissa; Must, Aviva

    2017-02-01

    Food selectivity is a common problem in children with autism spectrum disorder (ASD) and has an adverse impact on nutrient adequacy and family mealtimes. Despite recent research in this area, few studies have addressed whether food selectivity present in children with ASD persists into adolescence. In this study, we assessed food selectivity in 18 children with ASD at two time points (mean age = 6.8 and 13.2 years), and examined changes in food selectivity. While food refusal improved overall, we did not observe an increase in food repertoire (number of unique foods eaten). These findings support the need for interventions early in childhood to increase variety and promote healthy eating among children with ASD.

  6. Measurement of Moisture Sorption Isotherm by DVS Hydrosorb

    NASA Astrophysics Data System (ADS)

    Kurniawan, Y. R.; Purwanto, Y. A.; Purwanti, N.; Budijanto, S.

    2018-05-01

    Artificial rice made from corn flour, sago, glycerol monostearate, vegetable oil, water and jelly powder was developed by an extrusion method through the process stages of material mixing, extrusion, drying, packaging and storage. Sorption isotherm information on food ingredients is used to design and optimize the drying, packaging and storage processes. The water sorption isotherm of the artificial rice was measured using the humidity-generating method with a Dynamic Vapor Sorption device, which has the advantage that equilibration is about 10 to 100 times faster than with the saturated salt slurry method. The relative humidity is controlled automatically by adjusting the proportions of a mixture of dry air and water-saturated air. This paper aims to develop the moisture sorption isotherm using the Hydrosorb 1000 Water Vapor Sorption Analyzer. Samples were prepared by degassing in a heating mantle at 65°C. The analysis parameters to be specified were the determination of Po, the sample data, the selection of water activity points, and the equilibrium conditions. The selected analysis temperatures were 30°C and 45°C. Each analysis lasted 45 hours and yielded adsorption and desorption curves. The selected bottom water activity point of 0.05 at 30°C and 45°C yielded adsorbed masses of 0.1466 mg/g and 0.3455 mg/g, respectively, whereas the selected top water activity point of 0.95 at 30°C and 45°C yielded adsorbed masses of 190.8734 mg/g and 242.4161 mg/g, respectively. The moisture sorption isotherm measurements of the artificial rice made from corn flour at 30°C and 45°C using the Hydrosorb showed that the moisture sorption curve approximates the sigmoid-shaped type II curve commonly found in high-carbohydrate, corn-based foodstuffs.

  7. Selection is stronger in early-versus-late stages of divergence in a Neotropical livebearing fish.

    PubMed

    Ingley, Spencer J; Johnson, Jerald B

    2016-03-01

    How selection acts to drive trait evolution at different stages of divergence is of fundamental importance in our understanding of the origins of biodiversity. Yet, most studies have focused on a single point along an evolutionary trajectory. Here, we provide a case study evaluating the strength of divergent selection acting on life-history traits at early-versus-late stages of divergence in Brachyrhaphis fishes. We find that the difference in selection is stronger in the early-diverged population than the late-diverged population, and that trait differences acquired early are maintained over time. © 2016 The Author(s).

  8. Vector control of wind turbine on the basis of the fuzzy selective neural net*

    NASA Astrophysics Data System (ADS)

    Engel, E. A.; Kovalev, I. V.; Engel, N. E.

    2016-04-01

    This article describes vector control of a wind turbine based on a fuzzy selective neural net. Based on the wind turbine system's state, the fuzzy selective neural net tracks the maximum power point under random perturbations. Numerical simulations are performed to clarify the applicability and advantages of the proposed vector control of the wind turbine based on the fuzzy selective neural net. The simulation results show that the proposed intelligent control of the wind turbine achieves real-time control speed and competitive performance compared to a classical control model with PID controllers based on a traditional maximum torque control strategy.

  9. Sequence polymorphism in an insect RNA virus field population: A snapshot from a single point in space and time reveals stochastic differences among and within individual hosts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenger, Drake C., E-mail: drake.stenger@ars.usda.

    Population structure of Homalodisca coagulata Virus-1 (HoCV-1) among and within field-collected insects sampled from a single point in space and time was examined. Polymorphism in complete consensus sequences among single-insect isolates was dominated by synonymous substitutions. The mutant spectrum of the C2 helicase region within each single-insect isolate was unique and dominated by nonsynonymous singletons. Bootstrapping was used to correct the within-isolate nonsynonymous:synonymous arithmetic ratio (N:S) for RT-PCR error, yielding an N:S value ~one log-unit greater than that of consensus sequences. Probability of all possible single-base substitutions for the C2 region predicted N:S values within 95% confidence limits of the corrected within-isolate N:S when the only constraint imposed was viral polymerase error bias for transitions over transversions. These results indicate that bottlenecks coupled with strong negative/purifying selection drive consensus sequences toward neutral sequence space, and that most polymorphism within single-insect isolates is composed of newly-minted mutations sampled prior to selection. -- Highlights: •Sampling protocol minimized differential selection/history among isolates. •Polymorphism among consensus sequences dominated by negative/purifying selection. •Within-isolate N:S ratio corrected for RT-PCR error by bootstrapping. •Within-isolate mutant spectrum dominated by new mutations yet to undergo selection.

  10. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    PubMed Central

    Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.

    2014-01-01

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
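    A simplified reading of the model families above can be sketched as follows: a static model carries the value observed at the most recent adjustment point forward, while a linear model extrapolates from the two most recent adjustment points. The function names and the scalar time series are illustrative; the actual feature vectors in the study describe GTV centroid position and shape.

```python
def static_forecast(adjustments, t):
    """Carry forward the value observed at the latest adjustment time <= t."""
    past = [(ti, v) for ti, v in adjustments if ti <= t]
    return max(past)[1]  # value at the most recent adjustment point

def linear_forecast(adjustments, t):
    """Extrapolate the trend between the two most recent adjustment points."""
    past = sorted((ti, v) for ti, v in adjustments if ti <= t)
    if len(past) < 2:
        return past[-1][1]  # fall back to carry-forward
    (t0, v0), (t1, v1) = past[-2:]
    slope = (v1 - v0) / (t1 - t0)
    return v1 + slope * (t - t1)

# One landmark coordinate (mm) observed at two adjustment points (days 0 and 5):
obs = [(0, 10.0), (5, 12.0)]
```

    Adding more adjustment points shortens the extrapolation interval, which is consistent with the reported drop in forecast error as adjustment points are added.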

  11. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind

    2014-08-15

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. 
The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.

  12. When Does Frequency-Independent Selection Maintain Genetic Variation?

    PubMed

    Novak, Sebastian; Barton, Nicholas H

    2017-10-01

    Frequency-independent selection is generally considered as a force that acts to reduce the genetic variation in evolving populations, yet rigorous arguments for this idea are scarce. When selection fluctuates in time, it is unclear whether frequency-independent selection may maintain genetic polymorphism without invoking additional mechanisms. We show that constant frequency-independent selection with arbitrary epistasis on a well-mixed haploid population eliminates genetic variation if we assume linkage equilibrium between alleles. To this end, we introduce the notion of frequency-independent selection at the level of alleles, which is sufficient to prove our claim and contains the notion of frequency-independent selection on haploids. When selection and recombination are weak but of the same order, there may be strong linkage disequilibrium; numerical calculations show that stable equilibria are highly unlikely. Using the example of a diallelic two-locus model, we then demonstrate that frequency-independent selection that fluctuates in time can maintain stable polymorphism if linkage disequilibrium changes its sign periodically. We put our findings in the context of results from the existing literature and point out those scenarios in which the possible role of frequency-independent selection in maintaining genetic variation remains unclear. Copyright © 2017 by the Genetics Society of America.

  13. Proceedings: Ejector Workshop for Aerospace Applications

    DTIC Science & Technology

    1982-06-01

    probably a good thing to take some time at selected points in the proceedings to take stock of where we are. One of the points of the meeting was to give...sides, fundamental and experimental work going on; but was quite surprised by the fact that these are treated as two different things. Some people...personnel. DR. WILSON: I think one of the things that has slowed the development of computational methods is just that. We haven't had much information

  14. High-frequency video capture and a computer program with frame-by-frame angle determination functionality as tools that support judging in artistic gymnastics.

    PubMed

    Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej

    2015-01-01

    The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined the flexion angles of the knee and hip joints using the computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points, with an arithmetic mean of 0.16 points, for the flexion of the knee joints. In the frame-by-frame video analysis method, the total amounted to 8.6 error points and the mean value to 0.24 error points. For the excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation; the sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through frame-by-frame video analysis of movement technique were higher than those obtained through real-time observation. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and high-speed video analysis performed without determining the exact angles were found to be insufficient tools for improving the quality of judging.
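    The frame-by-frame angle determination described above reduces to a standard three-point angle computation: the joint angle is the angle at the middle landmark between the vectors pointing to the two adjacent landmarks. A minimal 2D sketch; the landmark names are illustrative, not taken from the study's software:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at point b formed by segments b->a and b->c,
    e.g. knee flexion from hip (a), knee (b), and ankle (c) landmarks."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# A fully extended limb (collinear landmarks) gives 180 degrees:
angle = joint_angle((0, 2), (0, 1), (0, 0))
```

    The deviation of such an angle from full extension is what a judge would translate into error points for excessive flexion.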

  15. Precise Spatially Selective Photothermolysis Using Modulated Femtosecond Lasers and Real-time Multimodal Microscopy Monitoring.

    PubMed

    Huang, Yimei; Lui, Harvey; Zhao, Jianhua; Wu, Zhenguo; Zeng, Haishan

    2017-01-01

    The successful application of lasers in the treatment of skin diseases and cosmetic surgery is largely based on the principle of conventional selective photothermolysis which relies strongly on the difference in the absorption between the therapeutic target and its surroundings. However, when the differentiation in absorption is not sufficient, collateral damage would occur due to indiscriminate and nonspecific tissue heating. To deal with such cases, we introduce a novel spatially selective photothermolysis method based on multiphoton absorption in which the radiant energy of a tightly focused near-infrared femtosecond laser beam can be directed spatially by aiming the laser focal point to the target of interest. We construct a multimodal optical microscope to perform and monitor the spatially selective photothermolysis. We demonstrate that precise alteration of the targeted tissue is achieved while leaving surrounding tissue intact by choosing appropriate femtosecond laser exposure with multimodal optical microscopy monitoring in real time.

  16. Precise Spatially Selective Photothermolysis Using Modulated Femtosecond Lasers and Real-time Multimodal Microscopy Monitoring

    PubMed Central

    Huang, Yimei; Lui, Harvey; Zhao, Jianhua; Wu, Zhenguo; Zeng, Haishan

    2017-01-01

    The successful application of lasers in the treatment of skin diseases and cosmetic surgery is largely based on the principle of conventional selective photothermolysis which relies strongly on the difference in the absorption between the therapeutic target and its surroundings. However, when the differentiation in absorption is not sufficient, collateral damage would occur due to indiscriminate and nonspecific tissue heating. To deal with such cases, we introduce a novel spatially selective photothermolysis method based on multiphoton absorption in which the radiant energy of a tightly focused near-infrared femtosecond laser beam can be directed spatially by aiming the laser focal point to the target of interest. We construct a multimodal optical microscope to perform and monitor the spatially selective photothermolysis. We demonstrate that precise alteration of the targeted tissue is achieved while leaving surrounding tissue intact by choosing appropriate femtosecond laser exposure with multimodal optical microscopy monitoring in real time. PMID:28255346

  17. Redshift Evolution of Non-Gaussianity in Cosmic Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Sullivan, James; Wiegand, Alexander; Eisenstein, Daniel

    2018-01-01

    We probe the higher-order galaxy clustering in the final data release (DR12) of the Sloan Digital Sky Survey using germ-grain Minkowski Functionals (MFs). Our data selection contains 979,430 BOSS galaxies from both the northern and southern galactic caps over the redshift range 0.2 - 0.6. We extract the higher-order parts of the MFs and find deviations from the case without higher order MFs with chi-squared values of order 1000 for 24 degrees of freedom across the entire data selection. We show the MFs to be sensitive to contributions up to the five-point correlation function across the entire data selection. We measure significant redshift evolution in the higher-order functionals for the first time, with a percentage growth between redshift bins of approximately 20% in both galactic caps. This is nearly a factor of 2 greater than the similar growth in the two-point correlation function and will allow for tests of non-linear structure growth by comparing the three-point and higher-order parts to their expected theoretical values. The SAO REU program is funded by the National Science Foundation REU and Department of Defense ASSURE programs under NSF Grant AST-1659473, and by the Smithsonian Institution.

  18. Pathloss Calculation Using the Transmission Line Matrix and Finite Difference Time Domain Methods With Coarse Grids

    DOE PAGES

    Nutaro, James; Kuruganti, Teja

    2017-02-24

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for freespace propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.

  19. NMR diffusion simulation based on conditional random walk.

    PubMed

    Gudbjartsson, H; Patz, S

    1995-01-01

    The authors introduce here a new, very fast simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference method (FD), the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step, the computation time can therefore be reduced. Finally the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
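    For context, the conventional fixed-step Monte Carlo baseline the authors improve upon can be sketched directly: each spin random-walks in one dimension and accumulates a phase γG·x(t)·Δt per step, and for free diffusion in a constant gradient the ensemble signal should approach the analytic attenuation exp(-(γG)²Dt³/3). The sketch below (parameters in arbitrary units) illustrates that conventional method, not the conditional-random-walk algorithm of the paper:

```python
import math
import random

def mc_attenuation(gamma_g=1.0, D=0.5, t=1.0, n_steps=100, n_spins=5000, seed=1):
    """Fixed-step Monte Carlo estimate of the NMR signal attenuation for
    free 1D diffusion in a constant gradient gamma_g = gamma * G."""
    rng = random.Random(seed)
    dt = t / n_steps
    sigma = math.sqrt(2.0 * D * dt)  # Brownian step standard deviation
    re = im = 0.0
    for _ in range(n_spins):
        x = phase = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)      # diffuse
            phase += gamma_g * x * dt       # accumulate gradient-induced phase
        re += math.cos(phase)
        im += math.sin(phase)
    return math.hypot(re, im) / n_spins     # |ensemble-averaged transverse signal|

# Analytic result for free diffusion: exp(-(gamma*G)^2 * D * t^3 / 3)
analytic = math.exp(-(1.0 ** 2) * 0.5 * 1.0 ** 3 / 3.0)
```

    The residual dependence of this estimate on `n_steps` is exactly the time-step sensitivity that the authors' method removes.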

  20. Evaluating Gaze-Based Interface Tools to Facilitate Point-and-Select Tasks with Small Targets

    ERIC Educational Resources Information Center

    Skovsgaard, Henrik; Mateo, Julio C.; Hansen, John Paulin

    2011-01-01

    Gaze interaction affords hands-free control of computers. Pointing to and selecting small targets using gaze alone is difficult because of the limited accuracy of gaze pointing. This is the first experimental comparison of gaze-based interface tools for small-target (e.g. less than 12 x 12 pixels) point-and-select tasks. We conducted two…

  1. Seismic Line Location Map Hot Pot Project, Humboldt County, Nevada 2010

    DOE Data Explorer

    Lane, Michael

    2010-01-01

    Seismic Line Location Map Hot Pot Project, Humboldt County, Nevada 2010. ArcGIS map package containing topographic base map, Township and Range layer, Oski BLM and private leases at time of survey, and locations, with selected shot points, of the five seismic lines.

  2. Characteristics of Outstanding Student Teachers

    ERIC Educational Resources Information Center

    Eldar, Eitan; Talmor, Rachel

    2006-01-01

    This paper describes the characteristics of student teachers who were evaluated as outstanding during their teacher education studies. Outstanding students were selected after 2 years of field experiences based on their teaching abilities and academic achievements. Data were collected at three points of time: before they commenced their studies at…

  3. Mixed-Poisson Point Process with Partially-Observed Covariates: Ecological Momentary Assessment of Smoking.

    PubMed

    Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul

    2012-01-01

    Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.

  4. Estimating average annual per cent change in trend analysis

    PubMed Central

    Clegg, Limin X; Hankey, Benjamin F; Tiwari, Ram; Feuer, Eric J; Edwards, Brenda K

    2009-01-01

    Trends in incidence or mortality rates over a specified time interval are usually described by the conventional annual per cent change (cAPC), under the assumption of a constant rate of change. When this assumption does not hold over the entire time interval, the trend may be characterized using the annual per cent changes from segmented analysis (sAPCs). This approach assumes that the change in rates is constant over each time partition defined by the transition points, but varies among different time partitions. Different groups (e.g. racial subgroups), however, may have different transition points and thus different time partitions over which they have constant rates of change, making comparison of sAPCs problematic across groups over a common time interval of interest (e.g. the past 10 years). We propose a new measure, the average annual per cent change (AAPC), which uses sAPCs to summarize and compare trends for a specific time period. The advantage of the proposed AAPC is that it takes into account the trend transitions, whereas cAPC does not and can lead to erroneous conclusions. In addition, when the trend is constant over the entire time interval of interest, the AAPC has the advantage of reducing to both cAPC and sAPC. Moreover, because the estimated AAPC is based on the segmented analysis over the entire data series, any selected subinterval within a single time partition will yield the same AAPC estimate; that is, it will be equal to the estimated sAPC for that time partition. The cAPC, however, is re-estimated using data only from that selected subinterval; thus, its estimate may be sensitive to the subinterval selected. The AAPC estimation has been incorporated into the segmented regression (free) software Joinpoint, which is used by many registries throughout the world for characterizing trends in cancer rates. Copyright © 2009 John Wiley & Sons, Ltd. PMID:19856324
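
    The AAPC described above is a length-weighted geometric average of the segment-specific changes: each segment's APC is converted to a log scale, weighted by the number of years it spans, and transformed back. A minimal sketch of that weighting (not the Joinpoint implementation, which also estimates the transition points and confidence intervals):

```python
import math

def aapc(segments):
    """Average annual percent change from fitted joinpoint segments.

    segments: list of (years, apc) pairs, where `years` is the length of a
    time partition and `apc` its estimated annual percent change.
    """
    total_years = sum(years for years, _ in segments)
    log_sum = sum(years * math.log(1.0 + apc / 100.0) for years, apc in segments)
    return 100.0 * (math.exp(log_sum / total_years) - 1.0)

# Two segments: 4 years rising at 2%/yr, then 6 years falling at 1%/yr
mixed = aapc([(4, 2.0), (6, -1.0)])
# A constant trend reduces the AAPC to the conventional APC, as in the text
constant = aapc([(10, 3.0)])
```

    As the abstract notes, a single-segment (constant) trend makes the AAPC coincide with the conventional APC, while a mixed trend yields an average that respects the transition points.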

  5. Wavenumber-frequency Spectra of Pressure Fluctuations Measured via Fast Response Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Panda, J.; Roozeboom, N. H.; Ross, J. C.

    2016-01-01

    The recent advancement in fast-response Pressure-Sensitive Paint (PSP) allows time-resolved measurements of unsteady pressure fluctuations from a dense grid of spatial points on a wind tunnel model. This capability allows for direct calculations of the wavenumber-frequency (k-ω) spectrum of pressure fluctuations. Such data, useful for the vibro-acoustics analysis of aerospace vehicles, are difficult to obtain otherwise. For the present work, time histories of pressure fluctuations on a flat plate subjected to vortex shedding from a rectangular bluff-body were measured using PSP. The light intensity levels in the photographic images were then converted to instantaneous pressure histories by applying calibration constants, which were calculated from a few dynamic pressure sensors placed at selected points on the plate. Fourier transform of the time-histories from a large number of spatial points provided k-ω spectra for pressure fluctuations. The data provide a first glimpse into the possibility of creating detailed forcing functions for vibro-acoustics analysis of aerospace vehicles, albeit for a limited frequency range.
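
    A Fourier transform over both space and time converts such pressure histories into a k-ω spectrum, where a travelling disturbance appears as a peak at its wavenumber and frequency. A small, stdlib-only sketch on a toy 16x16 space-time grid with a single synthetic wave (all values hypothetical; real PSP processing would use an FFT and windowing):

```python
import cmath
import math

N = 16          # toy grid: N points in space (x) and N in time (t)
k0, w0 = 3, 5   # integer wavenumber / frequency indices of a synthetic wave

# Pressure history of one wave travelling in +x: p(x, t) = cos(k0*x - w0*t)
p = [[math.cos(2.0 * math.pi * (k0 * x - w0 * t) / N) for t in range(N)]
     for x in range(N)]

def komega_spectrum(field):
    """Magnitude of the discrete 2-D Fourier transform over (x, t)."""
    n = len(field)
    return [[abs(sum(field[x][t] * cmath.exp(-2j * math.pi * (m * x + q * t) / n)
                     for x in range(n) for t in range(n)))
             for q in range(n)]
            for m in range(n)]

spec = komega_spectrum(p)
peak = max(((m, q) for m in range(N) for q in range(N)),
           key=lambda mq: spec[mq[0]][mq[1]])
# The real-valued wave produces a conjugate pair of spectral peaks, at
# (k0, N - w0) and (N - k0, w0); everything else is numerically zero.
```

    The location of the peak encodes both the spatial scale and the convection speed of the disturbance, which is what makes k-ω spectra useful as vibro-acoustic forcing functions.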

  6. Fast and flexible selection with a single switch.

    PubMed

    Broderick, Tamara; MacKay, David J C

    2009-10-22

    Selection methods that require only a single-switch input, such as a button click or blink, are potentially useful for individuals with motor impairments, mobile technology users, and individuals wishing to transmit information securely. We present a single-switch selection method, "Nomon," that is general and efficient. Existing single-switch selection methods require selectable options to be arranged in ways that limit potential applications. By contrast, traditional operating systems, web browsers, and free-form applications (such as drawing) place options at arbitrary points on the screen. Nomon, however, has the flexibility to select any point on a screen. Nomon adapts automatically to an individual's clicking ability; it allows a person who clicks precisely to make a selection quickly and allows a person who clicks imprecisely more time to make a selection without error. Nomon reaps gains in information rate by allowing the specification of beliefs (priors) about option selection probabilities and by avoiding tree-based selection schemes in favor of direct (posterior) inference. We have developed both a Nomon-based writing application and a drawing application. To evaluate Nomon's performance, we compared the writing application with a popular existing method for single-switch writing (row-column scanning). Novice users wrote 35% faster with the Nomon interface than with the scanning interface. An experienced user (author TB, with 10 hours practice) wrote at speeds of 9.3 words per minute with Nomon, using 1.2 clicks per character and making no errors in the final text.
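
    The "direct (posterior) inference" idea can be sketched as a Bayesian update: each on-screen option keeps a belief that is multiplied by the likelihood of the observed click time under that option's clock phase, then renormalized. A toy sketch with made-up likelihood numbers (not Nomon's actual click-timing model):

```python
def update_posterior(prior, likelihood):
    """One Bayesian update of the belief over selectable options."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Hypothetical 4-option screen with a uniform prior; the likelihood numbers
# below are made up, standing in for how well each option's clock phase
# explains the observed click time.
belief = [0.25, 0.25, 0.25, 0.25]
for click_likelihood in ([0.1, 0.7, 0.1, 0.1], [0.2, 0.6, 0.1, 0.1]):
    belief = update_posterior(belief, click_likelihood)
# After two clicks, option 1 (0-indexed) dominates and can be selected.
```

    A precise clicker produces sharply peaked likelihoods and needs fewer clicks; an imprecise clicker's flatter likelihoods simply require more updates, which matches the adaptive behavior described above.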

  7. Breakfast Skipping, Extreme Commutes, and the Sex Composition at Birth.

    PubMed

    Mazumder, Bhashkar; Seeskin, Zachary

    2015-01-01

    A growing body of literature has shown that environmental exposures in the period around conception can affect the sex ratio at birth through selective attrition that favors the survival of female conceptuses. Glucose availability is considered a key indicator of the fetal environment, and its absence as a result of meal skipping may inhibit male survival. We hypothesize that breakfast skipping during pregnancy may lead to a reduction in the fraction of male births. Using time use data from the United States we show that women with commute times of 90 minutes or longer are 20 percentage points more likely to skip breakfast. Using U.S. census data we show that women with commute times of 90 minutes or longer are 1.2 percentage points less likely to have a male child under the age of 2. Under some assumptions, this implies that routinely skipping breakfast around the time of conception leads to a 6 percentage point reduction in the probability of a male child. Skipping breakfast during pregnancy may therefore constitute a poor environment for fetal health more generally.

  8. Sensitivity of landscape resistance estimates based on point selection functions to scale and behavioral state: Pumas as a case study

    Treesearch

    Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce

    2014-01-01

    Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...

  9. On-line calibration of high-response pressure transducers during jet-engine testing

    NASA Technical Reports Server (NTRS)

    Armentrout, E. C.

    1974-01-01

    Jet engine testing concerned with the effect of inlet pressure and temperature distortions on engine performance involves the use of numerous miniature pressure transducers. Despite recent improvements in the manufacture of miniature pressure transducers, they still exhibit sensitivity change and zero-shift with temperature and time. To obtain meaningful data, a calibration system is needed to determine these changes. A system has been developed which provides for computer selection of appropriate reference pressures, selected from nine different sources, to provide a two- or three-point calibration. Calibrations are made on command, before and sometimes after each data point. A unique no-leak matrix valve design is used in the reference pressure system. Zero-shift corrections are measured and the values are automatically inserted into the data reduction program.

  10. Hybrid Propulsion Technology Program, phase 1. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The study program was contracted to evaluate concepts of hybrid propulsion, select the most optimum, and prepare a conceptual design package. Further, this study required preparation of a technology definition package to identify hybrid propulsion enabling technologies and planning to acquire that technology in Phase 2 and demonstrate that technology in Phase 3. Researchers evaluated two design philosophies for Hybrid Rocket Booster (HRB) selection. The first is an ASRM-modified hybrid wherein as many components/designs as possible were used from the present Advanced Solid Rocket Motor (ASRM) design. The second was an entirely new, optimized hybrid booster using ASRM criteria as a point of departure, i.e., diameter, thrust-time curve, launch facilities, and external tank attach points. Researchers selected the new design based on the logic of optimizing a hybrid booster to provide NASA with a next generation vehicle in lieu of an interim advancement over the ASRM. The enabling technologies for hybrid propulsion are applicable to either, and the vehicle design may be selected at a downstream point (Phase 3) at NASA's discretion. The completion of these studies resulted in ranking the various concepts of boosters from the RSRM to a turbopump fed (TF) hybrid. The scoring resulting from the Figure of Merit (FOM) scoring system clearly shows a natural growth path where the turbopump fed solid liquid staged combustion hybrid provides maximized payload and the highest safety, reliability, and lowest life cycle cost.

  11. A LiDAR and IMU Integrated Indoor Navigation System for UAVs and Its Application in Real-Time Pipeline Classification

    PubMed Central

    Kumar, G. Ajay; Patil, Ashok Kumar; Patil, Rekha; Park, Seong Sill; Chai, Young Ho

    2017-01-01

    Mapping the environment of a vehicle and localizing a vehicle within that unknown environment are complex issues. Although many approaches based on various types of sensory inputs and computational concepts have been successfully utilized for ground robot localization, there is difficulty in localizing an unmanned aerial vehicle (UAV) due to variation in altitude and motion dynamics. This paper proposes a robust and efficient indoor mapping and localization solution for a UAV integrated with low-cost Light Detection and Ranging (LiDAR) and Inertial Measurement Unit (IMU) sensors. Considering the advantage of the typical geometric structure of indoor environments, the planar position of UAVs can be efficiently calculated from a point-to-point scan matching algorithm using measurements from a horizontally scanning primary LiDAR. The altitude of the UAV with respect to the floor can be estimated accurately using a vertically scanning secondary LiDAR scanner, which is mounted orthogonally to the primary LiDAR. Furthermore, a Kalman filter is used to derive the 3D position by fusing primary and secondary LiDAR data. Additionally, this work presents a novel method for its application in the real-time classification of a pipeline in an indoor map by integrating the proposed navigation approach. Classification of the pipeline is based on the pipe radius estimation considering the region of interest (ROI) and the typical angle. The ROI is selected by finding the nearest neighbors of the selected seed point in the pipeline point cloud, and the typical angle is estimated with the directional histogram. Experimental results are provided to determine the feasibility of the proposed navigation system and its integration with real-time application in industrial plant engineering. PMID:28574474
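
    The altitude-fusion step can be illustrated with a scalar Kalman filter: the secondary LiDAR's range measurement corrects the predicted altitude in proportion to the relative uncertainties. This is a one-dimensional sketch with illustrative noise values Q and R (the paper fuses full 3-D position; all numbers here are hypothetical):

```python
def kalman_step(x, P, z, Q=0.01, R=0.04):
    """One predict/update cycle of a scalar Kalman filter.

    x, P: altitude estimate and its variance; z: LiDAR range measurement;
    Q, R: illustrative process and measurement noise variances.
    """
    P = P + Q                 # predict: process noise inflates uncertainty
    K = P / (P + R)           # Kalman gain: trust in measurement vs. prediction
    x = x + K * (z - x)       # correct with the measurement residual
    P = (1.0 - K) * P
    return x, P

# Hover at ~2.0 m: noisy vertical-LiDAR readings (hypothetical numbers)
x, P = 0.0, 1.0               # poor initial guess, large initial uncertainty
for z in [2.1, 1.95, 2.05, 1.9, 2.0, 2.1, 1.98, 2.02]:
    x, P = kalman_step(x, P, z)
```

    The estimate converges toward the true altitude while its variance shrinks toward a steady state set by Q and R; the same gain logic, in vector form, underlies the 3-D fusion of the primary and secondary LiDAR data.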

  12. 40 CFR 60.143 - Monitoring of operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... steel production cycle, and the time and duration of any diversion of exhaust gases from the main stack... sensor or pressure tap must be located close to the water discharge point. The Administrator must be consulted for approval in advance of selecting alternative locations for the pressure sensor or tap. (3) All...

  13. 40 CFR 60.143 - Monitoring of operations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... steel production cycle, and the time and duration of any diversion of exhaust gases from the main stack... sensor or pressure tap must be located close to the water discharge point. The Administrator must be consulted for approval in advance of selecting alternative locations for the pressure sensor or tap. (3) All...

  14. Drought and host selection influence microbial community dynamics in the grass root microbiome

    USDA-ARS?s Scientific Manuscript database

    Through 16S rRNA gene profiling across two distinct watering regimes and two developmental time points, we demonstrate that there is a strong correlation between host phylogenetic distance and the microbiome dissimilarity within root tissues, and that drought weakens this correlation by inducing con...

  15. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  16. Sampling Error in a Particulate Mixture: An Analytical Chemistry Experiment.

    ERIC Educational Resources Information Center

    Kratochvil, Byron

    1980-01-01

    Presents an undergraduate experiment demonstrating sampling error. Selected as the sampling system is a mixture of potassium hydrogen phthalate and sucrose; using a self-zeroing, automatically refillable buret to minimize titration time of multiple samples and employing a dilute back-titrant to obtain high end-point precision. (CS)

  17. Preparation of Solid Derivatives by Differential Scanning Calorimetry.

    ERIC Educational Resources Information Center

    Crandall, E. W.; Pennington, Maxine

    1980-01-01

    Describes the preparation of selected aldehydes and ketones, alcohols, amines, phenols, haloalkanes, and tertiaryamines by differential scanning calorimetry. Technique is advantageous because formation of the reaction product occurs and the melting point of the product is obtained on the same sample in a short time with no additional purification…

  18. Selected Papers on Low-Energy Antiprotons and Possible Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Robert

    1998-09-19

    The only realistic means by which to create a facility at Fermilab to produce large amounts of low energy antiprotons is to use resources which already exist. There is simply too little money and manpower at this point in time to generate new accelerators on a time scale before the turn of the century. Therefore, innovation is required to modify existing equipment to provide the services required by experimenters.

  19. Caries selective ablation: the handpiece

    NASA Astrophysics Data System (ADS)

    Hennig, Thomas; Rechmann, Peter; Holtermann, Andreas

    1995-05-01

    Caries-selective ablation is confined to a window of fluences bounded by the ablation thresholds of carious and healthy dentin, respectively. The aim of the study was to develop a dental handpiece which guarantees homogeneous fluence at the irradiated tooth surface. Furthermore, the point of treatment should be cooled without energy losses due to the cooling system. We suggest direct coupling of the laser radiation into a laminar stream of liquid, which in turn acts as a lengthened beam guide. The impacts of the laser radiation and of the cooling medium strike exactly the same point. Hot ablation debris is flushed out of the crater by the water jet. Fluences are constant whether the handpiece is used in contact mode or at a distance. Normally the surface of a bare fiber working in contact mode is destroyed after a few shots; coupling the laser radiation into a stream of liquid prevents this destruction. Taken together, the benefits of this special handpiece should make short overall treatment times possible. High average power can be applied to the tooth without the threat of thermal damage, and no time-consuming cutting of the fiber prolongs the treatment time.

  20. Improved performance of selective ablation using a specially designed handpiece

    NASA Astrophysics Data System (ADS)

    Hennig, Thomas; Rechmann, Peter

    1996-01-01

    Selective ablation is confined to a range of fluences bounded by the ablation thresholds of infected and healthy tooth structures, respectively. The aim of the study was to develop a dental handpiece which guarantees homogeneous fluence at the irradiated tooth surface. Furthermore, the point of treatment should be cooled without energy losses due to the cooling system. We suggest direct coupling of the laser radiation into a laminar stream of liquid, which may in turn act as a lengthened beam guide. The impacts of the laser radiation and of the cooling medium hit exactly the same point. Hot ablation debris is flushed out of the crater by the water jet. While the surface of a bare fiber working in contact mode is destroyed after a few shots, it was shown that coupling the laser radiation into a stream of liquid prevents this destruction. Taken together, the benefits of this special handpiece should make short overall treatment times possible. High average power can be applied to the tooth without the threat of thermal damage, and no time-consuming cutting of the fiber prolongs the treatment time.

  1. Developing a Markov Model for Forecasting End Strength of Selected Marine Corps Reserve (SMCR) Officers

    DTIC Science & Technology

    2013-03-01

    moving average (ARIMA) model, because the data is not a time series. The best a manpower planner can do at this point is to make an educated assumption…

  2. Influenza virus drug resistance: a time-sampled population genetics perspective.

    PubMed

    Foll, Matthieu; Poh, Yu-Ping; Renzette, Nicholas; Ferrer-Admetlla, Anna; Bank, Claudia; Shim, Hyunjin; Malaspinas, Anna-Sapfo; Ewing, Gregory; Liu, Ping; Wegmann, Daniel; Caffrey, Daniel R; Zeldovich, Konstantin B; Bolon, Daniel N; Wang, Jennifer P; Kowalik, Timothy F; Schiffer, Celia A; Finberg, Robert W; Jensen, Jeffrey D

    2014-02-01

    The challenge of distinguishing genetic drift from selection remains a central focus of population genetics. Time-sampled data may provide a powerful tool for distinguishing these processes, and we here propose approximate Bayesian, maximum likelihood, and analytical methods for the inference of demography and selection from time course data. Utilizing these novel statistical and computational tools, we evaluate whole-genome datasets of an influenza A H1N1 strain in the presence and absence of oseltamivir (an inhibitor of neuraminidase) collected at thirteen time points. Results reveal a striking consistency amongst the three estimation procedures developed, showing strongly increased selection pressure in the presence of drug treatment. Importantly, these approaches re-identify the known oseltamivir resistance site, successfully validating the approaches used. Enticingly, a number of previously unknown variants have also been identified as being positively selected. Results are interpreted in the light of Fisher's Geometric Model, allowing for a quantification of the increased distance to optimum exerted by the presence of drug, and theoretical predictions regarding the distribution of beneficial fitness effects of contending mutations are empirically tested. Further, given the fit to expectations of the Geometric Model, results suggest the ability to predict certain aspects of viral evolution in response to changing host environments and novel selective pressures.
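
    The increased selection pressure under drug treatment can be illustrated with the standard haploid selection recursion, p' = p(1+s) / (p(1+s) + (1-p)), which gives the expected allele-frequency trajectory between sampled time points. Drift, which the paper's methods model explicitly, is omitted in this sketch, and the parameter values are hypothetical:

```python
def allele_trajectory(p0, s, generations):
    """Expected frequency of a beneficial allele under haploid selection.

    p0: starting frequency; s: selection coefficient; drift is omitted,
    so this is the deterministic expectation between sampled time points.
    """
    p, traj = p0, [p0]
    for _ in range(generations):
        p = p * (1.0 + s) / (p * (1.0 + s) + (1.0 - p))
        traj.append(p)
    return traj

# Hypothetical numbers: a resistance allele starting at 1% frequency
with_drug = allele_trajectory(0.01, 0.10, 100)   # strong selection under drug
no_drug = allele_trajectory(0.01, 0.00, 100)     # neutral: frequency static
```

    Fitting s to observed time-sampled frequencies while accounting for drift and sampling noise is exactly the harder inference problem that the approximate Bayesian and likelihood methods above address.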

  3. LINE-1 methylation in plasma DNA as a biomarker of activity of DNA methylation inhibitors in patients with solid tumors.

    PubMed

    Aparicio, Ana; North, Brittany; Barske, Lindsey; Wang, Xuemei; Bollati, Valentina; Weisenberger, Daniel; Yoo, Christine; Tannir, Nizar; Horne, Erin; Groshen, Susan; Jones, Peter; Yang, Allen; Issa, Jean-Pierre

    2009-04-01

    Multiple clinical trials are investigating the use of the DNA methylation inhibitors azacitidine and decitabine for the treatment of solid tumors. Clinical trials in hematological malignancies have shown that optimal activity does not occur at their maximum tolerated doses, but selection of an optimal biological dose and schedule for use in solid tumor patients is hampered by the difficulty of obtaining tumor tissue to measure their activity. Here we investigate the feasibility of using plasma DNA to measure the demethylating activity of the DNA methylation inhibitors in patients with solid tumors. We compared four methods to measure LINE-1 and MAGE-A1 promoter methylation in T24 and HCT116 cancer cells treated with decitabine, and selected Pyrosequencing for its greater reproducibility and higher signal-to-noise ratio. We then obtained DNA from plasma, peripheral blood mononuclear cells, buccal mucosa cells and saliva from ten patients with metastatic solid tumors at two different time points, without any intervening treatment. DNA methylation measurements were not significantly different between time point 1 and time point 2 in patient samples. We conclude that measurement of LINE-1 methylation in DNA extracted from the plasma of patients with advanced solid tumors, using Pyrosequencing, is feasible and has low within-patient variability. Ongoing studies will determine whether changes in LINE-1 methylation in plasma DNA occur as a result of treatment with DNA methylation inhibitors and parallel changes in tumor tissue DNA.

  4. Methods for estimating selected flow-duration and flood-frequency characteristics at ungaged sites in Central Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.

    1998-01-01

    Methods for estimating daily mean discharges for selected flow durations and flood discharge for selected recurrence intervals at ungaged sites in central Idaho were applied using data collected at streamflow-gaging stations in the area. The areal and seasonal variability of discharge from ungaged drainage basins may be described by estimating daily mean discharges that are exceeded 20, 50, and 80 percent of the time each month. At 73 gaging stations, mean monthly discharge was regressed with discharge at three points—20, 50, and 80—from daily mean flow-duration curves for each month. Regression results were improved by dividing the study area into six regions. Previously determined estimates of mean monthly discharge from about 1,200 ungaged drainage basins provided the basis for applying the developed techniques to the ungaged basins. Estimates of daily mean discharges that are exceeded 20, 50, and 80 percent of the time each month at ungaged drainage basins can be made by multiplying mean monthly discharges estimated at ungaged sites by a regression factor for the appropriate region. In general, the flow-duration data were less accurately estimated at discharges exceeded 80 percent of the time than at discharges exceeded 20 percent of the time. Curves drawn through the three points for each of the six regions were most similar in July and most different from December through March. Coefficients of determination of the regressions indicate that differences in mean monthly discharge largely explain differences in discharge at points on the daily mean flow-duration curve. Inherent in the method are errors in the technique used to estimate mean monthly discharge. Flood discharge estimates for selected recurrence intervals at ungaged sites upstream or downstream from gaging stations can be determined by a transfer technique. 
A weighted ratio of drainage area times flood discharge for selected recurrence intervals at the gaging station can be used to estimate flood discharge at the ungaged site. Best results likely are obtained when the difference between gaged and ungaged drainage areas is small.

  5. Non-Linear Harmonic flow simulations of a High-Head Francis Turbine test case

    NASA Astrophysics Data System (ADS)

    Lestriez, R.; Amet, E.; Tartinville, B.; Hirsch, C.

    2016-11-01

    This work investigates the use of the non-linear harmonic (NLH) method for a high- head Francis turbine, the Francis99 workshop test case. The NLH method relies on a Fourier decomposition of the unsteady flow components in harmonics of Blade Passing Frequencies (BPF), which are the fundamentals of the periodic disturbances generated by the adjacent blade rows. The unsteady flow solution is obtained by marching in pseudo-time to a steady-state solution of the transport equations associated with the time-mean, the BPFs and their harmonics. Thanks to this transposition into frequency domain, meshing only one blade channel is sufficient, like for a steady flow simulation. Notable benefits in terms of computing costs and engineering time can therefore be obtained compared to classical time marching approach using sliding grid techniques. The method has been applied for three operating points of the Francis99 workshop high-head Francis turbine. Steady and NLH flow simulations have been carried out for these configurations. Impact of the grid size and near-wall refinement is analysed on all operating points for steady simulations and for Best Efficiency Point (BEP) for NLH simulations. Then, NLH results for a selected grid size are compared for the three different operating points, reproducing the tendencies observed in the experiment.

  6. Evaluation of photopoint photosensitizer mv6401, indium chloride methyl pyropheophorbide, as a photodynamic therapy agent in primate choriocapillaris and laser-induced choroidal neovascularization.

    PubMed

    Ciulla, Thomas A; Criswell, Mark H; Danis, Ronald P; Snyder, Wendy J; Small, Ward

    2004-08-01

    To assess the potential of a new photosensitizer, indium chloride methyl pyropheophorbide (PhotoPoint MV6401), for ocular photodynamic therapy (PDT) in normal choriocapillaris vessels and experimentally induced choroidal neovascularization in New-World monkeys (Saimiri sciureus). PhotoPoint MV6401 (Miravant Pharmaceuticals, Inc., Santa Barbara, CA) was activated at 664 nm using a DD3-0665 (Miravant Systems, Inc., Santa Barbara, CA) 0.5 W diode laser. The efficacy of MV6401 was evaluated by indirect ophthalmoscopy, fundus photography, fluorescein angiography, and histology. The drug and light doses were 0.10 micromoles/kg to 0.3 micromoles/kg and 10 J/cm² to 40 J/cm², respectively, and post-injection activation times ranged from +10 minutes to +120 minutes. Best closure of normal choriocapillaris was achieved at a dosage level of 0.15 micromoles/kg in primates. Histology demonstrated that increased post-injection activation times (+60 minutes to +90 minutes) and low laser light doses (10 J/cm² to 20 J/cm²) in the primate model resulted in selective closure of the choriocapillaris and medium-sized choroidal vessels with minimal effect on the retina. Histology from neovascular lesions PDT-treated with MV6401 revealed significant diminution of vascularity, correlating with diminution of leakage observed on angiography. PhotoPoint MV6401, indium chloride methyl pyropheophorbide, is a potent photosensitizer that demonstrates both efficacy and selectivity in primate choriocapillaris and laser-induced choroidal neovascularization occlusion. Maximum selectivity was achieved using a post-infusion interval of +60 to +90 minutes.

  7. Investigating the impact of the properties of pilot points on calibration of groundwater models: case study of a karst catchment in Rote Island, Indonesia

    NASA Astrophysics Data System (ADS)

    Klaas, Dua K. S. Y.; Imteaz, Monzur Alam

    2017-09-01

    A robust configuration of pilot points in the parameterisation step of a model is crucial for obtaining satisfactory model performance. However, the recommendations provided by the majority of recent researchers on pilot-point use are considered somewhat impractical. In this study, a practical approach is proposed for using pilot-point properties (i.e. number, distance and distribution method) in the calibration step of a groundwater model. For the first time, the relative distance-area ratio (d/A) and the head-zonation-based (HZB) method are introduced, to assign pilot points into the model domain by incorporating a user-friendly zone ratio. This study provides some insights into the trade-off between maximising and restricting the number of pilot points, and offers a relative basis for selecting the pilot-point properties and distribution method in the development of a physically based groundwater model. The grid-based (GB) method is found to perform comparably better than the HZB method in terms of model performance and computational time. When using the GB method, this study recommends a distance-area ratio of 0.05, a distance-x-grid-length ratio (d/Xgrid) of 0.10, and a distance-y-grid-length ratio (d/Ygrid) of 0.20.

  8. Using the Inflection Points and Rates of Growth and Decay to Predict Levels of Solar Activity

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2008-01-01

    The ascending and descending inflection points and rates of growth and decay at specific times during the sunspot cycle are examined as predictors for future activity. On average, the ascending inflection point occurs about 1-2 yr after sunspot minimum amplitude (Rm) and the descending inflection point occurs about 6-7 yr after Rm. The ascending inflection point and the inferred slope there (the 12-mo moving average (12-mma) of ΔR, the month-to-month change in the smoothed monthly mean sunspot number R) provide strong indications as to the expected size of the ongoing cycle's sunspot maximum amplitude (RM), while the descending inflection point appears to provide an indication as to the expected length of the ongoing cycle. The value of the 12-mma of ΔR at elapsed time T = 27 mo past the epoch of RM (E(RM)) seems to provide a strong indication as to the expected size of Rm for the following cycle. The expected Rm for cycle 24 is 7.6 +/- 4.4 (the 90-percent prediction interval), occurring before September 2008. Evidence is also presented for secular rises in selected cycle-related parameters and for preferential grouping of sunspot cycles by amplitude and/or period.
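    The 12-mma of ΔR can be sketched as follows (a generic trailing-window illustration; the authors' exact smoothing convention for R may differ, e.g. a centered window with half-weighted endpoints):

```python
def delta_12mma(R):
    """Month-to-month change of the (already smoothed) monthly mean
    sunspot number R, followed by a 12-month moving average."""
    dR = [b - a for a, b in zip(R, R[1:])]          # month-to-month change
    return [sum(dR[i:i + 12]) / 12 for i in range(len(dR) - 11)]

# Linearly rising R: every monthly change is 2, so the 12-mma slope is 2
R = [10 + 2 * m for m in range(30)]
slope = delta_12mma(R)
```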

  9. Appropriate selection for omalizumab treatment in patients with severe asthma?

    PubMed

    Nygaard, Leo; Henriksen, Daniel Pilsgaard; Madsen, Hanne; Davidsen, Jesper Rømhild

    2017-01-01

    Background: Omalizumab improves asthma control in patients with uncontrolled severe allergic asthma; however, appropriate patient selection is crucial. Information in this field is sparse. Objective: We aimed to estimate whether potential omalizumab candidates were appropriately selected according to guidelines, and the clinical effect of omalizumab treatment over time. Design: We performed a retrospective observational study on adult patients with asthma treated with omalizumab during 2006-2015 at the Department of Respiratory Medicine at Odense University Hospital (OUH), Denmark. Data were obtained from the Electronic Patient Journal of OUH and Odense Pharmaco-Epidemiological Database. Guideline criteria for omalizumab treatment were used to evaluate the appropriateness of omalizumab candidate selection, and the Asthma Control Test (ACT) to assess the clinical effects of omalizumab at weeks 16 and 52 from treatment initiation. Results: During the observation period, 24 patients received omalizumab, but only 10 patients (42%) fulfilled criteria recommended by international guidelines. The main reasons for not fulfilling the criteria were inadequately reduced lung function, insufficient number of exacerbations, and asthma standard therapy below Global Initiative for Asthma (GINA) step 4-5. Seventeen and 11 patients completed treatment at weeks 16 and 52, with a statistically significant increase in ACT score of 5.1 points [95% confidence interval (CI) 3.1-7.2, p = 0.0001] and 7.7 points (95% CI 4.3-11.1, p = 0.0005), respectively. Conclusion: Only 42% of the omalizumab-treated patients were appropriately selected according to current guidelines. Still, as omalizumab showed significant improvement in asthma control over time, it is important to keep this drug in mind as an add-on to asthma therapy in well-selected patients.

  10. Appropriate selection for omalizumab treatment in patients with severe asthma?

    PubMed Central

    Nygaard, Leo; Henriksen, Daniel Pilsgaard; Madsen, Hanne; Davidsen, Jesper Rømhild

    2017-01-01

    ABSTRACT Background: Omalizumab improves asthma control in patients with uncontrolled severe allergic asthma; however, appropriate patient selection is crucial. Information in this field is sparse. Objective: We aimed to estimate whether potential omalizumab candidates were appropriately selected according to guidelines, and the clinical effect of omalizumab treatment over time. Design: We performed a retrospective observational study on adult patients with asthma treated with omalizumab during 2006–2015 at the Department of Respiratory Medicine at Odense University Hospital (OUH), Denmark. Data were obtained from the Electronic Patient Journal of OUH and Odense Pharmaco-Epidemiological Database. Guideline criteria for omalizumab treatment were used to evaluate the appropriateness of omalizumab candidate selection, and the Asthma Control Test (ACT) to assess the clinical effects of omalizumab at weeks 16 and 52 from treatment initiation. Results: During the observation period, 24 patients received omalizumab, but only 10 patients (42%) fulfilled criteria recommended by international guidelines. The main reasons for not fulfilling the criteria were inadequately reduced lung function, insufficient number of exacerbations, and asthma standard therapy below Global Initiative for Asthma (GINA) step 4–5. Seventeen and 11 patients completed treatment at weeks 16 and 52, with a statistically significant increase in ACT score of 5.1 points [95% confidence interval (CI) 3.1–7.2, p = 0.0001] and 7.7 points (95% CI 4.3–11.1, p = 0.0005), respectively. Conclusion: Only 42% of the omalizumab-treated patients were appropriately selected according to current guidelines. Still, as omalizumab showed significant improvement in asthma control over time, it is important to keep this drug in mind as an add-on to asthma therapy in well-selected patients. PMID:28815007

  11. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data.

    PubMed

    Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia

    2017-07-28

    Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for the best covariate to split on from that of the best split point search for the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest (CIF) model using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under-five years of age in Uganda and it consists of categorical covariates with most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug resistant tuberculosis (XDR TB) which consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consists of covariates with many split-points based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Although survival forests are promising methods in analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of covariates of the dataset in question.
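    The split-point bias the abstract describes can be seen in a toy simulation (an illustrative sketch, not the RSF algorithm): with a pure-noise response, a categorical covariate with many levels attains a larger spurious variance reduction simply because more candidate partitions are searched.

```python
from itertools import combinations
import numpy as np

def best_split_gain(levels, y):
    """Variance reduction achieved by the best binary partition of a
    categorical covariate's levels (exhaustive search over subsets)."""
    uniq = np.unique(levels)
    base = y.var() * len(y)
    best = 0.0
    for r in range(1, len(uniq)):
        for left in combinations(uniq, r):
            mask = np.isin(levels, left)
            sse = y[mask].var() * mask.sum() + y[~mask].var() * (~mask).sum()
            best = max(best, base - sse)
    return best

def mean_spurious_gain(k, reps=30, n=120, seed=0):
    """Average best-split gain when y is pure noise: any gain is spurious."""
    rng = np.random.default_rng(seed)
    gains = []
    for _ in range(reps):
        levels = rng.integers(0, k, n)   # k-level categorical covariate
        y = rng.normal(size=n)           # response unrelated to the covariate
        gains.append(best_split_gain(levels, y))
    return float(np.mean(gains))

few = mean_spurious_gain(2)    # 2 levels: a single candidate partition
many = mean_spurious_gain(10)  # 10 levels: ~1000 candidate partitions
```

Because `many` exceeds `few` on pure noise, exhaustive split search favours many-level covariates; CIF avoids this by choosing the covariate with a permutation test before searching for its split point.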

  12. Time Scale Hierarchies in the Functional Organization of Complex Behaviors

    PubMed Central

    Perdikis, Dionysios; Huys, Raoul; Jirsa, Viktor K.

    2011-01-01

    Traditional approaches to cognitive modelling generally portray cognitive events in terms of ‘discrete’ states (point attractor dynamics) rather than in terms of processes, thereby neglecting the time structure of cognition. In contrast, more recent approaches explicitly address this temporal dimension, but typically provide no entry points into cognitive categorization of events and experiences. With the aim to incorporate both these aspects, we propose a framework for functional architectures. Our approach is grounded in the notion that arbitrary complex (human) behaviour is decomposable into functional modes (elementary units), which we conceptualize as low-dimensional dynamical objects (structured flows on manifolds). The ensemble of modes at an agent’s disposal constitutes his/her functional repertoire. The modes may be subjected to additional dynamics (termed operational signals), in particular, instantaneous inputs, and a mechanism that sequentially selects a mode so that it temporarily dominates the functional dynamics. The inputs and selection mechanisms act on faster and slower time scales than that inherent to the modes, respectively. The dynamics across the three time scales are coupled via feedback, rendering the entire architecture autonomous. We illustrate the functional architecture in the context of serial behaviour, namely cursive handwriting. Subsequently, we investigate the possibility of recovering the contributions of functional modes and operational signals from the output, which appears to be possible only when examining the output phase flow (i.e., not from trajectories in phase space or time). PMID:21980278

  13. Three-dimensional Simulations of Pure Deflagration Models for Thermonuclear Supernovae

    NASA Astrophysics Data System (ADS)

    Long, Min; Jordan, George C., IV; van Rossum, Daniel R.; Diemer, Benedikt; Graziani, Carlo; Kessler, Richard; Meyer, Bradley; Rich, Paul; Lamb, Don Q.

    2014-07-01

    We present a systematic study of the pure deflagration model of Type Ia supernovae (SNe Ia) using three-dimensional, high-resolution, full-star hydrodynamical simulations, nucleosynthetic yields calculated using Lagrangian tracer particles, and light curves calculated using radiation transport. We evaluate the simulations by comparing their predicted light curves with many observed SNe Ia using the SALT2 data-driven model and find that the simulations may correspond to under-luminous SNe Iax. We explore the effects of the initial conditions on our results by varying the number of randomly selected ignition points from 63 to 3500, and the radius of the centered sphere they are confined in from 128 to 384 km. We find that the rate of nuclear burning depends on the number of ignition points at early times, the density of ignition points at intermediate times, and the radius of the confining sphere at late times. The results depend primarily on the number of ignition points, but we do not expect this to be the case in general. The simulations with few ignition points release more nuclear energy E nuc, have larger kinetic energies E K, and produce more 56Ni than those with many ignition points, and differ in the distribution of 56Ni, Si, and C/O in the ejecta. For these reasons, the simulations with few ignition points exhibit higher peak B-band absolute magnitudes M B and light curves that rise and decline more quickly; their M B and light curves resemble those of under-luminous SNe Iax, while those for simulations with many ignition points are not.
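    Randomly placing ignition points inside a centered sphere, as varied in this study (63 to 3500 points, confining radius 128 to 384 km), can be sketched with a generic uniform-in-volume sampler (not the authors' code):

```python
import numpy as np

def sample_ignition_points(n, radius, seed=1):
    """Uniform random points inside a centered sphere: isotropic unit
    directions scaled by cube-root-distributed radii."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # random directions
    r = radius * rng.random(n) ** (1.0 / 3.0)       # uniform in volume
    return v * r[:, None]

pts = sample_ignition_points(3500, 128.0)           # densest case above
```

The cube-root scaling is what makes the density uniform in volume; sampling the radius linearly would concentrate points near the center.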

  14. Pursuing optimal electric machines transient diagnosis: The adaptive slope transform

    NASA Astrophysics Data System (ADS)

    Pons-Llinares, Joan; Riera-Guasp, Martín; Antonino-Daviu, Jose A.; Habetler, Thomas G.

    2016-12-01

    The aim of this paper is to introduce a new linear time-frequency transform to improve the detection of fault components in electric machines transient currents. Linear transforms are analysed from the perspective of the atoms used. A criterion to select the atoms at every point of the time-frequency plane is proposed, taking into account the characteristics of the searched component at each point. This criterion leads to the definition of the Adaptive Slope Transform, which enables a complete and optimal capture of the different components evolutions in a transient current. A comparison with conventional linear transforms (Short-Time Fourier Transform and Wavelet Transform) is carried out, showing their inherent limitations. The approach is tested with laboratory and field motors, and the Lower Sideband Harmonic is captured for the first time during an induction motor startup and subsequent load oscillations, accurately tracking its evolution.
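    The principle of matching the atom's frequency slope to the searched component at each time-frequency point can be illustrated with a Gaussian-windowed linear-chirp atom (a simplified sketch of the idea, not the paper's transform): a slope-matched atom yields a larger response to a chirping component than a constant-frequency (STFT-style) atom.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
f0, slope = 50.0, 100.0                  # test component: 50 Hz, +100 Hz/s
x = np.cos(2 * np.pi * (f0 * t + 0.5 * slope * t ** 2))

def atom_response(x, t0, f, s, width=0.1):
    """Correlate x with a Gaussian-windowed chirp atom centered at
    (t0, f) whose instantaneous frequency has slope s (uses global t)."""
    tau = t - t0
    g = np.exp(-0.5 * (tau / width) ** 2) \
        * np.exp(2j * np.pi * (f * tau + 0.5 * s * tau ** 2))
    return abs(np.vdot(g, x)) / np.linalg.norm(g)

f_inst = f0 + slope * 0.5                         # frequency at t0 = 0.5 s
matched = atom_response(x, 0.5, f_inst, slope)    # slope-matched atom
stft_like = atom_response(x, 0.5, f_inst, 0.0)    # constant-frequency atom
```

Selecting the atom slope per time-frequency point, as the paper proposes, amounts to maximizing this kind of matched response across candidate slopes.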

  15. 76 FR 51038 - Draft Guidance for Industry: Cell Selection Devices for Point of Care Production of Minimally...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-17

    ...; formerly Docket No. 2007D-0290] Draft Guidance for Industry: Cell Selection Devices for Point of Care Production of Minimally Manipulated Autologous Peripheral Blood Stem Cells; Withdrawal of Draft Guidance...: Cell Selection Devices for Point of Care Production of Minimally Manipulated Autologous Peripheral...

  16. Insect outbreak shifts the direction of selection from fast to slow growth rates in the long-lived conifer Pinus ponderosa.

    PubMed

    de la Mata, Raul; Hood, Sharon; Sala, Anna

    2017-07-11

    Long generation times limit species' rapid evolution to changing environments. Trees provide critical global ecosystem services, but are under increasing risk of mortality because of climate change-mediated disturbances, such as insect outbreaks. The extent to which disturbance changes the dynamics and strength of selection is unknown, but has important implications for the evolutionary potential of tree populations. Using a 40-y-old Pinus ponderosa genetic experiment, we provide rare evidence of context-dependent fluctuating selection on growth rates over time in a long-lived species. Fast growth was selected at juvenile stages, whereas slow growth was selected at mature stages under strong herbivory caused by a mountain pine beetle (Dendroctonus ponderosae) outbreak. Such opposing forces led to no net evolutionary response over time, thus providing a mechanism for the maintenance of genetic diversity on growth rates. Greater survival to mountain pine beetle attack in slow-growing families reflected, in part, a host-based life-history trade-off. Contrary to expectations, genetic effects on tree survival were greatest at the peak of the outbreak and pointed to complex defense responses. Our results suggest that selection forces in tree populations may be more relevant than previously thought, and have implications for tree population responses to future environments and for tree breeding programs.

  17. Extinction risk and eco-evolutionary dynamics in a variable environment with increasing frequency of extreme events

    PubMed Central

    Vincenzi, Simone

    2014-01-01

    One of the most dramatic consequences of climate change will be the intensification and increased frequency of extreme events. I used numerical simulations to understand and predict the consequences of directional trend (i.e. mean state) and increased variability of a climate variable (e.g. temperature), increased probability of occurrence of point extreme events (e.g. floods), selection pressure and effect size of mutations on a quantitative trait determining individual fitness, as well as their effects on the population and genetic dynamics of a population of moderate size. The interaction among climate trend, variability and probability of point extremes had a minor effect on risk of extinction, time to extinction and distribution of the trait after accounting for their independent effects. The survival chances of a population strongly and linearly decreased with increasing strength of selection, as well as with increasing climate trend and variability. Mutation amplitude had no effects on extinction risk, time to extinction or genetic adaptation to the new climate. Climate trend and strength of selection largely determined the shift of the mean phenotype in the population. The extinction or persistence of the populations in an ‘extinction window’ of 10 years was well predicted by a simple model including mean population size and mean genetic variance over a 10-year time frame preceding the ‘extinction window’, although genetic variance had a smaller role than population size in predicting contemporary risk of extinction. PMID:24920116

  18. Hydraulic fracturing stress measurement in underground salt rock mines at Upper Kama Deposit

    NASA Astrophysics Data System (ADS)

    Rubtsova, EV; Skulkin, AA

    2018-03-01

    The paper reports the experimental results on hydraulic fracturing (HF) in-situ stress measurements in potash mines of Uralkali. The selected HF procedure, as well as the locations and designs of the measuring points, is substantiated. From the evidence of 78 HF stress measurement tests at eight measuring points, it has been found that the in-situ stress field is nonequicomponent: the vertical stresses are close to estimates based on the weight of the overlying rock, while the horizontal stresses exceed the gravity stresses by a factor of 2–3.

  19. Time-lapse culture with morphokinetic embryo selection improves pregnancy and live birth chances and reduces early pregnancy loss: a meta-analysis.

    PubMed

    Pribenszky, Csaba; Nilselid, Anna-Maria; Montag, Markus

    2017-11-01

    Embryo evaluation and selection is fundamental in clinical IVF. Time-lapse follow-up of embryo development comprises undisturbed culture and the application of the visual information to support embryo evaluation. A meta-analysis of randomized controlled trials was carried out to study whether time-lapse monitoring with the prospective use of a morphokinetic algorithm for selection of embryos improves overall clinical outcome (pregnancy, early pregnancy loss, stillbirth and live birth rate) compared with embryo selection based on single time-point morphology in IVF cycles. The meta-analysis of five randomized controlled trials (n = 1637) showed that the application of time-lapse monitoring was associated with a significantly higher ongoing clinical pregnancy rate (51.0% versus 39.9%), with a pooled odds ratio of 1.542 (P < 0.001), significantly lower early pregnancy loss (15.3% versus 21.3%; OR: 0.662; P = 0.019) and a significantly increased live birth rate (44.2% versus 31.3%; OR 1.668; P = 0.009). Difference in stillbirth was not significant between groups (4.7% versus 2.4%). Quality of the evidence was moderate to low owing to inconsistencies across the studies. Selective application and variability were also limitations. Although time-lapse is shown to significantly improve overall clinical outcome, further high-quality evidence is needed before universal conclusions can be drawn. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
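    For reference, the crude odds ratio implied by two event rates can be computed directly; the pooled OR reported above additionally weights each trial, so it differs slightly from this crude value:

```python
def odds_ratio(p1, p2):
    """Crude odds ratio comparing event probabilities p1 and p2."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Ongoing clinical pregnancy: 51.0% with time-lapse vs 39.9% without;
# the crude OR is ~1.57, versus the study's weighted pooled OR of 1.542.
crude_or = odds_ratio(0.510, 0.399)
```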

  20. V/STOL tilt rotor aircraft study. Volume 1: Conceptual design of useful military and/or commercial aircraft

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The conceptual designs of four useful tilt-rotor aircraft for the 1975 to 1980 time period are presented. Parametric studies leading to design point selection are described, and the characteristics and capabilities of each configuration are presented. An assessment is made of current technology status, and additional tilt-rotor research programs are recommended to minimize the time, cost, and risk of development of these vehicles.

  1. Thinking too positive? Revisiting current methods of population genetic selection inference.

    PubMed

    Bank, Claudia; Ewing, Gregory B; Ferrer-Admettla, Anna; Foll, Matthieu; Jensen, Jeffrey D

    2014-12-01

    In the age of next-generation sequencing, the availability of increasing amounts and improved quality of data at decreasing cost ought to allow for a better understanding of how natural selection is shaping the genome than ever before. However, alternative forces, such as demography and background selection (BGS), obscure the footprints of positive selection that we would like to identify. In this review, we illustrate recent developments in this area, and outline a roadmap for improved selection inference. We argue (i) that the development and obligatory use of advanced simulation tools is necessary for improved identification of selected loci, (ii) that genomic information from multiple time points will enhance the power of inference, and (iii) that results from experimental evolution should be utilized to better inform population genomic studies. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Centroid tracker and aimpoint selection

    NASA Astrophysics Data System (ADS)

    Venkateswarlu, Ronda; Sujata, K. V.; Venkateswara Rao, B.

    1992-11-01

    Autonomous fire-and-forget weapons have gained importance to achieve an accurate first-pass kill by hitting the target at an appropriate aim point. The centroid of the image presented by a target in the field of view (FOV) of a sensor is generally accepted as the aimpoint for these weapons. Centroid trackers are applicable only when the target image is of significant size in the FOV of the sensor but does not overflow it. As the range between the sensor and the target decreases, however, the image of the target grows and finally overflows the FOV at close ranges, and the centroid point on the target keeps changing, which is undesirable. Moreover, the centroid need not be the most desired/vulnerable point on the target. For hardened targets like tanks, proper aimpoint selection and guidance up to almost zero range are essential to achieve maximum kill probability. This paper presents a centroid tracker realization. As the centroid offers a stable tracking point, it can be used as a reference to select the proper aimpoint. The centroid and the desired aimpoint are simultaneously tracked to avoid jamming by flares and also to handle the problems arising from image overflow. Thresholding of the gray-level image to a binary image is a crucial step in a centroid tracker. Different thresholding algorithms are discussed and a suitable algorithm is chosen. The real-time hardware implementation of the centroid tracker with a suitable thresholding technique is presented, including the interfacing to a multimode tracker for autonomous target tracking and aimpoint selection. The hardware uses very high speed arithmetic and programmable logic devices to meet the speed requirement, and a microprocessor-based subsystem for system control. The tracker has been evaluated in a field environment.
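    The core computation, thresholding a gray-level frame and taking the blob centroid as the tracking reference, can be sketched as follows (a software illustration with a hypothetical frame and threshold; the paper describes a hardware realization):

```python
import numpy as np

def centroid_aimpoint(frame, thresh):
    """Binarize a grayscale frame and return the target centroid
    (row, col) to serve as the stable tracking reference point."""
    mask = frame > thresh
    if not mask.any():
        return None                      # no target in the field of view
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

frame = np.zeros((8, 8))
frame[2:5, 3:6] = 200.0                  # bright 3x3 target blob
cy, cx = centroid_aimpoint(frame, 128)   # centroid of the blob
```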

  3. Moire technique utilization for detection and measurement of scoliosis

    NASA Astrophysics Data System (ADS)

    Zawieska, Dorota; Podlasiak, Piotr

    1993-02-01

    Moire projection method enables non-contact measurement of the shape or deformation of different surfaces and constructions by fringe pattern analysis. The fringe map acquisition of the whole surface of the object under test is one of the main advantages compared with 'point by point' methods. The computer analyzes the shape of the whole surface and next user can selected different points or cross section of the object map. In this paper a few typical examples of an application of the moire technique in solving different medical problems will be presented. We will also present to you the equipment the moire pattern analysis is done in real time using the phase stepping method with CCD camera.

  4. Selection of floating-point or fixed-point for adaptive noise canceller in somatosensory evoked potential measurement.

    PubMed

    Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong

    2007-01-01

    The adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of somatosensory evoked potentials (SEP). For efficient hardware application of the ANC, a fixed-point algorithm allows fast, cost-efficient construction and low power consumption in FPGA designs. However, it is still questionable whether the SNR improvement achieved by a fixed-point algorithm is as good as that of a floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANC algorithms applied to SEP signals. The selection of the step-size parameter (μ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the outputs of the fixed-point ANC showed higher distortion from real SEP signals than those of the floating-point ANC. However, the difference decreased with increasing μ. With an optimal selection of μ, the fixed-point ANC can achieve results as good as the floating-point algorithm.
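    The comparison can be sketched with a least-mean-squares (LMS) canceller in which fixed-point arithmetic is emulated by quantizing the filter weights. This is illustrative only: the study's actual filter structure, word lengths, and μ values are not given here, and a sinusoid stands in for the SEP waveform.

```python
import numpy as np

def lms_anc(d, x, mu, n_taps=8, q_bits=None):
    """LMS adaptive noise canceller.  d: primary input (signal + noise),
    x: noise reference.  q_bits quantizes the weights to emulate a
    fixed-point implementation; None keeps full floating point."""
    w = np.zeros(n_taps)
    out = np.zeros(len(d))
    for n in range(n_taps - 1, len(d)):
        xv = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ xv                    # error = cleaned-signal estimate
        w = w + 2 * mu * e * xv
        if q_bits is not None:
            step = 2.0 ** -q_bits            # round weights to fixed grid
            w = np.round(w / step) * step
        out[n] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(2000)
sep = np.sin(2 * np.pi * t / 50)             # stand-in for the SEP waveform
ref = rng.normal(size=t.size)                # noise reference channel
d = sep + np.convolve(ref, [0.6, 0.3], mode="same")
clean_float = lms_anc(d, ref, mu=0.01)               # floating point
clean_fixed = lms_anc(d, ref, mu=0.01, q_bits=10)    # emulated fixed point
```

After convergence, both versions cancel most of the reference-correlated noise; coarser quantization (smaller `q_bits`) increases the residual distortion, which is the trade-off the study quantifies.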

  5. The Effects of Prior Knowledge Activation on Free Recall and Study Time Allocation.

    ERIC Educational Resources Information Center

    Machiels-Bongaerts, Maureen; And Others

    The effects of mobilizing prior knowledge on information processing were studied. Two hypotheses, the cognitive set-point hypothesis and the selective attention hypothesis, try to account for the facilitation effects of prior knowledge activation. These hypotheses predict different recall patterns as a result of mobilizing prior knowledge. In…

  6. Understanding Recovery from Object Substitution Masking

    ERIC Educational Resources Information Center

    Goodhew, Stephanie C.; Dux, Paul E.; Lipp, Ottmar V.; Visser, Troy A. W.

    2012-01-01

    When we look at a scene, we are conscious of only a small fraction of the available visual information at any given point in time. This raises profound questions regarding how information is selected, when awareness occurs, and the nature of the mechanisms underlying these processes. One tool that may be used to probe these issues is…

  7. The Recruiting Game: Toward a New System of Intercollegiate Sport. Second Edition, Revised.

    ERIC Educational Resources Information Center

    Rooney, John F., Jr.

    Problems in recruitment for big-time collegiate sports are updated, and an eleven-point improvement program is proposed. Statistics on football and basketball recruitment are updated, many through the 1985 season. New focus is placed on "blue chip" recruiting, and maps of recruiting by selected institutions, conferences, and states are…

  8. Methods for measuring populations of small, diurnal forest birds.

    Treesearch

    D.A. Manuwal; A.B. Carey

    1991-01-01

    Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...

  9. Dominant Personality Types in Public Accounting: Selection Bias or Indoctrinated?

    ERIC Educational Resources Information Center

    Burton, Hughlene; Daugherty, Brian; Dickins, Denise; Schisler, Dan

    2016-01-01

    Prior studies concerning the personality type and preferences of accountants generally draw conclusions based upon the reports of either practicing accountants, or accounting students, at a single point in time. So while much is known about the personality type of accountants in general, left unexplored is the question of whether public…

  10. Parity-time-symmetric teleportation

    NASA Astrophysics Data System (ADS)

    Ra'di, Y.; Sounas, D. L.; Alù, A.; Tretyakov, S. A.

    2016-06-01

    We show that electromagnetic plane waves can be fully "teleported" through thin, nearly fully reflective sheets, assisted by a pair of parity-time-symmetric lossy and active sheets in front and behind the screen. The proposed structure is able to almost perfectly absorb incident waves over a wide range of frequency and incidence angles, while waves having a specific frequency and incidence angle are replicated behind the structure in synchronization with the input signal. It is shown that the proposed structure can be designed to teleport waves at any desired frequency and incidence angle. Furthermore, we generalize the proposed concept to the case of teleportation of electromagnetic waves over electrically long distances, enabling full absorption at one surface and the synthesis of the same signal at another point located electrically far away from the first surface. The physical principle behind this selective teleportation is discussed, and similarities and differences with tunneling and cloaking concepts based on PT symmetry are investigated. From the application point of view, the proposed structure works as an extremely selective filter, both in frequency and spatial domains.

  11. Statistical Considerations Concerning Dissimilar Regulatory Requirements for Dissolution Similarity Assessment. The Example of Immediate-Release Dosage Forms.

    PubMed

    Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria

    2017-05-01

    When performing in vitro dissolution testing, especially in the area of biowaivers, it is necessary to follow regulatory guidelines to minimize the risk of an unsafe or ineffective product being approved. The present study examines model-independent and model-dependent methods of comparing dissolution profiles, based on various international guidelines that are compared and contrasted. Dissolution profiles for immediate-release solid oral dosage forms were generated. The test material comprised tablets containing several substances, with at least 85% of the labeled amount dissolved within 15 min, 20-30 min, or 45 min. Dissolution profile similarity can vary with regard to the following criteria: time point selection (including the last time point), coefficient of variation, and statistical method selection. Variation between regulatory guidance and statistical methods can raise methodological questions and potentially result in different outcomes when reporting dissolution profile testing. The harmonization of existing guidelines would address existing problems concerning the interpretation of regulatory recommendations and research findings. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
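    The most widely used model-independent comparison is the f2 similarity factor from the FDA/EMA guidance (profiles with f2 ≥ 50 are judged similar); a minimal sketch, with hypothetical percent-dissolved values at each sampled time point:

```python
import math

def f2_similarity(ref, test):
    """Model-independent similarity factor f2: 50 * log10 of 100 over
    the root of 1 + mean squared difference between the two profiles."""
    n = len(ref)
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / n
    return 50 * math.log10(100 / math.sqrt(1 + msd))

ref = [20, 45, 75, 90]      # % dissolved at each time point (reference)
test = [18, 42, 78, 92]     # % dissolved (test product)
f2 = f2_similarity(ref, test)
```

Identical profiles give f2 = 100, and f2 falls as the profiles diverge, which is why the choice of time points (including the last one) can change the verdict.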

  12. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1996-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.

  13. Accelerometer Method and Apparatus for Integral Display and Control Functions

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1998-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
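    The signal chain the claims describe, integrating the acceleration signal to velocity and comparing it against a selectable trip point, can be sketched digitally (illustrative only; the patent describes analog circuitry):

```python
def velocity_from_accel(accel, dt):
    """Cumulative trapezoidal integration of an acceleration trace."""
    v, out = 0.0, []
    for a0, a1 in zip(accel, accel[1:]):
        v += 0.5 * (a0 + a1) * dt
        out.append(v)
    return out

def tripped(signal, trip_point):
    """Digitally compatible alert: True once any sample magnitude
    exceeds the selected trip point."""
    return any(abs(s) > trip_point for s in signal)

vel = velocity_from_accel([1.0] * 11, dt=0.1)   # constant 1.0 for 1 s
alert = tripped(vel, trip_point=0.95)           # fires near v = 1.0
```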

  14. Breadth of Coverage, Ease of Use, and Quality of Mobile Point-of-Care Tool Information Summaries: An Evaluation

    PubMed Central

    Ren, Jinma

    2016-01-01

Background: With advances in mobile technology, accessibility of clinical resources at the point of care has increased. Objective: The objective of this research was to identify if six selected mobile point-of-care tools meet the needs of clinicians in internal medicine. Point-of-care tools were evaluated for breadth of coverage, ease of use, and quality. Methods: Six point-of-care tools were evaluated utilizing four different devices (two smartphones and two tablets). Breadth of coverage was measured, for selected International Classification of Diseases, Ninth Revision codes, as whether information on summary, etiology, pathophysiology, clinical manifestations, diagnosis, treatment, and prognosis was provided. Quality measures included treatment and diagnostic inline references and individual and application time stamping. Ease of use covered search within topic, table of contents, scrolling, affordance, connectivity, and personal accounts. Analysis of variance based on the rank of score was used. Results: Breadth of coverage was similar among Medscape (mean 6.88), UpToDate (mean 6.51), DynaMedPlus (mean 6.46), and EvidencePlus (mean 6.41) (P>.05), with DynaMed (mean 5.53) and Epocrates (mean 6.12) scoring significantly lower (P<.05). For ease of use, DynaMedPlus had the highest score and EvidencePlus the lowest (6.0 vs 4.0, respectively, P<.05). For quality, reviewers gave the same score (4.00) to all tools except Medscape, which was rated lower (P<.05). Conclusions: For breadth of coverage, most point-of-care tools were similar with the exception of DynaMed. For ease of use, only UpToDate and DynaMedPlus allow for search within a topic. All point-of-care tools have remote access with the exception of UpToDate and Essential Evidence Plus. All tools except Medscape covered criteria for quality evaluation. Overall, there was no significant difference between the point-of-care tools with regard to coverage on common topics used by internal medicine clinicians.
Selection of point-of-care tools is highly dependent on individual preference based on ease of use and cost of the application. PMID:27733328

  15. Vibration Pattern Imager (VPI): A control and data acquisition system for scanning laser vibrometers

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Brown, Donald E.; Shaffer, Thomas A.

    1993-01-01

The Vibration Pattern Imager (VPI) system was designed to control and acquire data from scanning laser vibrometer sensors. The PC-based system uses a digital signal processing (DSP) board and an analog I/O board to control the sensor and to process the data. The VPI system was originally developed for use with the Ometron VPI Sensor, but can be readily adapted to any commercially available sensor which provides an analog output signal and requires analog inputs for control of mirror positioning. The sensor itself is not part of the VPI system. A graphical interface program, which runs on a PC under the MS-DOS operating system, functions in an interactive mode and communicates with the DSP and I/O boards in a user-friendly fashion through the aid of pop-up menus. Two types of data may be acquired with the VPI system: single point or 'full field.' In the single point mode, time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and is stored by the PC. The position of the measuring point (adjusted by mirrors in the sensor) is controlled via a mouse input. The mouse input is translated to output voltages by the D/A converter on the I/O board to control the mirror servos. In the 'full field' mode, the measurement point is moved over a user-selectable rectangular area. The time series data is sampled by the A/D converter on the I/O board (at a user-defined sampling rate for a selectable number of samples) and converted to a root-mean-square (rms) value by the DSP board. The rms 'full field' velocity distribution is then uploaded for display and storage on the PC.

  16. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  17. Cross-modal decoupling in temporal attention.

    PubMed

    Mühlberg, Stefanie; Oriolo, Giovanni; Soto-Faraco, Salvador

    2014-06-01

    Prior studies have repeatedly reported behavioural benefits to events occurring at attended, compared to unattended, points in time. It has been suggested that, as for spatial orienting, temporal orienting of attention spreads across sensory modalities in a synergistic fashion. However, the consequences of cross-modal temporal orienting of attention remain poorly understood. One challenge is that the passage of time leads to an increase in event predictability throughout a trial, thus making it difficult to interpret possible effects (or lack thereof). Here we used a design that avoids complete temporal predictability to investigate whether attending to a sensory modality (vision or touch) at a point in time confers beneficial access to events in the other, non-attended, sensory modality (touch or vision, respectively). In contrast to previous studies and to what happens with spatial attention, we found that events in one (unattended) modality do not automatically benefit from happening at the time point when another modality is expected. Instead, it seems that attention can be deployed in time with relative independence for different sensory modalities. Based on these findings, we argue that temporal orienting of attention can be cross-modally decoupled in order to flexibly react according to the environmental demands, and that the efficiency of this selective decoupling unfolds in time. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Effects of tranexamic acid on coagulation indexes of patients undergoing heart valve replacement surgery under cardiopulmonary bypass

    PubMed Central

    Liu, Fei; Xu, Dong; Zhang, Kefeng; Zhang, Jian

    2016-01-01

    This study aims to explore the effects of tranexamic acid on the coagulation indexes of patients undergoing heart valve replacement surgery under the condition of cardiopulmonary bypass (CPB). One hundred patients who conformed to the inclusive criteria were selected and divided into a tranexamic acid group and a non-tranexamic acid group. They all underwent heart valve replacement surgery under CPB. Patients in the tranexamic acid group were intravenously injected with 1 g of tranexamic acid (100 mL) at the time point after anesthesia induction and before skin incision and at the time point after the neutralization of heparin. Patients in the non-tranexamic acid group were given 100 mL of normal saline at corresponding time points, respectively. Then the coagulation indexes of the two groups were analyzed. The activated blood clotting time (ACT) of the two groups was within normal scope before CPB, while four coagulation indexes including prothrombin time (PT), activated partial thromboplastin time (APTT), international normalized ratio (INR), and fibrinogen (FIB) had significant increases after surgery; the PT and INR of the tranexamic acid group had a remarkable decline after surgery. All the findings suggest that the application of tranexamic acid in heart valve replacement surgery under CPB can effectively reduce intraoperative and postoperative blood loss. PMID:27694613

  19. Terminal attractors in neural networks

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
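The defining property of a terminal attractor, finite-time convergence caused by a Lipschitz-condition violation at the fixed point, can be checked on the classic one-dimensional example dx/dt = -x^(1/3). Separating variables gives x(t) = (x0^(2/3) - (2/3)t)^(3/2), which reaches zero exactly at t = (3/2)·x0^(2/3) rather than asymptotically. A minimal numerical sketch (the step size, tolerance, and function name are arbitrary choices, not from the paper):

```python
import numpy as np

def integrate_terminal(x0, dt=1e-4, t_max=5.0, tol=1e-5):
    """Forward-Euler integration of dx/dt = -sign(x)|x|^(1/3).
    Returns the time at which the state first reaches (numerical) zero."""
    x, t = x0, 0.0
    while t < t_max:
        if abs(x) < tol:          # fixed point reached in finite time
            return t
        x -= dt * np.sign(x) * abs(x) ** (1.0 / 3.0)
        t += dt
    return None                   # no finite-time arrival (e.g. for the Lipschitz system dx/dt = -x)

x0 = 1.0
t_settle = integrate_terminal(x0)
t_theory = 1.5 * x0 ** (2.0 / 3.0)   # closed-form settling time (3/2) * x0^(2/3)
print(t_settle, t_theory)
```

A regular attractor such as dx/dt = -x would never trigger the tolerance in finite analytic time; the numerical settling time here agrees with the closed form to a few parts in a thousand.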

  20. On the effective point of measurement in megavoltage photon beams.

    PubMed

    Kawrakow, Iwan

    2006-06-01

This paper presents a numerical investigation of the effective point of measurement of thimble ionization chambers in megavoltage photon beams using Monte Carlo simulations with the EGSNRC system. It is shown that the effective point of measurement for relative photon beam dosimetry depends on every detail of the chamber design, including the cavity length, the mass density of the wall material, and the size of the central electrode, in addition to the cavity radius. Moreover, the effective point of measurement also depends on the beam quality and the field size. The paper therefore argues that the upstream shift of 0.6 times the cavity radius, recommended in current dosimetry protocols, is inadequate for accurate relative photon beam dosimetry, particularly in the build-up region. On the other hand, once the effective point of measurement is selected appropriately, measured depth-ionization curves can be equated to measured depth-dose curves for all depths within ±0.5%.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roper, J; Bradshaw, B; Godette, K

Purpose: To create a knowledge-based algorithm for prostate LDR brachytherapy treatment planning that standardizes plan quality using seed arrangements tailored to individual physician preferences while being fast enough for real-time planning. Methods: A dataset of 130 prior cases was compiled for a physician with an active prostate seed implant practice. Ten cases were randomly selected to test the algorithm. Contours from the 120 library cases were registered to a common reference frame. Contour variations were characterized on a point-by-point basis using principal component analysis (PCA). A test case was converted to PCA vectors using the same process and then compared with each library case using a Mahalanobis distance to evaluate similarity. Rank-order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Computational time was recorded. Any subsequent modifications were recorded that required input from a treatment planner to achieve an acceptable plan. Results: The computational time required to register contours from a test case and evaluate PCA similarity across the library was approximately 10 s. Five of the ten test cases did not require any seed additions, deletions, or moves to obtain an acceptable plan. The remaining five test cases required on average 4.2 seed modifications. The time to complete manual plan modifications was less than 30 s in all cases. Conclusion: A knowledge-based treatment planning algorithm was developed for prostate LDR brachytherapy based on principal component analysis. Initial results suggest that this approach can be used to quickly create treatment plans that require few if any modifications by the treatment planner. In general, test case plans have seed arrangements which are very similar to prior cases, and thus are inherently tailored to physician preferences.
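The core matching step, scoring a new case against each library case with a Mahalanobis distance over PCA scores, can be sketched as follows. The feature matrix is random stand-in data and all names and dimensions are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
library = rng.normal(size=(120, 40))        # 120 library cases x 40 contour features
test_case = rng.normal(size=40)

# PCA via SVD of the mean-centered library
mean = library.mean(axis=0)
centered = library - mean
_, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                                       # number of retained components (arbitrary here)
pcs = vt[:k]                                 # principal axes
var = (s[:k] ** 2) / (len(library) - 1)      # variance of scores along each axis

lib_scores = centered @ pcs.T
test_scores = (test_case - mean) @ pcs.T

# Mahalanobis distance in PCA space: the axes are uncorrelated, so the
# covariance is diagonal and the distance reduces to a scaled Euclidean norm
d = np.sqrt((((lib_scores - test_scores) ** 2) / var).sum(axis=1))
best_match = int(np.argmin(d))               # its seed arrangement would initialize the plan
print(best_match, float(d[best_match]))
```

Because the covariance in PCA space is diagonal, the per-case distance is cheap, which is consistent with the roughly 10 s library sweep reported above.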

  2. Bayesian dynamic modeling of time series of dengue disease case counts.

    PubMed

    Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-07-01

The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology employs dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients both for the calendar trend and for the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage errors at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables.
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
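The mean absolute percentage error used for the out-of-sample assessment is straightforward to compute; a small helper with made-up weekly counts:

```python
def mape(observed, predicted):
    """Mean absolute percentage error over paired observations (observed != 0)."""
    errors = [abs(o - p) / abs(o) for o, p in zip(observed, predicted)]
    return 100.0 * sum(errors) / len(errors)

# e.g. weekly dengue counts vs one-week-ahead predictions (hypothetical numbers)
print(round(mape([100, 120, 90, 110], [95, 130, 85, 100]), 2))  # -> 6.99
```

Note that MAPE is undefined for weeks with zero observed cases, which matters for low-incidence series; the study's low errors during low-volatility periods are consistent with this percentage-based definition.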

  3. Numerical and In Vitro Experimental Investigation of the Hemolytic Performance at the Off-Design Point of an Axial Ventricular Assist Pump.

    PubMed

    Liu, Guang-Mao; Jin, Dong-Hai; Jiang, Xi-Hang; Zhou, Jian-Ye; Zhang, Yan; Chen, Hai-Bo; Hu, Sheng-Shou; Gui, Xing-Min

    The ventricular assist pumps do not always function at the design point; instead, these pumps may operate at unfavorable off-design points. For example, the axial ventricular assist pump FW-2, in which the design point is 5 L/min flow rate against 100 mm Hg pressure increase at 8,000 rpm, sometimes works at off-design flow rates of 1 to 4 L/min. The hemolytic performance of the FW-2 at both the design point and at off-design points was estimated numerically and tested in vitro. Flow characteristics in the pump were numerically simulated and analyzed with special attention paid to the scalar sheer stress and exposure time. An in vitro hemolysis test was conducted to verify the numerical results. The simulation results showed that the scalar shear stress in the rotor region at the 1 L/min off-design point was 70% greater than at the 5 L/min design point. The hemolysis index at the 1 L/min off-design point was 3.6 times greater than at the 5 L/min design point. The in vitro results showed that the normalized index of hemolysis increased from 0.017 g/100 L at the 5 L/min design point to 0.162 g/100 L at the 1 L/min off-design point. The hemolysis comparison between the different blood pump flow rates will be helpful for future pump design point selection and will guide the usage of ventricular assist pumps. The hemolytic performance of the blood pump at the working point in the clinic should receive more focus.

  4. SU-E-T-362: Automatic Catheter Reconstruction of Flap Applicators in HDR Surface Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buzurovic, I; Devlin, P; Hansen, J

    2014-06-01

Purpose: Catheter reconstruction is crucial for the accurate delivery of radiation dose in HDR brachytherapy. The process becomes complicated and time-consuming for large superficial clinical targets with a complex topology. A novel method for the automatic catheter reconstruction of flap applicators is proposed in this study. Methods: We have developed a program package capable of image manipulation, using C++ class libraries of The-Visualization-Toolkit (VTK) software system. The workflow for automatic catheter reconstruction is: a) an anchor point is placed in 3D or in the axial view of the first slice at the tip of the first, last, and middle points for the curved surface; b) similar points are placed on the last slice of the image set; c) the surface detection algorithm automatically registers the points to the images and applies the surface reconstruction filter; d) a structured grid surface is then generated through the center of the treatment catheters placed at a distance of 5 mm from the patient's skin. As a result, a mesh-style plane is generated with the reconstructed catheters placed 10 mm apart. To demonstrate automatic catheter reconstruction, we used CT images of patients diagnosed with cutaneous T-cell lymphoma and imaged with Freiburg-Flap-Applicators (Nucletron™-Elekta, Netherlands). The coordinates for each catheter were generated and compared to the control points selected during the manual reconstruction for 16 catheters and 368 control points. Results: The variation of the catheter tip positions between the automatically and manually reconstructed catheters was 0.17 mm (SD=0.23 mm). The position difference between the manually selected catheter control points and the corresponding points obtained automatically was 0.17 mm in the x-direction (SD=0.23 mm), 0.13 mm in the y-direction (SD=0.22 mm), and 0.14 mm in the z-direction (SD=0.24 mm).
Conclusion: This study shows the feasibility of the automatic catheter reconstruction of flap applicators with a high level of positioning accuracy. Implementation of this technique has the potential to decrease planning time and may improve overall quality in superficial brachytherapy.

  5. The coalescent of a sample from a binary branching process.

    PubMed

    Lambert, Amaury

    2018-04-25

    At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.

  6. One-loop gravitational wave spectrum in de Sitter spacetime

    NASA Astrophysics Data System (ADS)

    Fröb, Markus B.; Roura, Albert; Verdaguer, Enric

    2012-08-01

The two-point function for tensor metric perturbations around de Sitter spacetime, including one-loop corrections from massless conformally coupled scalar fields, is calculated exactly. We work in the Poincaré patch (with spatially flat sections) and employ dimensional regularization for the renormalization process. Unlike previous studies we obtain the result for arbitrary time separations rather than just equal times. Moreover, in contrast to existing results for tensor perturbations, ours is manifestly invariant with respect to the subgroup of de Sitter isometries corresponding to a simultaneous time translation and rescaling of the spatial coordinates. Selecting the right initial state for the interacting theory, via an appropriate iε prescription, is crucial for this invariance. Finally, we show that although the two-point function is a well-defined spacetime distribution, the equal-time limit of its spatial Fourier transform is divergent. Therefore, contrary to the well-defined distribution for arbitrary time separations, the power spectrum is strictly speaking ill-defined when loop corrections are included.

  7. Williams' paradox and the role of phenotypic plasticity in sexual systems.

    PubMed

    Leonard, Janet L

    2013-10-01

As George Williams pointed out in 1975, although evolutionary explanations, based on selection acting on individuals, have been developed for the advantages of simultaneous hermaphroditism, sequential hermaphroditism and gonochorism, none of these evolutionary explanations adequately explains the current distribution of these sexual systems within the Metazoa (Williams' Paradox). As Williams further pointed out, the current distribution of sexual systems is explained largely by phylogeny. Since 1975, we have made a great deal of empirical and theoretical progress in understanding sexual systems. However, we still lack a theory that explains the current distribution of sexual systems in animals, and we do not understand the evolutionary transitions between hermaphroditism and gonochorism. Empirical data, collected over the past 40 years, demonstrate that gender may have more phenotypic plasticity than was previously realized. We know that not only sequential hermaphrodites, but also simultaneous hermaphrodites have phenotypic plasticity that alters sex allocation in response to social and environmental conditions. A focus on phenotypic plasticity suggests that one sees a continuum in animals between genetically determined gonochorism on the one hand and simultaneous hermaphroditism on the other, with various types of sequential hermaphroditism and environmental sex determination as points along the spectrum. Here I suggest that perhaps the reason we have been unable to resolve Williams' Paradox is because the problem was not correctly framed. First, the fact that, for example, simultaneous hermaphroditism provides reproductive assurance or that dioecy ensures outcrossing does not mean that no other evolutionary paths can provide adaptive responses to those selective pressures. Second, perhaps the question we need to ask is: What selective forces favor increased versus reduced phenotypic plasticity in gender expression?
It is time to begin to look at the question of sexual system as one of understanding the timing and degree of phenotypic plasticity in gender expression in the life history in terms of selection acting on a continuum, rather than on a set of discrete sexual systems.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, R; Zhu, X; Li, S

Purpose: High Dose Rate (HDR) brachytherapy forward planning is principally an iterative process; hence, plan quality is affected by planners' experience and limited planning time. This may lead to sporadic errors and inconsistencies in planning. A statistical tool based on previously approved clinical treatment plans would help to maintain the consistency of planning quality and improve the efficiency of second checking. Methods: An independent dose calculation tool was developed from commercial software. Thirty-three previously approved cervical HDR plans with the same prescription dose (550 cGy), applicator type, and treatment protocol were examined, and ICRU-defined reference point doses (bladder, vaginal mucosa, rectum, and points A/B) along with dwell times were collected. The dose calculation tool then calculated an appropriate range with a 95% confidence interval for each parameter obtained, which would be used as the benchmark for evaluation of those parameters in future HDR treatment plans. Model quality was verified using five randomly selected approved plans from the same dataset. Results: Dose variations appear to be larger at the reference points of the bladder and mucosa as compared with the rectum. Most reference point doses from the verification plans fell within the predicted range, except the doses at two points of the rectum and two points of reference position A (owing to rectal anatomical variations and clinical adjustment in prescription points, respectively). Similar results were obtained for tandem and ring dwell times, despite relatively larger uncertainties. Conclusion: This statistical tool provides an insight into the clinically acceptable range of cervical HDR plans, which could be useful in plan checking and identifying potential planning errors, thus improving the consistency of plan quality.
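The benchmarking idea, deriving an acceptable range for each plan parameter from prior approved plans and flagging new plans that fall outside it, can be sketched generically. A normal-theory mean ± 1.96·SD range is assumed here purely for illustration, and the dose values are invented:

```python
import statistics

def benchmark_range(values, z=1.96):
    """95% range (mean ± z*sd) for a plan parameter from prior approved plans."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return mean - z * sd, mean + z * sd

def flag(value, low, high):
    """Flag a new plan's parameter that falls outside the benchmark range."""
    return not (low <= value <= high)

# Hypothetical bladder reference-point doses (cGy) from prior approved plans
prior_bladder_doses = [310, 295, 330, 305, 320, 300, 315, 325, 290, 310]
low, high = benchmark_range(prior_bladder_doses)
print(flag(500, low, high))  # a clearly out-of-range dose is flagged
```

A flagged parameter would prompt the second checker to look for anatomical variation or a prescription adjustment, the two causes of out-of-range points noted in the abstract.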

  9. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

In recent years, with the development of positioning algorithms and FPGAs, real-time camera positioning implemented on an FPGA has become feasible, combining rapidity with accuracy. Through an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor, driven by an FPGA hardware driver, extracts the pixels of the three target objects; visible-light LEDs are used here as the target points of the instrument. (2) Prior to extraction of the feature-point coordinates, the image is filtered (median filtering is used here) so that the physical properties of the platform do not affect the system. (3) The marker-point coordinates are extracted by the FPGA hardware circuit, using a new iterative threshold selection method for image segmentation. The segmented binary image is then labeled, and the coordinates of the feature points are calculated by the center-of-gravity method. (4) The direct linear transformation (DLT) and the extreme constraints method are applied to the three-dimensional reconstruction of space coordinates with the planar-array CMOS system. An SOPC (system on a programmable chip) is used here, taking advantage of its dual-core computing to run matching and coordinate operations separately, thus increasing processing speed.
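Two of the steps above, iterative threshold selection for segmentation and the center-of-gravity coordinate estimate, have a compact software analogue. This is a plain-Python sketch of a standard isodata-style iteration on a synthetic image, not the FPGA implementation:

```python
import numpy as np

def iterative_threshold(image, tol=0.5):
    """Isodata-style iterative threshold: split at t, then move t to the
    midpoint of the two class means until it stabilizes.
    Assumes both classes stay non-empty throughout the iteration."""
    t = image.mean()
    while True:
        fg, bg = image[image > t], image[image <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def centroid(binary):
    """Center-of-gravity (row, col) of a binary marker image."""
    rows, cols = np.nonzero(binary)
    return rows.mean(), cols.mean()

# Hypothetical 8-bit frame: dark background with one bright LED spot
img = np.full((64, 64), 10.0)
img[20:24, 30:34] = 200.0
t = iterative_threshold(img)
r, c = centroid(img > t)
print(round(r, 1), round(c, 1))  # center of the 4x4 spot
```

On hardware the same two stages map naturally onto a pixel-stream pipeline, since both the class means and the center of gravity are running sums.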

  10. On selecting satellite conjunction filter parameters

    NASA Astrophysics Data System (ADS)

    Alfano, Salvatore; Finkleman, David

    2014-06-01

This paper extends concepts of signal detection theory to predict the performance of conjunction screening techniques and to guide the selection of keepout and screening thresholds. The most efficient way to identify satellites likely to collide is to employ filters that identify orbiting pairs which should not come close enough, over a prescribed time period, to be considered hazardous. Such pairings can then be eliminated from further computation to accelerate overall processing. Approximations inherent in filtering techniques include screening using only unperturbed Newtonian two-body astrodynamics, along with uncertainties in orbit elements. Therefore, every filtering process is vulnerable to including objects that are not threats and excluding some that are: Type I and Type II errors, respectively. The approach in this paper guides selection of the best operating point for the filters, suited to a user's tolerance for false alarms and unwarned threats. We demonstrate the approach using three archetypal filters with an initial three-day span, select filter parameters based on performance, and then test those parameters using eight historical snapshots of the space catalog. This work provides a mechanism for selecting filter parameters, but the choices depend on the circumstances.
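The trade-off being quantified, a screening threshold trading Type I errors (false alarms) against Type II errors (missed threats), can be illustrated with a toy sweep; the miss-distance values below are fabricated purely for illustration:

```python
def error_rates(threshold, threats, non_threats):
    """Type I rate: non-threats flagged (predicted min distance < threshold).
    Type II rate: real threats screened out (min distance >= threshold)."""
    false_alarms = sum(d < threshold for d in non_threats) / len(non_threats)
    missed = sum(d >= threshold for d in threats) / len(threats)
    return false_alarms, missed

# Hypothetical predicted minimum approach distances (km)
threats = [0.2, 0.5, 1.1, 2.0]          # pairs that actually come dangerously close
non_threats = [3.0, 5.5, 8.0, 1.5, 12.0]

for threshold in (1.0, 2.5, 5.0):
    fa, miss = error_rates(threshold, threats, non_threats)
    print(threshold, fa, miss)
```

Sweeping the threshold traces out the operating curve; the paper's contribution is choosing the operating point on that curve to match a user's tolerance for the two error types.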

  11. New Observations of Subarcsecond Photospheric Bright Points

    NASA Technical Reports Server (NTRS)

    Berger, T. E.; Schrijver, C. J.; Shine, R. A.; Tarbell, T. D.; Title, A. M.; Scharmer, G.

    1995-01-01

We have used an interference filter centered at 4305 A within the bandhead of the CH radical (the 'G band') and real-time image selection at the Swedish Vacuum Solar Telescope on La Palma to produce very high contrast images of subarcsecond photospheric bright points at all locations on the solar disk. During the 6-day period of 15-20 Sept. 1993 we observed active region NOAA 7581 from its appearance on the East limb to a near-disk-center position on 20 Sept. A total of 1804 bright points were selected for analysis from the disk center image using feature extraction image processing techniques. The measured FWHM distribution of the bright points in the image is lognormal with a modal value of 220 km (0.30 sec) and an average value of 250 km (0.35 sec). The smallest measured bright point diameter is 120 km (0.17 sec) and the largest is 600 km (0.69 sec). Approximately 60% of the measured bright points are circular (eccentricity approx. 1.0), the average eccentricity is 1.5, and the maximum eccentricity corresponding to filigree in the image is 6.5. The peak contrast of the measured bright points is normally distributed. The contrast distribution variance is much greater than the measurement accuracy, indicating a large spread in intrinsic bright-point contrast. When referenced to an averaged 'quiet-Sun' area in the image, the modal contrast is 29% and the maximum value is 75%; when referenced to an average intergranular lane brightness in the image, the distribution has a modal value of 61% and a maximum of 119%. The bin-averaged contrast of G-band bright points is constant across the entire measured size range. The measured area of the bright points, corrected for pixelation and selection effects, covers about 1.8% of the total image area. Large pores and micropores occupy an additional 2% of the image area, implying a total area fraction of magnetic proxy features in the image of 3.8%.
We discuss the implications of this area fraction measurement in the context of previously published measurements which show that typical active region plage has a magnetic filling factor on the order of 10% or greater. The results suggest that in the active region analyzed here, less than 50% of the small-scale magnetic flux tubes are demarcated by visible proxies such as bright points or pores.

  12. Comparison of two methods for estimating base flow in selected reaches of the South Platte River, Colorado

    USGS Publications Warehouse

    Capesius, Joseph P.; Arnold, L. Rick

    2012-01-01

    The Mass Balance results were quite variable over time, to the point that they appeared inconsistent with the concept of groundwater flow as gradual and slow. The large degree of variability in the day-to-day and month-to-month Mass Balance results is likely the result of many factors. These factors could include ungaged stream inflows or outflows, short-term streamflow losses to and gains from temporary bank storage, and any lag in streamflow accounting owing to the lag time of flow within a reach. The Pilot Point time-series results were much less variable than the Mass Balance results, and extreme values were effectively constrained. Less day-to-day variability, smaller-magnitude extreme values, and smoother transitions in base-flow estimates provided by the Pilot Point method are more consistent with a conceptual model of groundwater flow as gradual and slow. The Pilot Point method thus provided a better fit to the conceptual model of groundwater flow and appeared to provide reasonable estimates of base flow.
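    The contrast between the two approaches can be caricatured in a few lines. The sketch below is illustrative only: the reach flows and the moving-average smoother are assumptions, and the actual Pilot Point method calibrates a parameter field rather than smoothing a series. A reach mass balance simply differences gaged flows, so every measurement artifact passes straight into the base-flow estimate; a smoother stands in here for the damping that the Pilot Point method provides.

```python
def mass_balance_base_flow(q_down, q_up, tributaries=0.0, diversions=0.0):
    """Base-flow gain across one reach for one time step (e.g., in cfs)."""
    return q_down - q_up - tributaries + diversions

def smooth(series, window=5):
    """Centered moving average; a crude stand-in for the damping the Pilot
    Point method provides (the real method calibrates a parameter field,
    it does not smooth a time series)."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# Hypothetical daily gaged flows for one reach (downstream, upstream):
downstream = [120.0, 95.0, 160.0, 90.0, 140.0]
upstream = [80.0, 70.0, 75.0, 72.0, 78.0]
daily_gain = [mass_balance_base_flow(d, u) for d, u in zip(downstream, upstream)]
smoothed = smooth(daily_gain, window=5)
```

    The smoothed series has a much smaller day-to-day spread than the raw mass-balance gains, mirroring the behavior reported above.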

  13. New clinical insights for transiently evoked otoacoustic emission protocols.

    PubMed

    Hatzopoulos, Stavros; Grzanka, Antoni; Martini, Alessandro; Konopka, Wieslaw

    2009-08-01

    The objective of the study was to optimize the area of a time-frequency (TF) analysis and then investigate any stable patterns in the time-frequency structure of otoacoustic emissions in a population of 152 healthy adults sampled over one year. TEOAE recordings were collected from 302 ears in subjects presenting normal hearing and normal impedance values. The responses were analyzed by the Wigner-Ville distribution (WVD). The TF region of analysis was optimized by examining the energy content of various rectangular and triangular TF regions. The TEOAE components from the initial recordings and from the recordings made 12 months later were compared in the optimized TF region. The best region for TF analysis was identified with base point 1 at 2.24 ms and 2466 Hz, base point 2 at 6.72 ms and 2466 Hz, and the top point at 2.24 ms and 5250 Hz. Correlation indices from the TF-optimized region were significantly higher than the traditional indices in the selected time window. An analysis of the TF data within a 12-month period indicated an 85% TEOAE component similarity in 90% of the tested subjects.
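    The optimized triangular analysis region can be made concrete with a small sketch. This is a hedged illustration: only the three vertex coordinates come from the abstract, while the signed-area point-in-triangle test and the (time, frequency, energy) sample format are our assumptions about how one might sum WVD energy inside the region.

```python
def _sign(p, a, b):
    # Signed area of the triangle (p, a, b); its sign tells which side of
    # edge a-b the point p lies on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_triangle(p, a, b, c):
    d1, d2, d3 = _sign(p, a, b), _sign(p, b, c), _sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

# Vertices of the optimized region: (time in ms, frequency in Hz)
A, B, C = (2.24, 2466.0), (6.72, 2466.0), (2.24, 5250.0)

def region_energy(tf_samples):
    """Sum energy of (t_ms, f_hz, energy) samples falling inside the region."""
    return sum(e for t, f, e in tf_samples if in_triangle((t, f), A, B, C))
```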

  14. Stability analysis of BWR nuclear-coupled thermal-hydraulics using a simple model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karve, A.A.; Rizwan-uddin; Dorning, J.J.

    1995-09-01

    A simple mathematical model is developed to describe the dynamics of the nuclear-coupled thermal-hydraulics in a boiling water reactor (BWR) core. The model, which incorporates the essential features of neutron kinetics and of single-phase and two-phase thermal-hydraulics, leads to a simple dynamical system comprised of a set of nonlinear ordinary differential equations (ODEs). The stability boundary is determined and plotted in the inlet-subcooling-number (enthalpy)/external-reactivity operating parameter plane. The eigenvalues of the Jacobian matrix of the dynamical system also are calculated at various steady-states (fixed points); the results are consistent with those of the direct stability analysis and indicate that a Hopf bifurcation occurs as the stability boundary in the operating parameter plane is crossed. Numerical simulations of the time-dependent, nonlinear ODEs are carried out for selected points in the operating parameter plane to obtain the actual damped and growing oscillations in the neutron number density, the channel inlet flow velocity, and the other phase variables. These indicate that the Hopf bifurcation is subcritical; hence, density wave oscillations with growing amplitude could result from a finite perturbation of the system even where the steady-state is stable. The power-flow map, frequently used by reactor operators during start-up and shut-down operation of a BWR, is mapped to the inlet-subcooling-number/neutron-density (operating-parameter/phase-variable) plane, and then related to the stability boundaries for different fixed inlet velocities corresponding to selected points on the flow-control line. The stability boundaries for different fixed inlet subcooling numbers corresponding to those selected points are plotted in the neutron-density/inlet-velocity phase variable plane, and then the points on the flow-control line are related to their respective stability boundaries in this plane.
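    The linearized part of the analysis, eigenvalues of the Jacobian at a fixed point, with a Hopf bifurcation where a complex pair crosses the imaginary axis, can be sketched on a toy system. This is not the BWR model: it is the two-variable Hopf normal form, whose fixed point at the origin has eigenvalues mu ± i, so stability flips exactly at mu = 0. The same recipe (numerical Jacobian, then eigenvalue real parts) applies to any ODE system.

```python
def f(state, mu):
    # Hopf normal form: dx/dt = mu*x - y - x*(x^2+y^2),
    #                   dy/dt = x + mu*y - y*(x^2+y^2)
    x, y = state
    r2 = x * x + y * y
    return (mu * x - y - x * r2, x + mu * y - y * r2)

def jacobian(fun, state, mu, h=1e-6):
    """Central-difference Jacobian of fun at the given state."""
    n = len(state)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        up, dn = list(state), list(state)
        up[j] += h
        dn[j] -= h
        fu, fd = fun(up, mu), fun(dn, mu)
        for i in range(n):
            J[i][j] = (fu[i] - fd[i]) / (2 * h)
    return J

def eig2(J):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:
        s = disc ** 0.5
        return [(tr + s) / 2, (tr - s) / 2]
    s = complex(0, (-disc) ** 0.5)
    return [(tr + s) / 2, (tr - s) / 2]

def is_stable(mu):
    """Fixed point at the origin is stable iff all eigenvalue real parts < 0."""
    return all(l.real < 0 for l in eig2(jacobian(f, (0.0, 0.0), mu)))
```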

  15. Series-nonuniform rational B-spline signal feedback: From chaos to any embedded periodic orbit or target point.

    PubMed

    Shao, Chenxi; Xue, Yong; Fang, Fang; Bai, Fangzhou; Yin, Peifeng; Wang, Binghong

    2015-07-01

    The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.
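    The feedback scheme can be sketched in a few lines on the Van der Pol oscillator. This is a hedged toy version, not the paper's implementation: the reference orbit here is a plain sine standing in for the SNURBS-reconstructed signal, the feedback acts on both position and velocity errors rather than a single output difference, and the gain value is an arbitrary assumption.

```python
import math

def van_der_pol_step(x, v, dt, mu=1.0, u=0.0):
    """One Euler step of the Van der Pol oscillator with control input u."""
    a = mu * (1.0 - x * x) * v - x + u
    return x + v * dt, v + a * dt

def run(k, steps=20000, dt=0.001):
    """Drive the oscillator toward the reference orbit sin(t) with gain k
    (k = 0 leaves the system uncontrolled on its limit cycle)."""
    x, v = 2.0, 0.0
    for i in range(steps):
        t = i * dt
        # Feedback = weighted difference between reference and current state.
        u = k * ((math.sin(t) - x) + (math.cos(t) - v))
        x, v = van_der_pol_step(x, v, dt, u=u)
    return x, v
```

    With a sufficiently large weight the output locks onto the reference orbit, which is the qualitative behavior the method exploits.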

  16. Series-nonuniform rational B-spline signal feedback: From chaos to any embedded periodic orbit or target point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Chenxi, E-mail: cxshao@ustc.edu.cn; Xue, Yong; Fang, Fang

    2015-07-15

    The self-controlling feedback control method requires an external periodic oscillator with special design, which is technically challenging. This paper proposes a chaos control method based on time series non-uniform rational B-splines (SNURBS for short) signal feedback. It first builds the chaos phase diagram or chaotic attractor with the sampled chaotic time series and any target orbit can then be explicitly chosen according to the actual demand. Second, we use the discrete timing sequence selected from the specific target orbit to build the corresponding external SNURBS chaos periodic signal, whose difference from the system current output is used as the feedback control signal. Finally, by properly adjusting the feedback weight, we can quickly lead the system to an expected status. We demonstrate both the effectiveness and efficiency of our method by applying it to two classic chaotic systems, i.e., the Van der Pol oscillator and the Lorenz chaotic system. Further, our experimental results show that compared with delayed feedback control, our method takes less time to obtain the target point or periodic orbit (from the starting point) and that its parameters can be fine-tuned more easily.

  17. Model selection criterion in survival analysis

    NASA Astrophysics Data System (ADS)

    Karabey, Uǧur; Tutkun, Nihal Ata

    2017-07-01

    Survival analysis deals with the time until occurrence of an event of interest, such as death, recurrence of an illness, the failure of equipment, or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural, or social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion or the Bayesian information criterion is used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.

  18. Damage imaging in a laminated composite plate using an air-coupled time reversal mirror

    DOE PAGES

    Le Bas, P. -Y.; Remillieux, M. C.; Pieczonka, L.; ...

    2015-11-03

    We demonstrate the possibility of selectively imaging the features of barely visible impact damage in a laminated composite plate by using an air-coupled time reversal mirror. The mirror consists of a number of piezoelectric transducers affixed to wedges of power-law profiles, which act as unconventional matching layers. The transducers are enclosed in a hollow reverberant cavity with an opening to allow progressive emission of the ultrasonic wave field towards the composite plate. The principle of time reversal is used to focus elastic waves at each point of a scanning grid spanning the surface of the plate, thus allowing localized inspection at each of these points. The proposed device and signal processing remove the need to be in direct contact with the plate and reveal the same features as vibrothermography and more features than a C-scan. More importantly, this device can decouple the features of the defect according to their orientation, by selectively focusing vector components of motion into the object, through air. For instance, a delamination can be imaged in one experiment using out-of-plane focusing, whereas a crack can be imaged in a separate experiment using in-plane focusing. This capability, inherited from the principle of time reversal, cannot be found in conventional air-coupled transducers.

  19. Photodynamic therapy with PhotoPoint photosensitiser MV6401, indium chloride methyl pyropheophorbide, achieves selective closure of rat corneal neovascularisation and rabbit choriocapillaris

    PubMed Central

    Ciulla, T A; Criswell, M H; Snyder, W J; Small, W

    2005-01-01

    Aim: The new photosensitiser PhotoPoint MV6401, indium chloride methyl pyropheophorbide, was assessed as a possible ocular photodynamic therapy agent in a rat model of experimentally induced corneal neovascularisation and in choriocapillaris closure in the rabbit. Optimal drug and light activation parameters were determined. Methods: MV6401 (Miravant Pharmaceuticals, Inc, Santa Barbara, CA, USA) was activated at 664 nm using a DD3-0665 (Miravant Systems Inc) 0.5 W diode laser. Corneal neovascularisation in rats was induced using an N-heptanol technique. The evaluated drug dosages, light dosages, and post-injection activation times ranged from 0.01–0.1 μmol/kg, 5–25 J/cm2, and 10–60 minutes, respectively. The efficacy of MV6401 on normal choriocapillaris and choroidal vessels was evaluated in rabbits with indirect ophthalmoscopy, fundus photography, fluorescein angiography, and histology. In rabbits, the evaluated drug dosages, light dosages, and post-injection activation times ranged from 0.025–0.25 μmol/kg, 3.3–20 J/cm2, and 10 minutes, respectively. Results: In the rat corneal neovascularisation model, an optimal intravenous drug dosage of 0.075 μmol/kg was activated by a 20 J/cm2 light dose at 10 minutes after drug administration, the results of which demonstrated early evidence of efficacy in ocular neovascularisation. In rabbits, closure of the normal choriocapillaris was selectively achieved at a drug dosage of 0.15 μmol/kg using light doses from 3.3 to 20 J/cm2. Conclusion: PhotoPoint MV6401 is a potent photosensitiser that demonstrates both efficacy and selectivity in experimental ocular models. PMID:15615758

  20. Photodynamic therapy with PhotoPoint photosensitiser MV6401, indium chloride methyl pyropheophorbide, achieves selective closure of rat corneal neovascularisation and rabbit choriocapillaris.

    PubMed

    Ciulla, T A; Criswell, M H; Snyder, W J; Small, W

    2005-01-01

    The new photosensitiser PhotoPoint MV6401, indium chloride methyl pyropheophorbide, was assessed as a possible ocular photodynamic therapy agent in a rat model of experimentally induced corneal neovascularisation and in choriocapillaris closure in the rabbit. Optimal drug and light activation parameters were determined. MV6401 (Miravant Pharmaceuticals, Inc, Santa Barbara, CA, USA) was activated at 664 nm using a DD3-0665 (Miravant Systems Inc) 0.5 W diode laser. Corneal neovascularisation in rats was induced using an N-heptanol technique. The evaluated drug dosages, light dosages, and post-injection activation times ranged from 0.01-0.1 micromol/kg, 5-25 J/cm(2), and 10-60 minutes, respectively. The efficacy of MV6401 on normal choriocapillaris and choroidal vessels was evaluated in rabbits with indirect ophthalmoscopy, fundus photography, fluorescein angiography, and histology. In rabbits, the evaluated drug dosages, light dosages, and post-injection activation times ranged from 0.025-0.25 micromol/kg, 3.3-20 J/cm(2), and 10 minutes, respectively. In the rat corneal neovascularisation model, an optimal intravenous drug dosage of 0.075 micromol/kg was activated by a 20 J/cm(2) light dose at 10 minutes after drug administration, the results of which demonstrated early evidence of efficacy in ocular neovascularisation. In rabbits, closure of the normal choriocapillaris was selectively achieved at a drug dosage of 0.15 micromol/kg using light doses from 3.3 to 20 J/cm(2). PhotoPoint MV6401 is a potent photosensitiser that demonstrates both efficacy and selectivity in experimental ocular models.

  1. Pioneers and Followers: Migrant Selectivity and the Development of U.S. Migration Streams in Latin America

    PubMed Central

    Lindstrom, David P.; Ramírez, Adriana López

    2013-01-01

    We present a method for dividing the historical development of community migration streams into an initial period and a subsequent takeoff stage with the purpose of systematically differentiating pioneer migrants from follower migrants. The analysis is organized around five basic research questions. First, can we empirically identify a juncture in the historical development of community-based migration that marks the transition from an initial stage of low levels of migration and gradual growth into a takeoff stage in which the prevalence of migration grows at a more accelerated rate? Second, does this juncture point exist at roughly similar migration prevalence levels across communities? Third, are first-time migrants in the initial stage (pioneers) different from first-time migrants in the takeoff stage (followers)? Fourth, what is the nature of this migrant selectivity? Finally, does the nature and degree of pioneer selectivity vary across country migration streams? PMID:24489382

  2. Decision-making patterns for dietary supplement purchases among women aged 25 to 45 years.

    PubMed

    Miller, Carla K; Russell, Teri; Kissling, Grace

    2003-11-01

    Women frequently consume dietary supplements but the criteria used to select supplements have received little investigation. This research identified the decision-making criteria used for dietary supplements among women aged 25 to 45 years who consumed a supplement at least four times per week. Participants (N=51) completed an in-store shopping interview that was audiotaped, transcribed, and analyzed qualitatively for the criteria used to make supplement selections. Qualitative analysis revealed 10 key criteria and the number of times each person used each criterion was quantified. Cluster analysis identified five homogeneous subgroups of participants based on the criteria used. These included brand shopper, bargain shopper, quality shopper, convenience shopper, and information gatherer. Supplement users vary in the criteria used to make point-of-purchase supplement selections. Dietetics professionals can classify supplement users according to the criteria used to tailor their nutrition counseling and better meet the educational needs of consumers.

  3. Ischemia episode detection in ECG using kernel density estimation, support vector machine and feature selection

    PubMed Central

    2012-01-01

    Background Myocardial ischemia can develop into more serious diseases. Detecting the ischemic syndrome in the electrocardiogram (ECG) early, accurately, and automatically can prevent it from developing into a catastrophic disease. To this end, we propose a new method, which employs wavelets and simple feature selection. Methods For training and testing, the European ST-T database is used, which comprises 367 ischemic ST episodes in 90 records. We first remove baseline wandering, and detect time positions of QRS complexes by a method based on the discrete wavelet transform. Next, for each heart beat, we extract three features which can be used for differentiating ST episodes from normal: 1) the area between the QRS offset and T-peak points, 2) the normalized and signed sum from the QRS offset to the effective zero voltage point, and 3) the slope from QRS onset to offset. We average the feature values over five successive beats to reduce the effect of outliers. Finally we apply classifiers to those features. Results We evaluated the algorithm by kernel density estimation (KDE) and support vector machine (SVM) methods. Sensitivity and specificity for KDE were 0.939 and 0.912, respectively. The KDE classifier detects 349 ischemic ST episodes out of the total 367 ST episodes. Sensitivity and specificity of SVM were 0.941 and 0.923, respectively. The SVM classifier detects 355 ischemic ST episodes. Conclusions We proposed a new method for detecting ischemia in ECG. It combines signal processing techniques, removing baseline wandering and detecting the time positions of QRS complexes by the discrete wavelet transform, with explicit feature extraction from the morphology of ECG waveforms. It was shown that the number of selected features was sufficient to discriminate ischemic ST episodes from normal ones.
We also showed how the proposed KDE classifier can automatically select kernel bandwidths, meaning that the algorithm does not require any numerical values of the parameters to be supplied in advance. In the case of the SVM classifier, one has to select a single parameter. PMID:22703641
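    Two small steps of the pipeline, the five-beat feature averaging and the sensitivity/specificity evaluation, can be sketched directly. The data below are hypothetical; the real feature values come from the QRS/T-wave morphology measures described above.

```python
def five_beat_average(values, n=5):
    """Average consecutive non-overlapping groups of n beat-level feature
    values to damp the effect of outlier beats."""
    return [sum(values[i:i + n]) / n
            for i in range(0, len(values) - n + 1, n)]

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return tp / (tp + fn), tn / (tn + fp)
```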

  4. Cost-Benefit Analysis of Computer Resources for Machine Learning

    USGS Publications Warehouse

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
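    The stratified-sampling idea, more calibration points in regions where they reduce error and fewer where they do not, can be sketched as a budget-allocation rule. The proportional-to-error allocation below is our assumption for illustration, not the report's algorithm.

```python
def allocate_points(stratum_errors, budget, min_per_stratum=1):
    """Split a calibration budget across strata in proportion to each
    stratum's current error, with a minimum allocation per stratum."""
    total = sum(stratum_errors)
    spare = budget - min_per_stratum * len(stratum_errors)
    raw = [min_per_stratum + e / total * spare for e in stratum_errors]
    alloc = [int(x) for x in raw]  # floor; leftover handled below
    leftover = budget - sum(alloc)
    # Hand leftover points to the strata with the largest errors first.
    order = sorted(range(len(stratum_errors)),
                   key=lambda i: stratum_errors[i], reverse=True)
    for i in range(leftover):
        alloc[order[i % len(order)]] += 1
    return alloc
```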

  5. Calibration sets selection strategy for the construction of robust PLS models for prediction of biodiesel/diesel blends physico-chemical properties using NIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel

    2017-06-01

    Even though the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression for prediction of physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, including in the calibration sets the whole variability of diesel samples from diverse production origins still remains an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of samples of biodiesel/diesel blends from diverse origins, based on a binary code, principal component analysis (PCA) and the Kennard-Stone algorithm. Results show that using this methodology the models can keep their robustness over time. PLS calculations have been done using specialized chemometric software as well as the software of the NIR instrument installed in the plant, and both produced RMSEP values below the reproducibility of the reference methods. The models have been proven for on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl esters (FAME) content, cloud point, boiling point at 95% of recovery, flash point and sulphur content.
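    The Kennard-Stone step of the strategy can be sketched as follows. This is a minimal distance-based version, assuming Euclidean distance on raw feature rows; the paper applies it after the binary-code and PCA steps, which are omitted here. The algorithm seeds the calibration set with the two most distant samples, then repeatedly adds the sample whose minimum distance to the already-selected set is largest.

```python
def kennard_stone(X, k):
    """Return indices of k samples selected by the Kennard-Stone rule."""
    d = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    n = len(X)
    # Seed with the most distant pair of samples.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: d(X[p[0]], X[p[1]]))
    selected = [i0, j0]
    while len(selected) < k:
        rest = [i for i in range(n) if i not in selected]
        # Next sample: farthest (in min-distance sense) from the selected set.
        nxt = max(rest, key=lambda i: min(d(X[i], X[j]) for j in selected))
        selected.append(nxt)
    return selected
```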

  6. 3D object retrieval using salient views

    PubMed Central

    Shapiro, Linda G.

    2013-01-01

    This paper presents a method for selecting salient 2D views to describe 3D objects for the purpose of retrieval. The views are obtained by first identifying salient points via a learning approach that uses shape characteristics of the 3D points (Atmosukarto and Shapiro in International workshop on structural, syntactic, and statistical pattern recognition, 2008; Atmosukarto and Shapiro in ACM multimedia information retrieval, 2008). The salient views are selected by choosing views with multiple salient points on the silhouette of the object. Silhouette-based similarity measures from Chen et al. (Comput Graph Forum 22(3):223–232, 2003) are then used to calculate the similarity between two 3D objects. Retrieval experiments were performed on three datasets: the Heads dataset, the SHREC2008 dataset, and the Princeton dataset. Experimental results show that the retrieval results using the salient views are comparable to the existing light field descriptor method (Chen et al. in Comput Graph Forum 22(3):223–232, 2003), and our method achieves a 15-fold speedup in the feature extraction computation time. PMID:23833704

  7. Adaptive 4d Psi-Based Change Detection

    NASA Astrophysics Data System (ADS)

    Yang, Chia-Hsiang; Soergel, Uwe

    2018-04-01

    In a previous work, we proposed a PSI-based 4D change detection method to detect disappearing and emerging PS points (3D) along with their occurrence dates (1D). Such change points are usually caused by anthropic events, e.g., building construction in cities. This method first divides an entire SAR image stack into several subsets by a set of break dates. The PS points, which are selected based on their temporal coherences before or after a break date, are regarded as change candidates. Change points are then extracted from these candidates according to their change indices, which are modelled from the temporal coherences of the divided image subsets. Finally, we check the evolution of the change indices for each change point to detect the break date at which the change occurred. The experiment validated both the feasibility and applicability of our method. However, two questions still remain. First, the selection of the temporal coherence threshold involves a trade-off between quality and quantity of PS points, and it also affects the number of change points in a more complex way. Second, heuristic selection of change index thresholds is fragile and causes loss of change points. In this study, we adapt our approach to identify change points based on statistical characteristics of the change indices rather than thresholding. The experiment validates this adaptive approach and shows an increase in detected change points compared with the previous version. In addition, we explore and discuss the optimal selection of the temporal coherence threshold.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hraber, Peter; Korber, Bette; Wagh, Kshitij

    Within-host genetic sequencing from samples collected over time provides a dynamic view of how viruses evade host immunity. Immune-driven mutations might stimulate neutralization breadth by selecting antibodies adapted to cycles of immune escape that generate within-subject epitope diversity. Comprehensive identification of immune-escape mutations is experimentally and computationally challenging. With current technology, many more viral sequences can readily be obtained than can be tested for binding and neutralization, making down-selection necessary. Typically, this is done manually, by picking variants that represent different time-points and branches on a phylogenetic tree. Such strategies are likely to miss many relevant mutations and combinations of mutations, and to be redundant for other mutations. Longitudinal Antigenic Sequences and Sites from Intrahost Evolution (LASSIE) uses transmitted founder loss to identify virus “hot-spots” under putative immune selection and chooses sequences that represent recurrent mutations in selected sites. LASSIE favors the earliest sequences in which mutations arise. Here, with well-characterized longitudinal Env sequences, we confirmed selected sites were concentrated in antibody contacts and selected sequences represented diverse antigenic phenotypes. Finally, practical applications include rapidly identifying immune targets under selective pressure within a subject, selecting minimal sets of reagents for immunological assays that characterize evolving antibody responses, and selecting immunogens for polyvalent “cocktail” vaccines.

  9. A Simple Test Identifies Selection on Complex Traits.

    PubMed

    Beissinger, Tim; Kruppa, Jochen; Cavero, David; Ha, Ngoc-Thuy; Erbe, Malena; Simianer, Henner

    2018-05-01

    Important traits in agricultural, natural, and human populations are increasingly being shown to be under the control of many genes that individually contribute only a small proportion of genetic variation. However, the majority of modern tools in quantitative and population genetics, including genome-wide association studies and selection-mapping protocols, are designed to identify individual genes with large effects. We have developed an approach to identify traits that have been under selection and are controlled by large numbers of loci. In contrast to existing methods, our technique uses additive-effects estimates from all available markers, and relates these estimates to allele-frequency change over time. Using this information, we generate a composite statistic, denoted [Formula: see text], which can be used to test for significant evidence of selection on a trait. Our test requires pre- and postselection genotypic data but only a single time point with phenotypic information. Simulations demonstrate that [Formula: see text] is powerful for identifying selection, particularly in situations where the trait being tested is controlled by many genes, which is precisely the scenario where classical approaches for selection mapping are least powerful. We apply this test to breeding populations of maize and chickens, where we demonstrate the successful identification of selection on traits that are documented to have been under selection. Copyright © 2018 Beissinger et al.
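    The core idea, relating per-marker additive-effect estimates to allele-frequency change, can be sketched in a few lines. This is a simplified stand-in for the published statistic (rendered above as "[Formula: see text]"); the scaling and the permutation-based significance test are omitted, and the numbers are invented for illustration.

```python
def composite_selection_stat(effects, freq_before, freq_after):
    """Sum over markers of additive-effect estimate times the allele-
    frequency change, so many small, consistently oriented shifts
    accumulate into a detectable signal."""
    return sum(a * (p1 - p0)
               for a, p0, p1 in zip(effects, freq_before, freq_after))

# 100 small-effect loci: no frequency change at all versus a small,
# consistently oriented shift at every locus.
no_change = composite_selection_stat([0.1] * 100, [0.5] * 100, [0.5] * 100)
under_selection = composite_selection_stat([0.1] * 100, [0.5] * 100, [0.52] * 100)
```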

  10. A proposed method for world weightlifting championships team selection.

    PubMed

    Chiu, Loren Z F

    2009-08-01

    The caliber of competitors at the World Weightlifting Championships (WWC) has increased greatly over the past 20 years. As the WWC are the primary qualifiers for Olympic slots (1996 to present), it is imperative for a nation to select team members who will finish with a high placing and score team points. Previous selection methods were based on a simple percentage system. Analysis of the results from the 2006 and 2007 WWC indicates a curvilinear trend in each weight class, suggesting a simple percentage system will not maximize the number of team points earned. To maximize team points, weightlifters should be selected based on their potential to finish in the top 25. A 5-tier ranking system is proposed that should ensure the athletes with the greatest potential to score team points are selected.

  11. Evaluating the Variations in the Flood Susceptibility Maps Accuracies due to the Alterations in the Type and Extent of the Flood Inventory

    NASA Astrophysics Data System (ADS)

    Tehrany, M. Sh.; Jones, S.

    2017-10-01

    This paper explores the influence of the extent and density of the inventory data on the final outcomes. This study aimed to examine the impact of different formats and extents of the flood inventory data on the final susceptibility map. An extreme 2011 Brisbane flood event was used as the case study. A logistic regression (LR) model was applied using the polygon and point formats of the inventory data. Random points of 1000, 700, 500, 300, 100 and 50 were selected, and susceptibility mapping was undertaken using each group of random points. LR was selected to perform the modelling because it is a well-known algorithm in natural hazard modelling that is easy to interpret, fast to process and accurate. The resultant maps were assessed visually and statistically using the area under the curve (AUC) method. The prediction rates measured for the susceptibility maps produced by the polygon, 1000, 700, 500, 300, 100 and 50 random points were 63 %, 76 %, 88 %, 80 %, 74 %, 71 % and 65 % respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, raising the number of points increased the prediction rates, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
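    The AUC assessment used above has a simple probabilistic reading: it is the chance that a randomly chosen flooded location receives a higher susceptibility score than a randomly chosen non-flooded one. A minimal sketch with synthetic scores (not the study's data), computed by direct pairwise comparison with ties counting one half:

```python
def auc(scores_pos, scores_neg):
    """AUC by pairwise comparison: P(score_pos > score_neg), ties = 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

    The pairwise form is O(n*m) and fine for illustration; a rank-based computation is the usual choice at scale.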

  12. A program of telementoring in laparoscopic bariatric surgery.

    PubMed

    Fuertes-Guiró, Fernando; Vitali-Erion, Enrique; Rodriguez-Franco, Amalia

    2016-01-01

    This study proposes a system for teaching and surgical support with the benefits of online Information and Communications Technology (ICT)-based telementoring for laparoscopic bariatric surgery (LBS). A system of telementoring was established between a university center and two community hospitals. Telementoring was performed via internet protocol using a direct point-to-point connection, ADSL 1.2 Mbps, time delay 150 ms, 256-bit Advanced Encryption Standard (AES). In the period of time selected, all interventions for LBS in both hospitals were included. When patients agreed to telementoring, outcome data (operating time, hospital stay, conversion to open surgery and complications) were collected. The rest of these interventions were recorded. Thirty-six patients underwent elective LBS, 20 of whom were referred and accepted for telementoring. Operations without telementoring took longer: 200 (46) min vs 139 (33) min, p < 0.01. There were two conversions in the non-mentored group. The hospital stay was 4.6 (0.5) days for telementored interventions and 6.7 (0.5) days without mentoring (p < 0.01). Four patients (12.5%) in the non-mentored group suffered minor complications. This program supports the safety and feasibility of telementoring in LBS. Telementoring is an alternative in community hospitals because it can improve the quality of advanced procedures of laparoscopic surgery.

  13. cDNA Microarray Analysis of Host-Pathogen Interactions in a Porcine In Vitro Model for Toxoplasma gondii Infection†

    PubMed Central

    Okomo-Adhiambo, Margaret; Beattie, Craig; Rink, Anette

    2006-01-01

    Toxoplasma gondii induces the expression of proinflammatory cytokines, reorganizes organelles, scavenges nutrients, and inhibits apoptosis in infected host cells. We used a cDNA microarray of 420 annotated porcine expressed sequence tags to analyze the molecular basis of these changes at eight time points over a 72-hour period in porcine kidney epithelial (PK13) cells infected with T. gondii. A total of 401 genes with Cy3 and Cy5 spot intensities of ≥500 were selected for analysis, of which 263 (65.6%) were induced ≥2-fold (expression ratio, ≥2.0; P ≤ 0.05 [t test]) over at least one time point and 48 (12%) were significantly down-regulated. At least 12 functional categories of genes were modulated (up- or down-regulated) by T. gondii. The majority of induced genes were clustered as transcription, signal transduction, host immune response, nutrient metabolism, and apoptosis related. The expression of selected genes altered by T. gondii was validated by quantitative real-time reverse transcription-PCR. These results suggest that significant changes in gene expression occur in response to T. gondii infection in PK13 cells, facilitating further analysis of host-pathogen interactions in toxoplasmosis in a secondary host. PMID:16790800

  14. Applications of Time-Reversal Processing for Planetary Surface Communications

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.

    2007-01-01

    Due to the power constraints imposed on wireless sensor and communication networks deployed on a planetary surface during exploration, energy-efficient transfer of data becomes a critical issue. In situations where groups of nodes within a network are located in relatively close proximity, cooperative communication techniques can be utilized to improve the range, data rate, power efficiency, and lifetime of the network. In particular, if the point-to-point communication channels on the network are well modeled as frequency non-selective, distributed or cooperative beamforming can be employed. For frequency-selective channels, beamforming itself is not generally appropriate, but a natural generalization of it, time-reversal communication (TRC), can still be effective. Time-reversal processing has been proposed and studied previously for other applications, including acoustical imaging, electromagnetic imaging, underwater acoustic communication, and wireless communication channels. In this paper, we study both the theoretical advantages and the experimental performance of cooperative TRC for wireless communication on planetary surfaces. We give a brief introduction to TRC and present several scenarios where TRC could be profitably employed during planetary exploration. We also present simulation results illustrating the performance of cooperative TRC employed in a complex multipath environment and discuss the optimality of cooperative TRC for data aggregation in wireless sensor networks.

  15. Multi-locus analysis of genomic time series data from experimental evolution.

    PubMed

    Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S

    2015-04-01

    Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
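    As a toy illustration of the discrete-time Wright-Fisher dynamics with selection that the Gaussian process approximates — a haploid, single-locus simulation, not the authors' multi-locus machinery, and with made-up parameter values — one can simulate replicate allele-frequency trajectories and sample them at a few generations, E&R-style:

    ```python
    import random

    random.seed(1)

    def wright_fisher(n_pop, p0, s, generations):
        """One replicate trajectory of a selected allele's frequency
        (haploid selection step, then binomial drift over 2N gametes)."""
        p, traj = p0, [p0]
        for _ in range(generations):
            # Deterministic selection step ...
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            # ... followed by drift: resample 2N gametes.
            k = sum(random.random() < p_sel for _ in range(2 * n_pop))
            p = k / (2 * n_pop)
            traj.append(p)
        return traj

    # E&R-style design: 5 replicates, 60 generations, "sequenced" every 10.
    reps = [wright_fisher(n_pop=500, p0=0.1, s=0.1, generations=60)
            for _ in range(5)]
    sampled = [[traj[g] for g in range(0, 61, 10)] for traj in reps]
    mean_final = sum(traj[-1] for traj in reps) / len(reps)
    print("mean final frequency:", round(mean_final, 2))
    ```

    The paper's contribution is to approximate the joint distribution of such serially sampled frequencies (including linkage across sites) with a Gaussian process whose moments are computed from this discrete-time model, rather than simulating trajectories directly.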

  16. The Value in Rushing: Memory and Selectivity when Short on Time

    PubMed Central

    Middlebrooks, Catherine D.; Murayama, Kou; Castel, Alan D.

    2016-01-01

    While being short on time can certainly limit what one remembers, are there always such costs? The current study investigates the impact of time constraints on selective memory and the self-regulated study of valuable information. Participants studied lists of words ranging in value from 1-10 points, with the goal being to maximize their score during recall. Half of the participants studied these words at a constant presentation rate of either 1 or 5 seconds. The other half of participants studied under both rates, either fast (1sec) during the first several lists and then slow (5sec) during later lists, or vice versa. Study was then self-paced during a final segment of lists for all participants to determine how people regulate their study time after experiencing different presentation rates during study. While participants recalled more words overall when studying at a 5-second rate, there were no significant differences in terms of value-based recall, with all participants demonstrating better recall for higher-valued words and similar patterns of selectivity, regardless of study time or prior timing experience. Self-paced study was also value-based, with participants spending more time studying high-value words than low-value. Thus, while being short on time may have impaired memory overall, participants’ attention to item value during study was not differentially impacted by the fast and slow timing rates. Overall, these findings offer further insight regarding the influence that timing schedules and task experience have on how people selectively focus on valuable information. PMID:27305652

  17. NLS Handbook, 2005. National Longitudinal Surveys

    ERIC Educational Resources Information Center

    Bureau of Labor Statistics, 2006

    2006-01-01

    The National Longitudinal Surveys (NLS), sponsored by the U.S. Bureau of Labor Statistics (BLS), are a set of surveys designed to gather information at multiple points in time on the labor market experiences of groups of men and women. Each of the cohorts has been selected to represent all people living in the United States at the initial…

  18. Diversity of human small intestinal Streptococcus and Veillonella populations.

    PubMed

    van den Bogert, Bartholomeus; Erkus, Oylum; Boekhorst, Jos; de Goffau, Marcus; Smid, Eddy J; Zoetendal, Erwin G; Kleerebezem, Michiel

    2013-08-01

    Molecular and cultivation approaches were employed to study the phylogenetic richness and temporal dynamics of Streptococcus and Veillonella populations in the small intestine. Microbial profiling of human small intestinal samples collected from four ileostomy subjects at four time points displayed abundant populations of Streptococcus spp. most affiliated with S. salivarius, S. thermophilus, and S. parasanguinis, as well as Veillonella spp. affiliated with V. atypica, V. parvula, V. dispar, and V. rogosae. Relative abundances varied per subject and time of sampling. Streptococcus and Veillonella isolates were cultured using selective media from ileostoma effluent samples collected at two time points from a single subject. The richness of the Streptococcus and Veillonella isolates was assessed at species and strain level by 16S rRNA gene sequencing and genetic fingerprinting, respectively. A total of 160 Streptococcus and 37 Veillonella isolates were obtained. Genetic fingerprinting differentiated seven Streptococcus lineages from ileostoma effluent, illustrating the strain richness within this ecosystem. The Veillonella isolates were represented by a single phylotype. Our study demonstrated that the small intestinal Streptococcus populations displayed considerable changes over time at the genetic lineage level because only representative strains of a single Streptococcus lineage could be cultivated from ileostoma effluent at both time points. © 2013 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  19. Effect of selected water temperatures used in Mycoplasma gallisepticum vaccine reconstitution on titer at selected time intervals.

    PubMed

    Branton, S L; Leigh, S A; Roush, W B; Purswell, J L; Olanrewaju, H A; Collier, S D

    2008-06-01

    Numerous methods are currently used throughout the poultry industry for the administration of vaccines. Each utilizes water for vaccine reconstitution and/or administration, including two of the three commercially available live Mycoplasma gallisepticum (MG) vaccines. Selected water temperatures were used to reconstitute and/or dilute the three commercially available live MG vaccines. Water temperatures included 4 C, 22 C (room temperature), and 32 C, and titer (color change units) was recorded at four time intervals, at point of reconstitution (time 0), 15, 30, and 60 min postreconstitution of the vaccines (time periods 15, 30, and 60, respectively). Results for F strain MG (FMG) vaccine showed significant decreases in titer from time 0 to time 15 for the 22 C and 32 C water temperatures but no significant decrease for any time period for FMG reconstituted with 4 C water. For 6/85 strain MG no significant difference in titer was noted for any of four time periods within any of the three water temperatures. For ts-11 strain MG a significant decrease was observed in titer at each of the four postdilution time periods when diluted with 32 C water. There was no significant decrease in titer at any time period for ts-11 MG vaccine when diluted with either 4 C or 22 C water.

  20. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
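    A constant-rate simplification of a removal estimator can be sketched as follows: given counts of birds first detected in each of k equal time intervals, a grid search maximizes the conditional multinomial likelihood of a geometric first-detection model. This is a hedged toy with invented counts, not the paper's full model with species, time-of-day, and observer effects:

    ```python
    import math

    def removal_mle(counts):
        """counts[i] = birds first detected in interval i (equal lengths).
        With a constant per-interval detection rate q, P(first detection
        in interval i | detected at all) is proportional to q * (1 - q)**i.
        A grid search maximizes the conditional multinomial log-likelihood."""
        k = len(counts)
        best_q, best_ll = None, -math.inf
        for j in range(1, 1000):
            q = j / 1000.0
            p_det = 1.0 - (1.0 - q) ** k      # detected within the k intervals
            ll = sum(c * math.log(q * (1.0 - q) ** i / p_det)
                     for i, c in enumerate(counts))
            if ll > best_ll:
                best_q, best_ll = q, ll
        return best_q, 1.0 - (1.0 - best_q) ** k

    # Counts halving each interval are consistent with q around 0.5.
    q_hat, p_overall = removal_mle([60, 30, 15])
    print(q_hat, round(p_overall, 2))
    ```

    The second return value, 1 - (1 - q)^k, is the overall detectability during the full count — the quantity the abstract reports as 75% across all birds.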

  1. Wi-Fi real time location systems

    NASA Astrophysics Data System (ADS)

    Doll, Benjamin A.

    The objective of this thesis was to determine the viability of using an untrained Wi-Fi real-time location system (RTLS) as a GPS alternative for indoor environments. Background research showed that GPS is rarely able to penetrate buildings to provide reliable location data. The benefit of having location information inside a facility, and how it might be used by disaster or emergency relief personnel and their resources, motivated this research. A building with a well-deployed Wi-Fi infrastructure was selected, and its untrained location feature was used to determine the distance between specified test points and the system-identified location. It was found that the distance from the test point throughout the facility averaged 14.3 feet 80% of the time. This fell within the defined viable range and supported the conclusion that an untrained Wi-Fi RTLS could be a viable solution for GPS's lack of availability indoors.

  2. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm that minimizes atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, the image contrast is initially examined for a series of parameter combinations; the contrast improves with better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using the proposed procedure are presented.

  3. Comparison of aged polyamide powders for selective laser sintering

    NASA Astrophysics Data System (ADS)

    Martínez, A.; Ibáñez, A.; Sánchez, A.; León, M. A.

    2012-04-01

    Selective Laser Sintering (SLS) is an additive manufacturing technology in which a three-dimensional object is manufactured layer by layer by melting powder materials with heat generated from a CO2 laser. A disadvantage of the process, however, is that the powder left unsintered during a build can be reused for only a limited number of cycles: during the heating phase in the sintering chamber the material remains at a temperature near the fusion point for a certain period of time and loses properties. This work presents a study of two polyamide (PA12)-based powders used in SLS, with the aim of understanding how their properties change, mainly with the temperature and the length of time to which they are exposed during processing.

  4. Insect outbreak shifts the direction of selection from fast to slow growth rates in the long-lived conifer Pinus ponderosa

    PubMed Central

    Sala, Anna

    2017-01-01

    Long generation times limit species’ rapid evolution to changing environments. Trees provide critical global ecosystem services, but are under increasing risk of mortality because of climate change-mediated disturbances, such as insect outbreaks. The extent to which disturbance changes the dynamics and strength of selection is unknown, but has important implications on the evolutionary potential of tree populations. Using a 40-y-old Pinus ponderosa genetic experiment, we provide rare evidence of context-dependent fluctuating selection on growth rates over time in a long-lived species. Fast growth was selected at juvenile stages, whereas slow growth was selected at mature stages under strong herbivory caused by a mountain pine beetle (Dendroctonus ponderosae) outbreak. Such opposing forces led to no net evolutionary response over time, thus providing a mechanism for the maintenance of genetic diversity on growth rates. Greater survival to mountain pine beetle attack in slow-growing families reflected, in part, a host-based life-history trade-off. Contrary to expectations, genetic effects on tree survival were greatest at the peak of the outbreak and pointed to complex defense responses. Our results suggest that selection forces in tree populations may be more relevant than previously thought, and have implications for tree population responses to future environments and for tree breeding programs. PMID:28652352

  5. Projection of distributed-collector solar-thermal electric power plant economics to years 1990-2000

    NASA Technical Reports Server (NTRS)

    Fujita, T.; Elgabalawi, N.; Herrera, G.; Turner, R. H.

    1977-01-01

    A preliminary comparative evaluation of distributed-collector solar thermal power plants was undertaken by projecting power plant economics of selected systems to the 1990 to 2000 time frame. The selected systems include: (1) fixed orientation collectors with concentrating reflectors and vacuum tube absorbers, (2) one axis tracking linear concentrators including parabolic trough and variable slat designs, and (3) two axis tracking parabolic dish systems including concepts with small heat engine-electric generator assemblies at each focal point as well as approaches having steam generators at the focal point with pipeline collection to a central power conversion unit. Comparisons are presented primarily in terms of energy cost and capital cost over a wide range of operating load factors. Sensitivity of energy costs to a range of efficiencies and costs of major subsystems/components is presented to delineate critical technological development needs.

  6. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation, using the penalty function method according to optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied, so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three numerical examples are presented to demonstrate the applicability and accuracy of the present approach compared with traditional thin plate spline (TPS) radial basis functions.
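    The "two-point difference" time discretization mentioned above can be illustrated with a minimal 1-D transient heat-conduction march. This explicit finite-difference toy stands in for the paper's meshless spatial discretization, and the grid and parameter values are arbitrary:

    ```python
    def heat_1d(n=21, steps=500, alpha=1.0, dx=0.05, dt=0.001):
        """Explicit two-point time march for u_t = alpha * u_xx on [0, 1]:
        u at step k+1 is u at step k plus r * (second difference in x),
        with both ends held at zero and an initial unit spike in the middle."""
        r = alpha * dt / dx ** 2
        assert r <= 0.5, "explicit scheme unstable for r > 0.5"
        u = [0.0] * n
        u[n // 2] = 1.0
        for _ in range(steps):
            u = ([0.0]
                 + [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
                    for i in range(1, n - 1)]
                 + [0.0])
        return u

    u = heat_1d()
    print("peak after marching:", round(max(u), 3))
    ```

    The two time levels (current and next) are all the scheme stores, which is what makes two-point time stepping attractive when the spatial operator — here a plain second difference, in the paper an STPS-based weak form — is expensive to assemble.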

  7. Research on sparse feature matching of improved RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangsi; Zhao, Xian

    2018-04-01

    In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. First, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. Finally, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three aspects: instead of the homography matrix, the fundamental matrix generated by the 8-point algorithm is used as the model; the sample is selected by a random block selection method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on top of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces computation and improves matching speed.
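    The loop the paper modifies is the standard RANSAC hypothesize-and-verify cycle. A minimal vanilla version — line fitting rather than fundamental-matrix estimation, plain random sampling instead of block sampling, and no SPRT early exit — looks like this:

    ```python
    import random

    random.seed(2)

    def ransac_line(points, n_iter=200, tol=0.1):
        """Vanilla RANSAC for a 2D line y = a*x + b: hypothesize a model
        from a minimal 2-point sample, verify by counting inliers within
        tol, and keep the model with the largest consensus set."""
        best_model, best_inliers = None, []
        for _ in range(n_iter):
            (x1, y1), (x2, y2) = random.sample(points, 2)
            if x1 == x2:
                continue                      # degenerate sample
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            inliers = [(x, y) for x, y in points
                       if abs(y - (a * x + b)) < tol]
            if len(inliers) > len(best_inliers):
                best_model, best_inliers = (a, b), inliers
        return best_model, best_inliers

    # 80 points on y = 2x + 1 plus 20 gross outliers.
    pts = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(80)]
    pts += [(random.uniform(0, 8), random.uniform(-10, 30)) for _ in range(20)]
    (a, b), inliers = ransac_line(pts)
    print(round(a, 1), round(b, 1), len(inliers))
    ```

    The paper's three modifications slot into this loop directly: the 8-point fundamental matrix replaces the 2-point line model, block sampling replaces `random.sample`, and an SPRT check lets the verify step abandon a bad hypothesis before scoring every point.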

  8. Path planning during combustion mode switch

    DOEpatents

    Jiang, Li; Ravi, Nikhil

    2015-12-29

    Systems and methods are provided for transitioning between a first combustion mode and a second combustion mode in an internal combustion engine. A current operating point of the engine is identified and a target operating point for the internal combustion engine in the second combustion mode is also determined. A predefined optimized transition operating point is selected from memory. While operating in the first combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the selected optimized transition operating point. When the engine is operating at the selected optimized transition operating point, the combustion mode is switched from the first combustion mode to the second combustion mode. While operating in the second combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion to approach the target operating point.

  9. A voxelwise approach to determine consensus regions-of-interest for the study of brain network plasticity.

    PubMed

    Rajtmajer, Sarah M; Roy, Arnab; Albert, Reka; Molenaar, Peter C M; Hillary, Frank G

    2015-01-01

    Despite exciting advances in the functional imaging of the brain, it remains a challenge to define regions of interest (ROIs) that do not require investigator supervision and permit examination of change in networks over time (or plasticity). Plasticity is most readily examined by maintaining ROIs constant via seed-based and anatomical-atlas based techniques, but these approaches are not data-driven, requiring definition based on prior experience (e.g., choice of seed-region, anatomical landmarks). These approaches are limiting especially when functional connectivity may evolve over time in areas that are finer than known anatomical landmarks or in areas outside predetermined seeded regions. An ideal method would permit investigators to study network plasticity due to learning, maturation effects, or clinical recovery via multiple time point data that can be compared to one another in the same ROI while also preserving the voxel-level data in those ROIs at each time point. Data-driven approaches (e.g., whole-brain voxelwise approaches) ameliorate concerns regarding investigator bias, but the fundamental problem of comparing the results between distinct data sets remains. In this paper we propose an approach, aggregate-initialized label propagation (AILP), which allows for data at separate time points to be compared for examining developmental processes resulting in network change (plasticity). To do so, we use a whole-brain modularity approach to parcellate the brain into anatomically constrained functional modules at separate time points and then apply the AILP algorithm to form a consensus set of ROIs for examining change over time. To demonstrate its utility, we make use of a known dataset of individuals with traumatic brain injury sampled at two time points during the first year of recovery and show how the AILP procedure can be applied to select regions of interest to be used in a graph theoretical analysis of plasticity.

  10. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.

  11. ASRDI oxygen technology survey. Volume 4: Low temperature measurement

    NASA Technical Reports Server (NTRS)

    Sparks, L. L.

    1974-01-01

    Information is presented on temperature measurement between the triple point and critical point of liquid oxygen. The criterion selected is that all transducers which may reasonably be employed in the liquid oxygen (LO2) temperature range are considered. The temperature range for each transducer is the appropriate full range for the particular thermometer. The discussion of each thermometer or type of thermometer includes the following information: (1) useful temperature range, (2) general and particular methods of construction and the advantages of each type, (3) specifications (accuracy, reproducibility, response time, etc.), (4) associated instrumentation, (5) calibrations and procedures, and (6) analytical representations.

  12. Fine-grained parallelization of fitness functions in bioinformatics optimization problems: gene selection for cancer classification and biclustering of gene expression data.

    PubMed

    Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo

    2016-08-31

    Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, as well as other non-linear techniques, apply a fitness function to each possible solution in a size-limited population, and that step involves higher latencies than other parts of the algorithms, which is why the execution time of the applications mainly depends on the execution time of the fitness function. In addition, fitness functions are usually formulated in floating-point arithmetic. A careful parallelization of these functions using reconfigurable hardware technology will therefore accelerate the computation, especially if they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, yielded higher speedups and lower-power computation than usual microprocessors. The results show better performance using reconfigurable hardware technology instead of usual microprocessors, in terms of computing time and power consumption, not only because of the parallelization of the arithmetic operations, but also thanks to the concurrent fitness evaluation for several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.

  13. Extinction risk and eco-evolutionary dynamics in a variable environment with increasing frequency of extreme events.

    PubMed

    Vincenzi, Simone

    2014-08-06

    One of the most dramatic consequences of climate change will be the intensification and increased frequency of extreme events. I used numerical simulations to understand and predict the consequences of a directional trend (i.e. in mean state) and increased variability of a climate variable (e.g. temperature), an increased probability of occurrence of point extreme events (e.g. floods), selection pressure, and the effect size of mutations on a quantitative trait determining individual fitness, as well as their effects on the population and genetic dynamics of a population of moderate size. The interaction among climate trend, variability, and probability of point extremes had a minor effect on risk of extinction, time to extinction, and distribution of the trait after accounting for their independent effects. The survival chances of a population strongly and linearly decreased with increasing strength of selection, as well as with increasing climate trend and variability. Mutation amplitude had no effects on extinction risk, time to extinction, or genetic adaptation to the new climate. Climate trend and strength of selection largely determined the shift of the mean phenotype in the population. The extinction or persistence of the populations in an 'extinction window' of 10 years was well predicted by a simple model including mean population size and mean genetic variance over a 10-year time frame preceding the 'extinction window', although genetic variance had a smaller role than population size in predicting contemporary risk of extinction. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. Sulzberger Ice Shelf Tidal Signal Reconstruction Using InSAR

    NASA Astrophysics Data System (ADS)

    Baek, S.; Shum, C.; Yi, Y.; Kwoun, O.; Lu, Z.; Braun, A.

    2005-12-01

    Synthetic Aperture Radar Interferometry (InSAR) and Differential InSAR (DInSAR) have been demonstrated as useful techniques to detect surface deformation over the ice sheet and ice shelves of Antarctica. In this study, we use multiple-pass InSAR from ERS-1 and ERS-2 data to detect ocean tidal deformation, with an attempt toward modeling of tides underneath an ice shelf. A high-resolution digital elevation model (DEM) from repeat-pass interferometry, with ICESat profiles as ground control points, is used for topographic correction over the study region in Sulzberger Ice Shelf, West Antarctica. Tidal differences measured by InSAR are obtained from the phase difference between a point on the grounded ice and a point on the ice shelf. Comparison with global and regional tide models (including NAO, TPXO, GOT, and CATS) at a selected point shows that the tidal amplitude is consistent with the values predicted from the tide models to within 4 cm RMS. Even though the lack of data hinders the effort to readily develop a tide model from longer-term data (time series spanning years), we suggest a method to reconstruct selected tidal constituents using both the vertical deformation from InSAR and knowledge of the aliased tidal frequencies of the ERS satellites. Finally, we report the comparison of tidal deformation observed by InSAR and by ICESat altimetry.

  15. A 6.7 GHz Methanol Maser Survey at High Galactic Latitudes

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xi; Shen, Zhi-Qiang; Li, Xiao-Qiong; Wang, Jun-Zhi; Jiang, Dong-Rong; Li, Juan; Dong, Jian; Wu, Ya-Jun; Qiao, Hai-Hua; Ren, Zhiyuan

    2017-09-01

    We performed a systematic 6.7 GHz Class II methanol maser survey using the Shanghai Tianma Radio Telescope toward targets selected from the all-sky Wide-Field Infrared Survey Explorer (WISE) point source catalog. In this paper, we report the results of the survey for targets at high Galactic latitudes, i.e., |b| > 2°. Of the 1473 selected WISE point sources at high latitude, maser emission was detected toward 17 pointing positions, associated with 12 sources, reflecting the rarity (1%-2%) of methanol masers in regions away from the Galactic plane. Of the 12 sources, 3 are detected for the first time. The spectral energy distributions at infrared bands show that these newly detected masers occur in massive star-forming regions. Compared to previous detections, the methanol masers change significantly in both spectral profile and flux density. The infrared WISE images show that almost all of these masers coincide with the positions of bright WISE point sources. Compared to methanol masers in the Galactic plane, these high-latitude methanol masers are good tracers for investigating the physics and kinematics around massive young stellar objects, because they are believed to be less affected by the surrounding cluster environment.

  16. Unidirectional invisibility induced by parity-time symmetric circuit

    NASA Astrophysics Data System (ADS)

    Lv, Bo; Fu, Jiahui; Wu, Bian; Li, Rujiang; Zeng, Qingsheng; Yin, Xinhua; Wu, Qun; Gao, Lei; Chen, Wan; Wang, Zhefei; Liang, Zhiming; Li, Ao; Ma, Ruyu

    2017-01-01

    Parity-time (PT) symmetric structures exhibit unidirectional invisibility at the spontaneous PT-symmetry breaking point. In this paper, we propose a PT-symmetric circuit consisting of a resistor and a microwave tunnel diode (TD), which provide the attenuation and amplification, respectively. Based on the scattering matrix method, the circuit can exhibit ideal unidirectional performance at the spontaneous PT-symmetry breaking point by tuning the transmission lines between the lumped elements. Additionally, the resistance of the reactance component can flexibly alter the bandwidth of the unidirectional invisibility. Furthermore, electromagnetic simulation of the proposed circuit validates the unidirectional invisibility and its synchronization with the input energy. Our work not only provides a unidirectional invisible circuit based on PT symmetry, but also proposes a potential solution for highly selective filters and cloaking applications.

  17. Fast object detection algorithm based on HOG and CNN

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Wang, Dandan; Zhang, Yanduo

    2018-04-01

    In the field of computer vision, object classification and object detection are widely used in many fields. Traditional object detection has two main problems: the sliding-window region selection strategy has high time complexity and produces redundant windows, and the features used are not robust. In order to solve these problems, a Region Proposal Network (RPN) is used to select candidate regions instead of the selective search algorithm. Compared with traditional algorithms and selective search, the RPN has higher efficiency and accuracy. We combine HOG features and a convolutional neural network (CNN) to extract features, and use an SVM for classification. For TorontoNet, our algorithm's mAP is 1.6 percentage points higher; for OxfordNet, it is 1.3 percentage points higher.
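
    The HOG idea the abstract builds on, histograms of gradient orientations weighted by gradient magnitude, can be sketched in a few lines. This is a minimal single-cell version for illustration only, not the paper's pipeline (real HOG adds cells, block normalization, and here a CNN and SVM on top):

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """Minimal HOG-style descriptor for one cell: a histogram of unsigned
    gradient orientations (0-180 degrees), weighted by gradient magnitude
    and L1-normalized. A sketch of the feature idea only."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())     # magnitude-weighted vote
    return hist / (hist.sum() + 1e-12)

# A vertical step edge has purely horizontal gradients, i.e. orientation
# near 0 degrees, so the first bin dominates.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = orientation_histogram(img)
print(int(np.argmax(h)))  # → 0
```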

  18. Cloud-Scale Vertical Velocity and Turbulent Dissipation Rate Retrievals

    DOE Data Explorer

    Shupe, Matthew

    2013-05-22

    Time-height fields of retrieved in-cloud vertical wind velocity and turbulent dissipation rate, both retrieved primarily from vertically-pointing, Ka-band cloud radar measurements. Files are available for manually-selected, stratiform, mixed-phase cloud cases observed at the North Slope of Alaska (NSA) site during periods covering the Mixed-Phase Arctic Cloud Experiment (MPACE, late September through early November 2004) and the Indirect and Semi-Direct Aerosol Campaign (ISDAC, April-early May 2008). These time periods will be expanded in a future submission.

  19. Longitudinal Antigenic Sequences and Sites from Intra-Host Evolution (LASSIE) identifies immune-selected HIV variants

    DOE PAGES

    Hraber, Peter; Korber, Bette; Wagh, Kshitij; ...

    2015-10-21

    Within-host genetic sequencing from samples collected over time provides a dynamic view of how viruses evade host immunity. Immune-driven mutations might stimulate neutralization breadth by selecting antibodies adapted to cycles of immune escape that generate within-subject epitope diversity. Comprehensive identification of immune-escape mutations is experimentally and computationally challenging. With current technology, many more viral sequences can readily be obtained than can be tested for binding and neutralization, making down-selection necessary. Typically, this is done manually, by picking variants that represent different time-points and branches on a phylogenetic tree. Such strategies are likely to miss many relevant mutations and combinations of mutations, and to be redundant for other mutations. Longitudinal Antigenic Sequences and Sites from Intrahost Evolution (LASSIE) uses transmitted founder loss to identify virus “hot-spots” under putative immune selection and chooses sequences that represent recurrent mutations in selected sites. LASSIE favors earliest sequences in which mutations arise. Here, with well-characterized longitudinal Env sequences, we confirmed selected sites were concentrated in antibody contacts and selected sequences represented diverse antigenic phenotypes. Finally, practical applications include rapidly identifying immune targets under selective pressure within a subject, selecting minimal sets of reagents for immunological assays that characterize evolving antibody responses, and for immunogens in polyvalent “cocktail” vaccines.

  20. LMJ Points Plus v2.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos

    Short summary of the software's functionality: • built-in scan feature to acquire an optical image of the surface to be analyzed • click-and-point selection of points of interest on the surface • supporting standalone autosampler/HPLC/MS operation: creating independent batch files after points of interest are selected for LEAPShell (autosampler control software from Leap Technologies) and Analyst® (mass spectrometry (MS) software from AB Sciex) • supporting integrated autosampler/HPLC/MS operation: creating one batch file for all instruments controlled by Analyst® (mass spectrometry software from AB Sciex) after points of interest are selected • creating heatmaps of analytes of interest from collected MS files in a hands-off fashion

  1. Effects of tranexamic acid on coagulation indexes of patients undergoing heart valve replacement surgery under cardiopulmonary bypass.

    PubMed

    Liu, Fei; Xu, Dong; Zhang, Kefeng; Zhang, Jian

    2016-12-01

    This study aims to explore the effects of tranexamic acid on the coagulation indexes of patients undergoing heart valve replacement surgery under the condition of cardiopulmonary bypass (CPB). One hundred patients who met the inclusion criteria were selected and divided into a tranexamic acid group and a non-tranexamic acid group. They all underwent heart valve replacement surgery under CPB. Patients in the tranexamic acid group were intravenously injected with 1 g of tranexamic acid (100 mL) at the time point after anesthesia induction and before skin incision, and again at the time point after the neutralization of heparin. Patients in the non-tranexamic acid group were given 100 mL of normal saline at the corresponding time points. Then the coagulation indexes of the two groups were analyzed. The activated blood clotting time (ACT) of the two groups was within the normal range before CPB, while four coagulation indexes including prothrombin time (PT), activated partial thromboplastin time (APTT), international normalized ratio (INR), and fibrinogen (FIB) increased significantly after surgery; the PT and INR of the tranexamic acid group declined remarkably after surgery. All the findings suggest that the application of tranexamic acid in heart valve replacement surgery under CPB can effectively reduce intraoperative and postoperative blood loss. © The Author(s) 2016.

  2. [Study of Determination of Oil Mixture Components Content Based on Quasi-Monte Carlo Method].

    PubMed

    Wang, Yu-tian; Xu, Jing; Liu, Xiao-fei; Chen, Meng-han; Wang, Shi-tao

    2015-05-01

    Gasoline, kerosene, and diesel are processed from crude oil over different distillation ranges: 35~205 °C for gasoline, 140~250 °C for kerosene, and 180~370 °C for diesel. At the same time, the carbon chain lengths of the different mineral oils differ: C7 to C11 for gasoline, C12 to C15 for kerosene, and C15 to C18 for diesel. Recognition and quantitative measurement of the three kinds of mineral oil are based on the different fluorescence spectra formed by their different carbon number distributions. Mineral oil pollution occurs frequently, so monitoring mineral oil content in the ocean is very important. A new method is proposed for determining the component contents of a mineral oil mixture with overlapping spectra: the characteristic peak power integration of the three-dimensional fluorescence spectrum is calculated by the quasi-Monte Carlo method, combined with an optimization algorithm that solves for the optimal number of characteristic peaks and the range of the integration region, and the resulting nonlinear equations are solved by the BFGS method (a rank-two update quasi-Newton method named after Broyden, Fletcher, Goldfarb, and Shanno). The accumulated peak power over the determined points in the selected area is sensitive to small changes in the fluorescence spectral line, so the measurement of small changes in component content is sensitive. At the same time, compared with single-point measurement, measurement sensitivity is improved because averaging over many selected points reduces the influence of random error. Three-dimensional fluorescence spectra and fluorescence contour spectra of the single mineral oils and of the mixture are measured, taking kerosene, diesel, and gasoline as research objects, with each single mineral oil regarded as a whole rather than in terms of its individual components. Six characteristic peaks are selected for characteristic peak power integration to determine the component contents of the gasoline-kerosene-diesel mixture by the optimization algorithm. Compared with single-point measurement by the peak method and the mean method, measurement sensitivity is improved about 50 times. This high-precision measurement of mixture component contents provides a practical algorithm for the direct determination of component contents in mixtures with overlapping spectra, without chemical separation.
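
    The core numerical step, integrating the power of a spectral peak over a selected region with a quasi-Monte Carlo point set, can be sketched as follows. The Halton sequence and the synthetic Gaussian peak are assumptions for illustration, not the paper's measured spectra:

```python
import math

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_peak_power(peak, n=8192):
    """Quasi-Monte Carlo estimate of the integral of `peak` over the unit
    square, using a 2-D Halton point set (bases 2 and 3). This stands in
    for the characteristic-peak power integration described above."""
    s = sum(peak(halton(i, 2), halton(i, 3)) for i in range(1, n + 1))
    return s / n

# Synthetic fluorescence peak: isotropic Gaussian at (0.5, 0.5), sigma 0.1.
# Its integral is 2*pi*sigma^2, about 0.0628.
sigma = 0.1
peak = lambda x, y: math.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / (2 * sigma ** 2))
print(round(qmc_peak_power(peak), 4))
```

    Because the low-discrepancy points cover the integration region evenly, the estimate converges faster than plain Monte Carlo for a smooth peak, which is what makes the accumulated peak power a stable quantity to feed into the nonlinear equations.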

  3. Influencing Food Selection with Point-of-Choice Nutrition Information.

    ERIC Educational Resources Information Center

    Davis-Chervin, Doryn; And Others

    1985-01-01

    Evaluated the effectiveness of a point-of-choice nutrition information program that used a comprehensive set of communication functions in its design. Results indicate that point-of-choice information without direct tangible rewards can (to a moderate degree) modify food-selection behavior of cafeteria patrons. (JN)

  4. A time-series study of sick building syndrome: chronic, biotoxin-associated illness from exposure to water-damaged buildings.

    PubMed

    Shoemaker, Ritchie C; House, Dennis E

    2005-01-01

    The human health risk for chronic illnesses involving multiple body systems following inhalation exposure to the indoor environments of water-damaged buildings (WDBs) has remained poorly characterized and the subject of intense controversy. The current study assessed the hypothesis that exposure to the indoor environments of WDBs with visible microbial colonization was associated with illness. The study used a cross-sectional design with assessments at five time points, and the interventions of cholestyramine (CSM) therapy, exposure avoidance following therapy, and reexposure to the buildings after illness resolution. The methodological approach included oral administration of questionnaires, medical examinations, laboratory analyses, pulmonary function testing, and measurements of visual function. Of the 21 study volunteers, 19 completed assessment at each of the five time points. Data at Time Point 1 indicated multiple symptoms involving at least four organ systems in all study participants, a restrictive respiratory condition in four participants, and abnormally low visual contrast sensitivity (VCS) in 18 participants. Serum leptin levels were abnormally high and alpha melanocyte stimulating hormone (MSH) levels were abnormally low. Assessments at Time Point 2, following 2 weeks of CSM therapy, indicated a highly significant improvement in health status. Improvement was maintained at Time Point 3, which followed exposure avoidance without therapy. Reexposure to the WDBs resulted in illness reacquisition in all participants within 1 to 7 days. Following another round of CSM therapy, assessments at Time Point 5 indicated a highly significant improvement in health status. The group-mean number of symptoms decreased from 14.9+/-0.8 S.E.M. at Time Point 1 to 1.2+/-0.3 S.E.M., and the VCS deficit of approximately 50% at Time Point 1 was fully resolved. Leptin and MSH levels showed statistically significant improvement. 
The results indicated that CSM was an effective therapeutic agent, that VCS was a sensitive and specific indicator of neurologic function, and that illness involved systemic and hypothalamic processes. Although the results supported the general hypothesis that illness was associated with exposure to the WDBs, this conclusion was tempered by several study limitations. Exposure to specific agents was not demonstrated, study participants were not randomly selected, and double-blinding procedures were not used. Additional human and animal studies are needed to confirm this conclusion, investigate the role of complex mixtures of bacteria, fungi, mycotoxins, endotoxins, and antigens in illness causation, and characterize modes of action. Such data will improve the assessment of human health risk from chronic exposure to WDBs.

  5. Online Event Reconstruction in the CBM Experiment at FAIR

    NASA Astrophysics Data System (ADS)

    Akishina, Valentina; Kisel, Ivan

    2018-02-01

    Targeting rare observables, the CBM experiment will operate at high interaction rates of up to 10 MHz, which is unprecedented in heavy-ion experiments so far. It requires a novel free-streaming readout system and a new concept of data processing. The huge data rates of the CBM experiment will be reduced online to the recordable rate before saving the data to mass storage. Full collision reconstruction and selection will be performed online in a dedicated processor farm. In order to perform efficient event selection online, a clean sample of particles has to be provided by the reconstruction package called First Level Event Selection (FLES). The FLES reconstruction and selection package consists of several modules: track finding, track fitting, event building, short-lived particle finding, and event selection. Since detector measurements also contain time information, the event building is done at all stages of the reconstruction process. The input data are distributed within the FLES farm in the form of time-slices. A time-slice is reconstructed in parallel across processor cores. After all tracks of the whole time-slice are found and fitted, they are collected into clusters of tracks originating from common primary vertices, which are then fitted, thus identifying the interaction points. Secondary tracks are associated with primary vertices according to their estimated production time. After that, short-lived particles are found and the full event building process is finished. The last stage of the FLES package is the selection of events according to the requested trigger signatures. The event reconstruction procedure and the results of its application to simulated collisions in the CBM detector setup are presented and discussed in detail.

  6. In Situ Bioremediation of Chlorinated Solvents Source Areas with Enhanced Mass Transfer

    DTIC Science & Technology

    2009-11-01

    [Fragment of the report's list of figures: "… cells within NAPL Area 3"; Figure 6, "Impact of whey injection on pH in the treatment cells …"; "… locations following 1% and 10% whey injections"; Figure 12, "Total chlorinated ethene concentration contours at select time points"; Figure 13, "Relationship between interfacial tension reduction and enhanced solubility of TCE DNAPL as a function of whey …"]

  7. Group Selection Methods and Contribution to the West Point Leadership Development System (WPLDS)

    DTIC Science & Technology

    2015-08-01

    Group work in an academic setting can consist of projects or problems students can work on collaboratively. Although pedagogical studies … helping students develop intangibles like communication, time management, organization, leadership, interpersonal, and relationship skills.

  8. Quantifying selection in evolving populations using time-resolved genetic data

    NASA Astrophysics Data System (ADS)

    Illingworth, Christopher J. R.; Mustonen, Ville

    2013-01-01

    Methods that uncover the molecular basis of the adaptive evolution of a population address some important biological questions. For example, the problem of identifying genetic variants which underlie drug resistance, a question of importance for the treatment of pathogens and of cancer, can be understood as a matter of inferring selection. One difficulty in the inference of variants under positive selection is the potential complexity of the underlying evolutionary dynamics, which may involve an interplay between several contributing processes, including mutation, recombination and genetic drift. A source of progress may be found in modern sequencing technologies, which confer an increasing ability to gather information about evolving populations, granting a window into these complex processes. One particularly interesting development is the ability to follow evolution as it happens, by whole-genome sequencing of an evolving population at multiple time points. We here discuss how to use time-resolved sequence data to draw inferences about the evolutionary dynamics of a population under study. We begin by reviewing our earlier analysis of a yeast selection experiment, in which we used a deterministic evolutionary framework to identify alleles under selection for heat tolerance, and to quantify the selection acting upon them. Considering further the use of advanced intercross lines to measure selection, we here extend this framework to cover scenarios of simultaneous recombination and selection, and of two driver alleles with multiple linked neutral, or passenger, alleles, where the driver pair evolves under an epistatic fitness landscape. We conclude by discussing the limitations of the approach presented and outlining future challenges for such methodologies.
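
    The deterministic framework mentioned above, quantifying selection from allele-frequency time series, can be sketched as follows. This assumes simple haploid selection with no drift, mutation, or linkage (a simplification of the authors' full analysis): under that model the logit of the allele frequency is linear in time with slope log(1 + s), so a least-squares line recovers s.

```python
import math

def estimate_selection(freqs):
    """Estimate a selection coefficient s from a time series of allele
    frequencies sampled once per generation, assuming deterministic
    haploid selection (no drift, mutation or linkage). logit(x_t) is
    then linear in t with slope log(1 + s)."""
    t = list(range(len(freqs)))
    y = [math.log(x / (1.0 - x)) for x in freqs]  # logit scale
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
            / sum((ti - tbar) ** 2 for ti in t)
    return math.exp(slope) - 1.0

# Forward-simulate 20 generations at s = 0.1, then re-infer s.
s_true, x = 0.1, 0.05
traj = [x]
for _ in range(20):
    x = x * (1 + s_true) / (1 + s_true * x)
    traj.append(x)
print(round(estimate_selection(traj), 3))  # → 0.1
```

    With real sequencing data the frequencies are noisy and the interplay with recombination and linked passengers, as discussed in the abstract, requires the richer models the authors develop.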

  9. Method and apparatus for aligning a solar concentrator using two lasers

    DOEpatents

    Diver Jr., Richard Boyer

    2003-07-22

    A method and apparatus are provided for aligning the facets of a solar concentrator. A first laser directs a first laser beam onto a selected facet of the concentrator such that a target board positioned adjacent to the first laser at approximately one focal length behind the focal point of the concentrator is illuminated by the beam after reflection thereof off of the selected facet. A second laser, located adjacent to the vertex of the optical axis of the concentrator, is used to direct a second laser beam onto the target board at a target point thereon. By adjusting the selected facet to cause the first beam to illuminate the target point on the target board produced by the second beam, the selected facet can be brought into alignment with the target point. These steps are repeated for other selected facets of the concentrator, as necessary, to provide overall alignment of the concentrator.

  10. Accuracy of heart rate variability estimation by photoplethysmography using a smartphone: Processing optimization and fiducial point selection.

    PubMed

    Ferrer-Mileo, V; Guede-Fernandez, F; Fernandez-Chimeno, M; Ramos-Castro, J; Garcia-Gonzalez, M A

    2015-08-01

    This work compares several fiducial points for detecting the arrival of a new pulse in a photoplethysmographic signal acquired with either the built-in camera of a smartphone or a photoplethysmograph. An optimization process for the signal preprocessing stage has also been carried out. Finally, we characterize the error produced when the best cutoff frequencies and fiducial point are used for the smartphones and the photoplethysmograph, and examine whether the error of the smartphones can reasonably be explained by variations in pulse transit time. The results have revealed that the peak of the first derivative and the minimum of the second derivative of the pulse wave have the lowest error. Moreover, for these points, high-pass filtering the signal between 0.1 and 0.8 Hz and low-pass filtering around 2.7 Hz or 3.5 Hz are the best cutoff frequencies. Finally, the error in smartphones is slightly higher than in a photoplethysmograph.
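
    The two best-performing fiducial points, the maximum of the first derivative and the minimum of the second derivative, can be located with a few lines of NumPy. The synthetic pulse shape and the 250 Hz sampling rate below are illustrative assumptions, not the study's data:

```python
import numpy as np

fs = 250.0                         # sampling rate in Hz (an assumption)
t = np.arange(0.0, 1.0, 1.0 / fs)
# Synthetic PPG-like pulse: a sharp systolic wave plus a smaller, slower
# secondary wave. This shape is an illustrative stand-in for real data.
pulse = np.exp(-((t - 0.30) / 0.05) ** 2) + 0.4 * np.exp(-((t - 0.45) / 0.10) ** 2)

d1 = np.gradient(pulse, t)         # first derivative
d2 = np.gradient(d1, t)            # second derivative

# Fiducial points reported as most accurate in the study:
i_d1 = int(np.argmax(d1))          # steepest point of the rising edge
i_d2 = int(np.argmin(d2))          # sharpest curvature, near the peak
i_peak = int(np.argmax(pulse))
print(round(t[i_d1], 3), round(t[i_d2], 3))
```

    Both points sit on or just before the systolic peak, which is why they track pulse arrival more stably than the peak itself.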

  11. Extending the Search for Neutrino Point Sources with IceCube above the Horizon

    NASA Astrophysics Data System (ADS)

    Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Alba, J. L. Bazo; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Carson, M.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; de Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; Deyoung, T.; Díaz-Vélez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Lehmann, R.; Lennarz, D.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. 
S.; McParland, C. P.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miyamoto, H.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Paul, L.; de Los Heros, C. Pérez; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Toale, P. A.; Tooker, J.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; van Santen, J.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.

    2009-11-01

    Point source searches with the IceCube neutrino telescope have been restricted to one hemisphere, due to the exclusive selection of upward going events as a way of rejecting the atmospheric muon background. We show that the region above the horizon can be included by suppressing the background through energy-sensitive cuts. This improves the sensitivity above PeV energies, previously not accessible for declinations of more than a few degrees below the horizon due to the absorption of neutrinos in Earth. We present results based on data collected with 22 strings of IceCube, extending its field of view and energy reach for point source searches. No significant excess above the atmospheric background is observed in a sky scan and in tests of source candidates. Upper limits are reported, which for the first time cover point sources in the southern sky up to EeV energies.

  12. Accelerator Vacuum Protection System

    NASA Astrophysics Data System (ADS)

    Barua, Pradip; Kothari, Ashok; Archunan, M.; Joshi, Rajan

    2012-11-01

    A new and elaborate automatic vacuum protection system using a fast acting valve has been installed to avoid accidental venting of the accelerator from the experimental chamber side. To cover all the beam lines and to reduce the system cost, it has been installed at a common point from which all seven beam lines originate. The signals are obtained by placing fast-response pressure sensing gauges (HV SENSOR) near all the experimental stations. The closing time of the fast valve is 10 milliseconds. The fast closing system protects only one vacuum line at a time. At IUAC, we have seven beam lines, so one sensor was placed in each of the beam lines near the experimental chamber and a multiplexer was incorporated into the fast closing system. At the time of an experiment, the sensor of the active beam line is selected through the multiplexer and the fast closing valve is interlocked with the selected sensor. As soon as the pressure sensor senses a pressure rise beyond a selected threshold, the signal is transferred and the fast valve closes within 10 to 12 milliseconds.

  13. Four-dimensional modeling of recent vertical movements in the area of the southern California uplift

    USGS Publications Warehouse

    Vanicek, Petr; Elliot, Michael R.; Castle, Robert O.

    1979-01-01

    This paper describes an analytical technique that utilizes scattered geodetic relevelings and tide-gauge records to portray Recent vertical crustal movements that may have been characterized by spasmodic changes in velocity. The technique is based on the fitting of a time-varying algebraic surface of prescribed degree to the geodetic data treated as tilt elements and to tide-gauge readings treated as point movements. Desired variations in time can be selected as any combination of powers of vertical movement velocity and episodic events. The state of the modeled vertical displacement can be shown for any number of dates for visual display. Statistical confidence limits of the modeled displacements, derived from the density of measurements in both space and time, line length, and accuracy of input data, are also provided. The capabilities of the technique are demonstrated on selected data from the region of the southern California uplift. 

  14. Changes in health selection of obesity among Mexican immigrants: a binational examination.

    PubMed

    Ro, Annie; Fleischer, Nancy

    2014-12-01

    Health selection is often measured by comparing the health of more recent immigrants to the native born of their new host country. However, this comparison fails to take into account two important factors: (1) that changes in the health profile of sending countries may impact the health of immigrants over time, and (2) that the best comparison group for health selection would be people who remain in the country of origin. Obesity represents an important health outcome that may be best understood by taking into account these two factors. Using nationally-representative datasets from Mexico and the US, we examined differences in obesity-related health selection, by gender, in 2000 and 2012. We calculated prevalence ratios from log-binomial models to compare the risk of obesity among recent immigrants to the US to Mexican nationals with varying likelihood of migration, in order to determine changes in health selection over time. Among men in 2000, we found little difference in obesity status between recent immigrants to the US and Mexican non-migrants. However, in 2012, Mexican men who were the least likely to migrate had higher obesity prevalence than recent immigrants, which may reflect emerging health selection. The trends for women, however, indicated differences in obesity status between recent Mexican immigrants and non-migrants at both time points. In both 2000 and 2012, Mexican national women had significantly higher obesity prevalence than recent immigrant women, with the biggest difference between recent immigrants and Mexican women who were least likely to migrate. There was also indication that selection increased with time for women, as the differences between Mexican nationals and recent immigrants to the US grew from 2000 to 2012. Our study is among the first to use a binational dataset to examine the impact of health selectivity, over time, on obesity. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Design of automation tools for management of descent traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Nedell, William

    1988-01-01

    The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor, which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan view traffic display presented on a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively in combination with a mouse input device to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer-generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system, consisting of the descent advisor algorithm, a library of aircraft performance models, national airspace system data bases, and interactive display software, has been implemented on a workstation made by Sun Microsystems, Inc. It is planned to use this configuration in operational evaluations at an en route center.

  16. Development and validation of a set of six adaptable prognosis prediction (SAP) models based on time-series real-world big data analysis for patients with cancer receiving chemotherapy: A multicenter case crossover study

    PubMed Central

    Kanai, Masashi; Okamoto, Kazuya; Yamamoto, Yosuke; Yoshioka, Akira; Hiramoto, Shuji; Nozaki, Akira; Nishikawa, Yoshitaka; Yamaguchi, Daisuke; Tomono, Teruko; Nakatsui, Masahiko; Baba, Mika; Morita, Tatsuya; Matsumoto, Shigemi; Kuroda, Tomohiro; Okuno, Yasushi; Muto, Manabu

    2017-01-01

    Background We aimed to develop an adaptable prognosis prediction model that could be applied at any time point during the treatment course for patients with cancer receiving chemotherapy, by applying time-series real-world big data. Methods Between April 2004 and September 2014, 4,997 patients with cancer who had received systemic chemotherapy were registered in a prospective cohort database at the Kyoto University Hospital. Of these, 2,693 patients with a death record were eligible for inclusion and divided into training (n = 1,341) and test (n = 1,352) cohorts. In total, 3,471,521 laboratory data at 115,738 time points, representing 40 laboratory items [e.g., white blood cell counts and albumin (Alb) levels] that were monitored for 1 year before the death event, were applied for constructing prognosis prediction models. All possible prediction models comprising three different items from the 40 laboratory items (40C3 = 9,880) were generated in the training cohort, and model selection was performed in the test cohort. The fitness of the selected models was externally validated in validation cohorts from three independent settings. Results A prognosis prediction model utilizing Alb, lactate dehydrogenase, and neutrophils was selected based on a strong ability to predict death events within 1–6 months, and a set of six prediction models corresponding to 1, 2, 3, 4, 5, and 6 months was developed. The area under the curve (AUC) ranged from 0.852 for the 1-month model to 0.713 for the 6-month model. External validation supported the performance of these models. Conclusion By applying time-series real-world big data, we successfully developed a set of six adaptable prognosis prediction models for patients with cancer receiving chemotherapy. PMID:28837592
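
    The exhaustive enumeration step described above (every model built from three distinct items out of 40) can be sketched directly; the item names below are placeholders, not the study's actual laboratory panel:

```python
from itertools import combinations
from math import comb

# Placeholder names standing in for the 40 monitored laboratory items
lab_items = [f"lab_item_{i:02d}" for i in range(40)]

# One candidate model per unordered triple of distinct items, as in the abstract
candidate_models = list(combinations(lab_items, 3))

print(len(candidate_models))  # 9880, matching 40C3 in the abstract
print(comb(40, 3))            # 9880
```

Each triple would then be fit in the training cohort and ranked in the test cohort; only the enumeration itself is shown here.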

  17. Random Time Identity Based Firewall In Mobile Ad hoc Networks

    NASA Astrophysics Data System (ADS)

    Suman, Patel, R. B.; Singh, Parvinder

    2010-11-01

    A mobile ad hoc network (MANET) is a self-organizing network of mobile routers and associated hosts connected by wireless links. MANETs are highly flexible and adaptable but at the same time are highly prone to security risks due to the open medium, dynamically changing network topology, cooperative algorithms, and lack of centralized control. A firewall is an effective means of protecting a local network from network-based security threats and forms a key component of MANET security architecture. This paper presents a review of firewall implementation techniques in MANETs and their relative merits and demerits. A new approach is proposed to select MANET nodes at random for firewall implementation. This approach randomly selects a new node as the firewall after a fixed time interval, based on critical values of parameters such as remaining power backup. It effectively balances power and resource utilization across the entire MANET because the responsibility of implementing the firewall is shared equally among all nodes. At the same time it improves security against outside attacks, since an intruder cannot predict the entry point into the MANET due to the random selection of nodes for firewall implementation.
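
    A minimal sketch of the selection step, assuming a node qualifies when its remaining power backup exceeds a critical threshold; the threshold value, node representation, and fallback rule are illustrative assumptions, not details from the paper:

```python
import random

def select_firewall_node(node_power, min_power=0.3, rng=random):
    """Pick the next firewall node uniformly at random from the nodes whose
    remaining power backup meets the critical value (threshold assumed)."""
    candidates = [node for node, power in node_power.items() if power >= min_power]
    if not candidates:
        # Degenerate case: no node qualifies, fall back to the best-powered node
        return max(node_power, key=node_power.get)
    return rng.choice(candidates)

# After each fixed time interval the firewall role rotates to a fresh node,
# so an intruder cannot predict the network's enforcement point.
nodes = {"n1": 0.9, "n2": 0.1, "n3": 0.6, "n4": 0.4}
firewall = select_firewall_node(nodes)
```

Because every qualified node is equally likely to be chosen each round, the energy cost of packet filtering is spread across the whole network rather than draining one gateway node.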

  18. Emerging late adolescent friendship networks and Big Five personality traits: a social network approach.

    PubMed

    Selfhout, Maarten; Burk, William; Branje, Susan; Denissen, Jaap; van Aken, Marcel; Meeus, Wim

    2010-04-01

    The current study focuses on the emergence of friendship networks among just-acquainted individuals, investigating the effects of Big Five personality traits on friendship selection processes. Sociometric nominations and self-ratings on personality traits were gathered from 205 late adolescents (mean age=19 years) at 5 time points during the first year of university. SIENA, a novel multilevel statistical procedure for social network analysis, was used to examine effects of Big Five traits on friendship selection. Results indicated that friendship networks between just-acquainted individuals became increasingly more cohesive within the first 3 months and then stabilized. Whereas individuals high on Extraversion tended to select more friends than those low on this trait, individuals high on Agreeableness tended to be selected more as friends. In addition, individuals tended to select friends with similar levels of Agreeableness, Extraversion, and Openness.

  19. Functionalization of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Khare, Bishun N. (Inventor); Meyyappan, Meyya (Inventor)

    2007-01-01

    Method and system for functionalizing a collection of carbon nanotubes (CNTs). A selected precursor gas (e.g., H2, F2, or CnHm) is irradiated to provide a cold plasma of selected target particles, such as atomic H or F, in a first chamber. The target particles are directed toward an array of CNTs located in a second chamber while suppressing transport of ultraviolet radiation to the second chamber. A CNT array is functionalized with the target particles, at or below room temperature, to a point of saturation, in an exposure time interval no longer than about 30 sec.

  20. Functionalization of carbon nanotubes

    NASA Technical Reports Server (NTRS)

    Khare, Bishun N. (Inventor); Meyyappan, Meyya (Inventor)

    2007-01-01

    Method and system for functionalizing a collection of carbon nanotubes (CNTs). A selected precursor gas (e.g., H2, F2, or CnHm) is irradiated to provide a cold plasma of selected target particles, such as atomic H or F, in a first chamber. The target particles are directed toward an array of CNTs located in a second chamber while suppressing transport of ultraviolet radiation to the second chamber. A CNT array is functionalized with the target particles, at or below room temperature, to a point of saturation, in an exposure time interval no longer than about 30 sec.

  1. Selection and trajectory design to mission secondary targets

    NASA Astrophysics Data System (ADS)

    Victorino Sarli, Bruno; Kawakatsu, Yasuhiro

    2017-02-01

    Recently, with new trajectory design techniques and the use of low-thrust propulsion systems, missions have become more efficient and cheaper with respect to propellant. As a way to increase a mission's value and scientific return, secondary targets close to the main trajectory are often added at the cost of a small change in the transfer trajectory. Owing to their large number, importance, and the ease of performing a flyby, asteroids are commonly used as such targets. This work uses Primer Vector theory to define the direction and magnitude of the thrust for a minimum fuel consumption problem. The design of a low-thrust trajectory with a midcourse asteroid flyby is challenging not only for the low-thrust problem solution, but also with respect to the selection of a target and its flyby point. Currently more than 700,000 minor bodies have been identified, which generates a very large number of possible flyby points. This work uses a combination of reachability, a reference orbit, and linear theory to select appropriate candidates, drastically reducing the simulation time; the candidates are later included in the main trajectory and optimized. Two test cases are presented using the aforementioned selection process and optimization to add and design a secondary flyby for a mission whose primary objectives are a 3200 Phaethon flyby and a 25143 Itokawa rendezvous.

  2. Simplified Night Sky Display System

    NASA Technical Reports Server (NTRS)

    Castellano, Timothy P.

    2010-01-01

    A document describes a simple night sky display system that is portable, lightweight, and includes, at most, four components in its simplest configuration. The total volume of this system is no more than 10^6 cm^3 in a disassembled state, and it weighs no more than 20 kilograms. The four basic components are a computer, a projector, a spherical light-reflecting first surface and mount, and a spherical second surface for display. The computer has temporary or permanent memory that contains at least one signal representing one or more images of a portion of the sky when viewed from an arbitrary position, and at a selected time. The first surface reflector is spherical and receives and reflects the image from the projector onto the second surface, which is shaped like a hemisphere. This system may be used to simulate selected portions of the night sky, preserving the appearance and kinesthetic sense of the celestial sphere surrounding the Earth or any other point in space. These points will then show motions of planets, stars, galaxies, nebulae, and comets that are visible from that position. The images may be motionless, or move with the passage of time. The array of images presented, and vantage points in space, are limited only by the computer software that is available, or can be developed. An optional approach is to have the screen (second surface) self-inflate by means of gas within the enclosed volume, and then self-regulate that gas in order to support itself without any other mechanical support.

  3. Foveal analysis and peripheral selection during active visual sampling

    PubMed Central

    Ludwig, Casimir J. H.; Davies, J. Rhys; Eckstein, Miguel P.

    2014-01-01

    Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. PMID:24385588

  4. Selective Fragmentation of Biorefinery Corncob Lignin into p-Hydroxycinnamic Esters with a Supported ZnMoO4 Catalyst.

    PubMed

    Wang, Shuizhong; Gao, Wa; Li, Helong; Xiao, Ling-Ping; Sun, Run-Cang; Song, Guoyong

    2018-04-16

    Lignin is the largest renewable resource of bio-aromatics, and catalytic fragmentation of lignin into phenolic monomers is increasingly recognized as an important starting point for lignin valorization. Herein, we report that zinc molybdate (ZnMoO4) supported on MCM-41 can catalyze the fragmentation of biorefinery technical lignin, enzymatic mild acidolysis lignin, and native lignin derived from corncob, giving lignin oily products containing 15 to 37.8 wt% phenolic monomers with high selectivities towards methyl coumarate 1 and methyl ferulate 2 (up to 78%). The effects of key parameters, such as solvent, reaction temperature, time, H2 pressure, and catalyst dosage, on activity and selectivity were examined. Loss of zinc from the catalyst is identified as a primary cause of deactivation, and catalytic activity and selectivity can be preserved over at least six cycles by thermal calcination. The high selectivity to compounds 1 and 2 makes them easy to separate and purify from the lignin oily product, thus providing sustainable monomers for the preparation of functional polyetheresters and polyesters. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Comparative MicroRNA Expression Patterns in Fibroblasts after Low and High Doses of Low-LET Radiation Exposure

    NASA Technical Reports Server (NTRS)

    Maes, Olivier C.; Xu, Suying; Hada, Megumi; Wu, Honglu; Wang, Eugenia

    2007-01-01

    Exposure to ionizing radiation causes DNA damage to cells, and provokes a plethora of cellular responses controlled by unique gene-directed signaling pathways. MicroRNAs (miRNAs) are small (22-nucleotide), non-coding RNAs which functionally silence gene expression by either degrading the messages or inhibiting translation. Here we investigate radiation-dependent changes in these negative regulators by comparing the expression patterns of all 462 known human miRNAs in fibroblasts, after exposure to low (0.1 Gy) or high (2 Gy) doses of X-rays at 30 min, 2, 6 and 24 hrs post-treatment. The expression patterns of microRNAs after low and high doses of radiation show a similar qualitative down-regulation trend at early (0.5 hr) and late (24 hr) time points, with a quantitatively steeper slope following the 2 Gy exposures. Interestingly, an interruption of this downward trend is observed after the 2 Gy exposure, i.e. a significant up-regulation of microRNAs at 2 hrs, then reverting to the downward trend by 6 hrs; this interruption at the intermediate time point was not observed with the 0.1 Gy exposure. At the early time point (0.5 hr), candidate gene targets of selected down-regulated microRNAs, common to both 0.1 and 2 Gy exposures, were those functioning in chromatin remodeling. Candidate target genes of unique up-regulated microRNAs seen at the 2 hr intermediate time point, after the 2 Gy exposure only, are those involved in cell death signaling. Finally, putative target genes of down-regulated microRNAs seen at the late (24 hr) time point after either dose of radiation are those involved in the up-regulation of DNA repair, cell signaling and homeostasis. Thus we hypothesize that after radiation exposure, microRNAs acting as hub negative regulators for unique signaling pathways needed to be down-regulated so as to de-repress their target genes for the proper cellular responses, including DNA repair and cell maintenance. The unique microRNAs up-regulated at 2 hr after 2 Gy suggest a cellular response to functionally suppress the apoptotic death signaling reflex after exposure to high dose radiation. Further analyses with transcriptome and global proteomic profiling will validate the reciprocal expression of signature microRNAs selected in our radiation-exposed cells, and their candidate target gene families, and test our hypothesis that unique radiation-specific microRNAs are keys in governing signaling responses for damage control of this environmental hazard.

  6. Experimental and simulation study results of an Adaptive Video Guidance System /AVGS/

    NASA Technical Reports Server (NTRS)

    Schappell, R. T.; Knickerbocker, R. L.

    1975-01-01

    Studies relating to stellar-body exploration programs have pointed out the need for an adaptive guidance scheme capable of providing automatic real-time guidance and site selection capability. For the case of a planetary lander, without such guidance, targeting is limited to what are believed to be generally benign areas in order to ensure a reasonable landing-success probability. Typically, the Mars Viking Lander will be jeopardized by obstacles exceeding 22 centimeters in diameter. The benefits of on-board navigation and real-time selection of a landing site and obstacle avoidance have been demonstrated by the Apollo lunar landings, in which man performed the surface sensing and steering functions. Therefore, an Adaptive Video Guidance System (AVGS) has been developed, bread-boarded, and flown on a six-degree-of-freedom simulator.

  7. Circulating intact and cleaved forms of the urokinase-type plasminogen activator receptor: biological variation, reference intervals and clinical useful cut-points.

    PubMed

    Thurison, Tine; Christensen, Ib J; Lund, Ida K; Nielsen, Hans J; Høyer-Hansen, Gunilla

    2015-01-15

    High levels of circulating forms of the urokinase-type plasminogen activator receptor (uPAR) are significantly associated with poor prognosis in cancer patients. Our aim was to determine biological variations and reference intervals of the uPAR forms in blood and, in addition, to test the clinical relevance of using these as cut-points in colorectal cancer (CRC) prognosis. uPAR forms were measured in citrated and EDTA plasma samples using time-resolved fluorescence immunoassays. Diurnal, intra- and inter-individual variations were assessed in plasma samples from cohorts of healthy individuals. Reference intervals were determined in plasma from healthy individuals randomly selected from a Danish multi-center cross-sectional study. A cohort of CRC patients was selected from the same cross-sectional study. The reference intervals showed a slight increase with age, and women had ~20% higher levels. The intra- and inter-individual variations were ~10% and ~20-30%, respectively, and the measured levels of the uPAR forms were within the determined 95% reference intervals. No diurnal variation was found. Applying the upper normal limit of the reference intervals as a cut-point for dichotomizing CRC patients revealed significantly decreased overall survival for patients with levels of any uPAR form above this cut-point. The reference intervals for the different uPAR forms are valid, and the upper normal limits are clinically relevant cut-points for CRC prognosis. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Predicting Intracerebral Hemorrhage Expansion With Noncontrast Computed Tomography: The BAT Score.

    PubMed

    Morotti, Andrea; Dowlatshahi, Dar; Boulouis, Gregoire; Al-Ajlan, Fahad; Demchuk, Andrew M; Aviv, Richard I; Yu, Liyang; Schwab, Kristin; Romero, Javier M; Gurol, M Edip; Viswanathan, Anand; Anderson, Christopher D; Chang, Yuchiao; Greenberg, Steven M; Qureshi, Adnan I; Rosand, Jonathan; Goldstein, Joshua N

    2018-05-01

    Although the computed tomographic angiography spot sign performs well as a biomarker for hematoma expansion (HE), computed tomographic angiography is not routinely performed in the emergency setting. We developed and validated a score to predict HE based on noncontrast computed tomography (NCCT) findings in spontaneous acute intracerebral hemorrhage. After developing the score in a single-center cohort of patients with intracerebral hemorrhage (n=344), we validated it in a large clinical trial population (n=954) and in a multicenter intracerebral hemorrhage cohort (n=241). The following NCCT markers of HE were analyzed: hypodensities, blend sign, hematoma shape and density, and fluid level. HE was defined as hematoma growth >6 mL or >33%. The score was created using the estimates from multivariable logistic regression after final predictors were selected from bootstrap samples. Presence of blend sign (odds ratio, 3.09; 95% confidence interval [CI], 1.49-6.40; P=0.002), any intrahematoma hypodensity (odds ratio, 4.54; 95% CI, 2.44-8.43; P<0.0001), and time from onset to NCCT <2.5 hours (odds ratio, 3.73; 95% CI, 1.86-7.51; P=0.0002) were predictors of HE. A 5-point score was created (BAT score: 1 point for blend sign, 2 points for any hypodensity, and 2 points for timing of NCCT <2.5 hours). The c statistic was 0.77 (95% CI, 0.70-0.83) in the development population, and 0.65 (95% CI, 0.61-0.68) and 0.70 (95% CI, 0.64-0.77) in the 2 validation cohorts. A dichotomized score (BAT score ≥3) predicted HE with 0.50 sensitivity and 0.89 specificity. An easy-to-use 5-point prediction score can identify subjects at high risk of HE with good specificity and accuracy. This tool requires just a baseline NCCT scan and may help select patients with intracerebral hemorrhage for antiexpansion clinical trials. © 2018 American Heart Association, Inc.
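
    The scoring rule in the abstract is simple enough to restate as code; the function names and the boolean encoding of the imaging markers are illustrative, not from the paper:

```python
def bat_score(blend_sign: bool, hypodensity: bool, onset_to_ncct_hours: float) -> int:
    """BAT score per the abstract: 1 point for blend sign, 2 points for any
    intrahematoma hypodensity, 2 points for NCCT <2.5 h from symptom onset."""
    score = 0
    if blend_sign:
        score += 1
    if hypodensity:
        score += 2
    if onset_to_ncct_hours < 2.5:
        score += 2
    return score

def high_expansion_risk(score: int) -> bool:
    """Dichotomized cut-point from the abstract: BAT score >= 3."""
    return score >= 3

print(bat_score(True, True, 2.0))  # 5, the maximum possible score
```

At the reported cut-point (≥3), any patient with early imaging plus either marker, or with both markers, would be flagged as high risk.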

  9. New Observations of Subarcsecond Photospheric Bright Points

    NASA Technical Reports Server (NTRS)

    Berger, T. E.; Schrijver, C. J.; Shine, R. A.; Tarbell, T. D.; Title, A. M.; Scharmer, G.

    1995-01-01

    We have used an interference filter centered at 4305 A within the bandhead of the CH radical (the 'G band') and real-time image selection at the Swedish Vacuum Solar Telescope on La Palma to produce very high contrast images of subarcsecond photospheric bright points at all locations on the solar disk. During the 6 day period of 1993 September 15-20 we observed active region NOAA 7581 from its appearance on the East limb to a near-disk-center position on September 20. A total of 1804 bright points were selected for analysis from the disk center image using feature extraction image processing techniques. The measured Full Width at Half Maximum (FWHM) distribution of the bright points in the image is lognormal with a modal value of 220 km (0.30 arcsec) and an average value of 250 km (0.35 arcsec). The smallest measured bright point diameter is 120 km (0.17 arcsec) and the largest is 600 km (0.69 arcsec). Approximately 60% of the measured bright points are circular (eccentricity approx. 1.0), the average eccentricity is 1.5, and the maximum eccentricity corresponding to filigree in the image is 6.5. The peak contrast of the measured bright points is normally distributed. The contrast distribution variance is much greater than the measurement accuracy, indicating a large spread in intrinsic bright-point contrast. When referenced to an averaged 'quiet-Sun' area in the image, the modal contrast is 29% and the maximum value is 75%; when referenced to an average intergranular lane brightness in the image, the distribution has a modal value of 61% and a maximum of 119%. The bin-averaged contrast of G-band bright points is constant across the entire measured size range. The measured area of the bright points, corrected for pixelation and selection effects, covers about 1.8% of the total image area. Large pores and micropores occupy an additional 2% of the image area, implying a total area fraction of magnetic proxy features in the image of 3.8%. We discuss the implications of this area fraction measurement in the context of previously published measurements which show that typical active region plage has a magnetic filling factor on the order of 10% or greater. The results suggest that in the active region analyzed here, less than 50% of the small-scale magnetic flux tubes are demarcated by visible proxies such as bright points or pores.

  10. Selective synthesis of human milk fat-style structured triglycerides from microalgal oil in a microfluidic reactor packed with immobilized lipase

    DOE PAGES

    Wang, Jun; Liu, Xi; Wang, Xu -Dong; ...

    2016-08-18

    Human milk fat-style structured triacylglycerols were produced from microalgal oil in a continuous microfluidic reactor packed with immobilized lipase for the first time. A remarkably high conversion efficiency was demonstrated in the microreactor, with reaction time reduced by 8 times, the Michaelis constant decreased 10 times, and the lipase reuse times increased 2.25-fold compared to those in a batch reactor. In addition, the content of palmitic acid at the sn-2 position (89.0%) and polyunsaturated fatty acids at the sn-1,3 positions (81.3%) are slightly improved compared to the product in a batch reactor. The increase of melting point (1.7 °C) and decrease of crystallizing point (3 °C) implied a higher quality product was produced using the microfluidic technology. The main cost can be reduced from $212.3 to $14.6 per batch with the microreactor. Altogether, the microfluidic bioconversion technology is promising for modified functional lipids production, allowing for a cost-effective approach to produce high-value microalgal coproducts.

  11. Selective synthesis of human milk fat-style structured triglycerides from microalgal oil in a microfluidic reactor packed with immobilized lipase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jun; Liu, Xi; Wang, Xu -Dong

    Human milk fat-style structured triacylglycerols were produced from microalgal oil in a continuous microfluidic reactor packed with immobilized lipase for the first time. A remarkably high conversion efficiency was demonstrated in the microreactor, with reaction time reduced by 8 times, the Michaelis constant decreased 10 times, and the lipase reuse times increased 2.25-fold compared to those in a batch reactor. In addition, the content of palmitic acid at the sn-2 position (89.0%) and polyunsaturated fatty acids at the sn-1,3 positions (81.3%) are slightly improved compared to the product in a batch reactor. The increase of melting point (1.7 °C) and decrease of crystallizing point (3 °C) implied a higher quality product was produced using the microfluidic technology. The main cost can be reduced from $212.3 to $14.6 per batch with the microreactor. Altogether, the microfluidic bioconversion technology is promising for modified functional lipids production, allowing for a cost-effective approach to produce high-value microalgal coproducts.

  12. Stabilization of time domain acoustic boundary element method for the exterior problem avoiding the nonuniqueness.

    PubMed

    Jang, Hae-Won; Ih, Jeong-Guon

    2013-03-01

    The time domain boundary element method (TBEM) for calculating the exterior sound field using the Kirchhoff integral has difficulties with non-uniqueness and exponential divergence. In this work, a method to stabilize the TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low-order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers from exponential divergence due to the remaining unstable high-order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example for a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.

  13. Selective synthesis of human milk fat-style structured triglycerides from microalgal oil in a microfluidic reactor packed with immobilized lipase.

    PubMed

    Wang, Jun; Liu, Xi; Wang, Xu-Dong; Dong, Tao; Zhao, Xing-Yu; Zhu, Dan; Mei, Yi-Yuan; Wu, Guo-Hua

    2016-11-01

    Human milk fat-style structured triacylglycerols were produced from microalgal oil in a continuous microfluidic reactor packed with immobilized lipase for the first time. A remarkably high conversion efficiency was demonstrated in the microreactor, with reaction time reduced by 8 times, the Michaelis constant decreased 10 times, and the lipase reuse times increased 2.25-fold compared to those in a batch reactor. In addition, the content of palmitic acid at the sn-2 position (89.0%) and polyunsaturated fatty acids at the sn-1,3 positions (81.3%) are slightly improved compared to the product in a batch reactor. The increase of melting point (1.7°C) and decrease of crystallizing point (3°C) implied a higher quality product was produced using the microfluidic technology. The main cost can be reduced from $212.3 to $14.6 per batch with the microreactor. Overall, the microfluidic bioconversion technology is promising for modified functional lipids production, allowing for a cost-effective approach to produce high-value microalgal coproducts. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. A pointing facilitation system for motor-impaired users combining polynomial smoothing and time-weighted gradient target prediction models.

    PubMed

    Blow, Nikolaus; Biswas, Pradipta

    2017-01-01

    As computers become more and more essential for everyday life, people who cannot use them are missing out on an important tool. The predominant method of interaction with a screen is a mouse, and difficulty in using a mouse can be a huge obstacle for people who would otherwise gain great value from using a computer. If mouse pointing were to be made easier, then a large number of users may be able to begin using a computer efficiently where they may previously have been unable to. The present article aimed to improve pointing speeds for people with arm or hand impairments. The authors investigated different smoothing and prediction models on a stored data set involving 25 people, and the best of these algorithms were chosen. A web-based prototype was developed combining a polynomial smoothing algorithm with a time-weighted gradient target prediction model. The adapted interface gave an average improvement of 13.5% in target selection times in a 10-person study of representative users of the system. A demonstration video of the system is available at https://youtu.be/sAzbrKHivEY.
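
    A rough sketch of the two ingredients named in the title, under assumed forms: the paper's actual polynomial order, weighting scheme, and decay constant are not given in the abstract, so the values below are illustrative only:

```python
import numpy as np

def smooth_pointer(ts, xs, degree=2):
    """Polynomial smoothing: fit a low-order polynomial to recent pointer
    samples and return the smoothed coordinate at the latest time stamp."""
    coeffs = np.polyfit(ts, xs, degree)
    return float(np.polyval(coeffs, ts[-1]))

def predicted_velocity(ts, xs, ys, decay=0.5):
    """Time-weighted gradient: average the successive movement gradients,
    weighting recent segments exponentially more (decay is an assumption)."""
    ts, xs, ys = (np.asarray(a, float) for a in (ts, xs, ys))
    dt = np.diff(ts)
    w = np.exp(-decay * (ts[-1] - ts[1:]))  # newer segments get larger weights
    vx = np.sum(w * np.diff(xs) / dt) / np.sum(w)
    vy = np.sum(w * np.diff(ys) / dt) / np.sum(w)
    return vx, vy
```

Extrapolating along the predicted velocity toward the candidate on-screen targets is what would let an interface of this kind expand, or snap the cursor to, the most likely target.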

  15. Rotating Arc Jet Test Model: Time-Accurate Trajectory Heat Flux Replication in a Ground Test Environment

    NASA Technical Reports Server (NTRS)

    Laub, Bernard; Grinstead, Jay; Dyakonov, Artem; Venkatapathy, Ethiraj

    2011-01-01

    Though arc jet testing has been the proven method employed for development testing and certification of TPS and TPS instrumentation, the operational aspects of arc jets limit testing to selected, but constant, conditions. Flight, on the other hand, produces time-varying entry conditions in which the heat flux increases, peaks, and recedes as a vehicle descends through an atmosphere. As a result, we are unable to "test as we fly." Attempts to replicate the time-dependent aerothermal environment of atmospheric entry by varying the arc jet facility operating conditions during a test have proven to be difficult, expensive, and only partially successful. A promising alternative is to rotate the test model exposed to a constant-condition arc jet flow to yield a time-varying test condition at a point on a test article (Fig. 1). The model shape and rotation rate can be engineered so that the heat flux at a point on the model replicates the predicted profile for a particular point on a flight vehicle. This simple concept will enable, for example, calibration of the TPS sensors on the Mars Science Laboratory (MSL) aeroshell for anticipated flight environments.

  16. Smoking selectivity among Mexican immigrants to the United States using binational data, 1999-2012.

    PubMed

    Fleischer, Nancy L; Ro, Annie; Bostean, Georgiana

    2017-04-01

    Mexican immigrants have lower smoking rates than US-born Mexicans, which some scholars attribute to health selection: that individuals who migrate are healthier and have better health behaviors than their non-migrant counterparts. Few studies have examined smoking selectivity using binational data, and none have assessed whether selectivity remains constant over time. This study combined binational data from the US and Mexico to examine: 1) the extent to which recent Mexican immigrants (<10 years) in the US are selected with regard to cigarette smoking compared to non-migrants in Mexico, and 2) whether smoking selectivity varied between 2000 and 2012, a period of declining tobacco use in Mexico and the US. We combined repeated cross-sectional US data (n=10,901) on adult (ages 20-64) Mexican immigrants and US-born Mexicans from the 1999/2000 and 2011/2012 National Health Interview Survey, and repeated cross-sectional Mexican data on non-migrants (n=67,188) from the 2000 Encuesta Nacional de Salud and 2012 Encuesta Nacional de Salud y Nutrición. Multinomial logistic regressions, stratified by gender, predicted smoking status (current, former, never) by migration status. At both time points, we found lower overall smoking prevalence among recent US immigrants compared to non-migrants for both genders. Moreover, from the regression analyses, smoking selectivity remained constant between 2000 and 2012 among men, but increased among women. These findings suggest that Mexican immigrants are indeed selected on smoking compared to their non-migrating counterparts, but that selectivity is subject to smoking conditions in the sending countries and may not remain constant over time. Copyright © 2017 Elsevier Inc. All rights reserved.
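
    The multinomial regression at the core of the analysis can be sketched as a tiny softmax fit; the study used standard statistical software on survey data, so the synthetic data, learning rate, and variable meanings below are illustrative assumptions only:

```python
import numpy as np

def fit_multinomial(X, y, n_classes, lr=0.1, steps=2000):
    """Softmax (multinomial logistic) regression fit by batch gradient descent."""
    X = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])  # add intercept
    Y = np.eye(n_classes)[np.asarray(y)]                         # one-hot targets
    W = np.zeros((X.shape[1], n_classes))
    for _ in range(steps):
        Z = X @ W
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)                        # softmax probs
        W -= lr * X.T @ (P - Y) / len(X)                         # log-loss gradient
    return W

def predict_proba(W, X):
    """Class probabilities for new predictor rows under the fitted weights."""
    X = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])
    Z = X @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    return P / P.sum(axis=1, keepdims=True)

# Made-up illustration: binary predictor (e.g., migrant vs non-migrant) and a
# three-level outcome (e.g., never/former/current smoker). Not the study's data.
X = [[0]] * 100 + [[1]] * 100
y = [0] * 70 + [1] * 20 + [2] * 10 + [0] * 10 + [1] * 20 + [2] * 70
W = fit_multinomial(X, y, n_classes=3)
```

With a categorical predictor like this, the fitted probabilities approach the empirical class proportions within each predictor group, which is exactly the comparison of smoking-status distributions by migration status the abstract describes.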

  17. Modeling Homophily Over Time With an Actor–Partner Interdependence Model

    PubMed Central

    Popp, Danielle; Laursen, Brett; Kerr, Margaret; Stattin, Håkan; Burk, William J.

    2009-01-01

    Selection and socialization have been implicated in friendship homophily, but the relative contributions of each are difficult to measure simultaneously because of the nonindependent nature of the data. To address this problem, the authors applied a multiple-groups longitudinal actor–partner interdependence model (D. A. Kashy & D. A. Kenny, 2000) for distinguishable dyads to 3 consecutive years of intoxication frequency data from a large community-based sample of Swedish youth. Participants, ranging from 12 to 18 years old (M = 14.35, SD = 1.56) at the start of the study, included 902 adolescents (426 girls and 476 boys) with at least one reciprocated friend during at least one time point and 212 adolescents (84 girls and 128 boys) without reciprocated friends at any time. Similarity estimates indicated strong effects for selection and socialization in friends’ intoxication frequency. Over time, younger members of these dyads had less stable patterns of intoxication than older members, largely because younger partners changed their drinking behavior to resemble that of older partners. PMID:18605832

  18. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...

  19. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...

  20. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...

  1. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...

  2. 49 CFR 236.303 - Control circuits for signals, selection through circuit controller operated by switch points or...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... circuit controller operated by switch points or by switch locking mechanism. 236.303 Section 236.303... § 236.303 Control circuits for signals, selection through circuit controller operated by switch points or by switch locking mechanism. The control circuit for each aspect with indication more favorable...

  3. Automated liver sampling using a gradient dual-echo Dixon-based technique.

    PubMed

    Bashir, Mustafa R; Dale, Brian M; Merkle, Elmar M; Boll, Daniel T

    2012-05-01

    Magnetic resonance spectroscopy of the liver requires input from a physicist or physician at the time of acquisition to ensure proper voxel selection, while in multiecho chemical shift imaging, numerous regions of interest must be manually selected in order to ensure analysis of a representative portion of the liver parenchyma. A fully automated technique could improve workflow by selecting representative portions of the liver prior to human analysis. Complete volumes from three-dimensional gradient dual-echo acquisitions with two-point Dixon reconstruction acquired at 1.5 and 3 T were analyzed in 100 subjects, using an automated liver sampling algorithm based on ratio pairs calculated from signal intensity image data as fat-only/water-only and log(in-phase/opposed-phase) on a voxel-by-voxel basis. Using different gridding variations of the algorithm, the average correctly sampled liver volume ranged from 527 to 733 mL. The average percentage of the sample located within the liver ranged from 95.4 to 97.1%, whereas the average incorrect volume selected was 16.5-35.4 mL (2.9-4.6%). Average run time was 19.7-79.0 s. The algorithm consistently selected large samples of the hepatic parenchyma with small amounts of erroneous extrahepatic sampling, and run times were feasible for execution on an MRI system console during exam acquisition. Copyright © 2011 Wiley Periodicals, Inc.
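    The voxel-wise ratio pairs named above can be sketched with NumPy. The volume shapes, values, and the final masking rule are illustrative assumptions; only the two ratio definitions and the standard two-point Dixon recombination come from the record:

    ```python
    import numpy as np

    # Hypothetical in-phase (IP) and opposed-phase (OP) magnitude volumes from a
    # two-point Dixon acquisition (shapes and values are illustrative only).
    rng = np.random.default_rng(1)
    ip = rng.uniform(50.0, 200.0, size=(8, 8, 8))
    op = rng.uniform(10.0, 150.0, size=(8, 8, 8))

    # Standard two-point Dixon recombination: water = (IP+OP)/2, fat = (IP-OP)/2.
    water = (ip + op) / 2.0
    fat = np.abs(ip - op) / 2.0

    # The two per-voxel ratio pairs named in the abstract.
    fat_water = fat / water        # fat-only / water-only
    log_ip_op = np.log(ip / op)    # log(in-phase / opposed-phase)

    # A crude "representative tissue" mask: voxels whose ratios lie near the
    # volume medians (the paper's actual sampling criteria are not reproduced).
    mask = (np.abs(fat_water - np.median(fat_water)) < 0.2) & \
           (np.abs(log_ip_op - np.median(log_ip_op)) < 0.3)
    ```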

  4. Use of routine clinical multimodality imaging in a rabbit model of osteoarthritis--part I.

    PubMed

    Bouchgua, M; Alexander, K; d'Anjou, M André; Girard, C A; Carmel, E Norman; Beauchamp, G; Richard, H; Laverty, S

    2009-02-01

    To evaluate in vivo the evolution of osteoarthritis (OA) lesions temporally in a rabbit model of OA with clinically available imaging modalities: computed radiography (CR), helical single-slice computed tomography (CT), and 1.5 tesla (T) magnetic resonance imaging (MRI). Imaging was performed on knees of anesthetized rabbits [10 anterior cruciate ligament transection (ACLT) and contralateral sham joints and six control rabbits] at baseline and at intervals up to 12 weeks post-surgery. Osteophytosis, subchondral bone sclerosis, bone marrow lesions (BMLs), femoropatellar effusion and articular cartilage were assessed. CT had the highest sensitivity (90%) and specificity (91%) to detect osteophytes. A significant increase in total joint osteophyte score occurred at all time-points post-operatively in the ACLT group alone. BMLs were identified and occurred most commonly in the lateral femoral condyle of the ACLT joints and were not identified in the tibia. A significant increase in joint effusion was present in the ACLT joints until 8 weeks after surgery. Bone sclerosis or cartilage defects were not reliably assessed with the selected imaging modalities. Combined, clinically available CT and 1.5 T MRI allowed the assessment of most of the characteristic lesions of OA and at early time-points in the development of the disease. However, the selected 1.5 T MRI sequences and acquisition times did not permit the detection of cartilage lesions in this rabbit OA model.

  5. Homogenising time series: Beliefs, dogmas and facts

    NASA Astrophysics Data System (ADS)

    Domonkos, P.

    2010-09-01

    For obtaining reliable information about climate change and climate variability, the use of high-quality data series is essential, and one basic tool for quality improvement is the statistical homogenisation of observed time series. In recent decades a large number of homogenisation methods have been developed, but the real effects of their application on time series are still not entirely known. The ongoing COST HOME project (COST ES0601) is devoted to revealing the real impacts of homogenisation methods in more detail and with higher confidence than before. As part of the COST activity, a benchmark dataset was built whose characteristics closely approach those of real networks of observed time series. This dataset offers a much better opportunity than ever before to test the wide variety of homogenisation methods and to analyse the real effects of selected theoretical recommendations. The author believes that several old theoretical rules have to be re-evaluated. Some examples of the open questions: a) Can statistically detected change-points be accepted only with the confirmation of metadata information? b) Do semi-hierarchic algorithms for detecting multiple change-points in time series function effectively in practice? c) Is it appropriate to limit the spatial comparison of candidate series to up to five other series in the neighbourhood? Empirical results, both from the COST benchmark and from other experiments, show that real observed time series usually include several inhomogeneities of different sizes. Small inhomogeneities appear to be part of the climatic variability, so the pure application of the classic theory that change-points of observed time series can be found and corrected one by one is impossible. However, after homogenisation the linear trends, seasonal changes and long-term fluctuations of time series are usually much closer to reality than in the raw time series. The developers and users of homogenisation methods have to bear in mind that the eventual purpose of homogenisation is not to find change-points, but to obtain observed time series whose statistical properties characterise well the climate change and climate variability.

  6. Registered Replication Report: Rand, Greene, and Nowak (2012).

    PubMed

    Bouwmeester, S; Verkoeijen, P P J L; Aczel, B; Barbosa, F; Bègue, L; Brañas-Garza, P; Chmura, T G H; Cornelissen, G; Døssing, F S; Espín, A M; Evans, A M; Ferreira-Santos, F; Fiedler, S; Flegr, J; Ghaffari, M; Glöckner, A; Goeschl, T; Guo, L; Hauser, O P; Hernan-Gonzalez, R; Herrero, A; Horne, Z; Houdek, P; Johannesson, M; Koppel, L; Kujal, P; Laine, T; Lohse, J; Martins, E C; Mauro, C; Mischkowski, D; Mukherjee, S; Myrseth, K O R; Navarro-Martínez, D; Neal, T M S; Novakova, J; Pagà, R; Paiva, T O; Palfi, B; Piovesan, M; Rahal, R-M; Salomon, E; Srinivasan, N; Srivastava, A; Szaszi, B; Szollosi, A; Thor, K Ø; Tinghög, G; Trueblood, J S; Van Bavel, J J; van 't Veer, A E; Västfjäll, D; Warner, M; Wengström, E; Wills, J; Wollbrant, C E

    2017-05-01

    In an anonymous 4-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns (e.g., Rand et al., 2014) and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and 7.5% in the forced-delay condition who did not adhere to the time constraints), and we observed a difference in contributions of -0.37 percentage points compared with an 8.6 percentage point difference calculated from the original data. Analyzing the data as the original article did, including data only for participants who complied with the time constraints, the RRR observed a 10.37 percentage point difference in contributions compared with a 15.31 percentage point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation.

  7. Registered Replication Report: Rand, Greene, and Nowak (2012)

    PubMed Central

    Bouwmeester, S.; Verkoeijen, P. P. J. L.; Aczel, B.; Barbosa, F.; Bègue, L.; Brañas-Garza, P.; Chmura, T. G. H.; Cornelissen, G.; Døssing, F. S.; Espín, A. M.; Evans, A. M.; Ferreira-Santos, F.; Fiedler, S.; Flegr, J.; Ghaffari, M.; Glöckner, A.; Goeschl, T.; Guo, L.; Hauser, O. P.; Hernan-Gonzalez, R.; Herrero, A.; Horne, Z.; Houdek, P.; Johannesson, M.; Koppel, L.; Kujal, P.; Laine, T.; Lohse, J.; Martins, E. C.; Mauro, C.; Mischkowski, D.; Mukherjee, S.; Myrseth, K. O. R.; Navarro-Martínez, D.; Neal, T. M. S.; Novakova, J.; Pagà, R.; Paiva, T. O.; Palfi, B.; Piovesan, M.; Rahal, R.-M.; Salomon, E.; Srinivasan, N.; Srivastava, A.; Szaszi, B.; Szollosi, A.; Thor, K. Ø.; Tinghög, G.; Trueblood, J. S.; Van Bavel, J. J.; van ‘t Veer, A. E.; Västfjäll, D.; Warner, M.; Wengström, E.; Wills, J.; Wollbrant, C. E.

    2017-01-01

    In an anonymous 4-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns (e.g., Rand et al., 2014) and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and 7.5% in the forced-delay condition who did not adhere to the time constraints), and we observed a difference in contributions of −0.37 percentage points compared with an 8.6 percentage point difference calculated from the original data. Analyzing the data as the original article did, including data only for participants who complied with the time constraints, the RRR observed a 10.37 percentage point difference in contributions compared with a 15.31 percentage point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation. PMID:28475467

  8. An office-based emergencies course for third-year dental students.

    PubMed

    Wald, David A; Wang, Alvin; Carroll, Gerry; Trager, Jonathan; Cripe, Jane; Curtis, Michael

    2013-08-01

    Although uncommon, medical emergencies do occur in the dental office setting. This article describes the development and implementation of an office-based emergencies course for third-year dental students. The course reviews the basic management of selected medical emergencies. Background information is provided that further highlights the importance of proper training to manage medical emergencies in the dental office. Details regarding course development, implementation, logistics, and teaching points are highlighted. The article provides a starting point from which dental educators can modify and adapt this course and its objectives to fit their needs or resources. This is a timely topic that should benefit both dental students and dental educators.

  9. Image Capture and Display Based on Embedded Linux

    NASA Astrophysics Data System (ADS)

    Weigong, Zhang; Suran, Di; Yongxiang, Zhang; Liming, Li

    For the requirement of building a highly reliable communication system, SpaceWire was selected for the integrated electronic system, and its performance needed to be tested. As part of the testing work, the goal of this paper is to transmit image data from a CMOS camera through SpaceWire and display real-time images on a graphical user interface built with Qt on an embedded Linux & ARM development platform. A point-to-point transmission mode was chosen; the test results showed that the images received at the two communication ends were consistent. This suggests that SpaceWire can transmit the data reliably.

  10. Re-starting smoking in the postpartum period after receiving a smoking cessation intervention: a systematic review.

    PubMed

    Jones, Matthew; Lewis, Sarah; Parrott, Steve; Wormall, Stephen; Coleman, Tim

    2016-06-01

    In pregnant smoking cessation trial participants, to estimate (1) among women abstinent at the end of pregnancy, the proportion who re-start smoking at time-points afterwards (primary analysis) and (2) among all trial participants, the proportion smoking at the end of pregnancy and at selected time-points during the postpartum period (secondary analysis). Trials identified from two Cochrane reviews plus searches of Medline and EMBASE. Twenty-seven trials were included. The included trials were randomized or quasi-randomized trials of within-pregnancy cessation interventions given to smokers who reported abstinence both at end of pregnancy and at one or more defined time-points after birth. Outcomes were validated biochemically and self-reported continuous abstinence from smoking and 7-day point prevalence abstinence. The primary random-effects meta-analysis used longitudinal data to estimate mean pooled proportions of re-starting smoking; a secondary analysis used cross-sectional data to estimate the mean proportions smoking at different postpartum time-points. Subgroup analyses were performed on biochemically validated abstinence. The pooled mean proportion re-starting at 6 months postpartum was 43% [95% confidence interval (CI) = 16-72%, I² = 96.7%] (11 trials, 571 abstinent women). The pooled mean proportion smoking at the end of pregnancy was 87% (95% CI = 84-90%, I² = 93.2%) and 94% (95% CI = 92-96%, I² = 88%) at 6 months postpartum (23 trials, 9262 trial participants). Findings were similar when using biochemically validated abstinence. In clinical trials of smoking cessation interventions during pregnancy only 13% are abstinent at term. Of these, 43% re-start by 6 months postpartum. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
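    The pooled proportions above come from a random-effects meta-analysis. Below is a minimal sketch of DerSimonian-Laird pooling of proportions on the logit scale, a standard approach; the review's exact model and software are not reproduced here, and the trial counts in the example are hypothetical:

    ```python
    import math

    def pooled_proportion_dl(events, totals):
        """Random-effects (DerSimonian-Laird) pooled proportion, logit scale."""
        logits, variances = [], []
        for e, n in zip(events, totals):
            e = min(max(e, 0.5), n - 0.5)          # continuity correction at 0 or n
            p = e / n
            logits.append(math.log(p / (1 - p)))
            variances.append(1 / e + 1 / (n - e))  # variance of the logit
        w = [1 / v for v in variances]
        fixed = sum(wi * yi for wi, yi in zip(w, logits)) / sum(w)
        q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logits))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(logits) - 1)) / c)   # between-study variance
        w_re = [1 / (v + tau2) for v in variances]
        pooled = sum(wi * yi for wi, yi in zip(w_re, logits)) / sum(w_re)
        return 1 / (1 + math.exp(-pooled))             # back to a proportion

    # Hypothetical per-trial counts of women re-starting / women abstinent.
    p = pooled_proportion_dl([40, 50, 30], [100, 120, 90])
    ```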

  11. An Approximate Markov Model for the Wright-Fisher Diffusion and Its Application to Time Series Data.

    PubMed

    Ferrer-Admetlla, Anna; Leuenberger, Christoph; Jensen, Jeffrey D; Wegmann, Daniel

    2016-06-01

    The joint and accurate inference of selection and demography from genetic data is considered a particularly challenging question in population genetics, since both processes may lead to very similar patterns of genetic diversity. However, additional information for disentangling these effects may be obtained by observing changes in allele frequencies over multiple time points. Such data are common in experimental evolution studies, as well as in the comparison of ancient and contemporary samples. Leveraging this information, however, has been computationally challenging, particularly when considering multilocus data sets. To overcome these issues, we introduce a novel, discrete approximation for diffusion processes, termed mean transition time approximation, which preserves the long-term behavior of the underlying continuous diffusion process. We then derive this approximation for the particular case of inferring selection and demography from time series data under the classic Wright-Fisher model and demonstrate that our approximation is well suited to describe allele trajectories through time, even when only a few states are used. We then develop a Bayesian inference approach to jointly infer the population size and locus-specific selection coefficients with high accuracy and further extend this model to also infer the rates of sequencing errors and mutations. We finally apply our approach to recent experimental data on the evolution of drug resistance in influenza virus, identifying likely targets of selection and finding evidence for much larger viral population sizes than previously reported. Copyright © 2016 by the Genetics Society of America.
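    For orientation, the classic Wright-Fisher model with selection that the approximation targets can be forward-simulated in a few lines. This is a toy simulator of the underlying model, not the paper's mean transition time approximation; the parameter values are arbitrary:

    ```python
    import numpy as np

    def wright_fisher_step(p, N, s, rng):
        """One generation of the Wright-Fisher model with selection.

        p: current allele frequency, N: diploid population size (2N copies),
        s: selection coefficient favoring the focal allele.
        """
        # Deterministic change due to selection, then binomial genetic drift.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        return rng.binomial(2 * N, p_sel) / (2 * N)

    # Simulate a 50-generation allele-frequency trajectory from p0 = 0.1.
    rng = np.random.default_rng(42)
    traj = [0.1]
    for _ in range(50):
        traj.append(wright_fisher_step(traj[-1], N=500, s=0.05, rng=rng))
    ```

    Time series inference methods like the one described observe a few points of such a trajectory and work backwards to N and s.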

  12. Satellite Power Systems (SPS) concept definition study. Volume 2: SPS system requirements

    NASA Technical Reports Server (NTRS)

    Hanley, G.

    1978-01-01

    Collected data reflected the level of definition resulting from the evaluation of a broad spectrum of SPS (satellite power systems) concepts. As the various concepts matured, these requirements were updated to reflect the requirements identified for the projected satellite system/subsystem point design(s). The study established several candidate concepts which were presented to provide a basis for the selection of one or two approaches that would be given a more comprehensive examination. The two selected concepts were expanded and constitute the selected system point designs. The identified system/subsystem requirements were emphasized, and information on the selected point designs was provided.

  13. Bayesian dynamic modeling of time series of dengue disease case counts

    PubMed Central

    López-Quílez, Antonio; Torres-Prieto, Alexander

    2017-01-01

    The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model’s short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small values of the mean absolute percentage error at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941
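    The out-of-sample metric mentioned above, the mean absolute percentage error, is straightforward to compute (standard definition; the paper's exact evaluation windows are not reproduced, and the counts below are toy values):

    ```python
    def mape(actual, predicted):
        """Mean absolute percentage error, in percent (actual values must be
        nonzero, as weekly dengue case counts typically are)."""
        return 100.0 * sum(abs((a - p) / a)
                           for a, p in zip(actual, predicted)) / len(actual)

    # Toy weekly dengue counts vs. one-week-ahead predictions.
    err = mape([100, 200, 150], [110, 180, 150])
    ```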

  14. Causes of cine image quality deterioration in cardiac catheterization laboratories.

    PubMed

    Levin, D C; Dunham, L R; Stueve, R

    1983-10-01

    Deterioration of cineangiographic image quality can result from malfunctions or technical errors at a number of points along the cine imaging chain: generator and automatic brightness control, x-ray tube, x-ray beam geometry, image intensifier, optics, cine camera, cine film, film processing, and cine projector. Such malfunctions or errors can result in loss of image contrast, loss of spatial resolution, improper control of film optical density (brightness), or some combination thereof. While the electronic and photographic technology involved is complex, physicians who perform cardiac catheterization should be conversant with the problems and what can be done to solve them. Catheterization laboratory personnel have control over a number of factors that directly affect image quality, including radiation dose rate per cine frame, kilovoltage or pulse width (depending on type of automatic brightness control), cine run time, selection of small or large focal spot, proper object-intensifier distance and beam collimation, aperture of the cine camera lens, selection of cine film, processing temperature, processing immersion time, and selection of developer.

  15. Process for growing silicon carbide whiskers by undercooling

    DOEpatents

    Shalek, Peter D.

    1987-01-01

    A method of growing silicon carbide whiskers, especially in the .beta. form, using a heating schedule wherein the temperature of the atmosphere in the growth zone of a furnace is first raised to or beyond the growth temperature and then cooled to or below the growth temperature to induce nucleation of whiskers at catalyst sites at a desired point in time.

  16. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    PubMed

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of conditions such as epilepsy, Alzheimer's disease, and sleep disorders. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, a simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals. The experimental results show that the method achieves 99.90, 99.80 and 100% for classification accuracy, sensitivity and specificity, respectively.
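    A hedged sketch of the three-step pipeline with scikit-learn, on synthetic data: the summary statistics stand in for the paper's SRS-derived features, and an ordinary SVM stands in for the LS_SVM classifier (both substitutions are assumptions for illustration):

    ```python
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical EEG data: 100 epochs of 1024 time-domain samples each.
    epochs = rng.normal(size=(100, 1024))
    labels = rng.integers(0, 2, 100)   # e.g. 0 = normal, 1 = epileptic

    # 1) Simple random sampling: draw random time-domain samples per epoch and
    #    summarise them with basic statistics (feature choice is an assumption).
    idx = rng.choice(1024, size=128, replace=False)
    samples = epochs[:, idx]
    feats = np.column_stack([samples.mean(1), samples.std(1),
                             samples.min(1), samples.max(1),
                             np.median(samples, 1)])

    # 2) Sequential feature selection down to 3 features,
    # 3) then an SVM classifier on the selected features.
    sfs = SequentialFeatureSelector(SVC(), n_features_to_select=3).fit(feats, labels)
    clf = SVC().fit(sfs.transform(feats), labels)
    ```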

  17. Improvement of Automated POST Case Success Rate Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Zwack, Matthew R.; Dees, Patrick D.

    2017-01-01

    During early conceptual design of complex systems, concept down selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal [1]. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near instantaneous throughput of vehicle cases [2]. Additional work was then completed to improve the DOE process by utilizing a graph theory based approach to connect similar design points [3]. The conclusion of the previous work illustrated the utility of the graph theory approach for completing a DOE through POST. However, this approach was still dependent upon the use of random repetitions to generate seed points for the graph. As noted in [3], only 8% of these random repetitions resulted in converged trajectories. This ultimately limits the ability of the random repetitions method to confidently approach the global optimum for a given vehicle case in a reasonable amount of time. 
With only an 8% pass rate, tens or hundreds of thousands of repetitions may be needed to be confident that the best repetition is at least close to the global optimum. However, typical design study time constraints require that fewer repetitions be attempted, sometimes resulting in seed points that have only a handful of successful completions. If a small number of successful repetitions are used to generate a seed point, the graph method may inherit some inaccuracies as it chains DOE cases from non-globally-optimal seed points. This creates inherent noise in the graph data, which can limit the accuracy of the resulting surrogate models. For this reason, the goal of this work is to improve the seed point generation method and ultimately the accuracy of the resulting POST surrogate model. The work focuses on increasing the case pass rate for seed point generation.

  18. a Hadoop-Based Algorithm of Generating dem Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information on terrain and surface objects within a short time, from which high-quality Digital Elevation Models (DEMs) can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms to distinguish terrain points from other points, followed by a procedure that interpolates the selected points to turn them into DEM data. The whole procedure takes a long time and large computing resources because of the high point density, a problem addressed by a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were used as the original data to generate a DEM with a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid, and that the non-Hadoop implementation can achieve high performance when memory is big enough, whereas the Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
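    The map/reduce structure of such a DEM gridding algorithm can be sketched in miniature, with pure Python standing in for Hadoop; the cell size, per-cell averaging as the interpolation rule, and the sample points are all assumptions for illustration:

    ```python
    from collections import defaultdict

    def map_points(points, cell):
        """Map step: emit (grid_cell, elevation) pairs for each LiDAR point."""
        for x, y, z in points:
            yield (int(x // cell), int(y // cell)), z

    def reduce_cells(pairs):
        """Reduce step: average elevations per cell to form DEM grid values."""
        acc = defaultdict(list)
        for key, z in pairs:
            acc[key].append(z)
        return {key: sum(zs) / len(zs) for key, zs in acc.items()}

    # Three toy (x, y, z) terrain points on a 1.0-unit grid.
    points = [(0.5, 0.5, 10.0), (0.8, 0.2, 12.0), (1.5, 0.5, 20.0)]
    dem = reduce_cells(map_points(points, cell=1.0))  # {(0, 0): 11.0, (1, 0): 20.0}
    ```

    In an actual Hadoop job the map and reduce functions run in parallel across HDFS blocks, which is where the performance-cost advantage for very large point sets comes from.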

  19. Precise γ-ray timing and radio observations of 17 FERMI γ-ray pulsars

    DOE PAGES

    Ray, Paul S.; Kerr, M.; Parent, D.; ...

    2011-04-29

    Here, we present precise phase-connected pulse timing solutions for 16 γ-ray-selected pulsars recently discovered using the Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope plus one very faint radio pulsar (PSR J1124–5916) that is more effectively timed with the LAT. We describe the analysis techniques including a maximum likelihood method for determining pulse times of arrival from unbinned photon data. A major result of this work is improved position determinations, which are crucial for multiwavelength follow-up. For most of the pulsars, we overlay the timing localizations on X-ray images from Swift and describe the status of X-ray counterpart associations. We report glitches measured in PSRs J0007+7303, J1124–5916, and J1813–1246. We analyze a new 20 ks Chandra ACIS observation of PSR J0633+0632 that reveals an arcminute-scale X-ray nebula extending to the south of the pulsar. We were also able to precisely localize the X-ray point source counterpart to the pulsar and find a spectrum that can be described by an absorbed blackbody or neutron star atmosphere with a hard power-law component. Another Chandra ACIS image of PSR J1732–3131 reveals a faint X-ray point source at a location consistent with the timing position of the pulsar. Finally, we present a compilation of new and archival searches for radio pulsations from each of the γ-ray-selected pulsars as well as a new Parkes radio observation of PSR J1124–5916 to establish the γ-ray to radio phase offset.

  20. Target loads of atmospheric sulfur deposition for the protection and recovery of acid-sensitive streams in the Southern Blue Ridge Province.

    PubMed

    Sullivan, Timothy J; Cosby, Bernard J; Jackson, William A

    2011-11-01

    An important tool in the evaluation of acidification damage to aquatic and terrestrial ecosystems is the critical load (CL), which represents the steady-state level of acidic deposition below which ecological damage would not be expected to occur, according to current scientific understanding. A deposition load intended to be protective of a specified resource condition at a particular point in time is generally called a target load (TL). The CL or TL for protection of aquatic biota is generally based on maintaining surface water acid neutralizing capacity (ANC) at an acceptable level. This study included calibration and application of the watershed model MAGIC (Model of Acidification of Groundwater in Catchments) to estimate the target sulfur (S) deposition load for the protection of aquatic resources at several future points in time in 66 generally acid-sensitive watersheds in the southern Blue Ridge province of North Carolina and two adjoining states. Potential future change in nitrogen leaching is not considered. Estimated TLs for S deposition ranged from zero (ecological objective not attainable by the specified point in time) to values many times greater than current S deposition depending on the selected site, ANC endpoint, and evaluation year. For some sites, one or more of the selected target ANC critical levels (0, 20, 50, 100μeq/L) could not be achieved by the year 2100 even if S deposition was reduced to zero and maintained at that level throughout the simulation. Many of these highly sensitive streams were simulated by the model to have had preindustrial ANC below some of these target values. For other sites, the watershed soils contained sufficiently large buffering capacity that even very high sustained levels of atmospheric S deposition would not reduce stream ANC below common damage thresholds. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Insulin sensitivity indices: a proposal of cut-off points for simple identification of insulin-resistant subjects.

    PubMed

    Radikova, Z; Koska, J; Huckova, M; Ksinantova, L; Imrich, R; Vigas, M; Trnovec, T; Langer, P; Sebokova, E; Klimes, I

    2006-05-01

    The demanding measurement of insulin sensitivity by clamp methods is not practical for identifying insulin-resistant subjects in the general population. Other approaches, such as fasting- or oral glucose tolerance test-derived insulin sensitivity indices, have been proposed and validated against the euglycemic clamp. Nevertheless, a lack of reference values for these indices prevents their wider use in epidemiological studies and clinical practice. The aim of our study was therefore to define the cut-off points of insulin resistance indices as well as the ranges of the most frequently obtained values for selected indices. A standard 75 g oral glucose tolerance test was carried out in 1156 subjects from a Caucasian rural population with no previous evidence of diabetes or other dysglycemias. Insulin resistance/sensitivity indices (HOMA-IR, HOMA-IR2, ISI Cederholm, and ISI Matsuda) were calculated. The 75th percentile, used as the cut-off point to define insulin resistance, corresponded to a HOMA-IR of 2.29 and a HOMA-IR2 of 1.21; the corresponding 25th percentile cut-offs for ISI Cederholm and ISI Matsuda were 57 and 5.0, respectively. For the first time, the cut-off points for selected indices and their most frequently obtained values were established for groups of subjects defined by glucose homeostasis and BMI. Thus, insulin-resistant subjects can be identified using this simple approach.
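
    The fasting index referred to above has a standard closed form (HOMA-IR = fasting glucose [mmol/L] × fasting insulin [µU/mL] / 22.5), and the study's cut-off is simply a population percentile of the index distribution. A minimal sketch, with illustrative function names:

```python
import numpy as np

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Standard HOMA-IR: fasting glucose (mmol/L) x fasting insulin
    (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def percentile_cutoff(index_values, pct=75):
    """Population cut-off as a percentile of the index distribution,
    mirroring the study's 75th-percentile definition of resistance
    (25th percentile for sensitivity indices such as ISI Matsuda)."""
    return float(np.percentile(index_values, pct))
```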

  2. Identification of human plasma metabolites exhibiting time-of-day variation using an untargeted liquid chromatography-mass spectrometry metabolomic approach.

    PubMed

    Ang, Joo Ern; Revell, Victoria; Mann, Anuska; Mäntele, Simone; Otway, Daniella T; Johnston, Jonathan D; Thumser, Alfred E; Skene, Debra J; Raynaud, Florence

    2012-08-01

    Although daily rhythms regulate multiple aspects of human physiology, rhythmic control of the metabolome remains poorly understood. The primary objective of this proof-of-concept study was identification of metabolites in human plasma that exhibit significant 24-h variation. This was assessed via an untargeted metabolomic approach using liquid chromatography-mass spectrometry (LC-MS). Eight lean, healthy, and unmedicated men, mean age 53.6 (SD ± 6.0) yrs, maintained a fixed sleep/wake schedule and dietary regime for 1 wk at home prior to an adaptation night and followed by a 25-h experimental session in the laboratory where the light/dark cycle, sleep/wake, posture, and calorific intake were strictly controlled. Plasma samples from each individual at selected time points were prepared using liquid-phase extraction followed by reverse-phase LC coupled to quadrupole time-of-flight MS analysis in positive ionization mode. Time-of-day variation in the metabolites was screened for using orthogonal partial least square discrimination between selected time points of 10:00 vs. 22:00 h, 16:00 vs. 04:00 h, and 07:00 (d 1) vs. 16:00 h, as well as repeated-measures analysis of variance with time as an independent variable. Subsequently, cosinor analysis was performed on all the sampled time points across the 24-h day to assess for significant daily variation. In this study, analytical variability, assessed using known internal standards, was low with coefficients of variation <10%. A total of 1069 metabolite features were detected and 203 (19%) showed significant time-of-day variation. Of these, 34 metabolites were identified using a combination of accurate mass, tandem MS, and online database searches. 
These metabolites include corticosteroids, bilirubin, amino acids, acylcarnitines, and phospholipids; of note, the magnitude of the 24-h variation of these identified metabolites was large, with the mean ratio of oscillation range over MESOR (24-h time series mean) of 65% (95% confidence interval [CI]: 49-81%). Importantly, several of these human plasma metabolites, including specific acylcarnitines and phospholipids, were hitherto not known to be 24-h variant. These findings represent an important baseline and will be useful in guiding the design and interpretation of future metabolite-based studies.
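
    The cosinor screening step mentioned above fits a 24-h cosine by ordinary least squares on cosine and sine regressors; the MESOR is the rhythm-adjusted mean and the oscillation range is twice the amplitude. A minimal single-component sketch (function name and conventions are illustrative, not the authors' code):

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Single-component cosinor y ~ M + A*cos(w*t + phi), fitted as a
    linear model in cos(w*t) and sin(w*t). Returns (MESOR, amplitude,
    acrophase in radians); oscillation range is 2*amplitude."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    (mesor, beta, gamma), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = float(np.hypot(beta, gamma))
    acrophase = float(np.arctan2(-gamma, beta))
    return float(mesor), amplitude, acrophase
```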

  3. Positive Changes in Perceptions and Selections of Healthful Foods by College Students after a Short-Term Point-of-Selection Intervention at a Dining Hall

    ERIC Educational Resources Information Center

    Peterson, Sharon; Duncan, Diana Poovey; Null, Dawn Bloyd; Roth, Sara Long; Gill, Lynn

    2010-01-01

    Objective: Determine the effects of a short-term, multi-faceted, point-of-selection intervention on college students' perceptions and selection of 10 targeted healthful foods in a university dining hall, and changes in their self-reported overall eating behaviors. Participants: 104 college students (ages 18-23) completed pre- and post-intervention surveys.…

  4. A protein-targeting strategy used to develop a selective inhibitor of the E17K point mutation in the PH domain of Akt1

    NASA Astrophysics Data System (ADS)

    Deyle, Kaycie M.; Farrow, Blake; Qiao Hee, Ying; Work, Jeremy; Wong, Michelle; Lai, Bert; Umeda, Aiko; Millward, Steven W.; Nag, Arundhati; Das, Samir; Heath, James R.

    2015-05-01

    Ligands that can bind selectively to proteins with single amino-acid point mutations offer the potential to detect or treat an abnormal protein in the presence of the wild type (WT). However, it is difficult to develop a selective ligand if the point mutation is not associated with an addressable location, such as a binding pocket. Here we report an all-chemical synthetic epitope-targeting strategy that we used to discover a 5-mer peptide with selectivity for the E17K-transforming point mutation in the pleckstrin homology domain of the Akt1 oncoprotein. A fragment of Akt1 that contained the E17K mutation and an I19[propargylglycine] substitution was synthesized to form an addressable synthetic epitope. Azide-presenting peptides that clicked covalently onto this alkyne-presenting epitope were selected from a library using in situ screening. One peptide exhibits a 10:1 in vitro selectivity for the oncoprotein relative to the WT, with a similar selectivity in cells. This 5-mer peptide was expanded into a larger ligand that selectively blocks the E17K Akt1 interaction with its PIP3 (phosphatidylinositol (3,4,5)-trisphosphate) substrate.

  5. Model-based estimates of long-term persistence of induced HPV antibodies: a flexible subject-specific approach.

    PubMed

    Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián

    2013-01-01

    In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points where the individual titers are below the threshold value for protection. This article focuses on HPV-16/18, and uses a so-called fractional-polynomial model to this effect, derived in a data-driven fashion. Initially, model selection was done from among the second- and first-order fractional polynomials on the one hand and from the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that it fits best over the long run, and hence, caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.
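
    A first-order fractional polynomial, as selected in the abstract above, takes the form y = b0 + b1·t^p with the power p drawn from a small conventional set (t^0 read as log t), and competing models can be compared by Akaike's information criterion. A hedged sketch under those assumptions (fixed effects only; the article's model also includes subject-specific random effects):

```python
import numpy as np

FP1_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)  # conventional FP1 power set

def fit_fp1(t, y):
    """First-order fractional polynomial y = b0 + b1*t^p, with t^0
    read as log(t); returns (rss, best power, coefficients)."""
    best = None
    for p in FP1_POWERS:
        x = np.log(t) if p == 0 else t ** p
        X = np.column_stack([np.ones_like(t), x])
        coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(((X @ coef - y) ** 2).sum())
        if best is None or rss < best[0]:
            best = (rss, p, coef)
    return best

def aic(rss, n_obs, n_params):
    """Gaussian AIC up to an additive constant: n*log(RSS/n) + 2k."""
    return n_obs * np.log(rss / n_obs) + 2 * n_params
```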

  6. [Clinical study of fire acupuncture with centro-square needles for knee osteoarthritis].

    PubMed

    Wang, Bing; Hu, Jing; Zhang, Ning; Wang, Jingjing; Chen, Zhongjie; Wu, Zhongchao

    2017-05-12

    To compare the efficacy difference between fire acupuncture with centro-square needles (FACSN) and filiform needling (FN) for knee osteoarthritis (KOA). Seventy-two patients were randomly assigned into an FACSN group and an FN group, 36 cases in each one. Ashi points, Xuehai (SP 10), Liangqiu (ST 34), Neixiyan (EX-LE 4), Dubi (ST 35), Zusanli (ST 36), Yanglingquan (GB 34) and Yinlingquan (SP 9) were selected in the two groups. The FACSN group was treated with FACSN, with three acupoints selected for each treatment; the FN group was treated with FN, with all the acupoints selected for each treatment. Cupping treatment was given after acupuncture in both groups. The treatment was given once every other day, except Sundays, i.e. three times a week, with 6 treatments as one course; 2 courses in total were provided. The visual analogue scale (VAS) and Western Ontario and McMaster Universities Arthritis Index (WOMAC) were observed in the two groups before treatment, two weeks and four weeks into treatment, and at a one-month follow-up visit. In addition, the comprehensive efficacy was compared between the two groups. Compared with before treatment, the VAS score and the total WOMAC score improved in both groups at each time point after treatment (all P<0.01); the VAS scores at each time point after treatment in the FACSN group were lower than those in the FN group (all P<0.05); four weeks into treatment and at the one-month follow-up visit, the total WOMAC score in the FACSN group was lower than that in the FN group (both P<0.05).
Two weeks into treatment, the total effective rate was 88.9% (32/36) in the FACSN group, which was higher than 61.1% (22/36) in the FN group ( P <0.01); four weeks into treatment and at one-month follow-up visit, the cured and remarkable effective rates were 66.7% (24/36) and 83.3% (30/36) in the FACSN group, which were higher than 41.7% (15/36) and 44.4% (16/36) in the FN group ( P <0.05, P <0.01), respectively. Fire acupuncture with centro-square needles has relatively high cured and remarkable effective rate for KOA, with rapid onset; as for pain relief, the efficacy is superior to filiform needling.

  7. Electronic method for autofluorography of macromolecules on two-D matrices

    DOEpatents

    Davidson, Jackson B.; Case, Arthur L.

    1983-01-01

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100-1000 times.

  8. Evaluation of wet cupping therapy on the arterial and venous blood parameters in healthy Arabian horses

    PubMed Central

    Shawaf, Turke; El-Deeb, Wael; Hussen, Jamal; Hendi, Mahmoud; Al-Bulushi, Shahab

    2018-01-01

    Aim: Recently, complementary therapies such as cupping and acupuncture have come into use in veterinary medicine. This research was carried out to determine, for the first time, the effects of wet cupping therapy (Hijama) on hematological and biochemical parameters in healthy Arabian horses. Materials and Methods: In this study, seven clinically healthy Arabian horses were randomly selected. Four points on the animal body were selected for the cupping therapy: two points at the back just behind the scapula, on the left and right sides, and another two points located on the rump. Cups of 4 oz (125 ml) with narrow mouths were used, and a manual pump (sucking cups) was used to create negative pressure within the cups during cupping. Arterial and venous blood parameters and serum cortisol concentration were measured before cupping and at 3 days and 2, 4, and 8 weeks after cupping. Results: No significant differences were found in most hematological and biochemical parameters after cupping. A significant decrease in serum cortisol concentration was observed at 3 and 14 days after cupping. Conclusions: Cupping induced minor changes in the hematological and biochemical parameters of Arabian horses. This is the first trial on the effects of wet cupping therapy on these parameters in Arabian horses and should be useful for further investigations of the role of complementary therapies in horses. Our further studies will include different disease models.

  9. Direct detection of antiprotons with the Timepix3 in a new electrostatic selection beamline

    NASA Astrophysics Data System (ADS)

    Pacifico, N.; Aghion, S.; Alozy, J.; Amsler, C.; Ariga, A.; Ariga, T.; Bonomi, G.; Bräunig, P.; Bremer, J.; Brusa, R. S.; Cabaret, L.; Caccia, M.; Campbell, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Chlouba, K.; Cialdi, S.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Dudarev, A.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Holmestad, H.; Huse, T.; Jordan, E.; Kellerbauer, A.; Kimura, M.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lawler, G.; Lebrun, P.; Llopart, X.; Malbrunot, C.; Mariazzi, S.; Marx, L.; Matveev, V.; Mazzotta, Z.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pagano, D.; Penasa, L.; Petracek, V.; Pistillo, C.; Prelz, F.; Prevedelli, M.; Ravelli, L.; Resch, L.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Smestad, L.; Sorrentino, F.; Spacek, M.; Storey, J.; Strojek, I. M.; Testera, G.; Tietje, I.; Tlustos, L.; Widmann, E.; Yzombard, P.; Zavatarelli, S.; Zmeskal, J.; Zurlo, N.

    2016-09-01

    We present here the first results obtained employing the Timepix3 for the detection and tagging of annihilations of low energy antiprotons. The Timepix3 is a recently developed hybrid pixel detector with advanced Time-of-Arrival and Time-over-Threshold capabilities and has the potential of allowing precise kinetic energy measurements of low energy charged particles from their time of flight. The tagging of the characteristic antiproton annihilation signature, already studied by our group, is enabled by the high spatial and energy resolution of this detector. In this study we have used a new, dedicated, energy selection beamline (GRACE). The line is symbiotic to the AEgIS experiment at the CERN Antiproton Decelerator and is dedicated to detector tests and possibly antiproton physics experiments. We show how the high resolution of the Timepix3 on the Time-of-Arrival and Time-over-Threshold information allows for a precise 3D reconstruction of the annihilation prongs. The presented results point at the potential use of the Timepix3 in antimatter-research experiments where a precise and unambiguous tagging of antiproton annihilations is required.

  10. Effects of dietary 2,2', 4,4'-tetrabromodiphenyl ether (BDE-47) exposure on medaka (Oryzias latipes) swimming behavior.

    PubMed

    Sastre, Salvador; Fernández Torija, Carlos; Carbonell, Gregoria; Rodríguez Martín, José Antonio; Beltrán, Eulalia María; González-Doncel, Miguel

    2018-02-01

    A diet fortified with 2,2',4,4'-tetrabromodiphenyl ether (BDE-47: 0, 10, 100, and 1000 ng/g) was fed to 4-7-day-old post-hatch medaka fish for 40 days to evaluate the effects on the swimming activity of fish using a miniaturized swimming flume. Chlorpyrifos (CF)-exposed fish were selected as the positive control to assess the validity and sensitivity of the behavioral findings. After 20 and 40 days of exposure, locomotor activity was analyzed for 6 min in a flume section (arena). The CF positive control for each time point consisted of fish exposed to 50 ng CF/ml for 48 h. Swimming patterns, presented as two-dimensional heat maps of fish movement and positioning, were obtained by geostatistical analyses. The heat maps of the control groups at time point 20 revealed swimming patterns visually comparable to those of the BDE-47-treated groups. For the comparative fish-positioning analysis, both arenas were divided into 15 proportional areas. No statistical differences were found between residence times in the areas for the control groups and those for the BDE-47-treated groups. At time point 40, the overall heat-map patterns of the control groups differed visually from those of the 100-ng BDE-47/g-treated group, but a comparative analysis of the residence times in the corresponding 15 areas did not reveal consistent differences. The relative distances traveled by the control and treated groups at time points 20 and 40 were also comparable. The heat maps of CF-treated fish at both time points showed contrasting swim patterns with respect to those of the controls. These differential patterns were statistically supported by differences in the residence times for different areas. The relative distances traveled by the CF-treated fish were also significantly shorter. These results confirm the validity of the experimental design and indicate that dietary BDE-47 exposure does not affect forced swimming in medaka at growing stages. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Quantification of glutathione transverse relaxation time T2 using echo time extension with variable refocusing selectivity and symmetry in the human brain at 7 Tesla

    NASA Astrophysics Data System (ADS)

    Swanberg, Kelley M.; Prinsen, Hetty; Coman, Daniel; de Graaf, Robin A.; Juchem, Christoph

    2018-05-01

    Glutathione (GSH) is an endogenous antioxidant implicated in numerous biological processes, including those associated with multiple sclerosis, aging, and cancer. Spectral editing techniques have greatly facilitated the acquisition of glutathione signal in living humans via proton magnetic resonance spectroscopy, but signal quantification at 7 Tesla is still hampered by uncertainty about the glutathione transverse decay rate T2 relative to those of commonly employed quantitative references like N-acetyl aspartate (NAA), total creatine, or water. While the T2 of uncoupled singlets can be derived in a straightforward manner from exponential signal decay as a function of echo time, similar estimation of signal decay in GSH is complicated by a spin system that involves both weak and strong J-couplings as well as resonances that overlap those of several other metabolites and macromolecules. Here, we extend a previously published method for quantifying the T2 of GABA, a weakly coupled system, to quantify T2 of the strongly coupled spin system glutathione in the human brain at 7 Tesla. Using full density matrix simulation of glutathione signal behavior, we selected an array of eight optimized echo times between 72 and 322 ms for glutathione signal acquisition by J-difference editing (JDE). We varied the selectivity and symmetry parameters of the inversion pulses used for echo time extension to further optimize the intensity, simplicity, and distinctiveness of glutathione signals at chosen echo times. Pairs of selective adiabatic inversion pulses replaced nonselective pulses at three extended echo times, and symmetry of the time intervals between the two extension pulses was adjusted at one extended echo time to compensate for J-modulation, thereby resulting in appreciable signal-to-noise ratio and quantifiable signal shapes at all measured points. 
Glutathione signal across all echo times fit smooth monoexponential curves over ten scans of occipital cortex voxels in nine subjects. The T2 of glutathione was calculated to be 145.0 ± 20.1 ms (mean ± standard deviation); this result was robust within one standard deviation to changes in metabolite fitting baseline corrections and removal of individual data points on the signal decay curve. The measured T2 of NAA (222.1 ± 24.7 ms) and total creatine (153.0 ± 19.9 ms) were both higher than that calculated for GSH. Apparent glutathione concentration quantified relative to both reference metabolites increased by up to 32% and 6%, respectively, upon correction with calculated T2 values, emphasizing the importance of considering T2 relaxation differences in the spectroscopic measurement of these metabolites, especially at longer echo times.
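
    The monoexponential decay underlying the T2 estimate above, S(TE) = S0·exp(−TE/T2), can be linearized and solved by least squares. A minimal sketch of that fit (not the authors' fitting pipeline, which operates on edited spectra):

```python
import numpy as np

def fit_t2(te_ms, signal):
    """Monoexponential transverse decay S(TE) = S0*exp(-TE/T2),
    linearized as log S = log S0 - TE/T2 and fitted by least squares.
    Returns (T2 in ms, S0)."""
    slope, intercept = np.polyfit(te_ms, np.log(signal), 1)
    return -1.0 / slope, float(np.exp(intercept))
```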

  12. Quantification of glutathione transverse relaxation time T2 using echo time extension with variable refocusing selectivity and symmetry in the human brain at 7 Tesla.

    PubMed

    Swanberg, Kelley M; Prinsen, Hetty; Coman, Daniel; de Graaf, Robin A; Juchem, Christoph

    2018-05-01

    Glutathione (GSH) is an endogenous antioxidant implicated in numerous biological processes, including those associated with multiple sclerosis, aging, and cancer. Spectral editing techniques have greatly facilitated the acquisition of glutathione signal in living humans via proton magnetic resonance spectroscopy, but signal quantification at 7 Tesla is still hampered by uncertainty about the glutathione transverse decay rate T2 relative to those of commonly employed quantitative references like N-acetyl aspartate (NAA), total creatine, or water. While the T2 of uncoupled singlets can be derived in a straightforward manner from exponential signal decay as a function of echo time, similar estimation of signal decay in GSH is complicated by a spin system that involves both weak and strong J-couplings as well as resonances that overlap those of several other metabolites and macromolecules. Here, we extend a previously published method for quantifying the T2 of GABA, a weakly coupled system, to quantify T2 of the strongly coupled spin system glutathione in the human brain at 7 Tesla. Using full density matrix simulation of glutathione signal behavior, we selected an array of eight optimized echo times between 72 and 322 ms for glutathione signal acquisition by J-difference editing (JDE). We varied the selectivity and symmetry parameters of the inversion pulses used for echo time extension to further optimize the intensity, simplicity, and distinctiveness of glutathione signals at chosen echo times. Pairs of selective adiabatic inversion pulses replaced nonselective pulses at three extended echo times, and symmetry of the time intervals between the two extension pulses was adjusted at one extended echo time to compensate for J-modulation, thereby resulting in appreciable signal-to-noise ratio and quantifiable signal shapes at all measured points. Glutathione signal across all echo times fit smooth monoexponential curves over ten scans of occipital cortex voxels in nine subjects. The T2 of glutathione was calculated to be 145.0 ± 20.1 ms (mean ± standard deviation); this result was robust within one standard deviation to changes in metabolite fitting baseline corrections and removal of individual data points on the signal decay curve. The measured T2 of NAA (222.1 ± 24.7 ms) and total creatine (153.0 ± 19.9 ms) were both higher than that calculated for GSH. Apparent glutathione concentration quantified relative to both reference metabolites increased by up to 32% and 6%, respectively, upon correction with calculated T2 values, emphasizing the importance of considering T2 relaxation differences in the spectroscopic measurement of these metabolites, especially at longer echo times. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Tracking features in retinal images of adaptive optics confocal scanning laser ophthalmoscope using KLT-SIFT algorithm

    PubMed Central

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2010-01-01

    With the use of adaptive optics (AO), high-resolution microscopic imaging of the living human retina at the single-cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video images and results in significant distortions of the retinal images. In this paper, the Scale-Invariant Feature Transform (SIFT) is used to extract stable point features from the retinal images, and the Kanade-Lucas-Tomasi (KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by a second-order polynomial transformation, and 10 successive frames are co-added to enhance the image quality. Features of special interest in an image can also be selected manually and tracked by KLT: a point on a cone is selected manually, and the cone is tracked from frame to frame. PMID:21258443
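
    The second-order polynomial transformation used to remove per-frame distortion can be estimated from tracked feature correspondences by least squares. A minimal sketch, assuming at least six non-degenerate correspondences (function names are illustrative, not the authors' code):

```python
import numpy as np

def _design(pts):
    # Second-order monomial basis: 1, x, y, x^2, x*y, y^2
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def fit_poly2_transform(src, dst):
    """Least-squares second-order polynomial warp mapping src (N, 2)
    onto dst (N, 2); needs >= 6 non-degenerate correspondences."""
    A = _design(src)
    coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coef_x, coef_y

def apply_poly2(coef_x, coef_y, pts):
    """Warp points with fitted coefficients."""
    A = _design(pts)
    return np.column_stack([A @ coef_x, A @ coef_y])
```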

  14. A method for selecting potential geosites. The case of glacial geosites in the Chablais area (French and Swiss Prealps)

    NASA Astrophysics Data System (ADS)

    Perret, Amandine; Reynard, Emmanuel

    2014-05-01

    Since 2009, an Interreg IVA project (123 Chablais), dealing with the promotion of natural and cultural heritage in the Chablais area, has been under development; it is linked to the creation of the Chablais Geopark. In a context of developing smart forms of tourism, the objective was to devise a strategy promoting the glacial heritage to a wide public in an area where the glaciers have almost disappeared. The recognition of specific places as geoheritage is the result of a double process: a scientific one, based on more or less sophisticated methods, and a social one, namely acknowledgment by society. One of the first scientific tasks is to produce a list of "potential geosites" that will then be assessed in more detail. However, this selection is often a weak point of inventories; it often operates like a "black box" without any transparency. In this project we carried out an inventory of glacial geosites using the method developed by Reynard et al. (2007, 2012), and a complementary method was created to make the selection process explicit and to support choices in geoheritage management. As it was not possible to consider all sites in the Chablais area, a mixed selection approach was developed, halfway between completeness and specificity (Martin, 2012). The first step was the creation of a list of "points of interest", established using different sources: literature review, fieldwork, and the use of GIS to cross-reference information. A selection was then performed according to two criteria: correspondence with a glacial stage (time axis) and belonging to a type of landform (spatial axis). The selected sites were intended to provide a representative overview of the regional glacial record; therefore, sites representative of the regional geology were selected, as well as sites presenting regional peculiarities. Temporal and spatial attributes were assigned to the 101 points of interest identified.
    From a temporal point of view, this inventory aimed at presenting the main stages of the glacial retreat since the Last Glacial Maximum. From a spatial point of view, the objective was to show the different types of glacial remnants, but also some landforms related to deglaciation processes. Finally, 32 glacial and associated geosites were selected. Each geosite was submitted to a full evaluation process, including basic information, description, explanation of morphogenesis, and an evaluation of the values assigned to geosites. This assessment, at first qualitative, provided valuable information concerning their intrinsic interest and their management. A numerical evaluation was also performed to classify geosites and define an order of priority for their touristic promotion. It is worth noting that each selected point of interest can in fact be qualified as a geosite through a clear method of selection. In this study, the numerical evaluation is not a means to select geosites but a way to rank geosites against one another; some geosites can be abandoned if their intrinsic values are too low. Despite a well-defined protocol, subjectivity and the authors' choices are part of the selection process and inventory. This fact is certainly not a weakness, but it must be considered whenever such an inventory is made. References: Martin, S. (2012). Valoriser le géopatrimoine par la médiation indirecte et la visualisation des objets géomorphologiques (Doctoral thesis). Université de Lausanne, Lausanne. Reynard, E., Fontana, G., Kozlik, L., & Scapozza, C. (2007). A method for assessing the scientific and additional values of geomorphosites. Geographica Helvetica, 62(3), 148-158. Reynard, E., Perret, A., Grangier, L., & Kozlik, L. (2012). Methodological approach for the assessment, protection, promotion and management of geoheritage. EGU General Assembly, Vienna.

  15. Effect of Receiver Choosing on Point Positions Determination in Network RTK

    NASA Astrophysics Data System (ADS)

    Bulbul, Sercan; Inal, Cevat

    2016-04-01

    Nowadays, developments in GNSS techniques allow point positions to be determined in real time. Initially, point positioning was determined by RTK (Real Time Kinematic) based on a single reference station; however, to avoid systematic errors in this method, the distance between the reference point and the rover receiver must be shorter than 10 km. To overcome this restriction, the idea of using more than one reference point was suggested, and CORS (Continuously Operating Reference Stations) networks were put into practice. Today, countries such as the USA, Germany, and Japan have established CORS networks. The CORS-TR network, which has 146 reference stations, was established in Turkey in 2009, adopting the active CORS approach: the CORS-TR reference stations covering the whole country are interconnected, and the positions of these stations and atmospheric corrections are calculated continuously. In this study, at a selected point, RTK measurements based on CORS-TR were made with different receivers (JAVAD TRIUMPH-1, TOPCON Hiper V, MAGELLAN PRoMark 500, PENTAX SMT888-3G, SATLAB SL-600) and with different correction techniques (VRS, FKP, MAC). In the measurements, the epoch interval was taken as 5 seconds and the measurement time as 1 hour. For each receiver and each correction technique, the means and the differences between maximum and minimum values of the measured coordinates, the root mean square errors along the coordinate axes, and the 2D and 3D positioning precisions were calculated; the results were evaluated by statistical methods and the resulting graphics interpreted. After evaluation of the measurements and calculations, for each receiver and each correction technique, the coordinate differences between maximum and minimum values were less than 8 cm, the root mean square errors along the coordinate axes less than ±1.5 cm, and the 2D and 3D point positioning precisions each less than ±1.5 cm.
In the measurement point, it has been concluded that VRS correction technique is generally better than other corrections techniques.
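The precision figures reported above can be illustrated with a short sketch. The epoch coordinates below are invented for the example, and per-axis RMS about the mean is used as a simple stand-in for the study's root mean square computation:

```python
import math

# Hypothetical RTK epochs (east, north, up) in metres, relative to the point's mean position.
epochs = [(0.004, -0.003, 0.010), (0.006, 0.001, -0.008),
          (-0.002, 0.005, 0.012), (0.003, -0.004, -0.006)]

def axis_rms(values):
    """Root mean square about the mean for one coordinate axis."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

east, north, up = (axis_rms([e[i] for e in epochs]) for i in range(3))

# 2D precision combines the horizontal axes; 3D adds the vertical component.
precision_2d = math.sqrt(east ** 2 + north ** 2)
precision_3d = math.sqrt(east ** 2 + north ** 2 + up ** 2)

# Max-minus-min spread per axis, as in the study's coordinate difference check.
spread = [max(v) - min(v) for v in ([e[i] for e in epochs] for i in range(3))]
```

The same statistics would be computed per receiver and per correction technique to produce the comparison described in the abstract.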

  16. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    NASA Astrophysics Data System (ADS)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

    Atmospheric methane (CH4) plays an important role in the global radiative forcing of climate, but its emission estimates carry larger uncertainties than those of carbon dioxide (CO2). The area of an anthropogenic emission source is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system, and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over a period of years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here we selected two sites on the US West Coast, where the clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement that decreased with time after the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The cattle feedlot observed in Chino, California has a weather station within the TANSO-FTS footprint; the wind speed is monitored continuously, and the wind direction is stable at the time of the GOSAT overpass. The large TANSO-FTS footprint and strong winds reduce the enhancement below the noise level; weak winds produce enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data.
We propose that next-generation instruments for accurate anthropogenic CO2 and CH4 flux estimation should have improved spatial resolution (~1 km2) to further enhance column density changes, and we also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling-pattern optimization study that combines local emission source and global survey observations.
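The single-point mass-balance idea described above (column enhancement advected by the wind through a crosswind plane) can be sketched in a few lines. The rectangular-plume assumption and the input numbers are illustrative, not the authors' retrieval:

```python
def point_source_flux(delta_column_kg_per_m2, wind_speed_m_per_s, plume_width_m):
    """Mass-balance flux estimate for a single point source without influx:
    the column enhancement advected through a crosswind plane of given width."""
    return delta_column_kg_per_m2 * wind_speed_m_per_s * plume_width_m  # kg/s

# Illustrative values: 1e-4 kg/m^2 CH4 enhancement, 3 m/s wind, 5 km effective width.
flux = point_source_flux(1e-4, 3.0, 5000.0)  # 1.5 kg/s
```

This also makes the abstract's detection-limit argument concrete: for a fixed source flux, a stronger wind or a larger footprint dilutes the column enhancement, pushing it toward the noise level.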

  17. Micro Ring Grating Spectrometer with Adjustable Aperture

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor); Choi, Sang H. (Inventor)

    2012-01-01

    A spectrometer includes a micro-ring grating device having coaxially-aligned ring gratings for diffracting incident light onto a target focal point, a detection device for detecting light intensity, one or more actuators, and an adjustable aperture device defining a circular aperture. The aperture circumscribes a target focal point, and directs a light to the detection device. The aperture device is selectively adjustable using the actuators to select a portion of a frequency band for transmission to the detection device. A method of detecting intensity of a selected band of incident light includes directing incident light onto coaxially-aligned ring gratings of a micro-ring grating device, and diffracting the selected band onto a target focal point using the ring gratings. The method includes using an actuator to adjust an aperture device and pass a selected portion of the frequency band to a detection device for measuring the intensity of the selected portion.

  18. Adaptive marker-free registration using a multiple point strategy for real-time and robust endoscope electromagnetic navigation.

    PubMed

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian; Mori, Kensaku

    2015-02-01

    Registration of pre-clinical images to physical space is indispensable for computer-assisted endoscopic interventions in operating rooms. Electromagnetically navigated endoscopic interventions are increasingly performed in current diagnosis and treatment. Such interventions use an electromagnetic tracker with a miniature sensor, usually attached at the endoscope's distal tip, to track endoscope movements in real time in a pre-clinical image space. Spatial alignment between the electromagnetic tracker (or sensor) and the pre-clinical images must be performed to navigate the endoscope to target regions. This paper proposes an adaptive marker-free registration method that uses a multiple point selection strategy. The method addresses the assumption that the endoscope is operated along the centerline of an intraluminal organ, which is easily violated during interventions. We introduce an adaptive strategy that generates multiple points from sensor measurements and endoscope tip center calibration. From these generated points, we adaptively choose the optimal point, i.e., the one closest to its assigned centerline of the hollow organ, to perform registration. The experimental results demonstrate that, compared to currently available methods, our adaptive strategy significantly reduced the target registration error from 5.32 to 2.59 mm in static phantom validation, and from at least 7.58 mm to 4.71 mm in dynamic phantom validation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. A modular approach to intensity-modulated arc therapy optimization with noncoplanar trajectories

    NASA Astrophysics Data System (ADS)

    Papp, Dávid; Bortfeld, Thomas; Unkelbach, Jan

    2015-07-01

    Utilizing noncoplanar beam angles in volumetric modulated arc therapy (VMAT) has the potential to combine the benefits of arc therapy, such as short treatment times, with the benefits of noncoplanar intensity modulated radiotherapy (IMRT) plans, such as improved organ sparing. Recently, vendors introduced treatment machines that allow for simultaneous couch and gantry motion during beam delivery to make noncoplanar VMAT treatments possible. Our aim is to provide a reliable optimization method for noncoplanar isocentric arc therapy plan optimization. The proposed solution is modular in the sense that it can incorporate different existing beam angle selection and coplanar arc therapy optimization methods. Treatment planning is performed in three steps. First, a number of promising noncoplanar beam directions are selected using an iterative beam selection heuristic; these beams serve as anchor points of the arc therapy trajectory. In the second step, continuous gantry/couch angle trajectories are optimized using a simple combinatorial optimization model to define a beam trajectory that efficiently visits each of the anchor points. Treatment time is controlled by limiting the time the beam needs to trace the prescribed trajectory. In the third and final step, an optimal arc therapy plan is found along the prescribed beam trajectory. In principle any existing arc therapy optimization method could be incorporated into this step; for this work we use a sliding window VMAT algorithm. The approach is demonstrated using two particularly challenging cases. The first one is a lung SBRT patient whose planning goals could not be satisfied with fewer than nine noncoplanar IMRT fields when the patient was treated in the clinic. The second one is a brain tumor patient, where the target volume overlaps with the optic nerves and the chiasm and it is directly adjacent to the brainstem. 
Both cases illustrate that the large number of angles utilized by isocentric noncoplanar VMAT plans can help improve dose conformity, homogeneity, and organ sparing simultaneously using the same beam trajectory length and delivery time as a coplanar VMAT plan.

  20. Rapid Bacterial Detection via an All-Electronic CMOS Biosensor

    PubMed Central

    Nikkhoo, Nasim; Cumby, Nichole; Gulak, P. Glenn; Maxwell, Karen L.

    2016-01-01

    The timely and accurate diagnosis of infectious diseases is one of the greatest challenges currently facing modern medicine. The development of innovative techniques for the rapid and accurate identification of bacterial pathogens in point-of-care facilities using low-cost, portable instruments is essential. We have developed a novel all-electronic biosensor that is able to identify bacteria in less than ten minutes. This technology exploits bacteriocins, protein toxins naturally produced by bacteria, as the selective biological detection element. The bacteriocins are integrated with an array of potassium-selective sensors in Complementary Metal Oxide Semiconductor technology to provide an inexpensive bacterial biosensor. An electronic platform connects the CMOS sensor to a computer for processing and real-time visualization. We have used this technology to successfully identify both Gram-positive and Gram-negative bacteria commonly found in human infections. PMID:27618185

  1. Statistical density modification using local pattern matching

    DOEpatents

    Terwilliger, Thomas C.

    2007-01-23

    A computer implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from these maps by clustering and averaging values of electron density in a spherical region about each point in the grid that defines each selected map. Histograms are also created from the selected experimental and model electron density maps that relate the value of electron density at the center of each spherical region to a correlation coefficient of the density surrounding each corresponding grid point in each of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental electron density map.

  2. Empirical Modelling of Runoff in Small Watersheds Using LiDAR Data

    NASA Astrophysics Data System (ADS)

    Lopatin, J.; Hernández, J.; Galleguillos, M.; Mancilla, G.

    2013-12-01

    Hydrological models allow the simulation of natural water processes as well as the quantification and prediction of the effects of human impacts on runoff behavior. However, obtaining the information needed to apply these models can be costly in both time and resources, especially in large and difficult-to-access areas. The objective of this research was to integrate LiDAR data into the hydrological modeling of runoff in small watersheds, using derived hydrologic, vegetation, and topographic variables. The study area includes 10 small forested headwater catchments, between 2 and 16 ha, located in the south-central coastal range of Chile. In each of them, instantaneous rainfall and runoff flow were measured for a total of 15 rainfall events between August 2012 and July 2013, yielding 79 observations. In March 2011, a Harrier 54/G4 Dual System was used to obtain a discrete-pulse LiDAR point cloud with an average of 4.64 points per square meter. A Digital Terrain Model (DTM) of 1 meter resolution was obtained from the point cloud, and 55 topographic variables were subsequently derived, such as physical watershed parameters and morphometric features. At the same time, 30 vegetation-descriptive variables were obtained directly from the point cloud and from a Digital Canopy Model (DCM). The classification and regression "Random Forest" (RF) algorithm was used to select the most important variables for predicting water height (liters), and the "Partial Least Squares Path Modeling" (PLS-PM) algorithm was used to fit a model with the selected set of variables. Four latent variables were selected (outer model), related to climate, topography, vegetation, and runoff; to each one was assigned a group of the predictor variables selected by RF (inner model). The coefficient of determination (R2) and Goodness-of-Fit (GoF) of the final model were obtained.
The best results were found when modeling using only the upper 50th percentile of rainfall events. The best variables selected by the RF algorithm were three topographic variables and three vegetation-related ones. We obtained an R2 of 0.82 and a GoF of 0.87 at a 95% confidence level. This study shows that it is possible to predict the water harvested during a rainstorm event in a forest environment using only LiDAR data. However, this methodology does not perform well for flows produced by low-magnitude rainfall events, as these are more influenced by the initial conditions of soil, vegetation, and climate, which make their behavior slower and more erratic.

  3. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for the quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying at spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. Increased precision for groups that are likely to be the focus of monitoring programs is best gained by increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced, etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare.
The amount of sampling required, in terms of both the number of images and the number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
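The images-versus-points trade-off can be reproduced with a toy Monte Carlo simulation. The sketch below assumes (hypothetically) that true cover varies between images, i.e. the biota is patchy; all parameter values are invented for illustration:

```python
import random
import statistics

random.seed(1)

def survey_estimate(n_images, n_points, mean_cover=0.30, spread=0.15):
    """Mean cover estimate when true cover varies between images (patchy biota):
    each image's cover is drawn around mean_cover, then n_points are scored."""
    per_image = []
    for _ in range(n_images):
        cover = min(1.0, max(0.0, random.gauss(mean_cover, spread)))
        hits = sum(random.random() < cover for _ in range(n_points))
        per_image.append(hits / n_points)
    return sum(per_image) / n_images

# Same total effort (1250 scored points) split two ways, repeated to compare precision.
sd_many_images = statistics.stdev(survey_estimate(50, 25) for _ in range(300))
sd_few_images = statistics.stdev(survey_estimate(10, 125) for _ in range(300))
# With patchy cover, more images (not more points per image) gives the tighter estimate.
```

Because between-image variation dominates the error for patchy biota, spreading effort across more images shrinks the standard deviation of the estimate faster than scoring more points per image, matching the paper's conclusion.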

  4. VizieR Online Data Catalog: HST/COS survey of z<0.9 AGNs. I. (Danforth+, 2016)

    NASA Astrophysics Data System (ADS)

    Danforth, C. W.; Keeney, B. A.; Tilton, E. M.; Shull, J. M.; Stocke, J. T.; Stevans, M.; Pieri, M. M.; Savage, B. D.; France, K.; Syphers, D.; Smith, B. D.; Green, J. C.; Froning, C.; Penton, S. V.; Osterman, S. N.

    2016-05-01

    COS is the fourth-generation UV spectrograph on board HST and is optimized for medium-resolution (R~18000, Δv~17km/s) spectroscopy of point sources in the 1135-1800Å band. To constitute our survey, we selected 82 AGN sight lines from the archive which met the selection criteria. Most of the AGNs observed in Cycles 18-20 under the Guaranteed Time Observation programs (GTO; PI-Green) are included, along with numerous archival data sets collected under various Guest Observer programs. Observational and programmatic details are presented in Table 2; see also section 2.1. (5 data files).

  5. Child care choices, food intake, and children's obesity status in the United States.

    PubMed

    Mandal, Bidisha; Powell, Lisa M

    2014-07-01

    This article studies two pathways through which selection into different types of child care settings may affect the likelihood of childhood obesity. First, the frequency of intake of high energy-dense and low energy-dense food items may vary across care settings, affecting weight outcomes. We find that increased use of paid and regulated care settings, such as center care and Head Start, is associated with higher consumption of fruits and vegetables. Among children from single-mother households, the probability of obesity increases by 15 percentage points with an increase in intake of soft drinks from four to six times a week to daily consumption, and by 25 percentage points with an increase in intake of fast food from one to three times a week to four to six times a week. Among children from two-parent households, eating vegetables one additional time a day is associated with a 10 percentage point decrease in the probability of obesity, while one additional drink of juice a day is associated with a 10 percentage point increase. Second, variation across care types could be manifested through differences in the structure of the physical environment not captured by differences in food intake alone. This type of effect is found to be marginal and is statistically significant only among children from two-parent households. Data are from the Early Childhood Longitudinal Study - Birth Cohort surveys (N=10,700; years=2001-2008). Children's ages ranged from four to six years in the sample. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Genetic variation of growth dynamics in maize (Zea mays L.) revealed through automated non-invasive phenotyping.

    PubMed

    Muraya, Moses M; Chu, Jianting; Zhao, Yusheng; Junker, Astrid; Klukas, Christian; Reif, Jochen C; Altmann, Thomas

    2017-01-01

    Hitherto, most quantitative trait loci of maize growth and biomass yield have been identified for a single time point, usually the final harvest stage. Through this approach cumulative effects are detected, without considering genetic factors causing phase-specific differences in growth rates. To assess the genetics of growth dynamics, we employed automated non-invasive phenotyping to monitor the plant sizes of 252 diverse maize inbred lines at 11 different developmental time points; 50 k SNP array genotype data were used for genome-wide association mapping and genomic selection. The heritability of biomass was estimated to be over 71%, and the average prediction accuracy amounted to 0.39. Using the individual time point data, 12 main effect marker-trait associations (MTAs) and six pairs of epistatic interactions were detected that displayed different patterns of expression at various developmental time points. A subset of them also showed significant effects on relative growth rates in different intervals. The detected MTAs jointly explained up to 12% of the total phenotypic variation, decreasing with developmental progression. Using non-parametric functional mapping and multivariate mapping approaches, four additional marker loci affecting growth dynamics were detected. Our results demonstrate that plant biomass accumulation is a complex trait governed by many small effect loci, most of which act at certain restricted developmental phases. This highlights the need for investigation of stage-specific growth affecting genes to elucidate important processes operating at different developmental phases. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.

  7. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass-flowmeters was reviewed to determine which flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range of flowrates. Three types were selected and further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement and what the further development needs were for each. A dual-turbine volumetric flowmeter with a densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output; this concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.

  8. [Professor WANG Fuchun's experience in the acupoint selection of clinical treatment with acupuncture and moxibustion].

    PubMed

    Jiang, Hailin; Liu, Chengyu; Ha, Lijuan; Li, Tie

    2017-11-12

    Professor WANG Fuchun's experience in acupoint selection for clinical treatment with acupuncture and moxibustion is summarized. The main acupoints are selected by focusing on the chief symptoms of the disease, and the supplementary points are selected by differentiating the disorders. The acupoints are modified according to changes in the illness, and effective acupoints are selected flexibly in accordance with the specific effects of the points. This summary of acupoint selection reflects professor WANG Fuchun's academic thought and clinical experience and effectively guides the clinical practice of acupuncture and moxibustion.

  9. Measurement of reach envelopes with a four-camera Selective Spot Recognition (SELSPOT) system

    NASA Technical Reports Server (NTRS)

    Stramler, J. H., Jr.; Woolford, B. J.

    1983-01-01

    The basic Selective Spot Recognition (SELSPOT) system is essentially a system which uses infrared LEDs and a 'camera' with an infrared-sensitive photodetector, a focusing lens, and some A/D electronics to produce a digital output representing an X and Y coordinate for each LED for each camera. When the data are synthesized across all cameras with appropriate calibrations, an XYZ set of coordinates is obtained for each LED at a given point in time. Attention is given to the operating modes, a system checkout, and reach envelopes and software. The Video Recording Adapter (VRA) represents the main addition to the basic SELSPOT system. The VRA contains a microprocessor and other electronics which permit user selection of several options and some interaction with the system.

  10. Unsupervised and self-mapping category formation and semantic object recognition for mobile robot vision used in an actual environment

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Tsukada, M.; Sato, K.

    2013-07-01

    This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter propagation networks (CPNs) for visualizing spatial relations between categories. Classification results of dynamic images using time-series images obtained using two different-size robots and according to movements respectively demonstrate that our method can visualize spatial relations of categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation of appearance changes of objects.

  11. Prediction and Warning of Transported Turbulence in Long-Haul Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Ellrod, Gary P. (Inventor); Spence, Mark D. (Inventor); Shipley, Scott T. (Inventor)

    2017-01-01

    An aviation flight planning system is used to predict and warn of the intersection of flight paths with transported meteorological disturbances, such as transported turbulence and related phenomena. Sensed data and transmitted data provide real-time and forecast data on meteorological conditions. Models of transported meteorological disturbances are applied to the received transmitted data and the sensed data to correlate them. The correlation is used to identify the source characteristics of transported meteorological disturbances and to predict their trajectories from source to intersection with the flight path in space and time. The correlated data are provided to a visualization system that projects the coordinates of a point of interest (POI) in a selected point of view (POV) to display the flight track and the predicted transported meteorological disturbance warnings for the flight crew.

  12. Racial-ethnic identity in mid-adolescence: content and change as predictors of academic achievement.

    PubMed

    Altschul, Inna; Oyserman, Daphna; Bybee, Deborah

    2006-01-01

    Three aspects of racial-ethnic identity (REI)-feeling connected to one's racial-ethnic group (Connectedness), being aware that others may not value the in-group (Awareness of Racism), and feeling that one's in-group is characterized by academic attainment (Embedded Achievement)-were hypothesized to promote academic achievement. Youth randomly selected from 3 low-income, urban schools (n=98 African American, n=41 Latino) reported on their REI 4 times over 2 school years. Hierarchical linear modeling shows a small increase in REI and the predicted REI-grades relationship. Youth high in both REI Connectedness and Embedded Achievement attained better grade point average (GPA) at each point in time; youth high in REI Connectedness and Awareness of Racism at the beginning of 8th grade attained better GPA through 9th grade. Effects are not moderated by race-ethnicity.

  13. Recurrence plots of discrete-time Gaussian stochastic processes

    NASA Astrophysics Data System (ADS)

    Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick

    2016-09-01

    We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and of consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
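The quantities being estimated can be made concrete with a small sketch for one of the two processes studied, a first-order autoregressive (AR(1)) Gaussian process, with embedding dimension 1. The coefficient, series length, and threshold radius below are arbitrary illustrative choices:

```python
import math
import random

random.seed(0)

# Simulate an AR(1) Gaussian process: x_t = phi * x_{t-1} + e_t, e_t ~ N(0, 1).
phi, n = 0.5, 300
x = [random.gauss(0.0, 1.0)]
for _ in range(n - 1):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

eps = 0.5  # threshold radius for the recurrence plot

# Recurrence matrix with embedding dimension 1: R[i][j] = 1 iff |x_i - x_j| < eps.
R = [[1 if abs(a - b) < eps else 0 for b in x] for a in x]

# Recurrence rate (REC): fraction of recurrence points in the plot.
rec_rate = sum(map(sum, R)) / (n * n)
```

Averaging `rec_rate` over many realizations is the kind of estimate that the paper compares against its analytically derived probabilities of recurrence.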

  14. High-dimensional cluster analysis with the Masked EM Algorithm

    PubMed Central

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  15. Statistical Analysis of Periodic Oscillations in LASCO Coronal Mass Ejection Speeds

    NASA Technical Reports Server (NTRS)

    Michalek, G.; Shanmugaraju, A.; Gopalswamy, N.; Yashiro, S.; Akiyama, S.

    2016-01-01

    A large set of coronal mass ejections (CMEs, 3463 events) was selected to study periodic oscillations in their speed in the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) field of view. These events, reported in the SOHO/LASCO catalog for the period 1996-2004, were selected based on having at least 11 height-time measurements. This selection criterion allows us to construct speed-distance profiles with at least ten points and to evaluate the kinematic properties of CMEs with reasonable accuracy. To identify quasi-periodic oscillations in CME speed, a sinusoidal function was fitted to the speed-distance and speed-time profiles. Of the considered events, 22 revealed periodic velocity fluctuations. These speed oscillations have, on average, an amplitude of 87 km/s and a period of 7.8 solar radii in distance (241 min in time). The study shows that speed oscillations are a common phenomenon associated with CME propagation, implying that all CMEs have a similar magnetic flux-rope structure. The nature of the oscillations can be explained in terms of magnetohydrodynamic (MHD) waves excited during the eruption process. More accurate detection of these modes could, in the future, enable us to characterize magnetic structures in space (space seismology).
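The sinusoidal fitting step can be illustrated with a Fourier projection at a trial period, which is equivalent to least squares for evenly spaced samples spanning whole periods. The synthetic speed profile below (400 km/s mean with an 87 km/s oscillation) is invented for the example, not taken from the catalog:

```python
import math

def sinusoid_amplitude(speeds, period, dt):
    """Fourier-projection estimate of the oscillation amplitude at a trial period,
    assuming evenly spaced samples spanning a whole number of periods."""
    n = len(speeds)
    w = 2 * math.pi / period
    v0 = sum(speeds) / n  # mean speed
    a = 2.0 / n * sum((v - v0) * math.sin(w * i * dt) for i, v in enumerate(speeds))
    b = 2.0 / n * sum((v - v0) * math.cos(w * i * dt) for i, v in enumerate(speeds))
    return math.sqrt(a * a + b * b)

# Synthetic CME-like profile: 48 samples, 10 min apart, spanning two 240-min periods.
dt, period = 10.0, 240.0
speeds = [400 + 87 * math.sin(2 * math.pi * i * dt / period) for i in range(48)]
amp = sinusoid_amplitude(speeds, period, dt)  # recovers 87 (km/s)
```

In practice one would scan trial periods and keep the one with the largest recovered amplitude (or best fit residual), in both distance and time coordinates.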

  16. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
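A toy version of error-controlled snapshot selection can convey the idea: keep a snapshot only when its residual after projection onto the already-kept basis exceeds a tolerance. The residual test here is a simple stand-in for the paper's error estimator, and the snapshot vectors are invented:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def adaptive_snapshots(snapshots, tol):
    """Toy error-controlled selection: keep a snapshot only if its residual after
    projection onto the span of already-kept snapshots exceeds tol."""
    basis = []  # orthonormal vectors spanning the kept snapshots
    for s in snapshots:
        r = list(s)
        for q in basis:  # classical Gram-Schmidt against the current basis
            c = dot(q, s)
            r = [ri - c * qi for ri, qi in zip(r, q)]
        rnorm = math.sqrt(dot(r, r))
        if rnorm > tol:  # snapshot adds new information: keep it
            basis.append([ri / rnorm for ri in r])
    return basis

# Three snapshots, two of which are linearly dependent: only two are kept.
kept = adaptive_snapshots([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.0, 0.0]], 1e-8)
```

The paper's method additionally bounds the time-interpolation error and uses a single-pass incremental SVD instead of storing all kept snapshots, which is what keeps memory usage low on supercomputers.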

  17. Process for growing silicon carbide whiskers by undercooling

    DOEpatents

    Shalek, P.D.

    1987-10-27

    A method of growing silicon carbide whiskers, especially in the beta form, is disclosed using a heating schedule wherein the temperature of the atmosphere in the growth zone of a furnace is first raised to or beyond the growth temperature and then cooled to or below the growth temperature, inducing nucleation of whiskers at catalyst sites at a desired, selected point in time. 3 figs.

  18. Atmospheric Models For Over-Ocean Propagation Loss

    DTIC Science & Technology

    2015-08-24

    Radiosonde balloons are launched daily at selected locations, and measure temperature, dew point temperature, and air pressure as they ascend. Radiosondes...different times of year and locations. The result was used to estimate high-reliability SHF/EHF air-to-surface radio link performance in a maritime...environment. I. INTRODUCTION Air-to-surface radio links differ from typical satellite communications links in that the path elevation angles are lower

  19. Laboratory Evaluation of Drop-in Solvent Alternatives to n-Propyl Bromide for Vapor Degreasing

    NASA Technical Reports Server (NTRS)

    Mitchell, Mark A.; Lowrey, Nikki M.

    2012-01-01

    Based on this limited laboratory study, solvent blends of trans-1,2-dichloroethylene with HFEs, HFCs, or PFCs appear to be viable alternatives to n-propyl bromide for vapor degreasing. The lower boiling points of these blends may lead to greater solvent loss during use. Additional factors must be considered when selecting a solvent substitute, including stability over time, volatile organic compound (VOC) content, global warming potential (GWP), toxicity, and business considerations.

  20. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    NASA Technical Reports Server (NTRS)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.

  1. Influence of laser power on the penetration depth and geometry of scanning tracks in selective laser melting

    NASA Astrophysics Data System (ADS)

    Stopyra, Wojciech; Kurzac, Jarosław; Gruber, Konrad; Kurzynowski, Tomasz; Chlebus, Edward

    2016-12-01

    SLM technology allows the production of fully functional objects from metal and ceramic powders, with true density of more than 99.9%. The quality of items manufactured by the SLM method is affected by more than 100 parameters, which can be divided into fixed and variable. Fixed parameters are those whose values should be defined before the process and maintained in an appropriate range during it, e.g. chemical composition and morphology of the powder, oxygen level in the working chamber, and heating temperature of the substrate plate. In SLM technology, five parameters are variable, and their optimal set allows parts to be produced without defects (pores, cracks) and at an acceptable speed. These parameters are: laser power, distance between points, exposure time, distance between lines, and layer thickness. To develop optimal parameters, thin-wall or single-track experiments are performed, with the search narrowed to the three most influential parameters: laser power, exposure time, and distance between points. In this paper, the effect of laser power on the penetration depth and geometry of a scanned single track is shown. In this experiment, a titanium (grade 2) substrate plate was used and scanned by a fibre laser of 1064 nm wavelength. For each track, the width, height, and penetration depth of the laser beam were measured.
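
    As a rough companion to the parameter discussion above, a common rule-of-thumb in the SLM literature (not taken from this paper) combines four of the five variable parameters into a volumetric energy density:

```python
def energy_density(power_w, exposure_us, point_um, hatch_um, layer_um):
    """Volumetric energy density (J/mm^3) for point-wise SLM exposure:
    E = P * t_exposure / (point distance * hatch distance * layer thickness).
    A rule-of-thumb combination of process parameters, for comparison only."""
    joules = power_w * exposure_us * 1e-6          # W * s = J
    volume_mm3 = (point_um * 1e-3) * (hatch_um * 1e-3) * (layer_um * 1e-3)
    return joules / volume_mm3

# Example: 200 W, 100 us exposure, 50 um point distance,
# 100 um hatch distance, 50 um layer -> 80 J/mm^3.
e = energy_density(200.0, 100.0, 50.0, 100.0, 50.0)
```

    Such a scalar cannot replace single-track experiments, since different parameter sets with equal energy density melt differently, but it is a convenient first filter.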

  2. Optimization of the fiber laser parameters for local high-temperature impact on metal

    NASA Astrophysics Data System (ADS)

    Yatsko, Dmitrii S.; Polonik, Marina V.; Dudko, Olga V.

    2016-11-01

    This paper presents the local laser heating process of the surface layer of a metal sample. The aim is to create a molten pool with the required depth by laser thermal treatment. During heating, the metal temperature at any point of the molten zone should not reach the boiling point of the main material. The laser power, exposure time, and spot size of the laser beam are selected as the variable parameters. A mathematical model for heat transfer in a semi-infinite body, applicable to a finite slab, is used for preliminary theoretical estimation of acceptable parameter values for the laser thermal treatment. The optimization problem is solved using an algorithm based on the scanning method of the search space (a zero-order method of conditional optimization). The calculated values of the parameters (the optimal set of "laser radiation power - exposure time - spot radius") are used to conduct a series of physical experiments to obtain a molten pool with the required depth. The two-stage experiment consists of a local laser treatment of a metal (steel) plate, followed by examination of a microsection of the laser-irradiated region. The experimental results allow us to judge the adequacy of the calculations within the selected models.
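
    The scanning (zero-order) search can be sketched as below. The surrogate depth and temperature models are assumptions for illustration, not the heat-transfer model used in the paper.

```python
import itertools

def scan_parameters(depth_fn, peak_temp_fn, target_depth, t_boil, grids):
    """Zero-order scan of the (power, exposure time, spot radius) space:
    among feasible combinations (peak temperature below boiling), pick the
    one minimizing |melt depth - target depth|."""
    best, best_err = None, float("inf")
    for p, t, r in itertools.product(*grids):
        if peak_temp_fn(p, t, r) >= t_boil:    # boiling constraint violated
            continue
        err = abs(depth_fn(p, t, r) - target_depth)
        if err < best_err:
            best, best_err = (p, t, r), err
    return best, best_err

# Toy surrogate models (illustrative, not the paper's physics):
depth = lambda p, t, r: 0.01 * p * t / r       # melt depth
peak_temp = lambda p, t, r: 2.0 * p / r        # peak surface temperature
best, err = scan_parameters(depth, peak_temp, target_depth=1.0,
                            t_boil=500.0,
                            grids=([100, 200, 300], [0.5, 1.0], [1.0, 2.0]))
```

    The grid scan is slow but derivative-free, which suits models where gradients of the constraint are awkward to obtain.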

  3. Recent wetland land loss due to hurricanes: improved estimates based upon multiple source images

    USGS Publications Warehouse

    Kranenburg, Christine J.; Palaseanu-Lovejoy, Monica; Barras, John A.; Brock, John C.; Wang, Ping; Rosati, Julie D.; Roberts, Tiffany M.

    2011-01-01

    The objective of this study was to provide a moderate-resolution (30-m) fractional water map of the Chenier Plain for 2003, 2006, and 2009 by using information contained in high-resolution satellite imagery of a subset of the study area. Indices and transforms pertaining to vegetation and water were created using the high-resolution imagery, and a threshold was applied to obtain a categorical land/water map. The high-resolution data were used to train a decision-tree classifier to estimate percent water in a lower-resolution (Landsat) image. Two new water indices based on the tasseled cap transformation were proposed for IKONOS imagery in wetland environments, and more than 700 input parameter combinations were considered for each Landsat image classified. Final selection and thresholding of the resulting percent water maps involved over 5,000 unambiguously classified random points, using corresponding 1-m resolution aerial photographs, and a statistical optimization procedure to determine the threshold at which the maximum Kappa coefficient occurs. Each selected dataset has a Kappa coefficient and percent correctly classified (PCC) for water, land, and total greater than 90%. An accuracy assessment using 1,000 independent random points was performed; using the validation points, the PCC values decreased to around 90%. The time series change analysis indicated that due to Hurricane Rita, the study area lost 6.5% of its marsh area, and transient changes were less than 3% for either land or water. Hurricane Ike resulted in an additional 8% land loss, although not enough time has passed to discriminate between persistent and transient changes.
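
    The kappa-maximizing threshold selection can be sketched as follows; the tiny example data are hypothetical.

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for binary land(0)/water(1) labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    po = np.mean(y_true == y_pred)                   # observed agreement
    p1t, p1p = np.mean(y_true), np.mean(y_pred)      # marginal water rates
    pe = p1t * p1p + (1 - p1t) * (1 - p1p)           # chance agreement
    return (po - pe) / (1 - pe)

def best_threshold(percent_water, y_true,
                   candidates=np.linspace(0.05, 0.95, 19)):
    """Scan thresholds on a percent-water map; return the kappa-maximizing
    threshold and its kappa score."""
    scores = [cohens_kappa(y_true, percent_water >= t) for t in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

# Hypothetical percent-water values at reference points, with known labels.
t_best, kappa = best_threshold(np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.9]),
                               np.array([0, 0, 0, 1, 1, 1]))
```

    On real reference data, kappa is preferred over raw accuracy because it discounts the agreement expected from the land/water class proportions alone.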

  4. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    PubMed

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains depending on various peak features from several models. However, there is no study that provides the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameters estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.
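
    A minimal binary PSO for feature selection might look like the sketch below. This is standard PSO only (the RA-PSO variant and the actual EEG peak features are beyond the sketch), and the toy fitness function is an assumption.

```python
import numpy as np

def binary_pso(fitness, n_feats, n_particles=12, iters=40,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal binary PSO for feature selection: each particle is a 0/1
    feature mask; velocities pass through a sigmoid to give per-bit
    probabilities of setting the bit."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, (n_particles, n_feats))
    v = rng.normal(0.0, 1.0, (n_particles, n_feats))
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmax(pbest_f)].copy()
    return g, pbest_f.max()

# Toy fitness: features 0 and 2 are informative; extra features cost 0.1.
mask, best_f = binary_pso(lambda m: m[0] + m[2] - 0.1 * m.sum(), n_feats=6)
```

    In the paper's framework the fitness would instead be the peak-detection classification rate of a model trained on the selected features.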

  5. Analysis of the release process of phenylpropanolamine hydrochloride from ethylcellulose matrix granules III. Effects of the dissolution condition on the release process.

    PubMed

    Fukui, Atsuko; Fujii, Ryuta; Yonezawa, Yorinobu; Sunada, Hisakazu

    2006-08-01

    In the pharmaceutical preparation of a controlled-release drug, it is important to understand the complete release properties. As a first step, dissolution tests under various conditions are selected as the in vitro test, and the results are usually analyzed following the Drug Approval and Licensing Procedures. In this test, three time points corresponding to release ratios of 0.2-0.4, 0.4-0.6, and over 0.7, respectively, should be selected in advance, and the measured values are checked as to whether they fall inside or outside the prescribed limits at each time point. This method is simple and useful, but the details of the release properties cannot be clarified or confirmed. The validity of analyzing the dissolution test with a combination of the square-root time law and cube-root law equations, in order to understand the full drug release properties, was confirmed by comparing simulated values with those measured in the previous papers. Dissolution tests under various conditions affecting drug release in the human body were then examined, and the results were analyzed by both methods to identify their strengths and weaknesses. With these results, control of the pharmaceutical preparation and manufacturing process, and understanding of the drug release properties, will become more efficient. The analysis using the combination of the square-root time law and cube-root law equations is considered very useful and efficient, and the accuracy of predicting drug release properties in the human body was improved and clarified.
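
    The two release laws combined in the analysis can be written compactly. Parameter names are illustrative, and the fraction-released forms below are the textbook versions (Higuchi square-root law; Hixson-Crowell cube-root law), not necessarily the paper's exact parameterization.

```python
import numpy as np

def sqrt_time_release(t, k_h):
    """Square-root time (Higuchi) law: cumulative fraction released
    Q(t) = k_h * sqrt(t), valid for the diffusion-dominated early phase."""
    return k_h * np.sqrt(t)

def cube_root_release(t, k, q0=1.0):
    """Cube-root (Hixson-Crowell) law for a dissolving matrix:
    W0^(1/3) - W(t)^(1/3) = k * t, expressed as fraction released."""
    remaining = np.clip(q0 ** (1.0 / 3.0) - k * t, 0.0, None) ** 3
    return q0 - remaining
```

    Fitting both laws to the same dissolution profile and switching between them at the point where the square-root law starts to deviate is one simple way to cover the whole release curve.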

  6. Automatic detection of end-diastolic and end-systolic frames in 2D echocardiography.

    PubMed

    Zolgharni, Massoud; Negoita, Madalina; Dhutia, Niti M; Mielewczik, Michael; Manoharan, Karikaran; Sohaib, S M Afzal; Finegold, Judith A; Sacchi, Stefania; Cole, Graham D; Francis, Darrel P

    2017-07-01

    Correctly selecting the end-diastolic and end-systolic frames on a 2D echocardiogram is important and challenging, for both human experts and automated algorithms. Manual selection is time-consuming and subject to uncertainty, and may affect the results obtained, especially for advanced measurements such as myocardial strain. We developed and evaluated algorithms which can automatically extract global and regional cardiac velocity, and identify end-diastolic and end-systolic frames. We acquired apical four-chamber 2D echocardiographic video recordings, each at least 10 heartbeats long, acquired twice at frame rates of 52 and 79 frames/s from 19 patients, yielding 38 recordings. Five experienced echocardiographers independently marked end-systolic and end-diastolic frames for the first 10 heartbeats of each recording. The automated algorithm also did this. Using the average of time points identified by five human operators as the reference gold standard, the individual operators had a root mean square difference from that gold standard of 46.5 ms. The algorithm had a root mean square difference from the human gold standard of 40.5 ms (P<.0001). Put another way, the algorithm-identified time point was an outlier in 122/564 heartbeats (21.6%), whereas the average human operator was an outlier in 254/564 heartbeats (45%). An automated algorithm can identify the end-systolic and end-diastolic frames with performance indistinguishable from that of human experts. This saves staff time, which could therefore be invested in assessing more beats, and reduces uncertainty about the reliability of the choice of frame. © 2017, Wiley Periodicals, Inc.
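
    The root-mean-square comparison against the gold standard reduces to a one-liner; the sample values below are hypothetical.

```python
import numpy as np

def rms_difference_ms(picks, gold):
    """Root-mean-square timing difference (ms) between an operator's (or
    algorithm's) frame picks and the gold-standard frame times."""
    d = np.asarray(picks, float) - np.asarray(gold, float)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical picks that are alternately 10 ms early and late.
rms = rms_difference_ms([10.0, -10.0, 10.0, -10.0], [0.0, 0.0, 0.0, 0.0])
```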

  7. Analysis of age as a factor in NASA astronaut selection and career landmarks.

    PubMed

    Kovacs, Gregory T A; Shadden, Mark

    2017-01-01

    NASA's periodic selection of astronauts is a highly selective process accepting applications from the general population, wherein the mechanics of selection are not made public. This research was an effort to determine if biases (specifically age) exist in the process and, if so, at which points they might manifest. Two sets of analyses were conducted. The first utilized data requested via the Freedom of Information Act (FOIA) on NASA astronaut applicants for the 2009 and 2013 selection years. Using a series of multinomial and logistic regressions, the data were analyzed to uncover whether age of the applicants linearly or nonlinearly affected their likelihood of receiving an invitation, as well as their likelihood of being selected into the astronaut program. The second used public data on age at selection and age at other career milestones for every astronaut selected from 1959 to 2013 to analyze trends in age over time using ordinary least-squares (OLS) regression and Pearson's correlation. The results for the FOIA data revealed a nonlinear relationship between age and receiving an interview, as well as age and selection into the astronaut program, but the most striking observation was the loss of age diversity at each stage of selection. Applicants younger or older than approximately 40 years were significantly less likely to receive invitations for interviews and were significantly less likely to be selected as an astronaut. Analysis of the public-source data for all selections since the beginning of the astronaut program revealed significant age trends over time including a gradual increase in selectee age and decreased tenure at NASA after last flight, with average age at retirement steady over the entire history of the astronaut program at approximately 48 years.
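
    The nonlinear (inverted-U) age effect can be illustrated with a logit that is quadratic in age. The coefficients below are invented for illustration and are not NASA's; they are merely chosen so the probability peaks near 40 years, matching the qualitative finding.

```python
import numpy as np

def interview_probability(age, b0=-8.0, b1=0.45, b2=-0.0056):
    """Hypothetical quadratic-in-age logistic model: with b2 < 0 the
    probability peaks at age -b1 / (2 * b2), here about 40 years."""
    logit = b0 + b1 * age + b2 * age ** 2
    return 1.0 / (1.0 + np.exp(-logit))

ages = np.arange(25, 56)
probs = interview_probability(ages)
```

    In a real analysis the coefficients would be fitted by multinomial or logistic regression on the applicant data, with the quadratic term testing for the nonlinearity reported above.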

  8. Analysis of age as a factor in NASA astronaut selection and career landmarks

    PubMed Central

    Shadden, Mark

    2017-01-01

    NASA’s periodic selection of astronauts is a highly selective process accepting applications from the general population, wherein the mechanics of selection are not made public. This research was an effort to determine if biases (specifically age) exist in the process and, if so, at which points they might manifest. Two sets of analyses were conducted. The first utilized data requested via the Freedom of Information Act (FOIA) on NASA astronaut applicants for the 2009 and 2013 selection years. Using a series of multinomial and logistic regressions, the data were analyzed to uncover whether age of the applicants linearly or nonlinearly affected their likelihood of receiving an invitation, as well as their likelihood of being selected into the astronaut program. The second used public data on age at selection and age at other career milestones for every astronaut selected from 1959 to 2013 to analyze trends in age over time using ordinary least-squares (OLS) regression and Pearson’s correlation. The results for the FOIA data revealed a nonlinear relationship between age and receiving an interview, as well as age and selection into the astronaut program, but the most striking observation was the loss of age diversity at each stage of selection. Applicants younger or older than approximately 40 years were significantly less likely to receive invitations for interviews and were significantly less likely to be selected as an astronaut. Analysis of the public-source data for all selections since the beginning of the astronaut program revealed significant age trends over time including a gradual increase in selectee age and decreased tenure at NASA after last flight, with average age at retirement steady over the entire history of the astronaut program at approximately 48 years. PMID:28749968

  9. Nutrition Report Cards: An Opportunity to Improve School Lunch Selection

    PubMed Central

    Wansink, Brian; Just, David R.; Patterson, Richard W.; Smith, Laura E.

    2013-01-01

    Objective To explore the feasibility and implementation efficiency of Nutritional Report Cards(NRCs) in helping children make healthier food choices at school. Methods Pilot testing was conducted in a rural New York school district (K-12). Over a five-week period, 27 parents received a weekly e-mail containing a NRC listing how many meal components (fruits, vegetables, starches, milk), snacks, and a-la-carte foods their child selected. We analyzed choices of students in the NRC group vs. the control group, both prior to and during the intervention period. Point-of-sale system data for a-la-carte items was analyzed using Generalized Least Squares regressions with clustered standard errors. Results NRCs encouraged more home conversations about nutrition and more awareness of food selections. Despite the small sample, the NRC was associated with reduced selection of some items, such as the percentage of those selecting cookies which decreased from 14.3 to 6.5 percent. Additionally, despite requiring new keys on the check-out registers to generate the NRC, checkout times increased by only 0.16 seconds per transaction, and compiling and sending the NRCs required a total weekly investment of 30 minutes of staff time. Conclusions This test of concept suggests that NRCs are a feasible and inexpensive tool to guide children towards healthier choices. PMID:24098324

  10. Nutrition Report Cards: an opportunity to improve school lunch selection.

    PubMed

    Wansink, Brian; Just, David R; Patterson, Richard W; Smith, Laura E

    2013-01-01

    To explore the feasibility and implementation efficiency of Nutritional Report Cards (NRCs) in helping children make healthier food choices at school. Pilot testing was conducted in a rural New York school district (K-12). Over a five-week period, 27 parents received a weekly e-mail containing a NRC listing how many meal components (fruits, vegetables, starches, milk), snacks, and a-la-carte foods their child selected. We analyzed choices of students in the NRC group vs. the control group, both prior to and during the intervention period. Point-of-sale system data for a-la-carte items was analyzed using Generalized Least Squares regressions with clustered standard errors. NRCs encouraged more home conversations about nutrition and more awareness of food selections. Despite the small sample, the NRC was associated with reduced selection of some items, such as the percentage of those selecting cookies which decreased from 14.3 to 6.5 percent. Additionally, despite requiring new keys on the check-out registers to generate the NRC, checkout times increased by only 0.16 seconds per transaction, and compiling and sending the NRCs required a total weekly investment of 30 minutes of staff time. This test of concept suggests that NRCs are a feasible and inexpensive tool to guide children towards healthier choices.

  11. Multi-star processing and gyro filtering for the video inertial pointing system

    NASA Technical Reports Server (NTRS)

    Murphy, J. P.

    1976-01-01

    The video inertial pointing (VIP) system is being developed to satisfy the acquisition and pointing requirements of astronomical telescopes. The VIP system uses a single video sensor to provide star position information that can be used to generate three-axis pointing error signals (multi-star processing) and for input to a cathode ray tube (CRT) display of the star field. The pointing error signals are used to update the telescope's gyro stabilization system (gyro filtering). The CRT display facilitates target acquisition and positioning of the telescope by a remote operator. Linearized small-angle equations are used for the multi-star processing, and a consideration of error performance and singularities leads to star-pair location restrictions and equation selection criteria. A discrete steady-state Kalman filter which uses the integration of the gyros is developed and analyzed. The filter includes unit time delays representing the asynchronous operations of the VIP microprocessor and video sensor. A digital simulation of a typical gyro-stabilized gimbal is developed and used to validate the approach to the gyro filtering.

  12. Trajectories of affective states in adolescent hockey players: turning point and motivational antecedents.

    PubMed

    Gaudreau, Patrick; Amiot, Catherine E; Vallerand, Robert J

    2009-03-01

    This study examined longitudinal trajectories of positive and negative affective states with a sample of 265 adolescent elite hockey players followed across 3 measurement points during the 1st 11 weeks of a season. Latent class growth modeling, incorporating a time-varying covariate and a series of predictors assessed at the onset of the season, was used to chart out distinct longitudinal trajectories of affective states. Results provided evidence for 3 trajectories of positive affect and 3 trajectories of negative affect. Two of these trajectories were deflected by team selection, a seasonal turning point occurring after the 1st measurement point. Furthermore, the trajectories of positive and negative affective states were predicted by theoretically driven predictors assessed at the start of the season (i.e., self-determination, need satisfaction, athletic identity, and school identity). These results contribute to a better understanding of the motivational, social, and identity-related processes associated with the distinct affective trajectories of athletes participating in elite sport during adolescence.

  13. Design and implementation of flexible TWDM-PON with PtP WDM overlay based on WSS for next-generation optical access networks

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Yin, Hongxi; Qin, Jie; Liu, Chang; Liu, Anliang; Shao, Qi; Xu, Xiaoguang

    2016-09-01

    Aiming at the increasing demand for diversified services and flexible bandwidth allocation in future access networks, a flexible passive optical network (PON) scheme combining time and wavelength division multiplexing (TWDM) with a point-to-point wavelength division multiplexing (PtP WDM) overlay is proposed for next-generation optical access networks in this paper. A novel software-defined optical distribution network (ODN) structure is designed based on wavelength selective switches (WSS), which can implement dynamic wavelength and bandwidth allocation and is well suited to bursty traffic. The experimental results reveal that the TWDM-PON can provide 40 Gb/s downstream and 10 Gb/s upstream data transmission, while the PtP WDM-PON can support 10 GHz point-to-point dedicated bandwidth as the overlay complement system. The wavelengths of the TWDM-PON and PtP WDM-PON are allocated dynamically based on the WSS, which verifies the feasibility of the proposed structure.

  14. A Data Filter for Identifying Steady-State Operating Points in Engine Flight Data for Condition Monitoring Applications

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Litt, Jonathan S.

    2010-01-01

    This paper presents an algorithm that automatically identifies and extracts steady-state engine operating points from engine flight data. It calculates the mean and standard deviation of select parameters contained in the incoming flight data stream. If the standard deviation of the data falls below defined constraints, the engine is assumed to be at a steady-state operating point, and the mean measurement data at that point are archived for subsequent condition monitoring purposes. The fundamental design of the steady-state data filter is completely generic and applicable for any dynamic system. Additional domain-specific logic constraints are applied to reduce data outliers and variance within the collected steady-state data. The filter is designed for on-line real-time processing of streaming data as opposed to post-processing of the data in batch mode. Results of applying the steady-state data filter to recorded helicopter engine flight data are shown, demonstrating its utility for engine condition monitoring applications.
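
    The filter's core logic, a sliding window whose parameter standard deviations must all fall below limits before the window means are archived, can be sketched as follows. Window length and limits are illustrative, and the paper's additional domain-specific outlier logic is omitted.

```python
from collections import deque
import numpy as np

def steady_state_points(stream, window, std_limits):
    """Flag steady-state operating points in a sample stream: when every
    monitored parameter's standard deviation over the window falls below
    its limit, archive the window means for condition monitoring."""
    buf = deque(maxlen=window)
    archived = []
    for sample in stream:              # sample: 1-D array of select parameters
        buf.append(np.asarray(sample, float))
        if len(buf) == window:
            arr = np.stack(buf)
            if np.all(arr.std(axis=0) < std_limits):
                archived.append(arr.mean(axis=0))
                buf.clear()            # avoid re-archiving the same dwell
    return archived

# Hypothetical stream: a 30-sample ramp (transient) then a steady dwell.
stream = [[float(i)] for i in range(30)] + [[50.0]] * 25
points = steady_state_points(stream, window=20, std_limits=np.array([0.5]))
```

    Because the buffer slides sample by sample, the filter is suitable for on-line processing of streaming data rather than batch post-processing.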

  15. A fast analytical undulator model for realistic high-energy FEL simulations

    NASA Astrophysics Data System (ADS)

    Tatchyn, R.; Cremer, T.

    1997-02-01

    A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.

  16. Performance Evaluation of Remote Memory Access (RMA) Programming on Shared Memory Parallel Computers

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    The purpose of this study is to evaluate the feasibility of remote memory access (RMA) programming on shared memory parallel computers. We discuss different RMA based implementations of selected CFD application benchmark kernels and compare them to corresponding message passing based codes. For the message-passing implementation we use MPI point-to-point and global communication routines. For the RMA based approach we consider two different libraries supporting this programming model. One is a shared memory parallelization library (SMPlib) developed at NASA Ames, the other is the MPI-2 extensions to the MPI Standard. We give timing comparisons for the different implementation strategies and discuss the performance.

  17. Selection on worker honeybee responses to queen pheromone (Apis mellifera L.)

    NASA Astrophysics Data System (ADS)

    Pankiw, T.; Winston, Mark L.; Fondrk, M. Kim; Slessor, Keith N.

    Disruptive selection for responsiveness to queen mandibular gland pheromone (QMP) in the retinue bioassay resulted in the production of high and low QMP responding strains of honeybees (Apis mellifera L.). Strains differed significantly in their retinue response to QMP after one generation of selection. By the third generation the high strain was on average at least nine times more responsive than the low strain. The strains showed seasonal phenotypic plasticity such that both strains were more responsive to the pheromone in the spring than in the fall. Directional selection for low seasonal variation indicated that phenotypic plasticity was an additional genetic component to retinue response to QMP. Selection for high and low retinue responsiveness to QMP was not an artifact of the synthetic blend because both strains were equally responsive or non-responsive to whole mandibular gland extracts compared with QMP. The use of these strains clearly pointed to an extra-mandibular source of retinue pheromones (Pankiw et al. 1995; Slessor et al. 1998; Keeling et al. 1999).

  18. The Euclid AOCS science mode design

    NASA Astrophysics Data System (ADS)

    Bacchetta, A.; Saponara, M.; Torasso, A.; Saavedra Criado, G.; Girouart, B.

    2015-06-01

    Euclid is a Medium-Class mission of the ESA Cosmic Vision 2015-2025 plan. Thales Alenia Space Italy has been selected as prime contractor for the Euclid design and implementation. The spacecraft will be launched in 2020 on a Soyuz launch vehicle from Kourou, to a large-amplitude orbit around the sun-earth libration point L2. The objective of Euclid is to understand the origin of the Universe's accelerating expansion, by mapping large-scale structure over a cosmic time covering the last 10 billion years. The mission requires the ability to survey a large fraction of the extragalactic sky (i.e. the portion of sky with latitude higher than 30 deg with respect to the galactic plane) over its lifetime, with very high system stability (telescope, focal plane, spacecraft pointing) to minimize systematic effects. The AOCS is a key element to meet the scientific requirements. The AOCS design drivers are pointing performance and image quality (Relative Pointing Error over 700 s less than 25 mas, at 68% confidence level), and minimization of slew time between observation fields to meet the goal of completing the Wide Extragalactic Survey in 6 years. The first driver demands a Fine Guidance Sensor in the telescope focal plane for accurate attitude measurement, and actuators with low noise and fine command resolution. The second driver requires high-torque actuators and an extended attitude control bandwidth. In the design, reaction wheels (RWL) and cold-gas micro-propulsion (MPS) are used in a synergetic and complementary way during different operational phases of the science mode. The RWL are used for performing the field slews, whereas during scientific observation they are stopped so as not to perturb the pointing with additional mechanical noise. The MPS is used for maintaining the reference attitude with high pointing accuracy during the scientific observation. This unconventional concept achieves the pointing performance with the shortest maneuver times, with significant mass savings with respect to an MPS-only solution.

  19. Novel point estimation from a semiparametric ratio estimator (SPRE): long-term health outcomes from short-term linear data, with application to weight loss in obesity.

    PubMed

    Weissman-Miller, Deborah

    2013-11-02

    Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This important estimation capability is addressed for small groups and single-subject designs in pilot studies for clinical trials, medical and therapeutic clinical practice. These estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value that steps estimated outcomes forward in time, where the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. This model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, which has been selected due to the dramatic increase in obesity in the United States over the past 20 years. A very small relative error of estimated to test data is shown for obesity treatment with the weight loss medication phentermine or placebo for the test dataset. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
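
    The general shape of the SPRE step, a prior outcome multiplied by a ratio of two-parameter Weibull terms, can be illustrated as below. This is only a sketch of the form: the published estimator's exact ratio, change-point detection, and parameter fitting at the change point follow the paper, and all names here are illustrative.

```python
import math

def weibull_pdf(t, shape, scale):
    """Two-parameter Weibull probability density."""
    return (shape / scale) * (t / scale) ** (shape - 1) \
        * math.exp(-(t / scale) ** shape)

def spre_step(y_prev, t_prev, t, shape, scale):
    """One illustrative forward step: prior outcome times a ratio of
    Weibull terms at the new and previous time points (sketch of the
    general SPRE form, not the published estimator)."""
    return y_prev * weibull_pdf(t, shape, scale) / weibull_pdf(t_prev, shape, scale)

# With shape = 1 the Weibull reduces to an exponential, so stepping from
# t=1 to t=2 multiplies the outcome by exp(-(2 - 1) / scale).
y = spre_step(100.0, 1.0, 2.0, shape=1.0, scale=10.0)
```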

  20. Born at the Wrong Time: Selection Bias in the NHL Draft

    PubMed Central

    Deaner, Robert O.; Lowen, Aaron; Cobley, Stephen

    2013-01-01

    Relative age effects (RAEs) occur when those who are relatively older for their age group are more likely to succeed. RAEs occur reliably in some educational and athletic contexts, yet the causal mechanisms remain unclear. Here we provide the first direct test of one mechanism, selection bias, which can be defined as evaluators granting fewer opportunities to relatively younger individuals than is warranted by their latent ability. Because RAEs are well-established in hockey, we analyzed National Hockey League (NHL) drafts from 1980 to 2006. Compared to those born in the first quarter (i.e., January–March), those born in the third and fourth quarters were drafted more than 40 slots later than their productivity warranted, and they were roughly twice as likely to reach career benchmarks, such as 400 games played or 200 points scored. This selection bias in drafting did not decrease over time, apparently continues to occur, and reduces the playing opportunities of relatively younger players. This bias is remarkable because it is exhibited by professional decision makers evaluating adults in a context where RAEs have been widely publicized. Thus, selection bias based on relative age may be pervasive. PMID:23460902
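The "roughly twice as likely" comparison in this record is the kind of result that comes out of a 2x2 odds-ratio calculation. The counts below are hypothetical and NOT taken from the study; only the form of the calculation is standard.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
                      reached benchmark | did not
    relatively young        a           |   b
    relatively old          c           |   d
    """
    return (a / b) / (c / d)

# Hypothetical counts, invented for illustration:
# 60 of 300 drafted Q3/Q4 players vs 40 of 350 Q1 players reach 400 games.
or_q34_vs_q1 = odds_ratio(60, 240, 40, 310)
```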

  1. Selective synthesis and characterization of single-crystal silver molybdate/tungstate nanowires by a hydrothermal process.

    PubMed

    Cui, Xianjin; Yu, Shu-Hong; Li, Lingling; Biao, Liu; Li, Huabin; Mo, Maosong; Liu, Xian-Ming

    2004-01-05

Selective synthesis of uniform single-crystalline silver molybdate/tungstate nanorods/nanowires on a large scale can be easily realized by a facile hydrothermal recrystallization technique. The synthesis is strongly dependent on pH, temperature, and reaction time. The phase transformation was examined in detail. Pure Ag(2)MoO(4) and Ag(6)Mo(10)O(33) can be easily obtained under neutral conditions and at pH 2, respectively, whereas other mixed phases of Mo(17)O(47), Ag(2)Mo(2)O(7), and Ag(6)Mo(10)O(33) were observed under different pH conditions. Ag(6)Mo(10)O(33) nanowires with uniform diameters of 50-60 nm and lengths up to several hundred micrometers were synthesized on a large scale for the first time at 140 degrees C. The melting point of the Ag(6)Mo(10)O(33) nanowires was found to be about 238 degrees C. Similarly, Ag(2)WO(4) and Ag(2)W(2)O(7) nanorods/nanowires can be selectively synthesized by controlling the pH value. The results demonstrate that this route could be a potentially mild way to selectively synthesize various molybdate nanowires with various phases on a large scale.

  2. From Protocols to Publications: A Study in Selective Reporting of Outcomes in Randomized Trials in Oncology

    PubMed Central

    Raghav, Kanwal Pratap Singh; Mahajan, Sminil; Yao, James C.; Hobbs, Brian P.; Berry, Donald A.; Pentz, Rebecca D.; Tam, Alda; Hong, Waun K.; Ellis, Lee M.; Abbruzzese, James; Overman, Michael J.

    2015-01-01

Purpose The decision by journals to append protocols to published reports of randomized trials was a landmark event in clinical trial reporting. However, limited information is available on how this initiative affected transparency and selective reporting of clinical trial data. Methods We analyzed 74 oncology-based randomized trials published in Journal of Clinical Oncology, the New England Journal of Medicine, and The Lancet in 2012. To ascertain integrity of reporting, we compared published reports with their respective appended protocols with regard to primary end points, nonprimary end points, unplanned end points, and unplanned analyses. Results A total of 86 primary end points were reported in 74 randomized trials; nine trials had more than one primary end point. Nine trials (12.2%) had some discrepancy between their planned and published primary end points. A total of 579 nonprimary end points (median, seven per trial) were planned, of which 373 (64.4%; median, five per trial) were reported. A significant positive correlation was found between the number of planned and nonreported nonprimary end points (Spearman r = 0.66; P < .001). Twenty-eight studies (37.8%) reported a total of 65 unplanned end points, of which 52 (80.0%) were not identified as unplanned. Thirty-one (41.9%) and 19 (25.7%) of 74 trials reported a total of 52 unplanned analyses involving primary end points and 33 unplanned analyses involving nonprimary end points, respectively. Studies reported positive unplanned end points and unplanned analyses more frequently than negative outcomes in abstracts (unplanned end points odds ratio, 6.8; P = .002; unplanned analyses odds ratio, 8.4; P = .007). Conclusion Despite public and reviewer access to protocols, selective outcome reporting persists and is a major concern in the reporting of randomized clinical trials. To foster credible evidence-based medicine, additional initiatives are needed to minimize selective reporting. PMID:26304898
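The Spearman correlation reported between planned and non-reported nonprimary end points is simply the Pearson correlation of the two rank vectors. A minimal, dependency-free sketch; the per-trial counts are invented for illustration, not the study's data:

```python
def ranks(xs):
    """1-based ranks; tied values share their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's r = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-trial (planned, non-reported) counts:
planned = [3, 5, 7, 9, 12]
nonreported = [0, 1, 3, 4, 6]
r = spearman(planned, nonreported)  # both strictly increasing -> r = 1.0
```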

  3. Camera system considerations for geomorphic applications of SfM photogrammetry

    USGS Publications Warehouse

    Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John

    2017-01-01

The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community despite the details of these instruments being largely overlooked in current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating the accuracies of four SfM datasets acquired over multiple years of a gravel bed river floodplain using independent ground check points, with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m3) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%. Regression analysis of 67 reviewed datasets revealed that the best explanatory variable for predicting the accuracy of SfM data is photographic scale. Despite the prevalent use of object distance ratios to describe scale, nominal ground sample distance is shown to be a superior metric, explaining 68% of the variability in mean absolute vertical error.
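Nominal ground sample distance, the scale metric this study favors, is the ground footprint of one sensor pixel and follows from similar triangles: pixel pitch times object distance divided by focal length. A small sketch with hypothetical camera parameters (the 4.4 µm pitch, 24 mm lens, and 120 m distance are invented, not from the study):

```python
def nominal_gsd(pixel_pitch_mm, focal_length_mm, object_distance_m):
    """Nominal ground sample distance in m/pixel: the ground footprint
    of one sensor pixel, pixel_pitch * distance / focal_length."""
    return pixel_pitch_mm * object_distance_m / focal_length_mm

# Hypothetical camera: 4.4 um pixels (0.0044 mm), 24 mm lens, 120 m range
gsd = nominal_gsd(0.0044, 24.0, 120.0)  # -> 0.022 m/pixel (2.2 cm)
```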

  4. Microbial community changes in hydraulic fracturing fluids and produced water from shale gas extraction.

    PubMed

    Murali Mohan, Arvind; Hartsock, Angela; Bibby, Kyle J; Hammack, Richard W; Vidic, Radisav D; Gregory, Kelvin B

    2013-11-19

Microbial communities associated with produced water from hydraulic fracturing are not well understood, and their deleterious activity can lead to significant increases in production costs and adverse environmental impacts. In this study, we compared the microbial ecology in prefracturing fluids (fracturing source water and fracturing fluid) and produced water at multiple time points from a natural gas well in southwestern Pennsylvania using 16S rRNA gene-based clone libraries, pyrosequencing, and quantitative PCR. The majority of the bacterial community in prefracturing fluids constituted aerobic species affiliated with the class Alphaproteobacteria. However, their relative abundance decreased in produced water with an increase in halotolerant, anaerobic/facultative anaerobic species affiliated with the classes Clostridia, Bacilli, Gammaproteobacteria, Epsilonproteobacteria, Bacteroidia, and Fusobacteria. Produced water collected at the last time point (day 187) consisted almost entirely of sequences similar to Clostridia and showed a decrease in bacterial abundance by 3 orders of magnitude compared to the prefracturing fluids and produced water samples from earlier time points. Geochemical analysis showed that produced water contained higher concentrations of salts and total radioactivity compared to prefracturing fluids. This study provides evidence of long-term subsurface selection of the microbial community introduced through hydraulic fracturing, which may have significant implications for disinfection as well as reuse of produced water in future fracturing operations.

  5. Pursuing Excellence: The Power of Selection Science to Provide Meaningful Data and Enhance Efficiency in Selecting Surgical Trainees.

    PubMed

    Gardner, Aimee K; Dunkin, Brian J

    2018-05-01

As current screening methods for selecting surgical trainees are receiving increasing scrutiny, development of a more efficient and effective selection system is needed. We describe the process of creating an evidence-based selection system and examine its impact on screening efficiency, faculty perceptions, and improving representation of underrepresented minorities. The program partnered with an expert in organizational science to identify fellowship position requirements and associated competencies. Situational judgment tests, personality profiles, structured interviews, and technical skills assessments were used to measure these competencies. The situational judgment test and personality profiles were administered online and used to identify candidates to invite for on-site structured interviews and skills testing. A final rank list was created based on all data points and their respective importance. All faculty completed follow-up surveys regarding their perceptions of the process. Candidate demographic and experience data were pulled from the application website. Fifty-five of 72 applicants met eligibility requirements and were invited to take the online assessment, with 50 (91%) completing it. Average time to complete was 42 ± 12 minutes. Eighteen applicants (35%) were invited for on-site structured interviews and skills testing, a greater than 50% reduction in the number of invites compared with prior years. Time estimates reveal that the process will result in a time savings of 68% for future iterations, compared with traditional methodologies. Fellowship faculty (N = 5) agreed on the value and efficiency of the process. The proportion of underrepresented minority candidates invited for an interview and ranked using the new screening tools increased from an initial 70% to 92%. Applying selection science to the process of choosing surgical trainees is feasible, efficient, and well received by faculty for making selection decisions.

  6. Piecing together stakeholder puzzles-puzzling about (opioid substitute treatment---OST) stakeholders and their pieces: a rambling point-of-view.

    PubMed

    Einstein, Stan

    2013-08-01

This point-of-view presentation explores "stakeholders" and "opioid substitute treatment": their dimensions, selected necessary conditions that enable them to operate (or not), and their implications and consequences, from a range of selected perspectives.

  7. Correlation of the ionisation response at selected points of IC sensitive regions with SEE sensitivity parameters under pulsed laser irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordienko, A V; Mavritskii, O B; Egorov, A N

    2014-12-31

    The statistics of the ionisation response amplitude measured at selected points and their surroundings within sensitive regions of integrated circuits (ICs) under focused femtosecond laser irradiation is obtained for samples chosen from large batches of two types of ICs. A correlation between these data and the results of full-chip scanning is found for each type. The criteria for express validation of IC single-event effect (SEE) hardness based on ionisation response measurements at selected points are discussed. (laser applications and other topics in quantum electronics)

  8. Three-dimensional virtual bone bank system for selecting massive bone allograft in orthopaedic oncology.

    PubMed

    Wu, Zhigang; Fu, Jun; Wang, Zhen; Li, Xiangdong; Li, Jing; Pei, Yanjun; Pei, Guoxian; Li, Dan; Guo, Zheng; Fan, Hongbin

    2015-06-01

Although structural bone allografts have been used for years to treat large defects caused by tumour or trauma, selecting the most appropriate allograft is still challenging. The objectives of this study were to: (1) describe the establishment of a visual bone bank system and the workflow of allograft selection, and (2) show mid-term follow-up results of patients after allograft implantation. Allografts were scanned and stored in Digital Imaging and Communications in Medicine (DICOM) files. Then, image segmentation was conducted and a 3D model reconstructed to establish a visual bone bank system. Based on the volume registration method, allografts were selected after a careful matching process. From November 2010 to June 2013, with the help of the Computer-assisted Orthopaedic Surgery (CAOS) navigation system, the allografts were implanted in 14 patients to fill defects after tumour resection. By combining the virtual bone bank and CAOS, selection time was reduced and matching accuracy was increased. After 27.5 months of follow-up, the mean Musculoskeletal Tumor Society (MSTS) 93 functional score was 25.7 ± 1.1 points. Except for two patients with pulmonary metastases, 12 patients were alive without evidence of disease at the time this report was written. The virtual bone bank system was helpful for allograft selection, tumour excision and bone reconstruction, thereby improving the safety and effectiveness of limb-salvage surgery.

  9. 30 s Response Time of K+ Ion-Selective Hydrogels Functionalized with 18-Crown-6 Ether Based on QCM Sensor.

    PubMed

    Zhang, Zhenxiao; Dou, Qian; Gao, Hongkai; Bai, Bing; Zhang, Yongmei; Hu, Debo; Yetisen, Ali K; Butt, Haider; Yang, Xiaoxia; Li, Congju; Dai, Qing

    2018-03-01

Potassium detection is critical in monitoring imbalances in electrolytes and physiological status. The development of rapid and robust potassium sensors is desirable in clinical chemistry and point-of-care applications. In this study, composite supramolecular hydrogels are investigated: polyethylene glycol methacrylate and acrylamide copolymer (P(PEGMA-co-AM)) hydrogels are functionalized with 18-crown-6 ether by employing surface-initiated polymerization. Real-time potassium ion monitoring is realized by combining these compounds with a quartz crystal microbalance (QCM). The device demonstrates a rapid response time of ≈30 s and a concentration detection range from 0.5 to 7.0 × 10(-3) M. These hydrogels also exhibit high reusability and K(+) ion selectivity relative to other cations in biofluids such as Na(+), NH(4)(+), Mg(2+), and Ca(2+). These results provide a new approach for sensing alkali metal ions using P(PEGMA-co-AM) hydrogels.
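QCM sensing of this kind converts a resonance-frequency shift into adsorbed mass. The classic conversion for thin rigid films is the Sauerbrey relation; the abstract does not give the crystal parameters, so the 5 MHz fundamental, 1 cm² area, and 100 ng load below are illustrative assumptions:

```python
import math

# Quartz constants (CGS units), standard values for AT-cut quartz
RHO_Q = 2.648      # g/cm^3, density
MU_Q = 2.947e11    # g/(cm*s^2), shear modulus

def sauerbrey_df(f0_hz, delta_mass_g, area_cm2):
    """Frequency shift (Hz) for a rigid mass load on a QCM crystal,
    Sauerbrey equation: df = -2 f0^2 dm / (A sqrt(rho_q * mu_q))."""
    return -2.0 * f0_hz ** 2 * delta_mass_g / (area_cm2 * math.sqrt(RHO_Q * MU_Q))

# Hypothetical: 5 MHz crystal, 1 cm^2 electrode, 100 ng adsorbed K+ complex
df = sauerbrey_df(5e6, 100e-9, 1.0)  # about -5.7 Hz
```

For a 5 MHz crystal this works out to roughly 56.6 Hz per µg/cm², the commonly quoted mass sensitivity.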

  10. Influence of wholesale lamb marketing options and merchandising styles on retail yield and fabrication time.

    PubMed

    Lorenzen, C L; Martin, A M; Griffin, D B; Dockerty, T R; Walter, J P; Johnson, H K; Savell, J W

    1997-01-01

    Lamb carcasses (n = 94) from five packing plants, selected to vary in weight class and fat thickness, were used to determine retail yield and labor requirements of wholesale lamb fabrication. Carcasses were allotted randomly according to weight class to be fabricated as whole carcasses (n = 20), three-piece boxes (n = 22), or subprimals (n = 52). Processing times (seconds) were recorded and wholesale and retail weights (kilograms) were obtained to calculate retail yield. Subprimals were fabricated into bone-in retail cuts or boneless or semi-boneless retail cuts. Retail yield for subprimal lamb legs decreased from 85.3 +/- .6% for bone-in to 68.0 +/- .7% for a completely boneless retail product. Correspondingly, processing times increased from 126.1 +/- 5.4 s to 542.0 +/- 19.2 s for bone-in and boneless legs, respectively. For all subprimals, retail yield percentage tended to decrease and total processing time increase as cuts were fabricated to boneless or semi-boneless end points compared with a bone-in end point. Percentage retail yield did not differ (P > .05) among whole carcass, three-piece box, and subprimal marketing methods. Total processing time was shorter for subprimals (P < .05) than for the other two marketing methods.

  11. Evaluation of a time efficient immunization strategy for anti-PAH antibody development

    PubMed Central

    Li, Xin; Kaattari, Stephen L.; Vogelbein, Mary Ann; Unger, Michael A.

    2016-01-01

    The development of monoclonal antibodies (mAb) with affinity to small molecules can be a time-consuming process. To evaluate shortening the time for mAb production, we examined mouse antisera at different time points post-immunization to measure titer and to evaluate the affinity to the immunogen PBA (pyrene butyric acid). Fusions were also conducted temporally to evaluate antibody production success at various time periods. We produced anti-PBA antibodies 7 weeks post-immunization and selected for anti-PAH reactivity during the hybridoma screening process. Moreover, there were no obvious sensitivity differences relative to antibodies screened from a more traditional 18 week schedule. Our results demonstrate a more time efficient immunization strategy for anti-PAH antibody development that may be applied to other small molecules. PMID:27282486

  12. Point Clouds to Indoor/outdoor Accessibility Diagnosis

    NASA Astrophysics Data System (ADS)

    Balado, J.; Díaz-Vilariño, L.; Arias, P.; Garrido, I.

    2017-09-01

This work presents an approach to automatically detect structural floor elements, such as steps or ramps, in the immediate environment of buildings; these elements may affect accessibility to buildings. The methodology is based on Mobile Laser Scanner (MLS) point clouds and trajectory information. First, the street is segmented into stretches along the trajectory of the MLS in order to work in regular spaces. Next, the lower region of each stretch (the ground zone) is selected as the ROI, and the normal, curvature, and tilt are calculated for each point. With this information, points in the ROI are classified as horizontal, inclined, or vertical. Points are refined and grouped into structural elements using raster processing and connected components, in different phases for each type of previously classified point. Finally, the trajectory data are used to distinguish between road and sidewalks. Adjacency information is used to classify structural elements as steps, ramps, curbs, and curb-ramps. The methodology is tested in a real case study consisting of 100 m of an urban street. Ground elements are correctly classified in an acceptable computation time. Steps and ramps are also exported to GIS software to enrich building models from OpenStreetMap with information about accessible/inaccessible entrances and their locations.
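The horizontal/inclined/vertical split described above amounts to thresholding the angle between each point's surface normal and the vertical axis. A minimal sketch; the 10° and 80° thresholds are assumptions for illustration, not the paper's settings:

```python
import math

def classify_point(normal, horiz_max_deg=10.0, vert_min_deg=80.0):
    """Classify a ground point by the tilt of its surface normal from
    vertical (thresholds are illustrative assumptions)."""
    nx, ny, nz = normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    tilt = math.degrees(math.acos(abs(nz) / norm))
    if tilt <= horiz_max_deg:
        return "horizontal"   # e.g. sidewalk or road surface
    if tilt >= vert_min_deg:
        return "vertical"     # e.g. curb face or step riser
    return "inclined"         # e.g. ramp

# Flat ground, a gentle ramp, and a near-vertical curb face:
labels = [classify_point(n) for n in [(0, 0, 1), (0.2, 0, 0.98), (1, 0, 0.05)]]
```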

  13. Nonuniform multiview color texture mapping of image sequence and three-dimensional model for faded cultural relics with sift feature points

    NASA Astrophysics Data System (ADS)

    Li, Na; Gong, Xingyu; Li, Hongan; Jia, Pengtao

    2018-01-01

For faded relics, such as the Terracotta Army, 2D-3D registration between an optical camera and a point cloud model is an important part of color texture reconstruction and further applications. This paper proposes a nonuniform multiview color texture mapping method for the image sequence and the three-dimensional (3D) point cloud model collected by a Handyscan3D scanner. We first introduce nonuniform multiview calibration, including an explanation of its algorithm principle and an analysis of its advantages. We then establish transformation equations based on SIFT feature points for the multiview image sequence. At the same time, the selection of nonuniform multiview SIFT feature points is introduced in detail. Finally, the solving process of the collinear equations based on multiview perspective projection is given in three steps with a flowchart. In the experiment, this method is applied to the color reconstruction of the kneeling figurine, Tangsancai lady, and general figurine. The results demonstrate that the proposed method provides effective support for the color reconstruction of faded cultural relics and can improve the accuracy of 2D-3D registration between the image sequence and the point cloud model.

  14. Validation of a modification to Performance-Tested Method 070601: Reveal Listeria Test for detection of Listeria spp. in selected foods and selected environmental samples.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). 
An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.

  15. The Motor Subsystem as a Predictor of Success in Young Football Talents: A Person-Oriented Study

    PubMed Central

    Zibung, Marc; Zuber, Claudia; Conzelmann, Achim

    2016-01-01

Motor tests play a key role in talent selection in football. However, individual motor tests only focus on specific areas of a player’s complex performance. To evaluate his or her overall performance during a game, the current study takes a holistic perspective and uses a person-oriented approach. In this approach, several factors are viewed together as a system, whose state is analysed longitudinally. Based on this idea, six motor tests were aggregated to form the Motor Function subsystem. 104 young, top-level, male football talents were tested three times (2011, 2012, 2013; mean age in 2011 = 12.26 years, SD = 0.29), and their overall level of performance was determined one year later (2014). The data were analysed using the LICUR method, a pattern-analytical procedure for person-oriented approaches. At all three measuring points, four patterns could be identified, which remained stable over time. One of the patterns found at the third measuring point identified more subsequently successful players than random selection would. This pattern is characterised by above-average, but not necessarily the best, performance on the tests. Developmental paths along structurally stable patterns that occur more often than predicted by chance indicate that the Motor Function subsystem is a viable means of forecasting in the age range of 12–15 years. Above-average, though not necessarily outstanding, performance on both fitness and technical tests appears to be particularly promising. These findings underscore the view that a holistic perspective may be profitable in talent selection. PMID:27508929

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaunak, S.K.; Soni, B.K.

With research interests shifting away from primarily military or industrial applications to more environmental applications, the area of ocean modelling has become an increasingly popular and exciting area of research. This paper presents a CIPS (Computational Field Simulation) system customized for the solution of oceanographic problems. This system deals primarily with the generation of simple, yet efficient grids for coastal areas. The two primary grid approaches are both structured in methodology. The first approach is a standard approach which is used in such popular grid generation software packages as GENIE++, EAGLEVIEW, and TIGER, where the user defines boundaries via points, lines, or curves, varies the distribution of points along these boundaries, and then creates the interior grid. The second approach is to allow the user to interactively select points on the screen to form the boundary curves and then create the interior grid from these spline curves. The program has been designed with the needs of the ocean modeller in mind so that the modeller can obtain results in a timely yet elegant manner. The modeller performs four basic steps in using the program. First, he selects a region of interest from a popular database. Then, he creates a grid for that region. Next, he sets up boundary and input conditions and runs a circulation model. Finally, the modeller visualizes the output.

  17. PSIDD3: Post-Scan Ultrasonic Data Display System for the Windows-Based PC Including Fuzzy Logic Analysis

    NASA Technical Reports Server (NTRS)

    Lovelace, Jeffrey J.; Cios, Krzysztof J.; Roth, Don J.; Cao, Wei

    2000-01-01

    Post-Scan Interactive Data Display (PSIDD) III is a user-oriented Windows-based system that facilitates the display and comparison of ultrasonic contact data. The system is optimized to compare ultrasonic measurements made at different locations within a material or at different stages of material degradation. PSIDD III provides complete analysis of the primary wave forms in the time and frequency domains along with the calculation of several frequency dependent properties including Phase Velocity and Attenuation Coefficient and several frequency independent properties, like the Cross Correlation Velocity. The system allows image generation on all of the frequency dependent properties at any available frequency (limited by the bandwidth used in the scans) and on any of the frequency independent properties. From ultrasonic contact scans, areas of interest on an image can be studied with regard to underlying raw waveforms and derived ultrasonic properties by simply selecting the point on the image. The system offers various modes of in-depth comparison between scan points. Up to five scan points can be selected for comparative analysis at once. The system was developed with Borland Delphi software (Visual Pascal) and is based on a SQL database. It is ideal for classification of material properties, or location of microstructure variations in materials.
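A frequency-independent property such as the cross-correlation velocity is typically derived from the lag that best aligns two waveforms. A brute-force sketch with synthetic signals; the sampling interval, sample thickness, and pulse shapes are invented, and PSIDD III's actual algorithm is not specified in this record:

```python
def xcorr_delay(ref, sig, dt):
    """Time delay (s) of sig relative to ref, via the lag that maximizes
    their cross-correlation (brute force, signals assumed same length)."""
    n = len(ref)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        s = sum(ref[i] * sig[i + lag]
                for i in range(n) if 0 <= i + lag < n)
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag * dt

# Synthetic rectangular pulse delayed by 5 samples at 100 MHz sampling
dt = 1e-8
pulse = [0.0] * 64
for i in range(10, 20):
    pulse[i] = 1.0
delayed = [0.0] * 64
for i in range(15, 25):
    delayed[i] = 1.0
delay = xcorr_delay(pulse, delayed, dt)       # 5 samples -> 5e-8 s
# Pulse-echo velocity through a hypothetical 1 mm sample (two-way path):
velocity = 2 * 1e-3 / delay
```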

  18. Primary Outcomes for Resuscitation Science Studies

    PubMed Central

    Becker, Lance B.; Aufderheide, Tom P.; Geocadin, Romergryko G.; Callaway, Clifton W.; Lazar, Ronald M.; Donnino, Michael W.; Nadkarni, Vinay M.; Abella, Benjamin S.; Adrie, Christophe; Berg, Robert A.; Merchant, Raina M.; O'Connor, Robert E.; Meltzer, David O.; Holm, Margo B.; Longstreth, William T.; Halperin, Henry R.

    2013-01-01

    Background and Purpose The guidelines presented in this consensus statement are intended to serve researchers, clinicians, reviewers, and regulators in the selection of the most appropriate primary outcome for a clinical trial of cardiac arrest therapies. The American Heart Association guidelines for the treatment of cardiac arrest depend on high-quality clinical trials, which depend on the selection of a meaningful primary outcome. Because this selection process has been the subject of much controversy, a consensus conference was convened with national and international experts, the National Institutes of Health, and the US Food and Drug Administration. Methods The Research Working Group of the American Heart Association Emergency Cardiovascular Care Committee nominated subject leaders, conference attendees, and writing group members on the basis of their expertise in clinical trials and a diverse perspective of cardiovascular and neurological outcomes (see the online-only Data Supplement). Approval was obtained from the Emergency Cardiovascular Care Committee and the American Heart Association Manuscript Oversight Committee. Preconference position papers were circulated for review; the conference was held; and postconference consensus documents were circulated for review and comments were invited from experts, conference attendees, and writing group members. Discussions focused on (1) when after cardiac arrest the measurement time point should occur; (2) what cardiovascular, neurological, and other physiology should be assessed; and (3) the costs associated with various end points. The final document underwent extensive revision and peer review by the Emergency Cardiovascular Care Committee, the American Heart Association Science Advisory and Coordinating Committee, and oversight committees. Results There was consensus that no single primary outcome is appropriate for all studies of cardiac arrest. 
The best outcome measure is the pairing of a time point and physiological condition that will best answer the question under study. Conference participants were asked to assign an outcome to each of 4 hypothetical cases; however, there was not complete agreement on an ideal outcome measure even after extensive discussion and debate. There was general consensus that it is appropriate for earlier studies to enroll fewer patients and to use earlier time points such as return of spontaneous circulation, simple “alive versus dead,” hospital mortality, or a hemodynamic parameter. For larger studies, a longer time point after arrest should be considered because neurological assessments fluctuate for at least 90 days after arrest. For large trials designed to have a major impact on public health policy, longer-term end points such as 90 days coupled with neurocognitive and quality-of-life assessments should be considered, as should the additional costs of this approach. For studies that will require regulatory oversight, early discussions with regulatory agencies are strongly advised. For neurological assessment of post–cardiac arrest patients, researchers may wish to use the Cerebral Performance Categories or modified Rankin Scale for global outcomes. Conclusions Although there is no single recommended outcome measure for trials of cardiac arrest care, the simple Cerebral Performance Categories or modified Rankin Scale after 90 days provides a reasonable outcome parameter for many trials. The lack of an easy-to-administer neurological functional outcome measure that is well validated in post–cardiac arrest patients is a major limitation to the field and should be a high priority for future development. PMID:21969010

  19. Primary outcomes for resuscitation science studies: a consensus statement from the American Heart Association.

    PubMed

    Becker, Lance B; Aufderheide, Tom P; Geocadin, Romergryko G; Callaway, Clifton W; Lazar, Ronald M; Donnino, Michael W; Nadkarni, Vinay M; Abella, Benjamin S; Adrie, Christophe; Berg, Robert A; Merchant, Raina M; O'Connor, Robert E; Meltzer, David O; Holm, Margo B; Longstreth, William T; Halperin, Henry R

    2011-11-08

    The guidelines presented in this consensus statement are intended to serve researchers, clinicians, reviewers, and regulators in the selection of the most appropriate primary outcome for a clinical trial of cardiac arrest therapies. The American Heart Association guidelines for the treatment of cardiac arrest depend on high-quality clinical trials, which depend on the selection of a meaningful primary outcome. Because this selection process has been the subject of much controversy, a consensus conference was convened with national and international experts, the National Institutes of Health, and the US Food and Drug Administration. The Research Working Group of the American Heart Association Emergency Cardiovascular Care Committee nominated subject leaders, conference attendees, and writing group members on the basis of their expertise in clinical trials and a diverse perspective of cardiovascular and neurological outcomes (see the online-only Data Supplement). Approval was obtained from the Emergency Cardiovascular Care Committee and the American Heart Association Manuscript Oversight Committee. Preconference position papers were circulated for review; the conference was held; and postconference consensus documents were circulated for review and comments were invited from experts, conference attendees, and writing group members. Discussions focused on (1) when after cardiac arrest the measurement time point should occur; (2) what cardiovascular, neurological, and other physiology should be assessed; and (3) the costs associated with various end points. The final document underwent extensive revision and peer review by the Emergency Cardiovascular Care Committee, the American Heart Association Science Advisory and Coordinating Committee, and oversight committees. There was consensus that no single primary outcome is appropriate for all studies of cardiac arrest. 
The best outcome measure is the pairing of a time point and physiological condition that will best answer the question under study. Conference participants were asked to assign an outcome to each of 4 hypothetical cases; however, there was not complete agreement on an ideal outcome measure even after extensive discussion and debate. There was general consensus that it is appropriate for earlier studies to enroll fewer patients and to use earlier time points such as return of spontaneous circulation, simple "alive versus dead," hospital mortality, or a hemodynamic parameter. For larger studies, a longer time point after arrest should be considered because neurological assessments fluctuate for at least 90 days after arrest. For large trials designed to have a major impact on public health policy, longer-term end points such as 90 days coupled with neurocognitive and quality-of-life assessments should be considered, as should the additional costs of this approach. For studies that will require regulatory oversight, early discussions with regulatory agencies are strongly advised. For neurological assessment of post-cardiac arrest patients, researchers may wish to use the Cerebral Performance Categories or modified Rankin Scale for global outcomes. Although there is no single recommended outcome measure for trials of cardiac arrest care, the simple Cerebral Performance Categories or modified Rankin Scale after 90 days provides a reasonable outcome parameter for many trials. The lack of an easy-to-administer neurological functional outcome measure that is well validated in post-cardiac arrest patients is a major limitation to the field and should be a high priority for future development.

  20. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    NASA Astrophysics Data System (ADS)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design was applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. 
    The selection of sample point locations was done using ESRI software (ArcGIS) extended by Hawth's Tools and, later, its replacement, the Geospatial Modelling Environment (GME). Overall, 88% of the desired points could actually be reached in the field and were successfully sampled. Our results indicate that the sampled calibration and validation sets are representative of each other and could be successfully used as interpolation data for spatial prediction purposes. With respect to soil textural fractions, for instance, equal multivariate means and variance homogeneity were found for the two datasets, as evidenced by non-significant (P > 0.05) Hotelling T²-test (2.3 with df1 = 3, df2 = 193) and Bartlett's test statistics (6.4 with df = 6). The multivariate prediction of clay, silt and sand content using a neural network residual cokriging approach reached explained variance levels of 56%, 47% and 63%, respectively. Thus, the presented case study is a successful example of considering readily available continuous information on soil-forming factors such as geology and relief as stratifying variables for designing sampling schemes in digital soil mapping projects.
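    The two-stage selection logic described above can be sketched in a few lines. Everything here (class counts, rectangular "polygons", areas) is invented for illustration and stands in for the GIS workflow done in ArcGIS/GME:

```python
import random

random.seed(42)

# Hypothetical strata and polygons: (class_id, area in ha, bounding box).
# A real survey would use GIS geometries; rectangles keep the sketch simple.
polygons = []
for cls in range(1, 31):                 # 30 strata (geology x terrain classes)
    for _ in range(8):                   # candidate polygons per stratum
        x, y = random.random() * 10, random.random() * 10
        polygons.append((cls, random.uniform(0.2, 8.0), (x, y, x + 0.5, y + 0.5)))

def select_points(polygons, per_class=6, min_area_ha=1.0):
    """Stage 1: up to `per_class` random polygons per stratum (>= 1 ha);
    stage 2: one random point inside each selected polygon."""
    by_class = {}
    for cls, area, bbox in polygons:
        if area >= min_area_ha:          # exclusion rule for small polygons
            by_class.setdefault(cls, []).append(bbox)
    points = []
    for cls, boxes in sorted(by_class.items()):
        chosen = random.sample(boxes, min(per_class, len(boxes)))
        for xmin, ymin, xmax, ymax in chosen:
            points.append((cls, random.uniform(xmin, xmax),
                           random.uniform(ymin, ymax)))
    return points

sample = select_points(polygons)
```

A validation set would be drawn the same way with `per_class=1`, mirroring the study's second selection.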

  1. Effect of a Starting Model on the Solution of a Travel Time Seismic Tomography Problem

    NASA Astrophysics Data System (ADS)

    Yanovskaya, T. B.; Medvedev, S. V.; Gobarenko, V. S.

    2018-03-01

    In problems of three-dimensional (3D) travel time seismic tomography where the data are travel times of diving waves and the starting model is a system of plane layers in which the velocity is a function of depth alone, the solution turns out to depend strongly on the selection of the starting model. This is because, in different starting models, the rays between the same points can intersect different layers, which makes the tomography problem fundamentally nonlinear. This effect is demonstrated with a model example. Based on the same example, it is shown how the starting model should be selected to ensure a solution close to the true velocity distribution. The starting model (the average dependence of the seismic velocity on depth) should be determined by the method of successive iterations, at each step of which the horizontal velocity variations in the layers are determined by solving the two-dimensional tomography problem. An example illustrating the application of this technique to P-wave travel time data in the region of the Black Sea basin is presented.

  2. Strategy-aligned fuzzy approach for market segment evaluation and selection: a modular decision support system by dynamic network process (DNP)

    NASA Astrophysics Data System (ADS)

    Mohammadi Nasrabadi, Ali; Hosseinpour, Mohammad Hossein; Ebrahimnejad, Sadoullah

    2013-05-01

    In competitive markets, market segmentation is a critical point of business, and it can be used as a generic strategy. In each segment, strategies lead companies to their targets; thus, segment selection and the application of the appropriate strategies over time are very important for business success. This paper aims to model a strategy-aligned fuzzy approach to market segment evaluation and selection. A modular decision support system (DSS) is developed to select an optimum segment together with its appropriate strategies. The suggested DSS has two main modules. The first is a SPACE matrix, which indicates the risk of each segment and determines the long-term strategies. The second module finds the most preferred segment-strategies over time. The dynamic network process is applied to prioritize segment-strategies according to five competitive force factors. The vagueness in pairwise comparisons has been modeled using fuzzy concepts. The approach is illustrated with a case study of Iran's coffee market. The results show that the success likelihood differs across segments, and choosing the best ones helps companies develop their business with greater confidence. Moreover, the changing priority of strategies over time indicates the importance of long-term planning; this is supported by the case study, in which strategic priorities differ between short- and long-term horizons.

  3. Intraperitoneal pressure and volume of gas injected as effective parameters of the correct position of the Veress needle during creation of pneumoperitoneum.

    PubMed

    Azevedo, João L M C; Azevedo, Otavio C; Sorbello, Albino A; Becker, Otavio M; Hypolito, Otavio; Freire, Dalmer; Miyahira, Susana; Guedes, Afonso; Azevedo, Glicia C

    2009-12-01

    The aim of this work was to establish reliable parameters of the correct position of the Veress needle in the peritoneal cavity during creation of pneumoperitoneum. The Veress needle was inserted into the peritoneal cavity of 100 selected patients, and a carbon-dioxide flow rate of 1.2 L/min and a maximum pressure of 12 mm Hg were established. Intraperitoneal pressure (IP) and the volume of gas injected (VG) were recorded at the beginning of insufflation and every 20 seconds thereafter. Correlations were established for pressure and volume as functions of time. Values of IP and VG were predicted at 1, 2, 3, and 4 minutes of insufflation by applying the following formulas: IP = 2.3083 + 0.0266 × time + 8.3 × 10⁻⁵ × time² − 2.44 × 10⁻⁷ × time³; and VG = 0.813 + 0.0157 × time. A strong correlation was observed between IP and preestablished time points during creation of the pneumoperitoneum, as well as between VG and preestablished time points, with a coefficient of determination of 0.8011 for IP and of 0.9604 for VG. The predicted values were as follows: for IP (mm Hg), 1 minute = 4.15, 2 minutes = 6.27, 3 minutes = 8.36, and 4 minutes = 10.10; for VG (L), 1 minute = 1.12, 2 minutes = 2.07, 3 minutes = 3.01, and 4 minutes = 3.95. Values of IP and VG at given time points during insufflation for creation of the pneumoperitoneum, using the Veress needle, can be effective parameters to determine whether the needle is correctly positioned in the peritoneal cavity.
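    The regression formulas quoted above can be evaluated directly. Assuming time is expressed in seconds (this unit reproduces the quoted IP predictions; the quoted VG values imply a slightly different intercept than the printed VG formula, so only IP is checked here):

```python
def ip_mmhg(t):
    """Predicted intraperitoneal pressure (mm Hg) after t seconds,
    transcribed from the abstract's cubic regression."""
    return 2.3083 + 0.0266 * t + 8.3e-5 * t ** 2 - 2.44e-7 * t ** 3

def vg_litres(t):
    """Predicted volume of gas injected (L) after t seconds,
    transcribed from the abstract's linear regression."""
    return 0.813 + 0.0157 * t

for minutes in (1, 2, 3, 4):
    print(f"{minutes} min: IP = {ip_mmhg(60 * minutes):.2f} mm Hg")
```

The printed IP values match the abstract's predictions (4.15, 6.27, 8.36, and 10.10 mm Hg at 1 to 4 minutes).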

  4. Selective Attention Modulates the Direction of Audio-Visual Temporal Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase in which two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore which of the two possible asynchronies (flash leading or flash lagging) was attended, was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention on audio-then-flash or on flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes. PMID:25004132

  5. Selective attention modulates the direction of audio-visual temporal recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase in which two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore which of the two possible asynchronies (flash leading or flash lagging) was attended, was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention on audio-then-flash or on flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  6. Providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer

    DOEpatents

    Archer, Charles J; Faraj, Ahmad A; Inglett, Todd A; Ratterman, Joseph D

    2013-04-16

    Methods, apparatus, and products are disclosed for providing full point-to-point communications among compute nodes of an operational group in a global combining network of a parallel computer, each compute node connected to each adjacent compute node in the global combining network through a link, that include: receiving a network packet in a compute node, the network packet specifying a destination compute node; selecting, in dependence upon the destination compute node, at least one of the links for the compute node along which to forward the network packet toward the destination compute node; and forwarding the network packet along the selected link to the adjacent compute node connected to the compute node through the selected link.
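    As a rough illustration of the link-selection step (a generic sketch, not the patented method), a packet can be forwarded hop by hop by choosing, at each compute node, the adjacent link that lies on a shortest path to the destination. The 7-node tree topology below is hypothetical:

```python
from collections import deque

# Hypothetical 7-node global combining network (a binary tree); each entry
# lists the links (adjacent nodes) of a compute node.
links = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5, 6],
         3: [1], 4: [1], 5: [2], 6: [2]}

def select_link(node, destination):
    """Choose the adjacent node along which to forward a packet toward
    `destination` (breadth-first search from the current node)."""
    parent = {node: None}
    queue = deque([node])
    while queue:
        cur = queue.popleft()
        if cur == destination:
            # walk back to find the first hop out of `node`
            while parent[cur] != node:
                cur = parent[cur]
            return cur
        for nxt in links[cur]:
            if nxt not in parent:
                parent[nxt] = cur
                queue.append(nxt)
    raise ValueError("destination unreachable")

def route(src, dst):
    path = [src]
    while path[-1] != dst:
        path.append(select_link(path[-1], dst))
    return path

print(route(3, 6))  # hops through the root: [3, 1, 0, 2, 6]
```

In a tree, point-to-point traffic between leaves must transit the common ancestor, which is exactly what the hop sequence shows.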

  7. Removing the Taboo on the Surgical Violation (Cut-Through) of Cancer.

    PubMed

    Robbins, K Thomas; Bradford, Carol R; Rodrigo, Juan P; Suárez, Carlos; de Bree, Remco; Kowalski, Luiz P; Rinaldo, Alessandra; Silver, Carl E; Lund, Valerie J; Nibu, Ken-Ichi; Ferlito, Alfio

    2016-10-01

    The surgical dictum of en bloc resection without violating cancer tissue has been challenged by novel treatments in head and neck cancer. An analysis of treatment outcomes involving piecemeal removal of sinonasal, laryngeal, oropharyngeal, and hypopharyngeal cancer shows that it did not compromise tumor control. The rationale for the evolution toward use of this technique is outlined. While complete resection with clear margins remains a key end point in surgical oncology, we believe it is time to acknowledge that this time-honored dictum of avoiding tumor violation is no longer valid in selected situations.

  8. Prospective memory: A comparative perspective

    PubMed Central

    Crystal, Jonathon D.; Wilson, A. George

    2014-01-01

    Prospective memory consists of forming a representation of a future action, temporarily storing that representation in memory, and retrieving it at a future time point. Here we review the recent development of animal models of prospective memory. We review experiments using rats that focus on the development of time-based and event-based prospective memory. Next, we review a number of prospective-memory approaches that have been used with a variety of non-human primates. Finally, we review selected approaches from the human literature on prospective memory to identify targets for development of animal models of prospective memory. PMID:25101562

  9. Intraoperative optical biopsy for brain tumors using spectro-lifetime properties of intrinsic fluorophores

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; Kittle, David S.; Nie, Zhaojun; Falcone, Christina; Patil, Chirag G.; Chu, Ray M.; Mamelak, Adam N.; Black, Keith L.; Butte, Pramod V.

    2016-04-01

    We have developed and tested a system for real-time intraoperative optical identification and classification of brain tissues using time-resolved fluorescence spectroscopy (TRFS). A supervised learning algorithm based on linear discriminant analysis (LDA), using selected intrinsic fluorescence decay temporal points in six spectral bands, was employed to maximize the statistical separation between training groups. The LDA classifications of in vivo human tissues obtained from TRFS measurements (N = 35) were validated by histopathologic analysis and by neuronavigation correlation to preoperative MRI images. These results demonstrate that TRFS can differentiate between normal cortex, white matter and glioma.
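    The classification step can be illustrated with a minimal two-class Fisher LDA on invented decay-derived features (e.g. an average lifetime and a normalized band intensity); this is a generic sketch, not the authors' trained model:

```python
import random

random.seed(0)

# Invented two-feature samples for two tissue classes.
normal = [(random.gauss(1.2, 0.1), random.gauss(0.4, 0.05)) for _ in range(20)]
tumor = [(random.gauss(1.8, 0.1), random.gauss(0.7, 0.05)) for _ in range(20)]

def mean(rows):
    return [sum(r[j] for r in rows) / len(rows) for j in range(2)]

def pooled_cov(a, b, ma, mb):
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((a, ma), (b, mb)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    n = len(a) + len(b) - 2
    return [[s[i][j] / n for j in range(2)] for i in range(2)]

m0, m1 = mean(normal), mean(tumor)
c = pooled_cov(normal, tumor, m0, m1)
det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
inv = [[c[1][1] / det, -c[0][1] / det], [-c[1][0] / det, c[0][0] / det]]
dm = [m1[0] - m0[0], m1[1] - m0[1]]
# Fisher discriminant direction w = Sigma^-1 (m1 - m0)
w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
     inv[1][0] * dm[0] + inv[1][1] * dm[1]]
mid = [(m0[0] + m1[0]) / 2, (m0[1] + m1[1]) / 2]

def classify(x):
    """1 = tumor-like, 0 = normal-like (sign of the discriminant score)."""
    score = w[0] * (x[0] - mid[0]) + w[1] * (x[1] - mid[1])
    return 1 if score > 0 else 0

acc = (sum(classify(x) == 0 for x in normal) +
       sum(classify(x) == 1 for x in tumor)) / 40
```

On this well-separated toy data the discriminant classifies essentially all training samples correctly; the real system faces overlapping tissue classes and many more features.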

  10. Combined Quarterly Technical Report Number 23. SATNET Development and Operation. Pluribus Satellite IMP Development. Remote Site Maintenance. Internet Operations and Maintenance. Mobile Access Terminal Network. TCP for the HP3000. TCP-TAC. TCP for VAX-UNIX

    DTIC Science & Technology

    1981-11-01

    evaluate and test these ideas in the Internet research context. 4. Field third-generation gateways. At this point in time, we purposely avoid selecting a...plan to cover the period from now until the time when a gateway system can be fielded which implements the results of the current work in the Internet ... research community. The current gateway system is inadequate from both a functionality and a performance standpoint, and therefore the transition

  11. Robust modular product family design

    NASA Astrophysics Data System (ADS)

    Jiang, Lan; Allada, Venkat

    2001-10-01

    This paper presents a modified Taguchi methodology to improve the robustness of modular product families against changes in customer requirements. The general research questions posed in this paper are: (1) How can a product family (PF) be designed effectively so that it is robust enough to accommodate future customer requirements? (2) How far into the future should designers look to design a robust product family? An example of a simplified vacuum product family is used to illustrate our methodology. In the example, customer requirements are selected as signal factors; future changes of customer requirements are selected as noise factors; an index called quality characteristic (QC) is set to evaluate the vacuum product family; and the module instance matrix (M) is selected as the control factor. Initially, a relation between the objective function (QC) and the control factor (M) is established, and then the feasible M space is systematically explored using a simplex method to determine the optimum M and the corresponding QC values. Next, various noise levels at different time points are introduced into the system. For each noise level, the optimal values of M and QC are computed and plotted on a QC-chart. The tunable time period of the control factor (the module matrix, M) is computed using the QC-chart. The tunable time period represents the maximum time for which a given control factor can be used to satisfy current and future customer needs. Finally, a robustness index is used to break up the tunable time period into suitable time periods that designers should consider while designing product families.

  12. Recursive regularization for inferring gene networks from time-course gene expression profiles

    PubMed Central

    Shimamura, Teppei; Imoto, Seiya; Yamaguchi, Rui; Fujita, André; Nagasaki, Masao; Miyano, Satoru

    2009-01-01

    Background Inferring gene networks from time-course microarray experiments with the vector autoregressive (VAR) model is the process of identifying functional associations between genes through multivariate time series. This problem can be cast as a variable selection problem in statistics. One of the promising methods for variable selection is the elastic net proposed by Zou and Hastie (2005). However, VAR modeling with the elastic net increases the number of true positives, but it also increases the number of false positives. Results By incorporating the relative importance of the VAR coefficients into the elastic net, we propose a new class of regularization, called the recursive elastic net, to increase the capability of the elastic net and estimate gene networks based on the VAR model. The recursive elastic net reduces the number of false positives gradually by updating the importance. Numerical simulations and comparisons demonstrate that the proposed method succeeds in reducing the number of false positives drastically while keeping a high number of true positives in the network inference, and achieves a true discovery rate (the proportion of true positives among the selected edges) two or more times higher than the competing methods, even when the number of time points is small. We also compared our method with various reverse-engineering algorithms on experimental data of MCF-7 breast cancer cells stimulated with two ErbB ligands, EGF and HRG. Conclusion The recursive elastic net is a powerful tool for inferring gene networks from time-course gene expression profiles. PMID:19386091
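    The recursive reweighting idea can be sketched with plain coordinate descent on a toy regression: fit an elastic net, turn the fitted coefficients into importance weights, and refit so that the penalty bears down harder on unimportant coefficients. The data, penalty values, and weight-update rule below are illustrative assumptions, not the authors' exact algorithm:

```python
import random

random.seed(1)

def soft(z, g):
    """Soft-thresholding operator."""
    return (z - g) if z > g else (z + g) if z < -g else 0.0

def elastic_net(X, y, lam1, lam2, weights, iters=200):
    """Coordinate descent for
    1/(2n)||y - Xb||^2 + lam1 * sum_j w_j |b_j| + lam2/2 ||b||^2."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k]
                                            for k in range(p) if k != j))
                      for i in range(n)) / n
            zjj = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft(rho, lam1 * weights[j]) / (zjj + lam2)
    return b

# Synthetic "time-course" design: 60 samples, 6 candidate regulators,
# only the first two truly drive the response.
n, p = 60, 6
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [2.0 * X[i][0] - 1.5 * X[i][1] + random.gauss(0, 0.1) for i in range(n)]

b1 = elastic_net(X, y, lam1=0.1, lam2=0.01, weights=[1.0] * p)  # plain fit
w = [1.0 / (abs(bj) + 1e-3) for bj in b1]      # importance -> penalty weight
wmax = max(w)
w = [wj / wmax for wj in w]
b2 = elastic_net(X, y, lam1=0.1, lam2=0.01, weights=w)          # recursive step
support = [j for j, bj in enumerate(b2) if abs(bj) > 1e-6]
print(support)
```

The reweighted fit keeps the true regulators nearly unpenalized while zeroing the spurious ones, which is the mechanism the paper exploits to cut false positives.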

  13. Influences of rolling method on deformation force in cold roll-beating forming process

    NASA Astrophysics Data System (ADS)

    Su, Yongxiang; Cui, Fengkui; Liang, Xiaoming; Li, Yan

    2018-03-01

    In this study, the gear rack was selected as the research object to study the influence of the rolling method on the deformation force. By means of finite element simulation of cold roll-beating forming, the variation of the radial and tangential deformation force was analysed under different rolling methods, both for the completely formed racks and for a single roll during the steady state. The results show that, for up-beating and down-beating, the radial single-point average force is similar, while the gap in the tangential single-point average force is relatively large. Additionally, the tangential force during up-beating is large, and its direction is opposite to that during down-beating. With up-beating, the deformation force loads quickly and unloads slowly; correspondingly, with down-beating, the deformation force loads slowly and unloads quickly.

  14. Assessment of Li/SOCL2 Battery Technology; Reserve, Thin-Cell Design. Volume 3

    DTIC Science & Technology

    1990-06-01

    power density and efficiency of an operating electrochemical system. The method is general - the examples to illustrate the selected points pertain to... System: Design, Manufacturing and QC Considerations), S. Szpak, P. A. Mosier-Boss, and J. J. Smith, 34th International Power Sources Symposium, Cherry...(i) the computer time required to evaluate the integral in Eqn. III, and (ii) the lack of generality in the attainable lineshapes. However, since this

  15. How Many Grid Points are Required for Time Accurate Simulations Scheme Selection and Scale-Discriminant Stabilization

    DTIC Science & Technology

    2015-11-24

    Spatial concerns: how well are gradients captured? (resolution requirement). Spatial/temporal concerns: dispersion and dissipation error. Distribution is unlimited. [Slide figures: gradient capture vs. resolution for a single mode (f(x) = sin(x) with x ∈ [0, 2π]) and for multiple modes, showing FFTs, solution/derivative plots, and convergence of the central-difference schemes CD02, CD04 and CD06.]

  16. Bayesian Framework Approach for Prognostic Studies in Electrolytic Capacitor under Thermal Overstress Conditions

    DTIC Science & Technology

    2012-09-01

    make end-of-life (EOL) and remaining useful life (RUL) estimations. Model-based prognostics approaches perform these tasks with the help of first... [Slide figure: degradation modeling, parameter estimation and prediction under thermal/electrical stress, combining experimental data with a state-space model to estimate RUL and EOL.] ...distribution at a given single time point kP, and use this for multi-step predictions to EOL. There are several methods which exist for selecting the sigma

  17. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm

    PubMed Central

    Yan, Li; Xie, Hong; Chen, Changjun

    2017-01-01

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%. PMID:28850100
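    The GA search itself can be sketched for the simplest case: a translation-only 2-D alignment, with an invented nearest-neighbour fitness standing in for the paper's Normalized Sum of Matching Scores. Point clouds, bounds, and GA settings are all illustrative assumptions:

```python
import random

random.seed(7)

# Hypothetical reference scan: 25 points; the "moving" cloud is the same set
# shifted by a translation the GA must recover.
ref = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(25)]
true_shift = (2.5, -1.0)
moving = [(x + true_shift[0], y + true_shift[1]) for x, y in ref]

def fitness(shift):
    """Negative mean squared nearest-neighbour distance after undoing
    `shift` (higher is better)."""
    tx, ty = shift
    total = 0.0
    for x, y in moving:
        px, py = x - tx, y - ty
        total += min((px - rx) ** 2 + (py - ry) ** 2 for rx, ry in ref)
    return -total / len(moving)

def genetic_align(generations=50, sigma=0.2):
    # Initial population: a coarse grid of candidate translations,
    # playing the role of the constrained search space in the paper.
    pop = [(gx - 4.0, gy - 4.0) for gx in range(9) for gy in range(9)]
    n = len(pop)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:n // 3]                     # selection: fittest third
        children = list(elite)
        while len(children) < n:
            (ax, ay), (bx, by) = random.sample(elite, 2)
            children.append(((ax + bx) / 2 + random.gauss(0, sigma),  # crossover
                             (ay + by) / 2 + random.gauss(0, sigma))) # + mutation
        pop = children
    return max(pop, key=fitness)

best = genetic_align()
```

A full 3-D rigid registration adds rotation genes and a matching-score fitness, but the selection/crossover/mutation loop has the same shape; the ICP hybrid mentioned above would refine `best` locally instead of mutating further.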

  18. Automatic Registration of TLS-TLS and TLS-MLS Point Clouds Using a Genetic Algorithm.

    PubMed

    Yan, Li; Tan, Junxiang; Liu, Hua; Xie, Hong; Chen, Changjun

    2017-08-29

    Registration of point clouds is a fundamental issue in Light Detection and Ranging (LiDAR) remote sensing because point clouds scanned from multiple scan stations or by different platforms need to be transformed to a uniform coordinate reference frame. This paper proposes an efficient registration method based on genetic algorithm (GA) for automatic alignment of two terrestrial LiDAR scanning (TLS) point clouds (TLS-TLS point clouds) and alignment between TLS and mobile LiDAR scanning (MLS) point clouds (TLS-MLS point clouds). The scanning station position acquired by the TLS built-in GPS and the quasi-horizontal orientation of the LiDAR sensor in data acquisition are used as constraints to narrow the search space in GA. A new fitness function to evaluate the solutions for GA, named as Normalized Sum of Matching Scores, is proposed for accurate registration. Our method is divided into five steps: selection of matching points, initialization of population, transformation of matching points, calculation of fitness values, and genetic operation. The method is verified using a TLS-TLS data set and a TLS-MLS data set. The experimental results indicate that the RMSE of registration of TLS-TLS point clouds is 3~5 mm, and that of TLS-MLS point clouds is 2~4 cm. The registration integrating the existing well-known ICP with GA is further proposed to accelerate the optimization and its optimizing time decreases by about 50%.

  19. Breath analysis using external cavity diode lasers: a review

    NASA Astrophysics Data System (ADS)

    Bayrakli, Ismail

    2017-04-01

    Most techniques that are used for diagnosis and therapy of diseases are invasive. Reliable noninvasive methods are always needed for the comfort of patients. Owing to its noninvasiveness, ease of use, and easy repeatability, exhaled breath analysis is a very good candidate for this purpose. Breath analysis can be performed using different techniques, such as gas chromatography mass spectrometry (MS), proton transfer reaction-MS, and selected ion flow tube-MS. However, these devices are bulky and require complicated procedures for sample collection and preconcentration. Therefore, they are not practical for routine applications in hospitals. Laser-based techniques, with their small size, robustness, low cost, short response times, accuracy, precision, high sensitivity and selectivity, low detection limits, and capability for real-time, point-of-care detection, have great potential for routine use in hospitals. In this review paper, the recent advances in the fields of external cavity lasers and breath analysis for detection of diseases are presented.

  20. Simulation of a turbofan engine for evaluation of multivariable optimal control concepts [computerized simulation]

    NASA Technical Reports Server (NTRS)

    Seldner, K.

    1976-01-01

    The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multivariable optimal controls research program using linear quadratic regulator (LQR) theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed, and selected results for high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which can then control the engine simulation over the desired flight envelope.

  1. Selection of the optimum font type and size interface for on screen continuous reading by young adults: an ergonomic approach.

    PubMed

    Banerjee, Jayeeta; Bhattacharyya, Moushum

    2011-12-01

    There is a rapid shift of media: from printed paper to computer screens. This transition is modifying the process of how we read and understand text. The efficiency of reading depends on how ergonomically the visual information is presented. Font type and size characteristics have been shown to affect reading. A detailed investigation of the effect of font type and size on reading on computer screens has been carried out using subjective, objective and physiological evaluation methods on young adults. A group of young participants volunteered for this study. Two types of fonts were used: serif fonts (Times New Roman, Georgia, Courier New) and sans serif fonts (Verdana, Arial, Tahoma). All fonts were presented in 10, 12 and 14 point sizes. This study used a 6 × 3 (font type × size) design matrix. Participants read 18 passages of approximately the same length and reading level on a computer monitor. Reading time, ranking and overall mental workload were measured. Eye movements were recorded by a binocular eye movement recorder. Reading time was minimum for Courier New 14 point. The participants' ranking was highest and mental workload was least for Verdana 14 point. The pupil diameter, fixation duration and gaze duration were least for Courier New 14 point. The present study recommends using 14 point sized fonts for reading on computer screens. Courier New is recommended for fast reading, while for on-screen presentation Verdana is recommended. The outcome of this study will serve as a guideline for PC users, software developers, web page designers and the computer industry as a whole.

  2. X-ray bright points and He I lambda 10830 dark points

    NASA Technical Reports Server (NTRS)

    Golub, L.; Harvey, K. L.; Herant, M.; Webb, D. F.

    1989-01-01

    Using near-simultaneous full-disk solar X-ray images and He I lambda 10830 spectroheliograms from three recent rocket flights, dark points identified on the He I maps were compared with X-ray bright points identified on the X-ray images. It was found that for the largest and most obvious features there is a strong correlation: most He I dark points correspond to X-ray bright points. However, about 2/3 of the X-ray bright points were not identified on the basis of the helium data alone. Once an X-ray feature is identified it is almost always possible to find an underlying dark patch of enhanced He I absorption which, however, would not a priori have been selected as a dark point. Therefore, the He I dark points, using current selection criteria, cannot be used as a one-to-one proxy for the X-ray data. He I dark points do, however, identify the locations of the stronger X-ray bright points.

  4. Improved operative efficiency using a real-time MRI-guided stereotactic platform for laser amygdalohippocampotomy.

    PubMed

    Ho, Allen L; Sussman, Eric S; Pendharkar, Arjun V; Le, Scheherazade; Mantovani, Alessandra; Keebaugh, Alaine C; Drover, David R; Grant, Gerald A; Wintermark, Max; Halpern, Casey H

    2018-04-01

    OBJECTIVE MR-guided laser interstitial thermal therapy (MRgLITT) is a minimally invasive method for thermal destruction of benign or malignant tissue that has been used for selective amygdalohippocampal ablation for the treatment of temporal lobe epilepsy. The authors report their initial experience adopting a real-time MRI-guided stereotactic platform that allows for completion of the entire procedure in the MRI suite. METHODS Between October 2014 and May 2016, 17 patients with mesial temporal sclerosis were selected by a multidisciplinary epilepsy board to undergo a selective amygdalohippocampal ablation for temporal lobe epilepsy using MRgLITT. The first 9 patients underwent standard laser ablation in 2 phases (operating room [OR] and MRI suite), whereas the next 8 patients underwent laser ablation entirely in the MRI suite with the ClearPoint platform. A checklist specific to real-time MRI-guided laser amygdalohippocampal ablation was developed and used for each case. For both cohorts, clinical and operative information, including average case times and accuracy data, was collected and analyzed. RESULTS There was a learning curve associated with using this real-time MRI-guided system. However, operative times decreased in a linear fashion, as did total anesthesia time. In fact, the total mean patient procedure time was less in the MRI cohort (362.8 ± 86.6 minutes) than in the OR cohort (456.9 ± 80.7 minutes). The mean anesthesia time was significantly shorter in the MRI cohort (327.2 ± 79.9 minutes) than in the OR cohort (435.8 ± 78.4 minutes, p = 0.02). CONCLUSIONS The real-time MRI platform for MRgLITT can be adopted in an expedient manner. Completion of MRgLITT entirely in the MRI suite may lead to significant advantages in procedural times.
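    The anesthesia-time comparison can be checked approximately from the reported summary statistics. The abstract does not state which test was used, so a Welch two-sample t-test is one plausible reconstruction; the result lands in the same neighborhood as the reported p = 0.02.

    ```python
    from scipy.stats import ttest_ind_from_stats

    # Reported anesthesia times (mean ± SD, minutes) and cohort sizes.
    t_stat, p_value = ttest_ind_from_stats(
        mean1=327.2, std1=79.9, nobs1=8,   # MRI-suite cohort (n = 8)
        mean2=435.8, std2=78.4, nobs2=9,   # OR cohort (n = 9)
        equal_var=False,                   # Welch's correction
    )
    print(round(p_value, 3))
    ```

    The negative t statistic reflects the shorter times in the MRI cohort; any difference from the published p value would come from the authors' exact test choice.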

  5. SeleCon: Scalable IoT Device Selection and Control Using Hand Gestures.

    PubMed

    Alanwar, Amr; Alzantot, Moustafa; Ho, Bo-Jhang; Martin, Paul; Srivastava, Mani

    2017-04-01

    Although different interaction modalities have been proposed in the field of human-computer interface (HCI), only a few of these techniques could reach the end users because of scalability and usability issues. Given the popularity and the growing number of IoT devices, selecting one out of many devices becomes a hurdle in a typical smarthome environment. Therefore, an easy-to-learn, scalable, and non-intrusive interaction modality has to be explored. In this paper, we propose a pointing approach to interact with devices, as pointing is arguably a natural way for device selection. We introduce SeleCon for device selection and control which uses an ultra-wideband (UWB) equipped smartwatch. To interact with a device in our system, people can point to the device to select it then draw a hand gesture in the air to specify a control action. To this end, SeleCon employs inertial sensors for pointing gesture detection and a UWB transceiver for identifying the selected device from ranging measurements. Furthermore, SeleCon supports an alphabet of gestures that can be used for controlling the selected devices. We performed our experiment in a 9 m-by-10 m lab space with eight deployed devices. The results demonstrate that SeleCon can achieve 84.5% accuracy for device selection and 97% accuracy for hand gesture recognition. We also show that SeleCon is power efficient enough to sustain daily use by turning off the UWB transceiver when a user's wrist is stationary.

  7. Real-Time GNSS Positioning with JPL's new GIPSYx Software

    NASA Astrophysics Data System (ADS)

    Bar-Sever, Y. E.

    2016-12-01

    The JPL Global Differential GPS (GDGPS) System is now producing real-time orbit and clock solutions for GPS, GLONASS, BeiDou, and Galileo. The operations are based on JPL's next generation geodetic analysis and data processing software, GIPSYx (also known as RTGx). We will examine the impact of the nascent GNSS constellations on real-time kinematic positioning for earthquake monitoring, and assess the marginal benefits from each constellation. We will discuss the options for signal selection, inter-signal bias modeling, and estimation strategies in the context of real-time point positioning. We will provide a brief overview of the key features and attributes of GIPSYx. Finally, we will describe the current natural hazard monitoring services from the GDGPS System.

  8. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, as implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a photo camera, recording with a video camera not only yields a huge time advantage; the method also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each 15-frame interval. The sharpness was estimated using a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions, and finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created; difference calculations between the pre and post models then allow visualization of the changes (erosion and accumulation areas) and quantification of erosion volumes. The calculated volumes are expressed in the models' own spatial units and must be converted to real-world values via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low surface contrast, excessive motion blur), the sharpness-based selection yields many more matching features. The point densities of the 3D models are thereby increased, which improves the difference calculations.
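    The sharpest-frame selection step can be sketched as follows. The abstract only says the metric is derivative-based, so the mean gradient magnitude used here is an assumption; the synthetic frames are illustrative stand-ins for video frames.

    ```python
    import numpy as np

    def sharpness(gray):
        """Derivative-based sharpness: mean gradient magnitude of a grayscale frame."""
        gy, gx = np.gradient(gray.astype(float))
        return np.mean(np.hypot(gx, gy))

    def select_sharpest(frames, window=15):
        """Keep the sharpest frame out of every `window` consecutive frames."""
        selected = []
        for start in range(0, len(frames), window):
            chunk = frames[start:start + window]
            selected.append(max(chunk, key=sharpness))
        return selected

    # Demo with synthetic frames: one crisp checkerboard among low-contrast copies.
    crisp = np.indices((32, 32)).sum(axis=0) % 2 * 255.0
    blurry = crisp * 0.2 + 100.0          # lower contrast -> smaller gradients
    frames = [blurry.copy() for _ in range(15)]
    frames[7] = crisp
    best = select_sharpest(frames)[0]
    print(best is frames[7])
    ```

    The selected frames would then be fed to VisualSfM for feature matching and reconstruction.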

  9. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (∼90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
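    A minimal sketch of a removal-model estimator for the 3-, 2- and 5-minute intervals is shown below. It assumes a constant per-minute detection rate, which is a simplification for illustration and not necessarily the authors' exact formulation; the counts are invented.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def removal_mle(counts):
        """Estimate a per-minute detection rate c and the overall detectability
        from first-detection counts in the 0-3, 3-5 and 5-10 minute intervals."""
        edges = np.array([0.0, 3.0, 5.0, 10.0])   # interval boundaries (minutes)
        counts = np.asarray(counts, dtype=float)

        def nll(c):
            surv = np.exp(-c * edges)             # P(not yet detected) at each edge
            p_int = surv[:-1] - surv[1:]          # P(first detected in interval i)
            p_tot = 1.0 - surv[-1]                # P(detected at all within 10 min)
            # Negative log-likelihood of the conditional multinomial.
            return -np.sum(counts * np.log(p_int / p_tot))

        res = minimize_scalar(nll, bounds=(1e-6, 5.0), method="bounded")
        c_hat = res.x
        return c_hat, 1.0 - np.exp(-10.0 * c_hat)

    # Example: 60, 25 and 15 birds first detected in the three intervals.
    c_hat, detectability = removal_mle([60, 25, 15])
    print(round(detectability, 2))
    ```

    Front-loaded counts (most birds detected in the first 3 minutes) imply a high singing rate and hence a detectability close to 1; flat counts would imply low detectability.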

  10. Evolution of Randomized Trials in Advanced/Metastatic Soft Tissue Sarcoma: End Point Selection, Surrogacy, and Quality of Reporting.

    PubMed

    Zer, Alona; Prince, Rebecca M; Amir, Eitan; Abdul Razak, Albiruni

    2016-05-01

    Randomized controlled trials (RCTs) in soft tissue sarcoma (STS) have used varying end points. The surrogacy of intermediate end points, such as progression-free survival (PFS), response rate (RR), and 3-month and 6-month PFS (3moPFS and 6moPFS) with overall survival (OS), remains unknown. The quality of efficacy and toxicity reporting in these studies is also uncertain. A systematic review of systemic therapy RCTs in STS was performed. Surrogacy between intermediate end points and OS was explored using weighted linear regression for the hazard ratio for OS with the hazard ratio for PFS or the odds ratio for RR, 3moPFS, and 6moPFS. The quality of reporting for efficacy and toxicity was also evaluated. Fifty-two RCTs published between 1974 and 2014, comprising 9,762 patients, met the inclusion criteria. There were significant correlations between PFS and OS (R = 0.61) and between RR and OS (R = 0.51). Conversely, there were nonsignificant correlations between 3moPFS and 6moPFS with OS. A reduction in the use of RR as the primary end point was observed over time, favoring time-based events (P for trend = .02). In 14% of RCTs, the primary end point was not met, but the study was reported as being positive. Toxicity was comprehensively reported in 47% of RCTs, whereas 14% inadequately reported toxicity. In advanced STS, PFS and RR seem to be appropriate surrogates for OS. There is poor correlation between OS and both 3moPFS and 6moPFS. As such, caution is urged with the use of these as primary end points in randomized STS trials. The quality of toxicity reporting and interpretation of results is suboptimal. © 2016 by American Society of Clinical Oncology.
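    The surrogacy analysis described above (weighted linear relation between the log hazard ratio for OS and that for PFS) can be illustrated with a weighted correlation on synthetic trial-level data; all values below are invented for demonstration.

    ```python
    import numpy as np

    def weighted_corr(x, y, w):
        """Weighted Pearson correlation, one way to summarize surrogacy strength."""
        w = np.asarray(w, float) / np.sum(w)
        mx, my = np.sum(w * x), np.sum(w * y)
        cov = np.sum(w * (x - mx) * (y - my))
        sx = np.sqrt(np.sum(w * (x - mx) ** 2))
        sy = np.sqrt(np.sum(w * (y - my) ** 2))
        return cov / (sx * sy)

    # Illustrative (not real) trial-level log hazard ratios, weighted by trial size.
    rng = np.random.default_rng(1)
    log_hr_pfs = rng.normal(-0.2, 0.15, size=20)
    log_hr_os = 0.6 * log_hr_pfs + rng.normal(0.0, 0.05, size=20)
    weights = rng.integers(50, 500, size=20)   # patients per trial

    r = weighted_corr(log_hr_pfs, log_hr_os, weights)
    print(round(r, 2))
    ```

    With tight noise around a linear relation the weighted correlation is high, mirroring how a surrogate end point such as PFS would track OS across trials.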

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Jun-Ho; Ahn, Soon Kil; YOUAI Co., Ltd., Suwon-Si, Gyeonggi-Do 443-766

    Highlights: • We recently discovered a potent and selective B-Raf inhibitor, UI-152. • UI-152 displayed a selective cytotoxicity toward v-Ha-ras transformed cells. • UI-152-induced growth inhibition was largely mediated by autophagy. • UI-152 induced paradoxical activation of Raf-1. -- Abstract: In human cancers, B-Raf is the most frequently mutated protein kinase in the MAPK signaling cascade, making it an important therapeutic target. We recently discovered a potent and selective B-Raf inhibitor, UI-152, by using a structure-based drug design strategy. In this study, we examined whether B-Raf inhibition by UI-152 may be an effective therapeutic strategy for eliminating cancer cells transformed with v-Ha-ras (Ras-NIH 3T3). UI-152 displayed selective cytotoxicity toward Ras-NIH 3T3 cells while having little to no effect on non-transformed NIH 3T3 cells. We found that treatment with UI-152 markedly increased autophagy and, to a lesser extent, apoptosis. However, inhibition of autophagy by addition of 3-MA failed to reverse the cytotoxic effects of UI-152 on Ras-NIH 3T3 cells, demonstrating that apoptosis and autophagy can act as cooperative partners to induce growth inhibition in Ras-NIH 3T3 cells treated with UI-152. Most interestingly, cell responses to UI-152 appear to be paradoxical. Here, we showed that although UI-152 inhibited ERK, it induced B-Raf binding to Raf-1 as well as Raf-1 activation. This paradoxical activation of Raf-1 by UI-152 is likely to be coupled with the inhibition of the mTOR pathway, an intracellular signaling pathway involved in autophagy. We also showed for the first time that, in multi-drug resistant cells, the combination of UI-152 with verapamil significantly decreased cell proliferation and increased autophagy. Thus, our findings suggest that the inhibition of autophagy, in combination with UI-152, offers a more effective therapeutic strategy for v-Ha-ras-transformed cells harboring wild-type B-Raf.

  12. Height changes along selected lines through the Death Valley region, California and Nevada, 1905-1984

    USGS Publications Warehouse

    Castle, Robert O.; Gilmore, Thomas D.; Walker, James P.; Castle, Susan A.

    2005-01-01

    Comparisons among repeated levelings along selected lines through the Death Valley region of California and adjacent parts of Nevada have disclosed surprisingly large vertical displacements. The vertical control data in this lightly populated area is sparse; moreover, as much as a third of the recovered data is so thoroughly contaminated by systematic error and survey blunders that no attempt was made to correct these data and they were simply discarded. In spite of these limitations, generally episodic, commonly large vertical displacements are disclosed along a number of lines. Displacements in excess of 0.4 m, with respect to our selected control point at Beatty, Nevada, and differential displacements of about 0.7 m apparently occurred during the earlier years of the 20th century and continued episodically through at least 1943. While this area contains abundant evidence of continuing tectonic activity through latest Quaternary time, it is virtually devoid of historic seismicity. We have detected no clear connection between the described vertical displacements and fault zones reportedly active during Holocene time, although we sense some association with several more broadly defined tectonic features.

  13. CSF Based Non-Ground Points Extraction from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Shen, A.; Zhang, W.; Shi, H.

    2017-09-01

    Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in many fields such as medicine, forestry and remote sensing. In this algorithm, there are two core problems. One is the selection of seed points, the other is the setting of the growth constraints, of which the selection of the seed points is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract the non-ground seed points effectively. The experiments have shown that this method can obtain a good group of seed points compared with the traditional methods. It is a new attempt to extract seed points.
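    The region-growing step that such seed points feed into can be sketched compactly. The height-difference criterion below is a simplified stand-in for the normal/curvature constraints typically used, and the CSF seed-extraction step itself is not implemented here; the toy cloud is invented.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def region_grow(points, seeds, radius=1.0, dz=0.2):
        """Grow regions from seed indices: a neighbor joins a region if it lies
        within `radius` and differs in height by less than `dz`."""
        tree = cKDTree(points)
        labels = np.full(len(points), -1, dtype=int)   # -1 = unassigned
        for region, seed in enumerate(seeds):
            if labels[seed] != -1:
                continue
            stack = [seed]
            labels[seed] = region
            while stack:
                i = stack.pop()
                for j in tree.query_ball_point(points[i], radius):
                    if labels[j] == -1 and abs(points[j, 2] - points[i, 2]) < dz:
                        labels[j] = region
                        stack.append(j)
        return labels

    # Toy cloud: a flat strip at z = 0 and a raised strip at z = 1.
    ground = np.column_stack([np.arange(10), np.zeros(10), np.zeros(10)])
    raised = np.column_stack([np.arange(10), np.ones(10), np.ones(10)])
    pts = np.vstack([ground, raised]).astype(float)
    labels = region_grow(pts, seeds=[0, 10], radius=1.5, dz=0.2)
    print(labels[:10], labels[10:])
    ```

    The two seeds each grow into their own strip; the height constraint keeps the raised points out of the ground region even though they are within the search radius.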

  14. Non-inferiority cancer clinical trials: scope and purposes underlying their design.

    PubMed

    Riechelmann, R P; Alex, A; Cruz, L; Bariani, G M; Hoff, P M

    2013-07-01

    Non-inferiority clinical trials (NIFCTs) aim to demonstrate that the experimental therapy has advantages over the standard of care, with an acceptable loss of efficacy. We evaluated the purposes underlying the selection of a non-inferiority design in oncology and the size of the non-inferiority margins (NIFm's). All NIFCTs of cancer-directed therapies and supportive care agents published in a 10-year period were eligible. Two investigators extracted the data and independently classified the trials by their purpose in choosing a non-inferiority design. Seventy-five trials were included: 43% received funds from industry, overall survival was the most common primary end point, and 73% reported positive results. The most frequent purposes underlying the selection of a non-inferiority design were to test more conveniently administered schedules and/or less toxic treatments. In 13 (17%) trials, a clear purpose was not identified. Among the trials that reported a pre-specified NIFm, the median value was 12.5% (range 4%-25%) for trials with binary primary end points and a hazard ratio of 1.25 (range 1.10-1.50) for trials that used time-to-event primary outcomes. Cancer NIFCTs harbor serious methodological and ethical issues. Many use large NIFm's and nearly one-fifth did not state a clear purpose for selecting a non-inferiority design.

  15. In-line inspection of unpiggable buried live gas pipes using circumferential EMAT guided waves

    NASA Astrophysics Data System (ADS)

    Ren, Baiyang; Xin, Junjun

    2018-04-01

    Unpiggable buried gas pipes need to be inspected to ensure their structural integrity and safe operation. The CIRRIS XI™ robot, developed and operated by ULC Robotics, conducts in-line nondestructive inspection of live gas pipes. With the no-blow launching system, the inspection operation has reduced disruption to the public and, by eliminating the need to dig trenches, has minimized the site footprint. This provides a highly time- and cost-effective solution for gas pipe maintenance. However, the current sensor on the robot performs a point-by-point measurement of the pipe wall thickness, which cannot cover the whole volume of the pipe in a reasonable timeframe. An ultrasonic guided wave technique is discussed to improve the volume coverage as well as the scanning speed. Circumferential guided waves are employed to perform axial scanning. Mode selection is discussed in terms of sensitivity to different defects and defect characterization capability. To assist with the mode selection, finite element analysis is performed to evaluate the wave-defect interaction and to identify potential defect features. Pulse-echo and through-transmission modes are evaluated and compared for their pros and cons in axial scanning. Experiments are also conducted to verify the mode selection and to detect and characterize artificial defects introduced into pipe samples.

  16. Photocurable Polymers for Ion Selective Field Effect Transistors. 20 Years of Applications

    PubMed Central

    Abramova, Natalia; Bratov, Andrei

    2009-01-01

    Application of photocurable polymers for encapsulation of ion selective field effect transistors (ISFETs) and for membrane formation in chemically sensitive field effect transistors (ChemFETs) during the last 20 years is discussed. From a technological point of view these materials are quite interesting because they allow the use of standard photolithographic processes, which significantly reduces the time required for sensor encapsulation and membrane deposition and the amount of manual work involved, all of which are important for sensor mass production. Problems associated with the application of this kind of polymer in sensors are analysed and estimates of future trends in this field of research are presented. PMID:22399988

  17. Toxic industrial chemicals and chemical weapons: exposure, identification, and management by syndrome.

    PubMed

    Tomassoni, Anthony J; French, Robert N E; Walter, Frank G

    2015-02-01

    Toxidromes aid emergency care providers in the context of the patient presenting with suspected poisoning, unexplained altered mental status, unknown hazardous materials or chemical weapons exposure, or the unknown overdose. The ability to capture an adequate chemical exposure history and to recognize toxidromes may reduce dependence on laboratory tests, speed time to delivery of specific antidote therapy, and improve selection of supportive care practices tailored to the etiologic agent. This article highlights elements of the exposure history and presents selected toxidromes that may be caused by toxic industrial chemicals and chemical weapons. Specific antidotes for toxidromes and points regarding their use, and special supportive measures, are presented. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Time and Energy, Exploring Trajectory Options Between Nodes in Earth-Moon Space

    NASA Technical Reports Server (NTRS)

    Martinez, Roland; Condon, Gerald; Williams, Jacob

    2012-01-01

    The Global Exploration Roadmap (GER) was released by the International Space Exploration Coordination Group (ISECG) in September of 2011. It describes mission scenarios that begin with the International Space Station and utilize it to demonstrate necessary technologies and capabilities prior to deployment of systems into Earth-Moon space. Deployment of these systems is an intermediate step in preparation for more complex deep space missions to near-Earth asteroids and eventually Mars. In one of the scenarios described in the GER, "Asteroid Next", there are activities that occur in Earth-Moon space at one of the Earth-Moon Lagrange (libration) points. In this regard, the authors examine the possible role of an intermediate staging point in an effort to illuminate potential trajectory options for conducting missions in Earth-Moon space of increasing duration, ultimately leading to deep space missions. This paper will describe several options for transits between Low Earth Orbit (LEO) and the libration points, transits between libration points, and transits between the libration points and interplanetary trajectories. The solution space provided will be constrained by selected orbital mechanics design techniques and physical characteristics of hardware to be used in both crewed missions and uncrewed missions. The relationships between time and energy required to transfer hardware between these locations will provide a better understanding of the potential trade-offs mission planners could consider in the development of capabilities, individual missions, and mission series in the context of the ISECG GER.

  19. Three-year customer satisfaction survey in laboratory medicine in a Chinese university hospital.

    PubMed

    Guo, Siqi; Duan, Yifei; Liu, Xiaojuan; Jiang, Yongmei

    2018-04-25

    Customer satisfaction is a key quality indicator of laboratory service. Patients and physicians are the ultimate customers of a medical laboratory, and their opinions are essential components in developing a customer-oriented laboratory. A longitudinal investigation of customer satisfaction was conducted through questionnaires. We designed two different questionnaires and selected 1200 customers (600 outpatients and 600 physicians) to assess customer satisfaction every other year from 2012 to 2016. Items with scores <4 were considered unsatisfactory, and corrective actions were to be taken. The completion rates of physicians were 96.8% in 2012, 97% in 2014 and 96.5% in 2016, whereas the rates of patients were 95.3%, 96.2% and 95.2%, respectively. In 2012, the items with most dissatisfaction from physicians were test turnaround time (3.77 points) and service attitude (3.87 points), whereas waiting time (3.58 points) and examination environment (3.64 points) were the most dissatisfying items for patients. After corrective actions were taken, the satisfaction results in 2014 were better, which showed that our strategy was effective. However, some items still scored less than 4, so we repeated the survey after modifying the questionnaires in 2016. However, the overall satisfaction scores of the physicians and patients decreased in 2016, which alerted us to some influential factors we had neglected. By using a dynamic survey of satisfaction, we can continuously find deficiencies in our laboratory services and take suitable corrective actions, thereby improving our service quality.
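    The <4-point action threshold can be expressed as a simple filter over the item scores. The first two entries below are the 2012 physician scores reported above; the other items and their scores are hypothetical padding for the example.

    ```python
    # Mean item scores on a 5-point scale; items below 4 trigger corrective action.
    scores_2012 = {
        "test turnaround time": 3.77,        # reported
        "service attitude": 3.87,            # reported
        "report readability": 4.21,          # hypothetical
        "critical value notification": 4.35, # hypothetical
    }
    needs_action = sorted(k for k, v in scores_2012.items() if v < 4)
    print(needs_action)
    ```

    Re-running the same filter on each survey round gives the rolling list of items needing corrective action.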

  20. Time-dependent changes in protein expression in rainbow trout muscle following hypoxia.

    PubMed

    Wulff, Tune; Jokumsen, Alfred; Højrup, Peter; Jessen, Flemming

    2012-04-18

    Adaptation to hypoxia is a complex process, and individual proteins will be up- or down-regulated in order to address the main challenges at any given time. To investigate the dynamics of the adaptation, rainbow trout (Oncorhynchus mykiss) was exposed to 30% of normal oxygen tension for 1, 2, 5 and 24 h respectively, after which muscle samples were taken. The successful investigation of numerous proteins in a single study was achieved by selectively separating the sarcoplasmic proteins using 2-DE. In total 46 protein spots were identified as changing in abundance in response to hypoxia using one-way ANOVA and multivariate data analysis. Proteins of interest were subsequently identified by MS/MS following tryptic digestion. The observed regulation following hypoxia in skeletal muscle was determined to be time specific, as only a limited number of proteins were regulated in response to more than one time point. The cellular response to hypoxia included regulation of proteins involved in maintaining iron homeostasis, energy levels and muscle structure. In conclusion, this proteome-based study presents a comprehensive investigation of the expression profiles of numerous proteins at four different time points. This increases our understanding of timed changes in protein expression in rainbow trout muscle following hypoxia. Copyright © 2012 Elsevier B.V. All rights reserved.
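    The per-spot screening step (one-way ANOVA across exposure durations) can be sketched with simulated abundances; the group means, spread and replicate count below are invented, with an up-regulation planted at the 5 h time point.

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    # Illustrative spot abundances (arbitrary units) for one protein spot,
    # measured in replicate fish at each hypoxia duration.
    rng = np.random.default_rng(2)
    groups = {
        "1h": rng.normal(1.0, 0.1, 6),
        "2h": rng.normal(1.0, 0.1, 6),
        "5h": rng.normal(1.6, 0.1, 6),   # simulated up-regulation at 5 h
        "24h": rng.normal(1.0, 0.1, 6),
    }
    f_stat, p_value = f_oneway(*groups.values())
    print(p_value < 0.05)
    ```

    In a full analysis this test would be run for every spot, with the resulting p values corrected for multiple testing before declaring a spot as changing in abundance.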

  1. Influence of bench angle on upper extremity muscular activation during bench press exercise.

    PubMed

    Lauver, Jakob D; Cayot, Trent E; Scheuermann, Barry W

    2016-01-01

    This study compared the muscular activation of the pectoralis major, anterior deltoid and triceps brachii during a free-weight barbell bench press performed at 0°, 30°, 45° and -15° bench angles. Fourteen healthy resistance trained males (age 21.4 ± 0.4 years) participated in this study. One set of six repetitions for each bench press condition at 65% of one repetition maximum was performed. Surface electromyography (sEMG) was utilised to examine the muscular activation of the selected muscles during the eccentric and concentric phases. In addition, each phase was subdivided into 25% contraction durations, resulting in four separate time points for comparison between bench conditions. The sEMG of the upper pectoralis displayed no difference during any of the bench conditions when examining the complete concentric contraction; however, differences during the 26-50% contraction duration were found for both the 30° [122.5 ± 10.1% maximal voluntary isometric contraction (MVIC)] and 45° (124 ± 9.1% MVIC) bench conditions, resulting in greater sEMG compared to horizontal (98.2 ± 5.4% MVIC) and -15° (96.1 ± 5.5% MVIC). The sEMG of the lower pectoralis was greater during the -15° (100.4 ± 5.7% MVIC), 30° (86.6 ± 4.8% MVIC) and horizontal (100.1 ± 5.2% MVIC) bench conditions compared to the 45° (71.9 ± 4.5% MVIC) for the whole concentric contraction. The results of this study support the use of a horizontal bench to achieve muscular activation of both the upper and lower heads of the pectoralis. However, a bench incline angle of 30° or 45° resulted in greater muscular activation during certain time points, suggesting that it is important to consider how muscular activation is affected at various time points when selecting bench press exercises.
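    The 25% contraction-duration binning can be sketched as follows: split the rectified signal for one phase into four equal-duration slices and express each slice's mean as %MVIC. The synthetic signal and MVIC value are illustrative, not recorded data.

    ```python
    import numpy as np

    def quartile_activation(emg, mvic):
        """Mean rectified sEMG in each 25% slice of a contraction, as %MVIC
        (mirroring the 0-25, 26-50, 51-75 and 76-100% duration bins)."""
        rect = np.abs(np.asarray(emg, float))
        quarters = np.array_split(rect, 4)
        return [100.0 * q.mean() / mvic for q in quarters]

    # Synthetic concentric-phase envelope that peaks mid-contraction.
    t = np.linspace(0, 1, 400)
    emg = np.sin(np.pi * t)   # stand-in for a rectified EMG envelope
    pcts = quartile_activation(emg, mvic=1.0)
    print([round(p, 1) for p in pcts])
    ```

    Comparing these per-quartile values across bench angles is what reveals differences (such as the 26-50% bin above) that a whole-phase average would hide.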

  2. Development of a Novel Floating In-situ Gelling System for Stomach Specific Drug Delivery of the Narrow Absorption Window Drug Baclofen.

    PubMed

    R Jivani, Rishad; N Patel, Chhagan; M Patel, Dashrath; P Jivani, Nurudin

    2010-01-01

The present study deals with the development of a floating in-situ gel of the narrow absorption window drug baclofen. Sodium alginate-based in-situ gelling systems were prepared by dissolving various concentrations of sodium alginate in deionized water, to which varying concentrations of drug and calcium bicarbonate were added. Fourier transform infrared spectroscopy (FTIR) and differential scanning calorimetry (DSC) were used to check for any interaction between the drug and the excipients. A 3² full factorial design was used for optimization. The concentrations of sodium alginate (X1) and calcium bicarbonate (X2) were selected as the independent variables. The amount of the drug released after 1 h (Q1) and 10 h (Q10) and the viscosity of the solution were selected as the dependent variables. The gels were studied for their viscosity, in-vitro buoyancy and drug release. Contour plots were drawn for each dependent variable, and check-point batches were prepared in order to obtain desirable release profiles. The drug release profiles were fitted to different kinetic models. The floating lag time and floating time were found to be 2 min and 12 h, respectively. A decreasing trend in drug release was observed with increasing concentrations of CaCO3. The computed values of Q1 and Q10 for the check-point batch were 25% and 86% respectively, compared to the experimental values of 27.1% and 88.34%. The similarity factor (f2) for the check-point batch, being 80.25, showed that the two dissolution profiles were similar. The drug release from the in-situ gel follows the Higuchi model, which indicates diffusion-controlled release. A stomach-specific in-situ gel of baclofen could thus be prepared, using a floating mechanism to increase the residence time of the drug in the stomach and thereby increase absorption.
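    The Higuchi model referred to above states that cumulative release grows with the square root of time, Q = k·√t. As a rough illustration only (the release data below are hypothetical, not the batch values reported in the abstract), the rate constant k can be fitted by least squares with the line constrained through the origin:

    ```python
    import numpy as np

    def fit_higuchi(t_hours, q_released):
        # Least-squares fit of the Higuchi model Q = k * sqrt(t), with the
        # line constrained through the origin; returns k and R^2.
        t = np.asarray(t_hours, dtype=float)
        q = np.asarray(q_released, dtype=float)
        x = np.sqrt(t)
        k = float(x @ q) / float(x @ x)        # slope through the origin
        ss_res = float(((q - k * x) ** 2).sum())
        ss_tot = float(((q - q.mean()) ** 2).sum())
        return k, 1.0 - ss_res / ss_tot

    # Hypothetical cumulative-release data (% released at each hour),
    # not the experimental profile from the study.
    k, r2 = fit_higuchi([1, 2, 4, 6, 8, 10], [25.0, 36.0, 52.0, 64.0, 73.0, 86.0])
    ```

    An R² close to 1 for this fit is what "follows the Higuchi model" means operationally: release is proportional to √t, the signature of diffusion control.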

  3. Validation of a modification to Performance-Tested Method 010403: microwell DNA hybridization assay for detection of Listeria spp. in selected foods and selected environmental surfaces.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.

  4. An Investigation on the Contribution of GLONASS to the Precise Point Positioning for Short Time Observations

    NASA Astrophysics Data System (ADS)

    Ulug, R.; Ozludemir, M. T.

    2016-12-01

    After 2011, through the modernization of GLONASS, the number of satellites increased rapidly. This progress has made GLONASS the only fully operational alternative to GPS for point positioning. So far, many studies have been conducted to investigate the contribution of GLONASS to point positioning using different methods such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP). The latter, PPP, is a method that performs precise position determination using a single GNSS receiver. The PPP method has become very attractive since the early 2000s and provides great advantages for engineering and scientific applications. However, the PPP method needs at least 2 hours of observation time, and the required observation length may be longer depending on several factors, such as the number of satellites and the satellite configuration. The more satellites, the less observation time is required; nevertheless, the impact of the number of satellites included must be well understood. In this study, to determine the contribution of GLONASS to PPP, GLONASS satellite observations were added one by one, from 1 to 5 satellites, to 2-, 4- and 6-hour observation sessions. For this purpose, the data collected at the IGS site ISTA were used. Data processing was performed for Day of Year (DOY) 197 in 2016. The 24-hour GPS observations were processed by the Bernese 5.2 PPP module and the output was selected as the reference, while the 2-, 4- and 6-hour GPS and GPS/GLONASS observations were processed by the magicGNSS PPP module. The results clearly showed that GPS/GLONASS observations improved positional accuracy, precision, dilution of precision and convergence to the reference coordinates. In this context, the coordinate differences between the 24-hour GPS solution and the 6-hour GPS/GLONASS solutions were less than 2 cm.

  5. Alcohol consumption during adolescence is associated with reduced grey matter volumes.

    PubMed

    Heikkinen, Noora; Niskanen, Eini; Könönen, Mervi; Tolmunen, Tommi; Kekkonen, Virve; Kivimäki, Petri; Tanila, Heikki; Laukkanen, Eila; Vanninen, Ritva

    2017-04-01

    Cognitive impairment has been associated with excessive alcohol use, but its neural basis is poorly understood. Chronic excessive alcohol use in adolescence may lead to neuronal loss and volumetric changes in the brain. Our objective was to compare the grey matter volumes of heavy- and light-drinking adolescents. This was a longitudinal study: heavy-drinking adolescents without an alcohol use disorder and their light-drinking controls were followed-up for 10 years using questionnaires at three time-points. Magnetic resonance imaging was conducted at the last time-point. The area near Kuopio University Hospital, Finland. The 62 participants were aged 22-28 years and included 35 alcohol users and 27 controls who had been followed-up for approximately 10 years. Alcohol use was measured by the Alcohol Use Disorders Identification Test (AUDIT)-C at three time-points during 10 years. Participants were selected based on their AUDIT-C score. Magnetic resonance imaging was conducted at the last time-point. Grey matter volume was determined and compared between heavy- and light-drinking groups using voxel-based morphometry on three-dimensional T1-weighted magnetic resonance images using predefined regions of interest and a threshold of P < 0.05, with small volume correction applied on cluster level. Grey matter volumes were significantly smaller among heavy-drinking participants in the bilateral anterior cingulate cortex, right orbitofrontal and frontopolar cortex, right superior temporal gyrus and right insular cortex compared to the control group (P < 0.05, family-wise error-corrected cluster level). Excessive alcohol use during adolescence appears to be associated with an abnormal development of the brain grey matter. Moreover, the structural changes detected in the insula of alcohol users may reflect a reduced sensitivity to alcohol's negative subjective effects. © 2016 Society for the Study of Addiction.

  6. Compressed storage of arterial pressure waveforms by selection of significant points.

    PubMed

    de Graaf, P M; van Goudoever, J; Wesseling, K H

    1997-09-01

    Continuous records of arterial blood pressure can be obtained non-invasively with Finapres, even for periods of 24 hours. Increasingly, storage of such records is done digitally, requiring large disc capacities. It is therefore necessary to find methods to store blood pressure waveforms in compressed form. The method of selection of significant points known from ECG data compression is adapted. Points are selected as significant wherever the first derivative of the pressure wave changes sign. As a second stage recursive partitioning is used to select additional points such that the difference between the selected points, linearly interpolated, and the original curve remains below a maximum. This method is tested on finger arterial pressure waveform epochs of 60 s duration taken from 32 patients with a wide range of blood pressures and heart rates. An average compression factor of 4.6 (SD 1.0) is obtained when accepting a maximum difference of 3 mmHg. The root mean squared error is 1 mmHg averaged over the group of patient waveforms. Clinically relevant parameters such as systolic, diastolic and mean pressure are reproduced with an offset error of less than 0.5 (0.3) mmHg and scatter less than 0.6 (0.1) mmHg. It is concluded that a substantial compression factor can be achieved with a simple and computationally fast algorithm and little deterioration in waveform quality and pressure level accuracy.
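    The two-stage scheme described above, keeping points where the first derivative changes sign and then recursively adding points until the linear-interpolation error stays below a maximum, can be sketched as follows. The synthetic waveform and the 3 mmHg tolerance are illustrative; this is not the Finapres implementation:

    ```python
    import numpy as np

    def significant_points(signal, max_err=3.0):
        """Select significant points: stage 1 keeps endpoints and every
        sample where the first derivative changes sign; stage 2 recursively
        splits each segment until the linear interpolation between the
        selected points deviates from the original by at most max_err."""
        y = np.asarray(signal, dtype=float)
        d = np.diff(y)
        idx = {0, len(y) - 1}
        idx.update(i + 1 for i in range(len(d) - 1) if d[i] * d[i + 1] < 0)

        def split(a, b):
            if b - a < 2:
                return
            xs = np.arange(a, b + 1)
            line = np.interp(xs, [a, b], [y[a], y[b]])
            err = np.abs(y[a:b + 1] - line)
            worst = int(np.argmax(err))
            if err[worst] > max_err:
                idx.add(a + worst)          # add the worst-fitting point
                split(a, a + worst)
                split(a + worst, b)

        for a, b in zip(sorted(idx)[:-1], sorted(idx)[1:]):
            split(a, b)
        return sorted(idx)

    # Synthetic "pressure" trace (mmHg), purely illustrative.
    t = np.linspace(0, 4 * np.pi, 400)
    wave = 80 + 40 * np.sin(t)
    keep = significant_points(wave, max_err=3.0)
    recon = np.interp(np.arange(len(wave)), keep, wave[keep])
    ```

    By construction the reconstruction error is bounded by the tolerance, and the compression factor is simply the ratio of original to retained samples.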

  7. A robust method of thin plate spline and its application to DEM construction

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan

    2012-11-01

    In order to avoid the ill-conditioning problem of thin plate spline (TPS), the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots by back-substitution. For interpolating large sets of sampling points, we developed a local TPS-M, where some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M is comparable with that of smoothing TPS. At the same simulation accuracy, the computational time of TPS-M decreases with the increase of the number of sampling points. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, which is on par with smoothing TPS. The example of constructing a series of large scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with the second-order drift function (UK). Results show that regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for the smoothing TPS at the finest sampling interval of 20 m, and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
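    Of the classical interpolators the study benchmarks against, inverse distance weighting (IDW) is simple enough to sketch. The sample points and the power parameter below are illustrative, not the study's DEM data:

    ```python
    import numpy as np

    def idw(xy_known, z_known, xy_query, power=2.0):
        """Inverse distance weighting: each query value is the weighted
        mean of the known values, with weights 1/d^power. A query that
        coincides exactly with a sample returns the sample value."""
        xy_known = np.asarray(xy_known, dtype=float)
        z_known = np.asarray(z_known, dtype=float)
        out = []
        for q in np.atleast_2d(xy_query).astype(float):
            d = np.linalg.norm(xy_known - q, axis=1)
            if np.any(d == 0):                  # exact hit on a sample point
                out.append(float(z_known[d == 0][0]))
                continue
            w = 1.0 / d ** power
            out.append(float(w @ z_known / w.sum()))
        return np.array(out)

    # Three known elevations and two query points, purely illustrative.
    xy = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
    z = [0.0, 1.0, 1.0]
    vals = idw(xy, z, [[0.0, 0.0], [0.5, 0.0]])
    ```

    Unlike TPS, IDW requires no linear system solve, which is why it serves as a low-cost baseline in such comparisons.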

  8. Three-dimensional computer-assisted study model analysis of long-term oral-appliance wear. Part 1: Methodology.

    PubMed

    Chen, Hui; Lowe, Alan A; de Almeida, Fernanda Riberiro; Wong, Mary; Fleetham, John A; Wang, Bangkang

    2008-09-01

    The aim of this study was to test a 3-dimensional (3D) computer-assisted dental model analysis system that uses selected landmarks to describe tooth movement during treatment with an oral appliance. Dental casts of 70 patients diagnosed with obstructive sleep apnea and treated with oral appliances for a mean time of 7 years 4 months were evaluated with a 3D digitizer (MicroScribe-3DX, Immersion, San Jose, Calif) compatible with the Rhinoceros modeling program (version 3.0 SR3c, Robert McNeel & Associates, Seattle, Wash). A total of 86 landmarks on each model were digitized, and 156 variables were calculated as either the linear distance between points or the distance from points to reference planes. Four study models for each patient (maxillary baseline, mandibular baseline, maxillary follow-up, and mandibular follow-up) were superimposed on 2 sets of reference points: 3 points on the palatal rugae for maxillary model superimposition, and 3 occlusal contact points for the same set of maxillary and mandibular model superimpositions. The patients were divided into 3 evaluation groups by 5 orthodontists based on the changes between baseline and follow-up study models. Digital dental measurements could be analyzed, including arch width, arch length, curve of Spee, overbite, overjet, and the anteroposterior relationship between the maxillary and mandibular arches. A method error within 0.23 mm in 14 selected variables was found for the 3D system. The statistical differences in the 3 evaluation groups verified the division criteria determined by the orthodontists. The system provides a method to record 3D measurements of study models that permits computer visualization of tooth position and movement from various perspectives.

  9. A presentation system for just-in-time learning in radiology.

    PubMed

    Kahn, Charles E; Santos, Amadeu; Thao, Cheng; Rock, Jayson J; Nagy, Paul G; Ehlers, Kevin C

    2007-03-01

    There is growing interest in bringing medical educational materials to the point of care. We sought to develop a system for just-in-time learning in radiology. A database of 34 learning modules was derived from previously published journal articles. Learning objectives were specified for each module, and multiple-choice test items were created. A web-based system, called TEMPO, was developed to allow radiologists to select and view the learning modules. Web services were used to exchange clinical context information between TEMPO and the simulated radiology work station. Preliminary evaluation was conducted using the System Usability Scale (SUS) questionnaire. TEMPO identified learning modules that were relevant to the age, sex, imaging modality, and body part or organ system of the patient being viewed by the radiologist on the simulated clinical work station. Users expressed a high degree of satisfaction with the system's design and user interface. TEMPO enables just-in-time learning in radiology, and can be extended to create a fully functional learning management system for point-of-care learning in radiology.

  10. Accelerated antioxidant bioavailability of OPC-3 bioflavonoids administered as isotonic solution.

    PubMed

    Cesarone, Maria R; Grossi, Maria Giovanni; Di Renzo, Andrea; Errichi, Silvia; Schönlau, Frank; Wilmer, James L; Lange, Mark; Blumenfeld, Julian

    2009-06-01

    The degree of absorption of bioflavonoids, a diverse and complex group of plant-derived phytonutrients, has been a frequent debate among scientists. Monomeric flavonoid species are known to be absorbed within 2 h. The kinetics of plasma reactive oxygen species, a reflection of bioactivity, of a commercial blend of flavonoids, OPC-3, was investigated. OPC-3 was selected to compare the absorption of an isotonic flavonoid solution vs a tablet form taken with an equivalent amount of fluid. In the case of isotonic OPC-3, the reactive oxygen species of the subjects' plasma decreased significantly (p < 0.05) by 10 min post-consumption, a reduction six times greater than with OPC-3 tablets. After 20 min the isotonic formulation was approximately four times more bioavailable than the tablet, and after 40 min twice as bioavailable. At time points of 1 h and later, both isotonic and tablet formulations lowered oxidative stress, although the isotonic formulation values remained significantly better throughout the 4 h investigation period. These findings point to a dramatically accelerated bioavailability of flavonoids delivered in an isotonic formulation. (c) 2009 John Wiley & Sons, Ltd.

  11. Electronic method for autofluorography of macromolecules on two-D matrices. [Patent application

    DOEpatents

    Davidson, J.B.; Case, A.L.

    1981-12-30

    A method for detecting, localizing, and quantifying macromolecules contained in a two-dimensional matrix is provided which employs a television-based position sensitive detection system. A molecule-containing matrix may be produced by conventional means to produce spots of light at the molecule locations which are detected by the television system. The matrix, such as a gel matrix, is exposed to an electronic camera system including an image-intensifier and secondary electron conduction camera capable of light integrating times of many minutes. A light image stored in the form of a charge image on the camera tube target is scanned by conventional television techniques, digitized, and stored in a digital memory. Intensity of any point on the image may be determined from the number at the memory address of the point. The entire image may be displayed on a television monitor for inspection and photographing or individual spots may be analyzed through selected readout of the memory locations. Compared to conventional film exposure methods, the exposure time may be reduced 100 to 1000 times.

  12. A soft-computing methodology for noninvasive time-spatial temperature estimation.

    PubMed

    Teixeira, César A; Ruano, Maria Graça; Ruano, António E; Pereira, Wagner C A

    2008-02-01

    The safe and effective application of thermal therapies is restricted by the lack of reliable noninvasive temperature estimators. In this paper, the temporal echo-shifts of backscattered ultrasound signals, collected from a gel-based phantom, were tracked and, together with past temperature values, used as input information for radial basis function neural networks. The phantom was heated using a piston-like therapeutic ultrasound transducer. The neural models were trained to estimate the temperature at different intensities and at points arranged along the therapeutic transducer's radial line (60 mm from the transducer face). Model inputs, as well as the number of neurons, were selected using the multiobjective genetic algorithm (MOGA). The best attained models present, on average, a maximum absolute error of less than 0.5 degrees C, which is regarded as the borderline between a reliable and an unreliable estimator in hyperthermia/diathermia. In order to test the spatial generalization capacity, the best models were tested using spatial points not previously assessed, and some of them presented a maximum absolute error below 0.5 degrees C, being selected as the best models. It should also be stressed that these best models have low implementation complexity, as desired for real-time applications.

  13. Alternator control for battery charging

    DOEpatents

    Brunstetter, Craig A.; Jaye, John R.; Tallarek, Glen E.; Adams, Joseph B.

    2015-07-14

    In accordance with an aspect of the present disclosure, an electrical system for an automotive vehicle has an electrical generating machine and a battery. A set point voltage, which sets an output voltage of the electrical generating machine, is set by an electronic control unit (ECU). The ECU selects one of a plurality of control modes for controlling the alternator based on an operating state of the vehicle as determined from vehicle operating parameters. The ECU selects a range for the set point voltage based on the selected control mode and then sets the set point voltage within the range based on feedback parameters for that control mode. In an aspect, the control modes include a trickle charge mode and battery charge current is the feedback parameter and the ECU controls the set point voltage within the range to maintain a predetermined battery charge current.

  14. Effects of one versus two bouts of moderate intensity physical activity on selective attention during a school morning in Dutch primary schoolchildren: A randomized controlled trial.

    PubMed

    Altenburg, Teatske M; Chinapaw, Mai J M; Singh, Amika S

    2016-10-01

    Evidence suggests that physical activity is positively related to several aspects of cognitive functioning in children, among which is selective attention. To date, no information is available on the optimal frequency of physical activity on cognitive functioning in children. The current study examined the acute effects of one and two bouts of moderate-intensity physical activity on children's selective attention. Randomized controlled trial (ISRCTN97975679). Thirty boys and twenty-six girls, aged 10-13 years, were randomly assigned to three conditions: (A) sitting all morning working on simulated school tasks; (B) one 20-min physical activity bout after 90min; and (C) two 20-min physical activity bouts, i.e. at the start and after 90min. Selective attention was assessed at five time points during the morning (i.e. at baseline and after 20, 110, 130 and 220min), using the 'Sky Search' subtest of the 'Test of Selective Attention in Children'. We used GEE analysis to examine differences in Sky Search scores between the three experimental conditions, adjusting for school, baseline scores, self-reported screen time and time spent in sports. Children who performed two 20-min bouts of moderate-intensity physical activity had significantly better Sky Search scores compared to children who performed one physical activity bout or remained seated the whole morning (B=-0.26; 95% CI=[-0.52; -0.00]). Our findings support the importance of repeated physical activity during the school day for beneficial effects on selective attention in children. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  15. Recommending blood glucose monitors, a pharmacy perspective.

    PubMed

    Carter, Alan

    2007-03-01

    Selection of which blood glucose monitoring system to use has become an issue for physicians, diabetes educators, pharmacists, and patients. The field of competing makes and models of blood glucose monitoring systems has become crowded, with manufacturers touting improvements in accuracy, ease of use/alternate site options, stored results capacity, software evaluation tools, and/or price point. Personal interviews of 12 pharmacists from community and academic practice settings about monitor preference, as well as results from a national survey of pharmacist recommendations, were compared to actual wholesale sales data to estimate the impact of such recommendations on final monitor selection by the patient. Accu-Chek monitors were recommended 34.65% of the time and represented 28.58% of sales; when recommended, they were the monitor ultimately selected 82.48% of the time. OneTouch monitors received 27.72% of recommendations but represented 31.43% of sales, indicating possible patient brand loyalty or formulary preference for that product. FreeStyle monitors came in third for pharmacist recommendations and were selected by the patient 61.68% of the time when recommended. The category of "other monitor" choices was selected 60.89% of the time by patients given those suggestions. Included in the "other monitor" category was the new disposable monitor marketed as the Sidekick. Based on the sales data provided, the Sidekick made up 2.87% of "other monitor" category sales, representing 68% of the "other monitor" segment. While patients frequently follow pharmacist monitoring system suggestions, the ultimate deciding factor is most often the final out-of-pocket cost to the patient. As a result, the cost of supplies often becomes the most important factor in final monitor selection at the patient level. If the patient cannot afford to perform the recommended daily testing intervals, all other determining factors and suggestions become moot.

  16. Trends: Bearding the Proverbial Lion.

    ERIC Educational Resources Information Center

    Greckel, Wil

    1989-01-01

    Describes the use of television commercials to teach classical music. Points out that a large number of commercials use classical selections which can serve as a starting point for introducing students to this form. Urges music educators to broaden their views and use these truncated selections to transmit our cultural heritage. (KO)

  17. Evaluation of methods for rapid determination of freezing point of aviation fuels

    NASA Technical Reports Server (NTRS)

    Mathiprakasam, B.

    1982-01-01

    Methods for identification of the more promising concepts for the development of a portable instrument to rapidly determine the freezing point of aviation fuels are described. The evaluation process consisted of: (1) collection of information on techniques previously used for the determination of the freezing point, (2) screening and selection of these techniques for further evaluation of their suitability in a portable unit for rapid measurement, and (3) an extensive experimental evaluation of the selected techniques and a final selection of the most promising technique. Test apparatuses employing differential thermal analysis and the change in optical transparency during phase change were evaluated and tested. A technique similar to differential thermal analysis using no reference fuel was investigated. In this method, the freezing point was obtained by digitizing the data and locating the point of inflection. Results obtained using this technique compare well with those obtained elsewhere using different techniques. A conceptual design of a portable instrument incorporating this technique is presented.
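    The selected technique, locating the freezing point as the point of inflection of a digitized cooling trace, might be sketched as below. The synthetic trace, the smoothing window, and the -47 degC plateau are assumptions for illustration, not the report's data or algorithm:

    ```python
    import numpy as np

    def freezing_point(time_s, temp_c, smooth=5):
        """Estimate the freezing point as the temperature at the point of
        inflection: the sample where the second derivative of the lightly
        smoothed trace is largest in magnitude. The moving-average window
        `smooth` is an assumed tuning parameter."""
        T = np.convolve(temp_c, np.ones(smooth) / smooth, mode="same")
        d2 = np.gradient(np.gradient(T, time_s), time_s)
        core = slice(smooth, len(T) - smooth)   # skip edges distorted by smoothing
        i = smooth + int(np.argmax(np.abs(d2[core])))
        return temp_c[i]

    # Hypothetical trace: steady cooling, a plateau from latent-heat release
    # beginning at -47 degC, then cooling resumes.
    t = np.arange(0.0, 300.0, 1.0)
    temp = np.where(t < 150, 20.0 - 0.4467 * t, -47.0)
    temp = np.where(t >= 220, -47.0 - 0.3 * (t - 220), temp)
    fp = freezing_point(t, temp)
    ```

    The onset of the plateau produces the sharpest change in slope, so the second-derivative maximum lands near the start of the phase change, which is the behavior the inflection-point method exploits.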

  18. Developmental differences between boys and girls result in sex-specific physical fitness changes from fourth to fifth grade.

    PubMed

    Flanagan, Shawn D; Dunn-Lewis, Courtenay; Hatfield, Disa L; Distefano, Lindsay J; Fragala, Maren S; Shoap, Mark; Gotwald, Mary; Trail, John; Gomez, Ana L; Volek, Jeff S; Cortis, Cristina; Comstock, Brett A; Hooper, David R; Szivak, Tunde K; Looney, David P; DuPont, William H; McDermott, Danielle M; Gaudiose, Michael C; Kraemer, William J

    2015-01-01

    To better understand how developmental differences impact performance on a broad selection of common physical fitness measures, we examined changes in boys and girls from fourth to fifth grade. Subjects included 273 boys (age, 9.5 ± 0.6 years; height, 139.86 ± 7.52 cm; mass, 38.00 ± 9.55 kg) and 295 girls (age, 9.6 ± 0.5 years; height, 139.30 ± 7.19 cm; weight, 37.44 ± 9.35 kg). We compared anthropometrics, cardiorespiratory and local muscular endurance, flexibility, power, and strength. A mixed-method analysis of variance was used to compare boys and girls at the 2 time points. Pearson correlation coefficients were used to examine relationships between anthropometric and fitness measurements. Significance was set at p ≤ 0.05. Weight increased significantly (>10%) in both sexes, and girls became significantly taller than boys after growing 4.9% by fifth grade (vs. 3.5%). Both groups improved cardiorespiratory endurance and power, although boys performed better than girls at both time points. Boys were stronger in fourth grade, but girls improved more, leading to similar fifth-grade values. Girls were more flexible in fourth grade, but their significant decreases (∼32.4%) coupled with large improvements in boys (∼105%) resulted in similar fifth-grade scores. Body mass index (BMI) was positively correlated with run time regardless of grade or sex. Power was negatively correlated with BMI and run time in fourth grade. In conclusion, sex-specific differences in physical fitness are apparent before pubescence. Furthermore, this selection of measures reveals sexually dimorphic changes, which likely reflect the onset of puberty in girls. Coaches and teachers should account for these developmental differences and their effects on anthropometrics and fitness in boys and girls.

  19. An Automated Blur Detection Method for Histological Whole Slide Imaging

    PubMed Central

    Moles Lopez, Xavier; D'Andrea, Etienne; Barbot, Paul; Bridoux, Anne-Sophie; Rorive, Sandrine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2013-01-01

    Whole slide scanners are novel devices that enable high-resolution imaging of an entire histological slide. Furthermore, the imaging is achieved in only a few minutes, which enables image rendering of large-scale studies involving multiple immunohistochemistry biomarkers. Although whole slide imaging has improved considerably, locally poor focusing causes blurred regions of the image. These artifacts may strongly affect the quality of subsequent analyses, making a slide review process mandatory. This tedious and time-consuming task requires the scanner operator to carefully assess the virtual slide and to manually select new focus points. We propose a statistical learning method that provides early image quality feedback and automatically identifies regions of the image that require additional focus points. PMID:24349343

  20. Emotion Estimation Algorithm from Facial Image Analyses of e-Learning Users

    NASA Astrophysics Data System (ADS)

    Shigeta, Ayuko; Koike, Takeshi; Kurokawa, Tomoya; Nosu, Kiyoshi

    This paper proposes an emotion estimation algorithm based on facial images of e-Learning users. The algorithm's characteristics are as follows. The criteria used to relate an e-Learning user's emotion to a representative emotion were obtained from time-sequential analysis of the user's facial expressions. By examining the emotions of the e-Learning users and the positional changes of the facial expressions in the experimental results, the following procedures are introduced to improve estimation reliability: (1) effective feature points are chosen for the emotion estimation; (2) subjects are divided into two groups by the change rates of the face feature points; (3) eigenvectors of the variance-covariance matrices are selected (cumulative contribution rate >= 95%); (4) emotions are calculated using the Mahalanobis distance.
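    Step (4), assigning a feature vector to the nearest emotion class by Mahalanobis distance, can be sketched as follows. The class labels and per-class statistics below are toy values in a 2-D (e.g., PCA-projected) feature space, not figures from the paper:

    ```python
    import numpy as np

    def mahalanobis(x, mean, cov):
        # Squared Mahalanobis distance of x from a class (mean, covariance).
        diff = np.asarray(x, dtype=float) - mean
        return float(diff @ np.linalg.inv(cov) @ diff)

    def classify(x, classes):
        # Assign x to the emotion class with the smallest Mahalanobis distance.
        return min(classes, key=lambda c: mahalanobis(x, *classes[c]))

    # Toy per-emotion statistics; labels and numbers are illustrative only.
    classes = {
        "joy":     (np.array([2.0, 1.0]), np.eye(2)),
        "neutral": (np.array([0.0, 0.0]), np.eye(2)),
    }
    ```

    Using the class covariance rather than plain Euclidean distance lets the classifier account for how much each projected feature naturally varies within an emotion class.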

  1. The virtual terrorism response academy: training for high-risk, low-frequency threats.

    PubMed

    Henderson, Joseph V

    2005-01-01

    The Virtual Terrorism Response Academy is a reusable virtual learning environment that prepares emergency responders to deal with high-risk, low-frequency events in general, and terrorist attacks in particular. The principal learning strategy is a traditional one: apprenticeship. Trainees enter the Academy and travel through its halls, selecting different learning experiences under the guidance of instructors who are simultaneously master practitioners and master trainers. The mentors are real individuals who have been videotaped according to courseware designs; they are subsequently available at any time or location via broadband Internet or CD-ROM. The Academy features a Simulation Area where trainees are briefed on a given scenario, select appropriate resources (e.g., protective equipment and hazmat instruments), and then enter a 3-dimensional space where they must deal with various situations. Simulations are done under the guidance of a master trainer who functions as a coach, asking questions, pointing things out, and explaining his reasoning at various points in the simulation. This is followed by a debriefing and a discussion of lessons that could be learned from the simulation and the trainee's decisions.

  2. Cocrystals to facilitate delivery of poorly soluble compounds beyond-rule-of-5.

    PubMed

    Kuminek, Gislaine; Cao, Fengjuan; Bahia de Oliveira da Rocha, Alanny; Gonçalves Cardoso, Simone; Rodríguez-Hornedo, Naír

    2016-06-01

    Besides enhancing aqueous solubilities, cocrystals have the ability to fine-tune the solubility advantage over the drug, the supersaturation index, and bioavailability. This review presents important facts about cocrystals that set them apart from other solid-state forms of drugs, and a quantitative set of rules for the selection of additives and solution/formulation conditions that predict cocrystal solubility, supersaturation index, and transition points. Cocrystal eutectic constants are shown to be the most important cocrystal property that can be measured once a cocrystal is discovered, and simple relationships are presented that allow prediction of cocrystal behavior as a function of pH and drug solubilizing agents. The cocrystal eutectic constant is a stability or supersaturation index that: (a) reflects how close to or far from equilibrium a cocrystal is, (b) establishes transition points, and (c) provides a quantitative scale of changes in cocrystal true solubility relative to the drug. The benefit of this strategy is that a single measurement, which requires little material and time, provides a principled basis to tailor the cocrystal supersaturation index through the rational selection of cocrystal formulation, dissolution, and processing conditions. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Channel morphodynamics in four reaches of the Lower Missouri River, 2006-07

    USGS Publications Warehouse

    Elliott, Caroline M.; Reuter, Joanna M.; Jacobson, Robert B.

    2009-01-01

    Channel morphodynamics in response to flow modifications from Gavins Point Dam are examined in four reaches of the Lower Missouri River. Measures include changes in channel morphology and indicators of sediment transport in four 6 kilometer long reaches located downstream from Gavins Point Dam, near Yankton, South Dakota, Kenslers Bend, Nebraska, Little Sioux, Iowa, and Miami, Missouri. Each of the four reaches was divided into 300 transects with a 20-meter spacing and surveyed during the summer in 2006 and 2007. A subset of 30 transects was randomly selected and surveyed 7-10 times in 2006-07 over a wide range of discharges including managed and natural flow events. Hydroacoustic mapping used a survey-grade echosounder and a Real Time Kinematic Global Positioning System to evaluate channel change. Acoustic Doppler current profiler measurements were used to evaluate bed-sediment velocity. Results indicate varying amounts of deposition, erosion, net change, and sediment transport in the four Lower Missouri River reaches. The Yankton reach was the most stable over monthly and annual time-frames. The Kenslers Bend and Little Sioux reaches exhibited substantial amounts of deposition and erosion, although net change was generally low in both reaches. Total, or gross geomorphic change was greatest in the Kenslers Bend reach. The Miami reach exhibited varying rates of deposition and erosion, and low net change. The Yankton, Kenslers Bend, and Miami reaches experienced net erosion during the time period that bracketed the managed May 2006 spring rise event from Gavins Point Dam.

  4. Upper Limb Kinematics in Stroke and Healthy Controls Using Target-to-Target Task in Virtual Reality.

    PubMed

    Hussain, Netha; Alt Murphy, Margit; Sunnerhagen, Katharina S

    2018-01-01

    Kinematic analysis in a virtual reality (VR) environment provides quantitative assessment of upper limb movements. This technique has rarely been used to evaluate motor function in stroke despite its availability in stroke rehabilitation. To determine the discriminative validity of VR-based kinematics during a target-to-target pointing task in individuals with mild or moderate arm impairment following stroke and in healthy controls. Sixty-seven participants with moderate (32-57 points) or mild (58-65 points) stroke impairment, as assessed with the Fugl-Meyer Assessment for Upper Extremity, were included from the Stroke Arm Longitudinal study at the University of Gothenburg (SALGOT), a cohort of non-selected individuals within the first year of stroke. The stroke groups and 43 healthy controls performed the target-to-target pointing task, in which 32 circular targets appear one after the other and disappear when pointed at with the haptic handheld stylus in a three-dimensional VR environment. The kinematic parameters captured by the stylus included movement time, velocities, and smoothness of movement. Movement time, mean velocity, and peak velocity discriminated between groups with moderate and mild stroke impairment and healthy controls: movement time was longer, and mean and peak velocities were lower, for individuals with stroke. The number of velocity peaks, representing smoothness, was also discriminative and significantly higher in both stroke groups (mild, moderate) compared to controls. Movement trajectories in stroke more frequently showed clustering (spider's web) close to the target, indicating deficits in movement precision. The target-to-target pointing task can provide valuable and specific information about sensorimotor impairment of the upper limb following stroke that might not be captured using traditional clinical scales. The trial was registered under number NCT01115348 at clinicaltrials.gov on May 4, 2010. 
URL: https://clinicaltrials.gov/ct2/show/NCT01115348.
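
    The smoothness measure used above (number of velocity peaks) can be sketched from raw position samples: differentiate position to get speed, then count local maxima above a small noise threshold. The following Python snippet is a hedged illustration, not the study's analysis code; the trajectories, sampling rate, and threshold are invented for the example.

```python
import numpy as np

def movement_kinematics(pos, dt, peak_thresh=0.05):
    """From a (T, 3) position trace sampled every dt seconds, compute
    movement time, mean/peak speed, and the number of velocity peaks
    (a common smoothness proxy: fewer peaks = smoother movement)."""
    vel = np.diff(pos, axis=0) / dt          # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    # local maxima of the speed profile above a noise threshold
    peaks = [i for i in range(1, len(speed) - 1)
             if speed[i] > speed[i - 1] and speed[i] >= speed[i + 1]
             and speed[i] > peak_thresh]
    return {"movement_time": len(pos) * dt,
            "mean_speed": float(speed.mean()),
            "peak_speed": float(speed.max()),
            "n_velocity_peaks": len(peaks)}

# Hypothetical 0.3 m reach sampled at 100 Hz: a smooth minimum-jerk-like
# profile vs. the same path with a superimposed oscillation
t = np.linspace(0, 1, 101)
smooth = np.outer(0.3 * (3 * t**2 - 2 * t**3), [1.0, 0.0, 0.0])
jerky = smooth.copy()
jerky[:, 0] += 0.01 * np.sin(12 * np.pi * t)

k_smooth = movement_kinematics(smooth, dt=0.01)
k_jerky = movement_kinematics(jerky, dt=0.01)
```

    The smooth reach yields a single bell-shaped speed profile (one velocity peak), while the fragmented one yields several, mirroring the stroke-vs-control contrast reported above.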

  5. Optical selection and collection of DNA fragments

    DOEpatents

    Roslaniec, Mary C.; Martin, John C.; Jett, James H.; Cram, L. Scott

    1998-01-01

    Optical selection and collection of DNA fragments. The present invention includes the optical selection and collection of large (>µg) quantities of clonable, chromosome-specific DNA from a sample of chromosomes. Chromosome selection is based on selective, irreversible photoinactivation of unwanted chromosomal DNA. Although more general procedures may be envisioned, the invention is demonstrated by processing chromosomes in a conventional flow cytometry apparatus, but where no droplets are generated. All chromosomes in the sample are first stained with at least one fluorescent analytic dye and bonded to a photochemically active species which can render chromosomal DNA unclonable if activated. After passing through analyzing light beam(s), unwanted chromosomes are irradiated using light which is absorbed by the photochemically active species, thereby causing photoinactivation. As desired chromosomes pass this photoinactivation point, the inactivating light source is deflected by an optical modulator; hence, desired chromosomes are not photoinactivated and remain clonable. The selection and photoinactivation processes take place on a microsecond timescale. By eliminating droplet formation, chromosome selection rates 50 times greater than those possible with conventional chromosome sorters may be obtained. Thus, usable quantities of clonable DNA from any source thereof may be collected.

  6. Perturbation of bile acid homeostasis is an early pathogenesis event of drug induced liver injury in rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Makoto; Miyake, Manami; Sato, Hiroko

    2013-04-01

    Drug-induced liver injury (DILI) is a significant consideration for drug development. Current preclinical DILI assessment, relying on histopathology and clinical chemistry, has limitations in sensitivity and discordance with humans. To gain insights on DILI pathogenesis and identify potential biomarkers for improved DILI detection, we performed untargeted metabolomic analyses on rats treated with thirteen known hepatotoxins causing various types of DILI: necrosis (acetaminophen, bendazac, cyclosporine A, carbon tetrachloride, ethionine), cholestasis (methapyrilene and naphthylisothiocyanate), steatosis (tetracycline and ticlopidine), and idiosyncratic (carbamazepine, chlorzoxasone, flutamide, and nimesulide) at two doses and two time points. Statistical analysis and pathway mapping of the nearly 1900 metabolites profiled in the plasma, urine, and liver revealed diverse time- and dose-dependent metabolic cascades leading to DILI by the hepatotoxins. The most consistent change induced by the hepatotoxins, detectable even at the early time point/low dose, was the significant elevation of a panel of bile acids in the plasma and urine, suggesting that DILI impaired hepatic bile acid uptake from the circulation. Furthermore, bile acid amidation in the hepatocytes was altered depending on the severity of the hepatotoxin-induced oxidative stress. The alteration of the bile acids was most evident for the necrosis and cholestasis hepatotoxins, with more subtle effects from the steatosis and idiosyncratic hepatotoxins. Taken together, our data suggest that the perturbation of bile acid homeostasis is an early event of DILI. Upon further validation, selected bile acids in the circulation could potentially be used as sensitive and early preclinical DILI biomarkers. - Highlights: ► We used metabolomics to gain insights on drug induced liver injury (DILI) in rats. ► We profiled rats treated with thirteen hepatotoxins at two doses and two time points. 
► The toxins decreased the liver's ability to uptake bile acid from the circulation. ► Oxidative stress induced by the toxins altered bile acid biosynthesis in the liver. ► Selected bile acids in the plasma and urine could be sensitive DILI biomarkers.

  7. Lighting Condition Analysis for Mars Moon Phobos

    NASA Technical Reports Server (NTRS)

    Li, Zu Qun; Crues, Edwin Z.; Bielski, Paul; De Carufel, Guy

    2016-01-01

    A manned mission to Phobos may be an important precursor and catalyst for the human exploration of Mars, as it will fully demonstrate the technologies for a successful Mars mission. A comprehensive understanding of Phobos' environment, such as lighting conditions and gravitational acceleration, is essential to mission success. The lighting condition is one of many critical factors for landing zone selection, vehicle power subsystem design, and surface mobility vehicle path planning. Due to the orbital characteristics of Phobos, the lighting condition will change dramatically from one Martian season to another. This study uses high fidelity computer simulation to investigate the lighting conditions, specifically the solar radiation flux over the surface, on Phobos. Ephemeris data from the Jet Propulsion Laboratory (JPL) DE405 model were used to model the states of the Sun, the Earth, and Mars. An occultation model was developed to simulate Phobos' self-shadowing and its solar eclipses by Mars. The propagated Phobos state was compared with data from JPL's Horizons system to ensure the accuracy of the results. Results for the Phobos lighting condition over one Martian year are presented in this paper, which include length of solar eclipse, average solar radiation intensity, surface exposure time, total maximum solar energy, and total surface solar energy (constrained by incident angle). The results show that Phobos' solar eclipse time changes throughout the Martian year, with the maximum eclipse time occurring during the Martian spring and fall equinoxes and no solar eclipse during the Martian summer and winter solstices. Solar radiation intensity is close to minimum at the summer solstice and close to maximum at the winter solstice. Total surface exposure time is longer near the north pole and around the anti-Mars point. Total maximum solar energy is larger around the anti-Mars point. Total surface solar energy is higher around the anti-Mars point near the equator. 
The results from this study and others like it will be important in determining landing site selection, vehicle system design and mission operations for the human exploration of Phobos and subsequently Mars.

  8. Genetic Gain and Inbreeding from Genomic Selection in a Simulated Commercial Breeding Program for Perennial Ryegrass.

    PubMed

    Lin, Zibei; Cogan, Noel O I; Pembleton, Luke W; Spangenberg, German C; Forster, John W; Hayes, Ben J; Daetwyler, Hans D

    2016-03-01

    Genomic selection (GS) provides an attractive option for accelerating genetic gain in perennial ryegrass improvement given the long cycle times of most current breeding programs. The present study used simulation to investigate the level of genetic gain and inbreeding obtained from GS breeding strategies compared with traditional breeding strategies for key traits (persistency, yield, and flowering time). Base population genomes were simulated through random mating for 60,000 generations at an effective population size of 10,000. The degree of linkage disequilibrium (LD) in the resulting population was compared with that obtained from empirical studies. Initial parental varieties were simulated to match the diversity of current commercial cultivars. Genomic selection was designed to fit into a company breeding program at two selection points in the breeding cycle (spaced plants and miniplot). Genomic estimated breeding values (GEBVs) for productivity traits were trained with phenotypes and genotypes from plots. Accuracy of GEBVs was 0.24 for persistency and 0.36 for yield for single plants, while for plots it was lower (0.17 and 0.19, respectively). Higher accuracy of GEBVs was obtained for flowering time (up to 0.7), partially as a result of the larger reference population size that was available from the clonal row stage. The availability of GEBVs permits a 4-yr reduction in cycle time, which led to at least a doubling and trebling of genetic gain for persistency and yield, respectively, compared with the traditional program. However, a higher rate of inbreeding per cycle among varieties was also observed for the GS strategy. Copyright © 2016 Crop Science Society of America.
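
    The reported gains follow the usual gain-per-unit-time logic: annual genetic gain is selection intensity times accuracy times additive genetic standard deviation, divided by cycle length, so a shorter cycle can outweigh a lower per-cycle GEBV accuracy. The numbers below are illustrative assumptions (only the GS yield accuracy of 0.36 and the 4-yr cycle reduction echo the abstract; the traditional accuracy, intensity, and cycle length are invented), not values from the simulation.

```python
# Annual genetic gain from the breeder's equation: dG = (i * r * sigma_A) / L,
# where i = selection intensity, r = selection accuracy, sigma_A = additive
# genetic SD, and L = cycle length in years.
def annual_gain(i, r, sigma_a, cycle_years):
    return i * r * sigma_a / cycle_years

# Hypothetical comparison: traditional phenotypic program (8-yr cycle,
# accuracy 0.50) vs. GS program (4-yr cycle, GEBV accuracy 0.36)
traditional = annual_gain(i=1.76, r=0.50, sigma_a=1.0, cycle_years=8)
genomic = annual_gain(i=1.76, r=0.36, sigma_a=1.0, cycle_years=4)

speedup = genomic / traditional   # halving L outweighs the accuracy drop
```

    Even with these conservative made-up values the GS route gains faster per year; the study's doubling-to-trebling presumably also reflects differences in intensity and selection-point structure.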

  9. AI-based (ANN and SVM) statistical downscaling methods for precipitation estimation under climate change scenarios

    NASA Astrophysics Data System (ADS)

    Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid

    2017-04-01

    Statistical downscaling methods are the most widely used models in hydrologic impact studies under climate change scenarios. Among them, nonlinear regression models known as Artificial Intelligence (AI)-based models, such as the Artificial Neural Network (ANN) and Support Vector Machine (SVM), have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study has been carried out using GCM and station data over GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with the AI-based model, correlation coefficients were computed between a few selected large-scale predictor variables and the local-scale predictands to select the most effective predictors. The selected predictors were then assessed considering the grid location for the site in question. To increase the accuracy of the AI-based downscaling model, pre-processing was applied to the precipitation time series: the precipitation data derived from various GCMs were analyzed thoroughly to find the highest correlation coefficient between GCM-based historical data and station precipitation data. Both GCM and station precipitation time series were assessed by comparing means and variances over specific intervals. Results indicated a similar trend between GCM and station precipitation data; however, the station data form a non-stationary time series while the GCM data do not. Finally, the AI-based downscaling models were applied to several GCMs with the selected predictors, targeting the local precipitation time series as the predictand. The results of this step were used to produce multiple ensembles of downscaled AI-based models.
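
    The predictor-screening step (correlating candidate large-scale variables with the local predictand and keeping the strongest) can be sketched as follows. This is a generic Python illustration with synthetic series; the variable names and the top_k cutoff are assumptions, not the study's configuration.

```python
import numpy as np

def screen_predictors(predictors, predictand, top_k=3):
    """Rank candidate large-scale predictor series by absolute Pearson
    correlation with the local predictand and keep the top_k."""
    r = {name: np.corrcoef(series, predictand)[0, 1]
         for name, series in predictors.items()}
    ranked = sorted(r, key=lambda name: abs(r[name]), reverse=True)
    return ranked[:top_k], r

# Hypothetical predictor fields at the nearest GCM grid point
rng = np.random.default_rng(1)
n = 240                                   # e.g. 20 years of monthly values
humidity = rng.normal(size=n)
slp = rng.normal(size=n)                  # sea-level pressure
noise_field = rng.normal(size=n)          # unrelated candidate
station_precip = 0.8 * humidity + 0.3 * slp + 0.5 * rng.normal(size=n)

selected, corr = screen_predictors(
    {"humidity": humidity, "slp": slp, "unrelated": noise_field},
    station_precip, top_k=2)
```

    The two genuinely informative candidates survive the screen, while the unrelated field is dropped; the retained series then become the inputs to the ANN/SVM downscaling model.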

  10. The relationship of body size to survivorship of hatchling snapping turtles (Chelydra serpentina): an evaluation of the "bigger is better" hypothesis.

    PubMed

    Congdon, Justin D; Nagle, Roy D; Dunham, Arthur E; Beck, Christopher W; Kinney, Owen M; Yeomans, S Rebecca

    1999-11-01

    In many organisms, body size is positively correlated with traits that are presumably related to fitness. If directional selection frequently favors larger offspring (the "bigger is better" hypothesis), the results of such selection should be detectable with field experiments. We tested the "bigger is better" hypothesis in hatchling snapping turtles (Chelydra serpentina) by conducting one long-term and three short-term experiments on the University of Michigan E.S. George Reserve in southeastern Michigan. In the fall of 1995 and 1996, we released hatchlings at artificial nests separated from the nearest wetland by fences. We recorded the proportion of hatchlings recaptured, the time it took hatchlings to move to fences from artificial nests 45, 55, and 80 m away, and dispersion along the fence. We determined whether the response variables and probability of recapture at fences were associated with hatchling body size. During 1995, average travel times of hatchlings from the experimental nests were not related to distance from the fence; however, time to recapture was positively correlated with dispersion from the zero point on the fence, and the maximum time to reach the fence was almost twice as long for hatchlings from the 80-m nest compared to those from the 45-m nest. Sixty-seven percent of the hatchlings reached the fence and the proportions doing so from each nest were not different. Body size was not significantly related to probability of recapture in either of the 1995 experiments. In 1996, 59% of released hatchlings were recaptured. Time to recapture was not related to dispersion from the zero point or to body size. Cubic spline analysis suggested stabilizing selection on body size. 
We also conducted a set of long-term hatchling release experiments between 1980 and 1993 to compare the survival of hatchlings released at nest sites to that of hatchlings released directly into marshes, and we looked for relationships between survivorship and hatchling body size. During the 7 years in which more than 30 hatchlings were released, 413 hatchlings were released directly into the marsh and 262 were released at nests; their probability of survival did not differ. Over all years, for both release groups combined and for each group separately, survival was not related to body size. In 1983 alone, survival was likewise not related to body size for either group or for both groups combined. In our three short-term experiments and one long-term experiment, we found no evidence to support the "bigger is better" hypothesis. When selection on body size did occur, selection was stabilizing, not directional for larger size.

  11. Effects of Darwinian Selection and Mutability on Rate of Broadly Neutralizing Antibody Evolution during HIV-1 Infection

    PubMed Central

    Sheng, Zizhang; Schramm, Chaim A.; Connors, Mark; Morris, Lynn; Mascola, John R.; Kwong, Peter D.; Shapiro, Lawrence

    2016-01-01

    Accumulation of somatic mutations in antibody variable regions is critical for antibody affinity maturation, with HIV-1 broadly neutralizing antibodies (bnAbs) generally requiring years to develop. We recently found that the rate at which mutations accumulate decreases over time, but the mechanism governing this slowing is unclear. In this study, we investigated whether natural selection and/or mutability of the antibody variable region contributed significantly to the observed decrease in rate. We used longitudinally sampled sequences of immunoglobulin transcripts of single lineages from each of 3 donors, as determined by next generation sequencing. We estimated the evolutionary rates of the complementarity determining regions (CDRs), which are most significant for functional selection, and found they evolved about 1.5- to 2-fold faster than the framework regions. We also analyzed the presence of AID hotspots and coldspots at different points in lineage development and observed an average decrease in mutability of less than 10 percent over time. Altogether, the correlation between Darwinian selection strength and evolutionary rate trended toward significance, especially for CDRs, but cannot fully explain the observed changes in evolutionary rate. Changes in mutability modulated by AID hotspots and coldspots correlated only weakly with evolutionary rates. The combined effects of Darwinian selection and mutability contribute substantially to, but do not fully explain, evolutionary rate change for HIV-1-targeting bnAb lineages. PMID:27191167

  12. Transcranial direct current stimulation to primary motor area improves hand dexterity and selective attention in chronic stroke.

    PubMed

    Au-Yeung, Stephanie S Y; Wang, Juliana; Chen, Ye; Chua, Eldrich

    2014-12-01

    The aim of this study was to determine whether transcranial direct current stimulation (tDCS) applied to the primary motor hand area modulates hand dexterity and selective attention after stroke. This study was a double-blind, placebo-controlled, randomized crossover trial involving subjects with chronic stroke. Ten stroke survivors with some pinch strength in the paretic hand received three different tDCS interventions assigned in random order in separate sessions: anodal tDCS targeting the primary motor area of the lesioned hemisphere (M1lesioned), cathodal tDCS applied to the contralateral hemisphere (M1nonlesioned), and sham tDCS, each for 20 mins. The primary outcome measures were Purdue pegboard test scores for hand dexterity and response time in the color-word Stroop test for selective attention. Pinch strength of the paretic hand was the secondary outcome. Cathodal tDCS to M1nonlesioned significantly improved affected hand dexterity (by 1.1 points on the Purdue pegboard unimanual test, P = 0.014) and selective attention (0.6 secs faster response time on the level 3 Stroop interference test for response inhibition, P = 0.017), but not pinch strength. The outcomes were not improved with anodal tDCS to M1lesioned or sham tDCS. Twenty minutes of cathodal tDCS to M1nonlesioned can promote both paretic hand dexterity and selective attention in people with chronic stroke.

  13. Identification and Evaluation of Methods to Determine Ability Requirements for Air Force Occupational Specialties

    DTIC Science & Technology

    1989-08-01

    specific elements identified as useful for selection (Primoff & Eyde, 1988). The JEM approach uses a slightly different scale when test development is not...related to GATB scores and tests developed to measure specific elements (Primoff & Eyde, 1988). Of the ability taxonomies reviewed in this study, the...only formative at this point in time, the results of recent research in cognitive psychology and in use of the computer as a testing medium have not

  14. Combat Policing: The Application of Selected Law Enforcement Techniques to Enhance Infantry Operations

    DTIC Science & Technology

    2012-05-09

    time, moving closer to a point where they try to achieve peer status with the occupying force.5 The initial Iraqi insurgency was largely composed of...into the nearest shop he could observe which sold phones. After a moment of strangely tense conversation with the shopkeeper, he peered behind the...Cartels: Insurgents Defeated with a Counterinsurgency Strategy.” Marine Corps Gazette. Jan 2012. Combat Hunter Homepage. U.S. Marine Corps Mobile

  15. Outcomes of Cutaneous Scar Revision During Surgical Implant Removal in Children with Cerebral Palsy.

    PubMed

    Davids, Jon R; Diaz, Kevin; Leba, Thu-Ba; Adams, Samuel; Westberry, David E; Bagley, Anita M

    2016-08-17

    Children who have had surgery involving the placement of an implant frequently undergo a subsequent surgery for hardware removal. The cosmesis of surgical scars following initial and subsequent surgeries is unpredictable. Scar incision (subsequent surgical incision through the initial scar) or excision (around the initial scar) is selected on the basis of the quality of the initial scar. The outcomes following these techniques have not been determined. This prospective, consecutive case series was designed to compare outcomes following surgical scar incision versus excision at the time of implant removal in children with cerebral palsy. Photographs of the scars were made preoperatively and at 6 and 12 months following implant removal and were graded for scar quality utilizing the modified Stony Brook Scar Evaluation Scale (SBSES). Parental assessment of scar appearance was performed at the same time points utilizing a visual analog cosmetic scale (VACS). The scars that were selected for incision had significantly worse SBSES scores at 6 and 12 months following the second surgery compared with preoperative values. However, parents' VACS scores of the incised scars, although worse at 6 months, were comparable with preoperative scores at 12 months. Scars that were selected for excision had significantly worse SBSES scores at 6 months but scores that were comparable with preoperative values at 12 months. VACS scores for the excised scars were comparable at the 3 time points. Surgical incisions that initially healed with good scar quality generally healed well (from the parents' perspective) following subsequent incision through the previous scar. Surgical incisions that initially healed with poor scar quality did not heal better following excision of the previous scar. In such situations, surgical excision of the existing scar should occur in conjunction with additional adjuvant therapies to improve cosmesis. Therapeutic Level II. 
See Instructions for Authors for a complete description of levels of evidence. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.

  16. Predicting fatty acid profiles in blood based on food intake and the FADS1 rs174546 SNP.

    PubMed

    Hallmann, Jacqueline; Kolossa, Silvia; Gedrich, Kurt; Celis-Morales, Carlos; Forster, Hannah; O'Donovan, Clare B; Woolhead, Clara; Macready, Anna L; Fallaize, Rosalind; Marsaux, Cyril F M; Lambrinou, Christina-Paulina; Mavrogianni, Christina; Moschonis, George; Navas-Carretero, Santiago; San-Cristobal, Rodrigo; Godlewska, Magdalena; Surwiłło, Agnieszka; Mathers, John C; Gibney, Eileen R; Brennan, Lorraine; Walsh, Marianne C; Lovegrove, Julie A; Saris, Wim H M; Manios, Yannis; Martinez, Jose Alfredo; Traczyk, Iwona; Gibney, Michael J; Daniel, Hannelore

    2015-12-01

    A high intake of n-3 PUFA provides health benefits via changes in the n-6/n-3 ratio in blood. In addition to such dietary PUFAs, variants in the fatty acid desaturase 1 (FADS1) gene are also associated with altered PUFA profiles. We used mathematical modeling to predict levels of PUFA in whole blood, based on multiple hypothesis testing and bootstrapped-LASSO-selected food items, anthropometric and lifestyle factors, and the rs174546 genotypes in FADS1 from 1607 participants (Food4Me Study). The models were developed using data from the first reported time point (training set) and their predictive power was evaluated using data from the last reported time point (test set). Among other food items, fish, pizza, chicken, and cereals were identified as being associated with the PUFA profiles. Using these food items and the rs174546 genotypes as predictors, models explained 26-43% of the variability in PUFA concentrations in the training set and 22-33% in the test set. Selecting food items using multiple hypothesis testing is a valuable contribution to determining predictors, as our models' predictive power is higher compared to analogous studies. As a unique feature, we additionally confirmed our models' predictive power on a test set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
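
    Bootstrapped LASSO selection, as used above to pick stable food-item predictors, can be sketched in a few lines: refit an L1-penalized regression on bootstrap resamples and keep predictors whose coefficient is repeatedly nonzero. The minimal coordinate-descent solver and all data below are illustrative assumptions, not the Food4Me pipeline.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Minimal coordinate-descent LASSO (soft thresholding); assumes
    standardized columns. Illustrative, not the study's solver."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ resid
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def bootstrap_lasso(X, y, lam, n_boot=30, freq=0.8, seed=0):
    """Refit on bootstrap resamples; keep predictors whose coefficient
    is nonzero in at least `freq` of the fits (selection stability)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        counts += lasso_cd(X[idx], y[idx], lam) != 0
    return np.where(counts / n_boot >= freq)[0]

# Hypothetical standardized food-intake predictors; only the first two
# truly drive the (synthetic) PUFA outcome
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))
y = 1.0 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * rng.normal(size=200)

stable = bootstrap_lasso(X, y, lam=20.0)   # indices of stably selected items
```

    Stability selection of this kind guards against food items that enter the LASSO path only by chance in a single fit.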

  17. Characteristics of strain-sensitive photonic crystal cavities in a flexible substrate.

    PubMed

    No, You-Shin; Choi, Jae-Hyuck; Kim, Kyoung-Ho; Park, Hong-Gyu

    2016-11-14

    High-index semiconductor photonic crystal (PhC) cavities in a flexible substrate support strong and tunable optical resonances that can be used for highly sensitive and spatially localized detection of mechanical deformations in physical systems. Here, we report theoretical studies and fundamental understandings of resonant behavior of an optical mode excited in strain-sensitive rod-type PhC cavities consisting of high-index dielectric nanorods embedded in a low-index flexible polymer substrate. Using the three-dimensional finite-difference time-domain simulation method, we calculated two-dimensional transverse-electric-like photonic band diagrams and the three-dimensional dispersion surfaces near the first Γ-point band edge of unidirectionally strained PhCs. A broken rotational symmetry in the PhCs modifies the photonic band structures and results in the asymmetric distributions and different levels of changes in normalized frequencies near the first Γ-point band edge in the reciprocal space, which consequently reveals strain-dependent directional optical losses and selected emission patterns. The calculated electric fields, resonant wavelengths, and quality factors of the band-edge modes in the strained PhCs show an excellent agreement with the results of qualitative analysis of modified dispersion surfaces. Furthermore, polarization-resolved time-averaged Poynting vectors exhibit characteristic dipole-like emission patterns with preferentially selected linear polarizations, originating from the asymmetric band structures in the strained PhCs.

  18. The Selection of Computed Tomography Scanning Schemes for Lengthy Symmetric Objects

    NASA Astrophysics Data System (ADS)

    Trinh, V. B.; Zhong, Y.; Osipov, S. P.

    2017-04-01

    The article describes the basic computed tomography scanning schemes for lengthy symmetric objects: continuous (discrete) rotation with discrete linear movement; continuous (discrete) rotation with discrete linear movement to acquire a 2D projection; continuous (discrete) linear movement with discrete rotation to acquire a one-dimensional projection; and continuous (discrete) rotation to acquire a 2D projection. The general method to calculate the scanning time is discussed in detail. A comparison principle must be derived to select a scanning scheme, because the input data are the same for all scanning schemes: the maximum energy of the X-ray radiation; the power of the X-ray source; the angle of the X-ray cone beam; the transverse dimension of a single detector; the specified resolution; and the maximum time needed to form one point of the original image (which determines the number of registered photons). The possibilities of the proposed method are demonstrated by comparing the scanning schemes for a cylindrical object with a mass thickness of 4 g/cm2, an effective atomic number of 15, and a length of 1300 mm. The scanning times are analyzed to draw conclusions about the efficiency of the scanning schemes; the productivity of all schemes is examined and the most effective one is selected.

  19. Feature Selection and Classifier Parameters Estimation for EEG Signals Peak Detection Using Particle Swarm Optimization

    PubMed Central

    Adam, Asrul; Mohd Tumari, Mohd Zaidi; Mohamad, Mohd Saberi

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has assessed the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model. PMID:25243236
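
    A minimal binary PSO for feature-subset selection, in the spirit of (but not identical to) the standard PSO used in the paper, looks like the sketch below. The toy fitness function and all parameters are assumptions for illustration; the paper's framework additionally tunes classifier parameters, which is omitted here.

```python
import numpy as np

def binary_pso(fitness, n_bits, n_particles=20, n_iter=60,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal binary PSO: velocities pass through a sigmoid to give
    bit probabilities. Returns the best feature mask found."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, (n_particles, n_bits))
    vel = rng.normal(0, 0.1, (n_particles, n_bits))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    gbest_fit = pbest_fit.max()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_bits))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))                # sigmoid transfer
        pos = (rng.random((n_particles, n_bits)) < prob).astype(int)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        if fit.max() > gbest_fit:
            gbest, gbest_fit = pos[fit.argmax()].copy(), fit.max()
    return gbest, gbest_fit

# Toy fitness standing in for detection accuracy: features 0 and 3 are
# "informative"; every extra feature costs 0.1 (parsimony pressure)
def toy_fitness(mask):
    return mask[0] + mask[3] - 0.1 * mask.sum()

best_mask, best_fit = binary_pso(toy_fitness, n_bits=8)
```

    In the real framework the fitness would be the peak-detection classification rate of a model trained on the masked features, and RA-PSO would update particles asynchronously in random order rather than in lockstep.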

  20. Automated selection of computed tomography display parameters using neural networks

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Neu, Scott; Valentino, Daniel J.

    2001-07-01

    A collection of artificial neural networks (ANNs) was trained to identify simple anatomical structures in a set of x-ray computed tomography (CT) images. These neural networks learned to associate a point in an image with the anatomical structure containing the point by using the image pixels located on the horizontal and vertical lines that ran through the point. The neural networks were integrated into a computer software tool whose function is to select an index into a list of CT window/level values from the location of the user's mouse cursor. Based upon the anatomical structure selected by the user, the software tool automatically adjusts the image display to optimally view the structure.
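    The display adjustment itself is the standard CT window/level transform, which can be sketched as below. The preset table is hypothetical; in the paper, the networks map the cursor position to an index into such a list.

```python
def apply_window(hu, center, width):
    """Map a CT value in Hounsfield units to an 8-bit display value
    using the standard window center/width transform."""
    lo, hi = center - width / 2, center + width / 2
    if hu <= lo:
        return 0
    if hu >= hi:
        return 255
    return int(round((hu - lo) / (hi - lo) * 255))

# Hypothetical window/level presets keyed by anatomical label; the
# trained networks would select one of these from the cursor position.
PRESETS = {"lung": (-600, 1500), "bone": (300, 1500), "soft_tissue": (40, 400)}
```

    For example, with the lung preset everything below -1350 HU maps to black and everything above 150 HU to white, spreading the intermediate range over the 8-bit display scale.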

  1. Microbial Community Changes in Hydraulic Fracturing Fluids and Produced Water from Shale Gas Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, Arvind Murali; Hartsock, Angela; Bibby, Kyle J

    2013-11-19

    Microbial communities associated with produced water from hydraulic fracturing are not well understood, and their deleterious activity can lead to significant increases in production costs and adverse environmental impacts. In this study, we compared the microbial ecology in prefracturing fluids (fracturing source water and fracturing fluid) and produced water at multiple time points from a natural gas well in southwestern Pennsylvania using 16S rRNA gene-based clone libraries, pyrosequencing, and quantitative PCR. The majority of the bacterial community in prefracturing fluids consisted of aerobic species affiliated with the class Alphaproteobacteria. However, their relative abundance decreased in produced water with an increase in halotolerant, anaerobic/facultative anaerobic species affiliated with the classes Clostridia, Bacilli, Gammaproteobacteria, Epsilonproteobacteria, Bacteroidia, and Fusobacteria. Produced water collected at the last time point (day 187) consisted almost entirely of sequences similar to Clostridia and showed a decrease in bacterial abundance by 3 orders of magnitude compared to the prefracturing fluids and produced water samples from earlier time points. Geochemical analysis showed that produced water contained higher concentrations of salts and total radioactivity compared to prefracturing fluids. This study provides evidence of long-term subsurface selection of the microbial community introduced through hydraulic fracturing, which may have significant implications for disinfection as well as reuse of produced water in future fracturing operations.

  2. Concentrate Supplement Modifies the Feeding Behavior of Simmental Cows Grazing in Two High Mountain Pastures.

    PubMed

    Romanzin, Alberto; Corazzin, Mirco; Piasentier, Edi; Bovolenta, Stefano

    2018-05-16

    During grazing on Alpine pastures, the use of concentrates in dairy cows' diet leads to a reduction of the environmental sustainability of farms, and influences the selective pressure on some plant species. In order to minimize the use of concentrates, it is imperative to obtain data on the grazing behavior of cows. The aim of this study was to assess the effect of concentrate levels on the behavior of dairy cows during grazing. One hundred and ten lactating Italian Simmental cows that sequentially grazed two pastures characterized by Poion alpinae (Poion) and Seslerion caeruleae (Seslerion) alliance were considered. For each pasture, eight cows were selected and assigned to two groups, High and Low, supplemented with 4 kg/head/d and 1 kg/head/d of concentrate, respectively. Cows were equipped with a noseband pressure sensor and a pedometer (RumiWatch system, ITIN-HOCH GmbH) to assess grazing, ruminating, and walking behavior. In addition, the plant selection of the animals was assessed. On Poion, increased supplement intake caused a more intense selection of legumes, without affecting feeding and walking times. On Seslerion, grazing time was higher in Low than High. Grazing management in Alpine regions must take into account the great variability of pastures, which differ largely from a floristic and nutritional point of view.

  3. Molecular population genetics of the insulin/TOR signal transduction pathway: a network-level analysis in Drosophila melanogaster.

    PubMed

    Alvarez-Ponce, David; Guirao-Rico, Sara; Orengo, Dorcas J; Segarra, Carmen; Rozas, Julio; Aguadé, Montserrat

    2012-01-01

    The insulin/target of rapamycin (TOR) signal transduction (IT) pathway is a relatively well-characterized pathway that plays a central role in fundamental biological processes. Network-level analyses of DNA divergence in Drosophila and vertebrates have revealed a clear gradient in the levels of purifying selection along this pathway, with the downstream genes being the most constrained. Remarkably, this feature does not result from factors known to affect selective constraint such as gene expression, codon bias, protein length, and connectivity. The present work aims to establish whether the selective constraint gradient detected along the IT pathway at the between-species level can also be observed at a shorter time scale. With this purpose, we have surveyed DNA polymorphism in Drosophila melanogaster and divergence from D. simulans along the IT pathway. Our network-level analysis shows that DNA polymorphism exhibits the same polarity in the strength of purifying selection as previously detected at the divergence level. This equivalent feature detected both within species and between closely and distantly related species points to the action of a general mechanism, whose action is neither organism specific nor evolutionary time dependent. The detected polarity would be, therefore, intrinsic to the IT pathway architecture and function.

  4. Focused microwave-assisted extraction combined with solid-phase microextraction and gas chromatography-mass spectrometry for the selective analysis of cocaine from coca leaves.

    PubMed

    Bieri, Stefan; Ilias, Yara; Bicchi, Carlo; Veuthey, Jean-Luc; Christen, Philippe

    2006-04-21

    An effective combination of focused microwave-assisted extraction (FMAE) with solid-phase microextraction (SPME) prior to gas chromatography (GC) is described for the selective extraction and quantitative analysis of cocaine from coca leaves (Erythroxylum coca). This approach required switching from an organic extraction solvent to an aqueous medium more compatible with SPME liquid sampling. SPME was performed in the direct immersion mode with a universal 100 microm polydimethylsiloxane (PDMS) coated fibre. Parameters influencing this extraction step, such as solution pH, sampling time and temperature are discussed. Furthermore, the overall extraction process takes into account the stability of cocaine in alkaline aqueous solutions at different temperatures. Cocaine degradation rate was determined by capillary electrophoresis using the short end injection procedure. In the selected extraction conditions, less than 5% of cocaine was degraded after 60 min. From a qualitative point of view, a significant gain in selectivity was obtained with the incorporation of SPME in the extraction procedure. As a consequence of SPME clean-up, shorter columns could be used and analysis time was reduced to 6 min compared to 35 min with conventional GC. Quantitative results led to a cocaine content of 0.70 +/- 0.04% in dry leaves (RSD <5%) which agreed with previous investigations.

  5. Predicting the Impacts of Climate Change on Runoff and Sediment Processes in Agricultural Watersheds: A Case Study from the Sunflower Watershed in the Lower Mississippi Basin

    NASA Astrophysics Data System (ADS)

    Elkadiri, R.; Momm, H.; Yasarer, L.; Armour, G. L.

    2017-12-01

    Climatic conditions play a major role in physical processes impacting soil and agrochemical detachment and transportation from/in agricultural watersheds. In addition, these climatic conditions are projected to vary significantly in space and time in the 21st century, leading to vast uncertainties about the future of sediment and non-point source pollution transport in agricultural watersheds. In this study, we selected the Sunflower watershed in the lower Mississippi River basin, USA, to contribute to the understanding of how climate change affects watershed processes and the transport of pollutant loads. The climate projections used in this study were retrieved from the archive of the World Climate Research Programme's (WCRP) Coupled Model Intercomparison Project Phase 5 (CMIP5). The CMIP5 dataset was selected because it contains the most up-to-date spatially downscaled and bias-corrected climate projections. A subset of ten GCMs representing a range of projected climates was spatially downscaled for the Sunflower watershed. Statistics derived from downscaled GCM output representing the 2011-2040, 2041-2070 and 2071-2100 time periods were used to generate maximum/minimum temperature and precipitation on a daily time step using the USDA synthetic weather generator, SYNTOR. These downscaled climate data were then utilized as inputs to the Annualized Agricultural Non-Point Source (AnnAGNPS) pollution watershed model to estimate time series of runoff, sediment, and nutrient loads produced from the watershed. For baseline conditions, a simulation of the watershed was created and validated using historical data from 2000 to 2015.

  6. Vendor compliance with Ontario's tobacco point of sale legislation.

    PubMed

    Dubray, Jolene M; Schwartz, Robert M; Garcia, John M; Bondy, Susan J; Victor, J Charles

    2009-01-01

    On May 31, 2006, Ontario joined a small group of international jurisdictions to implement legislative restrictions on tobacco point of sale promotions. This study compares the presence of point of sale promotions in the retail tobacco environment from three surveys: one prior to and two following implementation of the legislation. Approximately 1,575 tobacco vendors were randomly selected for each survey. Each regionally-stratified sample included equal numbers of tobacco vendors categorized into four trade classes: chain convenience, independent convenience and discount, gas stations, and grocery. Data regarding the six restricted point of sale promotions were collected using standardized protocols and inspection forms. Weighted estimates and 95% confidence intervals were produced at the provincial, regional and vendor trade class level using the bootstrap method for estimating variance. At baseline, the proportion of tobacco vendors who did not engage in each of the six restricted point of sale promotions ranged from 41% to 88%. Within four months following implementation of the legislation, compliance with each of the six restricted point of sale promotions exceeded 95%. Similar levels of compliance were observed one year later. Grocery stores had the fewest point of sale promotions displayed at baseline. Compliance rates did not differ across vendor trade classes at either follow-up survey. Point of sale promotions did not differ across regions in any of the three surveys. Within a short period of time, a high level of compliance with six restricted point of sale promotions was achieved.
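    The bootstrap variance estimation mentioned above can be sketched as a percentile bootstrap on a compliance proportion. This is an illustration only: the sample, resampling count, and seed below are invented, not the study's survey data.

```python
import random

def bootstrap_ci(sample, stat=lambda s: sum(s) / len(s),
                 n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic
    (the mean here, standing in for a compliance proportion)."""
    rng = random.Random(seed)
    # Resample with replacement, compute the statistic, and take the
    # empirical alpha/2 and 1 - alpha/2 quantiles of the replicates.
    reps = sorted(stat([rng.choice(sample) for _ in sample])
                  for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]
```

    With a toy sample of 15 compliant and 5 non-compliant vendors (coded 1/0), the interval brackets the observed 75% compliance rate.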

  7. Augmented reality system using lidar point cloud data for displaying dimensional information of objects on mobile phones

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Lohani, B.

    2014-05-01

    Mobile augmented reality is the next generation technology for visualising the 3D real world intelligently. The technology is expanding at a fast pace, upgrading the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of objects that are captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image that is captured by a mobile phone. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points which lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensity. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, thereby generating a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
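    Once the registration pipeline maps image pixels to LiDAR points, the displayed dimension is just the Euclidean distance between the corresponding 3D points. A minimal sketch, with an invented pixel-to-point lookup standing in for the registration output:

```python
import math

# Hypothetical pixel -> LiDAR point lookup (coordinates in metres),
# standing in for the SIFT-based registration pipeline described above.
pixel_to_point = {(120, 340): (2.0, 1.0, 0.5), (480, 338): (2.0, 4.0, 0.5)}

def measured_dimension(px_a, px_b, lookup=pixel_to_point):
    """Dimension between two user-selected pixels, computed as the
    Euclidean distance between their corresponding LiDAR points."""
    return math.dist(lookup[px_a], lookup[px_b])
```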

  8. Item response theory analysis of the Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised in the Pooled Resource Open-Access ALS Clinical Trials Database.

    PubMed

    Bacci, Elizabeth D; Staniewska, Dorota; Coyne, Karin S; Boyer, Stacey; White, Leigh Ann; Zach, Neta; Cedarbaum, Jesse M

    2016-01-01

    Our objective was to examine the dimensionality and item-level performance of the Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised (ALSFRS-R) across time using classical and modern test theory approaches. Confirmatory factor analysis (CFA) and Item Response Theory (IRT) analyses were conducted using data from patients with amyotrophic lateral sclerosis (ALS) in the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database with complete ALSFRS-R data (n = 888) at three time points (Time 0, Time 1 (6 months), Time 2 (1 year)). In this population of 888 patients, mean age was 54.6 years, 64.4% were male, and 93.7% were Caucasian. The CFA supported a four-domain structure (bulbar, gross motor, fine motor, and respiratory domains). IRT analysis within each domain revealed misfitting items and overlapping item response category thresholds at all time points, particularly in the gross motor and respiratory domain items. Results indicate that many of the items of the ALSFRS-R may sub-optimally distinguish among varying levels of disability assessed by each domain, particularly in patients with less severe disability. Measure performance improved across time as patient disability severity increased. In conclusion, modifications to select ALSFRS-R items may improve the instrument's specificity to disability level and sensitivity to treatment effects.
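    The "category thresholds" examined in such analyses come from polytomous IRT models; a common choice for ordinal items like the ALSFRS-R's is Samejima's graded response model (the abstract does not name the specific model, and the parameters below are invented for illustration):

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Samejima's graded response model: probability of each response
    category given ability theta, discrimination a, and ascending
    category thresholds b_k. Overlapping thresholds across items are
    what make categories hard to distinguish."""
    def p_ge(b):  # P(response falls in this category or higher)
        return 1 / (1 + math.exp(-a * (theta - b)))
    # Cumulative probabilities bracketed by 1 (lowest) and 0 (past highest);
    # adjacent differences give the per-category probabilities.
    cum = [1.0] + [p_ge(b) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]
```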

  9. Longitudinal assessment of local and global functional connectivity following sports-related concussion.

    PubMed

    Meier, Timothy B; Bellgowan, Patrick S F; Mayer, Andrew R

    2017-02-01

    Growing evidence suggests that sports-related concussions (SRC) may lead to acute changes in intrinsic functional connectivity, although most studies to date have been cross-sectional in nature with relatively modest sample sizes. We longitudinally assessed changes in local and global resting state functional connectivity using metrics that do not require a priori seed or network selection (regional homogeneity [ReHo] and global brain connectivity [GBC], respectively). A large sample of collegiate athletes (N = 43) was assessed approximately one day (1.74 days post-injury, N = 34), one week (8.44 days, N = 34), and one month post-concussion (32.47 days, N = 30). Healthy contact-sport athletes served as controls (N = 51). Concussed athletes showed improvement in mood symptoms at each time point (p's < 0.05), but had significantly higher mood scores than healthy athletes at every time point (p's < 0.05). In contrast, self-reported symptoms and cognitive deficits improved over time following concussion (p's < 0.001), returning to healthy levels by one week post-concussion. ReHo in sensorimotor, visual, and temporal cortices increased over time post-concussion, and was greatest at one month post-injury. Conversely, ReHo in the frontal cortex decreased over time following SRC, with the greatest decrease evident at one month post-concussion. Differences in ReHo relative to healthy athletes were primarily observed at one month post-concussion rather than at the more acute time points. Contrary to our hypothesis, no significant cross-sectional or longitudinal differences in GBC were observed. These results are suggestive of a delayed onset of local connectivity changes following SRC.

  10. Pose estimation for augmented reality applications using genetic algorithm.

    PubMed

    Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen

    2005-12-01

    This paper describes a genetic algorithm that tackles the pose-estimation problem in computer vision. Our genetic algorithm can find the rotation and translation of an object accurately when the three-dimensional structure of the object is given. In our implementation, each chromosome encodes both the pose and the indexes to the selected point features of the object. Instead of only searching for the pose as in the existing work, our algorithm, at the same time, searches for a set containing the most reliable feature points in the process. This mismatch filtering strategy successfully makes the algorithm more robust in the presence of point mismatches and outliers in the images. Our algorithm has been tested with both synthetic and real data with good results. The accuracy of the recovered pose is compared to the existing algorithms. Our approach outperformed Lowe's method and the other two genetic algorithms in the presence of point mismatches and outliers. In addition, it has been used to estimate the pose of a real object. It is shown that the proposed method is applicable to augmented reality applications.
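    The chromosome design, pose parameters plus the indexes of trusted feature points, can be illustrated with a toy 2D version. The paper works with full 3D poses; everything below is a simplified stand-in to show how excluding suspected mismatches from the fitness term filters outliers.

```python
import math

def transform(point, pose):
    # Rigid 2D transform (rotation + translation) standing in for the
    # full 3D pose of the paper's formulation.
    x, y = point
    tx, ty, ang = pose
    c, s = math.cos(ang), math.sin(ang)
    return (c * x - s * y + tx, s * x + c * y + ty)

def fitness(chromosome, model, observed):
    """Chromosome = pose parameters plus the indexes of feature points
    this individual trusts; suspected mismatches are simply excluded
    from the reprojection error."""
    pose, idxs = chromosome
    err = sum(math.dist(transform(model[i], pose), observed[i]) for i in idxs)
    return -err / len(idxs)  # higher is better
```

    An individual that drops a corrupted correspondence from its index set scores a perfect fitness under the true pose, while one that trusts all points is penalized by the outlier.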

  11. Some analysis on the diurnal variation of rainfall over the Atlantic Ocean

    NASA Technical Reports Server (NTRS)

    Gill, T.; Perng, S.; Hughes, A.

    1981-01-01

    Data collected from the GARP Atlantic Tropical Experiment (GATE) were examined. The data were collected from 10,000 grid points arranged as a 100 x 100 array; each grid cell covered a 4 square km area. The amount of rainfall was measured every 15 minutes during the experiment periods using C-band radars. Two types of analyses were performed on the data: analysis of diurnal variation was done on each of the grid points based on the rainfall averages at noon and at midnight, and time series analysis was done on selected grid points based on the hourly averages of rainfall. Since there is no known distribution model which best describes the rainfall amount, nonparametric methods were used to examine the diurnal variation. The Kolmogorov-Smirnov test was used to test whether the rainfalls at noon and at midnight have the same statistical distribution. The Wilcoxon signed-rank test was used to test whether the noon rainfall is heavier than, equal to, or lighter than the midnight rainfall. These tests were done on each of the 10,000 grid points at which data were available.
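    The two-sample Kolmogorov-Smirnov statistic used here is the largest gap between the two empirical CDFs. A stdlib sketch on synthetic data (not the GATE records); the Wilcoxon signed-rank test would be applied analogously to the paired noon/midnight differences:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest absolute
    difference between the empirical CDFs of samples a and b."""
    def ecdf(sample, x):
        # Fraction of the sample at or below x.
        return sum(v <= x for v in sample) / len(sample)
    # The maximum gap is attained at one of the observed values.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))
```

    Identical samples give 0, fully separated samples give 1; the statistic is then compared against a critical value that depends on the two sample sizes.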

  12. Access to Mars from Earth-Moon Libration Point Orbits: Manifold and Direct Options

    NASA Technical Reports Server (NTRS)

    Kakoi, Masaki; Howell, Kathleen C.; Folta, David

    2014-01-01

    This investigation is focused specifically on transfers from Earth-Moon L(sub 1)/L(sub 2) libration point orbits to Mars. Initially, the analysis is based in the circular restricted three-body problem to utilize the framework of the invariant manifolds. Various departure scenarios are compared, including arcs that leverage manifolds associated with the Sun-Earth L(sub 2) orbits as well as non-manifold trajectories. For the manifold options, ballistic transfers from Earth-Moon L(sub 2) libration point orbits to Sun-Earth L(sub 1)/L(sub 2) halo orbits are first computed. This autonomous procedure applies to both departure and arrival between the Earth-Moon and Sun-Earth systems. Departure times in the lunar cycle, amplitudes and types of libration point orbits, manifold selection, and the orientation/location of the surface of section all contribute to produce a variety of options. As the destination planet, the ephemeris position for Mars is employed throughout the analysis. The complete transfer is transitioned to the ephemeris model after the initial design phase. Results for multiple departure/arrival scenarios are compared.

  13. Communication target object recognition for D2D connection with feature size limit

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Kim, Soochang; Kim, Young-hoon; Lee, Chulhee

    2015-03-01

    Recently, a new concept of device-to-device (D2D) communication called "point-and-link communication" has attracted great attention due to its intuitive and simple operation. This approach enables users to communicate with target devices without any pre-identification information, such as SSIDs or MAC addresses, by selecting the target image displayed on the user's own device. In this paper, we present an efficient object matching algorithm that can be applied to look(point)-and-link communications for mobile services. Due to the limited channel bandwidth and low computational power of mobile terminals, the matching algorithm should satisfy low-complexity, low-memory, and real-time requirements. To meet these requirements, we propose fast and robust feature extraction that considers the descriptor size and processing time. The proposed algorithm utilizes an HSV color histogram, SIFT (Scale Invariant Feature Transform) features, and object aspect ratios. To reduce the descriptor size below 300 bytes, a limited number of SIFT key points were chosen as feature points and histograms were binarized while maintaining the required performance. Experimental results show the robustness and efficiency of the proposed algorithm.
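    Histogram binarization shrinks a descriptor because each bin collapses from a count to a single bit. A sketch of the idea for the HSV component, with invented bin counts and threshold (the paper's exact quantization is not specified here); 8x4x4 = 128 bins pack into 16 bytes, well under the 300-byte budget:

```python
def binarized_hsv_descriptor(pixels, bins=(8, 4, 4), threshold=0.02):
    """Coarse HSV histogram binarized against a frequency threshold.
    Assumes h, s, v are normalized to [0, 1); bin counts and threshold
    are illustrative values, not the paper's."""
    h_bins, s_bins, v_bins = bins
    counts = [0] * (h_bins * s_bins * v_bins)
    for h, s, v in pixels:
        # Flatten the 3D bin index into a single histogram position.
        idx = (int(h * h_bins) * s_bins + int(s * s_bins)) * v_bins + int(v * v_bins)
        counts[idx] += 1
    # One bit per bin: set if the bin holds at least `threshold` of the pixels.
    return [1 if c / len(pixels) >= threshold else 0 for c in counts]
```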

  14. SigVox - A 3D feature matching algorithm for automatic street object recognition in mobile laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Wang, Jinhu; Lindenbergh, Roderik; Menenti, Massimo

    2017-06-01

    Urban road environments contain a variety of objects, including different types of lamp poles and traffic signs. Their monitoring is traditionally conducted by visual inspection, which is time consuming and expensive. Mobile laser scanning (MLS) systems sample the road environment efficiently by acquiring large and accurate point clouds. This work proposes a methodology for urban road object recognition from MLS point clouds. The proposed method uses, for the first time, shape descriptors of complete objects to match repetitive objects in large point clouds. To do so, a novel 3D multi-scale shape descriptor is introduced, embedded in a workflow that efficiently and automatically identifies different types of lamp poles and traffic signs. The workflow starts by tiling the raw point clouds along the scanning trajectory and by identifying non-ground points. After voxelization of the non-ground points, connected voxels are clustered to form candidate objects. For automatic recognition of lamp poles and street signs, a 3D significant eigenvector based shape descriptor using voxels (SigVox) is introduced. The 3D SigVox descriptor is constructed by first subdividing the points with an octree into several levels. Next, significant eigenvectors of the points in each voxel are determined by principal component analysis (PCA) and mapped onto the appropriate triangle of a sphere-approximating icosahedron. This step is repeated for different scales. By determining the similarity of 3D SigVox descriptors between candidate point clusters and training objects, street furniture is automatically identified. The feasibility and quality of the proposed method are verified on two point clouds obtained in opposite directions along a 4 km stretch of road. Six types of lamp pole and four types of road sign were selected as objects of interest. Ground truth validation showed that the overall accuracy of the ∼170 automatically recognized objects is approximately 95%. The results demonstrate that the proposed method is able to recognize street furniture in a practical scenario. Remaining difficult cases are touching objects, such as a lamp pole close to a tree.
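    The per-voxel PCA step can be sketched as follows: eigen-decompose the covariance of the points in a voxel and keep only the "significant" eigenvectors. The variance cut-off below is an assumed parameter; the paper's significance criterion may differ.

```python
import numpy as np

def significant_eigenvectors(points, min_ratio=0.05):
    """PCA of the points in one voxel: eigen-decompose the covariance
    matrix and keep only eigenvectors whose eigenvalue carries at least
    min_ratio of the total variance (min_ratio is an assumed cut-off)."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]            # re-sort descending
    vals, vecs = vals[order], vecs[:, order]
    keep = vals >= min_ratio * vals.sum()
    return vals[keep], vecs[:, keep].T        # rows are the kept eigenvectors
```

    For a voxel whose points lie along a pole, a single dominant eigenvector survives; in SigVox these directions are then mapped onto icosahedron triangles to form the orientation histogram.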

  15. Electrochemical Selective and Simultaneous Detection of Diclofenac and Ibuprofen in Aqueous Solution Using HKUST-1 Metal-Organic Framework-Carbon Nanofiber Composite Electrode.

    PubMed

    Motoc, Sorina; Manea, Florica; Iacob, Adriana; Martinez-Joaristi, Alberto; Gascon, Jorge; Pop, Aniela; Schoonman, Joop

    2016-10-17

    In this study, detection protocols for the individual, selective, and simultaneous determination of ibuprofen (IBP) and diclofenac (DCF) in aqueous solutions have been developed using a HKUST-1 metal-organic framework-carbon nanofiber composite (HKUST-CNF) electrode. The morphological and electrical characterization of the modified composite electrode, prepared by film casting, was performed by scanning electron microscopy and the four-point-probe method. The electrochemical characterization of the electrode by cyclic voltammetry (CV) was considered the reference basis for optimizing the operating conditions for chronoamperometry (CA) and multiple-pulsed amperometry (MPA). This electrode made it possible to selectively detect IBP and DCF by simply switching the detection potential using CA. Moreover, MPA operated under optimized working conditions (four potential levels selected based on the CV shape, in relation to the potential value, pulse time, and the number and order of potential levels) allowed the selective/simultaneous detection of IBP and DCF with enhanced detection performance. For this application, the HKUST-CNF electrode exhibited good stability, and good reproducibility of the results was achieved.

  17. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  18. [Professor ZHAO Jiping's meridian diagnosis and treatment for primary dysmenorrhea].

    PubMed

    Tan, Cheng; Zhang, Chang; Zhang, Jiajia; Wang, Jun

    2016-03-01

    For the treatment of primary dysmenorrhea, professor ZHAO Jiping focuses on meridian diagnosis and inspection, and uses pressing methods to locate the response points along the meridian, including acupoints and ashi points. During the stage of attack, it is essential to press along the spleen meridian, mainly Sanyinjiao (SP 6), Diji (SP 8) and Yinlingquan (SP 9); during the stage of remission, it is essential to press along the bladder meridian and stomach meridian, mainly Ganshu (BL 18), Pishu (BL 20), Weishu (BL 21), Shenshu (BL 23) and Zusanli (ST 36). Deficiency syndrome and excess syndrome produce different sensations for both the practitioner and the patient. Combined with the results of meridian diagnosis and inspection, the aim of treatment can be achieved by different acupuncture methods. Professor ZHAO pays attention to the treatment of accompanying symptoms and to the timing of treatment, since the relief of accompanying symptoms and the selection of timing are keys to relieving the patient's pain.

  19. Achieving reliability - The evolution of redundancy in American manned spacecraft computers

    NASA Technical Reports Server (NTRS)

    Tomayko, J. E.

    1985-01-01

    The Shuttle is the first launch system deployed by NASA with full redundancy in the on-board computer systems. Fault tolerance, i.e., reverting to a backup with reduced capabilities, was the method selected for Apollo. The Gemini capsule was the first to carry a computer, which also served as a backup for Titan launch vehicle guidance. Failure of the Gemini computer resulted in manual control of the spacecraft. The Apollo system served vehicle flight control and navigation functions. The redundant computer on Skylab provided attitude control only in support of solar telescope pointing. The STS digital fly-by-wire avionics system requires 100 percent reliability. The Orbiter carries five general purpose computers, four being fully redundant and the fifth serving solely as an ascent-descent tool. The computers are synchronized at input and output points at a rate of about six times a second. The system is projected to cause the loss of an Orbiter only four times in a billion flights.

  20. Early arthroscopic release in stiff shoulder

    PubMed Central

    Sabat, Dhananjaya; Kumar, Vinod

    2008-01-01

    Purpose: To evaluate the results of early arthroscopic release in patients with stiff shoulder. Methods: Twenty patients with stiff shoulder, who had symptoms for at least three months and failed to improve with steroid injections and 6 weeks of physical therapy, underwent arthroscopic release. The average time between onset of symptoms and surgery was 4 months and 2 weeks. The functional outcome was evaluated using the ASES and Constant and Murley scoring systems. Results: All patients showed significant improvement in range of motion and relief of pain by the end of three months following the procedure. At 12 months, the mean improvement was 38 points on the ASES score and 40.5 points on the Constant and Murley score. All patients returned to work by 3-5 months (average 4.5 months). Conclusion: Early arthroscopic release showed promising results, with a reliable increase in range of motion, early relief of symptoms, and consequent early return to work. It is therefore highly recommended in properly selected patients. Level of evidence: Level IV PMID:20300309
